The C11 Standard [ISO/IEC 9899:2011] introduced a new term: temporary lifetime. Modifying an object with temporary lifetime is undefined behavior. According to subclause 6.2.4, paragraph 8:

> A non-lvalue expression with structure or union type, where the structure or union contains a member with array type (including, recursively, members of all contained structures and unions) refers to an object with automatic storage duration and temporary lifetime. Its lifetime begins when the expression is evaluated and its initial value is the value of the expression. Its lifetime ends when the evaluation of the containing full expression or full declarator ends. Any attempt to modify an object with temporary lifetime results in undefined behavior.

This definition differs from the C99 Standard (which defines modifying the result of a function call, or accessing it after the next sequence point, as undefined behavior): because a temporary object's lifetime now ends only when the evaluation of the containing full expression or full declarator ends, the result of a function call can still be accessed within that expression. This extension to the lifetime of a temporary also removes a quiet change to C90 and improves compatibility with C++.

C functions may not return arrays; however, functions can return a pointer to an array, or a struct or union that contains arrays. Consequently, if a function call returns a struct or union containing an array by value, do not modify those arrays within the expression containing the function call. Do not access an array returned by a function after the next sequence point (C99) or after the evaluation of the containing full expression or full declarator ends (C11).

Noncompliant Code Example (C99)

This noncompliant code example conforms to the C11 Standard; however, it fails to conform to C99.
If compiled with a C99-conforming implementation, this code has undefined behavior because the sequence point preceding the call to printf() comes between the call and the access by printf() of the string in the returned object.

```c
#include <stdio.h>

struct X { char a[8]; };

struct X salutation(void) {
  struct X result = { "Hello" };
  return result;
}

struct X addressee(void) {
  struct X result = { "world" };
  return result;
}

int main(void) {
  printf("%s, %s!\n", salutation().a, addressee().a);
  return 0;
}
```

Compliant Solution

This compliant solution stores the structures returned by the calls to salutation() and addressee() before calling the printf() function. Consequently, this program conforms to both C99 and C11.

```c
#include <stdio.h>

struct X { char a[8]; };

struct X salutation(void) {
  struct X result = { "Hello" };
  return result;
}

struct X addressee(void) {
  struct X result = { "world" };
  return result;
}

int main(void) {
  struct X my_salutation = salutation();
  struct X my_addressee = addressee();
  printf("%s, %s!\n", my_salutation.a, my_addressee.a);
  return 0;
}
```

Noncompliant Code Example

This noncompliant code example attempts to retrieve an array and increment the array's first value. The array is part of a struct that is returned by a function call. Consequently, the array has temporary lifetime, and modifying it is undefined behavior.

```c
#include <stdio.h>

struct X { int a[6]; };

struct X addressee(void) {
  struct X result = { { 1, 2, 3, 4, 5, 6 } };
  return result;
}

int main(void) {
  printf("%x", ++(addressee().a[0]));
  return 0;
}
```

Compliant Solution

This compliant solution stores the structure returned by the call to addressee() as my_x before calling the printf() function. When the array is modified, its lifetime is no longer temporary but matches the lifetime of the block in main().
```c
#include <stdio.h>

struct X { int a[6]; };

struct X addressee(void) {
  struct X result = { { 1, 2, 3, 4, 5, 6 } };
  return result;
}

int main(void) {
  struct X my_x = addressee();
  printf("%x", ++(my_x.a[0]));
  return 0;
}
```

Risk Assessment

Attempting to modify an array, or to access it after its lifetime expires, may result in erroneous program behavior.

Automated Detection

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this rule on the CERT website.

Related Guidelines

15 Comments

William L Fithen: I at least agree with Hal. These examples in juxtaposition do nothing to help me understand the problem being solved. I think the issue is made unnecessarily complex through the use of the compiler-internal terminology "sequence point." Programmers do not know what sequence points are, much less where they occur. If we are attempting to teach them something about sequence points, then annotating the examples with some sort of sequence-point clues would help. If not, the use of the term obscures the point being made. I'd stick with pure programmer-aware terminology.

Ivan Jager: The term "sequence point" is not "compiler-internal terminology." It is the terminology used by the C standard. AFAIK, there isn't any other correct terminology. Sequence points are basically the places where order of execution is specified. As a programmer, you do need to know about them, although not necessarily what they are called. E.g., if you write f(x); g(y);, even as a programmer, you generally want to know that f will be called before g is called. Anyway, if you were reading the rules in order (or even just the titles), you would have come across EXP30-C. Do not depend on order of evaluation between sequence points first, which does explain what a sequence point is. If you know of better terminology, would you mind suggesting it?
William L Fithen: The statement "This program has undefined behavior because there is a sequence point before printf() is called, and printf() accesses the result of the call to addressee()." is not an adequate explanation of the failure here. If you replace "addressee().a" with "addressee()", the explanation would be the same, but the code would be correct. It's the ".a" that's wrong, and the explanation should reflect that. There must be a sequence point between addressee() and addressee().a that's the issue.

Ivan Jager: It's not the ".a" that is wrong. It's the fact that the printf() function will be trying to access the value returned by addressee() in the previous sequence point. Perhaps you are confused because they are using a function call as a sequence point. Would it be more clear if they instead used a semicolon? I think this is a worse example, because it also violates DCL30-C. Do not refer to an object outside of its lifetime.

Arbob Ahmad: When I compile this code with GCC version 4.1 and the -Wall switch, I do not get a warning about this bug. I only get a warning that the format string expects type 'char *' but the argument has type 'char[6]'.

Alex Volkovitsky: I can reproduce this on gcc 3.4.4. Then again... this could just be a case of a less-than-informative warning; perhaps we should explain a bit more exactly what is going on?

Robert Seacord: I think this rule needs to be rewritten / expanded. See

Hallvard Furuseth: printf() does not access the return value from addressee(). main() does that, and passes its member 'a' to printf().

David Svoboda: The code examples don't support the rule. Not sure why the NCCE segfaults, but I suspect it is due to the array, not to anything regarding sequence points. First off, the examples are evidence of something. And I also get the silly compiler warning from gcc that Arbob and Alex report on the NCCE. This is, of course, specific to printf() and its ilk.
In fact, gcc is very sticky about passing the 'temporary array' addressee().a to functions... usually it returns an error and rejects typecasting attempts. At first I thought the problem is that arrays are glorified pointers and the NCCE violates DCL30-C. Declare objects with appropriate storage durations. But as a counterexample, the following code compiles without warning and works properly. Also, if struct X uses something besides arrays, the code also compiles cleanly and works. So I think the NCCE illustrates something bad about arrays, but I don't know what. No rule in the Arrays section seems to apply. On a side note, the NCCE may be bad C, but it seems to be valid C++. It compiles cleanly under g++ and runs correctly. I suspect C++ treats temporary values differently than C. For instance, in C++ a function can return a reference to a variable, the result being that you can use a function call as an lvalue. So I suspect C++ is more thorough in its treatment of temporary values (or of arrays, whichever this is about).

Robert Seacord: C++ has different semantics. In C++, temporaries (like return values) are preserved until the end of evaluation of the containing full expression or full declarator.

David Svoboda: OK, having looked at this example more, I am convinced that the problem is better formulated in terms of arrays. C99 Section 6.9.1, paragraph 3 explicitly states that functions may not return arrays. The NCCE violates the spirit of this, but not the letter, as its function returns an array wrapped inside a struct. The NCCE behaves the same even if the array being returned lives on the heap, not the stack (e.g., created with malloc()). Also, I can't recreate the problem without arrays; e.g., a struct containing a struct works perfectly. The webpage "Extending the Lifetime of Temporary Objects" provides the reference for this rule, claiming a sequence point after the addressee() call and before printf() is responsible for the behavior.
But the NCCE is not bad because of sequence points, and I think the C99 standard is being misinterpreted here. Obviously a function's return value must be legitimate across at least one sequence point; otherwise you couldn't do foo(bar()), since there is a sequence point between the bar() call and the foo() call. There is no sequence point in referring to the array within the struct; the only sequence points (by definition) are the call to addressee() and the call to printf(). I haven't found a definitive reference forbidding the NCCE. As for gcc, it does not seem to properly convert the array to a pointer in the NCCE, which is why the NCCE crashes but works if we wrap the array in an &array[0] expression, as in my previous comment. One telling clue about this is that gcc won't compile the program if you replace 'printf' with some other function taking a char* or char array. I get the impression they were trying to prevent some array casts at the compiler level, while doing other array casts right, and the NCCE was just a loophole they never changed (probably because it's claimed to violate sequence points). As noted in the rule, this is not a problem with MSVC. So this may just be a gcc bug. If it merits a rule, it would be "Don't pass as a function argument an array that is a member of a struct returned by another function." I guess my big question is: can anyone cite a reference (that isn't about sequence points) saying why the NCCE is bad?

David Svoboda: Clark Nelson sez:

>> Perhaps I am confused over the meaning of this paragraph, from
>> Section 6.5.2.2 of C99 [ISO/IEC 9899:1999]:
>>
>> If an attempt is made to modify the result of a function call or
>> to access it after the next sequence point, the behavior is
>> undefined.
>>
>> It would seem to me that this paragraph renders a simple cascading
>> function call like foo(bar()) illegal, because there is a sequence
>> point before the call to foo() and after the call to bar(), and that
>> sequence point 'kills' the return value of bar().
>
> You have to remember that in C, all arguments are passed by value, so what foo receives is not the object returned by bar, but a copy thereof. So it's not the same object, so there's no undefined behavior.
>
>> So why is foo(bar()) legal, but foo(bar().a) illegal? (assuming
>> bar() returns a struct with an array member 'a')
>
> Because (bar().a) implicitly takes the address of an object returned by bar, so if foo dereferences that pointer, it is accessing the actual object returned by bar after a sequence point.
>
>> Rob pointed me to your proposal to amend this paragraph in the C
>> standard. AFAICT this would kill our rule, as the noncompliant code
>> example would become perfectly legal C (it is already legal C++).
>> But, as you know, your proposal is not part of the C standard yet,
>> so how do we cope in the meantime?
>
> That's a judgment call. Theoretically, even after a new C standard comes out, there will still be compilers around that won't yet conform to it. So from some perspective, it will still be reasonable to suggest to programmers that they avoid references to rvalue arrays. On the other hand, no one has yet found a C compiler that actually generates code that doesn't satisfy the C++ rule, which suggests that references to rvalue arrays are practically pretty safe.
>
> Actually, I take it back: in pre-C99 mode, GCC does something entirely unexpected with rvalue arrays. At least on IA-32, if an rvalue array expression is used as an argument to a function, the whole array is passed by value, i.e., copied into the argument block of the called function. No matter what the standard says, it might be reasonable to warn people away from that behavior.
>
> Clark

David Svoboda: After studying Clark's responses above to my questions, I tested the NCCE again. The code still coredumps on gcc version 4.2.3 (on a 64-bit AMD Ubuntu box), but it works perfectly if I add a --std=c99 flag to the compile command, which instructs GCC to adhere to C99 as much as possible. Clark's last two paragraphs sum up the situation pretty effectively. I'll just add:

Aaron Ballman: At some point, we need to look into the Automated Detection section; the NCCEs all compile cleanly with gcc 4.8.1 in -Wall -Weverything -pedantic mode.

Aaron Ballman: Clang also does not catch instances of this rule. I've removed the GCC row from the table; I don't believe it catches this rule currently (5.2.0).
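Clark's distinction above between foo(bar()) and foo(bar().a) can be sketched in a short C program. This is a hedged illustration, not part of the rule's own examples; bar, foo, and safe_length are hypothetical names, and only the well-defined patterns are shown executing.

```c
#include <string.h>

struct X { char a[8]; };

struct X bar(void) {
    struct X result = { "Hello" };
    return result;
}

/* foo(bar()) is fine even in C99: arguments are passed by value,
 * so foo receives a copy of the returned struct, not the temporary
 * object itself. */
size_t foo(struct X x) {
    return strlen(x.a);
}

/* Passing bar().a instead would hand out a pointer into the temporary
 * object, which C99 forbids accessing after the next sequence point.
 * The safe pattern is to copy the struct into a named object first,
 * whose lifetime is the enclosing block. */
size_t safe_length(void) {
    struct X x = bar();
    return strlen(x.a);
}
```

Both calls below are well-defined in C99 and C11 because no code ever dereferences a pointer into a temporary object across a sequence point.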
https://wiki.sei.cmu.edu/confluence/pages/viewpage.action?pageId=87152058
Recursion and You

Recently, I started working on a personal project to crawl TvTropes, essentially a casual wiki for discussing pop culture, make a graph of the works that reference each other, and then calculate the "impact factor" of each work in the same way Google judges the quality of a website by the people that link to it. This was my first attempt at crawling TvTropes:

```python
from bs4 import BeautifulSoup
import requests
import sys

tvtropes = ""
ls = set()

def addLinks(links):
    '''Add the links to the set we've crawled and return the new ones.'''
    new = set()
    for link in links:
        if link not in ls:
            ls.add(link)
            print(link)
            new.add(link)
    return new

def crawl(url):
    '''Crawl a website in depth-first order and print out every page
    we find in its domain.'''
    if url not in ls:
        links = getLinks(url)
        cont = addLinks(links)
        print(len(ls))
        for link in cont:
            crawl(link)

def getLinks(url):
    '''Use BeautifulSoup to scrape the page and filter the links on it
    for certain things we know we don't want to crawl.'''
    r = requests.get(url)
    data = r.text
    soup = BeautifulSoup(data)
    links = [link.get('href') for link in soup.find_all('a')]
    links = [link for link in links if 'mailto' not in link]
    links = [link for link in links if 'javascript' not in link]
    links = [link for link in links if '?action' not in link]
    links = [link for link in links if 'php' not in link or 'pmwiki.php' in link]
    links = [tvtropes + link if 'http' not in link else link for link in links]
    links = [link for link in links if tvtropes in link]
    return links

if __name__ == "__main__":
    crawl(tvtropes)
```

It was horribly broken, of course: Python doesn't let you recurse more than 1000 levels by default. This is actually a feature, since such deep recursion is usually a bug outside of functional programming.
The reason I was running into it here was that I was essentially trying to recursively walk a tree with a node for every page on TvTropes, which has about half a million of them by my count. My initial workaround was to just do sys.setrecursionlimit(2000000) and get rid of the limit. This wasn't the right thing to do, and the crawler was missing substantial numbers of pages.

The solution to my problem was to just use the scrapy library. It was much faster than my code and wasn't missing pages. As a general rule, if you need some code that does X and you're not trying to learn more about X, you should look for a library instead of rolling your own. If you have to roll your own, then you should make it an open-source project and give back to the community.

The way scrapy "walks the tree" is that it keeps a queue of webpages to crawl, pushing webpages onto it and popping them off. This gets around the issue of limited stack space by allocating everything on the heap, and it is the efficient approach if your language does not automatically optimize tail recursion. Spoiler alert: Python doesn't. In a language that does optimize tail calls, such as Haskell, instead of allocating more room on the stack every time we call a function, the compiler can check whether the call is the final expression in a function and choose to reuse the caller's stack space. This works because we know none of the data stored in that part of the stack is needed anymore.
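The queue-based traversal scrapy uses can be sketched in a few lines of plain Python. This is a hypothetical minimal crawler, not scrapy's actual code: get_links stands in for the real page-fetching logic, and the explicit deque replaces the call stack, so depth is bounded only by heap memory, not the recursion limit.

```python
from collections import deque

def crawl_iterative(start, get_links):
    """Breadth-first crawl using an explicit queue instead of the
    call stack. Returns pages in the order they were visited."""
    seen = {start}          # every URL we have already queued
    queue = deque([start])  # URLs waiting to be crawled
    order = []
    while queue:
        page = queue.popleft()
        order.append(page)
        for link in get_links(page):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order
```

Swapping popleft() for pop() turns the same loop into a depth-first crawl, which shows how little the choice of traversal order has to do with whether you recurse.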
http://joshuasnider.com/update/stack/recursion/2015/06/29/recursion-and-you/
On Tue, 2003-06-03 at 22:19, Torsten Knodt wrote:
> On Tuesday 03 June 2003 21:46, Bruno Dumon wrote:
> BD> yeah yeah, I agree with that, and for that purpose the tidyserializer is
> BD> very valuable. I was only wondering if there were any blocking bugs in
> BD> the normal htmlserializer that make it impossible to generate valid html
> BD> (next to the namespace problem).
>
> No real blocking. For most problems, there is a simple workaround.
>
> BD> (I'll look into applying the tidyserializer.)
>
> When you or someone else wants to apply it, I'll provide xdocs for it,
> including all parameters supported by tidy.

great.

> BD> TK> You have to validate the output to see if it's valid.
> BD> Is there any other way to validate the output than by validating it?
>
> Was written badly. You have to validate the output with an external program
> to see if it is valid. That's what I meant.

ok.

> BD> If "the job" means that Xalan should validate the serialized xml against
> BD> the DTD it references, then I think it's a pretty safe bet to say that
> BD> will never ever happen.
>
> I hope it removes not-allowed and not-needed namespaces.

but that is quite a heavy process if only for aesthetic purposes.

> For deciding what namespaces are allowed, it has to do validation.

true, but only if you are still living in the DTD era. And since in DTDs you shouldn't be using namespaces in the first place, maybe it is easier to simply make a transformer which drops all namespaces?

--
Bruno Dumon
Outerthought - Open Source, Java & XML Competence Support Center
bruno@outerthought.org
bruno@apache.org
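A transformer that drops all namespaces, as suggested above, can be sketched as a standalone XSLT 1.0 stylesheet. This is a hedged sketch for illustration, not Cocoon's actual implementation: it rebuilds every element and attribute under its local name and copies text, comments, and processing instructions through unchanged.

```
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Recreate each element without its namespace -->
  <xsl:template match="*">
    <xsl:element name="{local-name()}">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- Recreate each attribute without its namespace -->
  <xsl:template match="@*">
    <xsl:attribute name="{local-name()}">
      <xsl:value-of select="."/>
    </xsl:attribute>
  </xsl:template>

  <!-- Everything else passes through unchanged -->
  <xsl:template match="text()|comment()|processing-instruction()">
    <xsl:copy/>
  </xsl:template>

</xsl:stylesheet>
```

Dropped into a Cocoon pipeline as an ordinary XSLT transformer, this would strip namespace declarations without any validation step, which addresses the "heavy process" objection.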
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200306.mbox/%3C1054676939.8637.93.camel@yum.ot%3E
Table of contents

- What Are We Going to Build?
- Prerequisites
- Tools We'll Need
- Let's Begin!
- Writing our Smart Contract in Solidity
  - Get our Local Ethereum Network Running
  - Creating our Smart Contract
  - Running Our Contract Locally
  - Finishing the Smart Contract Logic
  - Deploying the Smart Contract
- Building the Web App in React
- Send lucky users some Ethereum
- Final Steps
- ✍🏻 Conclusion

In this Web3 tutorial, we'll be building a fully-functional blockchain web3 app. We'll start with the basics of building a blockchain app, and by the end, we'll have our web3 app live for everyone to use.

What Are We Going to Build?

We'll be building a Decentralized App (dApp) called 🧀 Pick Up Lines. As the name suggests, our users will be able to send some good ol' pickup lines and stand a chance to win Ethereum as a reward.

Prerequisites

- Beginner to intermediate knowledge of React
- Some familiarity with Solidity smart contracts
- Basic understanding of blockchain programming

Tools We'll Need

Work becomes play with the right tools, right? Fortunately, web3 has a plethora of tools at its disposal to achieve the infamous WAGMI 🧘

- Visual Studio Code or any text editor
- Hardhat for Ethereum development
- Metamask as a crypto wallet
- Vercel and Alchemy as hosting platforms

Let's Begin!

Now that we have some idea of the final app and the tools we're going to use, let's start writing code! First, we'll write the smart contract of our blockchain app.
Then, we'll build our React app, and at the end, connect the two to have a full-fledged web3 app.

Writing our Smart Contract in Solidity

1. Get our Local Ethereum Network Running

We need to spin up a local Ethereum network, a blockchain network that runs entirely on your own machine. We'll use it for testing while we build our application, as it gives us all the blockchain features without using real cryptocurrency.

In this web3 tutorial, we'll be using Hardhat. Since we need to test our blockchain app locally before launching, we'll use fake ETH and fake test accounts to test our smart contract through Hardhat. Most importantly, it will facilitate compiling our smart contract on our local blockchain.

Now, head to the terminal and move to the directory you want. Once there, run these commands:

```shell
mkdir pickup-lines
cd pickup-lines
npm init -y
npm install --save-dev hardhat
```

Next, let's get a sample project running:

```shell
# To create our Hardhat project
npx hardhat
```

Run the project:

```shell
# To compile our contracts
npx hardhat compile
```

Test the project:

```shell
# To run our tests
npx hardhat test
```

The commands above set up a bare-bones Hardhat project. With no plugins, it allows you to create your own tasks, compile your Solidity code, and run your tests. Basically, you're creating a blockchain network in a local environment. You'll see something similar to this:

2. Creating our Smart Contract

Now, let's create a PickupLines.sol file under the contracts directory. We need to follow a strict folder structure. It's super important because we're building on top of Hardhat, and the default paths for our /contracts, /scripts, and /test directories are pre-defined. Not following this structure will lead to our Hardhat tasks failing. Be careful!
```solidity
/* Use a license depending on your project. */
// SPDX-License-Identifier: UNLICENSED

/* Code is written for Solidity version 0.8.0 or newer. */
pragma solidity ^0.8.0;

/* Built-in Hardhat interactive JavaScript console */
import "hardhat/console.sol";

/* Main Solidity contract */
contract PickupLines {
    /* Constructor function for our contract */
    constructor() {
        console.log("I am the Cheesy PickUp Lines' smart contract.");
    }
}
```

3. Running Our Contract Locally

Now, let's create a script to run the smart contract we just built. This will enable us to test it on our local blockchain. Go into the scripts folder and create a file named run.js. In the run.js file, enter the following code:

```javascript
/* The `main` function to run the contract locally for an instance. */
const main = async () => {
  /* Helper function to get the contract `PickupLines` */
  const contracts = await hre.ethers.getContractFactory("PickupLines");

  /* Deploying the contract for an 'instance' */
  const contract = await contracts.deploy();
  await contract.deployed();

  /* Address of the deployed contract */
  console.log("Contract deployed to:", contract.address);
};

/* A try-catch block for our `main` function */
const runMain = async () => {
  try {
    await main();
    process.exit(0); // exit Node process without error
  } catch (error) {
    console.log(error);
    process.exit(1); // exit Node process while indicating 'Uncaught Fatal Exception' error
  }
};

/* Running the `runMain` function */
runMain();
```

Let's run the run.js file we just created from our terminal:

```shell
npx hardhat run scripts/run.js
```

Did that go well? You can see the console.log message we put in our constructor() method. There, you'll also see the contract address.

4. Finishing the Smart Contract Logic

Now, let's make our contract a bit fancier. We want to be able to let someone send us a pickup line and then store that line on the blockchain. So, the first thing we need is a function so anyone can send us a pickup line.
In the PickupLines.sol file under the Contracts folder, enter the following code: contract PickUpLines { /*Solidity event, that fires when a new line is submitted.*/ event NewPickUpLine(address indexed from, uint256 timestamp, string line); /*Data members*/ uint256 private seed; /*Seed data*/ uint256 totalLines; /*Total lines data*/ mapping(address => bool) hasWrote; /*Map of all addresses with a line submitted*/ /*A composite data member for a pick up line*/ struct PickUpLine { address writer; string line; uint256 timestamp; } /*Array of all pick up lines submitted.*/ PickUpLine[] pickuplines; constructor() payable { console.log("I am the Cheesy PickUp Lines' smart contract!"); } /*Function for adding a new line to the contract.*/ function newLine(string memory _line) public { /*Adding a new Pickup Line to our blockchain.*/ totalLines += 1; pickuplines.push(PickUpLine(msg.sender, _line, block.timestamp)); hasWrote[msg.sender] = true; emit NewPickUpLine(msg.sender, block.timestamp, _line); } /*Function to get all the lines submitted to the contract.*/ function getTotalLines() public view returns (uint256) { console.log("We have %s total PickUpLines.", totalLines); return totalLines; } } Boom! So, that's how a function is written in Solidity. We also added a totalLines variable that is automatically initialized to 0. This variable is special because it's called a state variable, and it's a special one because it's stored permanently in our contract storage. 5. Deploying the Smart Contract Now, we'll upgrade from our local blockchain to a globally-accessible blockchain. Follow the 4 steps below: 1. Let's create a file called deploy.js inside the scripts folder. Enter the code given below in the deploy.js file we just created. 
```javascript
/* The `main` function to deploy the contract */
const main = async () => {
  /* Getting the deployer's address */
  const [deployer] = await hre.ethers.getSigners();

  /* Getting the deployer's ETH balance */
  const accountBalance = await deployer.getBalance();

  /* Logging the deployer's address and balance */
  console.log("Deploying contracts with account: ", deployer.address);
  console.log("Account balance: ", accountBalance.toString());

  /* Deploying the contract */
  const contracts = await hre.ethers.getContractFactory("PickupLines");
  const contract = await contracts.deploy();
  await contract.deployed();

  /* Logging the address of the deployed contract */
  console.log("PickupLines address: ", contract.address);
};

/* A try-catch block for our `main` function */
const runMain = async () => {
  try {
    await main();
    process.exit(0);
  } catch (error) {
    console.log(error);
    process.exit(1);
  }
};

/* Running the `runMain` function */
runMain();
```

2. We'll be using a platform called Alchemy. Sign up here (alchemy.com). We'll be using Alchemy to deploy our contract on a testnet. If we used the Ethereum mainnet, every action/transaction on our app would have real monetary value, and we don't want that until our app is fully developed for public usage. For now, we're just testing our app. With a testnet, we'll be able to enjoy all the functions of a blockchain, albeit with fake cryptocurrency. You can get some fake ETH here. Learn more about Alchemy right here. 👇

3. This is the final part of the deployment. Make changes to your hardhat.config.js file by entering the code below (you'll find YOUR_ALCHEMY_API_URL in your Alchemy dashboard):
```javascript
require("@nomiclabs/hardhat-waffle");

module.exports = {
  solidity: "0.8.0",
  networks: {
    rinkeby: {
      url: "YOUR_ALCHEMY_API_URL",
      accounts: ["YOUR_WALLET_ACCOUNT_KEY"],
    },
  },
};
```

Note: You can access your private key by opening MetaMask, changing the network to "Rinkeby Test Network", clicking the three dots, and selecting "Account Details" > "Export Private Key".

4. Now, we're going to deploy the contract. We can do that by moving to the terminal and running the following command:

```shell
npx hardhat run scripts/deploy.js --network rinkeby
```

Did it work? EPIC. We deployed the contract, and we also have its address on the blockchain. Note the address somewhere, as our website is going to need it so that it knows where on the blockchain to look for your contract.

Building the Web App in React

1. Setup a Basic React App with Metamask

It's time to start working on our web app. Our contract was pretty simple. Now, let's figure out how the front-end app can interact with our smart contract.

Note: We've built a starter kit for you! Here's the link to the repository. You can clone the repository and start working. The purpose of this blog post is to get you accustomed to blockchain development, so we won't be going too deep into the front-end development in this section.

2. Connect the Wallet to our App

Next, we need an Ethereum wallet. There are many available, but for this project we're going to use Metamask. Download its browser extension and set up your wallet here.
In the App.tsx file inside the src folder, enter the following code:

```typescript
import React, { useEffect, useState } from "react";
import { ethers } from "ethers";
import './App.css';

/* Copy-paste the ABI file from the contract's artifacts into this repository. */
import abi from "./utils/PickupLines.json";

/* This will be printed on the terminal once you deploy the contract. */
const contractAddress = '0x52BB......';
const contractABI = abi.abi;

export default function App() {
  /* State variable to store the account connected with the wallet. */
  const [currentAccount, setCurrentAccount] = useState("");

  /* Function to check if a wallet is connected to the app. */
  const checkIfWalletIsConnected = async () => {
    try {
      const { ethereum } = window;
      if (!ethereum) {
        console.log("Make sure you have MetaMask!");
        return;
      }
      const accounts = await ethereum.request({ method: "eth_accounts" });
      if (accounts.length !== 0) {
        console.log("Found an authorized account:", accounts[0]);
        setCurrentAccount(accounts[0]);
      } else {
        console.log("No authorized account found");
      }
    } catch (error) {
      console.log(error);
    }
  }

  /* Function to connect the wallet */
  const connectWallet = async () => {
    try {
      const { ethereum } = window;
      if (!ethereum) {
        alert("Get MetaMask!");
        return;
      }
      const accounts = await ethereum.request({ method: "eth_requestAccounts" });
      console.log("Connected", accounts[0]);
      setCurrentAccount(accounts[0]);
    } catch (error) {
      console.log(error)
    }
  }

  /* React hook to check for a wallet connection when the app is mounted. */
  useEffect(() => {
    checkIfWalletIsConnected();
  }, []);

  return (
    <div className="mainContainer">
      <div className="dataContainer">
        <div className="header">🧀 Hey there!</div>
        <div className="bio">
          <span>Welcome to Pick-Up Lines!</span>
          <button className="button" onClick={null}>
            Shoot Line
          </button>
          {/* If there is no current account, render this button */}
          {!currentAccount && (
            <button className="button" onClick={connectWallet}>
              Connect Wallet
            </button>
          )}
        </div>
      </div>
    </div>
  );
}
```

3. Call the Smart Contract From our App

We now have a front-end app. We've deployed our contract. We've connected our wallets. Now let's call our contract from the front end using the credentials we have access to from Metamask.
Add the following pickup function in our App component inside the App.tsx file: const pickup = async () => { try { const { ethereum } = window; if (ethereum) { const provider = new ethers.providers.Web3Provider(ethereum); const signer = provider.getSigner(); const contract = new ethers.Contract(contractAddress, contractABI, signer); /*Get the count of all lines before adding a new line*/ let count = await contract.getTotalLines(); console.log("Retrieved total lines...", count.toNumber()); /*Execute the actual pickup lines from your smart contract*/ const contractTxn = await contract.wave(); console.log("Mining...", contractTxn.hash); await contractTxn.wait(); console.log("Mined -- ", contractTxn.hash); /*Get the count of all lines after adding a new line*/ count = await contract.getTotalLines(); console.log("Retrieved total lines count...", count.toNumber()); } else { console.log("Ethereum object doesn't exist!"); } } catch (error) { console.log(error); } } Now to call the function, let's create a button in the App.tsx file. Add the following code: <button className="button" onClick={pickup}> Send a line </button> When you run this, you'll see that the total line count is increased by 1. You'll also see that Metamask pops us and asks us to pay "gas" which we pay by using our fake crypto. Send lucky users some Ethereum 1. Set a Prize and Select Users to Send Them Ethereum So right now, our code is set to store random pick up lines every single time. Let's make it more interesting by adding a reward algorithm inside our newLine function. Modify the newLine function in the PickupLines.sol file: function newLine(string memory _line) public { if (hasWrote[msg.sender]) { revert( "It seems you've posted a line already. We don't do repeats when it comes to picking up lines!" 
        );
    }

    /*Adding a new Pickup Line to our blockchain.*/
    totalLines += 1;
    pickuplines.push(PickUpLine(msg.sender, _line, block.timestamp));
    hasWrote[msg.sender] = true;
    emit NewPickUpLine(msg.sender, block.timestamp, _line);

    /*Reward 10% of the senders by creating a random seed number.*/
    seed = (block.difficulty + block.timestamp + seed) % 100;
    if (seed <= 10) {
        uint256 prizeAmount = 0.0001 ether;
        require(
            prizeAmount <= address(this).balance,
            "The contract has insufficient ETH balance."
        );
        (bool success, ) = (msg.sender).call{value: prizeAmount}("");
        require(success, "Failed to withdraw ETH from the contract");
    }
}

Here, the algorithm needs a random number. We take two numbers given to us by Solidity, block.difficulty and block.timestamp, and combine them to create a random number. To make this number even more random, we create a seed state variable (declared as a uint256 on the contract) that changes every time a user sends a new line. We combine all three of these values and, because we need the result to lie in the range 0-99, apply the modulo operator (%) with seed % 100.

Final Steps

1. Preventing spammers

Now, you have a way to randomly select people to reward. It's useful to add a condition to your site so that people can't just spam pickup lines at you. Why? Well, maybe you just don't want them to keep on trying to win the reward over and over by sending multiple lines at you. Or, maybe you don't want their messages filling up your wall of messages. Modify the newLine function inside the PickupLines.sol file:

contract PickUpLines {
    mapping(address => bool) hasWrote;

    struct PickUpLine {
        address writer;
        string line;
        uint256 timestamp;
    }

    PickUpLine[] pickuplines;

    /*Adding a new Pickup Line to our blockchain.*/
    function newLine(string memory _line) public {
        /*Condition to check for repetitive submission.*/
        if (hasWrote[msg.sender]) {
            revert(
                "It seems you've posted a line already.
We don't do repeats when it comes to picking up lines!"
            );
        }
        hasWrote[msg.sender] = true;
    }
}

2. Finalize

Congratulations! You've got all the core functionality down. Now, it's time for you to make this your own. Change up the CSS, the text, add some media embeds, add some more functionality, whatever. Make stuff look cool :).

✍🏻 Conclusion

There are always multiple improvements/features one can think of for a blockchain app like this. Feel free to experiment with the code to improve your skills. This blog is a part of the Hashnode Web3 blog, where a team of curated writers is bringing out new resources to help you discover the universe of web3. Check us out for more on NFTs, blockchains, and the decentralized future.
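The reward roll above is easy to sanity-check off-chain. Here is a small JavaScript sketch (plain Node, no web3 libraries; the stand-in values for block.difficulty and block.timestamp are made up) of the same (difficulty + timestamp + seed) % 100 update, showing the seed always lands in the 0-99 range, so seed <= 10 rewards roughly one writer in ten:

```javascript
// Hypothetical off-chain model of the contract's seed update.
function nextSeed(blockDifficulty, blockTimestamp, seed) {
  // Mirrors: seed = (block.difficulty + block.timestamp + seed) % 100;
  return (blockDifficulty + blockTimestamp + seed) % 100;
}

let seed = 0;
let wins = 0;
const rolls = 10000;
for (let i = 0; i < rolls; i++) {
  // Pseudo-random stand-ins for the real block values.
  const difficulty = Math.floor(Math.random() * 1e6);
  const timestamp = 1650000000 + i * 13;
  seed = nextSeed(difficulty, timestamp, seed);
  if (seed <= 10) wins++;
}
console.log(`rewarded ${wins} of ${rolls} rolls`);
```

Strictly speaking, seed <= 10 matches 11 of the 100 possible values, so "reward 10% of the senders" is an approximation.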
https://web3.hashnode.com/how-to-build-and-deploy-a-modern-web3-blockchain-app-tutorial
Opened 12 years ago
Closed 12 years ago

#3691 closed (duplicate)

Telling Q Objects to split up foreign keys (extension of #3592)

Description

This is not a duplicate of #3592. I was working with some of the things in #3592, and it seems from chatting on IRC that people really want a way to do ANDs on a m2m field without the Q objects subsuming them into one JOIN. With the patch, you can:

from django.db.models.query import Q, QSplit
Article.objects.filter(QSplit(Q(tag__value='A')) & QSplit(Q(tag__value='B')))
# articles with both tags

So now they are split into different Tag entries. This isn't meant to be used in django. Just for people if they need it before the queryset refactoring. Also, this seems like it would be useful in the final version, so it's here for reference.

Attachments (1)

Patch to django for QSplit (changed 12 years ago)

Change History (2)

comment:1 changed 12 years ago

The actual bug involved here is the same as in #1801 (that one talks about whole QuerySet objects, not Q() objects, but the underlying logic is the same). The workaround might be useful for some people, but please put that in the wiki, rather than the ticket tracker, since otherwise there would be no way to effectively resolve this ticket.
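To see what the ticket is asking for without reading the patch, here is a plain-Python model (no Django involved; the data is invented) of the two AND semantics. A single shared JOIN asks one joined Tag row to equal both values at once, which is never true for two different tags, while QSplit-style semantics give each condition its own join:

```python
# Toy data: article id -> set of tag values.
articles = {
    "a1": {"A", "B"},
    "a2": {"A"},
    "a3": {"B"},
}

def single_join_and(data, v1, v2):
    # One joined tag row must satisfy both conditions at once.
    return [a for a, tags in data.items()
            if any(t == v1 and t == v2 for t in tags)]

def split_join_and(data, v1, v2):
    # Each condition is evaluated against its own join of the tag table.
    return [a for a, tags in data.items()
            if any(t == v1 for t in tags) and any(t == v2 for t in tags)]

print(single_join_and(articles, "A", "B"))  # []
print(split_join_and(articles, "A", "B"))   # ['a1']
```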
https://code.djangoproject.com/ticket/3691
In this section, you will learn how to read a file line by line. Java provides several classes for file manipulation; here we are going to read a file line by line. For reading text from a file it's better to use a Reader class instead of an InputStream class, since Reader classes are intended for reading textual input.

In the given example, we simply create a BufferedReader object and read from the file line by line until the readLine() method of the BufferedReader class returns null, which indicates the end of the file.

hello.txt:

Here is the code:

import java.io.*;

public class ReadFile {
    public static void main(String[] args) throws Exception {
        File f = new File("C:/Hello.txt");
        if (!f.exists()) {
            System.out.println("The specified file does not exist");
        } else {
            FileReader fr = new FileReader(f);
            BufferedReader reader = new BufferedReader(fr);
            String st = "";
            while ((st = reader.readLine()) != null) {
                System.out.println(st);
            }
            reader.close();
        }
    }
}

In the above code, the BufferedReader class is used to read text from a file line by line using its readLine() method.
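The example above closes the reader explicitly, which is skipped if an exception is thrown mid-loop. A sketch of the same loop using try-with-resources (Java 7+), which closes the file automatically; the sample file it creates is just for illustration:

```java
import java.io.*;
import java.nio.file.*;
import java.util.*;

public class ReadFileLines {

    // Read every line of the given file into a list, closing the reader automatically.
    static List<String> readLines(Path file) throws IOException {
        List<String> lines = new ArrayList<>();
        // try-with-resources closes the reader even if readLine() throws
        try (BufferedReader reader = new BufferedReader(new FileReader(file.toFile()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Create a small sample file so the example is self-contained.
        Path file = Files.createTempFile("hello", ".txt");
        Files.write(file, Arrays.asList("first line", "second line"));

        for (String line : readLines(file)) {
            System.out.println(line);
        }
        Files.delete(file);
    }
}
```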
http://www.roseindia.net/tutorial/java/core/files/readFilelinebyline.html
Nodejs ReferenceError: require is not defined in ES module scope

Dung Do Tien Jul 01 2022

Hello dev guys. I am a beginner with Nodejs and I created a simple app that helps me get data from an API and display that data in the browser. I used the express module to help create the server, but I got the error ReferenceError: require is not defined in ES module scope, you can use import instead.

ReferenceError: require is not defined in ES module scope, you can use import instead
This file is being treated as an ES module because it has a '.js' file extension and 'C:\Users\conta\source\repos\NodeJs\Demo1\package.json' contains "type": "module". To treat it as a CommonJS script, rename it to use the '.cjs' file extension.

Here is my index.js file:

import fetch from "node-fetch";
var express = require('express');

I installed Nodejs v16.15.1 and run on Windows 11. Please help me if you know any solution. Thank you in advance.

Đặng Thanh Tuấn Jul 01 2022

Your package.json contains "type": "module", so Node treats every .js file in the project as an ES module, and the CommonJS syntax require('express') is not available there. To solve your issue, please change line 2:

var express = require('express');

To:

import express from "express";

And your issue will be solved.
https://quizdeveloper.com/faq/nodejs-referenceerror-require-is-not-defined-in-es-module-scope-aid3486
[ Tcllib Table Of Contents | Tcllib Index ]

term::receive::bind(n) 0.1 "Terminal control"

Table Of Contents

Synopsis

- package require Tcl 8.4
- package require term::receive::bind ?0.1?

Description

This package provides a class for the creation of simple dispatchers from character sequences to actions. Internally each dispatcher is in essence a deterministic finite automaton with tree structure.

Class API

The package exports a single command, the class command, enabling the creation of dispatcher instances. Its API is:

- term::receive::bind object ?map?
This command creates a new dispatcher object with the name object, initializes it, and returns the fully qualified name of the object command as its result. The argument is a dictionary mapping from strings, i.e. character sequences, to the command prefixes to invoke when the sequence is found in the input stream.

Object API

The objects created by the class command provide the methods listed below:

- object map str cmd
This method adds an additional mapping from the string str to the action cmd. The mapping will take effect immediately should the processor be in a prefix of str, or at the next reset operation. The action is a command prefix and will be invoked with one argument appended to it, the character sequence causing the invocation. It is executed in the global namespace.

- object default cmd
This method defines a default action cmd which will be invoked whenever an unknown character sequence is encountered. The command prefix is handled in the same way as the regular actions defined via method map.

- object listen ?chan?
This method sets up a fileevent listener for the channel with handle chan and invokes the dispatcher object whenever characters have been received, or EOF was reached. If not specified, chan defaults to stdin.

- object unlisten ?chan?
This method removes the fileevent listener for the channel with handle chan. If not specified, chan defaults to stdin.
- object reset
This method resets the character processor to the beginning of the tree.

- object next char
This method causes the character processor to process the character char. This may simply advance the internal state, or invoke an associated action for a recognized sequence.

- object process str
This method causes the character processor to process the character sequence str, advancing the internal state and invoking actions as necessary. This is a callback for listen.

- object eof
This method causes the character processor to handle EOF on the input. This is currently a no-op. It is a callback for listen.

Notes

The simplicity of the DFA means that it is not possible to recognize a character sequence which has another recognized character sequence as its prefix. In other words, the set of recognized strings has to form a prefix code.

Bugs, Ideas, Feedback

This document, and the package it describes, will undoubtedly contain bugs and other problems. Please report such in the category term of the Tcllib Trackers. Please also report any ideas for enhancements you may have for either package and/or documentation.
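Since the dispatcher is a deterministic finite automaton with tree structure, its behaviour is easy to model outside Tcl. The following is a rough Python sketch (not the package's implementation; names are illustrative) of the map/next/process/reset logic. It also shows why recognized sequences must form a prefix code: an action fires as soon as its sequence completes, so nothing longer could ever match.

```python
class SeqDispatcher:
    """Toy model of term::receive::bind: a character trie with actions at leaves."""

    def __init__(self, mapping=None):
        self.root = {}
        self.state = self.root
        self.default = None
        self.buffer = ""
        for seq, action in (mapping or {}).items():
            self.map(seq, action)

    def map(self, seq, action):
        node = self.root
        for ch in seq:
            node = node.setdefault(ch, {})
        node["action"] = action            # leaf: the stored action

    def reset(self):
        self.state = self.root
        self.buffer = ""

    def next(self, ch):
        self.buffer += ch
        if ch in self.state:
            self.state = self.state[ch]
            if "action" in self.state:     # recognized a full sequence
                self.state["action"](self.buffer)
                self.reset()
        else:                              # unknown sequence
            if self.default:
                self.default(self.buffer)
            self.reset()

    def process(self, s):
        for ch in s:
            self.next(ch)

hits = []
d = SeqDispatcher({"\x1b[A": lambda s: hits.append("up")})
d.default = lambda s: hits.append("other:" + s)
d.process("\x1b[A")
print(hits)  # ['up']
```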
http://docs.activestate.com/activetcl/8.5/tcl/tcllib/term/term_bind.html
The Raspberry Pi JavaFX In-Car System (Part 2)
By speakjava on Jun 12, 2013

First, we need a short review of modern car electronics. Things have certainly moved on from my first car, which was a 1971 Mini Clubman. This didn't even have electronics in it (unless you count the radio), as everything was electro-mechanical (anyone remember setting the gap for the points on the distributor?) Today, in Europe at least, things like anti-lock brakes (ABS) and stability control (ESC), which require complex sensors and electronics, are mandated by law. Also, since 2001, all petrol-driven vehicles have had to be fitted with an EOBD (European On-Board Diagnostics) interface. This conforms to the OBD-II standard, which is where the ELM327 interface from my first blog entry comes in. As a standard, OBD-II mandates some parts while other parts are optional. That way certain basic facilities are guaranteed to be present (mainly those related to the measuring of exhaust emission performance) and then each car manufacturer can implement the optional parts that make sense for the vehicle they're building. There are five signal protocols that can be used with the OBD-II interface:

- SAE J1850 PWM (Pulse-width modulation, used by Ford)
- SAE J1850 VPW (Variable pulse-width, used by General Motors)
- ISO 9141-2 (which is a bit like RS-232)
- ISO 14230
- ISO 15765 (also referred to as Controller Area Network, or CAN bus)

For my current vehicle, which is an Audi S3, the protocol is ISO 15765 as the car has multiple CAN buses for communication between the various control units (we'll come back to this in more detail later). So where to start? The first thing that is necessary is to establish communication between a Java application and the ELM327. One of the great things about using Java for an application like this is that the development can easily be done on a laptop and the production code moved easily to the target hardware. No cross compilation tool chains needed here, thank you.
My ELM327 interface communicates via 802.11 (Wi-Fi). The address of my interface is 192.168.0.10 (which seems pretty common for these devices) and it uses port 35000 for all communication. To test that things were working I set my MacBook to use a static IP address on Wi-Fi and then connected directly to the ELM327, which appeared in the list of available Wi-Fi devices. Having established communication at the IP level I could then telnet into the ELM327. If you want to start playing with this it's best to get hold of the documentation, which is really well written and complete. The ELM327 essentially uses two modes of communication:

- AT commands for talking to the interface itself
- OBD commands that conform to the description above. The ELM327 does all the hard work of converting these to the necessary packet format, adding headers, checksums and so on, as well as unmarshalling the response data.

To keep things simple I wrote a class that would encapsulate the connection to the ELM327. Here's the code that initialises the connection so that we can read and write bytes, as required:

/* Copyright © 2013, Oracle and/or its affiliates. All rights reserved.
 */
private static final String ELM327_IP_ADDRESS = "192.168.0.10";
private static final int ELM327_IP_PORT = 35000;
private static final byte OBD_RESPONSE = (byte)0x40;
private static final String CR = "\r";
private static final String LF = "\n";
private static final String CR_LF = "\r\n";
private static final String PROMPT = ">";

private Socket elmSocket;
private OutputStream elmOutput;
private InputStream elmInput;
private boolean debugOn = false;
private int debugLevel = 5;
private byte[] rawResponse = new byte[1024];
protected byte[] responseData = new byte[1024];

/**
 * Common initialisation code
 *
 * @throws IOException If there is a communications problem
 */
private void init() throws IOException {
    /* Establish a socket to the port of the ELM327 box and create
     * input and output streams to it */
    try {
        elmSocket = new Socket(ELM327_IP_ADDRESS, ELM327_IP_PORT);
        elmOutput = elmSocket.getOutputStream();
        elmInput = elmSocket.getInputStream();
    } catch (UnknownHostException ex) {
        System.out.println("ELM327: Unknown host, [" + ELM327_IP_ADDRESS + "]");
        System.exit(1);
    } catch (IOException ex) {
        System.out.println("ELM327: IO error talking to car");
        System.out.println(ex.getMessage());
        System.exit(2);
    }

    /* Ensure we have an input and output stream */
    if (elmInput == null || elmOutput == null) {
        System.out.println("ELM327: input or output to device is null");
        System.exit(1);
    }

    /* Lastly send a reset command and turn character echo off
     * (it's not clear that turning echo off has any effect) */
    resetInterface();
    sendATCommand("E0");
    debug("ELM327: Connection established.", 1);
}

Having got a connection, we then need some methods to provide a simple interface for sending commands and getting back the results. Here are the common methods for sending messages:
/**
 * Send an AT command to control the ELM327 interface
 *
 * @param command The command string to send
 * @return The response from the ELM327
 * @throws IOException If there is a communication error
 */
protected String sendATCommand(String command) throws IOException {
    /* Construct the full command string to send. We must remember to
     * include a carriage return (ASCII 0x0D) */
    String atCommand = "AT " + command + CR_LF;
    debug("ELM327: Sending AT command [AT " + command + "]", 1);

    /* Send it to the interface */
    elmOutput.write(atCommand.getBytes());
    debug("ELM327: Command sent", 1);
    String response = getResponse();

    /* Delete the command, which may be echoed back */
    response = response.replace("AT " + command, "");
    return response;
}

/**
 * Send an OBD command to the car via the ELM327.
 *
 * @param command The command as a string of hexadecimal values
 * @return The number of bytes returned by the command
 * @throws IOException If there is a problem communicating
 */
protected int sendOBDCommand(String command) throws IOException, ELM327Exception {
    byte[] commandBytes = byteStringToArray(command);

    /* A valid OBD command must be at least two bytes to indicate the mode
     * and then the information request */
    if (commandBytes.length < 2)
        throw new ELM327Exception("ELM327: OBD command must be at least 2 bytes");

    byte obdMode = commandBytes[0];

    /* Send the command to the ELM327 */
    debug("ELM327: sendOBDCommand: [" + command + "], mode = " + obdMode, 1);
    elmOutput.write((command + CR_LF).getBytes());
    debug("ELM327: Command sent", 1);

    /* Read the response */
    String response = getResponse();

    /* Remove the original command in case that gets echoed back */
    response = response.replace(command, "");
    debug("ELM327: OBD response = " + response, 1);

    /* If there is NO DATA, there is no data */
    if (response.compareTo("NO DATA") == 0)
        return 0;

    /* Trap error message from CAN bus */
    if (response.compareTo("CAN ERROR") == 0)
        throw new ELM327Exception("ELM327: CAN ERROR detected");

    rawResponse =
        byteStringToArray(response);
    int responseDataLength = rawResponse.length;

    /* The first byte indicates a response for the request mode and the
     * second byte is a repeat of the PID. We test these to ensure that
     * the response is of the correct format */
    if (responseDataLength < 2)
        throw new ELM327Exception("ELM327: Response was too short");
    if (rawResponse[0] != (byte)(obdMode + OBD_RESPONSE))
        throw new ELM327Exception("ELM327: Incorrect response [" +
            String.format("%02X", rawResponse[0]) + " != " +
            String.format("%02X", (byte)(obdMode + OBD_RESPONSE)) + "]");
    if (rawResponse[1] != commandBytes[1])
        throw new ELM327Exception("ELM327: Incorrect command response [" +
            String.format("%02X", rawResponse[1]) + " != " +
            String.format("%02X", commandBytes[1]) + "]");

    debug("ELM327: byte count = " + responseDataLength, 1);
    for (int i = 0; i < responseDataLength; i++)
        debug(String.format("ELM327: byte %d = %02X", i, rawResponse[i]), 1);

    responseData = Arrays.copyOfRange(rawResponse, 2, responseDataLength);
    return responseDataLength - 2;
}

/**
 * Send an OBD command to the car via the ELM327. Test the length of the
 * response to see if it matches an expected value
 *
 * @param command The command as a string of hexadecimal values
 * @param expectedLength The expected length of the response
 * @return The length of the response
 * @throws IOException If there is a communication error or wrong length
 */
protected int sendOBDCommand(String command, int expectedLength)
        throws IOException, ELM327Exception {
    int responseLength = this.sendOBDCommand(command);
    if (responseLength != expectedLength)
        throw new IOException("ELM327: sendOBDCommand: bad reply length [" +
            responseLength + " != " + expectedLength + "]");
    return responseLength;
}

And the method for reading back the results:

/**
 * Get the response to a command, having first cleaned it up so it only
 * contains the data we're interested in.
 *
 * @return The response data
 * @throws IOException If there is a communications problem
 */
private String getResponse() throws IOException {
    boolean readComplete = false;
    StringBuilder responseBuilder = new StringBuilder();

    /* Read the response. Sometimes timing issues mean we only get part of
     * the message in the first read. To ensure we always get all the intended
     * data (and therefore do not get confused on the next read) we keep
     * reading until we see a prompt character in the data. That way we know
     * we have definitely got all the response. */
    while (!readComplete) {
        int readLength = elmInput.read(rawResponse);
        debug("ELM327: Response received, length = " + readLength, 1);
        String data = new String(Arrays.copyOfRange(rawResponse, 0, readLength));
        responseBuilder.append(data);

        /* Check for the prompt */
        if (data.contains(PROMPT)) {
            debug("ELM327: Got a prompt", 1);
            break;
        }
    }

    /* Strip out newline, carriage return and the prompt */
    String response = responseBuilder.toString();
    response = response.replace(CR, "");
    response = response.replace(LF, "");
    response = response.replace(PROMPT, "");
    return response;
}

Using these methods it becomes pretty simple to implement methods that start to expose the OBD protocol. For example, to get the version information about the interface we just need this simple method:

/**
 * Get the version number of the ELM327 connected
 *
 * @return The version number string
 * @throws IOException If there is a communications problem
 */
public String getInterfaceVersionNumber() throws IOException {
    return sendATCommand("I");
}

Another very useful method is one that returns the details about which of the PIDs are supported for a given mode.

/**
 * Determine which PIDs for OBDII are supported. The OBD standards docs are
 * required for a fuller explanation of these.
 *
 * @param pid Determines which range of PIDs support is reported for
 * @return An array indicating which PIDs are supported
 * @throws IOException If there is a communication error
 */
public boolean[] getPIDSupport(byte pid) throws IOException, ELM327Exception {
    int dataLength = sendOBDCommand("01 " + String.format("%02X", pid));

    /* If we get zero bytes back then we assume that there are no
     * supported PIDs for the requested range */
    if (dataLength == 0)
        return null;

    int pidCount = dataLength * 8;
    debug("ELM327: pid count = " + pidCount, 1);
    boolean[] pidList = new boolean[pidCount];
    int p = 0;

    /* Now decode the bit map of supported PIDs. Note that responseData
     * already has the mode and PID bytes stripped off, so we start at 0 */
    for (int i = 0; i < dataLength; i++)
        for (int j = 0; j < 8; j++) {
            if ((responseData[i] & (1 << j)) != 0)
                pidList[p++] = true;
            else
                pidList[p++] = false;
        }

    return pidList;
}

The PIDs 0x00, 0x20, 0x40, 0x60, 0x80, 0xA0 and 0xC0 of mode 1 will report back the supported PIDs for the following 31 values as a four-byte bit map. There appear to only be definitions for commands up to 0x87 in the specification I found. In the next part we'll look at how we can start to use this class to get some real data from the car.
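Before moving on, the bit-map decode in getPIDSupport is worth seeing in isolation. Here is a small Python sketch (not the Java above, which walks the bits LSB-first) of decoding a "supported PIDs" reply using the SAE J1979 convention, where the most significant bit of the first data byte corresponds to the first PID in the range; the example reply bytes are typical but invented:

```python
def decode_pid_bitmap(data, base=0x00):
    """Decode an OBD-II 'supported PIDs' bit map (MSB-first, per SAE J1979).

    data: the four data bytes returned for PID 0x00/0x20/... (base).
    Returns the list of supported PID numbers.
    """
    supported = []
    for i, byte in enumerate(data):
        for bit in range(8):
            if byte & (0x80 >> bit):            # MSB of byte 0 is PID base+1
                supported.append(base + i * 8 + bit + 1)
    return supported

# Example: a plausible reply for mode 01, PID 00
print([hex(p) for p in decode_pid_bitmap([0xBE, 0x1F, 0xB8, 0x10])])
```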
https://blogs.oracle.com/speakjava/entry/the_raspberry_pi_javafx_in1
An npm package that allows you to use unique HTML IDs for components without any dependencies on other libraries (obviously, you need to use React in your project).

This module allows you to set unique id attributes on React HTML elements, mainly in order to connect labels to them, but also for other HTML features that require unique ids (#references).

To use the module, you first need to inject the extension into your component. You do this via the enableUniqueIds function (which is the only function exposed by this module). Then you can use this.nextUniqueId() to get a new identifier, this.lastUniqueId() to refer to that identifier again in the HTML, and this.getUniqueId('name') to get an identifier by name.

class MyComponent extends React.Component {
  constructor() {
    super()
    enableUniqueIds(this)
  }

  render() {
    // Use nextUniqueId to create a new ID, and lastUniqueId to refer to the same ID again
    return (
      <div className="form-group">
        <label htmlFor={this.nextUniqueId()}>Name</label>
        <input id={this.lastUniqueId()} />
      </div>
    )
  }
}

The problem with using a local counter in the render function is that the IDs will not be unique between different instances.

class BadComponent extends React.Component {
  render() {
    var idCounter = 0; // Do not do this!!
    return (
      <div className="form-group">
        <label htmlFor={'id-' + idCounter}>Name</label>
        <input id={'id-' + idCounter} />
      </div>
    )
  }
}

If you put two instances of BadComponent in your React application, they will both share the same IDs! This package ensures you will get unique IDs per instance of every component.

If you render your UI on the server in its own process per request, you do not need to do anything extra, because the result of rendering will be identical across the server and the client. However, if you render multiple different React components on the server using renderToString, you need to reset the unique ID counter between each request so the same IDs are generated for the client; you do this using the resetUniqueIds() API.
This only works on first site load. If you request dynamic DOM from the server that is placed on the page and then mounted, this library will be insufficient to solve your problem. There is no simple way of guaranteeing that the ID counter is consistent between the server and the client.

enableUniqueIds(this)

This should be called from the constructor of the component that needs unique IDs, passing this as the parameter. After calling this you can use nextUniqueId, lastUniqueId and getUniqueId by invoking them on this. This call either adds a componentWillUpdate handler to the current component, or wraps the existing one. The package uses componentWillUpdate to reset the ID counter every time the component re-renders.

class MyComponent extends React.Component {
  constructor() {
    super()
    // Enable Unique ID support for this class
    enableUniqueIds(this)
  }

  render() {
    // ...
  }
}

nextUniqueId()

This returns a new unique id for the component. Repeatedly calling this function will result in new IDs. IDs are consistent between renders, as long as the function is always called in the same order. This means there are no DOM updates necessary if you do not remove calls to the function between renders.

render() {
  var manyFields = ['firstName', 'lastName', 'address', 'postalCode', 'city']
  // Every label-input pair will have a unique ID
  return (
    <form>
      {manyFields.map((field, index) => {
        return (
          <div className="form-group" key={index}>
            <label htmlFor={this.nextUniqueId()}>Name</label>
            <input id={this.lastUniqueId()} />
          </div>
        )
      })}
    </form>
  )
}

lastUniqueId()

Returns the same ID that was returned by the last call to nextUniqueId. This is almost always necessary, as you need to refer to the ID twice: once for the label and once for the input.

getUniqueId(name)

This always returns the same unique identifier, given the same name. This is useful if the order of components makes it impossible or confusing to use lastUniqueId to refer to a component.
render() {
  return (
    <div className="form-group">
      <label htmlFor={this.getUniqueId('input')}>Name</label>
      <div className="help-block" id={this.getUniqueId('help')}>
        This should be your full name.
      </div>
      <input id={this.getUniqueId('input')} />
    </div>
  )
}

You can of course also store the result of nextUniqueId in a variable to achieve the same result.

resetUniqueIds()

This resets the per-component counter of unique IDs. Call this before using renderToString on the server. This should never be called on the client.

// Call before renderToString to reset the global ID counter
function renderAppServerSide(appProps) {
  ReactHtmlId.resetUniqueIds()
  ReactDOM.renderToString(<App props={...} />);
}

This package is brought to you by Hampus Nilsson.
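The core idea (a shared instance counter plus a per-instance call counter that resets on each render) fits in a few lines of plain JavaScript. The sketch below is illustrative only; it is not the package's internal implementation, and the names are made up:

```javascript
// Shared counter so every instance gets its own prefix.
let instanceCounter = 0;

// Attach id helpers to a component-like object.
function enableIds(component) {
  const prefix = "uid-" + (instanceCounter++) + "-";
  let counter = 0;
  let last = null;
  const named = {};
  component.nextUniqueId = () => (last = prefix + (counter++));
  component.lastUniqueId = () => last;
  component.getUniqueId = (name) =>
    named[name] || (named[name] = prefix + "n-" + name);
  // Reset before each re-render, as the componentWillUpdate wrapper does.
  component.resetIds = () => { counter = 0; last = null; };
}

const a = {}, b = {};
enableIds(a);
enableIds(b);
console.log(a.nextUniqueId()); // distinct prefix per instance
console.log(b.nextUniqueId()); // so ids never collide across instances
```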
https://www.npmjs.com/package/react-html-id
now im getting all zero's...

This is a discussion on sequential access files? within the C++ Programming forums, part of the General Programming Boards category.

Anytime you have an issue post the code. It's the only way I can help you. Cause I don't know what you actually changed.
Woop?

Code:

#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    short number = 0;
    short evenNumber = 0;

    ofstream outfile;
    ifstream infile;

    outfile.open("evennumList.txt", ios::app);
    infile.open("numberList.txt", ios::in);

    if (infile.is_open() == false)
        cout << "Error" << endl;
    else
        while (infile >> number)
        {
            infile >> number;
            if (number % 2 == 0)
            {
                outfile << evenNumber << endl;
            }
        }

    infile.close();
    outfile.close();
    return 0;
}

Last edited by swansea; 04-29-2010 at 05:17 PM.

Need to change evenNumber to just number, since that is the value you are checking for even. That is why it is always 0, because the value of evenNumber never changes.
Woop?

oh yeah duh thanks
http://cboard.cprogramming.com/cplusplus-programming/126356-sequential-access-files-2.html
Hello, I have spotted a problem and don't know why it happens, so maybe you can tell me.

I'm trying to set up one Arduino which will receive data from all the other slave Arduinos and, based on that data, will do some more tasks. So I decided to try to communicate through I2C, because I have read that I can connect a lot of slaves.

MASTER CODE

#include <Wire.h>

// MAX INT 32767
int x = 1;
int numberOfSlaves = 4;

void setup() {
  Wire.begin();        // join i2c bus (address optional for master)
  Serial.begin(9600);  // start serial for output
}

int output = 0;

void loop() {
  if (x == numberOfSlaves + 1) {
    x = 1;
  }
  Serial.print("Reading from:");
  Serial.println(x);

  Wire.requestFrom(x, 1);  // the first byte
  while (Wire.available()) {
    unsigned char received = Wire.read();
    output = received;
  }

  for (int i = 0; i < 3; i++)  // next 3 bytes
  {
    Wire.requestFrom(x, 1);
    while (Wire.available()) {
      unsigned char received = Wire.read();
      output |= (received << 8);
    }
  }

  Serial.print(output);
  Serial.println(".");
  delay(1000);
  x++;
}

Slave 1 CODE (the rest are the same; they just have other addresses and send other data so I can check that they work correctly)
#include <Wire.h>

// MAX INT 32767

void setup() {
  Wire.begin(1);              // join i2c bus with address #1
  Wire.onRequest(requestInt); // register event
}

int byteSending = 1;
int toTransfer = 1;  // data that will be transferred as int
int Shift = toTransfer;
int mask = 0xFF;
unsigned char toSend = 0;

void loop() {
  //toTransfer = random(10000, 32767);
  delay(10);
}

// function that executes whenever data is requested by master
// this function is registered as an event, see setup()
void requestInt() {
  if (byteSending == 1) // send packet 1
  {
    toSend = Shift & mask;
    Shift = Shift >> 8;
    Wire.write(toSend);
    byteSending = 2;
  }
  else if (byteSending == 2) // send packet 2
  {
    toSend = Shift & mask;
    Shift = Shift >> 8;
    Wire.write(toSend);
    byteSending = 3;
  }
  else if (byteSending == 3) // send packet 3
  {
    toSend = Shift & mask;
    Shift = Shift >> 8;
    Wire.write(toSend);
    byteSending = 4;
  }
  else if (byteSending == 4) // send packet 4
  {
    toSend = Shift & mask;
    Shift = Shift >> 8;
    Wire.write(toSend);
    byteSending = 1; // initialization for next turn
    Shift = toTransfer;
    mask = 0xFF;
    toSend = 0;
  }
}

I also attached a circuit. Every GND is attached to each other. The whole setup works fine with 3 slave Arduinos connected; every Arduino sends its unique data. After the 4th it always sends "0". Do you know why?
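Independent of the wiring question, reassembling a multi-byte value from single-byte I2C reads is a common source of stray values, so here is a quick Python model (no Arduino involved) of sending an int as four bytes LSB-first and rebuilding it. The shift has to grow by 8 bits per byte; a constant << 8, as in the master's loop, only gives the right answer while the upper bytes are zero:

```python
def pack_le(value, nbytes=4):
    """Split an integer into nbytes little-endian bytes (what the slave sends)."""
    return [(value >> (8 * i)) & 0xFF for i in range(nbytes)]

def unpack_fixed_shift(data):
    """What the master code does: every byte after the first is shifted by 8."""
    out = data[0]
    for b in data[1:]:
        out |= b << 8
    return out

def unpack_le(data):
    """Correct reassembly: the shift grows by 8 bits per byte."""
    out = 0
    for i, b in enumerate(data):
        out |= b << (8 * i)
    return out

value = 31500
data = pack_le(value)
print(unpack_le(data))           # round-trips correctly
print(unpack_fixed_shift(data))  # happens to match only because the high bytes are zero
```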
https://forum.arduino.cc/t/i2c-more-than-3-slaves/481848
Hi, I'll try to explain shortly. Console application from scratch, with only the following code:

namespace UrhoSharpTest {
    class Program {
        static void Main(string[] args) {
            new Urho.SimpleApplication(null).Run();
        }
    }
}

The Urho surface window appears OK and exiting with the "X" close button works, but closing with the Escape key launches the following error: "System.InvalidOperationException: 'The application is not configured yet'"

Working with Windows 10 x64, Visual Studio Community 2017, NuGet UrhoSharp 1.9.67 (all updated). Thank you.

Answers

Why do you want to test that on a console application? You can check the sample here:

Hi LandLu, I was trying to start coding an app from minimal source code, no WinForms nor WPF etc., letting the engine initialize only the necessary resources to run an Urho app.

Note: With the code shown above, when I call SimpleApplication.Exit() the app shows the same error message.

I thought the SimpleApplication class was added for Xamarin Workbooks. There are a number of workbook samples. You could always start with a working sample and strip it down bare, and the workbooks run nicely on Windows. Plus those samples contain a lot of useful details that I haven't found anywhere else regarding development with Urho3D. And most start out with a blank Simple Application, as shown:

Well, if SimpleApplication is the problem, then look at this:

Program.cs:
MinimalConsoleAppTest.cs

Same behaviour: if I click on the "X" button the window closes OK; when I press Escape and the event is raised, the same error: "System.InvalidOperationException: 'The application is not configured yet'"

The question is: am I doing something wrong? Perhaps I'm missing some kind of project configuration? This is only a test project; in my real project, although it is working well, the same unwanted error is present when calling Exit(). Thank you.

Maybe try x86? Or sending in options in the initialization.

new ApplicationOptions("Data")

The workbook samples are easy to run.
For me this is working well:

Good example, but same error on exit for me. Yes, I know that catching the exception is a workaround (I am already using it in my real project), but I think this is not the best option to understand what I am doing wrong. Remembering, the original question is: what is wrong in my simple example that raises the error on exit? Changing the project to x86 and adding a Data folder assigned in ApplicationOptions did not work for me; I had already tested both in my real project before asking about the error here. Thank you all for your suggestions, but none of them is working for me. (Continue the path of the searcher, coder's torture.)

@zinc Another thing you can try is to add an older version of the UrhoSharp NuGet package to your project instead of the current v1.9.67.

OK, I'll try it as soon as possible, thank you.

Well, downgrading to 1.8.93 makes the error disappear. Can this be considered a release bug? Should the @laheller answer be marked as solved, or left unresolved to notify the developers? Thank you all for your suggestions. Best regards.

> Well, downgrading to 1.8.93 makes the error disappear.
> Can this be considered a release bug?
> Should the @laheller answer be marked as solved, or left unresolved to notify the developers?
> Thank you all for your suggestions.
>
> Best regards.

@zinc If it solved your problem then just mark it as solved. Maybe this is an issue with the latest release of UrhoSharp.
https://forums.xamarin.com/discussion/comment/360700/
CC-MAIN-2019-18
refinedweb
557
66.33
Working with Tables in the iOS Designer

last updated: 2017-03

In the previous sections we explored developing using Tables. In this, the fifth and final section, we will aggregate what we have learned so far and create a basic Chore List application using a Storyboard.

Storyboarding UITableView

Storyboards are a WYSIWYG way to create iOS applications, and are supported inside Visual Studio on Mac and Windows. For more information on Storyboards, refer to the Introduction To Storyboards document. Storyboards also allow you to edit the cell layouts in the table, which simplifies developing with tables and allows cells to be designed right on the design surface.

Create a new solution in Visual Studio using New Project… > Single View App (C#), and call it StoryboardTables. The solution will open with some C# files and a Main.storyboard file already created. Double-click the Main.storyboard file to open it in the iOS Designer.

Modifying the Storyboard

The storyboard will be edited in three steps:
- First, lay out the required view controllers and set their properties.
- Second, create your UI by dragging and dropping objects onto the views.

To lay out the view controllers:
- Select the bar at the bottom of the existing View Controller and delete it.
- Drag a Navigation Controller and a Table View Controller onto the Storyboard from the Toolbox.
- Create a segue from the Root View Controller to the second Table View Controller that was just added. To create the segue, Control+drag from the Detail cell to the newly added UITableViewController. Choose the option Show under Segue Selection.
- Select the new segue you created and give it an identifier to reference this segue in code. Click on the segue and enter TaskSegue for the Identifier in the Properties Pad, like this:

Next, configure the two Table Views by selecting them and using the Properties Pad.
Make sure to select View and not View Controller – you can use the Document Outline to help with selection. Change the second Table View Controller's Class to TaskDetailViewController; this creates a TaskDetailViewController.cs file in the Solution Pad. Enter the StoryboardID as detail, as illustrated in the example below. This will be used later to load this view in C# code:

The storyboard design surface should now look like this (the Root View Controller's navigation item title has been changed to "Chore Board"):

Create the UI

Now that the views and segues are configured, the user interface elements need to be added.

Root View Controller

First, select the prototype cell in the Master View Controller and set the Identifier as taskcell, as illustrated below. This will be used later in code to retrieve an instance of this UITableViewCell:

Next, you'll need to create a button that will add new tasks, as illustrated below. Do the following:
- Drag a Bar Button Item from the Toolbox to the right hand side of the navigation bar.
- In the Properties Pad, under Bar Button Item select Identifier: Add (to make it a + plus button).
- Give it a Name so that it can be identified in code at a later stage. Note that you will need to give the Root View Controller a Class Name (for example ItemViewController) to allow you to set the Bar Button Item's name.

Select the top section and under Properties > Table View Section change Rows to 3, as illustrated below. Configure each of these cells as needed. In the second section, set Rows to 1 and grab the bottom resize handle of the cell to make it taller. For this cell:
- Set the Identifier: to a unique value (e.g. "save").
- Set the Background: Clear Color.
- Drag two buttons onto the cell and set their titles appropriately (i.e. Save and Delete), as illustrated below:

At this point you may also want to set constraints on your cells and controls to ensure an adaptive layout. The remaining work is done in Visual Studio on Mac or Windows, using C#. Note that the property names used in code reflect those set in the walkthrough above.
First we want to create a Chores class, which will provide a way to get and set the value of the ID, Name, Notes and Done Boolean, so that we can use those values throughout the application. In your Chores class add the following code:

public class Chores {
    public int Id { get; set; }
    public string Name { get; set; }
    public string Notes { get; set; }
    public bool Done { get; set; }
}

Next, create a RootTableSource class that inherits from UITableViewSource. The difference between this and a non-Storyboard table view is that the GetView method doesn't need to instantiate any cells – the DequeueReusableCell method will always return an instance of the prototype cell defined in the Storyboard:

public class RootTableSource : UITableViewSource
{
    Chores[] tableItems;
    string cellIdentifier = "taskcell"; // set in the Storyboard

    public RootTableSource(Chores[] items)
    {
        tableItems = items;
    }

    public Chores GetItem(int id)
    {
        return tableItems[id];
    }
}

To use the RootTableSource class, create a new collection in the ItemViewController's constructor:

chores = new List<Chores> {
    new Chores {Name="Groceries", Notes="Buy bread, cheese, apples", Done=false},
    new Chores {Name="Devices", Notes="Buy Nexus, Galaxy, Droid", Done=false}
};

In ViewWillAppear, pass the collection to the source and assign it to the table view:

public override void ViewWillAppear(bool animated)
{
    base.ViewWillAppear(animated);
    TableView.Source = new RootTableSource(chores.ToArray());
}

If you run the app now, the main screen will load and display a list of two tasks. When a task is touched, the segue defined by the storyboard will cause the detail screen to appear, but it will not display any data at the moment. To 'send a parameter' in a segue, override the PrepareForSegue method and set properties on the DestinationViewController (the TaskDetailViewController in this example).
The Destination View Controller class will have been instantiated but is not yet displayed to the user – this means you can set properties on the class but not modify any UI controls. On the TaskDetailViewController, store the references that are passed in:

public void SetTask (ItemViewController d, Chores task)
{
    Delegate = d;
    currentTask = task;
}

The segue will now open the detail screen and display the selected task information. Unfortunately there is no implementation for the Save and Delete buttons. Before implementing the buttons, add these methods to ItemViewController.cs to update the underlying data and close the detail screen:

public void SaveTask(Chores chore)
{
    var oldTask = chores.Find(t => t.Id == chore.Id);
    NavigationController.PopViewController(true);
}

public void DeleteTask(Chores chore)
{
    var oldTask = chores.Find(t => t.Id == chore.Id);
    chores.Remove(oldTask);
    NavigationController.PopViewController(true);
}

Next, you'll need to add the buttons' TouchUpInside event handlers to the ViewDidLoad method of TaskDetailViewController.cs. The Delegate property references the ItemViewController. In ItemViewController.cs, add a method that creates new tasks and opens the detail view. To instantiate a view from a storyboard, use the InstantiateViewController method with the Identifier for that view; in this example that will be 'detail'. Finally, wire up the button in the navigation bar in ItemViewController.cs.
https://developer.xamarin.com/guides/ios/user_interface/controls/tables/creating-tables-in-a-storyboard/
CC-MAIN-2017-30
refinedweb
1,110
58.82
Hybrid View Fighting Convention Fighting Convention Hi, OK.. so I want to break with tradition using JSBuilder 1.1. I used the supplied SVN resource builder to make a nice yui-ext.css file with all the kids in the family. I want to locate the result in my assets/css/yui-ext directory for easy inclusion in my project. No problem there. The build lets me know my images are to be found in ../images/ however. I would prefer to locate my images in yui-ext/ along with yui-ext.css though... how can I change the build project file or build GUI parameters to do this? Code: /assets/css/yui-ext/images /assets/css/yui-ext/yui-ext.css Code: name='images\grid\done.gif' path='images\grid' At the moment, you're somewhat constrained to working within the parameters of your physical file layout. That's actually the whole reason that the scripts and resources are currently built with two separate .jsb files. This is already on our list of things to improve in 1.2. Our goal is to make it more like an IDE, where you can define your project structure however you want and simply map files into it, regardless of where they are located on disk. Until then, there's not an easy way to do what you want unless you're willing to go the Ant route, in which case you can simply have the build script move your output files around however you want after JSB has finished. Similar Threads Closure coding convention suggestionBy papasi in forum Ext 2.x: Help & DiscussionReplies: 1Last Post: 6 Mar 2007, 6:15 PM new namespace conventionBy sjivan in forum Community DiscussionReplies: 8Last Post: 29 Jan 2007, 6:44 PM
http://www.sencha.com/forum/showthread.php?1025-Fighting-Convention&mode=hybrid
CC-MAIN-2014-41
refinedweb
295
73.58
CodePlex Project Hosting for Open Source Software

I was getting Parser Error Message: Unknown server tag 'blog:PostPager'. Per this answer, I updated my Web.config for SQL Server, new install, from:

<add namespace="App_Code.Controls" tagPrefix="blog"/>

to:

<add assembly="BlogEngine.Net" namespace="App_Code.Controls" tagPrefix="blog"/>

Now I'm getting Parser Error Message: Could not load file or assembly 'BlogEngine.Net' or one of its dependencies. The system cannot find the file specified. Can someone point me in the right direction?

Are you trying to compile the WAP version? The thread you are referring to is about the web application project (WAP), not the regular source that you download here.

I'm trying to locate a WAP version that will run without error. I've now switched to the forks_rtur_bewap20_df61d5ef4eb6 version. I'm using SQLServerWeb.Config and am now getting the following error:

System.Reflection.ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
Line 70: void Application_Start(object sender, EventArgs e)
Line 71: {
Line 72: Utils.LoadExtensions();
Line 73: }

I've downloaded the version and am getting the original Parser Error Message: Unknown server tag 'blog:PostPager' message.

I just downloaded, built and deployed locally this latest version with the XML provider. If that works for you and only the SQL Server config is for some reason causing issues, it might be simpler to modify the regular web.config, setting the default providers to the database and changing the connection string.

I have tried many times to amend the Web.config to work for SQL Server but there are too many differences when compared to the only other example I have, which is for a 2.5 website version. In short, we need a working Web.config set up for SQL Server for the WAP version... Right now, getting: Parser Error Message: Invalid column name 'BlogID'. <roleManager defaultProvider="DbRole">

Here is what I did:
1. Removed Web.config
2. Replaced it with /setup/SQLServer/SQLServerWeb.Config, moving it to root and renaming it to Web.config
3. Added a be26 database to local SQL Express and ran /setup/SQLServer/MSSQLSetup2.6.0.0.sql to populate the database
4. Modified the connection string to: <add name="BlogEngine" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=be26;Integrated Security=SSPI" providerName="System.Data.SqlClient"/>
5. Modified the "controls" section to look like this: <controls> <add assembly="BlogEngine.Web" namespace="App_Code.Controls" tagPrefix="blog"/> </controls>

In other words, the only change needed to the existing code is #5, which is pushed to the latest WAP.

The Web.config in this drop is still set to XML; all the providers are set to XML. I am using SQL Server (not Express) so I made the relevant changes and got it to work. Previously, I had modified the controls section the same way you did but still got the Unknown server tag 'blog:PostPager' error message. So there has to be something else at play? Thanks for your help :-)

As it should be; XML is the default and DB optional, nothing changes here. I'm not aware of anything else involved besides the steps I outlined above; at least I haven't changed a single line except the connection string and controls section.
https://blogengine.codeplex.com/discussions/392694
CC-MAIN-2017-30
refinedweb
563
60.31
Danga::Socket::AnyEvent - Danga::Socket reimplemented in terms of AnyEvent

# This will clobber the Danga::Socket namespace
# with the new implementation.
use Danga::Socket::AnyEvent;
# Then just use Danga::Socket as normal.

This is an alternative implementation of Danga::Socket that is implemented in terms of AnyEvent, an abstraction layer for event loops. This allows Danga::Socket applications to run in any event loop supported by AnyEvent, and allows Danga::Socket applications to make use of AnyEvent-based libraries.

Loading this module will install a workalike set of functions into the Danga::Socket package. It must therefore be loaded before anything loads the real Danga::Socket. If you try to load this module after Danga::Socket has been loaded then it will die.

Although this module aims to be a faithful recreation of the features and interface of Danga::Socket, there are some known differences:

The LoopTimeout feature will only work if the caller runs the event loop via Danga::Socket->EventLoop; if a caller runs the AnyEvent event loop directly, or if some other library runs it, then the timeout will not take effect.

The PostLoopCallback feature behaves in a slightly different way than in the stock Danga::Socket. It's currently implemented via an AnyEvent idle watcher that runs whenever the event loop goes idle after running a Danga::Socket event. This means that it will probably run at different times than it would have in Danga::Socket's own event loops.

The HaveEpoll method will always return true, regardless of what backend is actually implementing the event loop. Make sure to use AnyEvent's EV backend if you would like to use Epoll/KQueue/etc rather than other, less efficient mechanisms.

CLASS->Reset()
Reset all state.

CLASS->HaveEpoll()
Returns a true value if this class will use IO::Epoll for async IO.

CLASS->WatchedSockets()
Returns the number of file descriptors for which we have watchers installed.
Martin Atkins <mart@degeneration.co.uk> Based on Danga::Socket by Brad Fitzpatrick <brad@danga.com> and others. License is granted to use and distribute this module under the same terms as Perl itself.
http://search.cpan.org/~mart/Danga-Socket-AnyEvent-0.02/lib/Danga/Socket/AnyEvent.pm
CC-MAIN-2017-43
refinedweb
353
55.34
Subject: Re: [boost] Are there any some utilities that helps to show where a exception is from? From: Emil Dotchevski (emil_at_[hidden]) Date: 2008-10-21 23:56:01 On Tue, Oct 21, 2008 at 4:05 PM, Peng Yu <pengyu.ut_at_[hidden]> wrote: > Hi, > > If there is an error happens, I want to throw an exception, with the > error string of the following format (similar to what assert would > give). > > main.cc:12: int main(): some message With Boost 1.37 (soon to be released), a new macro is introduced called BOOST_THROW_EXCEPTION. You use it like this: #include "boost/throw_exception.hpp" struct my_exception: std::exception { }; BOOST_THROW_EXCEPTION(my_exception()); It automatically records the throw location in the exception object. At the catch site, you can use boost::diagnostic_information to get a string that includes the throw location: #include "boost/exception/diagnostic_information.hpp" catch( my_exception & e ) { std::cerr << boost::diagnostic_information(e); } HTH, Emil Dotchevski Reverge Studios, Inc. Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
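BOOST_THROW_EXCEPTION does this bookkeeping for you, but as a rough illustration of the underlying idea (standard library only, not Boost's actual implementation), a macro can capture the throw site with __FILE__, __LINE__ and __func__ and produce the assert-style "file:line: function(): message" format the original poster asked for. The macro and function names here are made up for the sketch:

```cpp
#include <stdexcept>
#include <string>

// Format the throw site as "file:line: function(): message". This only
// sketches the idea; BOOST_THROW_EXCEPTION stores the location on the
// exception object rather than baking it into the message string.
#define THROW_WITH_LOCATION(msg)                                  \
    throw std::runtime_error(std::string(__FILE__) + ":" +        \
                             std::to_string(__LINE__) + ": " +    \
                             __func__ + "(): " + (msg))

// Trigger the macro and return the resulting what() string so the
// formatting can be inspected.
std::string failing_call() {
    try {
        THROW_WITH_LOCATION("some message");
    } catch (const std::exception& e) {
        return e.what();  // e.g. "main.cc:15: failing_call(): some message"
    }
    return "";
}
```

At the catch site of a real program you would print this string (or, with Boost, call boost::diagnostic_information) instead of returning it.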
https://lists.boost.org/Archives/boost/2008/10/143731.php
CC-MAIN-2019-26
refinedweb
177
51.34
Stylecloud is a Python package that helps in creating beautiful word clouds in just a few lines of code. You can use it to create unique word clouds from your textual data in just two lines of code. In this article, I will take you through a tutorial on Stylecloud in Python.

What is Stylecloud in Python?

Stylecloud is a Python package that helps you create word clouds from your textual data in just two lines of code. A word cloud is a data visualization tool that is used to understand the context of a text. It renders the most frequently occurring words in a dataset as the largest words in the visualization, and the least frequently occurring words as the smallest. You can go through a complete tutorial on visualizing word clouds using Python from here. In the section below, I will take you through how to create word clouds in just 2 lines of code by using the Stylecloud package in Python.

Stylecloud in Python (Tutorial)

I hope you now have understood what word clouds are and why we visualize them while working on a data science task. If you have never used this Python package before, you can easily install it by using the pip command: pip install stylecloud. Now let's see how to use the Stylecloud package in Python to create word clouds in just two lines of code:

import stylecloud as sc
sc.gen_stylecloud(
    file_path = 'challenge.txt',
    output_name = 'ch.png'
)

After running the above code, you will have an image saved under the name that you gave in the "output_name" parameter. Below is the output image that I got after running the above code on the dataset that I am using in this tutorial.

Summary

So this is how you can create word clouds by using the Stylecloud package in Python. You can try using it on more datasets to deepen your understanding of word clouds.
I hope you liked this article on a tutorial on Stylecloud in Python. Feel free to ask your valuable questions in the comments section below.
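For intuition about what a word-cloud tool such as Stylecloud is doing under the hood, here is a small standard-library sketch: it counts word frequencies and maps them to font sizes, so the most frequent words come out largest. The function name and the size range are made up for illustration; this is not part of Stylecloud's API.

```python
from collections import Counter

def word_weights(text, min_size=10, max_size=100):
    """Map each word to an illustrative font size proportional to how
    often it occurs, which is the core idea a word cloud visualizes."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter(w for w in words if w)
    peak = max(counts.values())
    return {
        word: int(min_size + (max_size - min_size) * n / peak)
        for word, n in counts.items()
    }

weights = word_weights("data science is fun and data is everywhere")
# "data" and "is" appear twice, so they map to the largest size (100)
```

Stylecloud adds the layout, icon masks and color palettes on top of this basic frequency-to-size mapping.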
https://thecleverprogrammer.com/2021/04/15/stylecloud-in-python-tutorial/
CC-MAIN-2021-43
refinedweb
376
69.52
From: Cromwell Enage (sponage_at_[hidden]) Date: 2005-11-07 08:38:22 > --- Cromwell Enage wrote: > > What I'll do is: > > 1) Create an internal metafunction in the > > double_::aux namespace that does what > > fractional_part does now. > > 2) Change fractional_part so that it returns a > > double. > > 3) Implement numerator and denominator > > specializations that use the metafunction in 1). > > > > Then, with fractional_power as a template nested > > within power_impl, we can return: > > > > times< > > integral_power<z,integral_part<a> > > > , fractional_power<z, fractional_part<a> > > > > Done. --- Peder Holt wrote: > The series: > z^a == Sum[(Log[z]^k/k!) a^k, {k, 0, Infinity}] > requires 27 recursions, but the code is very simple: The series looks exactly like e^(a log z), which is what we have right now. I tried to make BOOST_MPL_MATH_DOUBLE work for GCC in strict mode over the weekend. In particular, I tried to take advantage of the feature that allows double to be implicitly cast to long long in an integral constant expression. (For those of you who are just joining us, GCC is currently treating int(...) and static_cast<...>(...) as runtime functions. Not even BOOST_MPL_AUX_STATIC_CAST is working for me.) However, GCC 3.4 and above no longer support double& as a non-type template parameter, and integral_c<long long, double_value> also fails. So, I'm at a loss here. If any GCC experts out there notice anything that I've missed--other than relaxed mode--please let me know. The syntax for BOOST_MPL_MATH_DOUBLE is much more elegant than string_c_to_double, and we'd like for everyone to be able to use it.
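As a runtime sanity check of the series quoted above (ordinary C++ here, not the MPL compile-time version), summing the Taylor expansion of exp(a log z) does converge to z^a, and 27 terms are far more than double precision needs for small exponents:

```cpp
#include <cmath>

// Evaluate z^a as Sum[(Log[z]^k / k!) * a^k, {k, 0, terms-1}],
// i.e. the Taylor series of exp(a * log z). Each term is obtained
// from the previous one by multiplying by (a * log z) / k, which
// avoids computing factorials explicitly.
double series_pow(double z, double a, int terms = 27) {
    const double x = a * std::log(z);
    double term = 1.0;  // the k = 0 term
    double sum = 1.0;
    for (int k = 1; k < terms; ++k) {
        term *= x / k;
        sum += term;
    }
    return sum;
}
```

The 27-recursion figure mentioned in the thread corresponds to truncating this sum at k = 26; the compile-time version evaluates the same recurrence with template recursion instead of a loop.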
https://lists.boost.org/Archives/boost/2005/11/96456.php
CC-MAIN-2020-29
refinedweb
258
56.86
Preview your scene

Once you have built a new scene or downloaded a scene example, you can preview it locally.

Before you begin

Please make sure you first install the CLI tools by running the following command: npm install -g decentraland. See the Installation Guide for more detailed instructions.

Preview a scene

To preview a scene, run the following command in the scene's main folder: dcl start

Any dependencies that are missing are installed, and then the CLI opens the scene in a new browser tab automatically. It creates a local web server on your system and points the web browser tab to this local address. Every time you make changes to the scene, the preview reloads and updates automatically, so there's no need to run the command again.

Note: Some scenes depend on an external server to store a shared state for all players in the scene. When previewing one of these scenes, you'll likely have to also run the server locally on another port. Check the scene's readme for instructions on how to launch the server as well as the scene.

Upload a scene to Decentraland

Once you're happy with your scene, you can upload it and publish it to Decentraland; see publishing for instructions on how to do that.

Parameters of the preview command

You can add the following flags to the dcl start command to change its behavior:
- --no-browser to prevent the preview from opening a new browser tab.
- --port to assign a specific port to run the scene. Otherwise it will use whatever port is available.
- --no-debug to disable the debug panel that shows scene and performance stats.
- --w or --no-watch to not watch for filesystem changes and avoid hot-reload.
- --c or --ci to run the parcel previewer on a remote Unix server.

Note: To preview old scenes that were built for older versions of the SDK, you must set the corresponding version of decentraland-ecs in your project's package.json file.
Preview scene size

The scene size shown in the preview is based on the scene's configuration; you set this when building the scene using the CLI. By default, the scene occupies a single parcel (16 x 16 meters). If you're building a scene to be uploaded to several adjacent parcels, you can edit the scene.json file to reflect this, listing multiple parcels in the "parcels" field. Placing any entities outside the bounds of the listed parcels will display them in red.

"scene": {
  "parcels": [
    "0,0",
    "0,1",
    "1,0",
    "1,1"
  ],
  "base": "0,0"
},

Tip: While running the preview, the parcel coordinates don't need to match those that your scene will really use, as long as they're adjacent and are arranged into the same shape. You will have to replace these with the actual coordinates later when you deploy the scene.

Debug a scene

Running a preview provides some useful debugging information and tools to help you understand how the scene is rendered. The preview mode provides indicators that show parcel boundaries and the orientation of the scene. If the scene can't be compiled, you'll just see the grid on the ground, with nothing rendered on it. If this occurs, there are several places where you can look for error messages to help you understand what went wrong:
- Check your code editor to make sure that it didn't mark any syntax or logic errors.
- Check the output of the command line where you ran dcl start.
- Check the JavaScript console in the browser for any other error messages. For example, when using Chrome you access this through View > Developer > JavaScript console.
- If you're running a preview of a multiplayer scene that runs together with a local server, check the output of the command line window where you run the local server.

If an entity is located or extends beyond the limits of the scene, it will be displayed in red to indicate this, with a red bounding box to mark its boundaries. Nothing in your scene can extend beyond the scene limits.
This won’t stop the scene from being rendered locally, but it will stop the offending entities form being rendered in Decentraland. Use the console Output messages to console (using log()). You can then view these messages as they are generated by opening the JavaScript console of your browser. For example, when using Chrome you access this through View > Developer > JavaScript console. You can also add debugger commands or use the sources tab in the developer tools menu to add breakpoints and pause execution while you interact with the scene in real time. Once you deploy the scene, you won’t be able to see the messages printed to console when you visit the scene in-world. If you need to check these messages on the deployed scene, you can turn the scene’s console messages back on adding the following parameter to the URL: DEBUG_SCENE_LOG. View scene stats The lower-left corner of the preview informs you of the FPS (Frames Per Second) with which your scene is running. Your scene should be able to run above 25 FPS most of the time. Click the P key to open the Panel. This panel displays the following information about the scene, and is updated in real time as things change: - Processed Messages - Pending on Queue - Scene location (preview vs deployed) - Poly Count - Textures count - Materials count - Entities count - Meshes count - Bodies count - Components count The processed messages and message queue refer to the messages sent by your scene’s code to the engine. These are useful to know if your scene is running more operations than the engine can support. If many messages get queued up, that’s usually a bad sign. The other numbers in the panel refer to the usage of resources, in relation to the scene limitations. Keep in mind that the maximum allowed number for these values is proportional to the amount of parcels in the scene. 
If your scene tries to render an entity that exceeds these values, for example if it has too many triangles, it won't be rendered in-world once deployed.

Note: Keeping this panel open can negatively impact the frame rate and performance of your scene, so we recommend closing it while not in use.

Run code only in preview

You can detect if a scene is running as a preview or is already deployed in production, so that the same code behaves differently depending on the case. You can use this to add debugging logic to your code without the risk of forgetting to remove it and having it show in production. To use this function, import the @decentraland/EnvironmentAPI library.

import { isPreviewMode } from '@decentraland/EnvironmentAPI'

executeTask(async () => {
  const preview: boolean = await isPreviewMode()
  if (preview) {
    log("Running in preview")
  }
})

Note: isPreviewMode() needs to be run as an async function, since the response may delay in returning data.
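The red-highlighting rule described earlier (entities outside the listed parcels are flagged) can be approximated with a small helper. This is an illustrative function, not part of the Decentraland SDK; it assumes the 16 x 16 m parcel size mentioned above and positions measured in meters relative to the base parcel:

```javascript
// Given the "parcels" array and "base" string from scene.json, report
// whether an (x, z) position in meters falls inside the scene's bounds.
// Each parcel is 16 x 16 m; coordinates are relative to the base parcel.
function isInsideScene(parcels, base, x, z) {
  const [bx, bz] = base.split(",").map(Number);
  return parcels.some((parcel) => {
    const [px, pz] = parcel.split(",").map(Number);
    const minX = (px - bx) * 16;
    const minZ = (pz - bz) * 16;
    return x >= minX && x < minX + 16 && z >= minZ && z < minZ + 16;
  });
}

const parcels = ["0,0", "0,1", "1,0", "1,1"];
const inside = isInsideScene(parcels, "0,0", 20, 5);  // lands in parcel "1,0"
const outside = isInsideScene(parcels, "0,0", 40, 5); // beyond the 2x2 scene
```

An entity whose position fails this kind of check is the one the preview draws in red and that the deployed scene refuses to render.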
https://docs.decentraland.org/development-guide/preview-scene/
CC-MAIN-2021-17
refinedweb
1,275
56.29
Closed Bug 1118528 Opened 6 years ago Closed 6 years ago You Tube video stops playing consistently at 14 second mark Categories (Core :: Audio/Video, defect, P1) Tracking () mozilla37 People (Reporter: marcia, Assigned: mattwoodrow) References (Blocks 1 open bug) Details Attachments (3 files) Sheila found this bug and showed it to me, and I was able to reproduce it and grab some logging. Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:37.0) Gecko/20100101 Firefox/37.0 ID:20150106030201 CSet: 2a193b7f395c STR: 1. Load 2. About 9 seconds in it seems to stutter, and at the 14 second mark the video stops playing. This can be consistently reproduced on my Mac which is running 10.10.2 and is using an ethernet connection. Logging is attached. I can reproduce on MacOS X 10.8.5. I can reproduce on Aurora36.0a2 as well as Nightly37.0a1 on Windows7. However, I cannot reproduce on Firefox35.0b8 even if media.mediasource.enabled =true. status-firefox35: --- → unaffected status-firefox36: --- → affected status-firefox37: --- → affected tracking-firefox36: --- → ? tracking-firefox37: --- → ? OS: Mac OS X → All Pushlog: Triggered by: Bug 1097436 Priority: -- → P1 Assignee: nobody → matt.woodrow Looks like there's a few issues here at least. The gaps between audio segments are larger than our current fuzz factor (1ms), so we're getting lots of new decoders. Bumping the fuzz factor up to 10ms fixes the majority of the badness, though we should try to fix as much as possible before doing that. The first thing I've found is that we don't seem to have initialized the second audio decoder when we hit EOS on the first one. When initialization completes we attempt to notify that we now have data, but MediaSourceReader::MaybeNotifyHaveData doesn't account for any sort of fuzz factor so we still think we're waiting for audio. Anthony, we need to increase the fuzzy tolerance for this video to work properly. 
You mentioned that one of the demuxers computed a value for this using the timescale, where is the code for that? I haven't been able to find it. Failing that, we need to bump it up to at least 5011 for this video (it's currently 1000).
Flags: needinfo?(ajones)
Flags: needinfo?(ajones)
Attachment #8546195 - Flags: review?(ajones)
Comment on attachment 8545681 [details] [diff] [review]
Use a fuzzy check to determine if we have data
Review of attachment 8545681 [details] [diff] [review]:
-----------------------------------------------------------------
::: dom/media/mediasource/MediaSourceReader.h
@@ +128,5 @@
> nsresult SetCDMProxy(CDMProxy* aProxy);
> #endif
>
> private:
> + bool SwitchAudioReader(int64_t aTarget, int64_t aError = 0);
Add comment explaining what unit aError is. Might as well do aTarget too. Do this in all methods that take aError.
Attachment #8545681 - Flags: review?(cajbir.bugzilla) → review+
Status: NEW → RESOLVED
Closed: 6 years ago
Resolution: --- → FIXED
Target Milestone: --- → mozilla37
Comment on attachment 8545681 [details] [diff] [review]
Use a fuzzy check to determine if we have data
Landed on 36 beta with IRC approval from lsblakk. These changes are low risk, and fix a specific issue with YouTube.
Attachment #8545681 - Flags: approval-mozilla-beta?
Flags: qe-verify+
tracking-firefox36: ? → +
tracking-firefox37: ? → +
Reproduced with Nightly 2015-01-06 on Mac OS X 10.9.5 with the STR from comment 0. Verified as fixed with Fx 36 beta 1 build 2 (20150114125146) and DevEd 37.0a2 (20150118004006) on Windows 8 32-bit, Windows 7 64-bit, Mac OS X 10.9.5 and Ubuntu 14.04 32-bit.
Status: RESOLVED → VERIFIED
\o/ Thanks, Alexandra.
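The fix can be pictured with a small sketch (names simplified; units are microseconds, matching the aError parameter discussed in the review): a buffered range counts as containing the target if the target lies within the range widened by the fuzz value, so inter-segment gaps smaller than the fuzz no longer trigger a decoder switch or a spurious "waiting for data" state.

```cpp
#include <cstdint>

// Fuzzy containment check: treat [start, end) as if it extended aFuzz
// microseconds on each side. With the old 1 ms (1000 us) fuzz, the
// ~5 ms gaps between this video's audio segments fall outside every
// buffered range; with a 10 ms fuzz they are absorbed.
bool ContainsFuzzy(int64_t aStart, int64_t aEnd,
                   int64_t aTarget, int64_t aFuzz) {
  return aTarget >= aStart - aFuzz && aTarget < aEnd + aFuzz;
}
```

With a 5011 us gap after a range ending at 100000 us, a 1000 us fuzz rejects the next target while a 10000 us fuzz accepts it, which is exactly the behavior difference the bug describes.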
https://bugzilla.mozilla.org/show_bug.cgi?id=1118528
Agenda

See also: IRC log

No regrets, Dave Orchard to scribe

HST: Read them, they were helpful to someone who missed the call
NM: They look OK
approved
<DanC> (edits done. 2006/10/17 17:03:54)
VQ: No plans to combine into a single document
The links from the agenda are as follows: 1.29 2006/10/10 21:04:01
VQ: Anyone requesting changes unhappy about the current state?
<DanC> minutes are ok by me 2006/10/17 17:03:54 2006/10/10 21:02:11 2006/10/09 15:23:29
RESOLUTION: f2f minutes approved
DO: Request move AC Meeting and Backplane Meeting to the end
VQ: Agreed
NM: I took Tim's start and reworked it, at
DO: Haven't read this yet, sorry
<DanC> (no need to apologize; it was only available very recently.)
<Zakim> timbl, you wanted to say: A third point I was thinking of was: "Some specific feature of the WS stack is required which is not available in the HTTP protocol."
NM: To summarize: Starts with assumed motivations (no TCP support; no access to HTTP but Web Services wanted; Web Services facilities such as security necessary)
... points to history and status
... points to team comment bringing the TAG into the discussion
... Is this TimBL solo or TimBL on behalf of the TAG?
TBL: on behalf of
VQ: Yes
TBL: OK, I'm ready to send once DO has agreed
NM: Yes, 'hereby' should come out
VQ: I'll update the issues list this week
DO: So suppose we send this note, and some discussion happens -- then what?
... Any further activity the TAG would like to do?
DC: I expect we'll close issue 7 again, with the addition of some comment about WS-Transfer
DO: We're moving into a reactive mode. . .
... less proactive than I would prefer
<Noah> Specifically, "hereby" should come out because issue 7 is in fact already open. Suggest the sentence should read: The TAG has reopened issue whenToUseGet-7, in part to facilitate discussion of WS-Transfer.
<Noah> Also, we do have to edit in reference [4], which is to the 10 Oct Telcon minutes just approved, i.e.
DC: Sympathetic, but no immediate inspiration
VQ: DO, once you've read the draft, you can suggest changes
DO: Well, there's the note, and there's our future action
... Seems to me there's technology missing, which we should take the lead on, if the TAG agrees after talking about it
TBL: The HTTP arch. and the Web Services arch are to some extent distinct == a fork
... There is a large community already using WS, hence a need for something such as WS-Transfer naturally arises
... The Web Services arch is too big to force change on, it's too late to make it much more closely integrated with HTTP. . .
ER: DO, are you asking us to lead the discussion about this?
DO: Well, maybe this _is_ the discussion, i.e. "well, there's a fork, so it goes". . .
... That's not the conclusion I expected
NM: I'm comfortable for now to open the discussion, w/o trying to steer it from the start
... We have made some progress, e.g. in getting RESTful bits into the SOAP REC, even if there hasn't been much uptake
... This may be an opportunity to do something similar in this case -- that there's real value in aligning what WS is doing with what HTTP already does
... For example, managing a printer with WS would be richer if it had a URI as well as an EPR, so you can for instance use conneg
<Noah> I have prepared an edited copy of the WS-Transfer note. I believe it captures all changes suggested on this call. Will send out as soon as we decide no more changes for now.
VQ: Plan is to send a message on behalf of the TAG. Once DO reviews, TimBL can send right away if DO is happy, otherwise we can return to this next week
... But NM is not going to be here. . .
NM: I don't need another review, you can go ahead without me even if you change things
VQ: Only reason to wait is if DO is not happy
<Noah> Edited version is at:
DC: Haven't read latest version yet. . .
VQ: All please read after the call; unless DO says 'no', TBL can send to public list
<timbl> 0063 is OK by me.
DO: I will read by end of the day
VQ: DO, before you leave, any ideas for this?
DO: Versioning, perhaps?
VQ: Other suggestions?
... In Edinburgh we discussed Compact URIs (in June)
... In Cannes we reported on our activities in general, focussing on one or two issues
... In Montreal, ditto, focussing on URNsAndRegistries-50
<Zakim> Noah, you wanted to say that we need to start demonstrating value
VQ: Two concrete questions: 1) Should we open an issue on tag soup vs. XML? 2) Should we use our slot at the AC on this topic?
TBL: We could try to get it scheduled at a time which would allow more worldwide participation by 'phone
... 6am or 10pm, if you had to choose?
DC, NM: 10pm
NW: 6am
<EdR> 6am
<Noah> NM: Not much preference, especially insofar as I'll be in "listen-only" mode either way.
DC: We need to be sure not to overlap too much with other AC topics. . .
VQ: So, it's just one suggestion -- any others?
TBL: Tag soup vs XML may come up whether we talk about it or not
<dorchard> I just signed on, no phone yet, any interest in versioning at the AC meeting?
<DanC> (do we have an issue about namespaces and media types? ah yes, indeed we do.)
<DanC> (hmm... Subsumed by issue(s) mixedNamespaceMeaning-13 on 22 Apr 2002)
VQ: Indeed we should try to focus our contribution on architecture issues
... Should we then take this as our primary topic for the AC meeting?
<Noah> Probably a long shot in motivating those concerned, but I think I'm right that it's going to be easier to GRDDL XHTML than TAG soup. If semantic web takes off, then documents in XHTML will, in at least that particular sense, be that much more valuable.
VQ: Volunteer to start a discussion on this topic, to get the ball rolling
... We also need someone to make the presentation at the AC meeting
<DanC> hard to say, noah. adding tidy seems to be a small cost to add to GRDDL and other semantic web doodads.
VQ: Attendees will be TimBL, HST and VQ
<Noah> Yes, I suppose you're right. Especially insofar as tidy is viewed as reliable, and in my experience it is indeed surprisingly good.
HST: I did it last time, TBL usually declines on the grounds that he has his own slot, VQ as chairman tries to stay neutral. . .
VQ: I would prefer not to lead this
VQ: So, back to the tag soup vs. XML -- do we need a new issue?
DC: TBL's perspective suggests issue 51 already covers it
NM: So either we give it a new issue, or we explicitly put it under several issues
... self-description is in that category as well
... it all connects up, for example namespaces vs. microformats, where the heavier-weight approach is more in line with self-describing
<DanC> standardizedFieldValues-51: Squatting on link relationship names, x-tokens, registries, and URI-based extensibility
DC: Can we rename issue 51, that's really what I was concerned with there
NM: That's the fine-grained aspect of this, but the self-describing stance is much broader
... The overall value of follow-your-nose is much broader than the field-value concerns of microformats
<DanC> (hmm... at some point I asked somebody to write up introducing language N+1 given languages 1...N are out there. Norm, does that ring a bell? I wonder which action, if any, that was related to.)
NM: The motivation for microformats comes from the relative weight of the thing you're trying to do (add a phone number) and the cost of doing it (a namespace declaration and use which is twice the size of the phone number)
... but to me that's one end of a scale wrt which whole languages are at the other end
DC: The key contrast is between doing things guerilla-style, or using a URI you get via a community-approved process
TBL: And that leads on to the contrast between grounded-in-public-definitions and meaning-as-use
NM: There is a difference between avoiding collision and using a public process
[scribe missed discussion of microformat name scoping]
VQ: Issue 51 has not been much discussed between January and the Vancouver f2f
... We can wait another week to decide about the AC meeting topic
... Wrt tag soup vs. XML -- let's return to that next week as well, in terms of how we take it forward (new issue vs. ...)
TBL: I believe there's a strong desire that we give a report on what we've done
DC: I object, that's a waste of time
<ht> +0
VQ: The HCG asked if someone from the TAG will come to Ams -- what about you, HST?
HST: I will go if asked by the TAG
VQ: Is it appropriate for us to name someone
DC: Yes
<timbl> [others]: yes
VQ: Anyone other than HST?
DC: Position statement required to get a slice of the programme
<timbl> "An XML data model can be seen as a mobile agent that carries with it selected business rules (e.g. bind statements in XForms) for interacting both with humans on the front-end and with a service-oriented architecture (SOA) on the back-end"
HST: If I plan to do that, I will send a draft to this group before doing so
http://www.w3.org/2006/10/17-tagmem-minutes.html
If you want it to be 13, just set it to 13:

    CR = 13;

or you could say:

    CR = '\r';
    LF = '\n';

You're setting octal values and your debugger is showing them as decimal. Easy fix.

Thanks for the response.
Keith

This is because:

    '\10' = 10 in octal = 8 in decimal

There is also a notation for hexadecimal:

    '\x10' = 10 in hexadecimal = 16 in decimal

There is... strangely enough... no way in C/C++ to enter a decimal value as an escape sequence, though for some special characters there are specific sequences, such as '\n' for linefeed/newline (10 decimal, '\xa' or '\12') and '\r' for carriage return (13 decimal, '\xd' or '\15').
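The values discussed in this thread can be checked at compile time. A small C11 demonstration (the message strings are only documentation):

```c
/* Octal escapes use a leading backslash plus octal digits; hex escapes
   use \x plus hex digits. There is no decimal escape sequence, but plain
   integer literals are decimal, so CR = 13; works as expected. */
_Static_assert('\n' == 10, "LF: 10 decimal, '\\12' octal, '\\xa' hex");
_Static_assert('\r' == 13, "CR: 13 decimal, '\\15' octal, '\\xd' hex");
_Static_assert('\10' == 8, "octal 10 is decimal 8");
_Static_assert('\x10' == 16, "hex 10 is decimal 16");
```

If any of these relationships were wrong, the file would simply fail to compile.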
http://forums.codeguru.com/showthread.php?535487-Setting-char-value&mode=hybrid
Walkthrough: Creating a basic Windows Runtime component in C++ and calling it from JavaScript or C#

This walkthrough shows how to create a basic Windows Runtime Component DLL that's callable from JavaScript, C#, or Visual Basic. Before you begin this walkthrough, make sure that you understand concepts such as the Abstract Binary Interface (ABI), ref classes, and the Visual C++ Component Extensions that make working with ref classes easier. For more information, see Creating Windows Runtime Components in C++ and Visual C++ Language Reference (C++/CX).

Creating the C++ component DLL

In this example, we create the component project first, but you could create the JavaScript project first. The order doesn't matter.

Notice that the main class of the component contains examples of property and method definitions, and an event declaration. These are provided just to show you how it's done. They are not required, and in this example, we'll replace all of the generated code with our own code.

To create the C++ component project

On the Visual Studio menu bar, choose File, New, Project.
In the New Project dialog box, in the left pane, expand Visual C++ and then select the node for Windows Store apps.
In the center pane, select Windows Runtime Component and then name the project WinRT_CPP.
Choose the OK button.

To add an activatable class to the component

An activatable class is one that client code can create by using a new expression (New in Visual Basic, or ref new in C++). In your component, you declare it as public ref class sealed. In fact, the Class1.h and .cpp files already have a ref class. You can change the name, but in this example we'll use the default name, Class1. You can define additional ref classes or regular classes in your component if they are required. For more information about ref classes, see Type System (C++/CX).

To add the required #include directives

Add these #include directives to Class1.h:

collection.h is the header file for C++ concrete classes such as the Platform::Collections::Vector Class and the Platform::Collections::Map Class, which implement language-neutral interfaces that are defined by the Windows Runtime. The amp headers are used to run computations on the GPU. They have no Windows Runtime equivalents, and that's fine because they are private. In general, for performance reasons you should use ISO C++ code and standard libraries internally within the component; it's just the Windows Runtime interface that must be expressed in Windows Runtime types.

To add a delegate at namespace scope

A delegate is a construct that defines the parameters and return type for methods. An event is an instance of a particular delegate type, and any event handler method that subscribes to the event must have the signature that's specified in the delegate. The following code defines a delegate type that takes an int and returns void. Next the code declares a public event of this type; this enables client code to provide methods that are invoked when the event is fired.

Add the following delegate declaration at namespace scope in Class1.h, just before the Class1 declaration.

Tip: If the code isn't lining up correctly when you paste it into Visual Studio, just press Ctrl+K+D to fix the indentation for the entire file.

To add the public members

The class exposes three public methods and one public event. The first method is synchronous because it always executes very fast. Because the other two methods might take some time, they are asynchronous so that they don't block the UI thread. These methods return IAsyncOperationWithProgress and IAsyncActionWithProgress. The former defines an async method that returns a result, and the latter defines an async method that returns void. These interfaces also enable client code to receive updates on the progress of the operation.

To add the private members

The class contains three private members: two helper methods for the numeric computations and a CoreDispatcher object that's used to marshal the event invocations from worker threads back to the UI thread.

To add the header and namespace directives

In Class1.cpp, add these #include directives:

Now add these using statements to pull in the required namespaces:

To add the implementation for ComputeResult

In Class1.cpp, add the following method implementation. This method executes synchronously on the calling thread, but it is very fast because it uses C++ AMP to parallelize the computation on the GPU. For more information, see C++ AMP Overview. The results are appended to a Platform::Collections::Vector<T> concrete type, which is implicitly converted to a Windows::Foundation::Collections::IVector<T> when it is returned.

To add the implementation for GetPrimesOrdered and its helper method

In Class1.cpp, add the implementations for GetPrimesOrdered and the is_prime helper method. GetPrimesOrdered uses a concurrent_vector Class and a parallel_for Function loop to divide up the work and use the maximum resources of the computer on which the program is running to produce results. After the results are computed, stored, and sorted, they are added to a Platform::Collections::Vector<T> and returned as Windows::Foundation::Collections::IVector<T> to client code.

Notice the code for the progress reporter, which enables the client to hook up a progress bar or other UI to show the user how much longer the operation is going to take. Progress reporting has a cost. An event must be fired on the component side and handled on the UI thread, and the progress value must be stored on each iteration. One way to minimize the cost is by limiting the frequency at which a progress event is fired.
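The original code listings are not preserved in this excerpt. As a rough idea of what the is_prime helper described above does, here is a hypothetical reconstruction in plain ISO C++ (the walkthrough's actual listing may differ): straightforward trial division, which is adequate for a prime-finding demo.

```cpp
// Hypothetical sketch of an is_prime helper: returns true if n is prime.
// Trial division up to sqrt(n) is enough for correctness.
bool is_prime(int n)
{
    if (n < 2) return false;
    for (int i = 2; i * i <= n; ++i)
    {
        if (n % i == 0) return false;
    }
    return true;
}
```

A helper like this is ordinary C++ with no Windows Runtime types, which is exactly the kind of code the walkthrough recommends keeping private inside the component.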
If the cost is still prohibitive, or if you can't estimate the length of the operation, then consider using a progress ring, which shows that an operation is in progress but doesn't show time remaining until completion.

To add the implementation for GetPrimesUnordered

The last step to create the C++ component is to add the implementation for GetPrimesUnordered in Class1.cpp. This method returns each result as it is found, without waiting until all results are found. Each result is returned in the event handler and displayed on the UI in real time. Again, notice that a progress reporter is used. This method also uses the is_prime helper method.

Press Ctrl+Shift+B to build the component.

Creating a JavaScript client app

To create a JavaScript project

Note: If you just want to create a C# client, you can skip this section.

In Solution Explorer, open the shortcut menu for the Solution node and choose Add, New Project.
Expand JavaScript (it might be nested under Other Languages) and choose Blank App.
Accept the default name, App1, by choosing the OK button.
Open the shortcut menu for the App1 HTML that invokes the JavaScript event handlers

Paste this HTML into the <body> node of the default.html page:

To add styles

In default.css, remove the body style and then add these styles:

#LogButtonDiv {
    border: orange solid 1px;
    -ms-grid-row: 1; /* default is 1 */
    -ms-grid-column: 1; /* default is 1 */
}
#LogResultDiv {
    background: black;
    border: red solid 1px;
    -ms-grid-row: 1;
    -ms-grid-column: 2;
}
#UnorderedPrimeButtonDiv, #OrderedPrimeButtonDiv {
    border: orange solid 1px;
    -ms-grid-row: 2;
    -ms-grid-column: 1;
}
#UnorderedPrimeProgress, #OrderedPrimeProgress {
    border: red solid 1px;
    -ms-grid-column-span: 2;
    height: 40px;
}
#UnorderedPrimeResult, #OrderedPrimeResult {
    border: red solid 1px;
    font-size: smaller;
    -ms-grid-row: 2;
    -ms-grid-column: 3;
    -ms-overflow-style: scrollbar;
}

To add the JavaScript event handlers that call into the component DLL

Add the following functions at the end of the default.js file. These functions are called when the buttons on the main page are chosen. Notice how JavaScript activates the C++ class, and then calls its methods and uses the return values to populate the HTML labels.

Press F5 to run the app.

Creating a C# client app

The C++ Windows Runtime Component DLL can just as easily be called from a C# client as from a JavaScript client. The following steps show how to make a C# client that is roughly equivalent to the JavaScript client in the previous section.

To create a C# project

In Solution Explorer, open the shortcut menu for the Solution node and then choose Add, New Project.
Expand Visual C# (it might be nested under Other Languages), select Windows Store in the left pane, and then select Blank App in the middle pane.
Name this app CS_Client and then choose the OK button.
Open the shortcut menu for the CS_Client XAML that defines the user interface

Add the following ScrollViewer and its contents to the Grid in mainpage.xaml:

<ScrollViewer>
    <StackPanel Width="1400">
        <Button x:
        <TextBlock x:</TextBlock>
        <Button x:</Button>
        <ProgressBar x:</ProgressBar>
        <TextBlock x:</TextBlock>
        <Button x:</Button>
        <ProgressBar x:</ProgressBar>
        <TextBlock x:</TextBlock>
        <Button x:
    </StackPanel>
</ScrollViewer>

To add the event handlers for the buttons

In Solution Explorer, open mainpage.xaml.cs. (The file might be nested under mainpage.xaml.) Add a using directive for System.Text, and then add the event handler for the Logarithm calculation in the MainPage class just after OnNavigateTo.

Add the event handler for the ordered result:

Add the event handler for the unordered result, and for the button that clears the results so that you can run the code again.

Running the app

Select either the C# project or JavaScript project as the startup project by opening the shortcut menu for the project node in Solution Explorer and choosing Set As Startup Project. Then press F5 to run with debugging, or Ctrl+F5 to run without debugging.

Inspecting your component in Object Browser (optional)

In Object Browser, you can inspect all Windows Runtime types that are defined in .winmd files. This includes the types in the Platform namespace and the default namespace. However, because the types in the Platform::Collections namespace are defined in the header file collections.h, not in a winmd file, they don't appear in Object Browser.

To inspect the component

On the menu bar, choose View, Other Windows, Object Browser.
In the left pane of the Object Browser, expand the WinRT_CPP node to show the types and methods that are defined on your component.

Debugging tips

For a better debugging experience, download the debugging symbols from the public Microsoft symbol servers:

On the menu bar, choose Tools, Options.
In the Options dialog box, expand Debugging and select Symbols.
Select Microsoft Symbol Servers and then choose the OK button. It might take some time to download the symbols the first time. For faster performance the next time you press F5, specify a local directory in which to cache the symbols.

When you debug a JavaScript solution that has a component DLL, you can set the debugger to enable either stepping through script or stepping through native code in the component, but not both at the same time. To change the setting, open the shortcut menu for the JavaScript project node in Solution Explorer and choose Properties, Debugging, Debugger Type.

Be sure to select appropriate capabilities in the package designer. For example, if you are attempting to programmatically access files in the Pictures folder, be sure to select the Pictures Library check box in the Capabilities pane of the package designer.

If your JavaScript code doesn't recognize the public properties or methods in the component, make sure that in JavaScript you are using camel casing. For example, the ComputeResult C++ method must be referenced as computeResult in JavaScript.

If you remove a C++ Windows Runtime Component project from a solution, you must also manually remove the project reference from the JavaScript project. Failure to do so prevents subsequent debug or build operations. If necessary, you can then add an assembly reference to the DLL.

See Also

Developing Bing Maps Trip Optimizer, a Windows Store app in JavaScript and C++
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh755833(v=vs.140)
I have a simple 3D fps game (think original Wolfenstein and Doom) where the textures are done old-school pixel style, but I find I'm having to save every texture out from Photoshop at a min of say 512x512 (even though the actual texture art for the walls is actually only 32x32) just to stop it looking all aliased when the game is running. Here's a couple images of the game:

If I set the textures to Point (no filter) in Unity then they look all jaggy. If I set them to Bilinear or Trilinear in Unity then they look clean, but only if I output them at the much higher resolution from Photoshop first, otherwise Unity's filters turn them into a blurred mess.

So how am I supposed to use pixel art textures as intended, which are naturally drawn at a low resolution (so they're not taking up a lot of space each), but without any blurring?

Wolfenstein and Doom have jaggies, so if you are trying to be authentic to that era of games, why is this a problem? The issue is that there is no way for the render engine to know "this is a line" or "this is a rectangle of pixels" and then anti-alias the jaggy-ness away between scaled up pixels. Know what I mean? You would have to either resort to "voxels" or planes where each pixel is actually a mesh, then turn on anti-aliasing so that the renderer smooths out the mesh edges, getting rid of the jaggies. OR find a vector texture solution which maps 2D vector textures (like SVG format) to 3D geometry.

Actually, I may have another solution for you: Do you want your pixel size to remain consistent to how games originally were? If so you could set your resolution to be low and use something like "pixel perfect camera":

I think you can only use the pixel perfect camera on 2D games. And the main reason I don't want the jaggies in my game, even though it's based on retro classics like Wolfenstein and Doom, is because unlike on a normal flat screen, in VR this kind of jaggy and shimmering visual artifact tends to increase the likelihood that you'll feel sick while playing. I want the old pixel look but I want any visual lines, both on the edges of polygons and in the textures, to stay as clean and clear and sharp as possible, but I get what you're saying about this maybe not being possible properly with what I'm going for. I guess the higher resolution output of the artwork is fine for now.

Answer by Vilhien · Jan 04 at 09:24 PM

When you change their size in Photoshop, under image size make sure "Resample" is unchecked; you shouldn't get any resampling of the pixels upon changing the size that way.

I don't think this is an issue with Photoshop's end of things. The original images output from it are perfectly sharp (using nearest neighbor to preserve the clean pixel look). It's when the game does its stuff in Unity that the issues arise. I either have to go with the original images with no Unity filtering options turned on, in which case they can be used at their original low memory size but are jaggy as hell as soon as the camera isn't looking exactly flat on at them and any angles of the pixels aren't perfectly horizontal or vertical, or I have to use one of Unity's texture filter options that makes them completely blurry unless I first go back and save them out of Photoshop scaled up by an order of magnitude (like I said, a 32x32 now needs to be output at 512x512, or 256x256 at a push, just so Unity's filtering doesn't completely blur things).

So I don't know how to get nice clean pixel textures on my 3D game's walls without having to save every single normally-32x32 texture at 512x512 instead, which seems like a terrible waste of memory, especially with some of my even smaller textures (16x16 or even 8x8) that still need to be saved at min 128x128 so they don't look either jaggy or blurry in the game with filters on. Something seems a bit off to me, like maybe there's some other option I'm missing where I can still use the tiny texture as is but that also keeps them looking nice and smooth in my 3D game too.

Hey bud, looked around and playing with it a bit on my end as well with my project. Yer right, no filter, point seems to make mine look best as well, but I'm not following any strict guidelines. I did read somewhere that the pixel count should match, so try setting your pixel count to 32. I'm running into all sorts of shader issues myself currently as I'm flipping sprites. It seems every time I venture out to make a 2d game it ends up 3d. Any other luck so far?

Changing the pixel count doesn't seem to make any difference that I can see. I guess it's not a huge deal because the game's file size isn't huge when I save out the build, but it just seems strange that I have to convert all my pixel art sprites/images to something much bigger before putting them into Unity just to avoid the blur when using either Bilinear or Trilinear Unity filters on the texture settings, which I have to use in order to get rid of the jaggy lines that appear on the textures when viewed any way other than perfectly flat/straight on in my 3D game.

Answer by Eno-Khaon · Jan 07 at 12:32 AM

Since bilinear interpolation naturally blends between every pixel across the entire length of those pixels, you'll need to write a shader which ignores that normal blending.
For this, you'll want to decide on your basis for calculation:

A) Manually interpolate pixels in a point-filtered texture (which requires no fewer than 4 pixels read), then adapt the blending itself.

B) Manipulate texture reads on an interpolated texture (1 pixel read, but more costly) to make it look like a point-filtered texture, then adapt the position of the read pixel.

This isn't complete, but should get you started in the right direction for approach B. Using an unlit shader as the baseline to keep it short:

Shader "Unlit/FilterScale"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
        _Threshold ("Rounding Threshold", Range(0.0, 1.0)) = 0.5
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float4 _MainTex_TexelSize;
            float _Threshold;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Half-pixel offset to grab the center of each pixel to disguise
                // bilinear-filtered textures as point-sampled
                float2 halfPixel = _MainTex_TexelSize.xy * 0.5;

                // Location (0-1) on a given pixel for the current sample
                float2 pixelPos = frac(i.uv * _MainTex_TexelSize.zw);

                // Sub-pixel position transformed into (-1 to 1) range from pixel center
                pixelPos = pixelPos * 2.0 - 1.0;

                float2 scale;
                float2 absPixelPos = abs(pixelPos);

                // If sub-pixel position is near an edge (_Threshold), use point-filtering (scale = 0)
                // Otherwise, use an analog dead zone calculation to approximate blending
                // (note: can be improved)
                if (absPixelPos.x < _Threshold)
                {
                    scale.x = 0.0;
                }
                else
                {
                    scale.x = (absPixelPos.x - _Threshold) / (1.0 - _Threshold);
                }

                if (absPixelPos.y < _Threshold)
                {
                    scale.y = 0.0;
                }
                else
                {
                    scale.y = (absPixelPos.y - _Threshold) / (1.0 - _Threshold);
                }

                // Calculate the new real UV coordinate by blending between the center
                // of the current pixel and the original sample position per axis
                float2 uvCoord;
                uvCoord.x = lerp(floor(i.uv.x * _MainTex_TexelSize.z) * _MainTex_TexelSize.x + halfPixel.x, i.uv.x, scale.x);
                uvCoord.y = lerp(floor(i.uv.y * _MainTex_TexelSize.w) * _MainTex_TexelSize.y + halfPixel.y, i.uv.y, scale.y);

                float4 col = tex2D(_MainTex, uvCoord);
                return col;
            }
            ENDCG
        }
    }
}

I apologize that it's a little sloppy (I got the algorithm close enough to function well, but can't remember what I'm missing to make it just a little cleaner).

Thanks for your feedback. This, however, is way, way beyond my understanding of code and everything around it at this point in time, even your simple explanation before the code :-o, so I think I'll just leave it for now and come back to it if necessary later on, when hopefully I'll understand more complicated code and stuff far better. I literally have no clue whatsoever what any of that code means, and I wouldn't have the slightest idea where to start working with/on it. I was really just hopeful of a simple setting in Unity that I'm missing or some basic way of creating the images slightly differently that might have solved the issue, or something along those lines. Thanks again though, and I'll keep this on hand in case I need to come back to it at some future time.

No problem. Saving texture memory can be tricky business when you want more out of it. I went back through and added comments to the shader example to help explain what's going on in it. And, yeah, expanding on it to be fully featured isn't exactly simple, but is certainly worth the time and research.
https://answers.unity.com/questions/1586600/how-to-get-jaggy-free-pixel-textures-in-3d-game.html?sort=oldest
Control data and query access in a REST API

The REST API provides a simple way to authorize incoming requests. Each Route instance has an authorize method that takes a function which must return true or false to indicate whether the current request should be authorized. The authorization function receives the current request, which contains all the information about the request, including a manager instance for easy database access.

You may call the authorize method on a Route instance to add a new authorization function, and you may call this method as many times as you need. All of the authorization functions will be run, and if any of them returns false, the request will throw an Unauthorized error.

    import { tensei, route } from '@tensei/core'

    tensei()
      .routes([
        route('Get Purchases')
          .get()
          .path('customers/:id/purchases')
          .authorize(({ customer }) => !!customer)
          .authorize(async ({ customer }) => customer?.hasPermission('Get Purchases'))
          .handle(async (parent, args, ctx, info) => ctx.customer.purchases)
      ])

In the above example, two authorization checks will be performed before executing the request handler. If you are using the auth plugin, the context passed to the authorize function will contain the currently authenticated user.

You may wish to add custom authorization functions to routes automatically generated by Tensei. To do this, first get the route, then call its authorize method with your own authorization function. A good place to do this is in the tensei().register() function.

    import { tensei } from '@tensei/core'

    tensei()
      .register(({ getRoute }) => {
        getRoute('updateComment')
          ?.authorize(({ user, body }) => body?.comment?.user !== user.id)
      })
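The run-them-all semantics described above can be captured in a few lines of plain JavaScript. This is only an illustrative sketch, not Tensei's internal implementation, and the function name runAuthorizers is made up for the example; async handling is omitted for brevity.

```javascript
// Every authorize function runs against the request context;
// a single false result rejects the whole request.
function runAuthorizers(authorizers, ctx) {
  const results = authorizers.map((fn) => fn(ctx))
  return results.every(Boolean)
}
```

A route with no authorize calls is allowed through, since every() on an empty array returns true.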
https://tenseijs.com/docs/rest-authorization
Creating truly universal React component systems

Announcing styled-components/primitives, an experimental entry point combining styled-components and react-primitives. Until now your only option was to use React Native platform extensions to target specific platforms with your components. This allows you to share the logic and structure, which is already amazing, but it still means writing your styling multiple times. Thanks to @MathieuDutour we've now combined the best of react-primitives and styled-components, and we're excited to bring you an experimental release of styled-components/primitives: build your styled components once and use them on the web, on native, and even render them to Sketch! See the release notes if you want to know more, but let's talk about the journey of getting here.

styled-components/native

After finishing the first prototype of styled-components, Glen and I realised that it'd be nice to use it for ReactNative apps too. So we sat down and built a converter from CSS strings to ReactNative style objects (now its own package, css-to-react-native) and added an entry point, styled-components/native, that gave you access to the ReactNative primitives. That worked well, and lots of folks are using styled-components today to build their ReactNative apps with great success.

react-primitives

Meanwhile, Leland Richardson started hacking on react-primitives. The goal was to define and implement the smallest subset of primitives on all platforms that would allow users to cover 99% of use cases. He boiled it down to six APIs:

- Animated: Declarative animations.
- StyleSheet: Styling the primitives.
- View: A base component for Layout.
- Text: A base component for Text rendering.
- Image: A base component for Image rendering.
- Touchable: A base component for interaction.

These six APIs are then implemented and work the same way on the web, on native, and since recently even in Sketch, allowing us to create truly universal and reusable components.
(see his talk at ReactEurope for a more in-depth explanation)

The issue was that you couldn't use styled-components together with react-primitives. While this was something on the core team's mind, nobody had actually investigated or even tried adding a "react-primitives mode" to styled-components; it was all just faint dreams.

styled-components/primitives

Mathieu Dutour submitted a PR to react-sketchapp with a hacked version of styled-components combined with react-primitives to make it compatible with react-sketchapp. Just a day later we had a cleaned-up PR against styled-components, and here we are. These components don't just render to Sketch though: these are the exact same styled components, the exact same code, rendering in the browser, as a mobile app and in Sketch! 😱 💅

Try it out! All you have to do to get started is install the two packages with npm install --save styled-components react-primitives, then import styled from 'styled-components/primitives' and you're good to go! (assuming you have Node.js installed)

Note that this is an experimental release: there might be bugs, and there also isn't a massive amount of documentation for it yet (check out the react-primitives repo). That being said, we're super excited to finally have a first version of this in styled-components, and we're looking forward to hearing how you use styled-components/primitives to build your apps. Stay stylish! 💅
https://medium.com/styled-components/announcing-primitives-support-for-truly-universal-component-systems-5772c7d14bc7?utm_campaign=Fullstack%2BReact&utm_medium=web&utm_source=Fullstack_React_68
- Content model on the browser: VIE and RDFa
- Content persistence and retrieval: PHPCR
- Improving performance: AppServer-in-PHP

As I've written before, I'm concerned about the state of the PHP ecosystem. There are lots of good applications written in the language, but there is very little code sharing between different projects, mainly because of framework incompatibilities, but also because of quite a strong NIH culture. But there are also bright points. I've recently seen lots of exchange of ideas, and even potential code sharing, between some communities including Symfony2, Midgard, TYPO3 and eZ Publish. Much of the vision in these systems is similar, as are many of the engineering principles. When everybody uses reasonable object-oriented design, namespaces, and test-driven development, it is much easier to share. If I had to list three areas where there is the most potential for collaboration, these would be:

Content model on the browser: VIE and RDFa

The age of communicating with your web audience via forms is almost over, and it is time to evolve. HTML5 includes support for the contentEditable attribute, which allows rich editing interaction straight on the pages, and there are cool editors supporting it, including Aloha Editor and Mercury. To do proper front-end editing, your CMS and the JavaScript environment have to agree on the content model. Fortunately there is a great solution for this: just annotate your content with some RDFa. Having RDFa on a page allows the browser to understand the content: what is a collection of blog posts, for instance, and what is the title of a blog post. With this, my VIE library will provide you with a nice in-browser content management API based on Backbone.js. Getting there is easy:

- Annotate your pages with RDFa
- Include vie.js in the pages
- Implement Backbone.sync

This allows a great deal of decoupling in the CMS stack.
Suddenly the server side just has to worry about content management and page generation, and newer in-browser technologies can be used for actual content authoring. Using RDFa annotations in your content also comes with another benefit: suddenly your pages themselves are an API into your content model. And search engines can understand and present your content better. If you want to learn more about this, watch my talk from the Aloha Editor Dev Con.

Content persistence and retrieval: PHPCR

Historically, all CMSs have implemented persistence in their own way. There have been systems using relational databases like MySQL, systems providing their own content repository APIs like Midgard, and also some systems just using XML and the file system. This has reduced integration and code re-use possibilities between systems. In the Java world, a solution exists for this: the Java Content Repository standard (JCR). Now JCR has been ported to PHP. PHPCR provides a standard interface for all content management needs, and has multiple back-ends available. Depending on your deployment needs, you could store your content in a relational database, in Apache Jackrabbit, or in, for example, MongoDB. PHPCR is great in that you can start small: just model your content with a simple, filesystem-like tree of nodes and properties. Then, when you need it, a wealth of functionality is available. Versioning? Query builders? Access control? It is all there for you to use. And, depending on the PHPCR back-end, you'll have the ability to scale up to insane amounts of content. While I've advocated using content repositories for years now, this is the first time PHP has a true standardized, vendor-neutral API for it. And PHPCR is even being integrated into the JCR specification, eventually making it an official standard. Adoption is also picking up.
Yesterday I was in a meeting where we had developers from TYPO3, Symfony2, Doctrine and Midgard discussing issues and solutions in the content repository space. I just hope the other projects also pick this specification up.

Improving performance: AppServer-in-PHP

Of the three, this is probably the most controversial idea. Traditionally PHP is run as a scripting environment on a regular web server, like Apache or Nginx. In such a setup, when the server receives a request, it passes it on to the PHP environment. The PHP environment loads all the code needed to fulfill the request, runs it, sends the response back, and unloads everything loaded. This is fine when PHP is being used in the way Rasmus originally intended, as a simple display layer. But nowadays most PHP runs on a big framework, whether it is MVC or something custom like Drupal. And loading and then discarding a whole framework for each request is simply insane. With AppServer-in-PHP (AiP), you have an environment where even a big framework can perform. AiP provides you with a full server environment for PHP, written in PHP. In this setup, your framework is loaded when the server boots up, and then each request just runs the request-processing part of it. During the San Francisco Aloha Dev Con we ported TYPO3 to run on AiP, and the performance results were staggering. A simpler request with not much I/O would run 3-4 times faster than the same code on a regular PHP setup, and an I/O-intensive request would still be twice as fast. AiP can't do much about I/O performance, but at least the cost of having a framework is greatly reduced. In short, AppServer-in-PHP is something any developer running web services with a PHP framework should consider. It is also a great way for framework developers to see if they have request isolation problems in their design.
This post was written at the TYPO3 Developer Days 2011 event, where I was invited to discuss these ideas and also help run the RDFa part of the TYPO3 Goes Semantic workshop.
https://dzone.com/articles/my-secret-agenda-php-content
* Explicit conversion operators
* Initializer lists
* Scoped enums
* Variadic templates

In 2013 RTM, this list will be extended to:

* Alias templates
* Deleted functions

Thanks for the detailed overview and all your work, STL! // transparency++ A small question real quick — now that variadic templates are implemented in the compiler, are the std. lib. emplace functions taking advantage of these? // Context: stackoverflow.com/…/push-back-vs-emplace-back

Yes, emplacement is 100% real variadic now. Here's the history: in 2010, although we had faux variadic machinery for make_shared/etc., we didn't use that machinery for emplacement. Instead, we supported single arguments only, which wasn't terribly useful. In 2012, we made emplacement faux variadic. (This increased use of our macro machinery was one of the factors that led us to reduce infinity from 10 to 5.) And in 2013 Preview, emplacement along with everything else is real variadic. The macro machinery is totally gone, making it impossible for us to accidentally use it. (We have similar macro-spam machinery for calling conventions now, but that's different.)

This is a very thorough and helpful rundown. Thank you for your effort! I think a lot of C++ users felt like they were out in the cold when there was no news on the November CTP for what amounts to 8 months. This along with the roadmap were exactly what the doctor ordered.

Best MS post I've read in a long time. Too few comments of praise here, people. People are probably busy adding deserved complaints to some other MS blog, but that's no excuse for not giving credit here.

Well done as always STL, no-nonsense, yet meticulous work, nicely explained. I know many people who appreciate your work and we know it also takes time to track and respond like this and it wouldn't have happened without your personal commitment to make it happen and in being as detailed and relevant as it is. It's not pizza, it's job #1. Thanks again.
It's great to see Microsoft finally [fully] embrace C99 and C++11 (as well as working towards C++14). Visual Studio has been a bit behind when it comes to C++ ISO support for a while now, so I am glad to see it keeping up with the times. Nice work. What are the chances of having VS2010, VS2012 and VS2013 Preview installed and used on the same Windows 7 64-bit system ending in a horrible mess requiring reinstalling everything? Lots of great C++ and STL features in VS2013. We're looking forward to these changes! Thank you, Stephan, for the detailed write-up and your typical clarity & attention to detail. It's much appreciated, especially when so much other software ships updates with just "* bug fixes" as the release notes. 😀 @Paul Jurczak: I have all three installed on Windows 7 64-bit. No issues. Stephan T. Lavavej: Wow, I'm _really_ glad to hear that! Once again, thanks a lot for your great work, it's it *very* much appreciated! Everyone: You're welcome! :-> Joel: Sorry about the radio silence! If it were up to me, I'd announce things as they're checked in, because that's the point where we can be confident that something will ship. However, my bosses and boss-like entities have rules about disclosure timelines. Those rules make sense – nobody knows about Halo 16's Neutrino Cannon until it's announced – but they interact strangely with Standard features (where, for example, everyone can see alias templates in the Core Language and Standard Library). Fortunately, Herb's conformance roadmap now answers the questions about what stuff we're planning to implement and when, which should ward off chilly feelings in the future. Paul Jurczak: It is definitely safe to install RTM versions side-by-side; those scenarios are extensively tested. I personally don't know about the 2013 Preview => RTM upgrade story (I'd assume it's tested and supported, but installers make me intensely paranoid). Leo Davidson: Yeah, it's so tempting to just say "we fixed stuff". 
To avoid bug database/source control archaeology, I used the low-tech method of sending myself an "STL Fixes After VS 2012" mail that I updated whenever we checked stuff in. Then I was able to expand that list of bug numbers into this post.

On the MSDN page I noticed the following statement: "New Linker Options. The /Gw (compiler) and /Gy (assembler) switches enable linker optimizations to produce leaner binaries." Can you explain these new options?

Great work Stephan and the rest of the VC++ team! The only features left on my must-have list are default move and C++14 constexpr + generic lambdas.

I believe that the std::cbegin/std::cend/std::crbegin & std::crend overloads that delegate to std::begin/std::end/std::rbegin & std::rend are a mistake in the making for the next standard. IMO these overloads should delegate to begin/end/rbegin & rend through argument-dependent lookup; otherwise you will need to define cbegin/cend/crbegin & crend non-member functions for your own types, which is a huge hassle.

retep9998: I just asked the compiler back-end team, and they told me that those options are like msdn.microsoft.com/…/xsa71f43.aspx but for data instead of functions. They're also going to publish a blog post about them soon.

Programmer: ADL was brought up when the Library Working Group was considering my Proposed Resolution for std::cbegin/etc. As I recall, the consensus was that invoking ADL is somewhat scary (it is a continual thorn in the Standard's side, e.g. in range-for), and that requiring authors of custom ranges to provide a few one-liner overloads wasn't too burdensome. In particular, custom ranges with .begin() and .rbegin() automatically work, so only ranges without such members are affected.

Incredibly useful write-up Stephan, thank you. I'm really looking forward to seeing what kind of transformations and cost savings I'm able to achieve with our ~20-year-old code base in conjunction with the new C++ features. – Tom

But why not a mention of the update from '12?
Is it free or is it pay? I expect this to be the most bug-full of all yet seen, so pay seems unreasonable.

I really like your videos and articles. Thanks STL for being such a wonderful voice for C++.

I really appreciate the thoroughness of the post and the personal touches. However, like many others, I'm puzzled by Microsoft's inability to be the leading implementation of C++. Maybe the pizza deliverers are more talented than we thought?

Hiya, thanks for the updates. Is there any chance that the noexcept specifier, alignof or alignas will be implemented?

Hello STL, Please tell me if you are going to support fenv.h of C99 and the cfenv header of C++11? Floating point support is very necessary for our projects. We would love to move to your IDE, but it is a major constraint. There is a bug report on Connect for the 2012 version, but it's still missing in 2013: connect.microsoft.com/…/c-11-header-cfenv-missing Any plans to support those and stdbool?

Is there hope for the return of inline x64 assembly?

The company I work for isn't made out of money. We are forced to yet again consider going over to an alternative compiler such as gcc or clang. So much for trusting Microsoft. We expected full C++ support would come in an update for VS2012. Microsoft in a video gave us this impression. It went something like: "Missing features will be released as an update". In other words, Microsoft yet again failed to follow up on their word to fully support C++11 in VS2012, presumably with an update. As an excuse to delay full C++ support you started working on C++14. A delay tactic. This is like adding water to a bucket while trying to empty the bucket. Of course it will take longer to empty it if you do not speed up the emptying. For every release, all we C++ users get are excuses for why full C++ support is not there. Yet other languages such as C# get Microsoft's full attention.
C++, an international _standard_ (with performance and compatibility at its core), put at a lower priority than C#, a proprietary language, aka useless. C# is an egomaniac narcissistic language filled with people of equal mind. Disgusting! (Channel 9 videos are a good reference) You even have the arrogance and impudence to force use of C++/CX. Everyone knows this is just a stunt to get more people familiar with C# syntax and to get them to start using it and get locked in. Microsoft is getting more and more greedy and desperate for more money. Using more and more exploitative methods. Methods that in more civilized countries are illegal. You were supposed to listen to our feedback and make it better, and not do the exact opposite. Microsoft is using methods that scammers and other criminals use. Now what does that tell you about the company? Remember the NSA leak? (of course some of us already assumed that would be the case)

Hello STL, Will you tell the priority of the UTF-8 string literals feature? (§2.14.5 of the working paper, "u8")

Nice and all: VS 2013 pricing has not yet been announced.

Phil: noexcept/alignof/alignas will not be implemented in 2013 RTM. As Herb announced, they're tentatively planned for a post-RTM CTP/alpha. The STL needs them, which increases their priority somewhat.

Mazda: Pat Brenner checked in fenv.h on June 17, so (barring catastrophe) it will ship in 2013 RTM. As I mentioned in the post, I cannot yet promise that we'll be able to ship cfenv and the other updated wrapper headers in 2013 RTM. (Working on this is third on my list, after getting STL support for alias templates and deleted functions in.) As I mentioned, C99 Core Language _Bool is confirmed for 2013 RTM. However, stdbool.h has not yet been checked in.

Alessio T> is there hope for the return of inline x64 assembly? Magic 8 Ball says: Don't count on it.
Roman: Unicode string literals weren't mentioned in Herb's slide deck, so you shouldn't expect to see them in the post-RTM CTP/alpha that he announced.

> Pat Brenner checked in fenv.h on June 17, so (barring catastrophe) it will ship in 2013 RTM.

Thank you!

Oops, I was looking at the wrong branch. stdbool.h was checked in on June 6, so it will ship in 2013 RTM.

@STL, thank you!

Will C99 be fully supported in VS2013? Is there a better conformance test to check all standards for C99? Any plans for C11 too (en.wikipedia.org/…/C11_(C_standard_revision))? In the future? 😀

Hello, STL. What about implementing std::dynarray (the <dynarray> header) from C++14 in RTM?

@Nice and @Patrick: Let me cut-and-paste something I wrote to a similar question on another comment thread about upgrade policy… I completely understand the frustration with having recently purchased shrink-wrap VS 2012 Professional and now VS 2013 is already here — and the upgrade pricing hasn't been announced yet, so people are filling in their own expectations and expecting the worst. (I'm told it will be announced in the near future, but we don't control that; we work on our product and integrate it into VS, then VS goes to market however the whole product decides.) Having said that, here's what my understanding is at this point: We are continuing to ship a free Express compiler, so anyone can get the latest free VC++ optimizing compiler with the latest full conformance available, as usual; and anyone who is a subscription customer is already signed up to get free updates for the life of their subscription, so they get it for free too. That leaves people who bought a standalone shrink-wrap VS 2012 Professional (sans subscription), and they will have some upgrade path, but I can't speculate.
Note that the foregoing is "to the best of my knowledge" to try to be helpful, but don't rely solely on what I say here, I'm not involved in deciding this — the people who do decide this will announce the upgrade policy formally soon. It may be less than you think, but I don't know, and I realize that while pricing hasn't been announced people will quite naturally fill in the blank with the worst case, that's an understandable concern. I hope the following will help each customer decide which VS version they want to purchase: As we said at Build, we'll continue to ship VC++ with more conformance in two streams: CTP releases, and RTM releases, each more often than every 2-3 years. And when a new feature reaches RTM quality, you should expect it in the next VS full release, whenever that turns out to be (on the new cadence), but it would be unusual and a happy surprise to see new conformance features appear in an interim VS Update because even a language feature can cause a breaking change and it would be terrible to have someone install Update N+1 and have that break their project that worked fine on Update N — we'd only consider putting a language feature in an RTM release if we were sure it could not possibly cause a breaking change, and breaking changes can be sly little beasts that hide in more corners than you and we all initially think are possible. Armed with that information you can decide what the right purchase decision is that makes sense for you, knowing what the feature set of VC++ 2013 is going to be (what I announced on Friday in the first two columns).

It's too risky to implement such specifications in less than half a year, because there are another dozen chances to leave new bugs behind in the compiler.

@Herb It's like saying "I completely understand your frustration when we're giving you the finger". It's a no-op if you're not going to address the issue.
I know you're not the person making those decisions, but the dark scenarios we're expecting are not just bragging. They're based on your (as a company) actions so far, and anyone expecting any good news for 2012 owners is delusional. To be honest, the roadmap you showed says absolutely nothing beyond 2013 RTM. It basically means that you're working on stuff, but it says nothing about when or how many times we'll have to pay the full price of VS for those incremental updates. At this pace it looks like 3-4 RTMs till we get full C++11. The CTPs have little value if they're going to be the same quality as the first one (I've personally hit two blocking bugs in 3 lines of code on day one). It's next to impossible to convince a company to upgrade, because next year the topic will return. It's also impossible to estimate how many versions we can skip, because who knows how many RTMs it will actually take. Express is not a company solution. If you just need a compiler, there are already at least two that have full C++11 (and some of C++14) implementation and are free at that!

Alias templates have to be taken care of when renaming variadic templates. The default and delete member functions have to be applicable to SFINAE too.

Great update. I wish it did all of C++11, but it's still the great majority, so I'm reasonably happy, and this is a really helpful summary. Although please can you implement an option to reinstate the CPU heating when the space bar is pressed, as I have code that relies on this feature.

@Herb Sutter > @Alex: In what way? Variadics had a long bug tail, which we don't want to repeat, but otherwise the intent is to deliver something similar to last November's CTP — likely a compiler-only preview with additional features.
If I had to guess, I would assume it to mean "the VC++ team was very explicit that 'these features will be in VS2012 RTM… Oh, wait, we didn't have time for that, but we will be providing a steady stream of release-quality updates for owners of VS2012…. ok, here's a CTP just to show what you'll get within the next couple of months…. What, you expected new features for VS2012? What gave you that crazy idea? You should buy VS2013 instead'". When VS2012 shipped, you did not say "Oh, you guys might get an alpha-quality preview showing what the *next* version of VS will contain". You said that this was a break with the archaic development Microsoft had used until now, and that going forward, new features would be added to the compiler in out-of-band updates.

As I've said before, I am a bit befuddled by the apparent belief that if new features are distributed for free, they become risky, but attach them to a new product that people have to pay for, and the risk goes away. What code would break by allowing VS2012 owners to upgrade to the VS2013 compiler for free? Yes, the bean counters who apparently rule Visual Studio with an iron fist would obviously have a fit, but you and we are developers, and speaking as developers, that would be exactly as safe as having customers *pay* for a VS2013 upgrade and then having the new compiler. The only difference would be the amount of money paid to Microsoft. So the "it's for your own best that we don't release new features for existing compilers" line is… very weak. There is no inherent risk reduction in giving more money to Microsoft. Microsoft is obviously free to do all this (and it is no worse than how Visual Studio has been sold for the past X years). But trying to rationalize it away as "for your own good" seems disingenuous. You could just be honest and say "my bosses have decided that we charge money for new features", because that is what it boils down to.

Stephan, firstly many thanks for a very detailed update.
It is very disappointing that Visual C++ users aren't going to get automatically generated move constructors and move assignment operators ("rvalue references v3") in an RTM version of VS until likely near the end of 2014 (assuming the current VS release schedule is maintained). As others have pointed out in the comments sections of previous Visual C++ Team Blog posts, having to write (and then maintain) move ctors/assignment operators (where you simply want what should be the default versions) is time-consuming and error-prone.

My main question though is: Looking at en.cppreference.com/…/copy_constructor, "The implicit copy constructor for class T will not be generated if any of the following conditions are true" <snip> "T has a user-defined move constructor or move assignment operator". Currently VS2012 and VS2013 Preview don't implement this. Moving forward to VS2013 RTM, when =delete and =default are added, does this imply that you will be making this change to be C++11 compliant? i.e. if I have defined my own move ctor and move assignment operator for MyClass, I will then need to add:

MyClass(const MyClass&) = default;
MyClass& operator=(const MyClass&) = default;

to get the default versions. Thanks

@STL: Thanks for a great write-up! Observing all those DevDiv/Connect bug numbers side by side made me sigh and wish again that DevDiv bugs were exposed, so us customers won't have to re-discover them the hard way: visualstudio.uservoice.com/…/2665441-publish-internally-known-non-security-bugs

I have a proposed extension to C++: introduce the keyword novirtual or nonvirtual. The immediate need for this is when I want extremely strict code checking and a class has virtual methods but not a virtual destructor. My own bias is that structs should not allow virtual methods and classes should require explicit constructors and destructors (default and delete are great additions!)
I've since realized this could be extended in a very peculiar way; namely, to cut off the vtable at a certain method in the inheritance hierarchy. I don't know why you would do this, but if I thought long enough I might come up with a reason. Still, I'm not opposed to making this strictly for destructors. Then I could compile with full warnings and C4265, and STL would stop complaining.

Can we use C99 designated initializers in C++?

@asdf: The C99 features apply when compiling .c files or with the /TC "compile as C" switch.

Mazda: 2013 RTM will not support the entire C99 Core Language, just the selected parts that I listed. I heard from Pat Brenner that our C99 Standard Library support will be complete except for tgmath.h (which requires magic compiler support), but I don't want to absolutely swear to that yet. There are companies that sell conformance test suites (we work with a couple of them) and there are probably open-source ones that I'm not familiar with. As for C11, Herb would have to answer that.

Sergey: Sorry, <dynarray> will not be implemented in 2013 RTM. There just isn't enough time to keep cramming in new features, given that we're shipping this calendar year. Alias templates and deleted functions in the STL will definitely make it (I am currently working on checking them in) and the C99 wrapper headers are a possibility, but <optional> and <dynarray> are too big to start now.

Alex: To be very, very clear, we are promising another CTP/alpha. You should not expect new features to appear in production-quality Updates to RTM releases. (It is possible we'll do that in the future, as we did with TR1 in 2008 SP1, but you should not expect this.)

JB: That's horrifying.

Jesper> "the VC++ team was very explicit that 'these features will be in VS2012 RTM… Oh, wait"

We never, ever, ever promised or even suggested that variadic templates/etc. would ship in 2012 RTM.
In fact, when my "C++11 Features in Visual C++ 11" post was published on Sept 12, 2011, the reaction to the lack of those features was swift and vicious. When what became the Nov 2012 CTP was announced at GoingNative on Feb 2-3, 2012, the announcement didn't distinguish CTPs from Updates clearly enough, but there's absolutely no way the announcement could have been interpreted as applying to 2012 RTM.

> What code would break by allowing VS2012 owners to upgrade to the VS2013 compiler for free?

None. "I want new features in 2012 Updates for free" and "I want new features in 2013 for free" are different requests. The former (2012 Updates), i.e. updating a toolset in-place, is difficult and risky in technical terms, as I explained in my post (or impossible for binary breaking changes in libraries). The latter (2013), i.e. adding a separate toolset, is purely a question of pricing, which has not yet been announced; it presents no technical issues.

Alastair: Looks like your comment showed up on VCBlog after all. I answered this on Reddit:…/catwnvu

Ofek Shilon: I'd be totally happy with a publicly viewable bug database.

Joe: Are you aware of C++11's context-sensitive keyword "final"? That appears to be exactly what you're asking for.

Another question to @Stephan T. Lavavej: since Windows 8.1 drops x64 support for a couple of processors (all 130nm x64 CPUs by AMD and Intel, if I am correct), all the x64 CPUs supported now should be SSE3 compliant. Is it a good reason to add SSE3 optimization to the compiler? 😀 What says your Magic 8 Ball? :O

> Windows 8.1 drops x64 support for a couple of processors (all 130nm x64 CPUs by AMD and Intel if I am correct), all the x64 CPUs supported now should be SSE3 compliant..)

The x64-targeting compiler has had /arch:AVX since VS 2012, see msdn.microsoft.com/…/jj620901(v=vs.110).aspx, although that is obviously beyond SSE3.

Stephan, Thanks for the post.
When you release the final version will you release performance numbers compared to other compilers? Intel has a platform and they are making outrageous claims about performance against Microsoft's versions. software.intel.com/…/intel-composer-xe , Benchmarks. Will you set the record straight please?

George Carlisle: That's runtime perf (with the SPEC/etc. benchmarks), so you'd have to ask the compiler back-end (code generation) team about that. Jim Hogg is running a series of articles on VCBlog about optimizations. I observe that the linked page compares Intel 13.0 (released Sept 5, 2012) to VC 2010 instead of VC 2012 (released Sept 12, 2012), which was the first release of VC with autovectorization.

It's just an observation I made: when I checked which processors started to support those instructions, I noted that with those CPUs AMD and Intel started to add the SSE3 instruction set too. Dunno about VIA CPUs, and maybe the information I got (wiki, AMD and Intel developer/ark sites) is wrong, or maybe I forgot to check some CPUs… it was a quick research… But as far as I see now, all x64 CPUs on the 64-bit version of Windows are SSE3 compliant. For precise information one must ask AMD, Intel and VIA.

> The x64-targeting compiler has had /arch:AVX since VS 2012, see msdn.microsoft.com/…/jj620901(v=vs.110).aspx , although that is obviously beyond SSE3.

I know that SSE3 optimization is not such a big deal, and I also know that AMD started to support SSSE3, SSE4.1 and 4.2 in combination with AVX support… So AVX is the best optimization choice after SSE2/3… It was just the result of a quick research I made to see if my old Athlon 64 X2 was able to run the 64-bit preview of Windows 8.1 :p

Can you respond to this: gynvael.coldwind.pl

Yuhong Bao, what would you like me to comment on specifically?

The refusal to issue security updates for the Visual C++ compiler.

// Hey STL, I believe this is a bug in VS2012 and VS2013 preview.
// Like the improved colors on the IDE in VS2013.
// Hate the prelude to being forced to log in to the IDE for any reason.
int main() {
    array<std::future<int>, 1> a;
    a[0] = async([]() {}); // This should fail because it does not return an int, but it compiles.
    //a[0] = async( [] () -> int { return 0; } ); // This is what should be required.
    a[0].get();
}

Yuhong Bao: I talked to the compiler team, and they told me that this specific bug has been fixed (by using an STL container instead of handwritten code!). It was fixed in March, so the fix should be in 2013 Preview, but I haven't verified that. As for the general question, they confirmed my understanding: the compiler is not meant to compile untrusted code and it is not security-hardened. It is, of course, theoretically possible for a C++ compiler to robustly handle untrusted code, but in practice that would require an extraordinary amount of work. Therefore, perhaps uniquely among shipping Microsoft products, the compiler does not attempt to be secure. (Of course, this is very different from the security of its generated code and libraries.)

Glen: Thanks, I've filed that as DevDiv#737786 "<future>: future<void> should not be convertible to future<int>". That compiles because our internal helper class _Packaged_state<void (_ArgTypes…)> derives from _Associated_state<int>.

Shall we remove the C2899 from the new version? It's useless and always causes some trouble.

STL: Regarding rvalue references v3, do you know anything about the outcome of N3401, "Generating move operations"?

I haven't been keeping careful track of all the stuff Core/Evolution have been voting into C++14, but here's what I found: N3401 was an attempt to solve Core 1402. The Editor's Report N3692 says that N3667 "Drafting for Core 1402" was voted in at Bristol and applied to N3690.

// Hey STL, thanks for acknowledging the previous problem.
// How about this one, will the RTM of VS 2013 fix problems like this:
#include <cstdio>
#include <iostream>
#include <cstring>

int main() {
    char buf[100];
    double dthere = .25;
    printf("printf %%a: %a\n", dthere);
    sprintf(buf, "%a", dthere);
    printf("sprintf %%a: %s\n", buf);
    float f;
    double dback = 0.0;
    int r = sscanf(buf, "%la", &dback);
    printf("sscanf %%f: %f (result %d)\n", dback, r);
}

// This program prints:
// printf %a: 0x1.000000p-2
// sprintf %a: 0x1.000000p-2
// sscanf %f: 0.000000 (result 0)
// but this last value is wrong on VS2012 and VS2013 preview as it should print:
// sscanf %f: 0.250000 (result 1)

Thanks Glen, I've filed DevDiv#744423 "<stdio.h>: sscanf() can't parse hexfloat" and assigned it to our CRT maintainer. We won't be able to fix this in 2013 RTM, but we'll look at it for the next major release. (The printf/scanf family's format specifiers need to be updated for C99; there are other issues in this area.)

Glen: Actually, DevDiv#737786 "<future>: future<void> should not be convertible to future<int>" has already been fixed – it definitely repros with 2012 Update N and definitely does not repro with my current build of 2013. After doing some programmer-archaeology, I found that I fixed this in February, so that should have gotten into 2013 Preview. (I'm actually not sure why I filed the bug; I think I was rebuilding at the time, so I used 2012 to repro, then quoted 2013's _Packaged_state in my reply. I shouldn't have taken that shortcut, especially since I have the ability to use remote builds of 2013 to repro bugs.) Anyways, the fixed source is "future(const _Mybase& _State, _Nil)". The extra parameter prevents future from being unintentionally constructed with things that happen to derive from its base class (like other futures).

Hi STL, you are indeed correct. I didn't realise that the project had the VS2012 tool chain selected. So my apologies for that.
Interestingly, changing the selection to VS2013 express preview in the drop down list and making it stick was problematic. Just changing it and choosing the close X does not seem to save it. You have to go to another section and then it says "hey bone head you forgot to save, would you like to", and if you say yes, then it sticks. Sorry about that. But thanks for taking the time to check this out personally. It's great you go to this effort and it makes a difference even if sometimes it's a waste of time for you. Hopefully you'll see the drop list bug. Hopefully that'll show up as fixed too. Thanks again.

Glen> Hopefully you'll see the drop list bug.

I never use the IDE to build, except when filming videos or dealing with repros submitted in the form of projects (in which case my first goal is to extract a command-line repro).

Looks like it took too long for me to post last time:

STL: Ah, d'oh. I was looking for papers that had a previous version for N3401 (which doesn't actually make sense), should have looked for 1402 instead. Thanks!

And in one build configuration (release only, 32-bit and 64-bit), VC12 took an hour less than VC11 (about a 20% savings). Thanks to those that made that happen. It wasn't the most scientific of tests (using only similarly configured VMs, but on different hosts), but that's still pretty spiffy.

How about XP targeting in Visual C++ 2013?

2013 Preview supports and 2013 RTM will support XP targeting.

Hi, will you support Unicode string literals? I am porting some JavaScript code to C++11 and they use it like this:

var nonASCIIwhitespace = /[\u1680\u180e\u2000-\u200a\u2028\u2029\u202f\u205f\u3000\ufeff]/;

It would be so cool to be able to write the following C++ code:

std::string nonASCIIwhitespace = u"\u1680\u180e\u2000-\u200a\u2028\u2029\u202f\u205f\u3000\ufeff";

I don't think it's complicated to implement compared to other C++11 features… Thanks

Excellent level of detail. Good to see the mechanics providing the insight.
I suspect that the current rendering of this post has a lot more vertical whitespace than originally intended, probably due to the recent blog engine upgrade. It would be nice if you guys could fix that at some point. (There may be some other no-longer-rendered markdown-style formatting too, like the bulleted paragraphs, but they don’t look bad enough for me to be certain that they were ever actually converted into the appropriate HTML by the blog engine.)
- NAME
- Synopsis
- Description
- Distributions
- Constructor and initialization
- Method: log($message)
- Method: run()
- See also
- Author

NAME

Rose::DBx::Bouquet - Use a database schema to generate Rose-based source code

Synopsis.

Description.

Distributions

This module is available as a Unix-style distro (*.tgz). See for details. See for help on unpacking and installing.

Constructor and initialization

new(...) returns an object of type Rose::DBx::Bouquet. This is the class's constructor. Usage: Rose::DBx::Bouquet -> new(). This method takes a hashref of options. Call new() as new({option_1 => value_1, option_2 => value_2, ...}).

Available options:

- exclude.
- module

This takes the name of a module to be used in the prefix of the namespace of the generated modules. Generate a set of modules under this name. So, Local::Wine would result in:

- ./lib/Local/Wine/Rose/*.pm (1 per table)
- ./lib/Local/Wine/Rose/*/Form.pm (1 per table)
- ./lib/Local/Wine/Rose/*/Manager.pm (1 per table)

These examples assume -output_dir is defaulting to ./lib. The default value for 'module' is Local::Wine, because this document uses Local::Wine for all examples, and because you can download the Local::Wine distro from my website, as explained in the FAQ, for testing.

- output_dir.
- remove

This takes either a 0 or a 1. Removes files generated by an earlier run of this program. For instance, given the output listed under the 'module' option above, it removes the directory ./lib/Local/Wine/Rose/. The default value is 0, meaning do not remove files.

- tmpl_path

This is the path to Rose::DBx::Bouquet's template directory. These templates are input to the code generation process. If not specified, the value defaults to the value in lib/Rose/DBx/Bouquet/.htrose.bouquet.conf. The default value is ../Rose-DBx-Bouquet-1.00/templates. Note: The point of the '../' is because I assume you have done 'cd Local-Wine-1.06' or the equivalent for whatever module you are working with.
- verbose

This takes either a 0 or a 1. Write more or fewer progress messages to STDERR during code generation. The default value is 0.

- Availability of Local::Wine

Download Local::Wine from

The schema is at:

Rose::DBx::Bouquet ships with rose.app.gen.pl in the bin/ directory, whereas Local::Wine ships with various programs in the scripts/ directory. Files in the bin/ directory get installed via 'make install'. Files in the scripts/ directory are not intended to be installed; they are only used during the code-generation process. Note also that 'make install' installs lib/Rose/DBx/Bouquet/.htrose.bouquet.conf, and - depending on your OS - you may need to change its permissions in order to edit it.

- Minimum modules required when replacing Local::Wine with your own code

Short answer:

- Local::Wine
- Local::Wine::Config.
- Local::Wine::Base::Create
- Local::Wine::DB

This module will use the default type and domain, where 'type' and 'domain' are Rose concepts.

- Local::Wine::Object

Long answer: See the docs for Local::Wine.

- Why isn't Local::Wine on CPAN?

To avoid the problem of people assuming it can be downloaded and used just like any other module.

- Do you support DBIx::Class besides Rose?

I did not try, but I assume it would be easy to do.

- How does Rose::DBx::Bouquet handle rows with a great many columns?

All columns are processed. Future versions of either or both of Rose::DBx::Bouquet and CGI::Application::Bouquet::Rose will support a 'little language' () which will allow you to specify the columns to be displayed from the current table.

- How does Rose::DBx::Bouquet handle foreign keys?

When CGI::Application::Bouquet::Rose displays an HTML form containing a foreign key input field, you must enter a value (optionally with SQL wild cards) for the foreign key, if you wish to use that field as a search key.
Future versions of either or both of Rose::DBx::Bouquet and CGI::Application::Bouquet::Rose will support a 'little language' which will allow you to specify the columns to be displayed from the foreign table via the value of the foreign key.

- A note on option management

You'll see a list of option names and default values near the top of this file, in the hash %_attr_data. Some default values are undef, and some are scalars. My policy is this:

- But why have such a method of handling options?

Because I believe it makes sense for the end user (you, dear reader) to have the power to change configuration values without patching the source code. Hence the conf file. However, for some values, I don't think it makes sense to do that. So, for those options, the default value is a scalar in the source code of this module.

- Is this option arrangement permanent?

No. Options whose defaults are already in the config file will never be deleted from that file. However, options not currently in the config file may be made available via the config file, depending on feedback. Also, the config file is an easy way of preparing for more user-editable options.

Method: log($message)

If new() was called as new({verbose => 1}), write the message to STDERR. If new() was called as new({verbose => 0}) (the default), do nothing.

Method: run()

Do everything. See bin/rose.app.gen.pl for an example of how to call run().

See also

Rose::DBx::Garden.

Author

Rose::DBx::Bouquet:
Supporting Vuetify

Vuetify is an open source MIT project that has been made possible due to the generous contributions by community backers. If you are interested in supporting this project, please consider:

- Becoming a sponsor on GitHub (supports John)
- Becoming a backer on OpenCollective (supports the Dev team)
- Becoming a subscriber on Tidelift
- Making a one-time payment with PayPal
- Booking time with John

Sponsors

Open Collective Sponsors

Introduction

Vuetify is a semantic component framework for Vue. It aims to provide clean, semantic and reusable components that make building your application a breeze. Build amazing applications with the power of Vue, Material Design and a massive library of beautifully crafted components and features. Built according to Google's Material Design Spec, Vuetify components feature an easy-to-remember semantic design that shifts remembering complex classes and markup to type-as-you-speak properties that have simple and clear names.

Harness the power of the Vuetify community and get help 24/7 from the development team and our talented community members across the world. Become a backer and get access to dedicated support from the team.

Browser Support

Vuetify supports all modern browsers, including IE11 and Safari 9+ (using polyfills). From mobile 📱 to laptop 💻 to desktop 🖥, you can rest assured that your application will work as expected. Interested in the bleeding edge? Try the Vue CLI 3 Webpack SSR (server-side rendered) template and build websites optimized for SEO. For more information about IE11 and Safari 9+ polyfills, visit our Quick Start Guide.

Documentation

To view our docs 📑, get support 🤯 and the store 🛒 visit us at vuetifyjs.com.

Getting Started

To get started with Vuetify, you can follow one of these simple set-up instructions.

One Click Quick-start

Looking to dive right in with zero setup and downtime?
Check out our CodePen.io One Minute Quickstart.

Vue CLI 3 Installation

Setting up Vuetify with Vue's CLI is really easy 👌. (Unsure of how to install Vue's CLI on your system? Check out the official Installation Instructions or our Quick Start Guide.)

If you're setting up a new project, first create it using the CLI's create command.

vue create my-app

If you are adding Vuetify to an existing Vue CLI 3 project, navigate to your project's root inside your terminal.

cd /path/to/project

Now, add Vuetify as a plugin with the CLI's add utility.

vue add vuetify

Note that you can also find and install Vuetify through Vue's UI! Navigate to your project folder, or create a new Vue CLI app, then run the following command.

vue ui

Once the UI launches in your browser, click the + Add plugin button on the top right corner of your screen. On the next screen, search for Vuetify on the list, select it, and install it using the Install vue-cli-plugin-vuetify button on the bottom right.

Vue CLI 3 Ecosystem 🌎

Manual Installation Through Package Managers

Looking to add Vuetify to your project directly as a node module? You can easily accomplish this by using the yarn or npm package managers 📦.

yarn add vuetify
# OR
npm install vuetify

Once you have the library installed, we need to hook it up to Vue. Fire up your favorite IDE, and head to the file in which you are importing Vue to your project and creating your application.

import Vue from 'vue'
import Vuetify from 'vuetify' // Import Vuetify to your project

Vue.use(Vuetify) // Add Vuetify as a plugin

For including styles 🎨, you can either place the below styles in your index.html (if using the CLI) or directly at your app's entry point.

<link href="" rel="stylesheet">
<link rel="stylesheet" href="[email protected]/css/materialdesignicons.min.css">
<link href="[email protected]/dist/vuetify.min.css" rel="stylesheet">

Or you can import it to your webpack entry point file. This is usually your main.js file.
import ''
import '@mdi/font/css/materialdesignicons.css'
import 'vuetify/dist/vuetify.min.css'

Don't forget to install Material Design Icons as a dependency:

yarn add @mdi/font -D
# OR
npm install @mdi/font -D

For more information, please visit the quick-start guide.

Manual Installation Through CDN

To use Vuetify in your project by directly importing it through CDNs (Content Delivery Networks), add the following code to the <head> of your HTML document.

<link href="" rel="stylesheet">
<link rel="stylesheet" href="[email protected]/css/materialdesignicons.min.css">
<link href="[email protected]/dist/vuetify.min.css" rel="stylesheet">

Don't forget to add both Vuetify and the main Vue library to your HTML file before the closing </body> tag.

<script src="[email protected]/dist/vue.js"></script>
<script src="[email protected]/dist/vuetify.js"></script>

Community Support

Ask your support questions on the Vuetify Discord Community 💬. Frequently asked questions and gotchas are on the FAQ Guide ❓

Contributing

Developers interested in contributing should read the Code of Conduct 👩‍💻👨‍💻 and the Contribution Guide. Please do not ask general questions in an issue. Issues are only to report bugs, suggest enhancements, or request new features. For general questions and discussions, ask in the Vuetify Discord Community. It is important to note that for each release, the detailed changes are documented in the release notes.

Reporting Guide

You can report issues by following the Issue Template and providing a minimal reproduction with a CodePen template or a full project at CodeSandbox.

Good First Issue

To help you get familiar with our contribution process, we have a list of good first issues that contain bugs which have a relatively limited scope. This is a great place to get started. We also have a list of help wanted issues that you might want to check.

Credits

Contributors

This project exists thanks to all the people who contribute 😍!

Backers

Thank you to all our backers!
🙏 [Become a backer]

Sponsors

Support this project by becoming a sponsor. Your logo will show up here with a link to your website. [Become a sponsor]

License

Vuetify is MIT licensed.
Image processing with Python and SciPy

Python handles image input and output through pillow, scikit-image, and pyfits. Once loaded, an image may be processed using library routines or by mathematical operations that take advantage of the speed and conciseness of numpy and scipy. Some of the resources mentioned here require Python >3.4, and at this time Python 3.6 is the current one.

Contents

Pillow - An Imaging Library

The Python Imaging Library (PIL) was developed for Python 2.x and provided functions to manipulate images, including reading, modifying and saving in various standard image formats, in a package called "PIL". With the coming of age of Python 3.x, a fork of the older version has evolved that is more suited to the new technologies and is in a package called "Pillow". It continues to improve, and the features described here are tested with "Pillow 5.1" and Python 3.6 as of April 2018. Pillow will probably be on any packaged distribution of Python 3, or it may be installed with (note the capital "P")

pip install Pillow

Pillow includes the basics of image processing, with functions that are documented by the developers in a handbook describing the methods and giving some examples. Pillow uses the same "namespace" as PIL, and older code should work, perhaps with a few modifications to allow for recent developments. Most important for us, Pillow has routines to read and write conventional image formats. Once an image has been read into a numpy array, the full power of Python is available to process it, and we can turn to Pillow again to save a processed image in png or jpg or another format. Flexible Image Transport System (FITS) files used for astronomy should be managed with astropy or pyfits.

As a simple starting example, suppose you have an image that was taken with the camera turned so that "up" is to the side when the image is displayed. Here's how you would read it, rotate it 90 degrees, and write it out again using Pillow.

import os
from PIL import Image
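A minimal, self-contained sketch of that read-rotate-save sequence follows. A synthetic image stands in for a real photograph so the snippet runs on its own; in practice you would start from Image.open("myphoto.jpg"), and the file names here are illustrative:

```python
import io
from PIL import Image

# Stand-in for Image.open("sideways.jpg") -- a landscape frame that
# should have been portrait
im = Image.new("RGB", (640, 480), "white")

# rotate() turns the image counterclockwise; expand=True grows the
# canvas so that no corners are cropped
rotated = im.rotate(90, expand=True)
print(rotated.size)   # (480, 640)

# Write it back out; normally save("upright.png") infers the format
# from the extension, but it can also be forced, as here
buf = io.BytesIO()
rotated.save(buf, format="PNG")
```

Note that without expand=True, rotate() keeps the original canvas size and crops whatever falls outside it.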
Many of the processing functions you will find in the Python Imaging Library (PIL) are also available in SciPy, where we have precise mathematical control over their definitions and operation. Some more advanced techniques are available in SciPy too, courtesy of researchers who have contributed to SciKit, as we will see. Python's core routines dependent on matplotlib may be used to display an image, but these are designed for graphics and limited by the constraints of the matplotlib interface. With a little effort there are better choices.

Scikit-image is often compared to OpenCV, a collection of programs for computer vision that includes live video. Both are actively maintained and in many ways complementary, but for physics and astronomy scikit-image is more powerful at this time. The scikit-image package is part of Anaconda and Enthought Python, and that would be a recommended platform for Windows. However, for Linux and Mac OS X,

pip install scikit-image

should work. The package usually requires local compilation of the code when installed this way. Qt bindings are provided by PySide or by Qt, depending on licensing requirements. PySide is less restrictive.

pip install PySide

While scipy has included an image reader and writer, as of April 2018 this function is deprecated in the base code, and rather than use pillow we can turn to scikit-image. The module to read and write images is skimage.io:

import skimage.io
import numpy as np

and the command

skimage.io.find_available_plugins()

will provide a dictionary of the libraries that may be used to read various file types. For example

imlibs = skimage.io.find_available_plugins()

and imlibs['pil'] will list the functions that the Python Imaging Library provides. The package tries the libraries in order until it finds one that works:

myimage = skimage.io.imread(filename).astype(np.float32)

will read an image and return a numpy array, which by default will be an RGB image if the file is a png file, for example.
A greyscale image may be specified by including as_grey=True as an argument. A numpy image has a shape that for color has 3 values in each pixel:

print(myimage.shape)
(498, 680, 3)

as an example for an RGB image, or (498, 680) for a greyscale image. Since numpy by default would store into a 64-bit float and matplotlib (the default display for skimage) requires 32-bit, we specify loading into a 32-bit array while planning ahead to seeing the result.

Images may be saved:

skimage.io.imsave(filename, nparray)

and the file type is determined by the file name extension. Images may be displayed, but it takes two steps

skimage.io.imshow(myimage)
skimage.io.show()

when invoking the default matplotlib plugin. The display will look like one created by pyplot. There is a simpler viewer module too, without the pyplot toolbar.

import skimage.viewer
viewer = skimage.viewer.viewers.ImageViewer(myimage)
viewer.show()

We will use routines from the scikit-image and astropy.io.fits packages.

Images with NumPy

from skimage.io import imread
from skimage.io import imsave
# from skimage.io import imshow  # an alternative for display
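A short self-contained round-trip with skimage.io, with a small synthesized array standing in for a real photograph and an illustrative file name:

```python
import numpy as np
import skimage.io

# A tiny synthetic RGB image in place of skimage.io.imread("photo.png");
# uint8 is the usual dtype for 8-bit-per-channel images
rgb = np.zeros((8, 10, 3), dtype=np.uint8)
rgb[..., 0] = 255          # make every pixel pure red
print(rgb.shape)           # (8, 10, 3)

# Write it out and read it back; the format follows the extension
skimage.io.imsave("tiny.png", rgb)
back = skimage.io.imread("tiny.png")
print(back.shape)          # (8, 10, 3)
```

The shape (rows, columns, channels) and the pixel values survive the round-trip unchanged, since PNG is lossless.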
The difference in handling them is that for science we want to preserve the data without loss, quantitatively calibrate and measure the flux from the source, and map that back to a specific angle in space, while for art or even for some academic uses, it is the beauty, color, and highlights shown in the image display that are important. Commodity images are usually saved in compressed formats such as JPG, or uncompressed TIFF or proprietary binary formats. For astronomy and other quantitative imaging work, the Flexible Image Transport System (or FITS) format is almost universal. It includes the image data and a header describing the data. FITS files may also be tables of data, or a cube of images in sequence. The standards developed for creating these files are slowly evolving as the needs of big data in astronomy have grown. Programmers and scientists at NASA, the Space Telescope Science Institute, and the academic community at large are contributing to libraries that enable reading, processing, and saving FITS files in Python, as well as in C and Fortran. Once a FITS file has been read, the header is accessible as a Python dictionary of the data contents, and the image data are in a NumPy array. With Python using NumPy and SciPy you can read, extract information, modify, display, create and save image data.

Reading and Displaying a FITS File in Python

There are many image display tools for astronomy, and perhaps the most widely used is ds9, which is available for Linux, MacOS, and Windows, as well as in source code. It is always useful to have a version on your computer when you are working with FITS images. A more versatile Java platform for astronomical image viewing that also does processing is now widely used for precision astronomical photometry where interactive analysis is needed. AstroImageJ is free and simple to install on most computers, and because it is also a powerful processor, for many purposes it is an all-in-one tool.
If you are working with FITS images, this is highly recommended:

- AstroImageJ website and program source
- Astronomical Journal article describing AstroImageJ

However, sometimes we want to perform specialized work on image data, and to view it while processing it in Python. This routine demonstrates how to read a FITS file, inspect its header, and show the image on the computer display. It is dependent on the "PyFITS" library developed at the Space Telescope Science Institute and incorporated into a larger package by the AstroPy Project. AstroPy's library is part of the Enthought and Canopy distributions of Python. If you are using a system version of Python, you may need to install it with pip:

pip install astropy

should do it. The package has several dependencies: it requires Python 3.5 or higher at this time (April 2018), and depends on numpy and scipy. It is also frequently updated, and while stable it may push the "cutting edge" of distributions from conservative operating system distributors like OpenSuse. That's a good reason, if you need this capability, to have a version of Python built with current sources, or to use a complete distribution such as Anaconda or Enthought.

A program to work with FITS files would begin by importing the packages that it needs:

import os
import sys
import argparse
import numpy as np
import astropy.io.fits as pyfits
import matplotlib
import matplotlib.pyplot

Importing the FITS modules in this way makes the code backward compatible with earlier versions of PyFITS. Also, since sometimes submodules are not loaded with the larger package, we explicitly ask for the io.fits components. A simpler method is to parse the command line itself using the system utilities:

# or the AstroImageJ viewer.
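A minimal sketch of reading a FITS file with astropy.io.fits follows. A small file is written first so the example is self-contained; the file name and header keyword are illustrative, and in practice you would simply open an existing image:

```python
import numpy as np
import astropy.io.fits as pyfits

# Write a small FITS file so the example runs on its own; normally
# you would just call pyfits.open("myimage.fits")
data = np.arange(12, dtype=np.float32).reshape(3, 4)
hdu = pyfits.PrimaryHDU(data)
hdu.header["OBJECT"] = "test field"   # header cards are key/value pairs
hdu.writeto("demo.fits", overwrite=True)

with pyfits.open("demo.fits") as hdul:
    header = hdul[0].header                   # dictionary-like access
    image = hdul[0].data.astype(np.float32)   # copy out the pixel data

print(header["OBJECT"])   # test field
print(image.shape)        # (3, 4)
```

Copying the data out with astype() before the file closes avoids surprises from astropy's lazy, memory-mapped loading.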
Correcting and Combining Images

- Signal at each pixel with no light present -- the "dark" image
- Signal at each pixel for the same irradiance/pixel -- the "flat" field
- Non-linear responses
- Absolute calibration to an energy or photon flux based on spectral response

Interpreting a FITS image to obtain the celestial coordinates requires -- what else -- PyWCS, also developed at the Space Telescope Science Institute and now included in AstroPy. For example, to access the world coordinate system in a FITS file, we would import the module

from astropy.wcs import WCS

to have the same namespace as the original PyWCS and library functions for conversion to and from celestial coordinates and pixel coordinates in the image. There are many examples of this in a package of utilities we have developed here:

Alsvid - Algorithms for Visualization and Processing of Image Data

Other Processing

- 'nearest'
- 'bilinear'
- 'cubic'
- 'bicubic'

AstroImageJ and Alsvid: Examples

For examples of Python illustrating image processing, see the examples section. Alsvid is intended as a command-line supplement to the powerful Java program AstroImageJ, which provides real-time interactivity with astronomical image processing and precision photometry. AstroImageJ is built on the original ImageJ, an image processing program developed at the National Institutes of Health and now maintained as a public domain open-source resource. As such, this core component of AIJ offers many specialized tools for image analysis in the biological sciences which are equally useful in physics and astronomy. AstroImageJ alongside versatile Python desktop processing is a powerful combination for astronomical image analysis.

Assignments

For the assigned homework to use these ideas, see the assignments section.
Queue Collections in Java with Examples

In this article, I am going to discuss Queue Collections in Java with Examples. Please read our previous article where we discussed Set Collections in Java with Examples. As part of this article, we are going to discuss the following pointers in detail, which are related to Java Queue Collections.

- Queue Interface in Java
- Classes that implement Queue Interface in Java
- Methods of Queue Interface
- Priority Queue Collection in Java
- DeQueue Collection in Java
- ArrayDeque Collection in Java

Queue Collections in Java

The Java Queue interface orders the elements in FIFO (First In First Out) manner. In FIFO, the first element added is removed first and the last element is removed last. This interface is dedicated to storing elements where the order of the elements matters. The Queue interface of the Java collections framework provides the functionality of the queue data structure. It extends the Collection interface. Since Queue is an interface, we cannot provide a direct implementation of it.

Classes that implement Queue Interface in Java:

In order to use the functionalities of Queue, we need to use classes that implement it:

- PriorityQueue
- Deque

Methods of Queue Interface

- add(): Inserts the specified element into the queue. If the task is successful, add() returns true; if not, it throws an exception.
- offer(): Inserts the specified element into the queue. If the task is successful, offer() returns true; if not, it returns false.
- element(): Returns the head of the queue. Throws an exception if the queue is empty.
- peek(): Returns the head of the queue. Returns null if the queue is empty.
- remove(): Returns and removes the head of the queue. Throws an exception if the queue is empty.
- poll(): Returns and removes the head of the queue. Returns null if the queue is empty.

Priority Queue Collection in Java:

It implements the Queue interface.
The PriorityQueue class provides the functionality of the heap data structure. It provides the facility of using a queue, but it does not order the elements in a FIFO manner. It is based on a priority heap. The elements of the priority queue are ordered according to their natural ordering, or by a Comparator provided at queue construction time, depending on which constructor is used.

Creating a PriorityQueue

Syntax: PriorityQueue<Integer> numbers = new PriorityQueue<Integer>();

Here, we have created a priority queue without any arguments. In this case, the head of the priority queue is the smallest element of the queue, and elements are removed in ascending order from the queue.

Sample Program to demonstrate PriorityQueue

import java.util.*;

class PriorityQueueDemo {
    public static void main(String args[]) {
        PriorityQueue<String> queue = new PriorityQueue<String>();
        queue.add("Amit");
        queue.add("Vijay");
        queue.add("Karan");
        queue.add("Jai");
        queue.add("Rahul");
        System.out.println("head:" + queue.element());
        System.out.println("head:" + queue.peek());
        System.out.println("Iterating the queue elements:");
        Iterator itr = queue.iterator();
        while (itr.hasNext()) {
            System.out.println(itr.next());
        }
        queue.remove();
        queue.poll();
        System.out.println("After removing two elements:");
        Iterator<String> itr2 = queue.iterator();
        while (itr2.hasNext()) {
            System.out.println(itr2.next());
        }
    }
}

PriorityQueue Example with a Complex Data Type in Java:

import java.util.Objects;
import java.util.PriorityQueue;

class Employee implements Comparable<Employee> {
    private String name;
    private double salary;

    public Employee(String name, double salary) {
        this.name = name;
        this.salary = salary;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getSalary() { return salary; }
    public void setSalary(double salary) { this.salary = salary; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;
        Employee employee = (Employee) o;
        return Double.compare(employee.salary, salary) == 0
            && Objects.equals(name, employee.name);
    }

    @Override
    public int hashCode() {
        return Objects.hash(name, salary);
    }

    @Override
    public String toString() {
        return "Employee{" + "name='" + name + '\'' + ", salary=" + salary + '}';
    }

    // Compare two employee objects by their salary
    @Override
    public int compareTo(Employee employee) {
        if (this.getSalary() > employee.getSalary()) {
            return 1;
        } else if (this.getSalary() < employee.getSalary()) {
            return -1;
        } else {
            return 0;
        }
    }
}

public class PriorityQueueDemo {
    public static void main(String[] args) {
        // Create a PriorityQueue
        PriorityQueue<Employee> employeePriorityQueue = new PriorityQueue<>();

        // Add items to the Priority Queue
        employeePriorityQueue.add(new Employee("Rajeev", 100000.00));
        employeePriorityQueue.add(new Employee("Chris", 145000.00));
        employeePriorityQueue.add(new Employee("Andrea", 115000.00));
        employeePriorityQueue.add(new Employee("Jack", 167000.00));

        /* The compareTo() method implemented in the Employee class is used
           to determine in what order the objects should be dequeued. */
        while (!employeePriorityQueue.isEmpty()) {
            System.out.println(employeePriorityQueue.remove());
        }
    }
}

Deque Collection in Java

Deque is an acronym for "double-ended queue". The Java Deque interface is a linear collection that supports element insertion and removal at both ends. It extends the Queue interface. Deque is an interface and has two implementations: LinkedList and ArrayDeque.

Creating a Deque

Syntax: Deque dq = new LinkedList();
        Deque dq = new ArrayDeque();

Methods of Deque

- addFirst(): Adds the specified element at the beginning of the deque. Throws an exception if the deque is full.
- addLast(): Adds the specified element at the end of the deque.
Throws an exception if the deque is full.
- offerFirst(): Adds the specified element at the beginning of the deque. Returns false if the deque is full.
- offerLast(): Adds the specified element at the end of the deque. Returns false if the deque is full.
- getFirst(): Returns the first element of the deque. Throws an exception if the deque is empty.
- getLast(): Returns the last element of the deque. Throws an exception if the deque is empty.
- peekFirst(): Returns the first element of the deque. Returns null if the deque is empty.
- peekLast(): Returns the last element of the deque. Returns null if the deque is empty.
- removeFirst(): Returns and removes the first element of the deque. Throws an exception if the deque is empty.
- removeLast(): Returns and removes the last element of the deque. Throws an exception if the deque is empty.
- pollFirst(): Returns and removes the first element of the deque. Returns null if the deque is empty.
- pollLast(): Returns and removes the last element of the deque. Returns null if the deque is empty.
- push(): Adds an element at the beginning of the deque.
- pop(): Removes an element from the beginning of the deque.
- peek(): Returns an element from the beginning of the deque.

ArrayDeque Collection in Java:

The ArrayDeque class provides the facility of using a deque backed by a resizable array. It inherits the AbstractCollection class and implements the Deque interface. This is a special kind of array that grows and allows users to add or remove elements from both ends of the queue. Array deques have no capacity restrictions; they grow as necessary to support usage.

Array Implementation of Deque

Syntax: Deque<String> animal1 = new ArrayDeque<String>();

Sample Program to demonstrate the ArrayDeque Collection in Java

import java.util.Deque;
import java.util.ArrayDeque;

class ArrayDequeDemo {
    public static void main(String[] args) {
        Deque<String> animals = new ArrayDeque<String>();
        animals.addFirst("Dog");
        animals.addLast("Cat");
        System.out.println(animals.peekFirst());
        System.out.println(animals.peekLast());
    }
}

In the next article, I am going to discuss Map Collections in Java with examples.
Here, in this article, I try to explain Queue Collections in Java with Examples and I hope you enjoy this Queue collection in Java with Examples article.
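The FIFO, priority, and double-ended behaviors described above are not specific to Java. As a cross-language illustration, here is a short Python sketch using the standard library; the names are illustrative, and collections.deque and heapq play roughly the roles of Java's ArrayDeque and PriorityQueue.

```python
from collections import deque
import heapq

# FIFO: the element added first is removed first (compare Queue.add()/remove()).
fifo = deque()
for name in ["Amit", "Vijay", "Karan"]:
    fifo.append(name)
first_out = fifo.popleft()          # "Amit" leaves first

# Priority order: the head is the smallest element, not the oldest
# (compare PriorityQueue.add()/poll() with natural ordering).
heap = []
for n in [5, 1, 4, 2]:
    heapq.heappush(heap, n)
smallest = heapq.heappop(heap)      # 1 leaves first

# Double-ended: insertion and removal at both ends
# (compare Deque.addFirst()/addLast()).
dq = deque([2, 3])
dq.appendleft(1)
dq.append(4)

print(first_out, smallest, list(dq))
```

The same three orderings (insertion order, natural order, both-ends access) are exactly what distinguish Queue, PriorityQueue, and Deque in the Java API above.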
https://dotnettutorials.net/lesson/queue-collections-in-java/
NAME

Tk_GetFontStruct, Tk_NameOfFontStruct, Tk_FreeFontStruct - maintain database of fonts

SYNOPSIS

#include <tk.h>

XFontStruct *
Tk_GetFontStruct(interp, tkwin, nameId)

char *
Tk_NameOfFontStruct(fontStructPtr)

Tk_FreeFontStruct(fontStructPtr)

ARGUMENTS

interp - Interpreter to use for error reporting.
tkwin - Token for window in which font will be used.
nameId - Name of desired font.
fontStructPtr - Font structure to return name for or delete.

DESCRIPTION

Tk_GetFontStruct loads the font indicated by nameId and returns a pointer to information about the font. The pointer returned by Tk_GetFontStruct will remain valid until Tk_FreeFontStruct is called to release it. NameId can be either a font name or pattern; any value that could be passed to XLoadQueryFont may be passed to Tk_GetFontStruct. If Tk_GetFontStruct is unsuccessful (because, for example, there is no font corresponding to nameId) then it returns NULL and stores an error message in interp->result.

Tk_GetFontStruct maintains a database of all fonts it has allocated. If the same nameId is requested multiple times (e.g. by different windows or for different purposes), then additional calls for the same nameId will be handled very quickly, without involving the X server. For this reason, it is generally better to use Tk_GetFontStruct in place of X library procedures like XLoadQueryFont.

The procedure Tk_NameOfFontStruct is roughly the inverse of Tk_GetFontStruct. If its fontStructPtr argument was created by Tk_GetFontStruct, then the return value is the nameId argument that was passed to Tk_GetFontStruct to create the font. If fontStructPtr was not created by a call to Tk_GetFontStruct, then the return value is a hexadecimal string giving the X identifier for the associated font. Note: the string returned by Tk_NameOfFontStruct is only guaranteed to persist until the next call to Tk_NameOfFontStruct.

When a font is no longer in use, Tk_FreeFontStruct should be called to release it. When the last reference to a given font is released, Tk_FreeFontStruct will return it to the X server and delete it from the database.

KEYWORDS

font
http://search.cpan.org/~srezic/Tk-804.029/pod/pTk/GetFontStr.pod
Oct 18, 2017 11:07 AM|Go2Greece|LINK

Hello Forum,

I think I may have a bad design here; the problem is below - any suggestions would be a great help. I used to have code which checked a flag on a single enum - this was simple enough - I set binary values in the enum and it all worked well:

If Instance.HasFlag(PageType.AboutUs)

This works fine. Now I have two possible enums that are very different, but both have a member named 'AboutUs'. I have a way of knowing which enum to use in the code:

Dim Pages As Type = GetType(PageType)

So now I am stuck. I can make a kind of EnumFactory to return the right enum, and both will have the member name 'AboutUs' - but I don't know if this is possible or how to write the code. So the factory (if this is the right way to go) could return EnumA or EnumB, both having a member named 'AboutUs' but not necessarily with the same flag value. If this is possible, what do I put here, since the enum and the enum value are not known?

If Instance.HasFlag(?????.AboutUs)

I used to do this by making my own custom flags classes - this worked well, but then I tried to change to this approach, as moving forward it is less work, and now I am stuck on how to check a flag on an enum whose type is unknown until runtime. I don't mind if the solution does not have IntelliSense, as I know that if execution gets to this code, the flag will be a possibility. Any help would be great and would help me understand .NET a little bit more.

Graham Mattingley

Oct 18, 2017 03:49 PM|mgebhard|LINK

Your question is very confusing. I believe you are not using the flag construct correctly. A flag is used for binary encoding and performing bitwise operations on a value. For example:

0000 = None
0001 = User
0010 = SuperUser
0100 = Admin

A flag value of 0110 is a SuperUser and an Admin.
To test for a SuperUser, the logic can look like this:

0110 AND 0010 equals 0010

A C# example:

[Flags]
public enum MyFlags
{
    None = 0,
    User = 1,
    SuperUser = 2,
    Admin = 4
};

static void Main(string[] args)
{
    MyFlags encodedValues = MyFlags.SuperUser | MyFlags.User;

    if (encodedValues.HasFlag(MyFlags.User))
    {
        Console.WriteLine("User");
    }
    if (encodedValues.HasFlag(MyFlags.SuperUser))
    {
        Console.WriteLine("SuperUser");
    }
    if (encodedValues.HasFlag(MyFlags.Admin))
    {
        Console.WriteLine("Admin");
    }
}

Results:

User
SuperUser

The Flags attribute is explained here. It seems your issue has to do with a naming collision, which can be fixed either by changing the name or by placing each enum in a different namespace. However, I believe you have a larger design issue. The problem you are trying to solve is not clear, so I can only explain how a flag enum works.

Oct 18, 2017 06:33 PM|Go2Greece|LINK

Hi mgebhard,

Thank you for the reply. I agree the way I wrote it is confusing, so I will just explain what I am trying to do. I have the requirement working, but it means storing strings in the DB, without enums - I was trying to avoid storing a load of strings in a DB when they are not really needed there. I have main pages in my application, and each type of main page can have different sub pages. For example, a hotel (main page) could have enabled a "booking" sub page. Each main page type can be configured with secondary sub pages.
So, for example:

Hotel A can have sub pages of AboutUs, Video, Pictures, Booking.
Hotel B can have sub pages of AboutUs, Video, Pictures, Reviews, Terms.

Both of the above are contained in the same enum (the page enum for hotels). Now let's say I have another page type, Car Hire; it has sub pages of AboutUs, Cars, Insurance.

This car page has a different flags class to the hotel, but shares some common elements - for example, AboutUs. So for each business, the plan was to store a UInt32 to represent the enabled URLs.

This is the problem: when I get to the AboutUs page code, I need to check that the page is enabled for the particular business. The AboutUs page exists in both business types, so the AboutUs page will receive different enums, where I was planning to do:

EnumObj.HasFlag(xxxx.AboutUs)

This was just to check that the business had enabled the page, and to return a 404 if the page flag is not found. The problem is I can't do this, as EnumObj is of an unknown type (until runtime). I wanted to use binary flags so I can store the result in a single Int.

Kind Regards,
Graham

Oct 18, 2017 08:00 PM|mgebhard|LINK

As I understand it, the main issue is the namespace. You have Hotel.AboutUs and CarHire.AboutUs and need a way to distinguish between the two AboutUs types. That is what a namespace is for. I imagine you'll need to update the table that holds the enum values to include the namespace, so you know which one you are using. Other than that, I can't wrap my head around the programming problem you are trying to solve.
Oct 18, 2017 09:06 PM|Go2Greece|LINK

Hi mgebhard,

Sorry for my poor explanations. I have tried to write code to show what I want to do, but the code does not work (I did not expect it to; it is just an indication of the idea).

I get two values from the DB: one is the PageId, and one is the saved value of the enum's selected flags. The value of SubSetOfThings works if I know what the enum is, but I don't (though I can work it out from the PageTypeId - so to demonstrate this I made a Select Case function).

So I want to make a list of the name strings of an unknown (until runtime) enum, where the list consists only of the values whose flags were set to true when saved.

Dim SavedFlagValueFromDB = 31
Dim PageIdFromDB = 2

Dim SubSetOfThings As GetEnum(1) = SavedFlagValueFromDB
Dim ListOfFlagedvalues = New List(Of GetEnum(1))() From {SubSetOfThings}

Function GetEnum(PageTypeId) As Type
    Select Case PageTypeId
        Case 1 : Return GetType(Animals)
        Case 2 : Return GetType(Vehicles)
    End Select
End Function

Then I can go ListOfFlagedvalues.Contains("aboutus") - meaning I can detect, for any given enum, whether it has a flag called "aboutus" that was set when it was saved.

I know the code is complete rubbish; I just wanted to try to get across the idea. Thank you very much for your help. Sorry I can't explain things very well; it is just a case of wanting to store a UInt32 in the DB instead of a long list of CSV strings.

G

Oct 19, 2017 06:52 AM|Billy Liu|LINK

Hi Go2Greece,

Do you mean you have different enums for different kinds of main page, and you want to get the right enum for the main page? I think you could set flags for your main pages in a new enum, for example 0000 as the hotel, 0001 as the car. If the flag is 0000, then use the enum for the hotel.

Best Regards,
Billy

Oct 19, 2017 06:56 AM|Go2Greece|LINK

Hi mgebhard,

Thank you for your reply. Here goes.....
On the web project I am working on, there is no way of using URI routing, as most of the URIs are selected by the users, except for a subset of links for each business profile type. The project will display information on over 100 types of businesses - hotels, car rentals, events, accommodation, etc. For example: {businessname}/aboutus/

In the code, I first check that the name of the last part of the URI (aboutus) is valid, then I check if it has been configured/selected by the business. If it has been selected, I go to the view and display the info; if it has not, I return a 404.

As it works at the moment, I store strings in the DB like this - /aboutus/, /terms/, /videos/, /pictures/, /contactus/ - per business profile, of which there are around 30,000. Each possible type of business has its own flags-based enum, with on average 20 entries, with the values 2^0 - 2^20, with the Flags attribute, and these are configured fine. I store the total value of the selected enum entries in a common DB table shared by all business profiles - so it will have something like 32767 as a value.

Using the value, I want to be able to detect which items have been flagged in the enum, but I don't know what the enum is, as it can't be determined until runtime. So, at a high level, this is what I would like to be able to do. I did get this working with classes: I made simple classes that work out the flags and set Booleans. This worked fine, because I can get the right class using a method that returns it based on the typeId - like a flags class factory (not the right description). Then I discovered this was possible with enums, but the problem, it seems to me, is that you can't use an enum (HasFlag) or any part of the enum if it is not specifically coded by name.

G

Oct 19, 2017 08:41 AM|Billy Liu|LINK

Hi Go2Greece,

I think you could also check if the flag was set for the enum in your Select Case.
For example:

Function GetEnum(PageTypeId As Integer, SavedFlagValueFromDB As Integer, Subpage As String) As Boolean
    Select Case PageTypeId
        Case 1
            Dim SubSetOfThings As Hotel = SavedFlagValueFromDB
            If SubSetOfThings.ToString().Contains(Subpage) Then
                Return True
            Else
                Return False
            End If
        Case 2
            Dim SubSetOfThings As Animal = SavedFlagValueFromDB
            If SubSetOfThings.ToString().Contains(Subpage) Then
                Return True
            Else
                Return False
            End If
    End Select
    Return False
End Function

Best Regards,
Billy

Oct 19, 2017 06:06 PM|Go2Greece|LINK

Hi Billy Liu,

Thank you so much for solving the coding problem I had. I could never have thought of doing it this way. I am going to look at the documentation for the code you have written and make sure I fully understand it. For my project, it is a massive step forward. Thank you very much; this is indeed the perfect solution.

Kind Regards,
Graham

9 replies
Last post Oct 19, 2017 06:06 PM by Go2Greece
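The accepted pattern (resolve the enum type from the page-type id at runtime, then test whether the named sub page's flag is set in the saved integer) translates to other languages as well. Here is a Python analogue using enum.Flag; the Hotel/CarHire enums, member names, and values are purely illustrative, not taken from the thread's actual application.

```python
from enum import Flag, auto

# Hypothetical per-business-type flag enums; members share the name ABOUT_US
# but not necessarily the same bit value.
class Hotel(Flag):
    ABOUT_US = auto()
    VIDEO = auto()
    PICTURES = auto()
    BOOKING = auto()

class CarHire(Flag):
    ABOUT_US = auto()
    CARS = auto()
    INSURANCE = auto()

PAGE_TYPES = {1: Hotel, 2: CarHire}

def subpage_enabled(page_type_id, saved_flag_value, subpage_name):
    """Resolve the right enum at runtime, then test the named flag."""
    enum_cls = PAGE_TYPES[page_type_id]
    flag = enum_cls.__members__.get(subpage_name)
    if flag is None:
        return False   # this business type has no such sub page: 404
    return bool(enum_cls(saved_flag_value) & flag)

saved = (Hotel.ABOUT_US | Hotel.BOOKING).value   # what the DB would hold: 9
```

Looking the member up by name through `__members__` plays the role of the string Contains() check in the VB.NET answer, while the bitwise AND is the HasFlag test itself.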
https://forums.asp.net/t/2130505.aspx?HasFlag+in+Unknown+Enum
Here is a listing of online C++ test questions on "Constants", along with answers, explanations and/or solutions:

1. The constants are also called as
a) const
b) preprocessor
c) literals
d) none of the mentioned
View Answer
Explanation: None.

2. What are the parts of the literal constants?
a) integer numerals
b) floating-point numerals
c) strings and boolean values
d) all of the mentioned
View Answer
Explanation: Because these are the types used to declare variables, and so these can be declared as constants.

3. How are the constants declared?
a) const keyword
b) #define preprocessor
c) both a and b
d) none of the mentioned
View Answer
Explanation: const declares a constant with a specific type, while #define is used to declare user-defined constants.

4. What is the output of this program?

#include <iostream>
using namespace std;
int main()
{
    int const p = 5;
    cout << ++p;
    return 0;
}

a) 5
b) 6
c) Error
d) None of the mentioned
View Answer
Explanation: We cannot modify a constant integer value.

5. What is the output of this program?

#include <iostream>
using namespace std;
#define PI 3.14159
int main ()
{
    float r = 2;
    float circle;
    circle = 2 * PI * r;
    cout << circle;
    return 0;
}

a) 12.5664
b) 13.5664
c) 10
d) compile time error
View Answer
Explanation: In this program, we are finding the circumference of the circle using the formula 2 * PI * r.

Output:
$ g++ cons.cpp
$ a.out
12.5664

6. Which of the following statements is not true about preprocessor directives?
a) These are lines read and processed by the preprocessor
b) They do not produce any code by themselves
c) These must be written on their own line
d) They end with a semicolon
View Answer
Explanation: None.

7. Regarding the following statement, which of the statements is true?

const int a = 100;

a) Declares a variable a with 100 as its initial value
b) Declares a construction a with 100 as its initial value
c) Declares a constant a whose value will be 100
d) Constructs an integer type variable with a as identifier and 100 as value
View Answer
Explanation: Because const is used to declare non-modifiable values only.

8. What is the difference between 'x' and "x"?
a) The first one refers to a variable whose identifier is x and the second one refers to the character constant x
b) The first one is a character constant x and the second one is the string literal x
c) Both are same
d) None of the mentioned
View Answer
Explanation: None.

9. How to declare a wide character in a string literal?
a) L prefix
b) l prefix
c) W prefix
d) none of the mentioned
View Answer
Explanation: The L prefix turns the literal into wide characters instead of narrow characters.

Sanfoundry Global Education & Learning Series – C++ Programming Language. Here's the list of Best Reference Books in C++ Programming Language. To practice all features of C++ programming language, here is complete set on 1000+ Multiple Choice Questions and Answers on C++.
http://www.sanfoundry.com/online-c-plus-plus-test-constants/
A version of vtkArchiver that can be implemented in Python.

More...

#include <vtkPythonArchiver.h>

A version of vtkArchiver that can be implemented in Python. vtkPythonArchiver is an implementation of vtkArchiver that calls a Python object to do the actual work. It defers the following methods to Python:

Python signature of these methods is as follows:

Definition at line 48 of file vtkPythonArchiver.h.

Definition at line 52 of file vtkPythonArchiver.h.

Return 1 if this class is the same type of (or a subclass of) the named class. Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h. Reimplemented from vtkArchiver.

Reimplemented from vtkArchiver.

Methods invoked by print to print information about the object, including superclasses. Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes. Reimplemented from vtkArchiver.

Specify the Python object to use to perform the archiving. A reference will be taken on the object.

Open the archive for writing. Reimplemented from vtkArchiver.

Close the archive. Reimplemented from vtkArchiver.

Insert data of size `size` into the archive at relativePath. Reimplemented from vtkArchiver.

Checks if relativePath represents an entry in the archive. Reimplemented from vtkArchiver.
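Since this page lists the deferred operations (open, close, insert, contains) but the exact Python signatures are not reproduced above, here is a deliberately hedged sketch of what a Python-side archiver object could look like, using an in-memory dict in place of a real archive. The method signatures are assumptions for illustration, not VTK's documented API.

```python
# Illustrative sketch only: method names follow the operations listed above,
# but the exact signatures VTK expects are assumptions, not documented here.
class DictArchiver:
    """A toy archiver that stores entries in a dict instead of on disk."""

    def __init__(self):
        self._entries = None

    def OpenArchive(self):
        # Open the archive for writing.
        self._entries = {}

    def CloseArchive(self):
        # Close the archive.
        self._entries = None

    def InsertIntoArchive(self, relative_path, data, size):
        # Insert `size` bytes of data at relative_path.
        self._entries[relative_path] = data[:size]

    def Contains(self, relative_path):
        # Check if relative_path represents an entry in the archive.
        return relative_path in self._entries

archiver = DictArchiver()
archiver.OpenArchive()
archiver.InsertIntoArchive("mesh/points.bin", b"\x00\x01\x02\x03", 4)
found = archiver.Contains("mesh/points.bin")
```

The point of the sketch is the duck-typed protocol: any object exposing these four operations could, in principle, stand behind the C++ wrapper.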
https://vtk.org/doc/nightly/html/classvtkPythonArchiver.html
The data encryption and security features included with .NET appear in the System.Security.Cryptography namespace. Most of the classes in this namespace implement various well-known encryption algorithms that have been accepted by organizations and governments as dependable encryption standards. For instance, the DESCryptoServiceProvider class provides features based on the Data Encryption Standard (DES) algorithm, an algorithm originally developed by IBM in the mid-1970s.

Symmetric cryptography uses a single secret key to both encrypt and decrypt a block of data. Although these algorithms are often quite fast (when compared to asymmetric cryptography), the need to share the full secret key with others in order to share data may make them inherently less secure. Still, for many applications, "secret key encryption" is sufficient. The .NET Framework includes support for four symmetric encryption algorithms.

- Data Encryption Standard (DES), a 56-bit block cipher with primary support through the DESCryptoServiceProvider class. This algorithm is generally secure, but due to its small key size (smaller keys are more easily compromised), it is inappropriate for highly sensitive data.

- RC2 (Rivest Cipher number 2), a 56-bit block cipher with primary support through the RC2CryptoServiceProvider class. The cipher was originally developed by Lotus for use in its Lotus Notes product. It is not excitingly secure, but for this reason, it was given more favorable export freedoms by the United States government.

- Rijndael (derived from the names of its two designers, Daemen and Rijmen), a variable-bit (between 128 and 256 bits) block cipher with primary support through the RijndaelManaged class. It is related to a similar algorithm named Advanced Encryption Standard (AES), and is the most secure of the secret key algorithms provided with .NET.
- Triple DES, a block cipher that uses the underlying DES algorithm three times to generate a more secure result, with primary support through the TripleDESCryptoServiceProvider class. Although more secure than plain DES, it is still much more vulnerable than the Rijndael or AES standard.

The various "provider" classes are tools that must be used together with other cryptography classes to work properly. For instance, this sample code (based on code found in the MSDN documentation) uses the DESCryptoServiceProvider and CryptoStream classes, both members of System.Security.Cryptography, to jointly encrypt and decrypt a block of text.

Imports System
Imports System.IO
Imports System.Text
Imports System.Security.Cryptography

Class CryptoMemoryStream
    Public Shared Sub Main()
        ' ----- Encrypt then decrypt some text.
        Dim key As New DESCryptoServiceProvider
        Dim encryptedVersion() As Byte
        Dim decryptedVersion As String

        ' ----- First, encrypt some text.
        encryptedVersion = Encrypt("This is a secret.", key)

        ' ----- Then, decrypt it to get the original.
        decryptedVersion = Decrypt(encryptedVersion, key)
    End Sub

    Public Shared Function Encrypt(origText As String, _
            key As SymmetricAlgorithm) As Byte()
        ' ----- Uses a cryptographic memory stream and a
        '       secret key provider (DES in this case)
        '       to encrypt some text.
        Dim baseStream As New MemoryStream
        Dim secretStream As CryptoStream
        Dim streamOut As StreamWriter
        Dim encryptedText() As Byte

        ' ----- A memory stream just shuffles data from
        '       end to end. Adding a CryptoStream to it
        '       will encrypt the data as it moves through
        '       the stream.
        secretStream = New CryptoStream(baseStream, _
            key.CreateEncryptor(), CryptoStreamMode.Write)
        streamOut = New StreamWriter(secretStream)
        streamOut.WriteLine(origText)
        streamOut.Close()
        secretStream.Close()

        ' ----- Move the encrypted content into a useful
        '       byte array.
        encryptedText = baseStream.ToArray()
        baseStream.Close()
        Return encryptedText
    End Function

    Public Shared Function Decrypt(encryptedText() As Byte, _
            key As SymmetricAlgorithm) As String
        ' ----- Clearly, this is the opposite of the
        '       Encrypt() function, using a stream reader
        '       instead of a writer, and the key's
        '       "decryptor" instead of its "encryptor."
        Dim baseStream As MemoryStream
        Dim secretStream As CryptoStream
        Dim streamIn As StreamReader
        Dim origText As String

        ' ----- Build a stream that automatically decrypts
        '       as data is passed through it.
        baseStream = New MemoryStream(encryptedText)
        secretStream = New CryptoStream(baseStream, _
            key.CreateDecryptor(), CryptoStreamMode.Read)
        streamIn = New StreamReader(secretStream)

        ' ----- Move the decrypted content back to a string.
        origText = streamIn.ReadLine()
        streamIn.Close()
        secretStream.Close()
        baseStream.Close()
        Return origText
    End Function
End Class

This code combines a DES encryption class with a Stream, a common tool in .NET applications for transferring data from one state or location to another. (Streams are a primary method used to read and write files.) Streams are not too hard to use, but the code still seems a little convoluted. Why doesn't the DESCryptoServiceProvider class simply include Encrypt and Decrypt methods? That's my question, at least. I'm sure it has something to do with keeping the class generic for use in many data environments. Still, as chunky as this code is, it's sure a lot easier than writing the encryption code myself. And it's general enough that I could swap in one of the other secret key algorithms without very much change in the code.

In secret key cryptography, you can use any old key you wish to support the encryption and decryption process. As long as you keep it a secret, the content of the key itself isn't really too important. The same cannot be said, though, of asymmetric (public key) cryptography.
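The defining symmetric property (the same secret key both encrypts and decrypts) can also be shown compactly in Python with a toy XOR cipher. To be clear, XOR with a repeating key is not secure; it stands in here only because Python's standard library ships no DES or AES primitive, and the function name is illustrative.

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform: XOR each byte with a repeating key.

    NOT secure; it only illustrates that a single shared key reverses itself.
    """
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"secret"
ciphertext = xor_cipher(b"This is a secret.", key)
plaintext = xor_cipher(ciphertext, key)   # applying the same key decrypts
```

The round trip (encrypt, then decrypt with the identical key) is exactly what the VB.NET Encrypt/Decrypt pair above performs with DES.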
Because separate keys are used to encrypt and decrypt the data, specific private and public keys must be crafted specifically as a pair. You can't just select random public and private keys and hope that they work together. The components used to support asymmetric cryptography include "generators" that emit public and private key pairs. Once generated, these keys can be used in your code to mask sensitive data. And due to the large key size, it's very difficult for anyone to hack into your encrypted data.

Public key encryption is notoriously slow; it takes forever and a day to encode large amounts of data using the source key. This is one of the reasons that the Founding Fathers didn't use public key encryption on the Declaration of Independence. Because of the sluggish performance of asymmetric encryption, many secure data systems use a combination of public-key and secret-key encryption to protect data. The initial authorization occurs with public-key processes, but once the secure channel opens, the data passed between the systems gets encrypted using faster secret-key methods.

.NET includes two public key cryptography classes for your encrypting and decrypting pleasure.

- Digital Signature Algorithm (DSA), an algorithm designed by the United States government for use in digital signatures, with primary support through the DSACryptoServiceProvider class.

- The RSA algorithm (named after its founders: Ron Rivest, Adi Shamir, and Len Adleman), an older though generally secure asymmetric algorithm, with primary support through the RSACryptoServiceProvider class.

I won't be using asymmetric encryption in the Library Project. While the code needed to use these providers is interesting, and while the background information on prime number generation and large number factorization is fascinating, such discussions are beyond the scope of this book.
Although hashing algorithms do not give you the ability to encrypt and decrypt data at will, they are useful in supporting systems that secure and verify data content. In fact, hashing is the one cryptography component that we will directly code in the Library Project, so stay alert.

Coming up with a hashing algorithm is easy. It took the best minds of the National Security Agency and the Massachusetts Institute of Technology to come up with reliable secret-key and public-key encryption systems, but you can develop a hashing algorithm in just a few minutes. A few years ago, I wrote my own hashing algorithm that I used for years in business applications. That fact alone should prove how simple and basic they can be. Here's a hashing algorithm I just made up while I was sitting here.

Public Function HashSomeText(ByVal origText As String) As Long
    ' ----- Create a hash value from some data.
    Dim hashValue As Long = 0&
    Dim counter As Long

    For counter = 1 To Len(origText)
        hashValue += Asc(Mid(origText, counter, 1))
        If (hashValue > (Long.MaxValue * 0.9)) Then _
            hashValue /= 2
    Next counter
    Return hashValue
End Function

In the code, I just add up the ASCII values of each character in the text string, and return the result. I do a check in the loop to make sure I don't exceed 90% of the maximum Long value; I don't want to overflow the hashValue variable and generate an error. Although HashSomeText does generate a hashed representation of the input data, it also has some deficiencies.

- It's pretty easy to guess from the hash value whether the incoming content was short or long. Shorter content will usually generate small numbers, and larger output values tend to indicate longer input content.

- It's not very sensitive to some types of content changes. For instance, if you rearrange several characters in the content, it probably won't impact the hash value.
Changing a character will impact the value, but if you change one character from "A" to "B" and another nearby letter from "T" to "S," the hash value will remain unchanged.

- The shorter the content, the greater the chance that two inputs will generate the same hash value.

Perhaps you want something a little more robust. If so, .NET includes several hashing tools.

- Hash-based Message Authentication Code (HMAC) calculated using the Secure Hash Algorithm number 1 (SHA-1) hash function, made available through the HMACSHA1 class. It uses a 160-bit hash code. There are no specific restrictions on the length of the secret key used in the calculation. While suitable for low-risk situations, the SHA-1 algorithm is susceptible to attack.

- Message Authentication Code (MAC) calculated using the Triple-DES secret key algorithm (described earlier), made available through the MACTripleDES class. The secret key used in the calculation is either 16 or 24 bytes long, and the generated value is 8 bytes in length.

- Message-Digest algorithm number 5 (MD5) hash calculation, made available through the MD5CryptoServiceProvider class. MD5 is yet another super-secret algorithm designed by Ron Rivest (that guy is amazing), but it has been shown to contain some flaws that could make it an encoding security risk. The resulting hash value is 128 bits long.

- Like the HMACSHA1 class, the SHA1Managed class computes a hash value using the SHA-1 hash function. However, it is written using .NET managed code only. (HMACSHA1 and some of the other cryptographic features in .NET are simply wrappers around the older Cryptography API (CAPI), a pre-.NET DLL library.) SHA1Managed uses a 160-bit hash code.

- Three other classes, SHA256Managed, SHA384Managed, and SHA512Managed, are similar to the SHA1Managed class, but use 256-bit, 384-bit, and 512-bit hash codes, respectively.

The keyed algorithms among these (the HMAC and MAC variants) use a secret key that must be included each time the hash is generated against the same set of input data.
As long as the input data is unchanged, and the secret key is the same, the resulting hash value will also remain unchanged. By design, even the smallest change in the input data generates major changes in the output hash value.
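The weaknesses described above are easy to demonstrate. The sketch below (Python used here purely for illustration; the chapter's own code is VB.NET) ports the additive HashSomeText routine and shows two inputs colliding, then contrasts it with SHA-1, the algorithm behind the HMACSHA1 and SHA1Managed classes, where the same kind of change flips the digest completely:

```python
import hashlib

LONG_MAX = 2**63 - 1  # equivalent of VB's Long.MaxValue

def hash_some_text(orig_text: str) -> int:
    """Port of the chapter's additive HashSomeText routine.
    (Integer halving is used where VB would halve as a floating
    division; the difference doesn't matter for this demo.)"""
    hash_value = 0
    for ch in orig_text:
        hash_value += ord(ch)
        # keep clear of overflow, as in the VB version
        if hash_value > LONG_MAX * 0.9:
            hash_value //= 2
    return hash_value

# Rearranging characters never changes an additive hash...
print(hash_some_text("listen") == hash_some_text("silent"))   # True
# ...and offsetting changes cancel out ("AT" -> "BS").
print(hash_some_text("AT") == hash_some_text("BS"))           # True

# A real hash function reacts to any change in the input.
print(hashlib.sha1(b"listen").digest() == hashlib.sha1(b"silent").digest())  # False
```

The collisions fall straight out of the arithmetic: a sum neither remembers character order nor distinguishes changes that cancel each other.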
The unique charm of pyopt is more apparent when you see how others currently do command-line option parsing.

Pyopt: You write a regular function in Python with a docstring, and you magically expose that function to the command line.

argparse: You define a parser, its options, help strings, etc., and then you call the parser's main, which returns a data structure. You analyze the data structure, and you call whichever function you wanted to run with the relevant parameters as you arrange them. Sounds long? Compared to pyopt it is, though it is the most flexible and powerful route. Many more things are possible with argparse, though for most use cases this isn't relevant.

Plac: Plac has more abilities but also a more condensed and complicated syntax. It seems less magical. For example, it uses annotations for a lot of things; your function signature can easily become overwhelmed with settings.

E.g., in plac:

# plac example8_.py
def main(dsn, command: ("SQL query", 'option')='select * from table'):
    print('executing %r on %s' % (command, dsn))

if __name__ == '__main__':
    import plac; plac.call(main)

In pyopt:

import pyopt

expose = pyopt.Exposer()

@expose.mixed
def main(dsn, command='select * from table'):
    '''command - SQL query'''
    print('executing %r on %s' % (command, dsn))

if __name__ == '__main__':
    expose.run()
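For comparison, here is roughly what the same tiny CLI looks like with argparse alone. This is a hypothetical sketch, not code from the pyopt wiki: you build the parser yourself, then wire the resulting namespace to the function by hand, which is exactly the extra step pyopt removes.

```python
import argparse

def build_parser():
    # Everything pyopt infers from the signature and docstring
    # is declared explicitly here.
    parser = argparse.ArgumentParser(description='Run a query against a DSN.')
    parser.add_argument('dsn')
    parser.add_argument('-c', '--command', default='select * from table',
                        help='SQL query')
    return parser

def main(dsn, command='select * from table'):
    return 'executing %r on %s' % (command, dsn)

# You analyze the parsed namespace yourself and dispatch manually:
args = build_parser().parse_args(['mydb', '-c', 'select 1'])
print(main(args.dsn, args.command))
```

The boilerplate is modest for one function, but it grows with every option, which is the comparison the page is making.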
Created attachment 266827 [details] Testcase, dotted circle is rendered as solid.

QUOTE(Vladimir Vukicevic, 2007-05-27 21:16:10 PDT, Bug 368247 comment #30): "-moz-border-radius: 100% will not result in a dotted/dashed circle, because the entire radius is considered a "corner". The latter (well, both) could be fixed at some later time if someone's bored." This bug is looking for "bored" people who could fix that. Testcase is attached.

This is not a regression; Firefox 2 ignores dotted/dashed on borders with border-radius. Changing summary because this is not a regression. Note however that the first approach of bug 368247 would have fixed this.

I've had both bugs in my votes for a while, and I've just noticed this now... but isn't this bug a dupe of bug 13944?

Bug 13944 is essentially fixed now - it was about -moz-border-radius completely disabling dotted/dashed borders. This bug is about only the corners being solid.

Created attachment 369916 [details] Testcase with round borders for all possible border styles.

*** Bug 540276 has been marked as a duplicate of this bug. ***

This works on the new IE9 preview. Does it count as an "IE parity" blocker? It also works on Opera 10.5, and does not work in Safari 4 and Chrome 4.

It should, in my opinion, be marked as a parity blocker, yes. Also, what are the actual issue(s) that makes fixing this hard?

(In reply to comment #8) > what are the actual issue(s) that makes fixing this hard? We need an algorithm to place the dots and dashes around the curve, and smoothly scale their sizes, and so on. We don't have one. Several people have tried and failed to come up with one. I drew up a math problem to model this, here: I posted it to a bunch of my friends, and one of them came up with a Python script and some comments.
I haven't looked it over, but here's his comment:

====== Message from Michael O'Kelly ======
See pretty pictures:
Code:
First finds a chain of circles assuming m=0, then uses the resulting overlap to calculate an optimal m. Inner loops mostly use rapidly-converging searches, along the lines of Victor Shnayder's proposal. They could be replaced with algebraic solutions (e.g. Jon Snitow's) for dramatic speedup. This code doesn't make any assumptions about the input curves--they could be ellipses or anything sufficiently smooth.
====== End Message ======

The key thing to note is that the centers of the circles don't lie on an ellipse: I believe James Socol has some of the actual math worked out, but he says it's very messy and doesn't have a full solution. Maybe that helps, maybe not, but I thought I'd post it here. :)

I wrote some tests and in all web engines the result is a little strange:

(In reply to comment #11) > I wrote some tests and in all web engines the result is a little strange: > That doesn't surprise me. Would you be willing to draw, with SVG or whatever, suggested renderings of each of those test boxes? That would be quite helpful. Be aware of bug 19963, which will also cause poor rendering of these examples - probably it should be a dependency, come to think.

SVG like this?: I drew this (box 1, 2, 5 & 6) rapidly... for example.

Just for the record, WebKit is currently working on this problem. See and

Is this gonna be fixed in Firefox 4? Chrome, Safari, Opera and IE9 somehow managed to fix this issue.

Bumping this one as it's still a problem in FF6.

Benjamin, John, please do not post comments in bugs unless they are actively contributing towards fixing the bug. It clutters the bug report and unnecessarily spams everyone on the CC list. See

This problem still persists in Mac version 16.0.1; dashed borders render as solid when using -moz-border-radius.
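The chain-of-circles idea above is easy to restate for the simplest case, a circular corner: place dot centers along the arc at equal angular steps, choosing the step so the chain begins and ends exactly on a dot. The sketch below is my own reconstruction for circles only (the lost script handled arbitrary smooth curves and computed an optimal overlap m, which this does not attempt):

```python
import math

def dot_centers(radius, border_width, start=0.0, end=math.pi / 2):
    """Place dot centers along a circular arc so that dots sit
    exactly at both ends, adjusting the spacing in between."""
    arc_len = (end - start) * radius
    # nominal spacing: one dot diameter of gap between dot centers
    nominal = 2.0 * border_width
    n = max(1, round(arc_len / nominal))   # number of gaps on the arc
    step = (end - start) / n               # actual angular step
    return [(radius * math.cos(start + i * step),
             radius * math.sin(start + i * step))
            for i in range(n + 1)]

# a quarter-circle corner of radius 50 with a 5px border:
centers = dot_centers(radius=50.0, border_width=5.0)
# the first and last dots land exactly on the arc's endpoints
```

The hard part the thread is wrestling with is everything this sketch ignores: elliptical corners, unequal border widths, and smoothly joining the corner dots to the straight sides.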
Created attachment 673358 [details] Testcase with round borders for all valid (CSS 2.1) border styles

Support for -moz-border-radius has been removed starting with Firefox 13. Updating the testcase and hiding invalid ones. Sorry for the spam.

This still occurs for input[type=text]: border-radius with a dotted border renders solid around the corners. Persists in Win7 version 16.0.2.

Still present in 20.0a2 (2013-02-14) on Ubuntu.

Still present in ver. 21.0 on OSX 10.8.3.

Still present on 22 (Win7, Ubuntu and OSX). (you can see it live here - select the last Coggy option) In my opinion, this bug should be a priority. It's 5 years old, and is a very basic feature. A lot of websites are using dotted lines, and they look pretty bad on Firefox.

Oh noes, my site doesn't render well on Firefox: border-radius killed my dashed borders! :) Previous comment information: OS: Windows 7 64bits; Firefox: Aurora 24.0a2 (2013-08-01)

Still here on version 23 :(

*** Bug 947833 has been marked as a duplicate of this bug. ***

Here is a reproduction of some of the issues with this bug:

Created attachment 8386965 [details] shows border-radius bug with radius 50%

When the border-radius is 50%, making a circle, the border is rendered as solid.

When is it going to be resolved? Such an annoying visual bug.

(In reply to fantasai from comment #10) > This first link is still active, but unfortunately > ====== Message from Michael O'Kelly ====== > > > > > are all gone now. Did anyone make copies to attach here to this bug, especially of the Python code? > I believe James Socol has some of the actual math worked out, but he says > it's very messy and doesn't have a full solution. It might be helpful to get hold of that draft as well, even if not fully developed. Someone may have an idea where to go from here. But then, all of this was in 2010 and may be lost by now. :-\

Max Kanat-Alexander from Bugzilla said: "The Secret To Success is to Suck Less." His exact words.
Well, at this pace, as important bugs like this are not fixed, Firefox users just leave and go to Chrome. I mean, it's a 6-year-old bug!!! Source:

Patt, this wasn't the kind of feedback I wanted to solicit with my comment #33, and it's not really helpful towards solving this issue. There are plenty of old (and older) bugs here. The point is that someone will have to come up with an approach to smoothly connect straight and curved parts of the border, either conceptually or in some pseudo or scripted code, and eventually as a patch.

Why not take a look at how Google solved it in Chromium? No point in reinventing the wheel.

*** Bug 1012868 has been marked as a duplicate of this bug. ***

WRT comment #36 - Elbart, I'd just as soon not do Chrome's rendering 'cause it is kinda unattractive. Compare in IE9/10/11 vs Chrome. Has a notch cut out of it. I think IE is intelligently adjusting the dash size or something.

(In reply to rsx11m from comment #35) > The point is that someone will have to come up > with an approach to smoothly connect straight and curved parts of the > border, either conceptually or in some pseudo or scripted code, and > eventually as a patch. Any support is better than none. Do we have to aim at a "smooth" solution? See the Chrome example. It's not perfect, but it is usable.

What don't you like about chrome's rendering? is marginal. In IE, dashed borders in real-life cases (like text input borders) seem more like dotted.

Well, what I clearly don't like about chrome's rendering is that it causes gaps. The circles is one example, but I don't see that as "marginal". IE's is maybe not ideal because they might allow the dashes to become too small, but that doesn't seem intrinsic to avoiding gaps. That said, if it isn't convenient to do something similar to IE's approach and copying the code is more convenient, by all means do chrome's. It was just an opinion that IE's is more attractive and handles more cases nicely.

On it was suggested that I post my solution proposal here.
As I don't know how to post a document here, I try to copy the entire text here, hoping that the layout will not be disturbed too much:

Suggestions for solving the dotted curves problem

Unlike Chrome, Internet Explorer, Opera and Safari, Firefox v29 still renders dotted curves (e.g. rounded corners of boxes) as solid lines. I complained about this at. However, they suggested that I post my ideas about a possible solution here.

1. The problem

When a box is given a dotted border, the straight border sections are correctly dotted. However, when the corners are given a radius, the curved parts are rendered solid instead. This is said to be due to a lack of a good rendering algorithm for dotted curves. I don't have an algorithm either, but I do have ideas for how to possibly find a solution.

2. Suggestions for solutions

a. For the time being, I would rather not bother too much about accuracy. Anything is better than the current situation. Also, a box is not a stand-alone item; it will always be part of a page that contains more information, which will distract much of the reader's attention from tiny inaccuracies - but obviously not from the big ones we have now. At a later stage, the solution might be further improved, if need be.

b. The exact dot size is more important than the exact dot spacing. The latter may be varied slightly in order to make 'the ends meet' correctly.

c. There is no need for the straight sections to begin and end exactly at the centre of a dot! If they did, a box with four curved corners would need exact 'fitting' at eight positions - two for each curved corner. This is not necessary. It would be quite acceptable if all dots flow continuously along the border until the ends meet at one point.

3. Suggestions for algorithms

I would describe the lines and curves in terms of analytical geometry. Dots will be placed on 'carrier paths', which may be any desired shape, and will normally be invisible.
When one dot has been positioned, the next dot position can be calculated by moving along the carrier line. Depending on the situation, it may be on the same carrier path section, or on the next one. This is repeated until either the line ends, or the starting point is reached. (I don't know if curves crossing themselves - e.g. a lemniscate - are possible. If so, then be sure that the crossing point will be passed twice before the process is considered complete. If need be, fine-tuning - having only one dot at the crossing point - can be solved at a later stage.)

For a curved carrier path section, there are two ways to calculate the next dot position:

a. Along the curve itself: walk along this circle either over an angle of rotation equal to the spacing needed divided by the border radius (if the curve is a circle), or in small steps until the required spacing is reached.

b. Along the secant line to the next point: draw a circle, centred at the current dot position, and with a radius equal to the spacing needed. The next point is at the intersection of this circle and the carrier path.

Method a may result in a visually slightly smaller dot spacing, as the (invisible) actual carrying line is circular, whereas in method b it is straight. I don't expect this to be a problem; otherwise it may be simply compensated by a tiny increase in the dot spacing in a later version. At this moment, a basic solution is urgently needed; if need be, fine-tuning may be achieved in a later version.

4. Fine-tuning the result

First, calculate the actual circumference length, without actually placing the dots. Then divide this circumference by the sum of the required dot size and dot spacing. If the result is not an integer number, then round it to the nearest integer. This rounded result is the actual number of dots needed. The circumference divided by this number of dots produces the actual sum of dot size and dot spacing needed.
The dots can then be placed as described in section 3 above, using this modified 'dot frequency' (= dot size plus dot spacing). If this fine-tuning is a problem (which I can hardly believe), it might be delayed to a later version.

It is too bad that for more than a decade or so this problem has persisted in a browser that has a reputation of being the best W3C-compliant browser available. This problem should be resolved at least basically as soon as possible. H. Hahn

Can you provide a patch (see for instructions), or failing that, provide an implementation of the algorithm that is as close to being integrated into the source as possible? You may be able to find the current code at dxr.mozilla.org or mxr.mozilla.org. Thanks!

I'm afraid I cannot. I never developed browsers or anything alike. It would take far too much time to dig up how it currently works or to describe it in more detail so that you could use it as a basis for further work on it. If there are details in my text that are not clear, then you may of course ask me for further explanation. I would rather not dive into coding issues myself. I just described what I think is a logical approach to the problem. After all, if all the other browsers solved the problem, it cannot be that complicated, can it? Anyway, I would emphasize again that fine-tuning is not the most urgent problem. A basic solution would do for the time being.

The testcases in attachment 673358 [details] are very simple and the most common cases. While fixing this bug, we should keep an eye on more complex use cases, like different radii and styles in one box, and maybe fix them later in a further step.

@sjw@gmx.ch: Different radii should not be more complicated. Just "follow the curve", whatever the radius at the current section. It does not even need to be circular. Start with a temporary solid line; its centreline will be the carrier path. When all dots are positioned, the centreline may be removed or made invisible.
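Sections 3(b) and 4 of the proposal above can be sketched together for the simplest case, a circular carrier path. This is my own reconstruction of the described procedure, with hypothetical names: first round the dot count to an integer so the ends meet (section 4), then step from dot to dot along the secant, using the fact that a chord of length s on a circle of radius R subtends an angle of 2*asin(s/(2R)).

```python
import math

def adjusted_frequency(circumference, dot_size, dot_spacing):
    """Section 4: round the dot count to an integer so the ends
    meet, and spread the difference over the whole border."""
    count = max(1, round(circumference / (dot_size + dot_spacing)))
    return circumference / count

def dot_angles(radius, spacing):
    """Section 3b on a full circle: step via secant lines; a chord
    of length `spacing` subtends an angle 2*asin(spacing/(2*radius))."""
    step = 2.0 * math.asin(spacing / (2.0 * radius))
    count = int((2.0 * math.pi) // step)
    return [i * step for i in range(count)]

# a circular border of radius 50 with 5px dots and 5px gaps:
circumference = 2.0 * math.pi * 50.0                # about 314.16
freq = adjusted_frequency(circumference, 5.0, 5.0)  # about 10.13 per dot
angles = dot_angles(50.0, freq)                     # one angle per dot
```

Note the small drift the proposal itself predicts for method b: the chord angle is slightly larger than the arc angle, so a secant walk covers the circle in marginally fewer steps than the arc-length count suggests, which is exactly the kind of residue the "ends meet" fine-tuning has to absorb.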
One more point: Firefox already correctly rotates objects if asked to do so. Apparently it can rotate an object about a specified centre point, over a specified angle. That is just what must be done to "follow a curve": begin to move straight ahead over a very small distance delta(x), and then "rotate" the last-drawn "delta(x)" about its starting point just as far as needed to get back on the curve. Repeat this until the sum of all delta(x) steps equals the required dot spacing. Then drop the next dot. Etcetera.

Created attachment 8506150 [details] newtab_dashed.png

As of Firefox 33, about:newtab is using a rounded dash-border to signal an empty tile-space. Thus this bug has been made more obvious to users. Even an imperfect algorithm would be better than nothing at this point.

(In reply to Elbart from comment #48) > As of Firefox 33, about:newtab is using a rounded dash-border to signal an > empty tile-space. Thus this bug has been made more obvious to users. Though this is a platform bug, putting it on the backlog since it affects Frontend.

We don't use firefox-backlog for that kind of "Firefox wanted" tracking.

Created attachment 8526673 [details] outline_customization.png

For the sake of completeness: outline is also affected, as seen in the placeholders in the right panel of the Customization window.

Guys, seven years have passed since this bug was opened. It's time to fix it.

*** Bug 1120933 has been marked as a duplicate of this bug. ***

This bug was reported in 2007 and still it is not resolved. Strange!!!

To everyone who is saying "Just follow Chrome/ium", I say "Please, NO!" For dotted borders, Chrome renders ugly square bits. Although for 1px-thick borders most of the users would not notice, designers would. And if the borders get any thicker than 2px, it is apparent to everyone. And square bits look fugly, to say it lightly.

Any updates on how far we are into this?
If a perfect solution is not possible at the moment, perhaps a quick solution would be OK; perhaps it can be improved with feedback in the FF developer edition rather than not implementing anything.

Agreed on the prior comment. While Firefox is outright buggy, Chrome is the most unattractive. Safari is a bit better than Chrome, and IE seems to do the best at spacing and rendering dots/dashes.

Chrome and Safari use the same rendering engine, called WebKit. Firefox uses its own rendering engine. Under HTML5/CSS3, which are standards, dotted and dashed lines should appear as dotted and dashed lines. On Firefox, dotted lines appear as solid lines. That's not a "quirk" or by design; it's wrong and breaks web standards. Firefox cannot render this simple code:

element {
   -moz-border-radius:9em;
   -webkit-border-radius:9em;
   border-radius: 9em;
   border: 2px dotted #bbb;
   border: 2px dotted #bbb;
}

Chrome and Safari went their separate ways 2 years ago, when Google forked WebKit, and Chrome, at least in my tests, renders less attractively than Safari, with uneven spacing that Safari did not have. Both of them were worse than IE, which renders dots very nicely. Also, your code is kinda strange. There's been no need to prefix border-radius for, oh, the past 4 years, and you repeated border twice ☺ And, yeah, the bug is a lot more obvious with round elements, as noted multiple times in prior comments, testcases and the many existing bug attachments. None of this helps at all in fixing it though, although I guess it helps to show people care about the bug...

Sorry, Nemo, yes, I did show "border" twice.
It should be:

element {
   -moz-border-radius:9em;
   -webkit-border-radius:9em;
   border-radius: 9em;
   border: 2px dotted #bbb;
}

As webdevs can't dictate what a site is viewed with, and as many platforms are corporate for my business and still on XP and maybe even 2000, I have to add legacy stuff. It will be ignored by modern platforms. You were right about webkit, but the two are still very similar code, while Firefox is not. Do you actually like IE? Wow. Good luck with that.

@Niel - If you're talking about standards, and how browsers are, or in this case are not, adhering to standards, then keep your code "Standards-Only" too. And secondly, just look at the examples posted here in IE if you can. It's the **best** reproduction of how it should really be. So yeah, in this case, I think it'll be good to learn something from IE.

Sorry guys, my mistake. I was just reporting a bug, as that's what I thought this place was about, not a battleground. If you want to criticise the code I showed to reproduce the issue, feel free, if that's what you get off on. IE has long been castigated as a closed platform and very hard to write code for; it's better now, but still a bit flaky, which is common knowledge. But I'm not here to defend Open Source. I'll wait until the bug is fixed.

Ah, developers. The only group of people who for the life of them can't figure out how to work together. EVERY development discussion turns into a battleground. I, personally, thank you for keeping this bug discussion alive. Even if a comment doesn't "solve" the issue, it still reminds people and keeps them aware of it, so thank you.

I have seen several Firefox "updates" coming along during the last half year or so, but often I was wondering what was the actual urgency of the problems they solved. I am glad to see now that more and more people agree that the dotted-curve problem is getting really urgent. The way it is now is a real shame for what is considered the most reliable browser around.
Now who is going to solve it? Once again, not everybody writing here is sufficiently familiar with the ins and outs of the Firefox innards, so not everyone of us is able to actually solve the problem. But there must be people who are - it is they who produce all those tiny improvements we do get. So why not tackle this ugly-curve problem as soon as possible?

There's an easy way to make it render fine for the majority of use cases without having to implement something complicated. If the box has equal width on all corners, we could draw the rounded rectangle with a dotted/dashed pen; this is easy. Otherwise we resort to the current painting. People rarely use different widths for the corners in the real world; even if they want to, they probably don't need to.

Created attachment 8636535 [details] [diff] [review] (WIP) Support dotted/dashed border-radiused corners.

Here is a WIP patch (just for the record). There should be a lot of room to improve accuracy/performance, and to clean up, but at least it works in most cases. I'll post some testcases and the rendering result.

Created attachment 8636537 [details] static test for various width

Created attachment 8636538 [details] controllable test

Created attachment 8636539 [details] result of WIP patch

Result for:
* attachment 673358 [details]
* attachment 8636537 [details]
* attachment 8636538 [details]

Just noticed the following note:

> Note: There is no control over the spacing of the dots and dashes,
> nor over the length of the dashes. Implementations are encouraged to
> choose a spacing that makes the corners symmetrical.

If we tweak the spacing of each edge to make them symmetric, the corner implementation will become simpler :) I'll do it first, then will bring some things to discuss.

Created attachment 8637263 [details] [diff] [review] (WIP 2) Support dotted/dashed border-radiused corners.

Several improvements from the previous one.
* variable overlap for dotted
* variable dash length for dashed
* symmetric border side
* merge dotted+dotted corner with same width and no radius into single dot

Created attachment 8637265 [details] result of WIP 2 patch

Created attachment 8637436 [details] Rough algorithm used by the WIP 2 patch

The attached image describes the approach the patch uses. There are several issues for now. One of the big issues is performance. Currently there are nested binary searches, up to 5 loops of nesting, so it's heavy when the border-width is thin and the curve is long. I guess some of them could be solved mathematically, by directly calculating the value instead of searching, and others may be improved by applying more linear approximation. Any hint is welcome :)

Created attachment 8637440 [details] Design issue on dotted corner

Then, here's a design issue in the dotted corner. In the attached image, to fill the corner area with circles, there can be a circle which is smaller than the circles at both ends, even if both border-widths are the same. Does anyone have any opinion on this, or have a reference image?

Have you looked at the patches from Bug 652650? They might have some usefulness.

From the look of your renderings, this work will fix bug #19963 as well? Also, thank you for these patches. I can't comment on their technical proficiency, but I love the diagrams explaining the algorithm you're using.

Created attachment 8637818 [details] Possible improvement for dashed corner joint

(In reply to fantasai from comment #75) > Have you looked at the patches from Bug 652650 ? They might have some > usefuless. Thank you for letting me know about that :D Yes, it helps a lot. So, you used an elliptic arc instead of a cubic bezier curve for the calculation. It may be simpler than a cubic bezier (yeah, this cubic bezier itself is an approximation of the elliptic arc, but the original elliptic arc will be suitable for some cases), so I'll try it. Also, the testcase there revealed that we need a half offset for the entire dashed side and corner.
data:text/html;charset=utf-8,<!DOCTYPE html>%0D%0A<style>%0D%0A p { height%3A 100px%3B width%3A 200px%3B border-width%3A 10px 20px%3B border-color%3A orange blue%3B border-radius%3A 60px%3B border-style%3A dashed%3B }%0D%0A<%2Fstyle>%0D%0A<p>and.

(In reply to Robin Whittleton from comment #76) > From the look of your renderings this work will fix bug #19963 as well? This patch follows the rule in the bug only in the dotted+dotted case with the same border-width and no border-radius. The "24 dotted" case in attachment 8637265 [details] is this case. For other cases, where a dot overlaps with the other side, this patch joints them with a solid corner (which may have a small radius), like the "16 dotted 24 dotted" case in attachment 8637265 [details], which draws a heart mark at every corner - that is, half dots on both sides, and a solid rectangle in the corner. For example, the dotted+solid case differs from attachment 323815 [details]; with this patch, it's drawn as the "16 solid 24 dotted" case.

Created attachment 8638919 [details] [diff] [review] (WIP 3) Support dotted/dashed border-radiused corners.

Fixed the dashed and several other minor rendering issues, plus some cleanup. Not yet touched performance.

Created attachment 8638920 [details] result of WIP 3 patch

!

Created attachment 8638960 [details] dot size and dash length issue

(In reply to Leon Sorokin from comment #81) > ! Thank you for pointing them out. About the dotted style, the actual issue is that dots on the straight side (including the -90deg dot) are not perfect circles, but 1px wider; this is an existing bug (in other words, the circles in the corner are rendered with the expected size). Currently we're rendering the side as a dashed line with a [0, 2 * borderWidth] pattern and a round line cap (a 0-length line with round line caps on both sides, and a gap between them), so it's rendered like that. We might have to find another way to render the dotted side to improve this - maybe render each circle separately? I'll try this, and check the performance.
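The dotted-side rendering described above - a zero-length dash with round caps repeating every 2×border-width - boils down to placing circle centers at a fixed pitch along the side. The sketch below restates just that placement with hypothetical names; it is not the Moz2D code, and the half-width offset is only an assumed convention for keeping the first dot's edge flush with the side's start:

```python
def dotted_centers(side_len, border_width, offset=None):
    """Centers of dots produced by a [0, 2*border_width] dash
    pattern with round caps: one cap every 2*border_width."""
    pitch = 2.0 * border_width
    if offset is None:
        # assumed: shift by half the border width so the first
        # dot starts at the very edge of the side
        offset = 0.5 * border_width
    centers = []
    x = offset
    while x <= side_len:
        centers.append(x)
        x += pitch
    return centers

# a 20px side with a 2px border yields dots at 1, 5, 9, 13, 17
```

The pitch is fixed, which is why a side whose length is not a multiple of the pitch ends asymmetrically - the motivation for the symmetric-spacing work in the later WIP patches.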
About the dashed style, this patch is now calculating the width of the dash/gap separately for the side and the corner, so the side and corner may have different dash/gap widths. Also, in the attached case, the left side has a 1px length which is rendered as a solid border, since no dashed line fits there; this makes the segment at -90deg a bit longer. To improve them, I think we have to calculate the dashed width and offset for the side and corner at the same time (so we shouldn't joint the corner and side with a half segment), at least for simple cases, like all borders having the same width and radii that are large enough.

Created attachment 8640231 [details] [diff] [review] (WIP 4) Support dotted/dashed border-radiused corners.

Addressed the dotted side's issue, and improved performance for some simple cases.

Created attachment 8640232 [details] result of WIP 4 patch

?

(In reply to fantasai from comment #85) > ? Yeah, it might be better. But I'd like to leave it to another bug, since it also affects other styles.

WIP 4 looks much better, though the spacing is still somewhat biased towards the top & bottom edges:

Created attachment 8640602 [details] static test for various width

testcase used for WIP 4 result

(In reply to Leon Sorokin from comment #87) > WIP 4 looks much better, though the spacing is still somewhat biased towards > the top & bottom edges: Thank you for pointing that out; the glitch was caused by floating point number handling, and the optimization for the simple case didn't work (it still exists in the semi-optimized case tho).

On Flame, displaying attachment 8640602 [details] takes about 5 seconds to redraw, after scroll. I guess this is not acceptable performance :/

Created attachment 8642053 [details] [diff] [review] (WIP 5) Support dotted/dashed border-radiused corners.

Fixed the floating point number handling, and improved the dotted overlap/dashed length finding algorithms; now they can find the best value almost always within 8 iterations.
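The symmetric-side calculation alluded to above - choosing an actual dash length so a side starts and ends with a half dash - can be sketched as simple arithmetic. This is a simplification with hypothetical names, not the patch code: pick a whole number of dash+gap periods, then stretch the dash to fit.

```python
def side_dash_pattern(side_len, border_width, nominal_ratio=3.0):
    """Choose an actual dash length so a dashed side is symmetric:
    half a dash at each end, whole dashes and gaps in between."""
    nominal = nominal_ratio * border_width   # dashes roughly 3x the width
    # one period = dash + gap; force a whole number of periods
    periods = max(1, round(side_len / (2.0 * nominal)))
    dash = side_len / (2.0 * periods)        # actual dash == gap length
    # a half-dash stroke offset makes both ends symmetric
    return dash, dash / 2.0                  # (dash length, dash offset)

# a 100px side with a 4px border: 4 periods of 12.5px dash + 12.5px gap
dash, offset = side_dash_pattern(side_len=100.0, border_width=4.0)
```

Stretching the dash rather than the gap is an arbitrary choice here; the real patch searches for the best dash length within a range, and has to reconcile the side's pattern with the corner's at their joint.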
This takes 3 or 4 seconds for each redraw on Flame, slightly faster than WIP 4, but still too slow. Btw, is there any proper way to evaluate the rendering performance?

Created attachment 8642055 [details] static test for various width

added 2 more test cases with 0 border-width

Created attachment 8642056 [details] result of WIP 5 patch

Created attachment 8644116 [details] [diff] [review] (WIP 6) Support dotted/dashed border-radiused corners.

* improved search algorithms
* fixed bug 19963 (I guess I need to split this patch into several parts later)
* corner between dotted side and non-dotted side
* corner between dotted sides with different border widths
* now no *unintentional* heart mark
* some minor optimization
* cleanup

Created attachment 8644117 [details] static test for various width

added more tests

Created attachment 8644118 [details] controllable test

added more options

Created attachment 8644119 [details] result of WIP 6 patch

so many testcases :)

corner between moz-border-colors is not supported

Created attachment 8647735 [details] [diff] [review] (WIP 7) Support dotted/dashed border-radiused corners.

Can I get some feedback on this?
The changes are the following (I'll post some figures shortly):

* Applied dotted/dashed to border-radius corners
  + nsCSSBorderRenderer::DrawDashedCorner
  + nsCSSBorderRenderer::DrawDottedCornerSlow and DottedCornerFinder
  + nsCSSBorderRenderer::DrawDashedCornerSlow and DashedCornerFinder
* Made dotted/dashed border sides symmetric, by changing the spacing
  + nsCSSBorderRenderer::CalculateDashedOptions from nsCSSBorderRenderer::DrawDashedSide and nsCSSBorderRenderer::DrawDottedSide
* Draw the dotted side with separated circles, instead of dashed with ROUND
  + nsCSSBorderRenderer::DrawDottedSide
* Made the interaction between the dotted side and other sides better
  + SIDE_CLIP_RECTANGLE_CORNER and SIDE_CLIP_RECTANGLE_NO_CORNER in nsCSSBorderRenderer::GetSideClipSubPath
  + nsCSSBorderRenderer::IsCornerMergeable for dotted sides
  + start/end position calculation (nsCSSBorderRenderer::GetStraightBorderPoint) in nsCSSBorderRenderer::DrawDashedSide and nsCSSBorderRenderer::DrawDottedSide
* Changed DASH_LENGTH from 3 to 2; the rendering result seems to be much better with 2
* Added several utility functions for Corner/Side
  + GetHorizontalSide/GetVerticalSide
  + GetCWSide/GetCCWSide
  + GetCWCorner/GetCCWCorner

Here's the try run result. The failure in 461512-1.html seems to be for almost the same reason - dotted cannot be tested - so I think that part should also be commented out.

Then, I have some questions:
1. What would be the best way to test border-radius rendering?
2. What kind of performance test should I do? Is there any upper bound for the time taken by each redraw (especially on a mobile device)?

Created attachment 8647736 [details] result of WIP 7 patch

Created attachment 8647737 [details] DottedCornerFinder's behavior

Created attachment 8647738 [details] DashedCornerFinder's behavior

Created attachment 8647739 [details] corner interaction between dotted and other side

?

Thank you roc and jrmuizel :)

(In reply to Robert O'Callahan (:roc) (Mozilla Corporation) from comment #103) >.
Thanks, now I understand the meaning of the mask images in bug 19963. So I can mask the rendered image and check the correctness. I'll try it :)

I'll prepare a cache for the overlap and dash length calculations in the finder classes. Those parts take so much time in the calculation, and it should be the same for symmetrical borders.

(In reply to Jeff Muizelaar [:jrmuizel] from comment ?) There are some reasons why I use bezier curves in many places:

1. In most cases, I need to handle the curve as a parametric curve like x = Fx(t) and y = Fy(t), and in that case I think a bezier curve is less expensive than an elliptic arc. iiuc, an elliptic arc needs sin/cos or sqrt or something like that, doesn't it?
2. In the dashed border, we finally need to fill each segment with a region consisting of bezier curves and straight lines (am I correct?), so we need control points for them, and in that case calculating everything with bezier curves will be straightforward.
3. I cannot find a good estimation for an elliptic arc's length that is better than the bezier curve's length for a given segment.
4. Just because I'm not familiar with elliptic arcs.

So, if you know a better solution with elliptic arcs, let me know ;) I use an elliptic arc when it seems to be simpler, like when calculating a whole arc's length (GetQuarterEllipticArcLength), solving an equation with x and y (CalculateDistanceToEllipticArc), etc.

Created attachment 8654682 [details] [diff] [review] (WIP 8) Support dotted/dashed border-radiused corners.
Changes:

* Added several figures as comments
* Added cache for best overlap and dash length for given corner parameters
  Now it takes up to 1 or 2 seconds for each redraw on Flame
* Added reftests
  It renders dashed/dotted borders, overlays two masks: not-filled region with filled-color, and not-unfilled region with unfilled-color, and compares them with full-filled/fill-unfilled boxes
* Always calculate best dashed length for dashed side (even with width <= 2.0)
* Separated into several files
* Changed DASH_LENGTH back to 3 (looks like [1,3] range is good enough)
* Use arc length instead of direct distance in dashed corner calculation
* If dot may overflow from outer rect, don't draw it (will post in next comment)
* If corner between dotted sides is merged into one dot and sides have same color, draw the dot with single path instead of 2 half circles
* Applied mozilla coding style for newly added code
* Fixed dash offset in simple case for dotted border in nsCSSBorderRenderer::DrawBorders
  there strokeOptions.mDashOffset is set to 0.5f but it should be 0.5f * mBorderWidths[0]; former results in 3 dots filled 1 dot gap in 2px dotted border because of AntialiasMode::NONE
* Removed unused method declaration SetupStrokeStyle
* Added some missing #include and using namespace.

Questions:

Created attachment 8654683 [details] static test for various width
Created attachment 8654684 [details] result of WIP 8 patch
Created attachment 8654685 [details] Another design issue in dotted side with too-short length

One more question, what would be the expected rendering in attached image? If width or height is smaller than border-width, there's no space to draw a circle with border-width/2 radius in that side. In WIP 8 patch, I just skip rendering that circle, but it results in empty border.
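The Bezier-versus-elliptic-arc trade-off discussed above can be sketched as follows. This is an illustration only, not the patch's actual BezierUtils code — the struct and function names are hypothetical — but it shows why evaluating x = Fx(t), y = Fy(t) on a cubic Bezier needs only multiplications and additions, while the parametric ellipse pays for a sin and a cos on every point:

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y; };

// Cubic Bezier point at parameter t: a pure polynomial in t,
// no transcendental calls (hypothetical helper).
static Pt BezierPoint(const Pt p[4], double t) {
  double s = 1.0 - t;
  double w0 = s * s * s;
  double w1 = 3.0 * s * s * t;
  double w2 = 3.0 * s * t * t;
  double w3 = t * t * t;
  return { w0 * p[0].x + w1 * p[1].x + w2 * p[2].x + w3 * p[3].x,
           w0 * p[0].y + w1 * p[1].y + w2 * p[2].y + w3 * p[3].y };
}

// Same point on an axis-aligned ellipse: each evaluation needs
// sin and cos.
static Pt ArcPoint(double a, double b, double theta) {
  return { a * std::cos(theta), b * std::sin(theta) };
}
```

With the usual quarter-circle control polygon (kappa ≈ 0.5523), the Bezier stays within roughly 0.02% of the true circle radius, which is why the corner finders can work in Bezier space throughout.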
Created attachment 8654686 [details] Rendering issue with border-radius extends into the opposite side

When the border-radius curve extends into the opposite side, the side and corner overlap and the side overflows from the curve's region. I'd like to leave them to another bug since it happens also with solid border. We'll need to handle that case specially.

looks like the spacing uniformity in the simple dotted case has regressed since WIP7

(In reply to Leon Sorokin from comment #111)
> looks like the spacing uniformity in the simple dotted case has regressed
> since WIP7

Thank you for pointing that out :) Yeah, in that one, previous rendering should be better, I'll check them. However, I don't do anything for "uniformity"; each side and corner is rendered separately and spacing is also calculated separately, to place dots at the end of each side, and make each corner start and end with dots (otherwise corner rendering becomes more difficult), so spacing uniformity is not guaranteed now. (Of course, I'll try choosing better spacing for given length.)

Created attachment 8655353 [details] [diff] [review] (WIP 9) Support dotted/dashed border-radiused corners.

oops, that was just a typo :P

> static bool
> IsSingleCurve(Float aMinR, Float aMaxR,
>               Float aMinBorderRadius, Float aMaxBorderRadius)
> {
>   return aMinR > 0.0f &&
>          aMinBorderRadius > aMaxR * 4.0f &&
>-         aMinBorderRadius / aMaxBorderRadius < 0.5f;
>+         aMinBorderRadius / aMaxBorderRadius > 0.5f;
> }

no other changes from WIP 8 in Comment #106. re-posting questions just in case:

Created attachment 8655355 [details] result of WIP 9 patch

Comment on attachment 8655353 [details] [diff] [review] (WIP 9) Support dotted/dashed border-radiused corners.
Review of attachment 8655353 [details] [diff] [review]: ----------------------------------------------------------------- ::: layout/base/BezierUtils.h @@ +48,5 @@ > +mozilla::gfx::Float CalculateDistanceToEllipticArc(const mozilla::gfx::Point& P, > + const mozilla::gfx::Point& normal, > + const mozilla::gfx::Point& origin, > + mozilla::gfx::Float width, > + mozilla::gfx::Float height); Add comments explaining all these functions, especially where you have "2", "A" and "B" versions. ::: layout/base/BorderCache.h @@ +13,5 @@ > + > +namespace mozilla { > +// Cache for best overlap and best dashLength. > + > +struct fourFloats FourFloats @@ +96,5 @@ > +}; > + > +#else > + > +class fourFloatsHashKey : public PLDHashEntryHdr FourFloatsHashKey (In reply to Tooru Fujisawa [:arai] from comment #113) > re-posting questions just in case: > 1. Where should I store cache data? > Currently cache is allocates as a global variable, but I think that > data should be associated with document. I think it should be global. It's common to have multiple documents with the same styles styles --- multiple pages open from the same site. >. A simple approach that should be OK is to use a hashmap and clear the entire cache when it gets full. Created attachment 8659312 [details] [diff] [review] Part 0: Add missing includes and namespaces. Thank you for your feedback :) Now I split patches into 6 parts, for each change. First, I bumped into some errors when adding files to UNIFIED_SOURCES, and there are some more errors while testing non-unified build in layout/base/. So, as Part 0, I'd fix them. [layout/base/AccessibleCaretEventHub.cpp] > +#include "nsCanvasFrame.h" > !aPresShell->GetCanvasFrame()->GetCustomContentContainer()) { nsPresShell::GetCanvasFrame returns nsCanvasFrame*. [layout/base/AccessibleCaretManager.cpp] > +#include "mozilla/Preferences.h" > Preferences::AddUintVarCache(&caretTimeoutMs, Preferences is used here. 
> +#include "nsContainerFrame.h" > focusableFrame = focusableFrame->GetParent(); nsIFrame::GetParent returns nsContainerFrame. [layout/base/ActiveLayerTracker.cpp] > +using namespace dom; > KeyframeEffectReadOnly* effect = mozilla::dom:: is omitted. [layout/base/MobileViewportManager.cpp] > +#include "gfxPrefs.h" > if (gfxPrefs::APZAllowZooming()) { gfxPrefs is used here. > +#include "nsIDOMEvent.h" > event->GetType(type); nsIDOMEvent is used here. > +#include "nsIFrame.h" > if (!nsLayoutUtils::GetDisplayPort(root->GetContent(), nullptr)) { root is nsIFrame*. > +#include "nsIScrollableFrame.h" > nsIScrollableFrame* scrollable = do_QueryFrame(root); 'do_QueryFrame::operator nsIScrollableFrame *<nsIScrollableFrame>' is requested here. > +#include "nsLayoutUtils.h" > LayoutDeviceToLayerScale res(nsLayoutUtils::GetResolution(mPresShell)); nsLayoutUtils is used here. > +#include "nsPresContext.h" > CSSToLayoutDeviceScale cssToDev((float)nsPresContext::AppUnitsPerCSSPixel() nsPresContext is used here. > +#include "UnitTransforms.h" > CSSToParentLayerScale zoom = ViewTargetAs<ParentLayerPixel>(defaultZoom, ViewTargetAs is used here. [layout/base/RestyleManager.cpp] > +using namespace dom; > FlattenedChildIterator it(aElement); mozilla::dom:: is omitted. [layout/base/TouchCaret.cpp] > +#include "nsDocShell.h" > docShell->AddWeakScrollObserver(this); nsDocShell is used here. [layout/base/TouchCaret.h] > +class nsDocShell; > WeakPtr<nsDocShell> mDocShell; nsDocShell is used here. [layout/base/TouchManager.cpp] > +#include "mozilla/TouchEvents.h" > WidgetTouchEvent event(true, NS_TOUCH_END, widget); WidgetTouchEvent is used here. > +#include "nsIFrame.h" > +#include "nsView.h" > nsCOMPtr<nsIWidget> widget = frame->GetView()->GetNearestWidget(&pt); frame is nsIFrame*, and GetView() returns nsView*. > +using namespace mozilla; > nsRefPtrHashtable<nsUint32HashKey, dom::Touch>* TouchManager::gCaptureTouchList; mozilla:: is omitted. 
> +using namespace mozilla::dom; > nsCOMPtr<EventTarget> targetPtr = oldTouch->mTarget; mozilla::dom:: is omitted. [layout/base/TouchManager.h] > +#include "mozilla/BasicEvents.h" > bool PreHandleEvent(mozilla::WidgetEvent* aEvent, WidgetEvent is used here > +#include "mozilla/dom/Touch.h" > +#include "nsRefPtrHashtable.h" > static nsRefPtrHashtable<nsUint32HashKey, mozilla::dom::Touch>* gCaptureTouchList; nsRefPtrHashtable and mozilla::dom::Touch are used here [layout/base/ZoomConstraintsClient.cpp] > +#include "gfxPrefs.h" > constraints.mAllowZoom = aViewportInfo.IsZoomAllowed() && gfxPrefs::APZAllowZooming(); gfxPrefs is used here > +#include "mozilla/Preferences.h" > Preferences::RemoveObserver(this, "browser.ui.zoom.force-user-scalable"); Preferences is used here. [layout/base/nsLayoutDebugger.cpp] > +#include "nsAttrValue.h" > content->GetClasses()->ToString(tmp); GetClasses() returns nsAttrValue*. [layout/base/nsLayoutUtils.cpp] > +#include "nsTextFragment.h" > GetText()->AppendTo(aResult, offset, length); GetText() returns nsTextFragment* [layout/base/nsPresShell.cpp] > +#include "gfxPrefs.h" > if (gfxPrefs::MetaViewportEnabled() || gfxPrefs::APZAllowZooming()) { gfxPrefs is used here [layout/generic/nsCanvasFrame.h] > +#include "mozilla/dom/Element.h" > explicit nsCanvasFrame(nsStyleContext* aContext) nsCOMPtr<mozilla::dom::Element>::assert_validity is requested in nsCanvasFrame ctor. [layout/style/nsStyleStruct.h] > +class nsStyleContext; > nsStyleContext* aContext) const; nsStyleContext needs declaration. Created attachment 8659314 [details] [diff] [review] Part 1: Fix spacing of simple 2px dotted border without radius. Part 1 fixes the rendering result of 2px dotted border with no radius. Because of AntialiasMode::NONE is used and mDashOffset is 0.5, there 3px filled and 1px spacing, on OSX. mDashOffset should be relative to border width. I'll post different in next comment. 
I don't add a test for this because the rendering result still depends on OS, especially at the corner.

Created attachment 8659316 [details] Rendering result of 2px dotted border

Part 1 fixes the attached case.

Created attachment 8659317 [details] [diff] [review] Part 2: Split constants for border rendering to BorderConsts.h.

Part 2 just makes BorderConsts.h, which has some definitions used by several files (also files added by Part 4).

Created attachment 8659323 [details] [diff] [review] Part 3: Improve spacing and corner interaction of dashed/dotted border.

Part 3 applies the following changes:

* Make spacing of dashed/dotted border variable, to make it symmetric
* Disable or add fuzzy option to tests which are affected by anti-alias caused by the spacing change
* Draw dots in dotted border with width > 2px separately, to make them perfect circles
* Make corner interaction between dotted sides, or dotted and other sides:
  * at a corner between dotted sides with the same width, dots are merged into a single dot
  * at a corner between dotted sides with different widths, the wider side draws the dot at the corner
  * at a corner between a dotted side and another side, the other side always draws the corner

Created attachment 8659324 [details] [diff] [review] Part 4: Support dotted/dashed border-radiused corners.

Part 4 adds support for border-radius with dotted/dashed. Now the cache is implemented by a hashmap on a global; entries are cleared when the 257th entry is stored. The static test doesn't hit this, only 120 to 130 cache entries. I might have to investigate how many cache entries we need for real websites (do you know where it's used in practice?). Then, in the WIP 9 patch, the cache didn't work correctly (the performance improvement came from other changes, like parameter tweaks); now it's fixed and the static test is shown almost smoothly on Flame. It takes 1 or 2 seconds on some heavy styles, but I don't need to wait for rendering in other cases, while scrolling.
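The global-hashmap cache described above (cleared when the 257th entry would be stored) follows roc's earlier suggestion to simply empty the cache when it fills rather than track LRU order. A minimal stand-in sketch — the type and method names here are hypothetical, not the real BorderCache.h API:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <unordered_map>

// Computed parameters for one corner (stand-in for the real cached data).
struct DashParams { float dashLength; float overlap; };

// Memo cache: when inserting would exceed capacity, drop everything
// instead of evicting entries one by one.
class BoundedCache {
 public:
  explicit BoundedCache(std::size_t aCapacity) : mCapacity(aCapacity) {}

  bool Lookup(std::uint64_t aKey, DashParams* aOut) const {
    auto it = mMap.find(aKey);
    if (it == mMap.end()) {
      return false;
    }
    *aOut = it->second;
    return true;
  }

  void Put(std::uint64_t aKey, const DashParams& aValue) {
    if (mMap.size() >= mCapacity) {
      mMap.clear();  // "clear the entire cache when it gets full"
    }
    mMap[aKey] = aValue;
  }

  std::size_t Size() const { return mMap.size(); }

 private:
  std::size_t mCapacity;
  std::unordered_map<std::uint64_t, DashParams> mMap;
};
```

The appeal of this scheme is that the steady state (a site reusing the same handful of border styles) never hits the clear, while a pathological page can at worst recompute one generation of entries.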
Created attachment 8659325 [details] [diff] [review] Part 5: Add testcases for dashed/dotted borders. Part 5 adds reftests for Part 3 and Part 4. Almost green on try run: Created attachment 8659326 [details] Rendering result of static test A couple of us in the Toronto office are going to try to take a look reviewing this during our lunch break. Comment on attachment 8659323 [details] [diff] [review] Part 3: Improve spacing and corner interaction of dashed/dotted border. Review of attachment 8659323 [details] [diff] [review]: ----------------------------------------------------------------- :::. @@ +764,5 @@ > + // |#########| > + // | ####### | > + // | ##### | > + // | | > + Float minimum = borderWidth * /*(1.0f + sqrt(2.0f)) / 2.0f*/ 1.5f ; Why are you approximating this as 1.5? Also, if you can add a paragraph of text describing the overall approach you're taking in Part 3 and Part 4 that would be helpful in reviewing. Created attachment 8661946 [details] Two dots at the corner with small radius and same border-width. Thank you for reviewing :D (In reply to Jeff Muizelaar [:jrmuizel] from comment #126) > :::. (In reply to Jeff Muizelaar [:jrmuizel] from comment #127) > Also, if you can add a paragraph of text describing the overall approach > you're taking in Part 3 and Part 4 that would be helpful in reviewing. Sure, I'll add those comments, will post updated patches by weekend (hopefully tomorrow). > @@ +764,5 @@ > > + // |#########| > > + // | ####### | > > + // | ##### | > > + // | | > > + Float minimum = borderWidth * /*(1.0f + sqrt(2.0f)) / 2.0f*/ 1.5f ; > > Why are you approximating this as 1.5?. (In reply to Tooru Fujisawa [:arai] from comment #128) > >. It's worth adding a comment about this. Created attachment 8662302 [details] [diff] [review] Part 3: Improve spacing and corner interaction of dashed/dotted border. Added comments to GetStraightBorderPoint and DrawDashedOrDottedSide in Part 3, and DrawDashedOrDottedCorner in Part 4. 
Tell me if those comments doesn't make sense, or if some required parts are not described. Thanks! Also, fixed the minimum length to draw dash in corner/side in Part 3, there, minimum dash length was lower than 1 (that is too short dashed segment) in previous patch, but dash length should be in [1, 3] range. This affects to some testcase, so updated them in Part 5. Created attachment 8662303 [details] [diff] [review] Part 4: Support dotted/dashed border-radiused corners. Created attachment 8662304 [details] [diff] [review] Part 5: Add testcases for dashed/dotted borders. Created attachment 8662305 [details] Rendering result of static test Tooru, do you think you could get some timings of the border drawing call before and after this code? That will give us a feeling for whether we need to worry about the performance of the new implementation. Created attachment 8664431 [details] Time taken by nsCSSBorderRenderer::DrawBorders Here's the result of performance comparison :) (I'll post screenshot of patched rendering in next comment) Let me know if more cases or detailed data is needed. Created attachment 8664434 [details] Rendering result of "Time taken by nsCSSBorderRenderer::DrawBorders" file (In reply to Tooru Fujisawa [:arai] from comment #136) > Created attachment 8664434 [details] > Rendering result of "Time taken by nsCSSBorderRenderer::DrawBorders" file That's great. If it's easy to do, it would be nice to separate out the Moz2D/drawing portion of the time from the nsCSSBorderRenderer::DrawBorders calculations. Created attachment 8664479 [details] Time taken by nsCSSBorderRenderer::DrawBorders Updated. in P1 and P2, 1st number (red) is the time taken by calculation, 2nd number (blue) is the time taken by drawing. 
Created attachment 8664480 [details] Rendering result of "Time taken by nsCSSBorderRenderer::DrawBorders" file (In reply to Tooru Fujisawa [:arai] from comment #139) > Created attachment 8664480 [details] > Rendering result of "Time taken by nsCSSBorderRenderer::DrawBorders" file What OS was the timing done on? oh, sorry I forgot to note the basic information. I tested on OSX 10.10.5, with 64bit build nightly, on iMac Late 2013 (2.9 GHz Intel Core i5, 16 GB 1600 MHz DDR3, NVIDIA GeForce GT 750M 1024 MB) Sorry for the delay, I'm going to try and finish up the review (at least up to part 3) this week. Is it possible to get timing information for just the first part (Up to part 3) of the patch? Created attachment 8673463 [details] Time taken by nsCSSBorderRenderer::DrawBorders (with Part 1-3) Thanks! Here's result with Part 1-3 Created attachment 8673472 [details] Rendering result of "Time taken by nsCSSBorderRenderer::DrawBorders (with Part 1-3)" file with Part 1-3 Here's the rendering result with part 1-3 (corners are still rendered with solid stroke, even if they're merged) Thank you for reviewing :) Is there anything I can do here for now? (looks like same patch is still applicable and no rebase is needed) [~OOT] Could somebody please install the patch and tell if this testcase with "border-style: dashed solid;" - lags not more than with "border-style: solid;". This is what I'd expect from the patch. > the testcase: Just compare the testcase with checkbox "border-style" enabled/disabled (you can also try animation) To be clear, this patch does not handle the bug 780366's case, so the rendering result is still wrong and the performance of it won't be meaningful value for the case (I think we need totally different approach for the case, than current one). Attachment 8654686 [details] in comment #110 is the result for almost same situation. > lags not more than with "border-style: solid;". 
Here's results: without animation (10 samples for each) "solid" border: 288 [us] "dashed solid" border: 641 [us] with animation (200 samples for each) "solid" border: 307 [us] (min:273, max:364) "dashed solid" border: 729 [us] (min:688, max:964) Those should be in acceptable range. Created attachment 8684600 [details] [diff] [review] Part 0-followup: Add missing includes and namespaces. Just noticed that there some more missing #include's added from last time. No other changes in other parts. *** Bug 1225837 has been marked as a duplicate of this bug. ***. ::: layout/base/BezierUtils.cpp @@ +260,5 @@ > +Float > +GetQuarterEllipticArcLength(Float a, Float b) > +{ > + Float A = a + b, S = a - b; > + Float A2 = pow(A, 2.0f), S2 = pow(S, 2.0f); It's more performant to write these as: A2 = A*A; S2 = S*S; @@ ? ::: layout/base/BezierUtils.h @@ +1,4 @@ > +/* -*- Mode: C++; tab-width: 2; indent-tabs-mode: nil; c-basic-offset: 2 -*- */ > +/* This Source Code Form is subject to the terms of the Mozilla Public > + * License, v. 2.0. If a copy of the MPL was not distributed with this > + * file, You can obtain one at. */ It would be nice if BezierUtils.cpp and BezierUtils.h could go in gfx/2d as that's a more likely place for people to look for this kind of thing. @@ +6,5 @@ > +#ifndef mozilla_BezierUtils_h_ > +#define mozilla_BezierUtils_h_ > + > +#include "mozilla/gfx/2D.h" > +#include "gfxRect.h" Is this include needed? ::: layout/base/DashedCornerFinder.cpp @@ +128,5 @@ > + > +DashedCornerFinder::Result > +DashedCornerFinder::Next(void) > +{ > + Float lastTo, lastTi, to, ti; I think calling these lastOuterT and outerT would be clearer than abbreviating down to a single letter. @@ ? ::: layout/base/DashedCornerFinder.h @@ +175,5 @@ > + Point mLastPi; > + Float mLastTo; > + Float mLastTi; > + > + // Length for each segment, radio of the border width at that point. Is 'radio' a typo? 
Created attachment 8702087 [details] [diff] [review] Part 4: Support dotted/dashed border-radiused corners. Thank you for reviewing! :D (In reply to Jeff Muizelaar [:jrmuizel] from comment #150) >. Okay, added Bezier type, that has 'Point mPoint[4]' member. > @@ ? I should've added the command and the reference for it. It's Ramanujan's approximation: Added comment, and slightly modified the formula to reduce the number of division. > @@ +6,5 @@ > > +#ifndef mozilla_BezierUtils_h_ > > +#define mozilla_BezierUtils_h_ > > + > > +#include "mozilla/gfx/2D.h" > > +#include "gfxRect.h" > > Is this include needed? mozilla::css::Corner is defined there, and GetBezierPointsForCorner uses it. > @@ ? wrapped those 2 functions with GetSubBezier function. SplitBezierA and SplitBezierB is defined in BezierUtils.cpp with 'static'. > ::: layout/base/DashedCornerFinder.h > @@ +175,5 @@ > > + Point mLastPi; > > + Float mLastTo; > > + Float mLastTi; > > + > > + // Length for each segment, radio of the border width at that point. > > Is 'radio' a typo? Oops, it should be 'ratio'. Created attachment 8702088 [details] [diff] [review] Part 0: Add missing includes and namespaces. Some files are changed from last time, and fixed gfx/2d directory too, as BezierUtils.cpp is moved there and I hit error there. [gfx/2d/DataSurfaceHelpers.h] > DataAtOffset(DataSourceSurface* aSurface, > - DataSourceSurface::MappedSurface* aMap, > + const DataSourceSurface::MappedSurface* aMap, > IntPoint aPoint); This is not include or namespace tho, this declaration does not match to definition in DataSurfaceHelpers.cpp. [gfx/2d/Quaternion.h] > +#include <ostream> > friend std::ostream& operator<<(std::ostream& aStream, const Quaternion& aQuat); ostream is used here > +#include "mozilla/gfx/Point.h" > Point3D RotatePoint(const Point3D& aPoint) { Point3D is used here. [layout/base/DisplayItemScrollClip.cpp] > +#include "DisplayItemClip.h" > str.AppendPrintf("<%s>", scrollClip->mClip ? 
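For reference, Ramanujan's first approximation behind GetQuarterEllipticArcLength can be written with the same A = a + b and S = a - b variables that appear in the reviewed snippet. This is a sketch of the assumed form, with A*A in place of pow(A, 2.0f) as the review suggested; the landed code may differ in detail:

```cpp
#include <cassert>
#include <cmath>

typedef float Float;

// Ramanujan: P ~= pi * (3(a + b) - sqrt((3a + b)(a + 3b))).
// With A = a + b and S = a - b, (3a + b)(a + 3b) == 4*A*A - S*S,
// so a quarter of the perimeter is:
static Float GetQuarterEllipticArcLength(Float a, Float b)
{
  const Float pi = 3.14159265358979f;
  Float A = a + b, S = a - b;
  Float A2 = A * A, S2 = S * S;  // A*A instead of pow(A, 2.0f)
  return pi * (3.0f * A - std::sqrt(4.0f * A2 - S2)) / 4.0f;
}
```

For a circle (a == b) this collapses to pi*r/2 exactly, and for the moderate eccentricities seen in border radii the error is far below a device pixel.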
scrollClip->mClip->ToString().get() : "null"); mClip is DisplayItemClip. [layout/base/GeometryUtils.cpp] > +#include "nsContentUtils.h" > if (nsContentUtils::IsCallerChrome()) { nsContentUtils is used here. [layout/base/nsPresShell.cpp] > +#include "gfxUserFontSet.h" > gfxUserFontSet* fs = mPresContext->GetUserFontSet(); gfxUserFontSet is used here. [layout/generic/nsGridContainerFrame.h] > + typedef mozilla::LogicalSize LogicalSize; > const LogicalSize& aComputedMinSize, > const LogicalSize& aComputedSize, > const LogicalSize& aComputedMaxSize); LogicalSize needs namespace. [layout/style/nsFontFaceLoader.h] > - TimeStamp mStartTime; > + mozilla::TimeStamp mStartTime; TimeStamp needs namespace. Created attachment 8702089 [details] [diff] [review] Part 0-followup: Add missing includes and namespaces. r=jrmuizel Just removed unnecessary changes, such as TouchCaret.h that is already removed. Created attachment 8702090 [details] [diff] [review] Part 1: Fix spacing of simple 2px dotted border without radius. also, rebase other parts. Created attachment 8702091 [details] [diff] [review] Part 2: Split constants for border rendering to BorderConsts.h. Created attachment 8702092 [details] [diff] [review] Part 3: Improve spacing and corner interaction of dashed/dotted border. r=jrmuizel Thank you for reviewing :) I'm going to land Part 0-2 after try run, as they don't depend on other parts. Bug 382721 - Part 0: Add missing includes and namespaces. r=jrmuizel Bug 382721 - Part 1: Fix spacing of simple 2px dotted border without radius. r=jrmuizel Bug 382721 - Part 2: Split constants for border rendering to BorderConsts.h. r=jrmuizel *** Bug 973037 has been marked as a duplicate of this bug. *** *** Bug 995677 has been marked as a duplicate of this bug. *** *** Bug 1257447 has been marked as a duplicate of this bug. ***
https://bugzilla.mozilla.org/show_bug.cgi?id=382721
In SOLR-1586, the proposed patch introduces the concept that a Solr response can declare a namespace for part of the response (in this case, it is using the tags defined by georss.org to specify a point, etc.). I'm not sure what to make of this. My gut reaction says no, but I'm not a namespace expert and I also don't feel strongly about it. Discussion points: 1. If there are standard namespaces, then people can use them to do fun XML things 2. If we allow them, we get all of the other benefits of namespaces... 3. The indexing side doesn't support them, so it seems odd to put in something like <field name="point">55.3 27.9</field> and get back <georss:point 55.3 27.9</georss:point>. At the same time, it seems equally weird to get back <str name="point">...</str> when there is in fact more semantic information available about this particular field that would otherwise require more work by an application to make sense of. 4. If we let in other namespaces, we then are opening ourselves to longer responses, etc. It is also likely the case that there isn't just one standard. This likely could mean slower responses, etc. 5. If people wanted them, they could just do XSLT, but that is an extra step too. An alternative is that we could refactor things a bit and allow the FieldType to specify the tag name instead of it being hardcoded in the writers. This way people writing FieldTypes could define them. For instance, we could have FieldType.getTagName() that could be overridden and clients could have tools for introspecting this. I'm not sure what effect any of this would have on downstream clients, either. Thoughts? -Grant
http://mail-archives.apache.org/mod_mbox/lucene-solr-dev/200912.mbox/%3CCC360C9E-23AD-4BFE-9EC4-6B41F24AF846@apache.org%3E
Tic-Tac-Toe-Tomek

Input

The first line of the input gives the number of test cases, T. T test cases follow. Each test case consists of 4 lines with 4 characters each, with each character being 'X', 'O', '.' or 'T' (quotes for clarity only). Each test case is followed by an empty line.

Output

For each test case, output one line containing "Case #x: y", where x is the case number (starting from 1) and y is one of the statuses given above. Make sure to get the statuses exactly right. When you run your code on the sample input, it should create the sample output exactly, including the "Case #1: ", the capital letter "O" rather than the number "0", and so on.

Limits

The game board provided will represent a valid state that was reached through play of the game Tic-Tac-Toe-Tomek as described above.

Small dataset: 1 ≤ T ≤ 10.
Large dataset: 1 ≤ T ≤ 1000.

Sample Input

6
XXXT
....
OO..
....

XOXT
XXOO
OXOX
XXOO

XOX.
OX..
....
....

OOXX
OXXX
OX.T
O..O

XXXO
..O.
.O..
T...

OXXX
XO..
..O.
...O

Sample Output

Case #1: X won
Case #2: Draw
Case #3: Game has not completed
Case #4: O won
Case #5: O won
Case #6: O won

My Solution

Brute force solved the problem correctly and in time.
#include <stdio.h>

char* processBoard(char board[][4]){
    int i, j;
    int countX;
    int countO;
    int empty = 0;

    /* check rows */
    for (i = 0; i < 4; i++){
        countX = 0;
        countO = 0;
        for (j = 0; j < 4; j++){
            if (board[i][j] == '.')
                empty = 1;
            else if (board[i][j] == 'T'){
                countX++;
                countO++;
            }
            else if (board[i][j] == 'X')
                countX++;
            else if (board[i][j] == 'O')
                countO++;
        }
        if (countX == 4)
            return "X won";
        else if (countO == 4)
            return "O won";
    }

    /* check columns */
    for (i = 0; i < 4; i++){
        countX = 0;
        countO = 0;
        for (j = 0; j < 4; j++){
            if (board[j][i] == 'T'){
                countX++;
                countO++;
            }
            else if (board[j][i] == 'X')
                countX++;
            else if (board[j][i] == 'O')
                countO++;
        }
        if (countX == 4)
            return "X won";
        else if (countO == 4)
            return "O won";
    }

    /* check diagonals */
    countX = 0;
    countO = 0;
    for (i = 0; i < 4; i++){
        if (board[i][i] == 'T'){
            countX++;
            countO++;
        }
        else if (board[i][i] == 'X')
            countX++;
        else if (board[i][i] == 'O')
            countO++;
    }
    if (countX == 4)
        return "X won";
    else if (countO == 4)
        return "O won";

    countX = 0;
    countO = 0;
    for (i = 0; i < 4; i++){
        if (board[i][3-i] == 'T'){
            countX++;
            countO++;
        }
        else if (board[i][3-i] == 'X')
            countX++;
        else if (board[i][3-i] == 'O')
            countO++;
    }
    if (countX == 4)
        return "X won";
    else if (countO == 4)
        return "O won";

    /* no winners found */
    if (empty)
        return "Game has not completed";
    else
        return "Draw";
}

int main(){
    int i, j;
    int t, k;
    char board[4][4];
    char c;
    char *result;

    scanf("%d", &t);
    c = getchar();
    for (k = 0; k < t; k++){
        for (i = 0; i < 4; i++){
            for (j = 0; j < 4; j++){
                c = getchar();
                board[i][j] = c;
            }
            c = getchar();
        }
        result = processBoard(board);
        if (k > 0)
            printf("\n");
        printf("Case #%d: %s", k+1, result);
        c = getchar();
    }
    return 0;
}
https://www.programminglogic.com/google-code-jam-2013-qualification-round-problem-1/
Author: like a must read. Or is it?

The problem for any book on this topic is that there is a great deal of information around already on what makes good JavaScript style and how to use good software engineering techniques to make what you produce better. The criticism leveled at JavaScript is made in the knowledge of all of this "best practice". The argument that has to be defeated is that JavaScript is inherently unruly in ways that style cannot fix.

Part 1 of Nicholas Zakas' book is all about style, and if you have been programming in JavaScript for any length of time there is nothing new to discover here. It is well explained, however, and it would make a good source book if you were trying to set up a style guide. Topics covered include how to format your programs, using comments, naming variables, using control statements and declarations. As stated, all well written and well argued. If you don't know these ideas then this is an excellent potted guide to writing JavaScript in good style.

The second part of the book is about larger scale issues and not just the way the code sits on the page. This is potentially much more interesting. First we have a look at the ideas of loose coupling with the UI. The advice here is to keep the separation of the layers - HTML, CSS and JavaScript. Good advice, but more difficult to achieve than you might think, and at the moment I don't think there is a really good solution. Next we look at the implementation of namespaces using objects and controlling code using modules. A better way of crafting event handlers is also introduced along with separating data from code, dealing with exceptions and browser detection. All of this is welcome but there isn't anything radical in the mix. If you already know about these ideas, possibly from your experience with other languages, then there isn't much for you to read here.
The problem is that JavaScript isn't a class based language and it is so weakly typed that you hardly notice that it is. Working with JavaScript is very different from working with Java or C# say. You can do things that you can't do in these strongly typed languages. There are so many things that this section of the book should deal with and doesn't. There is the issue of mixins, closures, promises, functional programming, how to use prototype, object organization and so on. What is said in Part 2 is all correct, but it hardly scratches the surface of how JavaScript can be used and how it should be used. The final part of the book is called "Automation" and it is just the standard techniques used in other languages applied to JavaScript. For example the first thing we learn is that you should have just one object per JavaScript file - this is a Java convention and it applies to one class per file not one object. This is not particularly an idea that should apply to a dynamic language like JavaScript that doesn't have classes. The rest of the section discusses the Ant build system, code validation, minification and compression, documentation and testing. All of which you should know about, but they hardly make a remarkable contribution to creating maintainable JavaScript. In the end this book suffers from simply repeating much of the advice and ideas that surround using strongly-typed, class-based languages. Perhaps one solution to making JavaScript programs maintainable is to turn JavaScript into Java or C#, or whatever your favorite language is, but this is to miss the point. JavaScript is very different from these classical languages and it has the potential to be more productive and more maintainable. Does this book tell you how to create maintainable JavaScript? No. But you still need to know the ideas it expresses and you need to take JavaScript development as seriously as it does. 
Should you read this book? If you don't know about good style and good programming practices that mostly apply to other languages then yes, because you need to know this.
http://i-programmer.info/bookreviews/29-javascript/4701-maintainable-javascript.html
On-Site Oxidation and Staged Construction Schedule Reduce Construction and Operating Risks

RENO, NEVADA -- (Marketwired) -- 04/22/14 -- Allied Nevada Gold Corp. ("Allied Nevada", "we", "us", "our" or the "Company") (TSX:ANV)(NYSE MKT:ANV) is pleased to provide a summary of the prefeasibility study results for the Hycroft mill expansion, completed by M3 Engineering and Technology ("M3") in association with the Company. M3 developed the process flow sheet, capital cost estimate, operating cost estimate and financial model, while Allied Nevada developed the mineral reserves and mine plan. The prefeasibility study assumes a two-phase construction plan for the mill expansion. With the successful completion and positive results of the Ambient Alkaline Oxidation ("AAO") pilot plant, we have incorporated the onsite oxidation of the sulfide concentrate into the prefeasibility study, which should allow us to produce dore on-site for sale (as compared with the previous plan of selling concentrate).
Key Highlights of the Prefeasibility Study Results of the prefeasibility study, based on a $1,300 per ounce of gold price and a $21.67 per ounce of silver price, include the following: -- Average annual production for the first five years of full production (2018-2022) of approximately 450,000 ounces of gold and 21.0 million ounces of silver (approximately 800,000 ounces gold equivalent(2)) -- Average adjusted cash costs per ounce(3) for the first five years of full production of $478 annually (with silver as a byproduct credit) -- Mining and processing life of 20 years -- Two-phase construction schedule reduces construction risk and moderates capital spending program; 60,000 tons of ore per day ("tpd") capacity in Phase 1, increasing to 120,000 tpd in Phase 2 -- Addition of on-site AAO circuit to oxidize and process sulfide concentrate to produce dore on-site -- Phase 1 capital of $900 million(1) to deliver annual production of 550,000 ounces gold equivalent in 2017 -- Phase 2 capital of $422 million to increase average annual production to approximately 800,000 ounces gold equivalent beginning in 2018 -- Life-of-mine ("LOM") after-tax internal rate of return ("IRR")(1) of 26.5% -- LOM net present value ("NPV")(1) of $1.7 billion (at a 5% discount) "I have stated repeatedly since joining Allied Nevada that I believe that the Hycroft mill expansion is a project that needs to be built," commented Randy Buffington, President and CEO. "There aren't many large projects today located in politically stable jurisdictions with complementary infrastructure like this one. The ability to produce dore on-site using the AAO process is a significant step toward derisking the project. I believe this prefeasibility study addresses many of the risks associated with the previous plans, while maintaining robust financial returns." 
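The gold-equivalent arithmetic behind these headline figures can be checked directly from the 60:1 silver-to-gold ratio stated in footnote (2), which is also consistent with the $1,300/$21.67 price assumptions. A quick sketch (the function name is ours, not the Company's):

```python
def gold_equivalent_oz(gold_oz, silver_oz, ratio=60):
    """Convert combined gold and silver production to gold-equivalent
    ounces using the release's 60:1 silver-to-gold ratio."""
    return gold_oz + silver_oz / ratio

# First five years of full production: ~450,000 oz Au and 21.0M oz Ag per year.
geo = gold_equivalent_oz(450_000, 21_000_000)
print(f"{geo:,.0f} gold-equivalent ounces")  # matches the ~800,000 GEO figure
```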
The following summarizes LOM assumptions for the mill expansion project: ---------------------------------------------------------------------------- LOM (2014-2033) ---------------------------------------------------------------------------- Total tons of ore processed (000s) 934,020 ---------------------------------------------------------------------------- Grade - Au (ounces per ton) 0.011 ---------------------------------------------------------------------------- Grade - Ag (ounces per ton) 0.50 ---------------------------------------------------------------------------- Total gold ounces sold (000s) 7,366 ---------------------------------------------------------------------------- Total silver ounces sold (000s) 328,606 ---------------------------------------------------------------------------- Total gold equivalent ounces sold (000s) 12,843 ---------------------------------------------------------------------------- Total revenue at $1,300/oz Au and $21.67/oz Ag (millions) $16,696 ---------------------------------------------------------------------------- Revenue per ton processed $17.88 ---------------------------------------------------------------------------- Mining cost per ton mined $1.40 ---------------------------------------------------------------------------- Milling cost per ton of ore milled (includes all treatment costs) $8.83 ---------------------------------------------------------------------------- Heap leach cost per ton of ore heap leached (includes crushing) $2.53 ---------------------------------------------------------------------------- G&A cost per ton of ore processed $0.35 ---------------------------------------------------------------------------- Nevada Net Proceeds Tax and refining cost per ton of ore processed $0.45 ---------------------------------------------------------------------------- Construction; tails leach tanks; oxidation tanks; an oxygen plant; starter tails capacity; and the associated infrastructure for these 
facilities. In Phase 1, the rail spur and power requirements are expected to be constructed. Current estimates assume that the oxygen plant will be constructed by a third-party vendor, which is accounted for in operating costs. The flow sheet will be designed with three streams to process: (1) whole ore; (2) sulfide with the AAO plant; or (3) transitional material with the AAO circuit and a tails leach plant. Our projections are based on construction of Phase 1 commencing in the last quarter of 2014, which is dependent on our ability to secure the necessary financing. Phase 1 is scheduled to be completed within 24 months, which would result in commissioning in the last quarter of 2016. Upon successful commissioning of Phase 1 of the mill expansion, we intend to begin construction of Phase 2; Phase 2 is currently projected to begin commissioning in the last quarter of 2017. ------------------------------------------------------ PHASE 1 PHASE 2 ---------------------------------------------------------------------------- Expansion includes: - Mill infrastructure - 1 x SAG mill - 1 x SAG mill - 2 x Ball mills - 2 x Ball mills - Increased flotation capacity - Partial flotation - Additional tanks for AAO circuit and tails leach circuits - Partial AAO circuit - Oxygen plant (over the fence) - Tails leach plant - Rail spur and trona handling - Power line - Tails impoundment ---------------------------------------------------------------------------- Flow Sheet Description and Significant Changes Mining initially will be conducted using the existing fleet of mining equipment at Hycroft. Between 2017 and 2025 we expect, oxidation, tails leach and Merrill-Crowe (existing). Significant changes to the flow sheet include the elimination of cleaner flotation and the autoclave facility and the addition of a fourth ball mill, AAO circuit, oxygen plant and tails leach plant. 
We anticipate adding a high-grade filter press and additional refinery capacity to the existing Merrill-Crowe plant to process the increase in production from the sulfide mill. The current mill design is expected to have three stream capabilities, including: 1. Mill Stream 1 (Whole ore leach) - highly oxidized transitional ore is ground and leached in the tails leach plant. 2. Mill Stream 2 (Sulfide processing) - sulfidic ore is ground and floated to create a concentrate. The concentrate is oxidized through the AAO plant and leached to extract the gold and silver. 3. Mill Stream 3 (Partially oxidized transition processing) - transition material that is partially oxidized is ground and floated to create a concentrate. The concentrate is oxidized and leached, while the remaining material that does not float (the flotation tails) is leached directly. Average recovery rates vary depending on the process stream. LOM average recoveries are projected to be as follows: ---------------------------------------------------------------- GOLD SILVER ---------------------------------------------------------------- Contained Recovered Recovery Contained Recovered Recovery Ounces Ounces (%) Ounces Ounces (%) --------------------------------------------------------------------------- Heap Leach: --------------------------------------------------------------------------- - Run-of- mine 1,695,600 929,373 54.8 61,559,646 8,681,552 14.1 --------------------------------------------------------------------------- - Crushed 334,456 238,776 71.4 13,612,611 3,004,196 22.1 --------------------------------------------------------------------------- Mill stream 1 836,646 556,804 66.6 35,563,100 26,969,509 75.8 --------------------------------------------------------------------------- Mill stream 2 6,552,751 4,678,108 71.4 259,717,083 208,878,702 80.4 --------------------------------------------------------------------------- Mill stream 3 1,009,395 741,827 73.5 93,343,558 83,948,674 89.9 
-----------================================================================ TOTAL 10,428,847 7,144,887 68.5 463,795,999 331,482,633 71.5 -----------================================================================ Economic Analysis As noted above, the results of the revised prefeasibility study indicate an IRR projected to be 26.5% and a NPV projected to be $1.7 billion at a discount rate of 5%, assuming gold and silver prices of $1,300 per ounce and $21.67 per ounce, respectively and based on additional assumptions set forth in the table titled "Assumptions used in the prefeasibility study estimate" at the end of this press release. The initial capital to construct the mill and associated infrastructure is on a go-forward basis and does not include capital spent to date on the mill expansion such as mills and motors, crushing and excavation. The cash flow model considers the current heap leach revenue and costs as part of the project. The capital estimate to construct the mill in two phases is presented in the following table. -------------------------------- PHASE 1 PHASE 2 Total millions millions millions ---------------------------------------------------------------------------- Direct costs $525.4 $276.2 $801.6 ---------------------------------------------------------------------------- Indirect costs (includes contingency) $262.9 $138.3 $401.1 ---------------------------------------------------------------------------- Owners cost $111.8 $7.4 $119.2 ---------------------------------------------------------------------------- Total capital $900.1 $421.9 $1,322.0 ---------------------------------------------------------------------------- The Hycroft mill expansion is projected to generate a significant amount of gold and silver at relatively low adjusted cash costs per ounce(3). The project, however, is extremely sensitive to metal prices. 
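The NPV quoted here is a standard discounted-cash-flow quantity. Purely as an illustration of the mechanics (the cash-flow schedule below is invented, not the study's actual model), NPV at a 5% discount rate works like this:

```python
def npv(rate, cash_flows):
    """Net present value of a series of annual cash flows,
    with cash_flows[0] occurring today (undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical schedule in $ millions: two years of construction spending,
# then a run of net operating inflows.
flows = [-900.0, -422.0, 400.0, 400.0, 400.0, 400.0, 400.0]
print(round(npv(0.05, flows), 1))  # smaller than the undiscounted sum of 678.0
```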
The following table illustrates the sensitivity to changes to the calculated IRR and NPV(1) at 0% and 5% discount rates at various gold and silver prices and based on a constant ratio of the silver price to the gold price of 60:1. Additional sensitivity would result from changes to this ratio. ---------------------------------------------------------------------------- After tax NPV After tax NPV Metal Prices (0%) (5%) After tax IRR ---------------------------------------------------------------------------- Au Ag Billions Billions % ---------------------------------------------------------------------------- $1,200 $20.00 $2.3 $1.1 17.5 ---------------------------------------------------------------------------- $1,300 $21.67 $3.2 $1.7 26.5 ---------------------------------------------------------------------------- $1,400 $23.33 $4.0 $2.2 36.8 ---------------------------------------------------------------------------- Next Steps The Board of Directors has reviewed the results of the prefeasibility study and approved moving forward with completing the feasibility study. M3 is expected to complete the feasibility study by the end of the third quarter. Financing We have retained Credit Suisse Securities (USA) LLC and expect to retain Scotia Capital Inc. to advise us and execute on financing and/or investment options for the mill expansion capital requirements. Mineral Reserve and Resource Update The revised prefeasibility study was based on the Proven and Probable Mineral Reserves estimated at December 31, 2013, as updated to reflect the new mine plan and for the economics of the study, of 10.6 million ounces of gold and 467.1 million ounces of silver. Proven and Probable Mineral Reserves were calculated using a $1,200 per ounce gold price and a $20 per ounce silver price. Conference Call Information We will host a conference call and webcast on April 23, 2014, at 8:00 am PT, to discuss the revised prefeasibility study. 
The listen-only webcast can be accessed from the home page of our website at. To dial in to the conference call and participate in the question-and-answer session, please dial: North America toll-free - 1-866-782-8903 Outside of North America - 1-647-426-1845 An audio recording of the call will be archived on our website at. Assumptions used in the prefeasibility study estimate: ---------------------------------------------------------------------------- Parameter Assumption Description ---------------------------------------------------------------------------- Mining years 17 years 150 million tons per year ("mtpy") by 2017, 200 mtpy by 2025 ---------------------------------------------------------------------------- Processing years 20 years Last 3 years are from stockpiles ---------------------------------------------------------------------------- Inflation None - real basis All projected revenue and costs were assumed to be in January 1, 2014 real terms, with no inflation applied. ---------------------------------------------------------------------------- Starting basis January 1, 2014 go- All economic analyses were done forward on a January 1, 2014, "go- forward" basis. ---------------------------------------------------------------------------- Capital structure Unlevered No debt financing or interest payments were assumed. ---------------------------------------------------------------------------- Discount rate 5% real All the NPVs shown in this report were calculated using a discount rate of 5%. Sensitivity analysis has been completed for 0% and 10% discount rates. ---------------------------------------------------------------------------- Metal prices (base $1,300/oz Au, Commodity prices were assumed to case) $21.67/oz Ag (60:1) be constant over the DCF timeframe. 
---------------------------------------------------------------------------- Refining charge $ 0.75 per Au and Ag Applied a refining charge of oz $ 0.50/oz and a deleterious elements charge of $0.25/oz. ---------------------------------------------------------------------------- Melt loss -0.5% Au and Ag Applied to account for melt losses during the refining process. ---------------------------------------------------------------------------- Payable metal 99.9% Au and 99.0% Ag Assumptions to arrive at payable metal are based on the current contract with Johnson Matthey. ---------------------------------------------------------------------------- Summary of updated mine plan and assumed economics: -------------------------------------------- First 5 Years Life of Project Average Average LOM Totals (2018-2022) (2018-2029) (2014-2033) ---------------------------------------------------------------------------- Production Information: Total tons of ore processed - heap leach (000's) 11,070 8,342 225,934 Tons of ore processed - mill (000's) 43,680 43,750 708,085 Tons of waste mined (000's) 88,938 98,100 1,454,192 ---------------------------------------------------------------------------- Total tons (000's) 143,688 150,192 2,388,212 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- Contained gold 615,094 610,151 10,428,847 ---------------------------------------------------------------------------- Contained silver 27,620,521 27,144,492 463,795,999 ---------------------------------------------------------------------------- Ounces sold - gold 449,413 438,040 7,366,306 Ounces sold - silver 21,022,341 21,247,041 328,605,552 ---------------------------------------------------------------------------- Ounces sold - gold equivalent 799,786 792,157 12,843,065 ---------------------------------------------------------------------------- Cash Flow Information: Cash inflows: 
---------------------------------------------------------------------------- Revenue from metal sales ($ 000's) 1,039,722 1,029,805 16,695,985 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- Cash outflows: ---------------------------------------------------------------------------- Operating costs ($ 000's) 670,241 637,105 11,173,832 ---------------------------------------------------------------------------- Income taxes ($ 000's) 16,685 37,764 497,408 Inventory adjustments ($ 000's) 2,792 21,742 (263,593) Reclamation spending & salvage ($ 000's) - - 77,413 Capital spending ($ 000's) 45,965 37,841 2,029,305 ---------------------------------------------------------------------------- Total Cash Outflows ($ 000's) 735,683 734,451 13,514,364 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- Net Cash Flow ($ 000's) 304,038 295,353 3,181,620 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- After-tax NPV @ 5% ($ 000's) 1,681,985 After-tax NPV @ 10% ($ 000's) 887,850 After-tax Internal Rate of Return % 26.5 Adjusted cash costs(3) per gold ounce sold: With silver as byproduct credit ($ / Oz) 478 404 550 Gold equivalent ($ / Oz) 838 804 870 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- From a silver production and sales point of view: Silver equivalent production (ounces Ag) 47,987,121 47,529,441 40,774,889 Adjusted cash costs per ounce with gold as a byproduct credit ($/silver ounce sold) $2 $1 $3 ---------------------------------------------------------------------------- Metal selling prices used in determining the economics for the project were $1,300 
per ounce of gold and $21.67 per ounce of silver. Gold equivalent is calculated using a 60:1 silver to gold ounce ratio. No assurance or guarantee is provided that the calculated IRR or NPV values will be achieved. Actual results may differ materially. Allied Nevada will file an NI 43-101 compliant Technical Report within the regulatory timeframe. Once filed the report will be available at or under the Company's profile at. ---------------------------------------------------------------------------- Proven & Probable Mineral Reserves - December 31, 2013 ---------------------------------------------------------------------------- Tons Grades Contained Ounces (000s) ---------------------------------------------------------------------------- (000s) Au Ag AuEq Au Ag AuEq ---------------------------------------------------------------------------- Proven Heap Leach 163,479 0.009 0.14 0.011 1,440 22,446 1,814 Probable Heap Leach 45,561 0.008 0.72 0.020 342 32,924 891 ---------------------------------------------------------------------------- Total Proven & Probable Heap Leach 209,041 0.009 0.26 0.013 1,782 55,370 2,705 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- Proven Mill 593,559 0.012 0.57 0.022 7,144 340,823 12,825 Probable Mill 148,402 0.011 0.48 0.019 1,630 70,953 2,812 ---------------------------------------------------------------------------- Total Proven & Probable Mill 741,960 0.012 0.55 0.021 8,774 411,776 15,980 ---------------------------------------------------------------------------- ---------------------------------------------------------------------------- TOTAL PROVEN & PROBABLE MINERAL RESERVES 951,001 0.011 0.49 0.019 10,556 467,146 18,342 ---------------------------------------------------------------------------- Waste 1,444,275 ---------------------------------- Total Tons 2,395,276 ---------------------------------- Strip Ratio 1.52 
---------------------------------- For additional information on key assumptions, parameters and methods used to estimate the mineral reserves, including quality assurance measures and other technical information in respect of the Hycroft Mine, please refer to our technical report entitled "Technical Report - Allied Nevada Gold Corp. - Hycroft Mine, Winnemucca, Nevada, USA" and dated March 6, 2013. Allied Nevada expects to file an updated Technical Report within the regulatory timeframe required by National Instrument 43-101 requirements.; future gold and silver prices; the timing of or expected results of the completed feasibility study; delays in processing gold and silver; the potential for confirming, upgrading and expanding gold and silver mineralized material at Hycroft; reserve and resource estimates and the timing of the release of updated estimates; estimates of gold and silver grades; future prices for gold and silver; recovery rates for gold and silver; anticipated operating, capital and construction costs, anticipated sales, project economics, net present values and expected rates of return; the realization of expansion and construction activities and the costs and timing thereof; availability and cost of Allied Nevada's exploration and property advancement efforts will not be successful; risks relating to fluctuations in the price of gold and silver; an increase in the cost or timing of new projects; the inherently hazardous nature of mining-related activities; uncertainties concerning reserve, resource and grade estimates; uncertainties relating to obtaining approvals and permits from governmental regulatory authorities; and availability and timing of capital for financing the Company's exploration, development and expansion activities, including the uncertainty of being able to raise capital on favorable terms or at all;. 
The technical contents of this news release have been prepared under the supervision of Daniel Roth, Project Manager at M3 Engineering and Technology, and Tony Peterson, Corporate Mine Engineer at Allied Nevada Gold Corp., a Registered Professional Engineer in the State of Colorado #43867 who are Qualified Persons as defined by National Instrument 43-101. For further information regarding the quality assurance program and the quality control measures applied, as well as other relevant technical information, please see the Hycroft Technical Report which will be filed within the regulatory timeframe. Non-GAAP Measures -.. (1) All costs are on a go-forward basis and consider capital spent to date as sunk costs. Net Present Value ("NPV") and Internal Rate of Return ("IRR") are calculated using $1,300 per gold ounce and $21.67 per silver ounce and additional assumptions set forth in the table titled "Assumptions used in the prefeasibility study estimate" at the end of this press release. No assurance or guarantee is provided that the calculated IRR or NPV values will be achieved. Actual results may differ materially. (2) Gold equivalent values are calculated using a 60:1 silver to gold ratio. (3) The term "adjusted cash costs per ounce" is a non-GAAP financial measure. Non-GAAP financial measures do not have any standardized meaning prescribed by GAAP and, therefore, should not be considered in isolation or as a substitute for measures of performance prepared in accordance with GAAP. See the section at the end of this press release and in the most recently filed Annual Report on Form 10-K titled "Non- GAAP Financial Measures" for further information on adjusted cash costs per ounce. Contacts: Allied Nevada Gold Corp. Randy Buffington President & CEO (775) 358-4455 Allied Nevada Gold Corp. Tracey Thom Vice President, Investor Relations (775) 789-0119 Published April.
http://news.sys-con.com/node/3067973
Re: Class export by regular DLL (vc++) be used in vb From: Ralph (msnews.20.nt_consulting32_at_spamgourmet.com) Date: 12/07/04 - Next message: Frank Adam: "Re: RichTextBox changing properties" - Previous message: Dan: "Re: Window display mystery" - In reply to: polybear: "Class export by regular DLL (vc++) be used in vb" - Messages sorted by: [ date ] [ thread ] Date: Tue, 7 Dec 2004 08:51:14 -0600 "polybear" <poly@mega> wrote in message news:%23mLBZJC3EHA.2012@TK2MSFTNGP15.phx.gbl... > > I had a simple class export by regular DLL written by vc++, > Other develop tool like VB are going to use it. > But i don't know how the refernce/declare the class in VB .... > > Can i use this class in VB directly , or should i pack this class > into ActiveX or COM object format then reference it in VB IDE? > > What is the difference between ActiveX and COM in usage? > A little confused by your question, as you appear to be referring to two separate entities. A 'regular' DLL refers to a dynamic library which exports functions (using the WINAPI convention). There are two basic methods to use these functions in your VB app. 1) Use the Declare Function directive to declare them in your VB code, or 2) Create a typelib for the DLL and reference that in your app. Of the two, IMHO, a type library is the only way to go, unless you are only going to import a few functions. (Using the Declare statement adds to the size of a program, is mildly slower than using a type library, and can lead to subtle errors if you aren't consistent.) But then you mention "classes". There is no simple way to use (import) a C++ class (native or MFC) in a VB application. The easiest and sanest way is to wrap the c++ classes with an ActiveX component (perhaps using ATL) and import that into your VB app. Perhaps even rewriting the original DLL as an ActiveX component (assuming you have the source code). So what do you really have - a 'regular' DLL or what? 
The practical difference between "ActiveX" and "COM" is just a matter of names and context. COM is a protocol outlining a binary standard for interprocess communication and sharing of services. The actual engine to implement COM is OLE. Back in the old days (VB1-4(16bit)) the various components were called "OLE" servers/containers/controls, etc. The M$ marketing started calling them "ActiveX Components". We very seldom ever work with COM directly, instead we use various implementations, but in general anything based on COM is called "COM". It can get even more confusing when programmers refer to "COM+" as simply "COM". COM+ is actually a collection of advanced COM runtime implementations that is embedded in the OS to provide additional services. hth -ralph - Next message: Frank Adam: "Re: RichTextBox changing properties" - Previous message: Dan: "Re: Window display mystery" - In reply to: polybear: "Class export by regular DLL (vc++) be used in vb" - Messages sorted by: [ date ] [ thread ]
http://www.tech-archive.net/Archive/VB/microsoft.public.vb.general.discussion/2004-12/0995.html
sl_se_key_descriptor_t Struct Reference Contains a full description of a key used by an SE command. #include <sl_se_manager_types.h> Field Documentation ◆ type Key type. ◆ size Key size, applicable if key_type == SYMMETRIC. ◆ flags Flags describing restrictions, permissions and attributes of the key. ◆ storage Storage location for this key. ◆ password Optional password for key usage (8 bytes). If no password is provided (NULL pointer), any key not stored as plaintext will be stored with a password of all-zero bytes. ◆ domain Pointer to domain descriptor if this key contains an asymmetric key on a custom domain. The reason for pointing instead of containing is to make it possible to have the parameters in ROM.
https://docs.silabs.com/gecko-platform/3.2/service/api/structsl-se-key-descriptor-t
Opened 5 years ago Closed 5 years ago Last modified 5 years ago #4709 defect closed fixed (fixed) _newclient.HTTP11ClientProtocol logs something awfully confusing in its write req errback Description I'm not sure if there's meaning behind logging the word foo in the log.err() call below? Perhaps this should be changed to something clearer. def ebRequestWriting(err): if self._state == 'TRANSMITTING': self._state = 'GENERATION_FAILED' self.transport.loseConnection() self._finishedRequest.errback( Failure(RequestGenerationFailed([err]))) else: log.err(err, "foo") Attachments (1) Change History (10) Changed 5 years ago by djfroofy: This makes logging a little clearer comment:1 Changed 5 years ago by djfroofy - Keywords review added - Owner changed from jknight to exarkun comment:2 Changed 5 years ago by djfroofy Branch on launchpad lp:~djfroofy/twisted/webclientlogging-4709 comment:3 Changed 5 years ago by exarkun - Owner exarkun deleted comment:4 Changed 5 years ago by exarkun - Type changed from enhancement to defect comment:5 Changed 5 years ago by jesstess - Branch set to branches/webclient-logging-4709 comment:6 Changed 5 years ago by jesstess comment:7 Changed 5 years ago by jesstess - Resolution set to fixed - Status changed from new to closed comment:8 Changed 5 years ago by jesstess - Cc jesstess added - Keywords review removed Thanks for the bug report, patch, and unit test, djfroofy! comment:9 Changed 5 years ago by <automation> Note: See TracTickets for help on using tickets.
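For context, the pattern at issue and its fix can be sketched outside Twisted with a stand-in logger. The descriptive message below is illustrative only; the actual wording landed in branches/webclient-logging-4709 may differ:

```python
captured = []

def log_err(failure, why):
    """Stand-in for twisted.python.log.err(failure, why)."""
    captured.append((failure, why))

def eb_request_writing(err, state):
    # Before the fix, the non-TRANSMITTING branch logged log_err(err, "foo"),
    # which tells the log reader nothing about what went wrong or where.
    # A descriptive second argument is strictly more useful:
    if state != 'TRANSMITTING':
        log_err(err, "Error writing request, but not in valid state "
                     "to finalize request: %s" % state)

eb_request_writing(ValueError("boom"), 'WAITING')
print(captured[0][1])
```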
http://twistedmatrix.com/trac/ticket/4709
webpack has a reputation for being super complex and difficult to implement. But at its most basic, it can do a lot with little development effort. Let's walk through a simple example together. webpack has a reputation for being pretty gnarly. If you've dug through the code of an established project using webpack, it's likely mind-boggling at best. Shoot, take a look at the source code for Next.js — they have an entire directory to manage webpack configuration. That complexity is due, in large part, to its power. webpack can do a lot. Fortunately, the fine folks building this free and open source tool have been working hard to make each version a little easier to use than the previous. And now, you can start very simply, with very little config. Thus, you can justify the power of webpack in the smallest and simplest of projects. Let's do that. Let's build a super simple build pipeline to bundle multiple JavaScript modules together into a single file that we can load from any HTML page. You can take a look at the full code example at any point if you get stuck. There's one big gotcha we'll have to overcome along the way. The output bundle is obfuscated and anonymous. That means we can't access it by default. And even if we could, we likely wouldn't know how to navigate it. In our case, we want to access our bundled code from external places (like an HTML file), so we're going to load our main exports into an App object that we can then access from that main HTML file. Specifically in this example, we want to be able to call App.Logger.sayHi() and see it print "Hi!" to the console. Let's go! If you have a project ready to go, great! If not, feel free to follow my steps to get started, with the following notes: These are the dependencies we're going to use: http-server webpack webpack-cli And here are the scripts to add to package.json: package.json { // ... 
"scripts": { "build": "WEBPACK_ENV=production webpack", "dev": "webpack", "serve": "http-server dist -p 8000" } } Now let's add a couple JavaScript files. First, our Logger at src/modules/logger.js: src/modules/logger.js const sayHi = () => { console.log("Hi!"); }; export { sayHi }; And our main file ( src/main.js), which will be responsible for exporting the Logger object. src/main.js import * as Logger from "./modules/logger"; export { Logger }; If this were a bigger web project where you have more files in your src directory, you'd probably want to put these files in some other nested place, like a js directory. Next, let's add our webpack config. This code example is commented so you can see what's going on: webpack.config.js const path = require("path"); // Used to determine whether to watch the files or build. const env = process.env.WEBPACK_ENV || "development"; module.exports = { // The main file for the bundle. entry: "./src/main.js", output: { // Name of the bundle file. filename: "bundle.js", // Directory in which the bundle should be placed. // Here we're saying `dist/js/bundle.js` will be our bundled file. path: path.resolve(__dirname, "dist/js"), // These two library items tells webpack to make the code exported by main.js available as a variable called `App`. libraryTarget: "var", library: "App", }, mode: env, // If we're in development mode, then watch for changes, otherwise just do a single build. watch: env !== "production", }; To summarize: main.js is the primary targeted file, which will be bundled to dist/js/bundle.js. The exports from main.js will be available globally in an App variable. When the WEBPACK_ENV is set to something other than production, webpack will watch for changes. When WEBPACK_ENV is set to production, it will build the bundle and then stop running. We're setting this variable automatically in the scripts added to package.json. 
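Conceptually, the `libraryTarget` and `library` settings make webpack wrap the bundle in an expression assigned to a global `App` variable. This is a simplification of the real output, which also inlines webpack's module runtime, but it shows why `App.Logger.sayHi()` works from the HTML page:

```javascript
// Roughly what dist/js/bundle.js evaluates to for this project:
var App = (function () {
  // src/modules/logger.js
  const sayHi = () => {
    console.log("Hi!");
  };
  const Logger = { sayHi };

  // src/main.js re-exports Logger, so it surfaces as App.Logger
  return { Logger };
})();

App.Logger.sayHi(); // the call the HTML page makes
```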
Now let's add a simple index.html file to the dist directory, which is where the bundled JS file is going to be placed.

dist/index.html

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Simple Webpack ES6 Pipeline</title>
  </head>
  <body>
    <p>Hello there.</p>

    <script src="/js/bundle.js"></script>
    <script>
      App.Logger.sayHi();
    </script>
  </body>
</html>
```

In most real-world cases, you're probably going to have some sort of build step that would place the file here, likely provided by the static site generator or other framework you're using. In this case, we're placing the file in here as though it was already built so we can stay focused and not worry about all that setup.

We actually have two commands we have to run to get this to work. First, build the JavaScript bundle:

```
$ npm run build
```

Then you can run the web server:

```
$ npm run serve
```

Now visit localhost:8000, open your browser's console, and you should see the message "Hi!" printed!

If you want to make changes to the JavaScript and see them reflected without reloading your web server, you can use two terminal tabs. Run npm run serve in one to run the web server, and npm run dev in the other, which will watch for JavaScript changes and rebuild.

That's all it really takes to get up and running with webpack. Starting with a simple foundation is the key to understanding and working with webpack. Now you can build on this base and create something truly awesome and unique to you.
Homework 3: Trees, Data Abstraction

Due by 11:59pm on Wednesday, July

Abstraction

Mobiles

Acknowledgements. This mobile example is based on a classic problem from MIT's Structure and Interpretation of Computer Programs, Section 2.2.2.

We are making a planetarium mobile. A mobile is a type of hanging sculpture. A binary mobile consists of two arms. Each arm is a rod of a certain length, from which hangs either a planet or another mobile. For example, the below diagram shows the left and right arms of Mobile A, and what hangs at the ends of each of those arms.

We will represent a binary mobile using the data abstractions below.

- A mobile must have both a left arm and a right arm.
- An arm has a positive length and must have something hanging at the end, either a mobile or planet.
- A planet has a positive size, and nothing hanging from it.

Arms-length recursion (sidenote)

Before we get started, a quick comment on recursion with tree data structures. Consider the following function.

```python
def min_depth(t):
    """A simple function to return the distance between t's root and its closest leaf"""
    if is_leaf(t):
        return 0  # Base case---the distance between a node and itself is zero
    h = float('inf')  # Python's version of infinity
    for b in branches(t):
        if is_leaf(b):
            return 1  # !!!
        h = min(h, 1 + min_depth(b))
    return h
```

The line flagged with !!! is an "arms-length" recursion violation. Although our code works correctly when it is present, by performing this check we are doing work that should be done by the next level of recursion—we already have an if-statement that handles any inputs to min_depth that are leaves, so we should not include this line, to eliminate redundancy in our code.

```python
def min_depth(t):
    """A simple function to return the distance between t's root and its closest leaf"""
    if is_leaf(t):
        return 0
    h = float('inf')
    for b in branches(t):  # Still works fine!
        h = min(h, 1 + min_depth(b))
    return h
```

Arms-length recursion is not only redundant but often complicates our code and obscures the functionality of recursive functions, making writing recursive functions much more difficult. We always want our recursive case to handle one and only one recursive level. We may or may not be checking your code periodically for things like this.

Q1: Weights

Implement the planet data abstraction by completing the planet constructor and the size selector so that a planet is represented using a two-element list where the first element is the string 'planet' and the second element is its size. The total_weight example is provided to demonstrate use of the mobile, arm, and planet abstractions.

```python
def mobile(left, right):
    """Construct a mobile from a left arm and a right arm."""
    assert is_arm(left), "left must be a arm"
    assert is_arm(right), "right must be a arm"
    return ['mobile', left, right]

def is_mobile(m):
    """Return whether m is a mobile."""
    return type(m) == list and len(m) == 3 and m[0] == 'mobile'

def left(m):
    """Select the left arm of a mobile."""
    assert is_mobile(m), "must call left on a mobile"
    return m[1]

def right(m):
    """Select the right arm of a mobile."""
    assert is_mobile(m), "must call right on a mobile"
    return m[2]

def arm(length, mobile_or_planet):
    """Construct a arm: a length of rod with a mobile or planet at the end."""
    assert is_mobile(mobile_or_planet) or is_planet(mobile_or_planet)
    return ['arm', length, mobile_or_planet]

def is_arm(s):
    """Return whether s is a arm."""
    return type(s) == list and len(s) == 3 and s[0] == 'arm'

def length(s):
    """Select the length of a arm."""
    assert is_arm(s), "must call length on a arm"
    return s[1]

def end(s):
    """Select the mobile or planet hanging at the end of a arm."""
    assert is_arm(s), "must call end on a arm"
    return s[2]

def planet(size):
    """Construct a planet of some size."""
    assert size > 0
    "*** YOUR CODE HERE ***"

def size(w):
    """Select the size of a planet."""
    assert is_planet(w), 'must call size on a planet'
    "*** YOUR CODE HERE ***"

def is_planet(w):
    """Whether w is a planet."""
    return type(w) == list and len(w) == 2 and w[0] == 'planet'

def examples():
    t = mobile(arm(1, planet(2)),
               arm(2, planet(1)))
    u = mobile(arm(5, planet(1)),
               arm(1, mobile(arm(2, planet(3)),
                             arm(3, planet(2)))))
    v = mobile(arm(4, t), arm(2, u))
    return (t, u, v)

def total_weight(m):
    """Return the total weight of m, a planet or mobile.

    >>> t, u, v = examples()
    >>> total_weight(t)
    3
    >>> total_weight(u)
    6
    >>> total_weight(v)
    9
    >>> from construct_check import check
    >>> # checking for abstraction barrier violations by banning indexing
    >>> check(HW_SOURCE_FILE, 'total_weight', ['Index'])
    True
    """
    if is_planet(m):
        return size(m)
    else:
        assert is_mobile(m), "must get total weight of a mobile or a planet"
        return total_weight(end(left(m))) + total_weight(end(right(m)))
```

Use Ok to test your code:

```
python3 ok -q total_weight
```

Q2: Balanced

Implement the balanced function, which returns whether m is a balanced mobile. A mobile is balanced if two conditions are met:

- The torque applied by its left arm is equal to that applied by its right arm. The torque of the left arm is the length of the left rod multiplied by the total weight hanging from that rod. Likewise for the right. For example, if the left arm has a length of 5, and there is a mobile hanging at the end of the left arm of weight 10, the torque on the left side of our mobile is 50.
- Each of the mobiles hanging at the end of its arms is balanced.

Planets themselves are balanced, as there is nothing hanging off of them.

```python
def balanced(m):
    """Return whether m is balanced.

    >>> t, u, v = examples()
    >>> balanced(t)
    True
    >>> balanced(v)
    True
    >>> w = mobile(arm(3, t), arm(2, u))
    >>> balanced(w)
    False
    >>> balanced(mobile(arm(1, v), arm(1, w)))
    False
    >>> balanced(mobile(arm(1, w), arm(1, v)))
    False
    >>> from construct_check import check
    >>> # checking for abstraction barrier violations by banning indexing
    >>> check(HW_SOURCE_FILE, 'balanced', ['Index'])
    True
    """
    "*** YOUR CODE HERE ***"
```

Use Ok to test your code:

```
python3 ok -q balanced
```

Q3: Totals

Implement totals_tree, which takes a mobile (or planet) and returns a tree whose root is the total weight of the input. For a planet, totals_tree should return a leaf. For a mobile, totals_tree should return a tree whose label is that mobile's total weight, and whose branches are totals_trees for the ends of its arms. As a reminder, the description of the tree data abstraction can be found here.

```python
    >>> from construct_check import check
    >>> # checking for abstraction barrier violations by banning indexing
    >>> check(HW_SOURCE_FILE, 'totals_tree', ['Index'])
    True
    """
    "*** YOUR CODE HERE ***"
```

Use Ok to test your code:

```
python3 ok -q totals_tree
```

Trees

Q4: Replace Loki at Leaf

Define replace_loki_at_leaf, which takes a tree t and a value lokis_replacement. replace_loki_at_leaf returns a new tree that's the same as t except that every leaf label equal to "loki" has been replaced with lokis_replacement. If you want to learn about the Norse mythology referenced in this problem, you can read about it here.

```python
def replace_loki_at_leaf(t, lokis_replacement):
    """Returns a new tree where every leaf value equal to "loki" has
    been replaced with lokis_replacement.

    >>> yggdrasil = tree('odin',
    ...                  [tree('balder',
    ...                        [tree('loki'),
    ...                         tree('freya')]),
    ...                   tree('frigg',
    ...                        [tree('loki')]),
    ...                   tree('loki',
    ...                        [tree('sif'),
    ...                         tree('loki')]),
    ...                   tree('loki')])
    >>> laerad = copy_tree(yggdrasil) # copy yggdrasil for testing purposes
    >>> print_tree(replace_loki_at_leaf(yggdrasil, 'freya'))
    odin
      balder
        freya
        freya
      frigg
        freya
      loki
        sif
        freya
      freya
    >>> laerad == yggdrasil # Make sure original tree is unmodified
    True
    """
    "*** YOUR CODE HERE ***"
```

Use Ok to test your code:

```
python3 ok -q replace_loki_at_leaf
```

Q5: Has Path

Write a function has_path that takes in a tree t and a string word. It returns True if there is a path that starts from the root where the entries along the path spell out the word, and False otherwise. (This data structure is called a trie, and it has a lot of cool applications!---think autocomplete). You may assume that every node's label is exactly one character.

```python
def has_path(t, word):
    """Return whether there is a path in a tree where the entries along the path
    spell out a particular word.

    >>> greetings = tree('h', [tree('i'),
    ...                        tree('e', [tree('l', [tree('l', [tree('o')])]),
    ...                                   tree('y')])])
    >>> print_tree(greetings)
    h
      i
      e
        l
          l
            o
        y
    >>> has_path(greetings, 'h')
    True
    >>> has_path(greetings, 'i')
    False
    >>> has_path(greetings, 'hi')
    True
    >>> has_path(greetings, 'hello')
    True
    >>> has_path(greetings, 'hey')
    True
    >>> has_path(greetings, 'bye')
    False
    >>> has_path(greetings, 'hint')
    False
    """
    assert len(word) > 0, 'no path for empty word.'
    "*** YOUR CODE HERE ***"
```

Use Ok to test your code:

```
python3 ok -q has_path
```

Alyssa wants her system to manipulate inexact quantities (such as measurements from physical devices) with known precision, so that when computations are done with such approximate quantities the results will be numbers of known precision. For example, if a measured quantity x lies between two numbers a and b, Alyssa would like her system to use this range in computations involving x.

Alyssa's idea is to implement interval arithmetic as a set of arithmetic operations for combining "intervals" (objects that represent the range of possible values of an inexact quantity). The result of adding, subtracting, multiplying, or dividing two intervals is also an interval, one that represents the range of the result.

Alyssa suggests the existence of an abstraction called an "interval" that has two endpoints: a lower bound and an upper bound. She also presumes that, given the endpoints of an interval, she can create the interval using the constructor of an Abstract Data Type, together with the appropriate selectors.

Q7:

Q8: Sub Interval

Using reasoning analogous to Alyssa's, define a subtraction function for intervals. Try to reuse functions that have already been implemented if you find yourself repeating code.

```python
def sub_interval(x, y):
    """Return the interval that contains the difference between any value in x
    and any value in y."""
    "*** YOUR CODE HERE ***"
```

Use Ok to unlock and test your code:

```
python3 ok -q sub_interval -u
python3 ok -q sub_interval
```

Q9:

Q10:

Q11: (see Q10: Par Diff for these functions!) Is she right? Why? Write a function that returns a string containing a written explanation of your answer:

Note: To make a multi-line string, you must use triple quotes """ like this """.

```python
def multiple_references_explanation():
    return """The multiple reference problem..."""
```
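To make the Q8 subtraction rule concrete, here is a small illustrative sketch. This is not the official homework solution, and the names interval, lower_bound, and upper_bound plus the two-element-list representation are assumptions made for this example only; the real assignment's abstraction may differ. The key idea: the smallest possible difference pairs x's lower bound with y's upper bound, and the largest pairs x's upper bound with y's lower bound.

```python
# Illustrative sketch only; constructor/selector names are assumed.
def interval(a, b):
    """Construct an interval from lower bound a to upper bound b."""
    return [a, b]

def lower_bound(x):
    """Select the lower bound of interval x."""
    return x[0]

def upper_bound(x):
    """Select the upper bound of interval x."""
    return x[1]

def add_interval(x, y):
    """Smallest interval containing the sum of any value in x and any value in y."""
    return interval(lower_bound(x) + lower_bound(y),
                    upper_bound(x) + upper_bound(y))

def sub_interval(x, y):
    """Smallest interval containing the difference between any value in x
    and any value in y: the low end subtracts y's high end, and vice versa."""
    return interval(lower_bound(x) - upper_bound(y),
                    upper_bound(x) - lower_bound(y))

print(sub_interval(interval(5, 8), interval(1, 2)))  # → [3, 7]
```

Note that subtraction can also be expressed by reusing add_interval with a negated interval, which is the kind of reuse Q8 hints at.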
This tutorial describes how to use the Gradle build tool.

1. Introduction to the Gradle build system

1.1. What is the Gradle build system?

Gradle is a general purpose build management system. Gradle supports the automatic download and configuration of dependencies or other libraries. It supports Maven and Ivy repositories for retrieving these dependencies. Gradle supports multi-project and multi-artifact builds.

1.2. Projects and tasks in Gradle

Gradle builds are described via one or multiple build.gradle files. At least one build file is typically located in the root folder of the project. Each build file defines a project and its tasks.

Projects can be something which should be built or something that should be done. Each project consists of tasks. A task represents a piece of work which a build performs, e.g., compile the source code or generate the Javadoc.

These build files are based on a Domain Specific Language (DSL). In this file you can use a combination of declarative and imperative statements. You can also write Groovy or Kotlin code, whenever you need it. Tasks can also be created and extended dynamically at runtime.

The following listing represents a very simple build file.

```groovy
task hello {
    doLast {
        println 'Hello Gradle'
    }
}
```

To execute the hello task in this build file, type gradle hello on the command line in the directory of the build file. If the Gradle output should be suppressed, use the -q (quiet) parameter.

```
gradle hello
# alternatively add the -q flag
gradle -q hello
```

1.3. Comments in Gradle build files

You can use single and multiline comments in Gradle build files.

```groovy
// Single line comment

/*
Multi line
comment
*/
```

1.4. Project settings and description

By default, Gradle uses the directory name as project name. You can change this by creating a settings.gradle file in the directory which specifies the project name.

```groovy
rootProject.name = 'com.vogella.gradle.first'
```

You can also add a description to your project via the build.gradle file.
```groovy
description = """
Example project for a Gradle build
Project name: ${project.name}
More detailed information here...
"""

task hello {
    doLast {
        println 'Hello Gradle'
    }
}
```

Use the gradle projects command to get information about your project. The following listing shows the output.

```
:projects

------------------------------------------------------------
Root project - Example project for a Gradle build
Project name: com.vogella.gradle.first
More detailed information here...
------------------------------------------------------------

Root project 'com.vogella.gradle.first' - Example project for a Gradle build
Project name: com.vogella.gradle.first
More detailed information here...

No sub-projects
```

To see a list of the tasks of a project, run:

```
gradle <project-path>:tasks
```

For example, try running gradle :tasks:

```
gradle :tasks

BUILD SUCCESSFUL

Total time: 1.048 secs
```

2. Gradle plug-ins

The Gradle build system uses plug-ins to extend its core functionality. A plug-in is an extension to Gradle which typically adds some preconfigured tasks. Gradle ships with a number of plug-ins, and you can develop custom plug-ins. One example is the Java plug-in. This plug-in adds tasks to your project which allow compiling Java source code, running unit tests and creating a JAR file.

A plug-in is included in a build.gradle file with the apply plugin: 'pluginname' statement. For example the entry apply plugin: 'com.android.application' makes the Android plug-in available for a Gradle build. Gradle also provides a registry for plug-ins via the Gradle Plugin search.

2.1. IDE support for Gradle

The Gradleware company is developing the Eclipse Gradle tooling via the Eclipse Buildship project. Other IDEs like IntelliJ and Android Studio already include good Gradle support.

3. Installing and configuring Gradle

The usage of Gradle requires a JDK (Java Development Kit) installation.

3.1. Download and extract Gradle

The latest version of Gradle can be found on the Gradle Download page.
Download the latest Complete distribution. It is a gradle-${version}-all.zip, where ${version} is a placeholder for the current version. Extract the contents of the downloaded zip file into a new folder.

3.2. Installing Gradle on Windows

Add the folder to which you extracted Gradle to your PATH environment variable. By pressing Win + Pause the system settings can be opened. First the Advanced System Settings have to be selected and then the Environment Variables button needs to be pressed. In the Environment Variables dialog the (1) GRADLE_HOME and JAVA_HOME user variables should be set. After that the (2) Path entry in the system variables is selected, and the modify button can be pressed to add the bin folder of the Gradle installation to the Path.

3.3. Installing Gradle on Linux/Mac

3.3.1. Manual installation

The JAVA_HOME variable must point to a proper JDK and $JAVA_HOME/bin must be part of the PATH environment variable. Add Gradle to the path by running the following in a terminal:

```
export PATH=/usr/local/gradle/FOLDER_TO_WHICH_YOU_EXTRACTED_GRADLE/bin:$PATH
```

3.3.2. Installation with SDKMAN!

SDKMAN! is a command-line tool that allows you to install multiple Gradle versions and switch between them. It runs on any UNIX based operating system.

Installing SDKMAN!

You install it from the command-line. If you have already installed SDKMAN! you can skip this step.

```
curl -s "" | bash
```

After you've installed SDKMAN! you have to restart your terminal before using it.

Installing Gradle and setting the default version

```
sdk install gradle 3.2
sdk default gradle 3.2
gradle -v
```

Switching the Gradle version

```
sdk install gradle 2.13
# use 2.13 for the current terminal session
sdk use gradle 2.13
gradle -v
```

3.3.3. Check if the Gradle installation was successful

Open a command line and type gradle, which will run Gradle's help task by default.

3.4. Using the Gradle daemon for improved startup time

Gradle can be started as a daemon to avoid starting the Java virtual machine for every build.
To configure that, create a file called gradle.properties in ${HOME}/.gradle and add the following line to it:

```
org.gradle.daemon=true
```

You can also place the gradle.properties file in the root directory of your project and commit it to your version control system. If Gradle is not used for a few hours, the daemon stops automatically. Executing gradle with the --daemon parameter on the command line starts the gradle daemon. To stop the daemon interactively use the gradle --stop command.

3.5. Specify custom JVM settings for Gradle

The GRADLE_OPTS environment variable offers the opportunity to set specific JVM options for Gradle. In Using the Gradle daemon for improved startup time the performance of the JVM startup is improved, but another performance killer for large builds can be a too small maximum heap space. Therefore it makes sense to increase it for Gradle.

```
export GRADLE_OPTS=-Xmx1024m
```

This defines that Gradle can use 1 GB as maximum heap size. On Windows, environment variables are usually defined via the system property UI.

If you want to set JVM settings not globally but on a per project basis you can place them in <Your app folder>/gradle.properties:

```
org.gradle.jvmargs=-Xms2g -Xmx4g -XX\:MaxHeapSize\=3g
```

3.6. Typical .gitignore file for Gradle projects

If you are using Git as version control system, you can use the following .gitignore file as a template for a Gradle project.

```
# Android built artifacts
*.apk
*.ap_
*.dex

# Java build artifacts class files
*.class

# other generated files
bin/
gen/
build/

# local configuration file (for Android sdk path, etc)
local.properties

# OSX files
.DS_Store

# Eclipse project files
.classpath
.project

# Android Studio
*.iml
.idea
.gradle

# NDK
obj/
```

4. Exercise - Create a Java project with the Gradle command line

Gradle provides scaffolding support for creating Gradle based projects via the command line. Create a new directory on your file system, switch to it and run the following command.
```
gradle init --type java-library --test-framework junit-jupiter
```

This creates a new Gradle based Java project which uses JUnit Jupiter for unit testing. You can run the build via:

```
gradle build
```

And the generated test via:

```
gradle test
```

Gradle will generate a test report in the build/reports/tests/test folder.

5. Exercise: Configure Gradle properties

When using the gradle command the first time, a .gradle folder is created in the ${USER_HOME} directory.

- On Linux this is usually /home/${yourUserName}/.gradle
- On Windows this is usually C:\Users\${yourUserName}\.gradle
- On Mac this is usually /Users/${yourUserName}/.gradle

Inside this .gradle folder a gradle.properties file with the following contents has to be created.

```
org.gradle.warning.mode=all
```

Now the default of the org.gradle.warning.mode property will be overridden. (The default is summary.)

6. Dependency management for Java projects

6.1. Managing dependencies with Gradle

Gradle allows managing the classpath of your projects. It can add JAR files, directories or other projects to the build path of your application. It also supports the automatic download of your Java library dependencies. Simply specify the dependency in your Gradle build file. This triggers Gradle to download the library including its transitive dependencies during the build.

A Java library is identified by Gradle via its project's groupId:artifactId:version (also known as GAV in Maven). This GAV uniquely identifies a library in a certain version. You can use the Search Maven website to search for the GAV of a library in Maven Central.

To add a dependency, add an entry to the dependency section in your build.gradle file as demonstrated by the following listing.

```groovy
dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.squareup.okhttp:okhttp:2.5.0'
    testCompile 'junit:junit:4.12'
}
```

6.2. Specifying the repositories to search for dependencies

In your build file you specify the remote repositories to look for dependencies.
Gradle supports Maven and Ivy repositories to search for dependencies. The following listing shows how to configure Maven Central as dependency source.

```groovy
repositories {
    mavenCentral()
}
```

It is also possible to configure the target as a URL.

```groovy
repositories {
    maven {
        url ""
    }
}
```

You can also specify other targets, for example Bintray as Maven repository.

```groovy
repositories {
    maven ("")
}
```

The next listing demonstrates how to define an Ivy dependency.

```groovy
repositories {
    ivy {
        url ""
    }
}
```

You can also add different repositories at once.

```groovy
repositories {
    maven ("")
    jcenter {
        url ""
    }
}
```

You can also reference artifacts from the file system.

```groovy
apply plugin: 'java'

repositories {
    mavenCentral()
}

dependencies {
    compile group: 'commons-collections', name: 'commons-collections', version: '3.2'
    testCompile group: 'junit', name: 'junit', version: '4.+'
    runtime files('libs/library1.jar', 'libs/library2.jar')
    runtime fileTree(dir: 'libs', include: '*.jar')
    compile fileTree(dir: "${System.properties['user.home']}/libs/cargo", include: '*.jar')
}
```

6.3. Show dependencies of a project (also transitive dependencies)

The following command shows all transitive dependencies of a Gradle project.

```
gradle dependencies
```

6.4. Gradle cache and deleting the cache

You can refresh dependencies in your cache with the command line option --refresh-dependencies. You can also delete the cached files under ~/.gradle/caches. With the next build Gradle attempts to download the dependencies again.

6.5. Excluding transitive dependencies

Sometimes you have dependencies on packages that define conflicting transitive dependencies.
One solution is to exclude a dependency from a specific module:

```groovy
compile('org.springframework:spring-web:4.3.10.RELEASE') {
    exclude group: 'com.google.code.gson', module: 'gson'
}
```

If you have multiple dependencies that pull in a dependency you want to exclude, you can do so at the project level:

```groovy
configurations.all {
    exclude group: 'com.google.code.gson', module: 'gson'
}
```

Following the same approach we can exclude a dependency only during runtime:

```groovy
configurations.runtime {
    exclude group: 'com.google.code.gson', module: 'gson'
}
```

6.6. Forcing a specific version of a transitive dependency

It is possible to force Gradle to pick a specific version when it encounters conflicting transitive dependencies. Keep in mind that you might have to manually update this version when you are upgrading the packages that depend on it.

```groovy
configurations.all {
    resolutionStrategy.force 'com.google.code.gson:gson:2.8.1'
}
```

7. Running a build

When starting a Gradle build via the command line, the gradle command tool looks for a file called build.gradle in the current directory. Gradle also supports abbreviation of tasks, e.g., to start the task lars, the gradle l command is sufficient. The abbreviation must uniquely identify a task, otherwise Gradle gives you an error message telling you that the abbreviation is ambiguous. CamelCase can also be used for an abbreviation, e.g., the task vogellaCompany can also be called with the gradle vC command.

A Gradle build can be triggered via the gradle or gradle -q command. The -q or --quiet parameter makes the execution of Gradle less verbose. A specific task can be addressed like this: gradle -q other, which runs the "other" task. You can of course also use the Gradle wrapper script, if that is available.

To define a different build file the -b buildFileName option can be used. In scenarios where no network connection is available the --offline parameter can be used.
This runs the Gradle build offline, which means that Gradle does not try to reach resources from the network during a build, e.g., for dependencies from an artifact repository like Maven Central or Bintray. To get a detailed output of what Gradle is doing you can specify the --info parameter.

8. Gradle Tasks

8.1. Default Gradle tasks

Gradle also offers tasks for introspection of Gradle itself, so a Gradle project can be analyzed by using Gradle's default tasks. A good example is the tasks task, which shows the available tasks of a project. When typing gradle -q tasks, a list of tasks is shown. This command lists the base tasks even without a build.gradle file. Gradle also tries to give some guidance for the usage of invoked tasks, as shown at the bottom of the console output.

The gradle tasks --all command also lists dependent tasks, which are invoked before the actual task. When running gradle tasks --all the output looks quite similar to the one before, except for the init task, which depends on the wrapper task.

8.2. Creating custom Gradle tasks

You already created a first minimalistic task in a build.gradle file.

```groovy
task hello {
    doLast {
        println 'Hello Gradle'
    }
}
```

When running the gradle -q tasks command with this build.gradle file, the hello task will be listed under "Other tasks". Tasks without a group are considered private tasks. For instance, the Gradle Task View of the Eclipse Gradle plug-in does not show such tasks. But they can be shown by activating the right entry in the view's menu.

Groups can be applied with the group property and a description can be applied by using the description property. In case the group already exists the hello task is added to it. If the group does not exist, it is created.

```groovy
task hello {
    group 'vogella'
    description 'The hello task greets Gradle by saying "Hello Gradle"'
    doFirst {
        println 'Hello Gradle'
    }
    doLast {
        println 'Bye bye Gradle'
    }
}
```

8.3. Task structure

Gradle has different phases when working with tasks. First of all there is a configuration phase, where the code which is specified directly in a task's closure is executed. The configuration block is executed for every available task and not only for those tasks which are later actually executed. After the configuration phase, the execution phase then runs the code inside the doFirst or doLast closures of those tasks which are actually executed.

```groovy
task onlySpecifiesCodeForConfigurationPhase {
    group 'vogella'
    description 'Configuration phase task example.'
    println 'I always get printed even though, I am not invoked'
}

task anotherUnrelatedTask {
    doLast {
        println 'I am in the doLast execution phase'
    }
}
```

When running gradle -q anotherUnrelatedTask the following is printed:

```
I always get printed even though, I am not invoked
I am in the doLast execution phase
```

The first statement comes from the configuration phase, in which the task definition of onlySpecifiesCodeForConfigurationPhase is evaluated.

8.4. Task dependencies

Gradle allows the definition of default tasks in the build file. These are executed if no other tasks are specified. Tasks can also define their dependencies. Both settings are demonstrated in the following build file.

```groovy
defaultTasks 'clean', 'compile'

task clean {
    doLast {
        println 'Executing the clean task'
    }
}

task compile {
    doLast {
        println 'Executing the compile task'
    }
}

task other(dependsOn: 'compile') {
    doLast {
        println "I'm not a default task!"
    }
}

task cleanOther {
    doLast {
        println "I want to clean up before running!"
    }
}

cleanOther.dependsOn clean, compile
```

Hooking into predefined task executions for default tasks or tasks from plug-ins can also be done by using the dependsOn method.
For instance, when certain things have to be done right after the compilation of Java code:

```groovy
apply plugin: 'java'

task invokedAfterCompileJava(dependsOn: 'compileJava') {
    doLast {
        println 'This will be invoked right after the compileJava task is done'
    }
}
```

As an alternative to creating a new task which depends on the compileJava task, a new execution block can also be directly applied to an existing task, e.g., the compileJava task.

```groovy
apply plugin: 'java'

compileJava.doFirst {
    println 'Another action applied to the "compileJava" task'
}

compileJava.doLast {
    println 'Another doLast action is also applied'
}
```

When running the compileJava task, all actions which have been applied to it are run one by one, according to the order in which they have been applied to the task.

8.5. Skipping Tasks

Skipping tasks can be done by passing a predicate closure to the onlyIf method of a task or by throwing a StopExecutionException.

```groovy
task eclipse {
    doLast {
        println 'Hello Eclipse'
    }
}

// #1st approach - closure returning true, if the task should be executed, false if not.
eclipse.onlyIf {
    project.hasProperty('usingEclipse')
}

// #2nd approach - alternatively throw a StopExecutionException() like this
eclipse.doFirst {
    if(!usingEclipse) {
        throw new StopExecutionException()
    }
}
```

8.5.1. Accessing system variables like the user home directory

You can access system variables. For example, to get the user home directory use the following:

```groovy
def homePath = System.properties['user.home']
```

9. Exercise: Gradle Tasks

9.1. Using the tasks Gradle task

The target of this exercise is to get an overview of the default tasks which are delivered by default. Open a command line and execute the following command:

```
gradle -q tasks
```

9.2. Using the help task

The target of this exercise is to make use of the help task to get more information about other tasks, e.g., the init task.

```
gradle -q help --task init
```

9.3. Create a Groovy project

The previous exercise informed about the usage of the init task.
gradle -q init --type groovy-library 9.4. Optional - Import the new groovy project Eclipse Buildship can be used to import the project into the Eclipse IDE. 9.5. Using the dependencies task In order to see the project’s dependencies (including the transitive ones) the dependencies task has to be invoked. ./gradlew dependencies If the optional Buildship import exercise has been done the dependencies task can also be invoked using the Gradle Tasks view. 10. Using the Gradle wrapper. The wrapper is the preferred way of starting a Gradle build, as it make the execution of the build independent of the installed Gradle version. The wrapper script can be created via gradle wrapper. As a result you find a gradlew for *nix based systems and gradlew.bat for window systems. These files can be used instead for the gradle command, and if Gradle is not installed on the machine, Gradle is automatically downloaded and installed. It is also possible to define a task which defines the version of the wrapper. If this task is executed, it creates the wrapper and downloads the correct version of Gradle. wrapper { gradleVersion = '4.9' } The version of the Gradle Wrapper can also be defined, when creating it via the command line. gradle wrapper --gradle-version 4.9 Without this explicit version parameter Gradle will automatically pick the latest version. 10.1. Configure GRADLE_OPTS for the Gradle wrapper GRADLE_OPTS can also be defined inside the gradlew or gradlew.bat file. #!/usr/bin/env bash ############################################################################## ## ## Gradle start up script for UN*X ## ############################################################################## # Add default JVM options here. # You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. DEFAULT_JVM_OPTS="-Xmx1024m" #... 
{more lines}

@if "%DEBUG%" == "" @echo off
@rem ##########################################################################
@rem
@rem  Gradle startup script for Windows
@rem
@rem ##########################################################################

@rem Set local scope for the variables with windows NT shell
if "%OS%"=="Windows_NT" setlocal

@rem Add default JVM options here. You can also use JAVA_OPTS
@rem and GRADLE_OPTS to pass JVM options to this script.
set DEFAULT_JVM_OPTS=-Xmx1024m

@rem ...
{more lines}

11. Exercise: Configuring the Wrapper Task

Tasks like Gradle’s Wrapper Task are available by default and certain properties, e.g., gradleVersion, can be configured like this:

wrapper {
    gradleVersion = '4.9'
}

12. Exercise - Create Custom Gradle Tasks

12.1. Exercise - Hello Gradle Task

Create a helloGradle task, which prints Hello Gradle, with the group workshop and a proper description. Then use the ./gradlew tasks command to see your new helloGradle task in the console or use Buildship in the Eclipse IDE. Then invoke the helloGradle task by calling ./gradlew hG or again use Buildship in the Eclipse IDE.

12.2. Exercise - Dependencies between tasks

Create two new tasks called learnGroovy, which prints Learn Groovy, and learnGradle, which prints Learn Gradle. These tasks should have reasonable dependencies.

12.3. Exercise - doFirst action for the tasks

Extend the learnGroovy task so that it prints Install Eclipse IDE with Buildship before it prints Learn Groovy.

13. Exercise: Creating a Copy Task

New tasks can also be derived from an existing task type, preconfiguring certain properties. The Copy task type can be used to specify such a task, which is able to copy files. Create a new project with the following build.gradle file:

task copyFile(type: Copy) {
    from 'source'
    into 'destination'
}

Create a source folder inside this project and add a text file to this folder. When running the copyFile task, it copies the text file to a new destination folder.

14.
Exercise: Specifying a custom Task in another Gradle file

Create a new Gradle project, which contains the following structure. The CheckWebsite.groovy class looks like this:

package com.example

import org.gradle.api.DefaultTask
import org.gradle.api.tasks.TaskAction

import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

class CheckWebsite extends DefaultTask {
    String url = ''

    @TaskAction
    void checkWebsite() {
        // check the given website by using the url
        try {
            Document doc = Jsoup.connect(url).get();
            String title = doc.title();
            println title
            println url
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Since this class has external dependencies on jsoup, a build.gradle file for this class has to be created. So the build.gradle inside the buildSrc folder, which is responsible for building the CheckWebsite class, looks like this:

plugins {
    id 'groovy'
}

repositories {
    jcenter()
}

dependencies {
    compile 'org.jsoup:jsoup:1.8.3'
}

Finally the main build.gradle file in the root folder makes use of the new com.example.CheckWebsite task type.

task defaultWebsiteCheck(type: com.example.CheckWebsite)

task checkGradleWebsite(type: com.example.CheckWebsite) {
    url = ''
}

wrapper {
    gradleVersion = '4.9'
}

15. Exercise: Trigger Gradle build from Java code

This exercise describes how to trigger a Gradle build from Java code.

15.1. Create new Gradle projects

Create two new Gradle projects with the names BaseProject (this project starts the Gradle build) and TargetProject (this project is built by the BaseProject). Make sure the BaseProject applies the java plugin.

15.2. Add dependencies

Add the following dependency to the BaseProject.

compile 'org.gradle:gradle-tooling-api:4.0-rc-2'

15.3. Build TargetProject

Create a class named Application with a static main method like the following.
import java.io.File;

import org.gradle.tooling.BuildLauncher;
import org.gradle.tooling.GradleConnector;
import org.gradle.tooling.ProjectConnection;

public class Application {

    public static void main(String[] args) {
        ProjectConnection connection = GradleConnector.newConnector()
                .forProjectDirectory(new File("path/to/targetproject"))
                .connect();
        try {
            BuildLauncher build = connection.newBuild();
            build.forTasks("build");
            build.run();
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            connection.close();
        }
    }
}

This method first creates a ProjectConnection to the project that should be built and connects to it. Make sure to replace path/to/targetproject with the path of the TargetProject. From the project connection, a new BuildLauncher can be obtained. With the help of the method forTasks() you can specify the Gradle tasks that should be executed. The BuildLauncher also provides some other methods to configure the build. You can, for example, set Gradle build arguments or change the Java version to build the project with. By calling the run() method the build is finally executed. Make sure to close the connection in the finally block.

16. Building Java projects

16.1. The Java plug-in

The Java plug-in provides tasks to compile Java source code, run unit tests, create Javadoc and create a JAR file.

16.2. Default project layout of Java projects

This plug-in assumes a certain setup of your Java project (similar to Maven).

src/main/java contains the Java source code
src/test/java contains the Java tests

If you follow this setup, the following build file is sufficient to compile, test and bundle a Java project.

apply plugin: 'java'

To start the build, type gradle build on the command line.

SourceSets can be used to specify a different project structure, e.g., the sources are stored in a src folder rather than in src/main/java.

apply plugin: 'java'

sourceSets {
    main {
        java {
            srcDir 'src'
        }
    }
    test {
        java {
            srcDir 'test'
        }
    }
}

16.3.
Java project creation with the init task

Gradle does not yet support multiple project templates (called archetypes) like Maven. But it offers an init task to create the structure of a new Gradle project. Without additional parameters, this task creates a Gradle project, which contains the gradle wrapper files, a build.gradle and settings.gradle file.

When adding the --type parameter with 'java-library' as value, a Java project structure is created and the build.gradle file contains a certain Java template with JUnit. The build.gradle file will look similar to this:

/*
 * ... deleted the generated text for brevity
 */

// Apply the java plugin to add support for Java
apply plugin: 'java'

// Declare the dependency for your favourite test framework you want to use
dependencies {
    testImplementation 'junit:junit:4.12'
}

The Gradle-templates project hosted on GitHub provides more templates beyond the init task. The Gradle team is also working on this archetype/template topic.

16.4. Specifying the Java version in your build file

Usually a Java project has a version and a target JRE on which it is compiled. The version and sourceCompatibility property can be set in the build.gradle file.

version = '0.1.0'
sourceCompatibility = 1.8

When the version property is set, the name of the resulting artifact will be changed accordingly, e.g., {my-lib-name}-0.1.0.jar.

If the artifact is an executable Java application, the MANIFEST.MF file must be aware of the class with the main method.

apply plugin: 'java'

jar {
    manifest {
        attributes 'Main-Class': 'com.example.main.Application'
    }
}

17. Building Groovy projects

17.1. The Groovy plug-in

The Groovy plug-in for Gradle extends the Java plug-in and provides tasks for Groovy programs.

apply plugin: 'groovy'

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.codehaus.groovy:groovy-all:2.4.5'
    testImplementation 'junit:junit:4.12'
}

To start the build, type gradle build on the command line.

17.2.
Default project layout of Groovy projects

This plug-in assumes a certain setup of your Groovy project.

src/main/groovy contains the Groovy source code
src/test/groovy contains the Groovy tests
src/main/java contains the Java source code
src/test/java contains the Java tests

If you follow this setup, the following build file is sufficient to compile, test and bundle a Groovy project.

19. Running JUnit 5 tests with Gradle

To use JUnit 5 for Java, add the following to your dependencies closure in your build.gradle file. Use at least Gradle 6.0 for this to avoid already fixed issues.

dependencies {
    // more stuff

    testImplementation(enforcedPlatform("org.junit:junit-bom:5.4.0")) // JUnit 5 BOM
    testImplementation("org.junit.jupiter:junit-jupiter")
}

19.1. Test naming conventions for Gradle

The Gradle "test" task scans all compiled classes in the source folder of the project, e.g., /src/test/java or /src/test/groovy. JUnit classes are identified by:

Class or a super class extends TestCase or GroovyTestCase
Class or a super class is annotated with @RunWith
Class or a super class contains a method annotated with @Test

You can set the scanForTestClasses property to false if you do not want automatic test class detection. In this case, if no additional include / exclude patterns are specified, the defaults for included classes are “**/*Tests.class” and “**/*Test.class”, and the default for excluded classes is “**/Abstract*.class”.

19.2. Include and Exclude particular Tests

The test configuration in general is described at Gradle Test tasks description. The Test class has include and exclude methods. These methods can be used to specify which tests should actually be run.

Only run included tests:

test {
    include 'my/package/name/**'
}

Skip excluded tests:

test {
    exclude 'my/package/name/**'
}

19.3. Show all test output in the terminal

By default Gradle doesn’t print the standard output of your tests to the terminal.
To see all output of your tests add this to your build file:

test {
    testLogging.showStandardStreams = true
}

20. Building multiple projects with Gradle

20.1. Creating a multi project build structure

A business application usually does not consist of only one single project/module, but has many projects, which should be built. Gradle has the concept of a root project, which can have many sub projects. The root project is specified by a build.gradle file, like the single projects before. To specify which projects belong to the build, a settings.gradle file is used. For instance there might be this project structure:

root_project
  core
  ui
  util
  settings.gradle

Having this project structure the settings.gradle file would look like this:

include 'core', 'ui', 'util'

// alternative way would be
// include 'core'
// include 'ui'
// include 'util'

Besides the tasks task Gradle also provides a projects help task, which can be run in the root_project folder.

> gradle projects

20.2. Specifying a general build configuration

In a build.gradle file in the root_project, general configurations can be applied to all projects or just to the sub projects.

allprojects {
    group = 'com.example.gradle'
    version = '0.1.0'
}

subprojects {
    apply plugin: 'java'
    apply plugin: 'eclipse'
}

This applies the common com.example.gradle group and the 0.1.0 version to all projects. The subprojects closure applies common configurations for all sub projects, but not to the root project, like the allprojects closure does.

20.3. Project specific configurations and dependencies

The core, ui and util sub projects can also have their own build.gradle file, if they have specific needs which are not already covered by the general configuration of the root project. For instance the ui project usually has a dependency on the core project. So the ui project needs its own build.gradle file to specify this dependency.
dependencies {
    compile project(':core')
    compile 'log4j:log4j:1.2.17'
}

Project dependencies are specified with the project method.

Alternatively you can also define the dependencies of a project in the root build.gradle file. But it is considered good practice to define the dependencies in the project specific build.gradle files, hence the following approach is only included for demonstration purposes.

allprojects {
    apply plugin: 'java'
    repositories {
        mavenCentral()
    }
}

project(':com.example.core').dependencies {
    compile project(':com.example.model')
    compile 'log4j:log4j:1.2.17'
}

21. Deployment with Gradle

21.1. How to deploy with Gradle

Gradle offers several ways to deploy build artifacts to artifact repositories, like Artifactory or Sonatype Nexus.

21.2. Using the maven-publish plugin

The most common way is to use the maven-publish plugin, which is provided by Gradle by default.

// other plug-ins

apply plugin: 'maven-publish'

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
    repositories {
        maven {
            url "$buildDir/repo"
        }
    }
}

Several publish options are available when the java and the maven-publish plugins are applied. The deployment to a remote repository can be done like this:

apply plugin: 'groovy'
apply plugin: 'maven-publish'

group 'workshop'
version = '1.0.0'

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
    repositories {
        maven {
            // default credentials for a nexus repository manager
            credentials {
                username 'admin'
                password 'admin123'
            }
            // url to the releases maven repository
            url ""
        }
    }
}

More information about the deployment to a Maven artifact repository can be found here: Publish to Maven repository with Gradle.

22. Integration with Ant

Gradle supports running Ant tasks via the Groovy AntBuilder.

23. Convert Maven Projects to Gradle

Gradle provides an incubating init task, which helps with the creation of new Gradle projects.
This task can also convert Apache Maven pom.xml files to Gradle build files, if all used Maven plug-ins are known to this task.

In this section the following pom.xml Maven configuration will be converted to a Gradle project.

<project xmlns="" xmlns:
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.app</groupId>
    <artifactId>example-app</artifactId>
    <packaging>jar</packaging>
    <version>1.0.0-SNAPSHOT</version>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.11</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Running the gradle init --type pom command on the command line results in the following Gradle configuration. The init task depends on the wrapper task so that a Gradle wrapper is also created. The resulting build.gradle file looks similar to this:

apply plugin: 'java'
apply plugin: 'maven'

group = 'com.example.app'
version = '1.0.0-SNAPSHOT'

description = """"""

sourceCompatibility = 1.5
targetCompatibility = 1.5

repositories {
    maven { url "" }
}

dependencies {
    testImplementation group: 'junit', name: 'junit', version:'4.11'
}

24. Developing custom Gradle plug-ins

24.1. Why create Gradle plug-ins

As a general rule, it is useful to have a build that is as declarative as possible, as this simplifies future maintenance. Therefore it is advised to avoid complex code in your Gradle build file. If you need custom logic, you should place it into a custom Gradle plug-in.

24.2. Gradle DSL

Each Gradle plug-in comes with a DSL.
To see all properties of a Gradle object, you can use the following code snippet:

println variants.properties
    .sort{it.key}
    .collect{it}
    .findAll{!filtered.contains(it.key)}
    .join('\n')

For example, to define a task which shows the properties of all android.applicationVariants (in an Android project), use:

task showAndroidVariantsInformation {
    doLast {
        android.applicationVariants.all { variants ->
            println variants.properties
                .sort{it.key}
                .collect{it}
                .findAll{!filtered.contains(it.key)}
                .join('\n')
        }
    }
}

25. Exercise - Creating a simple Gradle plugin

The 'java-gradle-plugin' Gradle plug-in simplifies creating custom Gradle plug-ins. This plug-in is currently incubating. It does the following:

the gradleApi() dependency is automatically added
the gradleTestKit() dependency is automatically added
the necessary plug-in descriptor files are created automatically

25.1. Create a Gradle project

Create a new Gradle project in Eclipse. Use com.vogella.gradleplugin as the project name. Stick to the defaults of the wizard and create the project.

25.2. Apply the 'java-gradle-plugin' plug-in

Change the build.gradle file to the following:

plugins {
    id 'java-gradle-plugin'
}

gradlePlugin {
    plugins {
        vogellaPlugin {
            id = 'com.vogella.gradleplugin'
            implementationClass = 'com.vogella.gradleplugin.MyPlugin'
        }
    }
}

repositories {
    jcenter()
}

dependencies {
    // No need to add gradleApi() here, because it is applied by the 'java-gradle-plugin' plug-in

    // We want to merge and parse SpotBugs xml files with XSLT
    compile('net.sf.saxon:Saxon-HE:9.8.0-12')

    // Use JUnit test framework
    testImplementation 'junit:junit:4.12'
}

wrapper {
    gradleVersion = '4.9'
}

In the /src/main/java folder create the following two classes.
package com.vogella.gradleplugin;

import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.TaskAction;

public class MyTask extends DefaultTask {

    @TaskAction
    public void myTask() {
        System.out.println("Hello from vogella task");
    }
}

package com.vogella.gradleplugin;

import org.gradle.api.Plugin;
import org.gradle.api.Project;

public class MyPlugin implements Plugin<Project> {

    @Override
    public void apply(Project project) {
        project.getTasks().create("myTask", MyTask.class);
    }
}

Run the build task to build the plug-in and see the next exercises on how to deploy and consume the plug-in.

26. Exercise - Deploy your custom Gradle plug-in to your local Maven repository

Add the maven-publish Gradle plug-in and the publishing closure to the build.gradle file.

plugins {
    id 'java-gradle-plugin'
    id 'maven-publish'
}

group = 'com.vogella'
version = '0.0.1-SNAPSHOT'
sourceCompatibility = 1.8

// ... more code

publishing {
    publications {
        mavenJava(MavenPublication) {
            from components.java
        }
    }
}

Now additional publishing tasks are available and the publishToMavenLocal task can be used to add the Gradle plug-in to the local Maven repository.

./gradlew pTML

27. Exercise - Using the new plug-in

To use the new Gradle plug-in, a dependency on it has to be defined. If you pushed your plug-in to your local Maven repository, Gradle should find it and make it available.

buildscript {
    repositories {
        mavenLocal()
    }
    dependencies {
        classpath 'com.vogella:com.vogella.gradleplugin:0.0.1-SNAPSHOT'
    }
}

apply plugin: 'com.vogella.gradleplugin'

Now the new task from the com.vogella.gradleplugin should be available:

./gradlew tasks
./gradlew myTask

28. Optional Exercise - Publishing the plug-in to the Gradle plug-in portal

To publish a Gradle plug-in in the Gradle Plug-in Portal the com.gradle.plugin-publish Gradle plug-in can be used. Before uploading Gradle plug-ins to the portal you must register at and obtain the API keys from your profile.
The gradle.publish.key and gradle.publish.secret properties have to be added to the gradle.properties file. Then the build.gradle file needs to be adjusted to be able to upload Gradle plug-ins.

plugins {
    id 'java-gradle-plugin'
    id 'maven-publish'
    id 'com.gradle.plugin-publish' version '0.9.10'
}

// more code ...

pluginBundle {
    website = '${Web site for your plugin}'
    vcsUrl = '{your-repo}'

    plugins {
        vogellaPlugin {
            id = 'com.vogella.gradleplugin'
            displayName = 'Vogella Sample Plug-in'
            description = 'Vogella Sample Plug-in for trying the '
            tags = ['Vogella','Training','Gradle','Sample']

            // Gradle's plug-in portal does not support SNAPSHOTs
            version = '0.0.1'
        }
    }
}

The following task can then be used to upload the Gradle plug-in.

./gradlew publishPlugins

When the plug-in has been published, the plugins closure can be used to make use of the Gradle plug-in.

plugins {
    id "com.vogella.gradleplugin" version "0.0.1"
}

So the more verbose buildscript closure and the apply plugin method from previous chapters can be omitted.

29. Debugging Gradle Plug-ins

29.1. Activate remote debugging

The following properties have to be specified in the gradle.properties file to enable remote debugging:

org.gradle.daemon=true
org.gradle.jvmargs=-XX:+HeapDumpOnOutOfMemoryError -Xmx4g -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5006

When running a Gradle build locally with these settings, remote debugging can be accessed via localhost on port 5006.

29.2. Remote debugging in the Eclipse IDE

You should import a certain Gradle plug-in into the Eclipse IDE by using the Buildship tooling. Then break points can be added to the Gradle plug-in’s source files. After that open the debug configuration and right click Remote Java Application to create a New Debug Configuration with the following settings:

Press the Debug button to run the remote debugger.
Then a Gradle build, which uses the desired Gradle plug-in, can be run by either using the Gradle Tasks view of the Buildship tooling inside the Eclipse IDE or by invoking a Gradle build from the command line. When a break point is hit during the Gradle task processing, the Eclipse IDE will stop at that point.

30. Using code analysis tools

Gradle provides several plugins for analyzing the code base of a Gradle project.

30.1. Jacoco for Gradle projects

To use Jacoco for test coverage analysis, the following code must be added to the top level build.gradle file:

plugins {
    id 'jacoco'
}

jacocoTestReport {
    reports {
        xml.enabled true
        html.enabled true
    }
}

If you have a multi project Gradle project, then you need to add the jacocoTestReport and the jacoco plugin to the subprojects section of the top level build.gradle file.

plugins {
    id 'jacoco'
}

subprojects {
    apply plugin: 'jacoco'
    jacocoTestReport {
        reports {
            xml.enabled true
            html.enabled true
        }
    }
}

If you would like to have a consolidated xml file for all the subprojects, e.g. to be used by SonarQube, you can create it with the following task in the top level build.gradle:

task generateMergedReport(type: JacocoReport) {
    dependsOn = subprojects.test
    additionalSourceDirs.setFrom files(subprojects.sourceSets.main.allSource.srcDirs)
    sourceDirectories.setFrom files(subprojects.sourceSets.main.allSource.srcDirs)
    classDirectories.setFrom files(subprojects.sourceSets.main.output)
    executionData.setFrom project.fileTree(dir: '.', include: '**/build/jacoco/test.exec')
    reports {
        xml.enabled true
        xml.destination file("../coverage-reports/coverage.xml")
    }
}

This task will save the consolidated xml file on the top level of the project under coverage-reports. To finally create the consolidated xml file, run Gradle with the created task generateMergedReport.

./gradlew clean build generateMergedReport
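On top of report generation, the jacoco plugin can also fail the build when coverage drops too low. The following sketch is not part of the original tutorial; the jacocoTestCoverageVerification task is provided by the jacoco plugin, while the 0.8 minimum shown here is just an example value:

```groovy
// Hedged example: enforce a minimum ratio of 80% (the threshold is an
// arbitrary example, adjust it to your project's needs).
jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = 0.8
            }
        }
    }
}

// Wire the verification into the regular check lifecycle task, so that
// ./gradlew check fails when the coverage rule is violated.
check.dependsOn jacocoTestCoverageVerification
```

With this in place, running ./gradlew check executes the tests and then verifies the coverage rule.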
https://www.vogella.com/tutorials/GradleTutorial/article.html
Introduction to Python – Part 1

We now live in an ocean of data. And of course, that is literally true for those of us who study the ocean. We’ve come a long way from the early days of oceanography, when scientists like Nansen, Ekman, and Bjerknes might collect a few dozen data points while on a ship, or from their calculations, or from a lab experiment, and then painstakingly draw graphs of their data by hand. (Ekman’s classic 1905 paper is a great reminder of what science was like decades before the first computer, and how much awesome stuff they still could do.)

But now, with today’s modern instruments and ocean observatories, we can collect thousands of data points every day from dozens of instruments at the same location or spread across the world. This is both a blessing and a curse. Thanks to these new tools, we can study the ocean in more detail and at larger and longer scales than ever before. But on the down side, there is no way human hands or minds can make sense of all of this data without help.

That is why learning how to program is now a skill that all oceanographers need to learn. While most students don’t have to become expert programmers, they do need to learn enough to process, analyze and visualize the datasets they hope to use in their research.

A Virtual REU

This past summer, we put together our first Virtual REU (Research Experience for Undergraduates) in response to the cancellation of many traditional REUs due to the pandemic. Because we couldn’t take our students out to sea, we focused on teaching them how to utilize datasets we already have in hand, like the treasure-trove of data from the OOI. Of course, there’s not much you can do with the raw OOI dataset using a tool like Excel, let alone pencil and paper, so we decided it was important to provide students with a basic primer on oceanographic programming before they dove into their research projects.
Below is the first of 4 Python notebooks I developed this summer to support our students during the 2-week mini-workshop we ran prior to students’ 6-week research experience. In the end, we only used 2 of the notebooks. (Developing 2x more than I need tends to be my style.) But I hope to share all of these notebooks with our Data Labs community over the next few weeks, in the hopes that you might find them helpful for developing your own classes or courses for introducing basic data processing skills using Python to your students.

Activity 1 – Python Basics & NDBC Weather Data

This first notebook (below) ended up requiring 2 sessions to cover. In the first session, we highlighted why learning a programming tool like Python is important to becoming an oceanographer. (Here are the slides I used.) Specifically, we covered:

- A quick introduction to Google Colab
- The importance of Reproducible Research (check out The Scientific Paper is Obsolete from the Atlantic)
- How programming notebooks help with reproducible research and collaboration
- And some Python programming basics.

The second session was far more fun. We focused on the bottom half of the notebook, which demonstrates how, with a few lines of code, students can quickly access and plot data from NDBC. After a quick demo, we broke students up into small groups (using Zoom’s breakout rooms feature) and asked them to make a plot or two to show the full class at the end. A few students had some familiarity with programming, and we made sure they were dispersed throughout the small groups, so each group had a “ringer” to help.

More importantly, we focused on using the NDBC dataset for two key reasons.

- NDBC moorings are primarily supported by the National Weather Service, and thus focus on weather measurements like air/water temperatures, barometric pressure and winds that should be familiar to students.
- The NDBC data portal, and specifically their DODS interface, makes it easy to access data from hundreds of buoys around the world. This allowed students to choose a research question that was of interest to them, and have plenty of options to choose from. To my mind, NDBC is the best data center available that a) is easy to access, b) has a wide geographic reach, and c) has datasets that are easy to interpret. While trying to introduce students to programming, data processing and data visualization, I feel it’s better to keep the data as simple as possible to keep the cognitive load down. Plus being able to understand and interpret the results can help students increase their confidence as they build all of these skills. Teaching is hard enough. Introducing students to programming, data visualization and interpreting messy real-world data requires a lot of flexibility. (And that’s before we even get into the challenges of remote learning.) The NDBC dataset, which we continued to use for a mini-research project as part of the 2-week workshop, made this easier and more fun. This was my first attempt at teaching all these skills at once, and I learned a lot myself. So while this notebook is far from perfect, I hope you still find it helpful. This post is part of our 2020 Summer REU Intro to Python series. See also Part 2, Part 3 and Part 4. Activity 1 - Python Basics & NDBC Weather Data¶ 2020 Data Labs REU Written by Sage Lichtenwalner, Rutgers University, June 9, 2020 Welcome to Python! In this notebook, we will demonstrate how you can quickly get started programming in Python, using Google's cool Colaboratory platform. Colab is basically a free service that can run Python/Jupyter notebooks in the cloud. In this notebook, we will demonstrate some of the basics of programming Python. If you want to lean more, there are lots of other resources and training sessions out there, including the official Python Tutorial. 
But as an oceanographer, you don't really need to know all the ins-and-outs of programming (though it helps), especially when just starting out. Over the next few sessions we will cover many of the basic recipes you need to:

- Quickly load some data
- Make some quick plots, and make them look good
- Calculate a few basic statistics and averages
- And save the data to a new file you can use elsewhere.

Getting Started

Jupyter notebooks have two kinds of cells: "Markdown" cells, like this one, which can contain formatted text, and "Code" cells, which contain the code you will run.

To execute the code in a cell, you can either:

- click the Play icon on the left
- type Cmd (or Ctrl) + Enter to run the cell in place
- or type Shift + Enter to run the cell and move the focus to the next cell.

You can try all these options on our first very elaborate piece of code in the next cell. After you execute the cell, the result will automatically display underneath the cell.

2+2

print("Hello, world!") # This is a comment

As we go through the notebooks, you can add your own comments or text blocks to save your notes.

# Your Turn: Create your own print() command here with your name
print()

A note about print()

- By default, a Colab/Jupyter notebook will print out the output from the last line, so you don't have to specify the print() command.
- However, if we want to output the results from additional lines (as we do below), we need to use print() on each line.
- Sometimes, you can suppress the output from the last line by adding a semi-colon ; at the end.

3
4
5

print(3)
print(4)
print(5)

# Your Turn: Try some math here
5*2

The order of operations is also important.

print(5 * 2 + 3)
print(5 * (2+3))
print((5 * 2) + 3)

# We can easily assign variables, just like in other languages
x = 4
y = 2.5

# And we can use them in our formulas
print(x + y)
print(x/y)

# What kind of objects are these?
print(type(x))
print(type(y))

# A string needs to be in quotes (single or double)
z = 'Python is great'
z

# You can't concatenate (add) strings and integers
print( z + x )

# But you can multiply them!
print( z * x )

# If you convert an integer into a string, you can then concatenate them
print( z + ' ' + str(x) + ' you!' )

# A better way
print( 'Python is great %s you!' % x )

my_list = [3, 4, 5, 9, 12, 13]

# The first item
my_list[0]

# The last item
my_list[-1]

# Extract a subset
my_list[2:5]

# A subset from the end
my_list[-3:]

# Update a value
my_list[3] = 99
my_list

# Warning, Python variables are object references and not copies by default
my_second_list = my_list
print( my_second_list )

my_second_list[0] = 66
print( my_second_list )
print( my_list ) # The first list has been overwritten

# To avoid this, create a copy of the list, which keeps the original intact
my_list = [3, 4, 5, 9, 12]
my_second_list = list(my_list) # You can also use copy.copy() or my_list[:]

my_second_list[0] = 66
print( my_second_list )
print( my_list )

Arrays

Note, a list is not an array by default. But we can turn it into an array using the NumPy library. NumPy is an essential library for working with scientific data. It provides an array object that is very similar to Matlab's array functionality, allowing you to perform mathematical calculations or run linear algebra routines.

my_list * x

import numpy as np
a = np.array(my_list)
a * x

Note, we won't be explicitly creating NumPy arrays much in this course. But later on, when we load datasets using Pandas or Xarray, the actual arrays under the hood will be NumPy arrays.

my_dict = {'temperature': 21, 'salinity':35, 'sensor':'CTD 23'}
my_dict

# Grab a list of dictionary keys
my_dict.keys()

# Accessing a key/value pair
my_dict['sensor']

Functions, Conditions and Loops

If you're familiar with how to do these in Matlab or R, it's all very similar, just with a different syntax.
Remember, Python uses spaces to group together sub-elements, rather than parentheses, curly braces, or end statements. Traditionally, you can use 2 or 4 spaces to indent lines.

def times_two(num):
    return num * 2

times_two(3)

def my_name(name='Sage'):
    return name

my_name()

Here's one quick example that demonstrates how to define a function, use a conditional, and iterate over a for loop all at once.

# A more complicated function
def my_func(number):
    print('Running my_func')
    if type(number)==int:
        for i in range(number):
            print(i)
    else:
        print("Not a number")

my_func('Test')
my_func(4)

Fun with NDBC Data

Now that we've covered some basics, let's start having some fun with actual ocean data. The National Data Buoy Center (NDBC) provides a great dataset to start with. And for this example, we'll use my favorite buoy, Station 44025. To load datasets like this, there are 2 popular libraries we can use. - Pandas - Great for working with "spreadsheet-like" tables that have headers and rows, like Excel or CSV files - Can easily load text or CSV files - Xarray - Supports multidimensional arrays (e.g. x,y,z,t) - Can open NetCDF files or data from Thredds servers, which are common in Oceanography - If you're using a Thredds server, you don't have to load all the data to use it

NDBC actually makes their data available in a variety of ways. Text files are often more intuitive. However, the NDBC text files require a few hoops to load and use (each file is a separate year, dates are in multiple columns, etc.). Luckily, NDBC also provides a Thredds server (DODS), which we can use to quickly load some data to play with.
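One aside on the type(number)==int check in my_func above: the more idiomatic Python test is isinstance, which also accepts subclasses. A small variant of the function (illustrative only — my_func2 is a made-up name, not part of the original notebook), returning values instead of printing them:

```python
def my_func2(number):
    """Count up to `number`, returning the values instead of printing them."""
    if isinstance(number, int):
        return list(range(number))
    return "Not a number"

print(my_func2(4))       # [0, 1, 2, 3]
print(my_func2('Test'))  # Not a number
```

Either form works for simple scripts; isinstance just behaves better when your code starts using inheritance.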
import xarray as xr

!pip install netcdf4

data = xr.open_dataset('')

# The Dataset
data

# Let's look at one variable
data.air_temperature

# And one piece of metadata
data.air_temperature.long_name

# Now let's make a quick plot
data.air_temperature.plot();

# Let's subset the data in time
data2 = data.sel(time=slice('2019-01-01','2020-01-01'))

# Let's make that quick plot again
data2.air_temperature.plot();

import matplotlib.pyplot as plt

# We can even plot 2 variables on one graph
data2.air_temperature.plot(label="Air Temperature")
data2.sea_surface_temperature.plot(label="Sea Surface Temperature")
plt.legend();

Tomorrow, we'll delve a lot more into data visualization and many of the other plotting commands you can use. But now, it's your turn to create your own plots. Try plotting different: - Variables (see options above) - Time ranges (you will need to reload the dataset) - Different stations (you will need to change the dataset URL). Check out the NDBC homepage for available stations

As you create your graphs, try to write figure captions that describe what you think is going on.

# Your Turn: Create some plots

Additional Intros and References

2019 Data Labs Quick Intro to Python
2018 Python Basics for Matlab Wizards
Rowe Getting Started with Python

Nice work sage. I enjoy these posts.

Thanks Hugh! I hope you find them helpful, or at least worth sharing with others.
https://datalab.marine.rutgers.edu/2020/10/introduction-to-python-part-1/
In this post, we will learn about Razor. Razor is a programming syntax used to create dynamic web pages using C# or VB.NET. It allows you to write C# code or VB code with the HTML markup. The Razor syntax is separated from the HTML markup using the @ symbol. If you cannot follow this tutorial, use this GitHub repo as a reference. To track changes, see this commit.

Understanding Index component

The application that we created now contains three main components. They are: - Index.razor - Counter.razor - About.razor

Index.razor is a component that serves as the default home page of our application. It is located in BlazorApp.Client/Pages. Components are the building blocks of a Blazor application. A component can be a button, a navigation bar, a footer, or any reusable part of the application. We will learn more about components later in this tutorial. Here's the content of Index.razor.

@page "/"
<h1>Hello, world!</h1>
Welcome to your new app.
<SurveyPrompt Title="How is Blazor working for you?" />

As you can see, this code is a combination of standard HTML markup, C# code, and some custom tags. The first line starts with the @page directive. It defines the URL (or the route) at which this page can be accessed. The '/' symbol means that this page is the root page of our application. So, this page can be accessed at. Now, let's replace this code and learn more about Razor syntax.

The code block

The code block can be used to embed C# code in your component. A code block is created using the @code directive.

@code{
    // Write your code here.
}

You can declare variables or classes in the code block. Let me show you an example.

@code{
    string message = "Hello World";
}

In this code, I declared a variable named message and saved "Hello World" in it. But how will we display this variable in our index component? We can write the C# code along with HTML by prefixing the code with an @ symbol. Replace the code of Index.razor as shown below.
@page "/"
<h1>@message</h1>
Welcome to your new app.

@code{
    string message = "Hello, World!";
}

Now, run the application by pressing Ctrl + F5. Note that the text "Hello, World!" stored in the C# variable message was displayed in the HTML <h1> tag.

Declaring Functions

We can also declare functions in the @code block. Here's an example. In this example, I am going to display the length of message in an h2 tag using a custom function.

@page "/"
<h1>@message</h1>
<h2>Message is @GetLength(message) chars long.</h2>
Welcome to your new app.

@code{
    string message = "Hello, World!";

    int GetLength(string text)
    {
        return text.Length;
    }
}

Note that in the h2 tag, we directly call our GetLength method and pass the message variable as the parameter.

Explicit expressions

So far, we used the @ symbol to specify a C# code. But in some cases, the expressions may become larger. It may even have multiple lines. In such cases, we should explicitly specify the start and end of an expression by enclosing them within the () brackets. Now, we are going to append a string to our variable message and display it in an <h2> tag.

<h2>@message + " Geekinsta"</h2>

When we run this code, we'll get an output like this.

Hello, World! + " Geekinsta"

Why did this happen? Here, we should explicitly specify the start and end of the C# expression by enclosing it in a pair of brackets.

@page "/"
<h1>@message</h1>
<h2>Message is @GetLength(message) chars long.</h2>
<h2>@(message + " Geekinsta")</h2>
Welcome to your new app.

@code{
    string message = "Hello, World!";

    int GetLength(string text)
    {
        return text.Length;
    }
}

Run your application and reload the browser to see the output.

Event handlers

In addition to binding data to HTML elements, we can also add event listeners. Let me show you how to handle the click event of a button using C#.

<button @onclick='() => message = "You clicked me"'>Click Me</button>

Here, I created a button and displayed the text "Click Me". When we click the button, the onclick event handler will execute.
The click event handler in this code is an arrow function, and it changes the value of the variable message to "You clicked me". Run (Ctrl + F5) or build (Ctrl + B) the project and click the button. The text displayed in the h1 tag will be changed to "You clicked me". Now we have created an interactive application without using JavaScript.

Declaring classes

In addition to methods and variables, we can also declare classes in our component using Razor syntax.

@page "/"
<h1>@message</h1>
<h2>Message is @GetLength(message) chars long.</h2>
<h2>@(message + " Geekinsta")</h2>
<button class="btn btn-primary" @onclick='() => message = "You clicked me"'>Click Me</button>
<button class="btn btn-success" @onclick="() => MyClass.WriteToConsole()">Write to console</button>
Welcome to your new app.

@code{
    string message = "Hello, World!";

    int GetLength(string text)
    {
        return text.Length;
    }

    public class MyClass
    {
        public static void WriteToConsole(string text = "Message from MyClass")
        {
            Console.WriteLine(text);
        }
    }
}

In this example, I created a class named MyClass and a button with the text Write to console. When we click on the button, a static method named WriteToConsole is called and a message will be displayed in the browser's console.

Reusable classes

The class that we created just now is accessible only in our Index component. We cannot use this class in other components. Let's make this class reusable. In the BlazorApp.Client project, create a new folder named Helpers. This is where we will keep all our reusable classes and helpers. Then add a class file named ConsoleHelper.cs and copy the content of our current class to it.

ConsoleHelper.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace BlazorApp.Client.Helpers
{
    public class ConsoleHelper
    {
        public static void WriteToConsole(string text = "Hi from Blazor App")
        {
            Console.WriteLine(text);
        }
    }
}

Now, add a reference to the ConsoleHelper class in our Index component just below the @page directive.
@using Helpers;

Create a button and call the WriteToConsole() method from its click event.

<button type="button" class="btn btn-info" @onclick="() => ConsoleHelper.WriteToConsole()">Call console helper</button>

Run the application and click the button. The text will be displayed in the browser console. To open the console, press Ctrl + Shift + I or right-click anywhere in the page -> select Inspect element -> Console. If you forgot to add a reference to our Helpers namespace, you will get an error when you try to run the application. If your code is not working as expected, refer to the code in the repository.

Adding a reference to the ConsoleHelper class in each and every component (if needed) is not a good practice. In the BlazorApp.Client project, have you noticed a file named _Imports.razor? In this file, we can add references to namespaces that are commonly used in the components of our application. Therefore, add a reference to our Helpers namespace in this file so that we no longer have to add a reference to this namespace in each component.

_Imports.razor

@using BlazorApp.Client.Shared
@using Helpers;

Now, we can remove the reference to the Helpers namespace from the Index component of the app. This is how you can create reusable classes that can be called from any component.

In Razor pages, comments should start with @* and end with *@.

@* using Helpers; *@

Anything written as comments will not be executed.

Loops

How can we display a list of items in a component? For that, we can use loops. Let us learn how to write loops in Razor syntax using an example. In this example, we are going to display a list of customers and their details in our Index component. In the @code block, create a new class named Customer with three properties: Name, Email, and IsSubscribedToNewsletter.

public class Customer
{
    public string Name { get; set; }
    public string Email { get; set; }
    public bool IsSubscribedToNewsletter { get; set; }
}

Create a list of customers and store it in a List.
List<Customer> Customers = new List<Customer>()
{
    new Customer(){ Name = "John Doe", Email = "john@mail.com", IsSubscribedToNewsletter = true },
    new Customer(){ Name = "Jane Doe", Email = "jane@mail.com", IsSubscribedToNewsletter = true },
    new Customer(){ Name = "Janet Doe", Email = "janet@mail.com", IsSubscribedToNewsletter = false },
};

Next, we are displaying this List in the component using a foreach loop. You can use a for loop also.

<h1>Customers</h1>
@foreach (var customer in Customers)
{
    <div class="card mb-3">
        <div class="card-header">@customer.Name</div>
        <div class="card-body">@customer.Email</div>
    </div>
}

Run the application and reload the page to see the output.

Conditionals

We can also use conditionals like if and else in the Razor syntax. In the above example, we simply displayed a list of customers and their email IDs. Now, let us use conditionals to display whether they have subscribed to the newsletter or not. If a customer has subscribed to the newsletter, the status will be displayed in green color. Otherwise, the status will be displayed in red color. So, let us make some changes to the code.

<h1>Customers</h1>
@foreach (var customer in Customers)
{
    <div class="card mb-3">
        <div class="card-header">
            @customer.Name
            @if (customer.IsSubscribedToNewsletter)
            {
                <span class="badge badge-success">Subscribed</span>
            }
            else
            {
                <span class="badge badge-danger">Not subscribed</span>
            }
        </div>
        <div class="card-body">@customer.Email</div>
    </div>
}

Have you noticed that I used the if conditional to determine the color in which the status is displayed? In the next part of this tutorial, we'll learn about components. If you have something to ask, let me know in the comments below.
https://www.geekinsta.com/razor-syntax-in-blazor/
Closed Bug 1335651 Opened 6 years ago Closed 6 years ago Setup a TC index path for toolchain builds Categories (Firefox Build System :: Task Configuration, task) Tracking (Not tracked) mozilla54 People (Reporter: glandium, Assigned: glandium) References Details Attachments (3 files) No description provided. Comment on attachment 8832336 [details] Bug 1335651 - Automatically add the script to files-changed for toolchain jobs. Attachment #8832336 - Flags: review?(dustin) → review+ Comment on attachment 8832337 [details] Bug 1335651 - Setup an index path in the gecko.cache namespace for toolchain builds. This is a nice design, and will be a great improvement after a bit of refinement. ::: taskcluster/taskgraph/merkle.py:14 (Diff revision 1) > + > + > +def hash_paths(base_path, patterns): > + """ > + Give a list of path patterns, return a digest of the contents of all > + the corresponding files, in a flat Merkle tree fashion, not unlike I'm not sure what you mean by "flat Merkle tree fashion" here. It looks like it's a simple ordered list of (pathname, hash) tuples for each referenced file. I suppose that's equivalent to a one-node Merkle tree with paths as keys (rather than dentry names)? Or am I missing something about how this is implemented? Is there some optimization to avoid, for example, hashing `js/src/**` over and over? ::: taskcluster/taskgraph/task/transform.py:84 (Diff revision 1) > def __init__(self, kind, task): > self.dependencies = task['dependencies'] > self.when = task['when'] > super(TransformTask, self).__init__(kind, task['label'], > - task['attributes'], task['task']) > + task['attributes'], task['task'], > + task.get('index-paths')) Probably wise to call this with a keyword argument (`index_paths=task.get('index-paths')`) to allow future expansion. 
::: taskcluster/taskgraph/task/transform.py:90 (Diff revision 1) > + optimized, taskId = super(TransformTask, self).optimize(params) > + if optimized: > + return optimized, taskId * Assume that the toolchain task description contains a few supporting files in `when.files-changed`. * I push a commit modifying `toolchain.py` * add_index_paths implementation below means that the new contents of `toolchain.py` are included in the hash, but the file is not included in `when.files-changed`. * `super().optimize()` does not find an existing indexed task for this hash, so it returns `False, None` * The few supporting files in `when.files-changed` haven't been modified, so the task is optimized away * The decision task fails because other (build) tasks depend on this toolchain build, which has just been optimized away. Am I missing something? In general, for each task I think we want to do one of two types of optimization: * "have relevant things changed?" -- for leaf tasks that other tasks do not depend on * examples: tests, lint * "have I already done this?" -- for tasks that provide artifacts to other tasks * examples: docker images, toolchains Doing both for the same task is, I think, problematic due to issues like those I outlined above. Maybe the easiest fix is to say that if a Task can have *either* when.index-paths-exist *or* when.files-changed, but not both (moving `Task.index-paths` to `Task.when['index-paths-exist']` in the process). ::: taskcluster/taskgraph/transforms/job/toolchain.py:48 (Diff revision 1) > files.append('taskcluster/scripts/misc/{}'.format(run['script'])) > > + label = taskdesc['label'] > + subs = { > + 'name': label.replace('toolchain-', '').split('/')[0], > + 'digest': hash_paths('.', files), We've tried to avoid assuming the working directory is the root of the repository, instead calculating GECKO based on the file path. For example, see `taskcluster/taskgraph/docker.py`. 
Attachment #8832337 - Flags: review?(dustin) → review- Comment on attachment 8832335 [details] Bug 1335651 - Move index_paths from DockerImageTask to the base Task class. Attachment #8832335 - Flags: review?(dustin) → review+ Comment on attachment 8832335 [details] Bug 1335651 - Move index_paths from DockerImageTask to the base Task class. Comment on attachment 8832336 [details] Bug 1335651 - Automatically add the script to files-changed for toolchain jobs. Comment on attachment 8832337 [details] Bug 1335651 - Setup an index path in the gecko.cache namespace for toolchain builds. One important change below, but with that change I'm happy to see this land. ::: taskcluster/ci/toolchain/linux.yml:21 (Diff revision 3) > - when: > - files-changed: > + extra: > + resources: This will end up appearing in the final task definition, but it's not useful there. Consider moving it to the `run` section, instead, and represesnting it in `toolchain_run_schema` where some comments can explain what it does? Attachment #8832337 - Flags: review?(dustin) → review+ Pushed by mh@glandium.org: Move index_paths from DockerImageTask to the base Task class. r=dustin Automatically add the script to files-changed for toolchain jobs. r=dustin Setup an index path in the gecko.cache namespace for toolchain builds. r=dustin Status: NEW → RESOLVED Closed: 6 years ago Resolution: --- → FIXED Target Milestone: --- → mozilla54 Product: TaskCluster → Firefox Build System
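(For readers following the review: the hash_paths helper discussed above can be sketched roughly as below. This is a simplified illustration, not the actual in-tree taskgraph code — the pattern matching and digest encoding details here are assumptions. The idea is to hash each matched file and then hash the sorted (digest, path) list, so the result is a deterministic fingerprint of the file contents.)

```python
import hashlib
import os
from fnmatch import fnmatch

def hash_paths(base_path, patterns):
    """Return one digest over all files under base_path matching any of the
    glob-style patterns (sketch of the idea only, not the real implementation)."""
    entries = []
    for root, _, files in os.walk(base_path):
        for name in files:
            path = os.path.join(root, name)
            rel = os.path.relpath(path, base_path)
            if any(fnmatch(rel, p) for p in patterns):
                with open(path, 'rb') as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                entries.append('%s %s' % (digest, rel))
    # Sorting makes the result independent of filesystem traversal order.
    return hashlib.sha256('\n'.join(sorted(entries)).encode()).hexdigest()
```

Two identical trees then produce identical digests, and any content change to a matched file changes the digest — which is what lets the index path act as a cache key.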
https://bugzilla.mozilla.org/show_bug.cgi?id=1335651
As announced yesterday, the new February 2010 release of F# is out. For those using Visual Studio 2008 and Mono, you can pick up the download here. This release is much more a stabilization release than a feature release, with improvements in tooling, the project system, and so on. One of the more interesting announcements today was not only the release, but that the F# PowerPack, a collection of libraries and tools, was released on CodePlex under the MS-PL license. By releasing the PowerPack on CodePlex, it allows the F# team to have the PowerPack grow more naturally, free of major release cycles such as Visual Studio releases. What's included in this release?

What's in the Box?

The PowerPack includes such tools as: - FsLex – a Lexical Analyzer, similar in nature to OCamlLex - FsYacc – a LALR parser generator which shares the same specification as OCamlYacc - FsHtmlDoc – an HTML document generator for F# code

Just as well, there are quite a few libraries which include: - FSharp.PowerPack.dll – includes additional collections such as the LazyList, extension methods for asynchronous workflows, native interoperability, mathematical structures, units of measure and more - FSharp.PowerPack.Compatibility.dll – support for OCaml compatibility - FSharp.PowerPack.Linq.dll – provides support for the LINQ provider model - FSharp.PowerPack.Parallel.Seq.dll – perhaps the most interesting in that it provides support for Parallel LINQ and the Task Parallel Library

Just to name a few… Let's look briefly, though, at some of the features.

LINQ Support

One piece that's not particularly new, but overlooked, is the support for LINQ providers through the FSharp.PowerPack.Linq.dll library. This means we could support providers such as NHibernate, LINQ to SQL, Entity Framework, MongoDB or any other provider. To enable this behavior, simply use the query function with an F# expression.
An F# quotation is much like the .NET BCL Expression but with a few extra added goodies, and is represented in the <@ … @> form.

let result = query <@ ... @>

Just as well, there are additional query functions that are necessary when dealing with data, including: - contains - groupBy - groupJoin - join - maxBy - minBy

Each of those is rather self-explanatory in how it gets transformed back into F# quotations/expressions. Let's look at a simple example of grouping customers in California by their customer level from a LINQ to SQL provider.

#if INTERACTIVE
#r "FSharp.PowerPack.Linq.dll"
#endif

open Microsoft.FSharp.Linq
open Microsoft.FSharp.Linq.Query

let context = DbContext()

let groupedCustomers =
    query <@ seq { for customer in context.Customers do
                     if customer.BillingAddress.State = "CA" then
                       yield customer }
             |> groupBy (fun customer -> customer.Level) @>

As you can see, inside the query function, we have a sequence expression to iterate through our customers looking for all in California, and then outside of the sequence expression, we call the groupBy function which allows us to key off the customer level. Perhaps there is one more interesting piece than LINQ support, which is what we find in the FSharp.PowerPack.Parallel.Seq.dll library.

Parallel Extensions via PSeq

In a previous post, I went over how you could use F# and Parallel LINQ (PLINQ) as well as the Task Parallel Library (TPL) together nicely. There was a bit of a translation layer needed at the time due to the inherent mismatch between .NET Func and Action delegates and F# functions. What's nice now is that the support for PLINQ and TPL comes out of the box with the F# PowerPack through the aforementioned library, and in particular the PSeqModule. This module contains many of the same combinators as the SeqModule, such as filter, map, fold, zip, iter, etc., but with the backing of both PLINQ and the TPL. For a quick example, we can do the Parallel Grep sample using F# and the PSeq module.
open System
open System.IO
open System.Text.RegularExpressions
open System.Threading

let regexString = @"^[\w-\.]+@([\w-]+\.)+[\w-]{2,4}$"
let searchPaths = [@"C:\Tools"; @"C:\Work"]
let regexOptions = RegexOptions.Compiled ||| RegexOptions.IgnoreCase
let regex = new ThreadLocal<Regex>(Func<_>(fun () -> Regex(regexString, regexOptions)))
let searchOption = SearchOption.AllDirectories

let files =
    seq { for searchPath in searchPaths do
            for file in Directory.EnumerateFiles(searchPath, "*.*", searchOption) do
              yield file }

type FileMatch = { Num : int; Text : string; File : string }

let matches =
    files
    |> PSeq.withMergeOptions(ParallelMergeOptions.NotBuffered)
    |> PSeq.collect (fun file ->
        File.ReadLines(file)
        |> Seq.map2 (fun i s -> { Num = i; Text = s; File = file }) (seq { 1 .. Int32.MaxValue })
        |> Seq.filter (fun line -> regex.Value.IsMatch(line.Text)))

The above sample looks in the Tools and Work directories of my C drive and determines whether there are any email addresses in there, in parallel. We'll cover more of this in depth in the near future, but this is enough to whet your appetite.

Conclusion

With the new release of the F# language, we also have a welcome surprise in the F# PowerPack now finding a home on CodePlex. This move by the F# team allows the PowerPack to grow more naturally and not be confined to major cycles such as .NET Framework or Visual Studio releases. Sometimes, the best way to learn the language is to just learn how the libraries were written, and given it is on CodePlex, we now easily have that opportunity.
http://codebetter.com/matthewpodwysocki/2010/02/11/the-f-powerpack-released-on-codeplex/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+CodeBetter+%28CodeBetter.Com%29
14 January 2011 09:22 [Source: ICIS news]

LONDON (ICIS)--The section of the Rhine between

Florian Krekel, spokesman for shipping authority Wasser- und Schifffahrtsamt Bingen, said experts arrived at the scene of the accident late on Thursday, and would decide on which measures to take this morning. He said high water levels, which were caused by a sudden drop in temperature, were expected to fall again over the weekend, and that a decision would be taken then on whether to open up the section of the river for shipping again. A team was now in the process of trying to anchor the ship, which capsized at around 05:00 local time (04:00 GMT) on Thursday. Two of the ship's crew of four people remained missing, despite divers searching the waters and the vessel, he said. The two workers who were rescued were hospitalised with hypothermia. The ship, which was en route from BASF's chemicals production hub in

The cause of the accident was not yet
http://www.icis.com/Articles/2011/01/14/9425933/rhine-shipping-disrupted-at-least-until-17-january.html
Hi folks

I've just seen this thread after writing something similar to the above autogrid layout. The code below is a diff to add two new tiling styles. The first is 'columns', which takes an additional argument, the number of columns to make, e.g.:

stiler columns 3

The second, 'fair', is just a special case of columns that attempts to make all windows square, i.e. the number of columns is basically sqrt(number of windows).

stiler fair

I'm using this with clusterssh and 20-30 xterms at a time and it is working quite well. If the number of columns is not a factor of the number of windows, there will be gaps at the bottom. I would welcome an elegant way to address this. I'm not a python hacker, so feel free to improve my style.

--- stiler	2010-03-06 00:51:57.000000000 -0500
+++ stiler_columns.py	2010-03-06 15:50:35.000000000 -0500
@@ -150,6 +150,36 @@
     return layout
 
+
+def get_columns_tile(wincount, ncolumns):
+    ## 2nd term rounds up if num columns not a factor of
+    ## num windows; this leaves gaps at the bottom
+    nrows = (wincount/ncolumns) + int(bool(wincount%ncolumns))
+
+    layout = []
+    x = OrigX
+    y = OrigY
+
+    height = int(MaxHeight/nrows - WinTitle - WinBorder)
+    width = int(MaxWidth/ncolumns - 2*WinBorder)
+
+    for n in range(0,wincount):
+        column = n % ncolumns
+        row = n / ncolumns
+
+        x = OrigX + column * width
+        y = OrigY + (int((MaxHeight/nrows)*(row)))
+        layout.append((x,y,width,height))
+
+    return layout
+
+
+def get_fair_tile(wincount):
+    import math
+    ncolumns = int(math.ceil(math.sqrt(wincount)))
+    return get_columns_tile(wincount, ncolumns)
+
+
 def get_max_all(wincount):
     layout = []
     x = OrigX
@@ -264,6 +294,22 @@
     arrange(get_horiz_tile(len(winlist)),winlist)
 
+
+def columns(ncolumns):
+    winlist = create_win_list()
+    active = get_active_window()
+    winlist.remove(active)
+    winlist.insert(0,active)
+    arrange(get_columns_tile(len(winlist), ncolumns),winlist)
+
+
+def fair():
+    winlist = create_win_list()
+    active = get_active_window()
+    winlist.remove(active)
+    winlist.insert(0,active)
+    arrange(get_fair_tile(len(winlist)),winlist)
+
+
 def cycle():
     winlist = create_win_list()
     winlist.insert(0,winlist[len(winlist)-1])
@@ -298,6 +344,10 @@
     vertical()
 elif sys.argv[1] == "horizontal":
     horiz()
+elif sys.argv[1] == "columns":
+    columns(int(sys.argv[2]))
+elif sys.argv[1] == "fair":
+    fair()
 elif sys.argv[1] == "swap":
     swap()
 elif sys.argv[1] == "cycle":

Offline

Hmmm, I've just noticed these two lines are redundant:

x = OrigX
y = OrigY

Offline

Thanks, stiler is great, I use it every day. Do you think you could implement untiling of all windows? I've also just experienced my first bug where - without identifiable cause - stiler wouldn't work anymore. The traceback:

Traceback (most recent call last):
File "/usr/bin/stiler", line 102, in <module>
(Desktop,OrigXstr,OrigYstr,MaxWidthStr,MaxHeightStr,WinList) = initialize()
File "/usr/bin/stiler", line 52, in initialize
current = filter(lambda x: x.split()[1] == "*" , desk_output)[0].split()
IndexError: list index out of range

Yesterday I switched to Xorg 1.8. Do you think this is related? I'll update here if I can solve the issue. Maybe it was just a one-time thing, though (update: it was).

Regards, demian

Last edited by demian (2010-10-01 15:47:29)

Offline

I understand nothing. I installed stiler from the AUR. I have wmctrl and xdotool. When I type "stiler left" in a console, the window is positioned to the left. But how do I do this with other applications? Can I do this with the keyboard? Please teach me

Offline

Stiler doesn't work with python 3 because the commands module has been removed. You get the following error whenever you try to run it:

File "/usr/bin/stiler", line 22, in <module>
import commands
ImportError: No module named commands

Offline
You get the following error whenever you try to run it: File "/usr/bin/stiler", line 22, in <module> import commands ImportError: No module named commands Just change the Shebang in /usr/bin/stiler from #!/usr/bin/python to #!/usr/bin/python2 Offline Hi, I'm not using Arch, but I'm posting here because that's where I found this wonderful stiler script in the first place. The short story is : upgrading to Fedora 15 caused stiler to stop working for me. That is because wmctrl returns incomplete window list for some reason. I guess that is due to some change to the Xlib or Openbox. Anyway, I thought it might be a good idea to completely get rid of wmctrl. So I've used to implement a small python module (xutils.py) that can provide a window list, active window and current desktop without using wmctrl. This actually fixed my bug so I can use stiler again. With the current code, stiler is still using wmctrl to resize/move windows, but it's just because I haven't had time to implement the window move/resize functionnality in python. My modifications are available here: So if anybody run into the same situation where wmctrl is broken, just grab this version and it might work. Offline who needs tiling WM's when you can have openbox with this - difficult to use gimp and chromium in awesome WM :-). exactly what I was looking for . In particular the 2monitor mod. ty so much Last edited by Silicontoad (2011-07-16 13:29:56) Offline Don't know if soulfx still reads this thread, but I noticed a weird problem. I'm using Openbox and the stiler-grid-git package from the AUR. "stiler simple" works perfectly for all applications except for LibreOffice... for some reason, the LibreOffice window ends up being placed 21 pixels (the exact height of my titlebar in .stilerrc) higher than where it should be, and the window is 1 pixel taller than it should be. I have no idea what's causing this... 
here's the output from running "stiler simple" on a terminal window when the only other window is LibreOffice - the output is the same as it is for two terminal windows, which tile perfectly: [<...> ~]$ stiler simple -vv DEBUG:Simple Window Tiler:corner widths: [0.33, 0.5, 0.67] DEBUG:Simple Window Tiler:center widths: [0.34, 1.0] DEBUG:Simple Window Tiler:0x1a00004 is type NORMAL, state Normal DEBUG:Simple Window Tiler:0x1a00047 is type NORMAL, state Normal DEBUG:Simple Window Tiler:moving window: 0x1a00004 to (0,30,864,849) DEBUG:Simple Window Tiler:moving window: 0x1a00047 to (864,30,736,849) I've used stiler on Xfce; Xfwm seemed to not have this issue. Thanks for working on this project; it's really useful. Last edited by thorion (2012-10-25 03:50:40) Offline Don't know if soulfx still reads this thread, Considering that he hasn't posted since 2009, it seems unlikely. (You can get this information from the "User list" using the link at the top left side of your screen, or simply by clicking on his user name on one of his posts). Last edited by 2ManyDogs (2012-10-25 03:59:15) Offline Good to know; I probably should've checked that. It also seems like the last commit to stiler on github <> was 3 years ago. I'll update this if I find a solution. Offline Good to know; I probably should've checked that. It also seems like the last commit to stiler on github <> was 3 years ago. I'll update this if I find a solution. In case anyone else is having problems, I've implemented a really crude workaround. 
In the script (found at /usr/bin/stiler for this AUR package), I found the def move_window function and added this:

def move_window(windowid,PosX,PosY,Width,Height):
    """ Resizes and moves the given window to the given position and dimensions """
    PosX = int(PosX)
    PosY = int(PosY)
    # beginning of my addition
    """ Workarounds for certain stubborn apps """
    if commands.getoutput("xprop -id "+windowid+" | grep _NET_WM_NAME | grep LibreOffice") != "":
        logging.debug("Detected LibreOffice window on "+windowid)
        logging.debug(commands.getoutput("xprop -id "+windowid+" | grep _NET_WM_NAME | grep LibreOffice"))
        PosY = PosY + WinTitle + 1
        Height = Height - 1
    if commands.getoutput("xprop -id "+windowid+" | grep _NET_WM_NAME | grep Wicd") != "":
        logging.debug("Detected Wicd window on "+windowid)
        logging.debug(commands.getoutput("xprop -id "+windowid+" | grep _NET_WM_NAME | grep Wicd"))
        PosY = PosY + WinTitle - 32
    # end of my addition
    logging.debug("moving window: %s to (%s,%s,%s,%s) " % (windowid,PosX,PosY,Width,Height))

I should definitely grab the window by application instead of by window name, but that should be easy to change.

Last edited by thorion (2012-10-27 05:06:14)

Offline

This script is awesome. I modified it to allow re-sizing the current window to top half, bottom half, and top-left, top-right, bottom-left, and bottom-right. Let me know if anyone wants a copy. I don't know anything about using GIT so not sure how to get it on there...

Offline
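thorion's note about grabbing the window by application instead of by window name is straightforward to sketch: xprop can report a window's WM_CLASS, which identifies the application rather than the (changeable) title. The snippet below is only an illustration of that idea (the helper names are my own, not part of stiler, and it assumes xprop's usual WM_CLASS output format):

```python
import subprocess

def parse_wm_class(xprop_line):
    """Pull the quoted class strings out of an xprop WM_CLASS line,
    e.g. 'WM_CLASS(STRING) = "soffice", "libreoffice-writer"'."""
    if "=" not in xprop_line:
        return []
    _, _, values = xprop_line.partition("=")
    return [v.strip().strip('"') for v in values.split(",")]

def window_class(windowid):
    """Ask xprop for the WM_CLASS of a window and parse it."""
    out = subprocess.getoutput("xprop -id " + windowid + " WM_CLASS")
    return parse_wm_class(out)

# The workaround above could then key off the class instead of the title:
# if "libreoffice-writer" in window_class(windowid):
#     PosY = PosY + WinTitle + 1
```

Matching on WM_CLASS survives the window title changing (e.g. LibreOffice appending the document name), which is why it is the more robust hook.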
https://bbs.archlinux.org/viewtopic.php?pid=946842
Top 10 Interview Questions & Answers for Java Developers

Java is one of the most popular programming languages among developers. Its uncomplicated syntax and compatibility with all the major operating systems make it an ideal choice. Today, there are more than 10 million developers who work with the Java language. With competition this intense, organizations hiring Java developers are quite resolute in their hiring priorities. Only professionals with proven technical competencies in Java who can design, code, build, and deploy any kind of application are hired. But no matter how great your coding skills, acing the interview is essential to land the Java role of your dreams. We've compiled 10 of the top questions and model answers often asked at Java Developer interviews – read on to find out where you stand!

#1 Can you differentiate between J2SDK 1.5 and J2SDK 5.0?

Your answer: There is really no difference between J2SDK 1.5 and J2SDK 5.0. The versions have been rebranded by Sun Microsystems.

#2 What is the number of bits used to represent ASCII, Unicode, UTF-16, and UTF-8 characters?

Your answer: Unicode requires 16 bits. ASCII requires 7 bits – however, it is usually represented as 8 bits. UTF-8 represents characters in 8, 16, 24, or 32 bit patterns. UTF-16 requires 16 bit or larger patterns.

#3 Is it possible to import the same package or class twice? Will the JVM load the package twice at runtime?

Your answer: It is possible to import the same package or class more than once. It will not have any effect on the compiler or JVM. The JVM will load the class only once, irrespective of the number of times you import the same class.

#4 Is JVM an overhead?

Your answer: No and Yes. JVM is an extra layer that translates Byte code into Machine code. Java provides an additional layer of translation of the Source code when compared to languages like C.
C++ Compiler – Source Code -> Machine Code
Java Compiler – Source Code -> Byte Code
JVM – Byte Code -> Machine Code

Though it looks like an overhead, this additional translation allows Java to run apps on all platforms, since the JVM provides the translation to Machine code for the underlying Operating System.

#5 There are two objects – a and b – with the same hashcode. I will insert two objects inside the hashmap.

hMap.put(a,a);
hMap.put(b,b);

where a.hashCode()==b.hashCode()

How many objects will be inside the hashmap?

Your answer: There can be 2 elements with the same hashcode. When two elements have the same hashcode, Java uses equals() for further differentiation. So there can be one or two objects, depending on the content of the objects.

#6 Could you provide some implementation of a Dictionary having a large number of words?

Your answer: A Trie (prefix tree) is a good fit: each node stores one character, words sharing a prefix share nodes, and lookup time is proportional to the length of the word rather than the number of words. A HashMap keyed on the word is a simpler alternative when prefix searches are not needed.

#7 What do you think the output of this code will be?

enum Day {
    MONDAY,TUESDAY,WEDNESDAY,THURSDAY,FRIDAY,SATURDAY,SUNDAY
}

public class BuggyBread1 {
    public static void main (String args[]) {
        Set<Day> mySet = new TreeSet<Day>();
        mySet.add(Day.SATURDAY);
        mySet.add(Day.WEDNESDAY);
        mySet.add(Day.FRIDAY);
        mySet.add(Day.WEDNESDAY);
        for(Day d: mySet){
            System.out.println(d);
        }
    }
}

Your answer:

WEDNESDAY
FRIDAY
SATURDAY

Only one WEDNESDAY will be printed, since a Set doesn't allow duplicates. Elements will be printed in the order in which the constants are declared in the Enum: a TreeSet maintains its elements in the ascending order defined by the compareTo method, and the compareTo method of an Enum is defined such that constants declared later are greater than constants declared earlier.

#8 What is the difference between System.out, System.err, and System.in?

Your answer: System.out and System.err are both PrintStream objects: System.out writes to the standard output stream, while System.err writes to the standard error stream. System.in is an InputStream that reads from the standard input stream (typically the keyboard).

#9 How does the substring() method of String class create memory leaks?

Your answer: The substring method can build a new String object with a reference to the whole char array, to avoid copying it.
Thus, you can inadvertently keep a reference to a very big character array with just one character string.

#10 Why is a char array preferred over String to store a password?

Your answer: String is immutable in Java and stored in the String pool. Once it is created, it stays in the pool until it is garbage collected, leaving the password available in memory. It is a security risk because anyone who has access to a memory dump can find the password as clear text. A char array, by contrast, can be explicitly overwritten once the password is no longer needed.

So there you go. These 10 questions should prepare you well for your interview.

Good Luck!
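Two of the behaviours described above are not Java-specific and can be seen in runnable form. The snippets below are illustrative Python analogues of #5 (hash collisions resolved by an equality check) and #10 (a mutable buffer can be scrubbed, an immutable string cannot); the Key class and variable names are my own, not from the article:

```python
# #5: equal hash codes are resolved by an equality check, so a map holds
# one or two entries depending on whether the colliding keys are equal.
class Key:
    def __init__(self, name):
        self.name = name
    def __hash__(self):
        return 42                 # every Key lands in the same bucket
    def __eq__(self, other):
        return isinstance(other, Key) and self.name == other.name

collided = {Key("a"): 1, Key("b"): 2}   # same hash, not equal -> 2 entries
replaced = {Key("a"): 1, Key("a"): 2}   # same hash and equal -> 1 entry
print(len(collided), len(replaced))

# #10: a mutable buffer (like Java's char[]) can be wiped after use,
# while an immutable string (like java.lang.String) cannot.
password = bytearray(b"s3cret")
for i in range(len(password)):
    password[i] = 0               # scrub the secret in place
print(password == bytearray(6))   # bytearray(6) is six zero bytes
```

The same bucket-plus-equality resolution underlies Java's HashMap, and the same mutability argument is why Java APIs such as password prompts hand back char arrays.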
https://www.simplilearn.com/java-developers-interview-questions-answers-article
Testing

Testing is extremely important. Without testing, you cannot be sure that your code is doing what you think. Testing is an integral part of software development, and should be done while you are writing code, not after the code has been written.

There are two main types of tests, both of which you should include in your code.

Runtime (sanity) tests - these are light-weight tests performed while the code is running to ensure that everything is ok, e.g. arguments passed to a function make sense and are valid inputs.

Correctness (unit) tests - these are heavier tests, typically run and written separately from the code, that test that the functions give the correct answers and behave in the expected way.

Runtime tests

These are run in a function to ensure that the function is being called correctly with sensible (sane) arguments. For example, let's consider the following script;

"""Module containing functions used to demonstrate the need for testing"""

def addArrays(a, b):
    """Function to add together the two passed arrays, returning the result."""
    c = []
    for a_, b_ in zip(a, b):
        c.append(a_ + b_)
    return c

Use nano to copy and paste the above script into a file called addarrays.py. Then open a new ipython session in the same directory as addarrays.py and type;

from addarrays import addArrays
c = addArrays( [1,2,3], [4,5,6] )
print(c)

should show that c is equal to [5, 7, 9]. Now type;

c = addArrays( [1,2], [4,5,6] )
print(c)

Now c is seen to be equal to [5, 7]. Is this what you expected? The problem is that addArrays expects both arrays to contain the same number of items. The first array was smaller than the second, but it did not give any error. Should it have returned [5,7,6] instead? To fix this, we need to add a runtime test that checks that both arrays have the same length. If they don't, then we need to report this back to the user with a sensible error message. We do this using an exception.
Exit ipython and use nano to edit the addarrays.py script. Change it so that the function looks like this;

def addArrays(a, b):
    """Function to add together the two passed arrays, returning the result."""
    if len(a) != len(b):
        raise ValueError("Both arrays must have the same length.")
    c = []
    for a_, b_ in zip(a, b):
        c.append(a_ + b_)
    return c

Here, we raise a ValueError, which indicates that something is wrong with the value of one of the arguments. A list of all Python exceptions is here. Also note that you can create your own exceptions as well, instructions here (although this is beyond what we have time to cover in this course). Now when we call the function incorrectly, we get a sensible error message. Check this by opening a new ipython session and typing;

from addarrays import addArrays
c = addArrays([1,2], [3,4,5])

and you should see;

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-2-fdd61ba0cb11> in <module>()
----> 1 addarrays.addArrays([1,2], [3,4,5])

/path/to/addarrays.py in addArrays(a, b)
      5
      6     if len(a) != len(b):
----> 7         raise ValueError("Both arrays must have the same length.")
      8
      9     c = []

ValueError: Both arrays must have the same length.

The benefit of an exception is that it provides a way for your function to test and report when something has gone wrong. If something has gone wrong, it can be reported back to the user with a sensible error message. Also, unlike just printing a message and exiting the program, exceptions provide a way to recover from errors. This is achieved using "try" blocks. For example try typing the following;

a = [1,2]
b = [3,4,5]

try:
    c = addArrays(a,b)
    print(c)
except ValueError:
    print("Something went wrong calling addArrays")

You should see that the string Something went wrong calling addArrays is now printed to the screen. A try block lets you try to run a piece of code.
If an exception is raised, then the exception is caught in the except block. This can be used either to present an even cleaner error message, or to fix the problem, e.g. try typing this;

a = [1,2]
b = [3,4,5]

try:
    c = addArrays(a,b)
except ValueError:
    while len(a) < len(b):
        a.append(0)
    while len(b) < len(a):
        b.append(0)
    c = addArrays(a,b)

print( c )

Now you will see that c is equal to the array [4, 6, 5]. Because the arrays a and b were not the same length, the first call to addArrays in the try block caused a ValueError exception to be raised. This was caught in the except ValueError block. In here, because a was smaller than b, zeroes were appended onto a until it had the same size as b. Now the next call of addArrays in the except ValueError block was successful, allowing c to be created and printed at the end. So you can see that exceptions allow us to fix problems in the context of how the function is called. Note that it would not be appropriate to add this fix into addArrays itself, as addArrays cannot know itself whether or not the arrays contain numbers, or whether or not it would be appropriate to make the arrays equal by padding with zeroes. Only the code that calls addArrays knows the context of the call, and thus what an appropriate fix would be. Exceptions provide a way for addArrays to signal that a problem has occurred, and the try block provides the way for the caller to fix the problem.

Correctness tests

The second set of tests are correctness (also called unit) tests. These are tests that are run on a function to test that it is giving the correct output. For example, we can test that addArrays is adding together numbers correctly by creating a new function to test it, e.g.
in a new ipython session type;

from addarrays import addArrays

def test_add():
    a = [1,2,3]
    b = [4,5,6]
    expect = [5,7,9]
    c = addArrays(a,b)
    if c == expect:
        print("OK")
    else:
        print("BROKEN")

test_add()

You should see that the test passed and the string OK was printed to the screen. Testing manually works but is time-consuming and error prone - we might forget to run a test. What we need is a way to collect together all of the tests and to automate them. The first thing to do is to create a testing script for our module, which is typically called "test_MODULENAME.py", so in our case, it would be "test_addarrays.py". Into this file, we should add all of our tests, e.g. using nano copy and paste in the following;

from addarrays import addArrays

def test_add():
    a = [1,2,3]
    b = [4,5,6]
    expect = [5,7,9]
    c = addArrays(a,b)
    assert expect == c

The only change here is that we have used assert. This is a statement that does nothing if the passed test is true, but that will raise an AssertionError exception if the test is false. We can run the test manually using ipython, e.g. in a new ipython session type:

from test_addarrays import test_add
test_add()

You should see that nothing happens, as the test passes. This is still a bit manual. Fortunately, there is a package called pytest which automates running test scripts like this. pytest automatically finds, runs and reports on tests. Exit ipython and then, on the command line type;

pytest

You should see printed to the screen;

============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
rootdir: /panfs/panasas01/training/train01, inifile:
collected 1 items

test_addarrays.py .

=========================== 1 passed in 0.02 seconds ===========================

This automatically searched all the python files in the directory for functions that started with test_ and ran them. You can check this by breaking the code, e.g.
edit addarrays.py and change the function to the following (replaces a_ + b_ with a_ - b_);

def addArrays(a, b):
    """Function to add together the two passed arrays, returning the result."""
    if len(a) != len(b):
        raise ValueError("Both arrays must have the same length.")
    c = []
    for a_, b_ in zip(a, b):
        c.append(a_ - b_)
    return c

Now go back to the command line and run pytest again, e.g.

pytest

You should now see something like

============================= test session starts ==============================
platform linux -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
rootdir: /panfs/panasas01/training/train01, inifile:
collected 1 items

test_addarrays.py F

=================================== FAILURES ===================================
___________________________________ test_add ___________________________________

    def test_add():
        a = [1,2,3]
        b = [4,5,6]
        expect = [5,7,9]
        c = addArrays(a,b)
>       assert expect == c
E       assert [5, 7, 9] == [-3, -3, -3]
E         At index 0 diff: 5 != -3
E         Use -v to get the full diff

test_addarrays.py:8: AssertionError
=========================== 1 failed in 0.07 seconds ===========================

You can see that it has picked out the line at which the assert failed (marked with the > on the left). What follows (on lines beginning with E) is then a little bit of help to see why it failed. First it prints out the assert line again but with the variables expanded out so you can see exactly what it was comparing. Then it tells you what part of the comparison failed, in this case it found that the first elements didn't match (5 != -3).

When 1 + 1 = 2.0000001

One problem with testing that a calculation is correct is that computers don't do floating point arithmetic too well. For example, in a new ipython session type;

expected = 0
actual = 0.1 + 0.1 + 0.1 - 0.3
assert expected == actual

While this may work, you will likely find that an AssertionError exception was raised, e.g.
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-3-18a1029b2615> in <module>()
----> 1 assert expected == actual

AssertionError:

We can see what caused the problem by printing the value of actual, e.g.

print(actual)

On my machine, I get the value 5.55111512313e-17. The problem is that computers are continually rounding floating point numbers. Rounding errors can accumulate during a calculation and these can lead to seemingly wrong predictions such that 0.1 + 0.1 + 0.1 - 0.3 != 0. Rounding errors can cause problems in your code, and also cause problems when writing tests. If you are going to compare floating point numbers, then you must make the comparison to within a threshold or delta, e.g. expected agrees with actual if abs(expected - actual) < 0.0000000000000001. Notice the use of python's inbuilt absolute value (abs) function - in this case, it is important that you take the absolute difference. Otherwise, if ever actual is greater than expected (despite being within the threshold), the comparison will fail, not giving you what you were hoping for. Thresholds are application-specific. Fortunately, pytest provides an approx function that allows you to compare floating point numbers to within different thresholds. It does this by comparing two numbers up to a specified absolute or relative precision, e.g. type

import pytest
assert actual == pytest.approx(expected)

This prints nothing, because 5.55111512313e-17 is equal to 0 to within a relative precision of 1e-6 (the default for pytest.approx). Now type;

assert actual == pytest.approx(expected, abs=1e-10)

This again prints nothing, as 5.55111512313e-17 is equal to 0 up to 10 decimal places.
Now try;

assert actual == pytest.approx(expected, abs=1e-17)

This should now raise an AssertionError exception that looks like this;

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-44-c91bdbddaf67> in <module>()
----> 1 assert actual == pytest.approx(expected, abs=1e-17)

AssertionError:

When should we test?

Testing is extremely important, and is the only way that you can check whether or not your code performs as you expect. Documentation tells the user what a function is supposed to do, while tests provide the guarantee that the function actually does it. Ideally you should write tests continually during development of software. For example, my workflow is to plan a new function, then write the documentation for the function (so that I don't forget my plan!), then write the function, and then write tests to ensure that the function is working. I am then able to move onto the next function I need to write, safe in the knowledge that the previous functions I have written will work and will not cause obscure and difficult to find bugs. Then, as I continue to develop the software, and it is used by other people, I will discover new bugs or will receive bug reports. I then turn these bug reports into new unit tests, so that, once fixed, those bugs cannot reappear in my code. Obviously, we can't write tests to cover every problem, and indeed trying to write too large a test suite would cost us more time than would be worthwhile. However, you will quickly work out how much is the right amount of testing, through trial and error. There is definitely no excuse for never testing, and any effort expended in writing tests is less painful than dealing with the aftermath of either;

- A bug being discovered in your script just before you publish a paper on the results, leading you to have to delay publication or, worse, have to make a retraction.
- Or (as happened once to myself) having to tell another scientist that all of their calculations have to be run again, as the script they had been using had a bug that rendered all output incorrect.

In addition, you should also periodically review your tests, like code, to catch tests that;

- Pass when they should fail (false positives).
- Fail when they should pass (false negatives).
- Don't test anything.

Also, never, ever write 'empty' tests, such as;

def test_critical_correctness():
    # TODO - will complete this tomorrow!
    pass

These give a false sense of security!

Summary

Testing

- Costs time while coding, but saves time in the long run (less effort spent debugging, less effort spent recovering from bugs found just before paper publication).
- Gives confidence that code does what we want and expect it to.
- Promotes trust that code, and so research, is correct.
- Mirrors your documentation. Documentation provides the promise of what the code will do. Tests provide the proof.

One of the problems with testing is that you want to test if an action will be correct, without necessarily performing the action. For example, you may want to test that your script will correctly identify which files to remove, without actually removing those files. Or, you may want to test that your script will correctly form the command line to run an external program, without actually running that program. Mocking is the process of testing without acting. If you want to learn more about mocking, see this post.

Exercise

Expand test_addarrays.py with more tests, e.g. a function to test that addArrays correctly adds together arrays of negative numbers, a function to test that addArrays correctly adds arrays of strings, and a function to test that addArrays correctly adds together empty arrays. Try to think of all of the things that could break the code. Also add a function that tests that addArrays correctly reports when the arrays are the wrong size, e.g.
def test_wrongsize():
    a = [1, 2, 3]
    b = [4, 5]
    with pytest.raises(ValueError):
        addArrays(a, b)

Also add in tests for floating point addition, using pytest.approx. Run your tests with pytest. If you get stuck, an example test script is here
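If you want to check your own attempt at the exercise, one possible shape for those tests looks like this. It is a sketch only: the addArrays function is restated so the snippet is self-contained, the test names are my own, and the linked example script may well differ:

```python
import pytest

def addArrays(a, b):
    """Copy of the chapter's function so this sketch runs standalone."""
    if len(a) != len(b):
        raise ValueError("Both arrays must have the same length.")
    return [a_ + b_ for a_, b_ in zip(a, b)]

def test_add_negatives():
    assert addArrays([-1, -2, -3], [-4, -5, -6]) == [-5, -7, -9]

def test_add_strings():
    # '+' concatenates strings, so string arrays work too
    assert addArrays(["Hello, ", "ab"], ["world!", "cd"]) == ["Hello, world!", "abcd"]

def test_add_empty():
    assert addArrays([], []) == []

def test_add_floats():
    # floating point sums need pytest.approx rather than ==
    assert addArrays([0.1, 0.2], [0.2, 0.1]) == pytest.approx([0.3, 0.3])

def test_wrongsize():
    with pytest.raises(ValueError):
        addArrays([1, 2, 3], [4, 5])
```

Saved as test_addarrays.py alongside the module, running pytest in that directory should collect and run all five tests.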
https://chryswoods.com/intermediate_python/testing.html
Filing this bug based on support Ticket #283236.

Basically, my breakpoints stopped being resolved in my Android project, but they work in referenced projects, PCLs etc.

Env Info:

I've tried cleaning/rebuilding and various combinations of checking & unchecking Android configuration options (link assemblies, enable/disable instrumentation, shared runtime, etc.) I made no changes to my .csproj file and in fact have since rolled back to a point I know was working (1 month ago) - but the problem persists. I have periods (.) but no spaces ( ) or any other special characters (no @ signs) in my folder paths. I have 2 PCLs, 2 binding projects, and a couple of submodules. I can hit break points in my PCLs and subprojects, just not the main project. Issue does not exist in Xamarin Studio on the Mac.

Here's my basic test case:

Submodule 1 implements Android.App.Application & overrides OnCreate, e.g. public class Submodule1App : Application
Main Project inherits Submodule1App and overrides OnCreate, e.g. MainApp : Submodule1App

1.) Set a break in MainApp at base.OnCreate().
2.) Set a break in Submodule1App at base.OnCreate()

Expected: MainApp breakpoint hits before Submodule1App breakpoint
Actual: MainApp never hits breakpoint, but Submodule1App does.

Rolling back to the previous stable release fixed this issue:.

Forgot to include VS version info:

Microsoft Visual Studio Enterprise 2015
Version 14.0.24720.00 Update 1
Microsoft .NET Framework Version 4.6.01038

Hi Matt, As per the shared version info you're not seeing this issue on latest stable, and you're still using 4.0.0 bits. Can you please confirm if updating to 4.0.1 helps? Our current stable build is 4.0.1.96. Thanks!

Hi Jose, I upgraded to Xamarin 4.0.1.96 (dcea9c1) and ran into the problem. When I downgraded to Xamarin 4.0.0.1717 (1390b70), the breakpoints are working again.
While I was on 4.0.1.96, I also ran the gamut of other suggestions online, including - none of which worked for me:

As a side note, I just had this issue crop up in my current configuration. It was working fine all day then just stopped. The only thing I can think of is that I stopped a few builds in the middle of compilation. One was stuck processing string resource files, so I did a clean and a rebuild, then I lost connection to my Mac in the middle of the build so I rebooted both the Mac and the PC. I'm not sure if that was what broke it or not, but I will try to do an uninstall/reinstall and see if I can reproduce the issue.

Matt, can you provide debug logs (that's Debug -> Windows -> Output)? The case looks fairly normal to me, so this should not be happening at all. Maybe there's an error in the logs that will help me understand.

Joaquin, I have an example for you that is not for a client. Nevertheless, I would still prefer to share the debug logs privately. Bugzilla is not letting me mark it as private. Can I email them? Basically, my case still exists:

1.) Set a breakpoint anywhere in a TOP-LEVEL Android app
2.) Set a breakpoint anywhere in either a PCL or an Android library project downstream from the first.

Expected: The TOP-LEVEL breakpoint is hit
Actual: only the downstream breakpoint is hit.

Note that this is a new project and I was able to hit breakpoints when I first started, but after normal usage it stopped after a few deploys. I.e. deploy 1 breakpoints hit, deploy 3 breakpoints stop working, and no amount of cleans/rebuilds/deleting bin and obj does anything.

Here's some updated env info:

Microsoft Visual Studio Enterprise 2015
Version 14.0.25123.00 Update 2
Microsoft .NET Framework Version 4.6.01038
NuGet Package Manager 3.4.2
Visual Studio Tools for Universal Windows Apps 14.0.25208.00
Xamarin 4.0.3.214 (0dd817c)
Xamarin.Android 6.0.3.5 (a94a03b)
Xamarin.iOS 9.6.1.8 (3a25bf1)

I'll try a repro and confirm.
I have tried several combinations and couldn't repro in 4.7. The original bug has to do with improperly created mdbs, but all that code changed in 4.4-4.6. I'm marking this as resolved for QA verification. Please @Matt do reopen if the error reappears.

Verified on:

Microsoft Visual Studio Enterprise 2015
Version 14.0.25431.01 Update 3
Microsoft .NET Framework Version 4.7.02053
Installed Version: Enterprise
Xamarin 4.7.10.6 (ac395c3ba)
Xamarin.Android 8.0.0.23 (5257e43)
Xamarin.iOS 11.2.0.8 (9a9f054)

Detailed build Info: Not able to reproduce this issue, hence marking it as verified as not reproducible.
https://xamarin.github.io/bugzilla-archives/39/39122/bug.html
Regular readers will be familiar that Oracle's ADF solution is built on top of JavaServer Faces (JSF). ADF supports bean scopes such as ViewScope, PageFlowScope and BackingBeanScope on top of the regular JSF ApplicationScope, SessionScope and RequestScope beans. Readers should also be familiar that the beans have a defined life (i.e. scope) as detailed in the JDev Web Guide:

In specifically focusing on the life cycle of ADF PageFlowScope and BackingBeanScope beans, the guide states (to paraphrase):

1) A PageFlowScope bean for a Task Flow (either bounded or unbounded) survives for the life of the task flow.
2) A BackingBeanScope bean for a page fragment will survive from when receiving a request from the client and sending a response back.

The implication of this is when we're using Bounded Task Flows (BTFs) based on page fragments, it's typical to have a PageFlowScope bean to accept and carry the parameters of the BTF, and one or more BackingBeanScope beans for each fragment within the BTF.

Sample Application

With this in mind let's explore a simple application that shows this sort of usage in play. You can download the sample application from here. The application's Unbounded Task Flow (UTF) includes a single SessionScope bean MySessionBean carrying one attribute mySessionBeanString as follows:

public class MySessionBean {

    private String mySessionBeanString = "mySessionBeanValue";

    public MySessionBean() {
        System.out.println("MySessionBean initialized");
    }

    public void setMySessionBeanString(String mySessionBeanString) {
        this.mySessionBeanString = mySessionBeanString;
    }

    public String getMySessionBeanString() {
        return mySessionBeanString;
    }
}

Note the System.out.println on the constructor to tell us when the bean is instantiated.

The UTF also includes a single page MyPage.jspx containing the following code:

When this page is first rendered the inputText makes reference to the SessionScope variable.
JSF doesn't pre-initialize managed beans; only on first access do they get instantiated. As such, as soon as the inputText is rendered we should see the message from the MySessionBean constructor when it accesses mySessionBeanString via the EL expression:

MySessionBean initialized

If we were to comment out the region, run this page and press the commandButton, we would only see the initialized message once, as the session bean lives for the life of the user's session. Now let's move onto considering the region and embedded Bounded Task Flow (BTF) called FragmentBTF.xml. Points of note for the BTF are:

a) The Task Flow binding has its Refresh property = ifNeeded

b) And the BTF expects a parameter entitled btfParameterString, which takes its value from our SessionScope bean's variable:

a) The BTF has its initializers and finalizers set to call a "none" scope TaskFlowUtilsBean to simply print a message when the task flow is initialized and finalized. This will help us to understand when the BTF restarts and terminates.

b) The BTF has one incoming parameter btfParameterString. To store this value the BTF has its own PageFlowScope bean called MyPageFlowBean, with a variable myPageFlowBeanString to carry the parameter value.

public class MyPageFlowBean {

    private String myPageFlowBeanString;

    public MyPageFlowBean() {
        System.out.println("MyPageFlowBean initialized");
    }

    public void setMyPageFlowBeanString(String myPageFlowBeanString) {
        this.myPageFlowBeanString = myPageFlowBeanString;
    }

    public String getMyPageFlowBeanString() {
        return myPageFlowBeanString;
    }
}

Again note the System.out.println to help us understand when the PageFlowScope bean is initialized.

c) The BTF contains a single fragment MyFragment.jsff with the following code:

Inside the fragment are:

c.1) An inputText to output the current value for the MyPageFlowBean.myPageFlowBeanString. Remember this value is indirectly derived from the btfParameterString of the BTF.
c.2) A second inputText to output the value from another bean, this time a BackingScopeBean which we consider next.

d) The BackingBeanScope bean is where we'll see some interesting behaviour. Let's explain its characteristics first:

d.1) The BackingBeanScope bean entitled MyBackingBean is managed by the BTF.

d.2) It is only referenced by the MyFragment.jsff within the BTF, via the inputText above in c.2.

d.3) The BackingBeanScope bean has the following code which includes the usual System.out.println on the constructor:

public class MyBackingBean {

    private String myBackingBeanString;
    private MyPageFlowBean myPageFlowBean;

    public MyBackingBean() {
        System.out.println("MyBackingBean initialized");
        myPageFlowBean = (MyPageFlowBean)resolveELExpression("#{pageFlowScope.myPageFlowBean}");
        myBackingBeanString = myPageFlowBean.getMyPageFlowBeanString();
    }

    public void setMyBackingBeanString(String myBackingBeanString) {
        this.myBackingBeanString = myBackingBeanString;
    }

    public String getMyBackingBeanString() {
        return myBackingBeanString;
    }
}

d.4) It contains a variable myBackingBeanString which is referenced via an EL expression by the inputText explained in c.2.

d.5) However note that the constructor of the bean grabs a reference to the PageFlowScope bean and uses that to access the myPageFlowBeanString value, writing the value to myBackingBeanString. While this example is abstract for the purposes of this blog post, it's not uncommon in the context of a BTF for a backing bean to want to access state from the BTF's page flow scope bean. As such the technique is to use the JSF classes to evaluate an EL expression to return the page flow scope bean. This is what the resolveELExpression method in the backing bean does, called via the constructor and given to a by-reference variable in the backing bean to hold for its life/scope. At this point we have all the moving parts of our very small application.
Scenario One - Initialization

Let's run through the sequence of events we expect to occur when this page renders for the first time again, this time including the BTF region's processing as well as the parent page's processing:

1) From earlier we know that when the page first renders we'll see the SessionScope bean's constructor logged as the inputText in MyPage.jspx accesses mySessionBeanString:

MySessionBean initialized

2) Next, as the region in MyPage.jspx is rendered, the FragmentBTF is called for the first time and we see two log messages produced:

MyPageFlowBean initialized
Task Flow initialized

It's interesting that we see the PageFlowScope bean instantiated before the Task Flow, but this makes sense as MySessionBean.mySessionBeanString needs to be passed as a parameter to the BTF before the BTF actually starts.

3) As MyFragment.jsff renders for the first time, we then see the MyBackingBean initialized for the first time:

MyBackingBean initialized

So to summarize, at this point, by just running the page and doing nothing else, the following log entries will have been shown:

MySessionBean initialized
MyPageFlowBean initialized
Task Flow initialized
MyBackingBean initialized

In turn the page looks like this; note the value from the MySessionBean pushing itself to the MyPageFlowBean and the MyBackingBean:

Scenario Two - The logic bomb

With the page now running, let's investigate how there's a bomb in our logic. Up to now, if we've been developing an application based on this structure, we've probably run this page a huge number of times and seen the exact same order as above. One of the problems for developers is that we start and stop our application so many times, we don't use the system like a real user does, where the application is up and running for a long time. This can hide issues from us. With our page running, say we want to change the SessionScope bean's value.
Easy enough to do: we simply change the value in the mySessionBeanString inputText.

1) As the session scope bean lives for the life of the user's session, we don't expect to see the bean newly instantiated.

2) Because the region's task flow binding Refresh property is set to ifNeeded, and the source value of the btfParameterString has been updated, we expect the BTF to restart. As the contents of the region are executed, based on our previous understanding we should logically see the following log entries:

Task Flow finalized
MyPageFlowBean initialized
Task Flow initialized
MyBackingBean initialized

(Note the Task Flow finalized message first. This is separate from this discussion, but given the BTF is restarting, the previous BTF instance needs to be finalized first.)

Yet the actual log entries we see are:

MyBackingBean initialized
Task Flow finalized
MyPageFlowBean initialized
Task Flow initialized

And the resulting page looks like this:

The explanation is simple enough based on two rules we've established in this post:

1) Firstly, we know beans are only instantiated on access.

2) Returning to the Oracle documentation, the scope of a BackingBean is: "A BackingBeanScope bean for a page fragment will survive from when receiving a request from the client and sending a response back."

With these two rules in mind, when we consider the first scenario, the reason the BackingBean is instantiated after the PageFlowScope bean is that the contents of the BTF fragment are rendered after the BTF is started. As such the access to the BackingBean is *after* the PageFlowScope bean, as the fragment hasn't been rendered yet.
With the second scenario, as the fragment is already rendered on the screen, the reason the BackingBean is instantiated before the PageFlowScope bean (and even before the termination and restart of the BTF) is that the incoming request from the user will reference the BackingBean (maybe writing values back to the bean), causing it to initialize at the beginning of the request as per rule 2 above. Officially "Erk!". Then, as the BackingBean in its constructor references the PageFlowScope bean, it gets a handle on the previous BTF instance's PageFlowScope bean, as the new BTF instance has yet to start and create a new PageFlowScope instance with the new value passed from the caller. That is why we see the old value in the page for myBackingBeanString.

The specific coding mistake in the examples above is the retrieval of the PageFlowScope bean in the BackingBeanScope bean's constructor. The solution is that any method of the BackingBeanScope bean that requires the value from the PageFlowScope should resolve access to the PageFlowScope each time it's required, not once in the constructor. If you consider the blog post References to UIComponents in Session-Scope beans by Blake Sullivan you'll see a number of techniques for solving this issue.

Conclusion

Understanding the scope of beans is definitely important for JSF and ADF programming. But the scope of a bean doesn't imply the order of instantiation, and the order of instantiation is not guaranteed, so we need to be careful that our understanding doesn't rest on assumptions about when beans will be in/out of scope. Readers who are familiar with JSF 2.0 will know that CDI and annotations work around these issues. However, for ADF programmers, CDI for the ADF scoped beans is currently not a choice (though this may change). See the Annotations section of the ADF-JSF 2.0 whitepaper from Oracle.

Errata

This blog post was written against JDev 11.1.1.4.0.

4 comments:

Nice post Chris!
The link to your demoapp should be "" /Torben

Thanks for pointing out the error Torben, corrected. CM.

Excellent post! Unfortunately, the whitepaper roadmap does not refer to CDI beans. Moreover, WebLogic does not support it right now (nor is there any definite answer on OTN as to when this is going to happen), so even if you are building JSF 2.0 you still cannot use them. Do you happen to know when Oracle is planning to adopt them in ADF or WLS? Thanx in advance. Spyros

Hi Spyros, thanks for the compliments. Unfortunately no, I don't know when. I'll take a punt though: presumably CDI requires a 1.6 JEE server. As such, if WLS 12c is just around the corner based on 1.6, at least one piece of the puzzle will be complete. However ADF itself also needs to support them, and Oracle may not. Time will tell. CM.
http://one-size-doesnt-fit-all.blogspot.com/2011/11/adf-faces-logic-bomb-in-order-of-bean.html
Say you're building a component that shows up in lots of places, like a header. Look at designs and sure enough, same header on every page. Logo, few buttons, username. You create a reusable component. DRY all the way.

const Header = () => {
  return (
    <Box>
      <Logo />
      <MenuItem />
      <MenuItem />
      <UserMessages />
      <UserProfile />
    </Box>
  )
}

Fantastic! You have a header that works everywhere.

A new page appears "Hey, header looks weird on homepage. Where's the signup button?"

Your designer's not happy. Header that's the same everywhere can't be same on the homepage. That's for marketing, they need signups. obviously 🙄

You add a prop.

const Header = (props: { showSignup: boolean }) => {
  return (
    <Box>
      <Logo />
      <MenuItem />
      <MenuItem />
      {!props.showSignup && <UserMessages />}
      {!props.showSignup && <UserProfile />}
      {props.showSignup && <SignupButton />}
    </Box>
  )
}

Cool, you've kept it DRY. Same header, same styling, signup yes or no.

A funnel shows up "Hey, users are getting distracted out of this purchase funnel. Header too busy"

Your universal header is costing you money. When users are on the path to paying, you want nothing to stand in their way. Another boolean!

"Oh and let's pretend this is a fullscreen modal, add a close button. Keep the logo"

Two booleans!

const Header = (props: {
  showSignup: boolean
  hideMenu: boolean
  showClose: boolean
}) => {
  return (
    <Box>
      <Logo />
      {!props.hideMenu && <MenuItem />}
      {!props.hideMenu && <MenuItem />}
      {!props.showSignup && <UserMessages />}
      {!props.showSignup && <UserProfile />}
      {props.showSignup && <SignupButton />}
      {props.showClose && <CloseButton />}
    </Box>
  )
}

Well that's not confusing at all. With 3 booleans, your header has 8 possible incantations. Add one more ask from design and you're up to 16. 😅

You've created a mess

Your beautiful universal header component is hard to use. Complexity is exploding, the code is getting hairy, and you're the only person on your team who knows how to hold it. For everyone else it's frustrating as heck.
Get it right for this screen, breaks on that screen. Debugging feels like whack-a-mole. And I'm showing you a simple example. In production code these booleans start interacting. If hideMenu and showSignup then do X, otherwise if not showClose do Y.

Nobody wants to touch your component ever again. It's too tricky.

Variants to the rescue

How many of those 8 incantations are valid? I count 3. Then why allow the other 5? If 5 out of 8 ways to use your component are bugs, something's wrong.

Here's what you do 👉 turn those flags into a variant prop. Use TypeScript and you even get autocomplete 😍

const Header = (props: { variant: "homepage" | "funnel" }) => {
  let hideMenu, showClose, showSignup

  switch (props.variant) {
    case "homepage":
      showSignup = true
      break
    case "funnel":
      hideMenu = true
      showClose = true
      break
  }

  return (
    <Box>
      <Logo />
      {!hideMenu && <MenuItem />}
      {!hideMenu && <MenuItem />}
      {!showSignup && <UserMessages />}
      {!showSignup && <UserProfile />}
      {showSignup && <SignupButton />}
      {showClose && <CloseButton />}
    </Box>
  )
}

Now the rest of your team can use your header with ease: <Header variant="..." />. And thanks to TypeScript their IDE tells them what's available.

AND we found a bug in my pseudocode. You don't want user menus showing up in funnel headers 😅

Cheers,
~Swizec

PS: quick tip for shorter emails 👉 break your wrist️
https://swizec.com/blog/variants-a-quick-tip-for-better-react-components
NGravatar 1.0.1

NGravatar provides MVC HtmlHelper and UrlHelper extension methods for rendering Gravatar avatars from gravatar.com. The project is licensed under the MIT open-source license and is hosted at Google Code. See for more information.

Gravatar avatars are retrieved based on an email address and optional parameters. A rendered Gravatar avatar on an MVC view page might look something like:

@Html.Gravatar("ngravatar@kendoll.net", 80, htmlAttributes: new { style = "border:10px solid blue;" })

The above line would render an <img> tag with the source of the Gravatar for "ngravatar@kendoll.net". The size, default image, rating, and any additional HTML attributes can also be specified.

Using NGravatar requires that the "NGravatar.Html" namespace be added to the Web.config (or the top of a view with a @using directive). See the source at Google Code for an example project.

Install-Package NGravatar -Version 1.0.1

dotnet add package NGravatar --version 1.0.1

<PackageReference Include="NGravatar" Version="1.0.1" />

paket add NGravatar --version 1.0.1

Release Notes

Version 1.x.x is a complete rewrite of NGravatar with many breaking changes. To prevent these changes from affecting your code, don't update past version 0.x.x. Instructions on how to do so can be found here:

Dependencies

- Microsoft.AspNet.Mvc (>= 3.0.20105.1)
- NGravatar.Core (>= 1.0.1)

Used By

NuGet packages: This package is not used by any NuGet packages.

GitHub repositories: This package is not used by any popular GitHub repositories.
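The Web.config change mentioned above would look roughly like this for Razor views (an illustrative sketch; the exact section names depend on the ASP.NET MVC version in use):

```xml
<system.web.webPages.razor>
  <pages pageBaseType="System.Web.Mvc.WebViewPage">
    <namespaces>
      <add namespace="System.Web.Mvc" />
      <add namespace="NGravatar.Html" />
    </namespaces>
  </pages>
</system.web.webPages.razor>
```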
https://www.nuget.org/packages/NGravatar
Question: Typically, the way I'd define a true global constant (let's say, pi) would be to place an extern const in a header file, and define the constant in a .cpp file:

constants.h:

extern const double pi;

constants.cpp:

#include "constants.h"
#include <cmath>

const double pi = std::acos(-1.0);

This works great for true constants such as pi. However, I am looking for a best practice when it comes to defining a "constant" in the sense that it will remain constant from program run to program run, but may change, depending on an input file. An example of this would be the gravitational constant, which is dependent on the units used. g is defined in the input file, and I would like it to be a global value that any object can use. I've always heard it is bad practice to have non-constant globals, so currently I have g stored in a system object, which is then passed on to all of the objects it generates. However this seems a bit clunky and hard to maintain as the number of objects grows. Thoughts?

Solution:1 It all depends on your application size. If you are truly absolutely sure that a particular constant will have a single value shared by all threads and branches in your code for a single run, and that is unlikely to change in the future, then a global variable matches the intended semantics most closely, so it's best to just use that. It's also something that's trivial to refactor later on if needed, especially if you use distinctive prefixes for globals (such as g_) so that they never clash with locals - which is a good idea in general. In general, I prefer to stick to YAGNI, and don't try to blindly placate various coding style guides. Instead, I first look at whether their rationale applies to a particular case (if a coding style guide doesn't have a rationale, it is a bad one), and if it clearly doesn't, then there is no reason to apply that guide to that case.

Solution:2 A legitimate use of singletons! A singleton class constants() with a method to set the units?
Solution:3 You can use a variant of your latter approach: make a "GlobalState" class that holds all those variables and pass that around to all objects:

struct GlobalState {
  float get_x() const;
  float get_y() const;
  ...
};

struct MyClass {
  MyClass(GlobalState &s) {
    // get data from s here
    ... = s.get_x();
  }
};

It avoids globals, if you don't like them, and it grows gracefully as more variables are needed.

Solution:4 It's bad to have globals which change value during the lifetime of the run. A value that is set once upon startup (and remains "constant" thereafter) is a perfectly acceptable use for a global.

Solution:5 I can understand the predicament you're in, but I am afraid that you are unfortunately not doing this right. The units should not affect the program; if you try to handle multiple different units in the heart of your program, you're going to get hurt badly. Conceptually, you should do something like this:

Parse Input
|
Convert into SI metric
|
Run Program
|
Convert into original metric
|
Produce Output

This ensures that your program is nicely isolated from the various metrics that exist. Thus if one day you somehow add support for the French metric system of the 16th century, you'll just have to configure the Convert steps (Adapters) correctly, and perhaps a bit of the input/output (to recognize them and print them correctly), but the heart of the program, i.e. the computation unit, would remain unaffected by the new functionality. Now, if you are to use a constant that is not so constant (for example the acceleration of gravity on Earth, which depends on the latitude, longitude and altitude), then you can simply pass it as an argument, grouped with the other constants:

class Constants {
public:
  Constants(double g, ....);

  double g() const;
  /// ...

private:
  double mG;
  /// ...
};

This could be made a Singleton, but that goes against the (controversial) Dependency Injection idiom.
Personally I stray away from Singletons as much as I can; I usually use some Context class that I pass to each method, which makes it much easier to test the methods independently from one another.

Solution:6 Why is your current solution going to be hard to maintain? You can split the object up into multiple classes as it grows (one object for simulation parameters such as your gravitational constant, one object for general configuration, and so on).

Solution:7 My typical idiom for programs with configurable items is to create a singleton class named "configuration". Inside configuration go things that might be read from parsed configuration files, the registry, environment variables, etc. Generally I'm against making get() methods, but this is my major exception. You can't typically make your configuration items consts if they have to be read from somewhere at startup, but you can make them private and use const get() methods to make the client view of them const.

Solution:8 This actually brings to mind the C++ Template Metaprogramming book by Abrahams & Gurtovoy - is there a better way to manage your data so that you don't get poor conversions from yards to meters or from volume to length, and maybe that class knows about gravity being a form of acceleration? Also, you already have a nice example here: pi = the result of some function...

const double pi = std::acos(-1.0);

So why not make gravity the result of some function, which just happens to read it from a file?

double configGravity(); // reads g from the config file

const double gravity = configGravity();

double configGravity() {
  // open some file
  // read the data
  // return result
}

The problem is that because the global is initialized prior to main being called, you cannot provide input to the function - what config file? What if the file is missing or doesn't have g in it? So if you want error handling you need to go for a later initialization; singletons fit that better.

Solution:9 Let's spell out some specs. So, you want: (1) the file holding the global info (gravity, etc.)
to outlive your runs of the executable using them; (2) the global info to be visible in all your units (source files); (3) your program to not be allowed to change the global info, once read from the file. Well:

(1) Suggests a wrapper around the global info whose constructor takes an ifstream or file name string reference (hence, the file must exist before the constructor is called and it will still be there after the destructor is invoked);

(2) Suggests a global variable of the wrapper. You may, additionally, make sure that that is the only instance of this wrapper, in which case you need to make it a singleton as was suggested. Then again, you may not need this (you may be okay with having multiple copies of the same info, as long as it is read-only info!);

(3) Suggests a const getter from the wrapper.

So, a sample may look like this:

#include <iostream>
#include <string>
#include <fstream>
#include <cstdlib> // for EXIT_FAILURE

using namespace std;

class GlobalsFromFiles
{
public:
    GlobalsFromFiles(const string& file_name)
    {
        //...process file:
        std::ifstream ginfo_file(file_name.c_str());
        if( !ginfo_file )
        {
            //throw SomeException(some_message); //not recommended to throw from constructors
            //(definitely *NOT* from destructors)
            //but you can... the problem would be: where do you place the catcher?
            //so better just display an error message and exit
            cerr<<"Uh-oh...file "<<file_name<<" not found"<<endl;
            exit(EXIT_FAILURE);
        }
        //...read data...
        ginfo_file>>gravity_;
        //...
    }
    double g_(void) const
    {
        return gravity_;
    }
private:
    double gravity_;
};

GlobalsFromFiles Gs("globals.dat");

int main(void)
{
    cout<<Gs.g_()<<endl;
    return 0;
}

Solution:10
}; double Grav(double m1, double m2, double r) { return C.g * m1 * m2 / (r*r); } (Short names are ok, too, all scientists and engineers do that.....) I've used the fact that local variables (i.e. members, parameters, function-locals, ..) take precedence over the global in a few cases as "apects for the poor": You could easily change the method to double Grav(double m1, double m2, double r, Constants const & C = ::C) { return C.g * m1 * m2 / (r*r); } // same code! You could create an struct AlternateUniverse { Constants C; AlternateUniverse() { PostulateWildly(C); // initialize C to better values double Grav(double m1, double m2, double r) { /* same code! */ } } } The idea is to write code with least overhead in the default case, and preserving the implementation even if the universal constants should change. Call Scope vs. Source Scope Alternatively, if you/your devs are more into procedural rather thsn OO style, you could use call scope instead of source scope, with a global stack of values, roughly: std::deque<Constants> g_constants; void InAnAlternateUniverse() { PostulateWildly(C); // g_constants.push_front(C); CalculateCoreTemp(); g_constants.pop_front(); } void CalculateCoreTemp() { Constants const & C= g_constants.front(); // ... } Everything in the call tree gets to use the "most current" constants. OYu can call the same tree of coutines - no matter how deeply nested - with an alternate set of constants. Of course it should be encapsulated better, made exception safe, and for multithreading you need thread local storage (so each thread gets it's own "stack") Calculation vs. User Interface We approach your original problem differently: All internal representation, all persistent data uses SI base units. Conversion takes place at input and output (e.g. even though the typical size is millimeter, it's always stored as meter). I can't really compare, but worksd very well for us. 
Dimensional Analysis

Other replies have at least hinted at dimensional analysis, such as the corresponding Boost library. It can enforce dimensional correctness, and can automate the input/output conversions.
http://www.toontricks.com/2019/05/tutorial-proper-way-to-make-global.html
One interesting feature of Cython is that it supports native parallelism (see the cython.parallel module). The cython.parallel.prange function can be used for parallel loops; thus one can take advantage of Intel® Many Integrated Core Architecture (Intel® MIC Architecture) using thread parallelism in Python.

Cython in Intel® Distribution for Python* 2017

Intel® Distribution for Python* 2017 is a binary distribution of the Python interpreter, which accelerates core Python packages, including NumPy, SciPy, Jupyter, matplotlib, Cython, and so on. The package integrates Intel® Math Kernel Library (Intel® MKL), Intel® Data Analytics Acceleration Library (Intel® DAAL), pyDAAL, Intel® MPI Library and Intel® Threading Building Blocks (Intel® TBB). For more information on these packages, please refer to the Release Notes.

The Intel Distribution for Python 2017 can be downloaded here. It is available for free for Python 2.7.x and 3.5.x on OS X*, Windows* 7 and later, and Linux*. The package can be installed as a standalone or with the Intel® Parallel Studio XE 2017. The Intel Distribution for Python supports both Python 2 and Python 3; there are two separate packages available, for Python 2.7 and Python 3.5. In this article, the Intel® Distribution for Python 2.7 on Linux (l_python27_pu_2017.0.035.tgz) is installed on a 1.4 GHz, 68-core Intel® Xeon Phi™ processor 7250 with four hardware threads per core (a total of 272 hardware threads).

To install, extract the package content, run the install script, and then follow the installer prompts:

$ tar -xvzf l_python27_pu_2017.0.035.tgz
$ cd l_python27_pu_2017.0.035
$ ./install.sh

After the installation completes, activate the root environment (see the Release Notes):

$ source /opt/intel/intelpython27/bin/activate root

Thread Parallelism in Cython

In Python, there is a mutex that prevents multiple native threads from executing bytecodes at the same time.
Because of this, threads in Python cannot run in parallel. This section explores thread parallelism in Cython. This functionality is then imported into the Python code as an extension module, allowing the Python code to utilize all the cores and threads of the hardware underneath.

To generate an extension module, one can write Cython code (a file with extension .pyx). The .pyx file is then compiled by the Cython compiler to convert it into efficient C code (a file with extension .c). The .c file is in turn compiled and linked by a C/C++ compiler to generate a shared library (a .so file). The shared library can be imported in Python as a module.

In the following multithreads.pyx file, the function serial_loop computes log(a)*log(b) for each entry in the A and B arrays and stores the result in the C array. The log function is imported from the C math library. The NumPy module, the high-performance scientific computation and data analysis package, is used to create and operate on the A and B arrays. Similarly, the function parallel_loop performs the same computation using OpenMP* threads to execute the loop body. Instead of range, prange (parallel range) is used to allow multiple threads to execute in parallel. prange is a function of the cython.parallel module and can be used for parallel loops. When this function is called, OpenMP starts a thread pool and distributes the work among the threads. Note that the prange function can be used only when the Global Interpreter Lock (GIL) is released, by putting the loop in a nogil context (the GIL prevents multiple threads from running concurrently). With wraparound(False), Cython never checks for negative indices; with boundscheck(False), Cython doesn't do bounds checks on the arrays.
$ cat multithreads.pyx

cimport cython
import numpy as np
cimport openmp
from libc.math cimport log
from cython.parallel cimport prange
from cython.parallel cimport parallel

THOUSAND = 1024
FACTOR = 100
NUM_TOTAL_ELEMENTS = FACTOR * THOUSAND * THOUSAND
X1 = -1 + 2*np.random.rand(NUM_TOTAL_ELEMENTS)
X2 = -1 + 2*np.random.rand(NUM_TOTAL_ELEMENTS)
Y = np.zeros(X1.shape)

def test_serial():
    serial_loop(X1,X2,Y)

def serial_loop(double[:] A, double[:] B, double[:] C):
    cdef int N = A.shape[0]
    cdef int i
    for i in range(N):
        C[i] = log(A[i]) * log(B[i])

def test_parallel():
    parallel_loop(X1,X2,Y)

@cython.boundscheck(False)
@cython.wraparound(False)
def parallel_loop(double[:] A, double[:] B, double[:] C):
    cdef int N = A.shape[0]
    cdef int i
    with nogil:
        for i in prange(N, schedule='static'):
            C[i] = log(A[i]) * log(B[i])

After completing the Cython code, the Cython compiler converts it to a C extension file. This can be done with a distutils setup.py file (distutils is used to distribute Python modules). To use the OpenMP support, one must tell the compiler to enable OpenMP by providing the -fopenmp flag in a compile argument and a link argument in the setup.py file, as shown below. The setup.py file invokes the setuptools build process that generates the extension modules. By default, this setup.py uses GNU GCC* to compile the C code of the Python extension. In addition, we add the -O0 compile flag (disable all optimizations) to create a baseline measurement.

$ cat setup.py

from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

setup(
    ext_modules = cythonize([
        Extension("multithreads",
                  ["multithreads.pyx"],
                  extra_compile_args = ["-O0", "-fopenmp"],
                  extra_link_args = ['-fopenmp'])
    ])
)

Use the command below to build the C/C++ extensions:

$ python setup.py build_ext --inplace

Alternatively, you can also manually compile the Cython code:

$ cython multithreads.pyx

This generates the multithreads.c file, which contains the Python extension code. You can compile the extension code with the gcc compiler to generate the shared object multithreads.so file.
$ gcc -O0 -shared -pthread -fPIC -fwrapv -Wall -fno-strict-aliasing -fopenmp multithreads.c -I/opt/intel/intelpython27/include/python2.7 -L/opt/intel/intelpython27/lib -lpython2.7 -o multithreads.so

After the shared object is generated, Python code can import this module to take advantage of thread parallelism. The following section will show how one can improve its performance. You can import the timeit module to measure the execution time of a Python function. Note that by default, timeit runs the measured function 1,000,000 times; set the number of executions to 100 in the following examples for a shorter execution time. Basically, timeit.Timer() imports the multithreads module and measures the time spent by the function multithreads.test_serial(). The argument number=100 tells the Python interpreter to perform the run 100 times. Thus, t1.timeit(number=100) measures the time to execute the serial loop (only one thread performs the loop) 100 times. Similarly, t2.timeit(number=100) measures the time when executing the parallel loop (multiple threads perform the computation in parallel) 100 times.

- Measure the serial loop with the gcc compiler, compiler option -O0 (all optimizations disabled).

$ python
Python 2.7.12 |Intel Corporation| (default, Oct 20 2016, 03:10:12)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Intel(R) Distribution for Python is brought to you by Intel Corporation.
Please check out:

Import timeit and use the timer t1 to measure the time spent in the serial loop. Note that you built with the gcc compiler and disabled all optimizations. The result is displayed in seconds.

>>> import timeit
>>> t1 = timeit.Timer("multithreads.test_serial()","import multithreads")
>>> t1.timeit(number=100)
2874.419779062271

- Measure the parallel loop with the gcc compiler, compiler option -O0 (all optimizations disabled).
The parallel loop is measured by t2 (again, built with the gcc compiler and all optimizations disabled):

>>> t2 = timeit.Timer("multithreads.test_parallel()","import multithreads")
>>> t2.timeit(number=100)
26.016316175460815

As you can observe, the parallel loop improves the performance by roughly a factor of 110x.

- Measure the parallel loop with the icc compiler, compiler option -O0 (all optimizations disabled).

Next, recompile using the Intel® C Compiler and compare the performance. For the Intel® C/C++ Compiler, use the -qopenmp flag instead of -fopenmp to enable OpenMP. After installing the Intel Parallel Studio XE 2017, set the proper environment variables and delete all previous builds:

$ source /opt/intel/parallel_studio_xe_2017.1.043/psxevars.sh intel64
Intel(R) Parallel Studio XE 2017 Update 1 for Linux*

$ rm multithreads.so multithreads.c -r build

To explicitly use the Intel icc to compile this application, execute the setup.py file with the following command:

$ LDSHARED="icc -shared" CC=icc python setup.py build_ext --inplace

The parallel loop is measured by t2 (this time, built with the Intel compiler, all optimizations disabled):

$ python
>>> import timeit
>>> t2 = timeit.Timer("multithreads.test_parallel()","import multithreads")
>>> t2.timeit(number=100)
23.89365792274475

- Measure the parallel loop with the icc compiler, compiler option -O3.

For the third try, you may want to see whether using -O3 optimization and enabling the Intel® Advanced Vector Extensions (Intel® AVX-512) ISA on the Intel® Xeon Phi™ processor can improve the performance. To do this, in setup.py, replace -O0 with -O3 and add -xMIC-AVX512. Repeat the compilation, and then run the parallel loop as indicated in the previous step, which results in: 21.027512073516846.
The following graph shows the results (in seconds) when compiling with gcc, icc without optimization enabled, and icc with optimization and the Intel AVX-512 ISA:

The result shows that the best time (21.03 seconds) is obtained when the parallel loop is compiled with the Intel compiler, with auto-vectorization (-O3) combined with the Intel AVX-512 ISA (-xMIC-AVX512) for the Intel Xeon Phi processor.

By default, the Intel Xeon Phi processor uses all available resources: it has 68 cores, and each core runs four hardware threads, so a total of 272 threads, or four threads/core, run in a parallel region. It is possible to modify the number of cores used and the number of threads run by each core. The last section shows how to use an environment variable to accomplish this.

- To run 68 threads on 68 cores (one thread per core) executing the loop body 100 times, set the KMP_PLACE_THREADS environment variable as below:

$ export KMP_PLACE_THREADS=68c,1t

- To run 136 threads on 68 cores (two threads per core) running the parallel loop 100 times, set the KMP_PLACE_THREADS environment variable as below:

$ export KMP_PLACE_THREADS=68c,2t

- To run 204 threads on 68 cores (three threads per core) running the parallel loop 100 times, set the KMP_PLACE_THREADS environment variable as below:

$ export KMP_PLACE_THREADS=68c,3t

The following graph summarizes the results:

Conclusion

This article showed how to use Cython to build an extension module for Python in order to take advantage of multithread support on the Intel Xeon Phi processor. It showed how to use the setup script to build a shared library, and how the parallel loop performance can be improved by trying different compiler options in that script. The article also showed how to set a different number of threads per core.
http://www.digit.in/apps/thread-parallelism-in-cython-33993.html
Hello, I'm trying to apply DSP effects to my system. I'm under the impression that if I use the system to apply the effect, as shown in the examples, it will apply the effect to all sounds currently playing and to all sounds played afterwards, until the effect is turned off. Is this assumption correct?

Also, when I set up the DSP, I do it exactly as it's done in the example code, and the FMOD_RESULT values always come back FMOD_OK, but the sound output seems not to be affected at all. Does the format of the audio file matter at all? I'm using OGG format for pretty much every sound in my game.

More specifically, this is how I'm doing things: in the constructor of my class, the FMOD::DSP object is created, and I have a public member function that flips the effect on and off and is pretty much a copy-paste of the example code. Any advice? Thanks in advance!

- bmantzey asked 10 years ago

Make sure you're using FMOD_SOFTWARE for your sounds.

- brett answered 10 years ago
http://www.fmod.org/questions/question/forum-27770/?tab=answers&sort=active
Key:lgbtq

The lgbtq=* tag indicates LGBTQ+ community-friendly places for Lesbian, Gay, Bi, Trans, Queer, etc. people. The tag values can be understood as degrees of acceptance of the LGBTQ+ community. This tag can be added to: pubs, bars, cafes, nightclubs, sports clubs, dancing schools, sexual, medical or general counseling, foundations, libraries, museums, shops, community centres, toilets...

Only places which openly mark themselves as an LGBTQ+ venue should be tagged as such. In places with widespread oppression of LGBTQ+ people, such places will not openly advertise like this, so they should not be mapped.

How to Map

Add the lgbtq=* tag to the object. Use lgbtq:*=* for more options, e.g. lgbtq:trans=primary for a venue aimed at trans people.

Values

lgbtq=primary

Locations which cater to an exclusively or predominantly LGBTQ+ audience, either by the managers running it that way, or by overwhelming convention.

Examples
- amenity=bar + lgbtq=primary - What is commonly called a gay bar
- amenity=bar + lgbtq=primary + lgbtq:women=primary - A bar which is aimed at LGBTQ+ women (some people may call this a lesbian bar)
- amenity=community_centre + lgbtq=primary + lgbtq:trans=primary - A trans community centre
- amenity=place_of_worship + lgbtq=primary - A place of worship which is aimed towards LGBTQ+ people

lgbtq=welcome

The location has some verifiable indication that LGBTQ+ clientele are welcome, but does not cater primarily to them. Indications of welcome might, for example, be in the form of signage or a statement on the location's website, which can be surveyed by other mappers. Note that OpenStreetMap's long-standing verifiability rule means we cannot tag "LGBTQ+ unwelcome" establishments as an opposite. lgbtq=friendly has a similar meaning but is less used.

lgbtq=only

This venue is only for LGBTQ+ people. This is probably rare, but might be useful with subkeys.
This is not for places which are mostly attended by LGBTQ+ people, which is often too subjective, but for places which essentially ban others.

Examples
- amenity=nightclub + lgbtq=primary + lgbtq:men=only - A nightclub which is run to cater to LGBTQ+ people, and only allows in men.

lgbtq=no

This venue denies entry to LGBTQ+ people. In some places it is illegal to discriminate in this way, so this tag might be uncommon. For example, there should be almost no places in the UK, Ireland, Germany, some states in the USA, etc. which should have this tag. However, if it is verifiable that a place explicitly disallows LGBTQ+ people in a region that does not legally prevent such discrimination, this value is a valid option. This value should not be used to tag "LGBTQ+ unwelcome" establishments (see lgbtq=welcome above).

Examples
- amenity=place_of_worship + lgbtq=no - A place of worship which denies entry to LGBTQ+ people.

When using lgbtq:*=only, other categories of people must be presumed to be banned. lgbtq:women=only implies lgbtq:men=no.

lgbtq=yes

lgbtq=yes should be interpreted as lgbtq=primary, but should not be used. yes is the opposite of no, but the opposite of lgbtq=no is lgbtq=only. For clarity it is best to avoid lgbtq=yes (and lgbtq:*=yes).

More specific / subcultures

lgbtq=* can be used as a namespace, for more fine-grained detail for different categories of people/subcultures. The following is a suggested guide; you are free to expand this or use your own values.

Notes
- Since bi & pan (etc.) people exist, lesbian or gay might not be accurate, since the venue might allow lesbians and bi women (or gay men and bi men), and harmful, since it contributes to bi-erasure.
- For the avoidance of doubt, men, women, etc. is trans-inclusive: men means trans and cis men, etc.
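In OSM's XML representation, a tagged venue from the examples above would look something like this (the id and coordinates are placeholders):

```xml
<node id="-1" lat="51.5" lon="-0.1">
  <!-- a bar run for LGBTQ+ women, per the tagging scheme above -->
  <tag k="amenity" v="bar"/>
  <tag k="lgbtq" v="primary"/>
  <tag k="lgbtq:women" v="primary"/>
</node>
```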
Uses

Places, websites, or apps using this tagging scheme:
- QueerMap by qiekub (not updated, on hold)
- MapBeks

See also
- Wikipedia's LGBT portal
- community_centre:for=lgbtq
- Tagging in Support of Women and Girls
- Historical interest:
  - gay=* - an original, nearly identical, proposal. But inaccurate since it lumps all LGBTQ+ people into "gay".
  - Proposed features/Visitors orientation (a proposal)
  - OpenQueerMap (alas defunct)

OSM Logos

There are some LGBTQ+ versions of the OpenStreetMap logos: LGBTQ+ Rainbow Pride, Genderqueer Pride, and Genderfluid Pride.

References
- ↑ "iD release notes". 21 May 2019.

Possible synonyms
https://wiki.openstreetmap.org/wiki/Key:lgbtq
clime 0.3.1

Convert functions into multi-command programs breezily. The full version of this documentation is at clime.mosky.tw.

Clime

Clime lets you convert any module into a multi-command CLI program with zero configuration. The main features:

- It works well with zero configuration. Frees you from the configuration hell.
- Docstring (i.e., help text) is just configuration. When you finish your docstring, the configuration of aliases and metavars is also finished.
- It generates usage for each command automatically. It is a better choice than the heavy optparse or argparse for most CLI tasks.

CLI-ize ME!

Let me show you Clime with an example. We have a simple script with a docstring here:

# file: repeat.py

def repeat(message, times=2, count=False):
    '''It repeats the message.

    options:
        -m=<str>, --message=<str>  The message.
        -t=<int>, --times=<int>
        -c, --count
    '''
    s = message * times
    return len(s) if count else s

After we add this line:

import clime.now

Our CLI program is ready!

$ python repeat.py twice
twicetwice

$ python repeat.py --times=3 thrice
thricethricethrice

It also generates a pretty usage for this script:

$ python repeat.py --help
usage: [-t <int> | --times=<int>] [-c | --count] <message>
   or: repeat [-t <int> | --times=<int>] [-c | --count] <message>

If you have a docstring in your function, it also shows up in the usage manual with --help:

$ python repeat.py repeat --help
usage: [-t <int> | --times=<int>] [-c | --count] <message>
   or: repeat [-t <int> | --times=<int>] [-c | --count] <message>

It repeats the message.

options:
    -m=<str>, --message=<str>  The message.
    -t=<int>, --times=<int>
    -c, --count

You can find more examples in the clime/examples directory. Command describes more about how it works.

Installation

Clime is hosted on two different platforms, PyPI and GitHub.
Install from PyPI

Install Clime from PyPI for a stable version:

$ sudo pip install clime

If you don't have pip, execute

$ sudo apt-get install python-pip

to install pip on a Debian-based Linux distribution.

- Author: Mosky
- Documentation: clime package documentation
- License: MIT
- Platform: any
- Categories
- Package Index Owner: mosky
- DOAP record: clime-0.3.1.xml
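To give a feel for what Clime automates, here is a toy sketch of the core idea - mapping command-line tokens onto a function's parameters. This is an illustration only, not Clime's actual implementation:

```python
import inspect

def dispatch(func, argv):
    """Toy clime-style dispatch: map CLI-style tokens onto func's parameters.

    Options are cast using the type of the parameter's default value; a bare
    option toggles a boolean default. Not Clime's real implementation.
    """
    params = inspect.signature(func).parameters
    args, kwargs = [], {}
    for token in argv:
        if token.startswith("--"):
            key, _, value = token[2:].partition("=")
            default = params[key].default
            if isinstance(default, bool):
                kwargs[key] = True            # bare flag toggles a boolean
            elif isinstance(default, int):
                kwargs[key] = int(value)      # cast using the default's type
            else:
                kwargs[key] = value
        else:
            args.append(token)
    return func(*args, **kwargs)

def repeat(message, times=2, count=False):
    """It repeats the message."""
    s = message * times
    return len(s) if count else s

print(dispatch(repeat, ["twice"]))                # twicetwice
print(dispatch(repeat, ["--times=3", "thrice"]))  # thricethricethrice
```

Note that the bool check must come before the int check, since bool is a subclass of int in Python.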
https://pypi.python.org/pypi/clime
A "Hello World" plugin

This "Hello World" plugin example shows how to register a plugin with Pants. It defines a new hello-world goal with two tasks. Here's how to create it:

If you don't have an existing Pants project to work with, create one. Locate its config file, typically pants.ini in the repo root.

Create a directory for your plugins. In this example we will use the plugins/ directory in the repo root, but there is no convention, and you can put them wherever you like.

In the plugins/ directory, create the following filesystem structure:

hello/
  __init__.py
  register.py
  tasks/
    __init__.py
    your_tasks.py

The __init__.py files can be empty - you're just telling Python that you created modules.

In your_tasks.py place the following content:

from pants.task.task import Task

class HelloTask(Task):
    def execute(self):
        print("Hello")

class WorldTask(Task):
    def execute(self):
        print("world!")

Task is a simple base class for your tasks - you must implement the execute method.

In register.py place the following content:

from pants.goal.goal import Goal
from pants.goal.task_registrar import TaskRegistrar as task
from hello.tasks.your_tasks import HelloTask, WorldTask

def register_goals():
    Goal.register(name="hello-world", description="Say hello to your world")
    task(name='hello', action=HelloTask).install('hello-world')
    task(name='world', action=WorldTask).install('hello-world')

This creates a new goal named hello-world and registers the two tasks to it.

In pants.ini place the following content:

[GLOBAL]
pants_version: 1.9.0
pythonpath: ["%(buildroot)s/plugins"]
backend_packages: ["hello"]

backend_packages defines which plugins you want to use in your project. If you want to use custom plugins directly from source when building in the same repo, you need to put them on the pythonpath so Pants can find them.

You are ready to use your plugin! First try to find your goal by typing ./pants goals:

...
hello-world: Say hello to your world
...
Now you can use your plugin by typing ./pants hello-world:

...
Executing tasks in goals: hello-world
XX:XX:XX 00:00 [hello-world]
XX:XX:XX 00:00   [hello]
                 Hello
XX:XX:XX 00:00   [world]
                 world!
XX:XX:XX 00:00 [complete]
               SUCCESS

Note that to consume the custom plugin as a published artifact (say on PyPI), instead of directly from the repo, you would set plugins instead of backend_packages and pythonpath:

[GLOBAL]
pants_version: 1.9.0
plugins: ["myorg.hello==1.7.6"]

Similarly, if your custom plugin is consumed directly from the repo, but has dependencies on published artifacts, then you list those in plugins:

[GLOBAL]
pants_version: 1.9.0
pythonpath: ["%(buildroot)s/plugins"]
backend_packages: ["hello"]
plugins: ["some.dependency==4.5.11"]

See below for more details.
https://www.pantsbuild.org/howto_plugin.html
>> And we are too cycled on PIDs. Why? I think this is the most minor
>> feature which can easily live out of mainstream if needed, while
>> virtualization is the main goal.
>
> although I could be totally ignorant regarding the PID
> stuff, it seems to me that it pretty well reflects the
> requirements for virtualization in general, while being
> simple enough to check/test several solutions ...
>
> why? simple: it reflects the way 'containers' work in
> general, it has all the issues with administration and
> setup, similar to 'guests' (it requires some management
> and access from outside, while providing security and
> isolation inside), containers could be easily built on
> top of that or as an addition to the pid space structures
> at the same time it's easy to test, and issues will show
> up quite early, so that they can be discussed and, if
> necessary, solved without rewriting an entire framework.

I would disagree with you. These discussions IMHO have led us in the wrong direction.

Can I ask a bunch of questions which are related to other virtualization issues, but which are not addressed by Eric at all?

- How are you planning to make hierarchical namespaces for such resources as IPC? Sockets? Unix sockets? The process tree is hierarchical by its nature. But what is a hierarchical IPC and other resources? And no one ever told me why we need a hierarchy at all. No _real_ use cases. But it's ok.

- Eric wants to introduce namespaces, but totally forgets how much they are tied to each other. IPC refers to netlinks, network refers to proc and sysctl, etc. There are only some rare cases where you will be able to use namespaces without full OpenVZ/vserver containers. But yes, you are right, it will be quite easy for us to build containers on top of namespaces :)

- How long do these namespaces live? And which management interface will each of them have? For example, can you destroy some namespace? Or is it automatically destroyed when the last reference to it is dropped?
This is not as simple a question as it may seem, especially taking into account that some namespaces can live longer than the processes in them (e.g. time-wait buckets should live some time after a container has died; or shm, which can also die in a foreign context...).

So I really hate that we concentrated on the discussion of VPIDs, while there are more general and global questions about the virtualization itself.

> now that you mention it, maybe we should have a few
> rounds how those 'generic' container syscalls would
> look like?

First of all, I don't think syscalls like "do_something and exec" should be introduced. Next, these syscall interfaces can be introduced only after we have discussed the _general_ concepts, like: how these containers exist, live, are created, waited on and so on. But this is impossible to discuss on the PID example only. Because:

1. pids are directly related to process lifetime. Not many issues here.
2. pids are hierarchical by their nature.
3. there are many more approaches here than in, for example, the network case.

Kirill
http://lkml.org/lkml/2006/2/20/92
I have written a Spock test where I'm trying to make assertions on a list of items. Let's say, as an example, I want to check whether every number in a list is equal to 500:

def numbers = [1,2,3,4]
numbers.each { assert it == 500 }

Assertion failed:

assert it == 500
       |  |
       1  false

def "Check if each number in a list is 500"() {
    given: "A list of numbers"
    def numbers = [1,2,3,4]

    expect: "each number to be 500"
    numbers.each { assert it == 500 }
}

You can also have something like this:

@Unroll
def "Check if #number is 500"() {
    expect:
    number == 500

    where:
    number << [1,2,3,4]
}

Not sure that fits your needs though.
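As another option (not from the original answers): Groovy collections have an every method, which lets the whole check live in a single boolean condition in the expect block, at the cost of less detailed per-element failure output:

```groovy
def "every number in the list is 500"() {
    given: "a list of numbers"
    def numbers = [500, 500, 500]

    expect: "every element to equal 500"
    numbers.every { it == 500 }
}
```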
https://codedump.io/share/VC2zYww46IqB/1/make-assertions-on-a-list-in-spock
Sieve of Eratosthenes In Python

Table of Contents

Hey Learner, here in this article we will learn the Sieve of Eratosthenes algorithm in Python. If you want to learn algorithms like this one, follow our site. So let's get started.

The Sieve of Eratosthenes is one of the most efficient ways to find all primes smaller than n when n is smaller than 10 million or so, and it is fast and easy to implement with modest space and time requirements.

What is the complexity of the Sieve of Eratosthenes?

Finding all primes between 1 and n with the Sieve of Eratosthenes takes O(n log log n) time, which is close to linear in practice.

Algorithm for the Sieve of Eratosthenes

Step 1. Create an array of integers from 2 to n: (2, 3, 4, ..., n).
Step 2. Initially, let p equal 2, the first prime number.
Step 3. Start a loop: for (p = 2; p*p <= n; p++)
Step 4.   if (prime[p] == true)
Step 5.     for (i = p*p; i <= n; i = i+p) prime[i] = false;
Step 6. and repeat from step 3.

Program for the Sieve of Eratosthenes in Python

def sieve(n):
    arr = [1 for i in range(n+1)]
    i = 2
    while i*i <= n:
        if arr[i] == 1:
            for j in range(i*i, n+1, i):
                arr[j] = 0
        i += 1
    # print all the numbers which are prime
    for i in range(2, n+1):
        if arr[i] == 1:
            print(i, end=' ')

# main
if __name__ == '__main__':
    n = int(input())
    sieve(n)

Input: 50
Output: 2 3 5 7 11 13 17 19 23 29 31 37 41 43 47

If you want to learn more Python algorithms, follow this page: Python.
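If you want the primes back as a list for further use instead of printing them, the same sieve can be wrapped up like this (a small variation on the article's code):

```python
def primes_up_to(n):
    """Return all primes <= n using the Sieve of Eratosthenes."""
    if n < 2:
        return []
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    i = 2
    while i * i <= n:
        if is_prime[i]:
            # i is prime: cross off every multiple of i, starting at i*i
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
        i += 1
    return [p for p in range(2, n + 1) if is_prime[p]]

print(primes_up_to(50))
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```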
Sieve of Eratosthenes Video Tutorial

Learn More Python Tutorials
- Sieve of Eratosthenes In C
- Python String Methods
- List Methods in Python
- Python Sequence Tutorial
- Fibonacci Program in Python
- Python Sub-String
- Lower String in Python
- Python Tkinter for GUI
- Install Python in your system

If you have any queries regarding the Sieve of Eratosthenes in Python, ask in the comment section; I am happy to answer your questions. And if you got some benefit from this article, please share it with your friends.
https://technoelearn.com/sieve-of-eratosthenes-in-python/
The sys module provides a number of functions and variables that can be used to manipulate different parts of the Python runtime environment.

1.13.1 Working with Command-line Arguments

The argv list contains the arguments that were passed to the script when the interpreter was started, as shown in Example 1-66. The first item contains the name of the script itself.

Example 1-66. Using the sys Module to Get Script Arguments

1.13.2 Working with Modules

The path list contains a list of directory names in which Python looks for modules, as Example 1-67 shows.

Example 1-67. Using the sys Module to Manipulate the Module Search Path

Example 1-68 demonstrates the builtin_module_names list, which contains the names of all modules built into the Python interpreter.

Example 1-68. Using the sys Module to Find Built-in Modules

File: sys-builtin-module-names-example-1.py

import sys

def dump(module):
    print module, "=>",
    if module in sys.builtin_module_names:
        print ""
    else:
        module = __import__(module)
        print module.__file__

dump("os")
dump("sys")
dump("string")
dump("strop")
dump("zlib")

os => C:\python\lib\os.pyc
sys =>
string => C:\python\lib\string.pyc
strop =>
zlib => C:\python\zlib.pyd

The modules dictionary contains all loaded modules. The import statement checks this dictionary before it actually loads something from disk. As you can see from Example 1-69, Python loads quite a bunch of modules before handing control over to your script.

Example 1-69. Using the sys Module to Find Imported Modules

File: sys-modules-example-1.py

import sys

print sys.modules.keys()

['os.path', 'os', 'exceptions', '__main__', 'ntpath', 'strop', 'nt', 'sys', '__builtin__', 'site', 'signal', 'UserDict', 'string', 'stat']

1.13.3 Working with Reference Counts

The getrefcount function (shown in Example 1-70) returns the reference count for a given object - that is, the number of places where this variable is used. Python keeps track of this value, and when it drops to 0, the object is destroyed.

Example 1-70.
Using the sys Module to Find the Reference Count

1.13.4 Checking the Host Platform

Example 1-71 shows the platform variable, which contains the name of the host platform.

Example 1-71. Using the sys Module to Find the Current Platform

1.13.5 Tracing the Program

The setprofiler function allows you to install a profiling function. This is called every time a function or method is called, at every return (explicit or implied), and for each exception. Let's look at Example 1-72.

Example 1-72. Using the sys Module to Install a Profiler Function

The settrace function in Example 1-73 is similar, but the trace function is called for each new line:

Example 1-73. Using the sys Module to Install a trace Function

1.13.6 Redirecting Output

Output can be redirected by assigning a file-like object to sys.stdout, as shown in Example 1-74.

Example 1-74. Using the sys Module to Redirect Output

...HUMÖR"
# restore standard output
sys.stdout = old_stdout
print "MÅÅÅÅL!"

heja sverige friskt humör
MÅÅÅÅL!

An object that implements the write method is all it takes to redirect output.

1.13.7 Exiting the Program

When you reach the end of the main program, the interpreter is automatically terminated. If you need to exit in midflight, you can call the sys.exit function, which takes an optional integer value that is returned to the calling program. It is demonstrated in Example 1-75.

Example 1-75. Using the sys Module to Exit the Program

The sys.exit call raises a SystemExit exception, which can be caught, as Example 1-76 shows.

Example 1-76. Catching the sys.exit Call

File: sys-exit-example-2.py

import sys

print "hello"
try:
    sys.exit(1)
except SystemExit:
    pass
print "there"

hello
there

If you want to clean things up after yourself, you can install an "exit handler," which is a function that is automatically called on the way out. This is shown in Example 1-77.

Example 1-77.
Catching the sys.exit Call Another Way

File: sys-exitfunc-example-1.py

import sys

def exitfunc():
    print "world"

sys.exitfunc = exitfunc

print "hello"
sys.exit(1)
print "there" # never printed

hello
world

In Python 2.0, you can use the atexit module to register more than one exit handler.
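The code for Example 1-74 is incomplete in this copy; here is a Python 3 sketch of the same redirection idea (the book's examples use Python 2 print statements):

```python
import io
import sys

# redirect standard output to an in-memory buffer
old_stdout = sys.stdout
sys.stdout = io.StringIO()
print("heja sverige friskt humör")

# restore standard output and show what was captured
captured = sys.stdout.getvalue()
sys.stdout = old_stdout
print(captured, end="")
print("MÅÅÅÅL!")
```

Any object that implements a write method works in place of the StringIO buffer.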
https://flylib.com/books/en/2.722.1/the_sys_module.html
!

Whenever a data constructor is applied, each argument to the constructor is evaluated if and only if the corresponding type in the algebraic datatype declaration has a strictness flag, denoted by an exclamation point. For example:

data STList a = STCons a !(STList a) -- the second argument to STCons will be
                                     -- evaluated before STCons is applied
              | STNil

To illustrate the difference between strict and lazy constructor application, consider the following:

stList = STCons 1 undefined
lzList = (:) 1 undefined
stHead (STCons h _) = h -- this evaluates to undefined when applied to stList
lzHead (h : _) = h      -- this evaluates to 1 when applied to lzList

! is also used in "bang patterns" (a GHC extension), to indicate strictness in patterns:

f !x !y = x + y

~

Lazy (irrefutable) pattern match. For example:

f1,f2 :: Maybe Int -> String
(+++),(++++) :: (a->b)->(c->d)->(a,c)->(b,d)
f1 x = case x of {Just n -> "Got it"}
f2 x = case x of {~(Just n) -> "Got it"}

9 --

Line comment character. Everything after -- on that line is a comment:

main = print "hello world" -- comment here

10 >

In a Bird-style Literate Haskell file, the > character is used to introduce a code line:

comment line

> main = print "hello world"

11 as

Renaming module imports. Like:

import qualified Data.Map as M

main = print (M.empty :: M.Map Int ())

12 case, of

A case alternative may include local bindings (where decls) that scope over all of the guards and expressions of the alternative. An alternative of the form

pat -> exp where decls

is treated as shorthand for:

pat | True -> exp where decls

If no alternative matches, the result is _|_.

13 class

A class declaration introduces a new type class and the overloaded operations that must be supported by any type that is an instance of that class.

class Num a where
    (+) :: a -> a -> a
    negate :: a -> a

14 data

The data declaration is how one introduces new algebraic data types into Haskell.
For example:

data Set a = NilSet
           | ConsSet a (Set a)

Another example, to create a datatype to hold an [[Abstract syntax tree]] for an expression, one could use:

data Exp = Ebin Operator Exp Exp
         | Eunary Operator Exp
         | Efun FunctionIdentifier [Exp]
         | Eid SimpleIdentifier

where the types Operator, FunctionIdentifier and SimpleIdentifier are defined elsewhere. See the page on types for more information, links and examples.

15 default

Ambiguities in the class Num are most common, so Haskell provides a way to resolve them - with a default declaration:

default (Int)

Only one default declaration is permitted per module, and its effect is limited to that module. If no default declaration is given in a module then it is assumed to be:

default (Integer, Double)

16 deriving

data and newtype declarations contain an optional deriving form. If the form is included, then derived instance declarations are automatically generated for the datatype in each of the named classes. Derived instances provide convenient commonly-used operations for user-defined datatypes. For example, derived instances for datatypes in the class Eq define the operations == and /=, freeing the programmer from the need to define them.

data T = A | B | C deriving (Eq, Ord, Show)

17 do

Syntactic sugar for use with monadic expressions. For example:

do { x ; result <- y ; foo result }

is shorthand for:

x >> y >>= \result -> foo result

18 forall

Used to explicitly quantify type variables, for example in existentially quantified datatypes (a GHC extension):

data Foo = forall a. MkFoo a (a -> Bool)
         | Nil

MkFoo :: forall a. a -> (a -> Bool) -> Foo
Nil :: Foo

[MkFoo 3 even, MkFoo 'c' isUpper] :: [Foo]

19 foreign

A keyword for the foreign function interface that is enabled by -ffi, -fffi or implied by -fglasgow-exts.

20 hiding

When importing modules, without introducing a name into scope, entities can be excluded by using the form

hiding (import1 , ... , importn)

which specifies that all entities exported by the named module should be imported except for those named in the list.
For example:

import Prelude hiding (lookup,filter,foldr,foldl,null,map)

21 if, then, else

A conditional expression has the form:

if e1 then e2 else e3

and returns the value of e2 if the value of e1 is True, e3 if e1 is False, and _|_ otherwise.

max a b = if a > b then a else b

22 import

Modules may reference other modules via explicit import declarations, each giving the name of a module to be imported and specifying its entities to be imported. For example:

module Main where
import A
import B
main = A.f >> B.f

module A where
f = ...

module B where
f = ...

23 infix, infixl, infixr

A fixity declaration gives the fixity and binding precedence of one or more operators. There are three kinds of fixity - non-, left- and right-associativity (infix, infixl, and infixr, respectively) - and ten precedence levels, 0 to 9 inclusive (level 0 binds least tightly, and level 9 binds most tightly).

module Bar where
infixr 7 `op`
op = ...

24 instance

An instance declaration declares that a type is an instance of a class and includes the definitions of the overloaded operations - called class methods - instantiated on the named type.

instance Num Int where
    x + y = addInt x y
    negate x = negateInt x

25 let, in

Let expressions have the general form:

let { d1 ; ... ; dn } in e

They introduce a nested, lexically-scoped, mutually-recursive list of declarations (let is often called letrec in other languages). The scope of the declarations is the expression e and the right hand side of the declarations.

26 mdo

The recursive do notation (a GHC extension; see MonadFix).

27 module

Taken from:

Technically speaking, a module is really just one big declaration which begins with the keyword module; here's an example for a module whose name is Tree:

module Tree ( Tree(Leaf,Branch), fringe ) where

data Tree a = Leaf a | Branch (Tree a) (Tree a)

fringe :: Tree a -> [a]
fringe (Leaf x) = [x]
fringe (Branch left right) = fringe left ++ fringe right

28 newtype

The newtype declaration is how one introduces a renaming for an algebraic data type into Haskell. This is different from type below, as a newtype requires a new constructor as well. As an example, when writing a compiler one sometimes further qualifies Identifiers to assist in type safety checks:

newtype SimpleIdentifier = SimpleIdentifier Identifier
newtype FunctionIdentifier = FunctionIdentifier Identifier

Most often, one supplies smart constructors and destructors for these to ease working with them. See the page on types for more information, links and examples. For the differences between newtype and data, see Newtype.
This is different from type below, as a newtype requires a new constructor as well. As an example, when writing a compiler one sometimes further qualifies Identifiers to assist in type safety checks: newtype SimpleIdentifier = SimpleIdentifier Identifier newtype FunctionIdentifier = FunctionIdentifier Identifier Most often, one supplies smart constructors and destructors for these to ease working with them. See the page on types for more information, links and examples. For the differences between newtype and data, see Newtype. 29 qualified Used to import a module, but not introduce a name into scope. For example, Data.Map exports lookup, which would clash with the Prelude version of lookup, to fix this: import qualified Data.Map f x = lookup x -- use the Prelude version g x = Data.Map.lookup x -- use the Data.Map version Of course, Data.Map is a bit of a mouthful, so qualified also allows the use of as. import qualified Data.Map as M f x = lookup x -- use Prelude version g x = M.lookup x -- use Data.Map version 30 type The type declaration is how one introduces an alias for an algebraic data type into Haskell. As an example, when writing a compiler one often creates an alias for identifiers: type Identifier = String This allows you to use Identifer wherever you had used String and if something is of type Identifier it may be used wherever a String is expected. See the page on types for more information, links and examples. Some common type declarations in the Prelude include: type FilePath = String type String = [Char] type Rational = Ratio Integer type ReadS a = String -> [(a,String)] type ShowS = String -> String 31 where Used to introduce a module, instance or class: module Main where class Num a where ... instance Num Int where ... And to bind local variables: f x = y where y = x * 2 g z | z > 2 = y where y = x * 2
http://www.haskell.org/haskellwiki/index.php?title=Keywords&oldid=22666
In general, differentiating between various 8-bit character sets is a hairy problem. If you have nothing else to go on besides the text files, I suspect you're going to need clever heuristics. But try Encode::Guess first; it might be enough.

++. Exactly!

(2) If your text includes the ESCAPE character, it may have ISO-2022 shift sequences in it which identify the character set. All the registered character sets are at. The actual escape codes are defined in each PDF file. There doesn't seem to be a comprehensive table anywhere on the internet! Note that when ISO registry #165 says that the escape sequence (for G2) is ESC 2/4 2/10 4/5, that means "\e\x24\x2A\x45". (Of course "\x24\x2A\x45" are the characters $ * E.)

You don't have to understand about G0, G1, G2 to recognize the character sets, although you would to actually translate them to Unicode. I don't know if Encode handles ISO-2022 encoding generally. ICU handles the more commonly used parts of it. Some general character set links:

No. The encoding is what determines that 65 66 67 should be displayed as ABC (or something else). There's nothing attached to "65" that would indicate US-ASCII should be used. However, there are ways of determining the probable encoding. Good luck!

You might want to have a look at the Encode:: namespace on CPAN.
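A minimal sketch of the Encode::Guess suggestion (the suspect list here is an arbitrary example; tune it to the encodings you actually expect):

```perl
use strict;
use warnings;
use Encode::Guess;

# Read the file as raw bytes.
open my $fh, '<:raw', 'mystery.txt' or die "open: $!";
my $octets = do { local $/; <$fh> };

# guess_encoding returns an Encode::Encoding object on success,
# or an error string if the data is ambiguous or unrecognized.
my $enc = guess_encoding($octets, qw/latin1 utf8 euc-jp shiftjis/);
if (ref $enc) {
    print "Looks like ", $enc->name, "\n";
    my $text = $enc->decode($octets);
} else {
    warn "Could not guess: $enc\n";
}
```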
http://www.perlmonks.org/?node_id=564168
The. A kind of a "personal curiosity" question here: are the newly supported C++0x features used internally in the upcoming Microsoft products? I understand that it's probably not the right place to ask such a general question - and you might not know whether, e.g. the Windows and Office teams (which, I would imagine, are two biggest C++ users in Microsoft) use them. But do you use them in the C++ compiler itself, for example, and/or in the libraries? Stephan, thanks for the heads-up. Looking forward to all the tasty deliciousness of VC10! I want it now! I want it now! Alas, probably won't be able to adopt C++0x-isms for my project for two more cycles (4 years). Engineering bureaucracy. :-( Kudos to the VC10 team (and expecting even more C++0x implementation goodness for VC11, of course), and to the JTC1/SC22/WG21 folks. @int19h: I don't think any of the C++0x features will be used in the coming product wave. Those features are not yet stable for production use and developers also first needs to get used to it. Or how Steven Sinofsky said it: "We’re building a class library, not a compiler test suite, so there is no need to try to use all the features of C++." > C++0x Working Draft: > It looks like C++ 2003. I have't found any 0x improvements there. > It looks like C++ 2003. I have't found any 0x improvements there. You have to look more attentively :) New additions are marked with cyan, removals are in striked-out red. For a few new things to look at - 5.1.1 is about lambdas; 6.5.4, range-based for-statement; 7.6, attributes; 14.9-10, concepts. > int19h: I don't think any of the C++0x features will be used in the coming product wave. Those features are not yet stable for production use and developers also first needs to get used to it. Well, you can't avoid lambdas when using PPL - or are you saying that you guys aren't using PPL internally, either? What about "eating our own dog food"? 
Or are you saying that it's not yet stable at this specific moment, but will be sufficiently stable for internal use some time before VC10 is released? Personally, I would feel more comfortable using the release version of VC10 and PPL knowing that it was used by someone else to build successful production apps before me ;)

@int19h: I'm not affiliated with Microsoft; all MS employees have a [MSFT] behind their name. I also haven't looked at the Parallel Pattern Library yet. But Windows 7 will definitely not be built with the VC10 compiler, so no C++0x features. I'm not sure if they use the VC8 or VC9 compiler; I guess they've switched to the VC9 one. Maybe someone from the VC team can shed some light on it. I haven't seen the Win7 SDK yet.

> 1) Are you going to implement the "range-based for-loop"

The range-based for statement is powered by concepts (see N2798 6.5.4 [stmt.ranged]), so it won't be implemented in VC10.

> given that you pretty much have it implemented already as "for each", it would be a fairly simple change to the syntax, no?

The "for each" extension is highly limited; it calls .begin() and .end() (so it'll work with tr1::unordered_foo), but it's not customizable like the range-based for statement.

> 2) Will nullptr take on its C++0x semantics, complete with nullptr_t?

nullptr won't be implemented in VC10.

> 3) As I understand, you'll be moving all the stuff from TR1 that makes a reappearance in C++0x into namespace std.

We're dragging TR1 components into namespace std with using-declarations.

> What about the new stuff, such as std::unique_ptr?

We're implementing some C++0x library features in VC10. I can confirm that unique_ptr and forward_list are being implemented. (Note that they've been checked into my branch, but they're not in the CTP. The CTP doesn't contain any C++ Standard Library changes.)

> VC has become so much better at implementing standards - and doing it fast - that I am still happy.

Glad to hear it!
[Nikola Smiljanic]
> Will VC10 support extern templates?

No.

[Jalf]
> Shame other features like concepts didn't make it, but I can see why.

Concepts were simply never on the table.

> 1: These are the 0x core language features that'll make it in. How about library changes?

This is my area. We're currently working on rewriting our Standard Library implementation for improved performance and C++0x conformance. For example, support for rvalue references is being added to our Standard Library implementation, which will trigger dramatic performance improvements in certain scenarios (and which will supersede VC8-9's limited and fragile Swaptimization). We're also picking up some new C++0x libraries, such as the aforementioned unique_ptr and forward_list (and my personal favorite, make_shared<T>()).

> (And how about the new Unicode char types?)

That's a language feature, which won't be implemented in VC10.

> 2: Can we expect to see more 0x support added over VC10's lifetime, through service packs and such?

Magic 8 Ball says: Cannot predict now.

> are the newly supported C++0x features used internally in the upcoming Microsoft products?

The limiting factor is how fast other teams can pick up the new compiler. VS itself is the first user, as the new compiler makes its way into all of our development branches. It takes longer for teams outside DevDiv, such as Windows and Office, to pick up the new compiler - also, they ship at different times, and are understandably loath to pick up new toolchains late in their development cycles.

> But do you use them in the C++ compiler itself

JonCaves would know for sure, but this is my understanding: currently we don't, because of how our build system works. Our compiler builds itself, of course, but we start with a checked-in "Last Known Good" (or LKG) compiler. As the name implies, this is a build which is known to behave reasonably.
The LKG compiler is used to build the new "phase 0" compiler, and then the phase 0 compiler is used to build itself, producing the phase 1 compiler. If that happens without anything being mangled horribly, the compiler can compile itself and we're looking good. Periodically, the LKG build is updated, which is when new compiler features become available. (Due to how our build system works, the LKG compiler, and not the phase 1 compiler, is used to build the rest of VS.) After the next LKG update, the compiler itself (and anything else within VS, etc.) will be able to use rvalue references and other features unconditionally.

> and/or in the libraries?

Yes, although we currently guard uses of rvalue references with preprocessor machinery that detects whether we're being compiled with the LKG compiler or the rvalue reference-aware compiler. These guards will be removed after the next LKG update, which is when we can really start having some fun.

[Andre]
> Those features are not yet stable for production use

Actually, they are quite stable. (There was an epic bug in template argument deduction that broke perfect forwarding with rvalue references, but it was found and fixed in time for the CTP.) If you find otherwise, prove me wrong by filing Microsoft Connect bugs! :-)

Thank you very much for the detailed replies, Stephan!

> VS itself is the first user, as the new compiler makes its way into all of our development branches.

If VS uses the new features (now or soon), that's certainly good enough for me to sleep well ;)

> We're dragging TR1 components into namespace std with using-declarations.

Erm... I might be wrong, but isn't it going to Break Things (for template specialization purposes etc.), unless you have "strong using" implemented, and are using it for that purpose?

> If you find otherwise, prove me wrong by filing Microsoft Connect bugs! :-)

Personally, I'm probably going to wait until there is a non-VPC CTP release before seriously playing with the new stuff.
VPC is just too slow and inconvenient for me to deal with - it's good enough to quickly look at the new features, but experimenting is a pain. Then again, mmm, lambdas... I might just have to endure :)

@Stephan: I wasn't saying that the current CTP is buggy, but I doubt that Win7 will be built with a compiler from the development branch ;) Does the Windows division use publicly released versions? How do they decide up to which label (LKG?) they branch? Do they integrate only selected changelists thereafter? How is the workflow if someone from the Windows team finds a bug in the compiler? I doubt they make fixes on their own in their branch; they have to report them to get them fixed?

What do lambda expressions add to C++ besides making advanced use of the language even more difficult for humans to parse? If I were in charge of a programming group, I'd probably force people to use named functions to preserve readability.

@Sniffy, for small functors lambdas *are* more readable. Compare the "cout << n" sample where a lambda expression is used to the explicitly given struct LambdaFunctor with operator() overloaded. While reading the for_each line you have to look up LambdaFunctor to see what kind of functor is called and what it does. The "inline" lambda lets you read everything in one place. -- SvenC

@SvenC: I see the same problem with lambdas as with delegates in C#: code scattered through the whole app in places where it doesn't belong. But unlike delegates, I don't think lambdas promote "unstructured" programming. I do think it will take a while before lambdas are well used, though. If I compare code written in the 90s with today's code there is a huge difference, and so it will be with lambdas in the coming years. The auto keyword is very useful for iterators, but I fear that novice users will use it the VB way :-<

> Erm... I might be wrong, but isn't it going to Break Things (for template specialization purposes etc)

That's a good question. I'll ask.
> Does the Windows division use publicly released versions?

I don't know very much about this (from where I'm sitting, Windows is in a galaxy far, far away, on the other side of the street), but they've definitely used hotfixed versions in the past.

> How is the workflow if someone from the Windows team finds a bug in the compiler?
> I doubt they make fixes on their own in their branch but have to report them to get them fixed?

I believe that they request hotfixes from us.

[SvenC]
> for small functors lambdas *are* more readable.

Note that not all of my lambdas are equally realistic; meow.cpp's [](int n) { cout << n << " "; } , cubicmeow.cpp's [](int n) { return n * n * n; } , and capturekittybyvalue.cpp's [x, y](int n) { return x < n && n < y; } are very realistic (the last one is a very compelling use for captures; I was rather pleased with it), but overridekitty.cpp's [=, &sum, &product](int& r) mutable { ... } is obviously a desperate attempt to demonstrate mixed value and reference captures (there *are* uses for this in real programs; distilling it down to a small example is difficult).

If you inspect other languages with closures (Standard ML, Haskell, Common Lisp, Ruby, Smalltalk ... whatever), you'll see that closures are a very integral and most of all SIMPLE thing to use. The feature you presented above is more or less the opposite of that. This new C++ standard has about as much reality as the new Fortran 2003 standard. C++ outlived its usefulness for application development.

> int19h, I don't see how you can even compare static_assert to concepts :)

I'm not. I'm just saying that static_assert does cover one particularly common request, which is to get meaningful error messages on erroneous template instantiations.

"Well, I can kind of see why, but I think missing Concepts means that you have no real C++0x support - that's the single most important new feature."

True, but I think it'd be silly to have expected C++0x support from VC10.
I don't know when they expect to ship VC10, but assuming they keep the trend of shipping early relative to the year it's named after, we'll see VC10 at some point in 2009. And C++0x is only *barely* going to make it in 2009, so at best, VC10 could implement a draft version, but certainly not the final one. It was never realistic to expect a complete C++0x implementation so soon.

"Once again us C++ developers are feeling like second class citizens. Maybe it's time to jump ship..."

Why, exactly? Because they give us *some* new features before the standard is finalized? Unlike C#, C++ isn't defined by the people writing the compiler. In C#, people can say "wouldn't it be cool to add feature X", implement X in the next version of the compiler, and then add it to the C# spec. C++ works the opposite way. A committee decides on the language features *first*, and then compiler writers try to catch up. (Of course the compiler vendors are represented in the standards committee, but the point is that the features go into the standard spec *first*, and *then* into compilers.)

I'll feel C++ is given second-class treatment if they don't have a decent C++0x implementation in VC11. But I can't blame them for keeping VC10 mainly a C++03 implementation. Anyway, good to hear that there's more C++0x to come in VC10. I got the impression from first reading the post that this was it, that these were the 0x features that would be in VC10.

> I got the impression from first reading the post that this was it, these were the 0x features that would be in VC10.

See Stephan's post above, particularly: "Paraphrasing our libraries and compiler front-end program manager Damien Watkins, the CTP is only the first look at our VC10 functionality and there is definitely more to come."

I just have one thing to say: disappointed about the lack of typeof.

In case you haven't noticed yet, "decltype" was the only feature about which it was not said plainly, "no, it won't be there in VC10".
So cross your fingers :)

One other question regarding lambdas: how well do they play together with C++/CLI? Particularly, can they capture: 1. Managed handles, either by value or by reference. 2. Tracking references. Also, is there a plan to provide some easy way to make a delegate instance out of a lambda?

Well, first of all, lambdas in C++ mean that LINQ is now possible. So the question is, are you implementing this on the C++/CLI side? Apart from that, I think the biggest thing I would really like is the Intellisense updates; it's not as bad now processor-usage-wise, but it is unable to do anything useful.

> Well, first of all, lambdas in C++ mean that LINQ is now possible.

No, they don't. C++0x lambdas are not the same as C# 3.0 lambdas (particularly as far as lifetime is concerned). Also, for LINQ, lambdas alone are not sufficient - you actually need "expression trees". And C++0x lambdas cannot be faithfully transformed to expression trees for a number of reasons, from what I can see. Like I said above, it is possible to write a simple wrapper function that takes a C++ lambda and gives you a delegate of any type that might be needed. This will work for LINQ to Objects/XML/DataSet (and any other flavor that doesn't deal with expression trees).

"For laughs, this means that the following is valid C++0x: [](){}(); []{}(); "

See, this is one thing I dislike about the C++ standards committee: They joke too much. For example, there's the infamous warning in C++2003 that template specializations suck, don't use them, but phrased as a limerick. And now they've decided that wouldn't it be cute if any balanced sequence of empty braces were acceptable by the parser? (Okay, not "any" sequence, but {{}}[]{[](){}}([](){}()); is acceptable, for example. For any given line noise, it's getting harder and harder for a human to determine whether it's valid C++.)
The committee should have ruled that a lambda's body must contain at least one statement, and that an empty parameter-list must not be dropped. Also, they should not have allowed type inference in the one special case you mentioned (the case in which the lambda's body contains only a return statement) --- they should have ruled either that type inference always happens, or that it never happens.

The committee is just trying to ensure that there will never be such a thing as an IOCCC for C++ - because it would be too easy ;)

The lambdas for C++ itself may be different, but since the syntax itself has been formalised, is there anything stopping them from twiddling it a bit and getting lambdas to work as needed under C++/CLI? Don't forget, C++ classes are different from the managed classes, but they kept the syntax similar and added what was needed to support the managed classes.

> One other question regarding lambdas: how well do they play together with C++/CLI?

I don't know anything about managed code, but I've forwarded your question to JonCaves.

[Anonymous Cowherd]
> "For laughs, this means that the following is valid C++0x: ..."
> See, this is one thing I dislike about the C++ standards committee: They joke too much.

That was my joke, not the Committee's. (I'm not a Committee member.) Sometimes my silliness doesn't involve cats, you know.

> For example, there's the infamous warning in C++2003 that template specializations suck, don't use them, but phrased as a limerick.

That's incorrect. Template specializations (both explicit and partial) are extremely powerful and easily usable. The warning is: "When writing a specialization, be careful about its location; or to make it compile will be such a trial as to kindle its self-immolation." (C++03 14.7.3/7) This means that you have to ensure that a specialization is declared before it's used, which is easy unless you're doing something excessively tricky.
> And now they've decided that wouldn't it be cute if any balanced sequence of empty braces were acceptable by the parser?

That's incorrect. This "falls out" of the grammar; it's not some intentionally cute thing.

> The committee should have ruled that a lambda's body must contain at least one statement

That would require *additional* Standardese, because empty compound-statements are perfectly valid. It's also inconsistent, because named functions can have empty bodies.

> and that an empty parameter-list must not be dropped.

That would be as simple as removing the opt subscript from lambda-parameter-declaration in the definition of lambda-expression. Someone considered making the lambda-parameter-declaration optional to be a useful feature. As I explained in my post, "Whether eliding the lambda-parameter-declaration is good style is up to you to decide."

> Also, they should not have allowed type inference in the one special case you mentioned
> (the case in which the lambda's body contains only a return statement) --- they should
> have ruled either that type inference always happened, or that it never happens.

Making the lambda-return-type-clause mandatory would produce needless verbosity (the very thing that lambdas are trying to reduce), especially given that a lot of lambdas are going to return bool. But there are real problems with always performing automatic type deduction; in particular, when there are multiple return statements returning different types. (This is easier to encounter than it sounds, especially given the usual arithmetic conversions.) Making the lambda-return-type-clause optional when and only when the lambda's body is a single return statement is a simple and effective rule, easy to learn and remember. If you want to make all of your lambda return types explicit, you're certainly welcome to.
int19h: JonCaves says: You can only pass a variable with a managed type as an argument to a lambda - you can't capture a variable that has a managed type. We have no plans to "merge" lambdas and delegates.

> You can only pass a variable with a managed type as an argument to a lambda - you can't capture a variable that has a managed type.

Yep, just tried that myself. "Error C3498: 's': you cannot capture a variable that has a managed type". I can't understand why that is the case, though. I can understand why it's not possible to capture managed references - and C# lambdas cannot capture them for the same reason. I can also understand why it's tricky to capture an object handle by value (though if there were some extension syntax to force the function object created from the lambda to be a "value class", it could work). But there's no reason why I can't capture an object handle by reference! I mean, I can write this:

    struct handwritten_lambda {
        String^& s;
        handwritten_lambda(String^& s) : s(s) { }
        void operator()(int x) { s += x.ToString(); }
    };

and then use it:

    std::vector<int> v;
    v.push_back(123);
    String^ s = "Foo";
    std::for_each(v.begin(), v.end(), handwritten_lambda(s));

and it works, as expected! But I cannot write this:

    std::for_each(v.begin(), v.end(), [&s](int x) { s += x.ToString(); });

Why? Given that C++0x lambdas essentially do the same, they should not have any restrictions on top of what the plain transformation to a function object as described in the standard has.

By the way, I have just noticed that function objects generated from lambdas do not have a "result_type" typedef, and therefore std::tr1::result_of cannot be applied to them. Looking at the wording of the Standard, it seems that it does not require lambdas to provide result_type either... is this a Committee oversight, or is it deliberate? If the latter, then what is the suggested workaround?

[Faisal Vali]
> What about constexpr - will that be implemented?

Not in VC10.

> I wouldn't call lambdas "low-hanging fruit"...
Lower than most. It's still all front-end work. It's basically recognizing a syntax and converting it to a different syntax, with a class and operator() overload. It's something you could do as a code-generation path with an external application, rather than as part of the compiler itself. Compare that to, for example, variadic templates. The idea sounds simple, but the pack/unpack semantics can go almost anywhere and are very complicated to get right.

Hi. I'm Arjun Bijanki, the test lead for the compiler front-end and Intellisense engine.

I just want to repeat the question from earlier since it seemed to get skipped. What will VC10's support for the C99 standard look like? In particular, variable length arrays?

> What will VC10's support for the C99 standard look like?

I didn't see any mention of C99 in any talks on VC10 so far. So, I guess, the answer is "there won't be one". Given that g++ doesn't support a lot of C99 stuff either, I don't see why it would matter - you can't write portable C99 today anyway (too few compilers support the full spec), and this is unlikely to change anytime soon. It seems that no one really cares about it that much.

C++0x includes the C99 Standard Library (see N2798 17.1/9).

Well, I guess the library is relatively easier - once you need it (when C++0x is finalized; VC11?), you can pick it up from Dinkumware as usual, and, it being a pure C library, it's probably easier to tailor to your needs than their STL and TR1 stuff. Language features such as struct literals and VLAs, though - I still wouldn't expect them anytime soon. If ever. VLAs in particular, as it seems that (surprisingly) no one else has gotten them right, either - at least the C99 status page for gcc still listed them as "broken" last time I checked.
http://blogs.msdn.com/vcblog/archive/2008/10/28/lambdas-auto-and-static-assert-c-0x-features-in-vc10-part-1.aspx
Colorful, flexible, lightweight logging for Swift 3, Swift 4 & Swift 5. Great for development & release with support for Console, File & cloud platforms. Log during release to the conveniently built-in SwiftyBeaver Platform, the dedicated Mac App & Elasticsearch! Docs | Website | Twitter | Privacy | License

During Development: Colored Logging to Xcode Console

Learn more about colored logging to the Xcode 8 Console with Swift 3, 4 & 5. For Swift 2.3 use this Gist. No need to hack Xcode 8 anymore to get color. You can even customize the log level word (ATTENTION instead of ERROR maybe?), the general amount of displayed data and whether you want to use the 💜s or replace them with something else 😉

During Development: Colored Logging to File

Learn more about logging to file, which is great for Terminal.app fans or to store logs on disk.

On Release: Encrypted Logging to SwiftyBeaver Platform

Learn more about logging to the SwiftyBeaver Platform during release!

Browse, Search & Filter via Mac App

Conveniently access your logs during development & release with our free Mac App.

On Release: Enterprise-ready Logging to Your Private and Public Cloud

Learn more about legally compliant, end-to-end encrypted logging to your own cloud with SwiftyBeaver Enterprise. Install via Docker or manually; a fully-featured free trial is included!

Google Cloud & More

You can fully customize your log format, turn it into JSON, or create your own destinations. For example, our Google Cloud Destination is just another customized logging format which adds the powerful functionality of automatic server-side Swift logging when hosted on Google Cloud Platform.
Installation

- For Swift 4 & 5 install the latest SwiftyBeaver version
- For Swift 3 install SwiftyBeaver 1.8.4
- For Swift 2 install SwiftyBeaver 0.7.0

Carthage

You can use Carthage to install SwiftyBeaver by adding it to your Cartfile:

Swift 4 & 5: github "SwiftyBeaver/SwiftyBeaver"
Swift 3: github "SwiftyBeaver/SwiftyBeaver" ~> 1.8.4
Swift 2: github "SwiftyBeaver/SwiftyBeaver" ~> 0.7

Swift Package Manager

For Swift Package Manager add the following package to your Package.swift file. Only Swift 4 & 5 are supported:

.package(url: "", .upToNextMajor(from: "1.9.0")),

CocoaPods

To use CocoaPods just add this to your Podfile:

Swift 4 & 5: pod 'SwiftyBeaver'
Swift 3:

target 'MyProject' do
  use_frameworks!

  # Pods for MyProject
  pod 'SwiftyBeaver', '~> 1.8.4'
end

Usage

Add this near the top of your AppDelegate.swift to be able to use SwiftyBeaver in your whole project.

import SwiftyBeaver
let log = SwiftyBeaver.self

At the beginning of your AppDelegate:didFinishLaunchingWithOptions() add the SwiftyBeaver log destinations (console, file, etc.), optionally adjust the log format, and then you can already make the following log level calls globally:

// add log destinations. at least one is needed!
let console = ConsoleDestination()  // log to Xcode Console
let file = FileDestination()  // log to default swiftybeaver.log file
let cloud = SBPlatformDestination(appID: "foo", appSecret: "bar", encryptionKey: "123") // to cloud

// use custom format and set console output to short time, log level & message
console.format = "$DHH:mm:ss$d $L $M"
// or use this for JSON output: console.format = "$J"

// add the destinations to SwiftyBeaver
log.addDestination(console)
log.addDestination(file)
log.addDestination(cloud)

// log anything!
log.verbose(123)
log.info(-123.45678)
log.warning(Date())
log.error(["I", "like", "logs!"])
log.error(["name": "Mr Beaver", "address": "7 Beaver Lodge"])

// optionally add context to a log message
console.format = "$L: $M $X"
log.debug("age", context: 123)  // "DEBUG: age 123"
log.info("my data", context: [1, "a", 2])  // "INFO: my data [1, \"a\", 2]"

Server-side Swift

We ❤️ server-side Swift 4 & 5 and SwiftyBeaver supports it out-of-the-box! Try it for yourself and run SwiftyBeaver inside a Ubuntu Docker container. Just install Docker, then go to the project folder on macOS or Ubuntu and type:

# create docker image, build SwiftyBeaver and run unit tests
docker run --rm -it -v $PWD:/app swiftybeaver /bin/bash -c "cd /app ; swift build ; swift test"

# optionally log into container to run Swift CLI and do more stuff
docker run --rm -it --privileged=true -v $PWD:/app swiftybeaver

Best of all: for the popular server-side Swift web framework Vapor you can use our Vapor logging provider, which makes server logging awesome again 🙌

Documentation

Getting Started:

Logging Destinations:
- Colored Logging to Xcode Console
- Colored Logging to File
- Encrypted Logging & Analytics to SwiftyBeaver Platform
- Encrypted Logging & Analytics to Elasticsearch & Kibana

Advanced Topics:

Stay Informed:

Privacy

SwiftyBeaver is not collecting any data without you as a developer knowing about it. That's why it is open-source and developed in a simple way, so it is easy to inspect and check what it is actually doing under the hood. Data is only sent to servers if you use the SBPlatformDestination. That destination is meant for production logging, and by default it sends your logs plus additional device information end-to-end encrypted to our cloud service. Our cloud service cannot decrypt the data. Instead, you install our Mac App, and that Mac App downloads the encrypted logs from the cloud, decrypts them and shows them to you.
Additionally, the Mac App stores all data that it downloads in a local SQLite database file on your computer so that you actually "physically" own your data. The business model of the SwiftyBeaver cloud service is to provide the most secure logging solution on the market. On purpose we do not provide a web UI for you, because it would require us to store your encryption key on our servers. Only you can see the logging and device data which is sent from your users' devices. Our servers just see encrypted data and do not know your decryption key. SwiftyBeaver is fully GDPR compliant due to its focus on encryption and transparency in what data is collected, and it also meets Apple's latest requirements on the privacy of 3rd party frameworks.

Our Enterprise offering is an even more secure solution where you no longer use our cloud service and Mac App but instead send your end-to-end encrypted logs directly to your own servers and store them in your Elasticsearch cluster. The Enterprise offering is used by health tech and governmental institutions which require the highest level of privacy and security.

End-to-End Encryption

SwiftyBeaver uses symmetric AES256CBC encryption in the SBPlatformDestination destination. No other officially supported destination uses encryption. The encryption used in the SBPlatformDestination destination is end-to-end. The open-source SwiftyBeaver logging framework symmetrically encrypts all logging data on your client's device inside your app (iPhone, iPad, ...) before it is sent to the SwiftyBeaver Crypto Cloud. The decryption is done on your Mac which has the SwiftyBeaver Mac App installed. All logging data stays encrypted in the SwiftyBeaver Crypto Cloud due to the lack of the password. You are using the encryption at your own risk. SwiftyBeaver's authors and contributors do not give any guarantee about the absence of potential security or cryptography issues, weaknesses, etc.; please also read the LICENSE file for details.
Also, if you are interested in cryptography in general, please have a look at the file AES256CBC.swift to learn more about the cryptographic implementation.

License

SwiftyBeaver Framework is released under the MIT License.
https://opensourcelibs.com/lib/swiftybeaver
Mark Struberg wrote:
> Hi Ludovic!
>
> One possible solution is the locking as you already explained.
> Another one which worked well for me is to wrap JSF sendRedirect and close
> the EM in there. Would need to think how we can make this solution more
> broad and generally available.
> A third solution would theoretically be to "postpone" the redirect
> until the end of the request.
>
> This is surely an area where we could make it a LOT easier for Seam2 users
> (and other CRUD style projects) to get their apps ported over to CDI.
> So any ideas, objections and tricks are welcome :)

The three possibilities you mention look pretty similar to me. In all cases, the goal is to have one and only one "JSF thread" at a time. So, you just let resource queries and other servlet queries go through, but lock on JSF queries.

In the implementation I use for the "open view conversation" pattern, I recognize "true" JSF requests by the fact that they have a dswid parameter. If this parameter is not set, I just pass the query to the next layer; otherwise I lock on the http session (and not the hibernate session, as I wrongly stated before). Something like:

public class SequentialJSFQueriesFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest httpreq = (HttpServletRequest) request;
        String dswid = httpreq.getParameter("dswid");
        if (StringUtils.isEmpty(dswid)) {
            chain.doFilter(request, response);
            return;
        }
        HttpSession httpSession = httpreq.getSession();
        try {
            if (httpSession != null) {
                // serialize JSF requests per session
                synchronized (httpSession) {
                    chain.doFilter(request, response);
                }
            }
        } catch (Exception e) {
            // ... (error handling elided in the original)
        }
    }
}

The performance impact is not that bad. Of course, it would be better without locking, but keeping the same DB session also has its own advantages. And it allows you to go the easy way, such as keeping DTOs and serializing them in the session.
You can heavily use features such as cascading operations without fearing horrible side effects - those who one day "merged" objects in a complex hierarchy, then "updated" them and accidentally deleted dozens of "cascading dependencies" know what I mean. :-> And, well, again, if you know you are coding a demanding application, with a high workload, a need for replication, and so on, you just don't go that way. :-)

Of course, a custom implementation can add the little bells and whistles some applications need, while still going the easy way. In my case, I also handle for instance a "nokeepsession" parameter, which I use to invalidate the http session once the request has been processed. Some of my applications are used to batch-generate thousands of static pages. With this kind of simple trick, I have an easy means to quickly close useless http sessions.

Ludovic
http://mail-archives.apache.org/mod_mbox/deltaspike-users/201504.mbox/%3Cd3e5d06b04796d216650fd797e94e14d.squirrel@webmail.senat.fr%3E
I'm coding in Python against GTalk using xmpppy. I can chat with the GTalk client, but the problem is I can't accept invitations. Can xmpppy do this?

--------------Solutions-------------

Looks like you want to "authorize" the request. I'm assuming you've received the request at the client. In the roster class (xmpp.roster) there is an "Authorize" method. It sends the "subscribed" packet to accept the roster request. This is what the method looks like:

def Authorize(self, jid):
    """ Authorise JID 'jid'. Works only if these JID requested auth previously. """
    self._owner.send(Presence(jid, 'subscribed'))

Wait, that may not be what you're asking at all. Are you trying to process a chat or a roster request? Take a look at the small example client they have called xtalk.py. I think you're interested in the xmpp_message method.
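The answer above centres on Roster.Authorize, which just sends a 'subscribed' presence back. In practice you would register a presence handler with xmpppy (client.RegisterHandler('presence', handler)) and decide how to reply to each incoming stanza. Since that needs a live connection, here is just the reply-decision logic factored into a plain, testable function; the function name and the xmpppy calls sketched in the docstring are illustrative, not from the original answer:

```python
def reply_for_presence(presence_type):
    """Map an incoming presence type to the presence type to send back,
    or None when no automatic reply is wanted.

    Inside an xmpppy presence handler this would be used roughly as:
        reply = reply_for_presence(pres.getType())
        if reply:
            conn.send(xmpp.Presence(to=pres.getFrom(), typ=reply))
    """
    if presence_type == 'subscribe':
        # Accept the invitation -- the same thing Roster.Authorize(jid) does.
        return 'subscribed'
    if presence_type == 'unsubscribe':
        # Acknowledge the contact removing us.
        return 'unsubscribed'
    return None

print(reply_for_presence('subscribe'))  # subscribed
```

Keeping the decision separate from the network calls makes it easy to unit-test without a GTalk account.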
http://www.pcaskme.com/xmpppy-have-function-to-manage-invitation-in-client-side/
Complete the function diff (do not modify the rest)

def diff(a, b):
    pass  # complete this function
# end function

print(" *** Find difference between A(x1,y1) and B(x2,y2) ***")
ins = input("Enter x1 y1 x2 y2 : ").split()
pointA = (int(ins[0]), int(ins[1]))
pointB = (int(ins[2]), int(ins[3]))
print("A B ==>", pointA, pointB)
print("Difference from", pointA, "to", pointB, "==>", diff(pointA, pointB))
print("Difference from", pointB, "to", pointA, "==>", diff(pointB, pointA))

output

 *** Find difference between A(x1,y1) and B(x2,y2) ***
Enter x1 y1 x2 y2 : 1 2 3 4
A B ==> (1, 2) (3, 4)
Difference from (1, 2) to (3, 4) ==> (2, 2)
Difference from (3, 4) to (1, 2) ==> (-2, -2)
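For reference, one plausible completion of diff, inferred from the sample output (the component-wise difference from the first point to the second):

```python
def diff(a, b):
    # Difference from point a = (x1, y1) to point b = (x2, y2):
    # (x2 - x1, y2 - y1), matching the sample run above.
    return (b[0] - a[0], b[1] - a[1])

print(diff((1, 2), (3, 4)))  # (2, 2)
print(diff((3, 4), (1, 2)))  # (-2, -2)
```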
https://www.edureka.co/community/157067/complete-the-function-diff-do-not-modify-the-rest
GCD of two numbers in Python

In this program, we will learn to find the GCD of two numbers, that is, the Greatest Common Divisor. The highest common factor (HCF) or Greatest Common Divisor (GCD) of two given numbers is the largest positive integer that divides both numbers evenly.

For example, for the two numbers 12 and 14:

Output: GCD is 2

Algorithm:
- Define a function named gcd(a, b)
- Initialize small = 0 and gd = 0
- Use an if condition to check whether a is greater than b
- If true, small = b; else small = a
- Using a for loop with range(1, small+1), check if (a % i == 0) and (b % i == 0)
- If true, gd = i; the final gd value is returned and stored in variable t
- Take a and b as input from the user
- Call the function gcd(a, b) with a and b passed as arguments
- Print the value in the variable t
- Exit

Code:

def gcd(a, b):
    small = 0
    gd = 0
    if a > b:
        small = b
    else:
        small = a
    for i in range(1, small + 1):
        if (a % i == 0) and (b % i == 0):
            gd = i
    return gd

a = int(input("Enter the first number: "))
b = int(input("Enter second number: "))
t = gcd(a, b)
print("GCD is:", t)

Output:

Enter the first number: 60
Enter second number: 48
GCD is: 12
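The trial-division loop above checks every candidate up to min(a, b). The Euclidean algorithm reaches the same answer in far fewer steps; a minimal sketch (the function name is mine, not from the tutorial):

```python
def gcd_euclid(a, b):
    # Repeatedly replace (a, b) with (b, a % b);
    # when b reaches 0, a holds the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd_euclid(60, 48))  # 12
print(gcd_euclid(12, 14))  # 2
```

Python's standard library also ships this as math.gcd.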
https://www.studymite.com/python/examples/program-to-find-the-gcd-of-two-numbers-in-python/
notes on the future of TwistedTrial

Become a Real Twisted Application

Currently Trial uses an abomination to make sure Deferred-returning tests finish before the next test starts. We'd all like it to behave as a regular Twisted application. That is: set things up, run the reactor, call reactor.stop when done. The implementation might look like:

class AsyncTest(unittest.TestCase):
    def run(self):
        from twisted.internet import reactor
        d = self.runDeferred()
        d.addCallback(lambda x: reactor.stop())
        reactor.run()

class TestCase(unittest.TestCase, AsyncTest):
    def __init__(self, methodName='runTest'):
        self._methodName = methodName

    def runDeferred(self):
        testMethod = getattr(self, self._methodName)
        d = defer.maybeDeferred(self.setUp)
        d.addCallback(lambda x: defer.maybeDeferred(testMethod))
        d.addCallback(lambda x: defer.maybeDeferred(self.tearDown))
        return d

class TestSuite(unittest.TestSuite, AsyncTest):
    def runDeferred(self):
        ds = [test.runDeferred() for test in self]
        # probably use something that runs them sequentially
        return defer.gatherResults(ds)

This would allow Trial tests to be safely added to a pyunit suite (provided that all the Trial tests were contained within one Trial test suite/case). No matter what the implementation looks like, we'll need to address the issues of timeout and test cleanup.

Timeout

Currently, Trial test cases can have a timeout attribute. The timeout attribute guarantees that the test and all of its callbacks and errbacks will complete within the given timeout period. If the test takes longer than that, it will fail. This should be the tester's responsibility, not Trial's. The only way we support it now is by crashing the reactor at the end of each test.

Cleanup

I don't know enough about the use cases for cleanup. I assume that if a test leaves sockets etc. around after it is completed, then an error should be reported.
The harsh, ridiculous cleanup that Janitor does will probably have to go, and be replaced with something like what is described in #1334

Random Stuff

- Make failUnlessRaises re-raise KeyboardInterrupt
- Make TestCase._wait actually call result.stop()
- Make the setup, method and teardown call/errbacks public
- Make sure cleanup calls re-raise KeyboardInterrupt
- Make makeTodo accept None correctly (see TestCase.getTodo)
- Double-check TestVisitor only has methods that are used.

Assertion stuff

Trial has a multiplicity of assertion methods. It's ugly, and there is often reason to want still more assertion methods. Two solutions have been proposed:

Assertion objects

class Equal(object):
    def __init__(self, a, b):
        self.a, self.b = a, b

    def passed(self):
        return self.a == self.b

    def value(self):
        return self.a

    def message(self):
        return '%r != %r' % (self.a, self.b)

class TestCase(unittest.TestCase):
    def check(self, assertion):
        if not assertion.passed():
            self.fail(assertion.message())
        return assertion.value()

This could be combined with some sort of registration mechanism that makes self.assertEqual just work. The advantages of this approach are debatable. It doesn't seem to reduce code, or make it easier to write assertions. By relying on composition instead of inheritance, it clears up a little namespace pollution. check could be moved to an abstract base class, but then we just start encouraging people to put assertions outside of unit tests, which is a demonstrably bad idea (see the Conch tests).

py.test magic

The other solution is better, but harder. Much harder. Add a method to TestCase called test (I prefer the name contend myself) which takes a boolean expression, and then provide a bunch of magic that displays the values of variables and sub-expressions of that boolean expression. py.test is unreleased and the code we need isn't exposed in a public interface. exarkun has volunteered to look into implementing this.

Original Author: JonathanLange
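To make the assertion-object proposal concrete, here is a standalone sketch of how it would feel in use, with the Equal class from above and check as a free function instead of a TestCase method (that relocation is my simplification, so the sketch runs outside a test case):

```python
class Equal(object):
    def __init__(self, a, b):
        self.a, self.b = a, b

    def passed(self):
        return self.a == self.b

    def value(self):
        return self.a

    def message(self):
        return '%r != %r' % (self.a, self.b)


def check(assertion):
    # On failure, report the assertion's own message;
    # on success, hand back the asserted value for further use.
    if not assertion.passed():
        raise AssertionError(assertion.message())
    return assertion.value()


print(check(Equal(2 + 2, 4)))  # 4
```

Returning the value from check is what lets assertions be chained into later computation, which inheritance-based assert methods don't offer.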
http://twistedmatrix.com/trac/wiki/TrialDevNotes?version=8
I've built time-tracking applications before, but I'm not that great at using them. It seems that I just can't remember to pause and resume the timer consistently enough. If I keep it logged in while I run to the store, it's not all that accurate! This article talks about the Win32 API call that you can use to get a handle on user activity. To be more useful, I decided to also add the necessary glue to expose the features as a component to add to your applications from the Visual Studio toolbox, much as a Timer or StatusBar control is used. This makes it easy to wire up the properties and events with less coding later. To run this sample, you will need to have Visual Studio 2005 Express Edition installed (either the Visual C# or Visual Basic version). An archive containing the full source code and a pre-compiled EXE is linked from the top of the article. Feel free to use this as it is or to expand it as you see fit. So the big question is: how do you know if the user is active? At some level, Windows must know, since the screensaver is based on idle time, but how does it know? If you think about it, user activity is really only measurable by user input. If the mouse is being clicked or the keyboard pressed, the user is clearly active. How do we know if the mouse is being clicked or the keyboard pressed? Why, Windows messages, of course! You've probably hooked into form events to detect if it's closing, moved, or if a drag-and-drop operation is occurring. You can also hook into events to see mouse movements and key presses. Unfortunately, this is limited to events within your form and its controls. Once the mouse moves out of the form, it's no longer visible. As it turns out, you can hook into events at the system level as well, though you'll see a lot more coming through. This isn't as straightforward from managed code. It is possible, though.
On the other hand, if filtering system messages isn't to your liking, there's always the GetLastInputInfo method!

Visual Basic

<DllImport("User32.dll")> _
Private Shared Function GetLastInputInfo(ByRef plii As LASTINPUTINFO) As Boolean
End Function

Visual C#

[DllImport("User32.dll")]
private static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);

This wonderfully simple function populates a structure with the timestamp of the last time a mouse or keyboard message occurred at the system level. With this in hand, you just need to decide how to determine when the user is truly idle. For example, if the last user input timestamp was 2 seconds ago, is the user idle? Maybe they are reading a dialog and are about to click. Has it been a minute? Maybe they are reading a PDF that they just opened. The definition of idle depends on what the scenario is. Keeping track of the user's active/idle state is more than just calling GetLastInputInfo. To be more comprehensive, I also added a timer to the class to determine how much time the user has spent in the Active state, when the timer was started (or reset), a feature to enable/disable/reset the timer, and properties to determine how much time to consider until a user is inactive. It also raises events when the user switches between active and idle states. Bundling it into a component simplifies the main code and makes it easier to add the relevant features.
Visual Basic

Private Sub GetLastInput(ByVal userState As Object)
    GetLastInputInfo(Me.lastInput)
    Me._lastActivity = Me.lastInput.dwTime
    If (Environment.TickCount - Me.lastInput.dwTime) > Me._idleThreshold Then
        If Me._userActiveState <> UserActivityState.Inactive Then
            Me._userActiveState = UserActivityState.Inactive
            Me.activityStopWatch.Stop()
            Me.RaiseUserIdleEvent()
        End If
    ElseIf Me._userActiveState <> UserActivityState.Active Then
        Me._userActiveState = UserActivityState.Active
        Me.activityStopWatch.Start()
        Me.RaiseUserActiveEvent()
    End If
End Sub

Visual C#

private void GetLastInput(object userState)
{
    GetLastInputInfo(ref this.lastInput);
    this._lastActivity = this.lastInput.dwTime;
    if ((Environment.TickCount - this.lastInput.dwTime) > this._idleThreshold)
    {
        if (this._userActiveState != UserActivityState.Inactive)
        {
            this._userActiveState = UserActivityState.Inactive;
            this.activityStopWatch.Stop();
            this.RaiseUserIdleEvent();
        }
    }
    else if (this._userActiveState != UserActivityState.Active)
    {
        this._userActiveState = UserActivityState.Active;
        this.activityStopWatch.Start();
        this.RaiseUserActiveEvent();
    }
}

Components in the Visual Studio Toolbox fit into categories: components and controls. It's a pretty fine line – essentially a control is a component with a user interface. Components are the controls that aren't visible at runtime and appear beneath the form at design time. By creating a user activity component, you can drag it from the Toolbox to the form, set properties and wire up events in the Properties pane, and keep the form code as uncluttered as possible. Creating a component isn't much more work than creating any well-encapsulated class. In fact, I originally created the UserActivityTimer class like any other class. I just realized that it made more sense to create it as a component for better reuse. The first step is to extend the System.ComponentModel.Component class. Already, your class (if public) will show up in the Toolbox when you rebuild the project.
It will also show your public properties and events when dragged to a form, though it's not a very complete view. Provided you have set up your class properly with properties and events, you will have a pretty easy time finishing your work. Adding attributes to the class, properties, and events helps you to create an experience closer to the Microsoft-supplied components/controls. Unless specified, all attributes are in the System.ComponentModel namespace.

Visual Basic

<DefaultValue(False)> _
<Category("Behavior")> _
<Description("The current state of the timer.")> _
Public Property Enabled() As Boolean
    Get
        Return Me._timerEnabled
    End Get
    Set(ByVal value As Boolean)
        ' If in design-time change the value but not the actual state
        If Not Me.DesignMode Then
            If value = True Then
                Enable()
            Else
                Disable()
            End If
        Else
            Me._timerEnabled = value
        End If
    End Set
End Property

Visual C#

[DefaultValue(false)]
[Category("Behavior")]
[Description("The current state of the timer.")]
public bool Enabled
{
    get { return this._timerEnabled; }
    set
    {
        // If in design-time change the value but not the actual state
        if (!this.DesignMode)
        {
            if (value == true)
                Enable();
            else
                Disable();
        }
        else
        {
            this._timerEnabled = value;
        }
    }
}

Note also the check of the DesignMode property. This comes from inheriting from the Component class. This is important, because when a developer sets properties in the Visual Studio designer, the object actually gets its properties set. Often, you don't really want to take any action in design mode. This is how you tell the difference. With the component built, its properties show up in the Properties pane just like any other component:

Figure 1 - The component's properties

With a component in place, it's much easier to create an application around it. For this sample, I decided to create a simple user interface to expose the information. It doesn't expose all information, but it's a good sampling of useful data.
You can enable or disable the timer from the notification icon in the system tray.

Figure 2 - User Interface at runtime

All information shown is obtained through properties of the component. A standard Timer component is used to update the UI. Formatting the time properly is manual work, and the number of seconds must be multiplied by 1000 to convert it to milliseconds. The events are raised from the component, which runs on its own thread. For this reason, it's not possible to directly set the UI controls when the event fires without causing a threading exception. There are two ways to solve this. You could delegate the call to the form's thread, as is done in the sample. This adds a small amount of complexity and costs some clarity, but it is a pretty common solution, and with proper code comments anyone should be able to grasp it. The problem with this method is that a flood of events will cause a flood of UI updates. This might not be very efficient. Another way is to update state variables in the form class when events fire. Then, when the form's UI update timer fires, it could use the state variables to determine what to show. There would be potential issues with threading concurrency if the event fires at the same moment as the Timer executes, but this can either be handled with locks, or ignored at the expense of occasionally inaccurate information. This also reduces UI updates to the Timer's update interval regardless of how often events fire.
Visual Basic

Private Sub updateTimer_Tick(ByVal sender As Object, ByVal e As EventArgs) Handles updateTimer.Tick
    Dim ts As TimeSpan = userTimer.ActiveTime
    ' Not necessary to update the status label here, since the active/idle events also write to it
    statusLabel.Text = userTimer.UserActiveState.ToString()
    ' Format the active time as hh:mm:ss
    Dim totalActive As String = String.Format("{0:00}:{1:00}:{2:00}", ts.Hours, ts.Minutes, ts.Seconds)
End Sub

Visual C#

private void updateTimer_Tick(object sender, EventArgs e)
{
    TimeSpan ts = userTimer.ActiveTime;
    // Not necessary to update the status label here, since the active/idle events also write to it
    statusLabel.Text = userTimer.UserActiveState.ToString();
    // Format the active time as hh:mm:ss
    string totalActive = String.Format("{0:00}:{1:00}:{2:00}", ts.Hours, ts.Minutes, ts.Seconds);
}

This application isn't terribly useful as it is, but it could be a good foundation for a time-tracking application. Adding a few fields to select a project would let you keep track of time spent. You could use the Enable/Disable properties to let a user pause the timer, and the Reset method to switch projects. Another purpose would be in corporate development, to track user productivity to a fine level. With a higher interval it might also serve as a good way to close unneeded resources such as network/database connections when a user isn't actively using an application anyway. You could achieve the same effect when the screensaver kicks in, but this makes it easy to use an independent threshold. Just drop the UserActivityTimer onto a form, set the IdleThreshold property, and wire up some actions to the events. Hopefully it's intuitive enough to put to use quickly. In this article, I've shown how to compute total user activity time and keep track of the user's state with events, using a Win32 API call that returns the last keyboard/mouse input event at the system level. This functionality is then bundled into a component for easy use in other applications. The sample application exposes this information to test it out and demonstrate how to use it. I threw it together in order to keep better track of my own time, but hopefully it will be useful for other projects as well.
If you haven't yet, download Visual Studio 2005 Express Edition for Visual C# or Visual Basic and have fun with it!

I just enhanced this tool by adding functionality to show how much time is spent in each application. As they don't allow us to upload the code in a reply, I will try to upload the code at some other location and will provide the link here.

@Kuldeep: Put it on CodePlex or the Channel9 Sandbox

@ddod: One possible way to track this is to see what the active process is.

I just read Kuldeep's comment. It seems that he is claiming he has modified this code to include actual application usage. I would love to see the way you did this. Did you ever get this code uploaded anywhere? Looking at the running process is one way that we are currently investigating. We are looking into several other ways (one of which is using WMI to watch for start and delete events). However, we are always interested in seeing what others are doing. Thanks for the help.

Hey, why don't you use standby time? That tells exactly how much time the user is idle.

@srinivas: That's a good idea in theory; however, there is also a certain amount of time that needs to elapse before the computer goes into standby. For example, if I just decide to pick up and go out somewhere, my computer won't go into standby at all because I've turned that feature off. Another good example is when users change their standby times to, say, 15 or 20 minutes. Unless there is a way to detect what their standby time is, I can't see this being very accurate. But I still like it!

Hi, is there a programmatic way that I can simulate input so as to reset the idle timer?

I can't download the file. Please help me.
https://channel9.msdn.com/coding4fun/articles/Activity-Monitor
GitBook plugin to highlight specific lines in code blocks.

Here are a couple of highlighted Python code lines using a yellow background:

The above example was generated from these source lines:

from os import environ
&&&from fabric.api import *
from fabric.context_managers import cd
&&&from fabric.contrib.files import sed

Make sure you have GitBook and the gitbook-cli installed. The default highlight plugin that is built into GitBook must be disabled, because it prevents other plugins from processing code blocks. Here is an example book.json with the highlight plugin disabled and this code-highlighter plugin enabled.

{
  "author": "Matthew Makai",
  "cover": "cover.jpg",
  "gitbook": "2.x.x",
  "plugins": ["-highlight", "code-highlighter"],
  "title": "The Full Stack Python Guide to Deployments",
  "pdf": {
    "pageNumbers": true,
    "headerTemplate": " ",
    "footerTemplate": " "
  }
}

Run gitbook install to pull down the latest plugin version from NPM.

Within a code block, prepend &&& to each line that should be highlighted. Then add a .code-line-highlight property with a background-color to the .css files under the styles/ directory. For example, your styles/ directory can contain a pdf.css with this line:

.code-line-highlight {background-color: #ffff00;}

That will highlight each selected line in yellow. For more information on styling, refer to the GitBook docs.

The &&& mark and the CSS class can be made configurable, but I have not added that feature just yet.
https://npm.runkit.com/gitbook-plugin-code-highlighter
Cannot call AS3 Function from JavaScript

johngrese May 3, 2012 2:18 AM

Hi, I am trying to call an ActionScript 3 function with JavaScript. The problem is that I cannot seem to get any of the browsers to trigger this AS3 function. The debuggers say that the function is undefined when I call it on the flash object. What is supposed to happen is: the JavaScript function calls the AS3 function and passes it an integer value. That integer is then used to pull one of 20 swfs and load that into the flash movie. Please let me know if you guys can think of anything that I can do to make this work! I've been stuck for hours, and can't seem to make it happen.

Here is my ActionScript:

import flash.net.URLRequest;
import flash.display.Loader;
import flash.external.ExternalInterface;

var movies:Array = ["idle-ok.swf", "idle-good.swf"];

// It first loads this "hello swf".
var helloLoader:Loader = new Loader();
var helloURL:URLRequest = new URLRequest("idleAvatar.swf");
helloLoader.load(helloURL);
addChild(helloLoader);

var setAvatar:Function = loadAvatar;
ExternalInterface.addCallback("callSetAvatar", setAvatar);

// Then, this function should be called by JavaScript to load one of the other swfs.
function loadAvatar(indx:Number){ var url:URLRequest = new URLRequest(movies[0]); var ldr:Loader = new Loader(); ldr.load(url); addChild(ldr); } Here is my JavaScript: <script type="text/javascript"> function callExternalInterface(val) { document.getElementById("loader").callSetValue(val); } </script> Here is my embed code: <div> <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="250" height="350" id="loader" name="loader"> <param name="movie" value="loader.swf" /> <param name="allowscriptaccess" value="always" /> <!--[if !IE]>--> <object type="application/x-shockwave-flash" data="loader.swf" width="250" height="350"> <param name="allowscriptaccess" value="always" /> <!--<![endif]--> <a href=""> <img src="" alt="Get Adobe Flash player" /> </a> <!--[if !IE]>--> </object> <!--<![endif]--> </object> </div> <a href="#" onClick="callExternalInterface(1)">CLICK</a> 1. Re: Cannot call AS3 Function from JavaScriptNed Murphy May 3, 2012 4:47 AM (in response to johngrese) You don't appear to be calling any function in your actionscript. The names between and within do not agree. document.getElementById("loader").callSetValue(val); should be calling a function identified with the string "callSetValue" in your actionscript, but you only have a function identified with "callSetAvatar" and the function associated with that on your AS3 side is specified to be setAvatar... ExternalInterface.addCallback("callSetAvatar", setAvatar); but you do not have a function named setAvatar, you have one named loadAvatar Here's a link to a tutorial that might help you to see how the functions are specified between javascript and actionscript using the ExternalInterface class. It gets confusing so I suggest you copy the code to a word processing file so you can clearly highlight where they have the same language between them. 2. 
Re: Cannot call AS3 Function from JavaScript

johngrese May 3, 2012 1:44 PM (in response to Ned Murphy)

There is a line here that creates an "alias":

var setAvatar:Function = loadAvatar;

So, the setAvatar function is defined... I've also tried it without the alias... same result.

Thanks a lot though, dude! I checked out your tutorial, and I was able to modify the files to make it work for me. I still haven't figured out what the deal was though..... I'm going to assume it was "software gremlins". That seems like the most likely cause.

Thanks for the suggestion, your tutorial helped immensely!
https://forums.adobe.com/thread/998263?tstart=0
Multiline Edit Box with Automatic Scroll Bar Display The Problem If you need a multiline Edit Control, you can implement one at design time by using the Resource Editor and setting the styles Multiline, Horizontal scroll, Auto HScroll, Vertical scroll, and Auto VScroll from the Styles Tab in Edit Properties. Or, if you create the Edit Control from code at run time, you need to set the styles WS_HSCROLL, WS_VSCROLL, ES_MULTILINE, ES_AUTOHSCROLL, and ES_AUTOVSCROLL. For displaying multiple lines, you need to enter Ctrl+RETURN interactively from the keyboard during program execution, or from the code you need to separate the lines with the sequence "\r\n", for example: m_oEdit.SetWindowText(_T("First Line\r\nSecond Line\r\n...\r\n")); The problem with these approaches is that the scroll bars are visible even when they are not needed; for example, when the Edit Box is empty or when the text is very short. This problem is solved by the CScrollEdit class presented in this article. The Solution The CScrollEdit class is derived from the CEdit MFC class. As explained before, the styles WS_HSCROLL, WS_VSCROLL, ES_MULTILINE, ES_AUTOHSCROLL, and ES_AUTOVSCROLL have to be set for proper functioning of the Edit Control. When the scroll bars are not needed for display, they can be hidden by calling the function ShowScrollBar(): ShowScrollBar(SB_HORZ, FALSE); ShowScrollBar(SB_VERT, FALSE); The problem is how to detect when the scroll bars are needed and when they are not needed. 
This is done by the function CheckScrolling(), which is the core of the CScrollEdit class:

void CScrollEdit::CheckScrolling(LPCTSTR lpszString)
{
    CRect oRect;
    GetClientRect(&oRect);
    CClientDC oDC(this);
    int iHeight = 0;
    BOOL bHoriz = FALSE;
    CFont* pEdtFont = GetFont();
    if(pEdtFont != NULL)
    {
        // Determine text width and height
        SIZE oSize;
        CFont* pOldFont = oDC.SelectObject(pEdtFont);

        // Determine the line height
        oSize = oDC.GetTextExtent(CString(_T(" ")));
        iHeight = oSize.cy;

        // Text width
        int iWidth = 0, i = 0;
        CString oStr;
        // Parse the string; the lines in a multiline Edit are separated by "\r\n"
        while(TRUE == ::AfxExtractSubString(oStr, lpszString, i, _T('\n')))
        {
            if(FALSE == bHoriz)
            {
                int iLen = oStr.GetLength() - 1;
                if(iLen >= 0)
                {
                    // Eliminate the trailing '\r'
                    if(_T('\r') == oStr.GetAt(iLen))
                        oStr = oStr.Left(iLen);
                    oSize = oDC.GetTextExtent(oStr);
                    if(iWidth < oSize.cx)
                        iWidth = oSize.cx;
                    if(iWidth >= oRect.Width())
                        bHoriz = TRUE;
                }
            }
            i++;
        }
        oDC.SelectObject(pOldFont);
        // Text height
        iHeight *= i;
    }
    ShowHorizScrollBar(bHoriz);
    ShowVertScrollBar(iHeight >= oRect.Height());
}

The idea in this function is to parse the Edit Control's text and extract the lines (which are separated in the text by the sequence "\r\n"). The size of each line is then obtained with the GetTextExtent() function. The height of the text is determined by multiplying the height of a line by the number of lines. If the height of the text is greater than or equal to the height of the Edit Control's client rectangle, the vertical scroll bar is displayed; otherwise it is not. For the horizontal scroll bar, it is enough to detect just one line whose width is greater than or equal to the width of the Edit Control's client rectangle.
The function CheckScrolling() is called from two places:

- From the function SetWindowText(), which overloads the CWnd::SetWindowText() function:

void CScrollEdit::SetWindowText(LPCTSTR lpszString)
{
    CheckScrolling(lpszString);
    CEdit::SetWindowText(lpszString);
}

- From the handler OnCheckText(), which handles the special user message UWM_CHECKTEXT:

LRESULT CScrollEdit::OnCheckText(WPARAM wParam, LPARAM lParam)
{
    CString oStr;
    GetWindowText(oStr);
    CheckScrolling(oStr);
    return 0;
}

The message UWM_CHECKTEXT is posted from the OnChar() handler (the handler for the WM_CHAR message) each time the user is editing inside the Edit Control:

void CScrollEdit::OnChar(UINT nChar, UINT nRepCnt, UINT nFlags)
{
    // Possible text change
    PostMessage(UWM_CHECKTEXT);
    CEdit::OnChar(nChar, nRepCnt, nFlags);
}

How to Use the Code

Using the CScrollEdit class is very easy, as demonstrated by the accompanying demo project. You need to include the source files ScrollEdit.h and ScrollEdit.cpp in your project. In ScrollEdit.cpp, change the line #include "ScrollEditTst.h" to the appropriate #include for your application's header file. Then, declare the control member variable in the appropriate header file:

#include "ScrollEdit.h"
//...
CScrollEdit m_oScrollEdit;

and do the subclassing in the appropriate place; for example, in the OnInitDialog() handler:

m_oScrollEdit.SubclassDlgItem(IDC_EDIT1, this);
m_oScrollEdit.SetWindowText(_T(""));

The initialization with the text _T("") is necessary for the proper initialization of the scroll bars' display (in this case, they are not displayed initially).

Downloads
Download demo project - 14 Kb
Download source - 3 Kb
http://www.codeguru.com/cpp/controls/editctrl/article.php/c3917/Multiline-Edit-Box-with-Automatic-Scroll-Bar-Display.htm
I took ABC's ingredients and shuffled them around a bit. Python was similar to ABC in many ways, but there were also differences. Python's lists, dictionaries, basic statements and use of indentation differed from what ABC had. ABC used uppercase for keywords. I never got comfortable with the uppercase, neither reading nor typing it, so in Python keywords were lowercase. I think my most innovative contribution to Python's success was making it easy to extend. That also came out of my frustration with ABC. ABC was a very monolithic design. There was a language design team, and they were God. They designed every language detail and there was no way to add to it. You could write your own programs, but you couldn't easily add low-level stuff. For example, one of the big frustrations for software developers using big mainframe computers in the 60s, 70s, and 80s was input/output (IO). All those IO systems were way too complicated. The ABC designers realized their users' confusion with IO, and decided to do something different. But I think they went overboard. Instead of IO, where you could read a file and write a file, ABC's designers decided to just have global variables in the program. Their users already understood the concept of global variables. So ABC's designers made those global variables persistent. If you quit your session, all your global variables were saved by the system to a disk file. When you started another session, all your global variables were restored. It worked fairly well, to an extent. It is similar to the idea of a workspace, for example, in Smalltalk. There was a print statement that wrote to the screen, and an input statement that read from a keyboard, but there was no way to redirect IO to or from a file. They literally didn't have any other IO available. Around that same time, personal computers became available. Personal computers had all this wonderful packaged software that dealt in files.
There was a spreadsheet file, a word processor file, a graphics editor file. The ABC users wanted to write little ABC programs that took something from their word processor file and pushed it back into the spreadsheet, or the other way around, but they couldn't because of the limitation on IO.

Bill Venners: They wanted to massage files.

Guido van Rossum: They wanted to massage data, and the data just happened to be in files. It made things difficult that the language didn't have files as a concept.

Guido van Rossum: What made the lack of file support in the ABC language worse was that it wasn't easy to extend ABC. You couldn't say, "This language is implemented in C, so let's just add another function to the standard library that does open a file." ABC had no concept of a standard library. It had built-in commands, but the parser knew about them. It had built-in functions that were very much integrated in the runtime. Adding to the language's standard functionality was very difficult.

For Python, I felt extensibility was obviously a great thing to have. I already knew we would want to use Python on different platforms. I knew we wanted to use Python on Amoeba, the operating system we were developing, and on UNIX, the operating system we were using on our desktops. I knew we would also want to support Windows and Macintosh. I realized that each of those systems had certain functionality that was consistent everywhere, like the standard IO library in C, but there was also different functionality. If you wanted to draw bits on the screen on a Windows system, you had to use different code and a different programming model than you would on a Macintosh or on Unix.

I came up with a flexible extensibility model for Python. I said: "We'll provide a bunch of built-in object types, such as dictionaries, lists, the various kinds of numbers, and strings, to the language. But we'll also make it easy for third-party programmers to add their own object types to the system."
ABC also didn't have namespaces, because it was intended as a relatively small scale language. It only had functions and procedures. You couldn't group them together. Later they added a namespace mechanism, but I thought it was fairly clumsy. By then I had some experience with Modula-2 and Modula-3, so I decided the module would be one of Python's major programming units. I decided Python would have two different kinds of modules: You can write modules in Python, but you can also write a module entirely in C code. And such a module can make new types and new objects available. That turned out to be a successful idea, because immediately my CWI colleagues, the users, and I started writing our own extension modules. The extension modules let you do all sorts of things: communicate with graphics libraries, data flow libraries, and all sorts of file formats.

Bill Venners: So if I write a module in C, I can use it from my Python program and the types look just like Python types?

Guido van Rossum: That's right. In Python, the way to use a module is always through import statements. Python's import works slightly differently from Java's import, but it has the same idea behind it. You import some module name, and the system uses several different ways to locate that module. There's a search path. It looks for files of different types. If you're looking for import foo, it will eventually find either a file foo.py or foo.so (or foo.dll on Windows). foo.py is a piece of Python source code. The Python source is parsed and interpreted. That makes functions and/or classes available to the program. foo.so, or foo.dll, is a piece of precompiled machine code. It is usually implemented in C or C++, but some people use Fortran to write their extensions that will link to large Fortran libraries. The way you use a precompiled machine code module is, from the Python point of view, exactly the same. You import it. You can list the module's contents to see what's inside.
Or, if you're already familiar with the module, you can just start using it.

Come back Monday, January 20 for Part II of this conversation with Python creator Guido van Rossum. Have an opinion about Python, the ABC language, indentation-based statement grouping, or extensibility? Discuss this article in the News & Ideas Forum topic, The Making of Python.
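The module lookup Guido describes, a search path checked for either Python source or precompiled extension modules, is easy to observe from a modern interpreter. This is an illustrative aside, not part of the interview:

```python
import sys
import math  # math is implemented in C on CPython, yet imports like any module

# The interpreter walks these directories, in order, looking for foo.py,
# a compiled extension (foo.so / foo.pyd), or a package directory named foo.
for directory in sys.path:
    print(directory)

# Pure-Python or precompiled, usage is identical: import it, then list
# the module's contents to see what's inside.
print([name for name in dir(math) if not name.startswith("_")][:5])
```

Either kind of module exposes its names the same way, which is the "exactly the same from the Python point of view" behavior described above.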
http://www.artima.com/intv/pythonP.html
Finding The Distance Between Latitude / Longitude Locations In ColdFusion

Someone asked me to write up a demo on finding all of the zip codes in the local proximity to a given zip code (such as you would do with a Store Locator). I have never done this before, so I figured I better start with the basics. Seeing as most zip code databases work off of latitude and longitude, I figured I would first start out trying to find the earthly distance between two sets of latitude and longitude readings. As I have no working knowledge of how to do this, I did what everyone else does - I Googled for an answer. After combing through a number of results, I finally settled on one. The formula looked pretty simple, so I figured converting it to ColdFusion would be straightforward.

One thing that you will notice in the GetLatitudeLongitudeProximity() method is that we are multiplying the difference in degrees by an approximate number of miles-per-degree. This is an approximation and will change (ie. become less accurate) the closer you get to the North and South Pole where the longitude lines get closer together. However, for our purposes, I am just going to run with it.

<cffunction
	name="GetLatitudeLongitudeProximity"
	access="public"
	returntype="numeric"
	output="false">

	<!--- Define arguments. --->
	<cfargument
		name="FromLatitude"
		type="numeric"
		required="true"
		hint="I am the starting latitude value."
		/>

	<cfargument
		name="FromLongitude"
		type="numeric"
		required="true"
		hint="I am the starting longitude value."
		/>

	<cfargument
		name="ToLatitude"
		type="numeric"
		required="true"
		hint="I am the target latitude value."
		/>

	<cfargument
		name="ToLongitude"
		type="numeric"
		required="true"
		hint="I am the target longitude value."
		/>

	<!--- Define the local scope. --->
	<cfset var LOCAL = {} />

	<!---
		The approximate number of miles per degree of latitude.
		Once we have the difference in degrees, we will use this
		to find an approximate horizontal distance.
	--->
	<cfset LOCAL.MilesPerLatitude = 69.09 />

	<!---
		Calculate the distance in degrees between the two
		different latitude / longitude locations (spherical law
		of cosines, as in the SQL versions discussed below).
	--->
	<cfset LOCAL.DegreeDistance = RadiansToDegrees(
		ACos(
			(
				Sin( DegreesToRadians( ARGUMENTS.FromLatitude ) ) *
				Sin( DegreesToRadians( ARGUMENTS.ToLatitude ) )
			) +
			(
				Cos( DegreesToRadians( ARGUMENTS.FromLatitude ) ) *
				Cos( DegreesToRadians( ARGUMENTS.ToLatitude ) ) *
				Cos( DegreesToRadians( ARGUMENTS.FromLongitude - ARGUMENTS.ToLongitude ) )
			)
		)
	) />

	<!---
		Given the difference in degrees, return the approximate
		distance in miles.
	--->
	<cfreturn Round( LOCAL.DegreeDistance * LOCAL.MilesPerLatitude ) />
</cffunction>

<cffunction
	name="DegreesToRadians"
	access="public"
	returntype="numeric"
	output="false">

	<!--- Define arguments. --->
	<cfargument
		name="Degrees"
		type="numeric"
		required="true"
		hint="I am the degree value to be converted to radians."
		/>

	<!--- Return converted value. --->
	<cfreturn (ARGUMENTS.Degrees * Pi() / 180) />
</cffunction>

<cffunction
	name="RadiansToDegrees"
	access="public"
	returntype="numeric"
	output="false">

	<!--- Define arguments. --->
	<cfargument
		name="Radians"
		type="numeric"
		required="true"
		hint="I am the radian value to be converted to degrees."
		/>

	<!--- Return converted value. --->
	<cfreturn (ARGUMENTS.Radians * 180 / Pi()) />
</cffunction>

Once I had the primary ColdFusion user defined function and its two helper methods in place (for radian-degree conversion), I just needed to test them. I calculated the distance between my office zip code (10016) and several other locations in New York and Boston:

<!--- Get the starting location. NYC. --->
<cfset Location10016 = {
	Latitude = 40.7445,
	Longitude = -73.9782
	} />

<!--- Get the target location. NYC. --->
<cfset Location10011 = {
	Latitude = 40.7409,
	Longitude = -73.9997
	} />

<!--- Get the target location. Croton on the Hudson. --->
<cfset Location10520 = {
	Latitude = 41.2219,
	Longitude = -73.8870
	} />

<!--- Get the target location. Boston.
--->
<cfset Location02155 = {
	Latitude = 42.4255,
	Longitude = -71.1081
	} />

<!--- Output the distance. --->
<cfoutput>
	<p>
		Distance (10016 - 10011):
		#GetLatitudeLongitudeProximity(
			Location10016.Latitude,
			Location10016.Longitude,
			Location10011.Latitude,
			Location10011.Longitude
			)#
		miles
	</p>
	<p>
		Distance (10016 - 10520):
		#GetLatitudeLongitudeProximity(
			Location10016.Latitude,
			Location10016.Longitude,
			Location10520.Latitude,
			Location10520.Longitude
			)#
		miles
	</p>
	<p>
		Distance (10016 - 02155):
		#GetLatitudeLongitudeProximity(
			Location10016.Latitude,
			Location10016.Longitude,
			Location02155.Latitude,
			Location02155.Longitude
			)#
		miles
	</p>
</cfoutput>

When we run this code, I get the following output:

Distance (10016 - 10011): 1 miles
Distance (10016 - 10520): 33 miles
Distance (10016 - 02155): 188 miles

That looks accurate enough for me. Next step will be to move this calculation into a SQL query.

You can do this directly in mysql. Note that I have not personally tested this. Very cool post though. Thanks!

select asciiname, latitude, longitude,
	acos( SIN( PI()* 40.7383040 /180 )*SIN( PI()*latitude/180 ) )
	+ ( cos( PI()* 40.7383040 /180 )*COS( PI()*latitude/180 )
	*COS( PI()*longitude/180 - PI()* -73.99319 /180 ) ) * 3963.191 AS distance
FROM allcountries
WHERE 1=1
	AND 3963.191 * ACOS(
		(SIN(PI()* 40.7383040 /180)*SIN(PI() * latitude/180))
		+ (COS(PI()* 40.7383040 /180)*cos(PI()*latitude/180)*COS(PI() * longitude/180 - PI()* -73.99319 /180))
	) <= 1.5
ORDER BY 3963.191 * ACOS(
	(SIN(PI()* 40.7383040 /180)*SIN(PI()*latitude/180))
	+ (COS(PI()* 40.7383040 /180)*cos(PI()*latitude/180)*COS(PI() * longitude/180 - PI()* -73.99319 /180))
	)

Google Map API is still minimal. I ended up using a Multimap API to do the search (for which I need to tidy up and release my component to the community), and your function will certainly be a welcome addition to the code. Yes, I am geekily in love with structures, xml and playing remote services.
:)

@Brian, I decided that I needed to start out just figuring the distance to get comfortable with the calculation. I know formatting in my comments is a hard thing to do since tabbing is not kept, but as you can see, the SQL statement has almost zero readability. That's why I want to really look at the formula in a highly formatted way in ColdFusion. Once I have that done and tested, I decided (as per the last line in my blog post) that the next step would be to move it into the database SQL statements.

In my research, I did find someone who said he actually had much better performance pulling out approximate matches (based on degree differences only) and then using PHP to run the actual calculations. I think this makes a lot of sense considering the number of zip codes in the database (over 42,000). This might actually be my next step of experimentation.

@Matt, Nothing wrong with an API - offload the burden of the search!

Nice one Ben. I've had to do something similar a while ago, but as Matt mentioned above, the 'free' databases for the UK are dodgy or incomplete. I ended up using the Royal Mail's database, which as mentioned costs a fortune but is very complete. It'd be great to have a service like this one you found on sourceforge for the UK.

@Marcos, I didn't even know there were free versions of zip code databases here. I have one we needed for a project. It was really cheap though, like $30. That's annoying that they are so expensive over in the UK!
decimal(21,20)
DECLARE @distance float
SET @pi = 3.14159265358979323846
SET @x = sin( @lat1 * @pi/180 ) * sin( @lat2 * @pi/180 )
	+ cos( @lat1 * @pi/180 ) * cos( @lat2 * @pi/180 )
	* cos( abs( (@long2 * @pi/180) - (@long1 * @pi/180) ) )
SET @x = atan( ( sqrt( 1 - power( @x, 2 ) ) ) / @x )
SET @distance = abs(( 1.852 * 60.0 * ((@x/@pi)*180) ) / 1.609344)
RETURN @distance
END

Ben, There's also a UDF on cflib that does the distance calculation and returns the results in km (kilometers), sm (statute miles), nm (nautical miles), or radians. I've been using this code for a while now and it works really well (it came out 4 or 5 years ago and runs awesome).

@Jeff, Rob, Thanks guys. Looks like everyone has a different solution to this. I wish I knew enough about the mathematics behind it to see how each formula differs.

if you're just trying to isolate points within a certain distance of a given point, a simple bounding box search will save you tons of work. take your search point, add/subtract the distance you're looking for to build up 2 pairs of points that will make up your bounding box. use the bounding box in your WHERE clause & bob's your uncle. if you need precise distance per point *then* run those calculations.

@PaulH, I think that is actually going to be my next plan of attack. I saw a thread where a guy talked about doing that to get a rough list, then once the data was out of the database, he performed the more precise calculations on the subset of zip codes.

"As I have no working knowledge of how to do this, I did what everyone else does - I Googled for an answer." Everyone, that is, except the person who asked you "to write up a demo on finding all of the zip codes in the local proximity to a given zip code" in the first place. I appreciate it's not a particularly straight-forward problem and certainly not something a programmer will do very often in his/her career, but seriously... that's a big ask. Did they even have a go before asking you to do it for them?
@George, Ha ha, good point :) I figured it was an interesting topic and one that I've never done before, so... what the heck.

...and in general this is a good google recommended reference for calculations w/geographic coords:

@ben depending on the end use, 90% of the time a bounding box is "good enough". you're looking at a point representing a polygon anyway (if your use case is zipcodes) which makes all the distance calculations more or less a sham. to be done "right", for some kind of consumer application, you're probably looking at routing (unless the users are crows that is) or real GIS operations (like doing an intersection) if another end use. otherwise i'd just go w/google maps, esri or whatever. btw if you've looked at the mysql "spatial" bits, a bounding box attack is used for most all their "spatial" functions (intersection, union, point-in-polygon searches, etc.). very sham-ful from an old school GIS point of view ;-)

this task is:' . '?appid=' . $yahooAPIAppID . '&output=php&zip=' . $zip; Where $yahooAPIAppID is a key registered with yahoo maps api and the output value can undoubtedly be changed to fit various formats.

@Brian, Always glad to help. Check out my latest post: This demonstrates that using a bounding box model is practically just as accurate and much easier to use.

@PaulH, These calcs are fine for flying city-to-city type long distance but do not help too much for driving distances. Any thoughts on driving distance between two geopoints?

as i said, "routing". googlemaps work well enough, even here in thailand. if that's not good enough & you have better transport infrastructure data (which is probably the only reason it's not "good enough") have a look at postGIS/pgRouting. if you're lighting cigars w/ben franklins, look at ESRI.
It is an online postcode to postcode distance calculator: you enter two postcodes and, after clicking the 'Calculate Distance' button, it will calculate the travel distance between them.

Hello Ben, community, I was using this code for about 6 years now and according to my client, it seems to be quite accurate. It basically does the following: 1) If a client calls asking for a service tech in his/her area, we ask for the zip code and enter it on a form. Here is the code: I hope it helps.

@Ben Nadel, do you know of any web service that gives me the latitude and longitude after entering the zip/postal code?

@Dani, I have answered my own question with this great component from the Jedi:

So when feeding these functions through a larger scale system we came across a bit of an untraceable error that makes me think there might need to be better handling: "1.0000000000000002 must be within range: ( -1 : 1 )". Any wise ones?

So unless I am misunderstanding it, your method would work at the equator but would break down completely closer to the poles. -Brian

96% accurate. Calculate the max long, max lat, and the min long, and the min lat and do a search BETWEEN minlong maxlong AND BETWEEN minlat and maxlat using the fields where appropriate - although I did this in PHP for distance calculation I found out it was pretty accurate except it was finding areas within a square and some areas of the square were a little off - not by much, 1 - 5 miles, but still pretty close.
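For readers following along outside ColdFusion, the same spherical law of cosines calculation can be sketched in Python, using the 3963.191-mile earth radius from the SQL comments above and a clamp for the acos-domain error one commenter ran into. The function name here is mine, not from the post:

```python
import math

def lat_long_distance_miles(from_lat, from_long, to_lat, to_long):
    """Approximate great-circle distance via the spherical law of cosines."""
    earth_radius_miles = 3963.191
    lat1 = math.radians(from_lat)
    lat2 = math.radians(to_lat)
    delta_long = math.radians(to_long - from_long)
    x = (math.sin(lat1) * math.sin(lat2)
         + math.cos(lat1) * math.cos(lat2) * math.cos(delta_long))
    # Floating point can push x just past 1.0 (the "must be within range
    # ( -1 : 1 )" error reported in the comments); clamp before acos.
    x = max(-1.0, min(1.0, x))
    return earth_radius_miles * math.acos(x)

# Office zip 10016 to Boston zip 02155, matching the post's test data:
print(round(lat_long_distance_miles(40.7445, -73.9782, 42.4255, -71.1081)))
```

A bounding-box prefilter, as several commenters suggest, would compare raw latitude/longitude deltas first and only run this calculation on the surviving rows.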
http://www.bennadel.com/blog/1489-finding-the-distance-between-latitude-longitude-locations-in-coldfusion.htm?_rewrite
According to my monitoring page, that's another ~16K PPD. (It's the "lambic" node, second one down.)

Edit: The glamour shot -- the "lambic" node, in all its glory:

BIF wrote: It is heated by the sun, and that would be the problem. 95-100+ F during summertime months.

just brew it! wrote: Just found an excuse to turn up another node for Frankie for the final week. Took on a little water in the basement after some thunderstorms today; there's a patch of carpet that's damp...

farmpuma wrote: My limited experience with the 6.23 SMP client and the A3 work units with bonus points was rock solid stable. I gave up when the majority of work units resulted in 20MB upload files even though the client was configured for nothing over 5MBs. The 20MB files took about four hours to upload and the server often timed out before the upload was complete. I still have my passkey and SMP folding will resume when real internet makes it to my house.

#!/usr/bin/python
import socket
import select
import threading
import SocketServer
import time

upCount = 0
downCount = 0

class ThreadedTCPRequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        global upCount
        global downCount
        cur_thread = threading.currentThread()
        name = cur_thread.getName()
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect(('127.0.0.1', 80))
        clientDone = False
        serverDone = False
        while not (clientDone and serverDone):
            idle = True
            readers, writers, err = select.select([self.request, sock], [], [], 0)
            if self.request in readers:
                idle = False
                data = self.request.recv(4096)
                if len(data) == 0:
                    if clientDone:
                        break
                    clientDone = True
                else:
                    upCount += len(data)
                    while upCount > 0:
                        time.sleep(0.05)
                    sock.send(data)
            if sock in readers:
                idle = False
                response = sock.recv(4096)
                downCount += len(response)
                if len(response) == 0:
                    if serverDone:
                        break
                    serverDone = True
                    clientDone = True
                else:
                    while downCount > 0:
                        time.sleep(0.05)
                    self.request.send(response)
            if idle:
                time.sleep(0.05)
        sock.close()

class ThreadedTCPServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

def client(ip, port, message):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((ip, port))
    sock.send(message)
    response = sock.recv(32768)
    sock.close()

if __name__ == "__main__":
    # Port 0 means to select an arbitrary unused port
    HOST, PORT = "0.0.0.0", 12080
    server = ThreadedTCPServer((HOST, PORT), ThreadedTCPRequestHandler)
    server_thread = threading.Thread(target=server.serve_forever)
    server_thread.setDaemon(True)
    server_thread.start()
    while 1:
        time.sleep(0.05)
        if upCount > 0:
            upCount -= 2000
        if downCount > 0:
            downCount -= 5000

BIF wrote: How do you have a hard cider mishap? In the basement?

just brew it! wrote: Looks like 4 million is in reach?

That will still need some increase in production. I am pretty much maxed out now, so nothing more to give. We will need maybe ~50K PPD more in order to reach 4 million. I say we will be lucky if we achieve that.

Yeah, down a little from last year. Oh well.

just brew it! wrote: Didn't realize the script I used to start the client on the 9550 had the -oneunit option set. So of course it was idling when I got up this morning. D'oh! (Fixed now...)

BIF wrote: Whoops! I'm curious, how many PPD can the 9550 produce?

BIF wrote: Fascinating! I like the idea of having the current stats all rolled up into one page.
https://techreport.com/forums/viewtopic.php?p=1140113
Okay, I'm brand spankin' new to Java, but not programming. Here's a very simple application for which I am receiving an error.

public class CoolJava {
    public static void main (String args[]) {
        System.out.println("Java is Cool");
    }
}

I have the Java 2 SDK, ver 1.3.1 installed on a Win2K platform. After installing it I adjusted the Windows path to include the ".../bin" directory; rebooted the computer. The above was created in a text editor and saved as "CoolJava.java". I opened the command prompt and changed directories to that containing this file. Next, I typed "javac CoolJava.java". It compiled to bytecode without a hitch, producing the file "CoolJava.class". Finally, I typed "java CoolJava" to run the program. Herein lies the problem. I received the following message.

"Exception in thread "main" java.lang.NoClassDefFoundError: CoolJava"

Argh. Why? Your feedback is greatly appreciated. Respectfully, ASP

Hi, Did you set your classpath as well? Here is an example of a path and classpath setting.

path=C:\Program Files\Java\j2sdk1.4.0_03\bin;
set CLASSPATH=C:\Program Files\Java\j2re1.4.0_03\lib\rt.jar;

This is almost definitely the cause of the classnotfound exception. Hope this helped. Michael

Michael, Thanks for your reply. Well, although I did not make it clear, my intention was to deliberately avoid setting an environment variable for the Classpath. It turns out I did a little more research and found a solution. Since I was keeping both the ".java" and ".class" files in the same directory, I only needed to identify the current directory for the class files. So, for example...

javac <myFile>.java - compiles to Java bytecode

...and while in the same directory as the newly compiled file...

java -cp . <myFile> - runs the program (the class name is given without the ".class" extension)

That worked. Respectfully, ASP
http://forums.devx.com/showthread.php?139182-self-post-the-page-and-move-to-another-page&goto=nextoldest
Rules

A rule defines a series of actions that Bazel should perform on inputs to get a set of outputs. For example, a C++ binary rule might take a set of .cpp files (the inputs), run g++ on them (the action), and return an executable file (the output). Before writing your own rules, make sure you are familiar with the evaluation model. You must understand the three phases of execution and the differences between macros and rules.

A few rules are built into Bazel itself. These native rules, such as cc_library and java_binary, provide some core support for certain languages. By defining your own rules, you can add similar support for languages and tools that Bazel does not support natively.

Rules defined in .bzl files work just like native rules. For example, their targets have labels, can appear in bazel query, and get built whenever they are needed for a bazel build command or similar. When defining your own rule, you get to decide what attributes it supports and how it generates its outputs. The exact behavior of a rule during the analysis phase is governed by its implementation function.

During the analysis phase, a rule's implementation function can create additional output files. Since all labels have to be known during the loading phase, these additional output files are not associated with labels or Targets. Generally these are intermediate files needed for a later compilation step, or auxiliary outputs that don't need to be referenced in the target graph. Even though these files don't have a label, they can still be passed along in a provider to make them available to other depending targets at analysis time.

A generated file that is addressable by a label is called a predeclared output. There are multiple ways for a rule to introduce a predeclared output:

- If the rule declares an outputs dict in its call to rule(), then each entry in this dict becomes an output. The output's label is chosen automatically as specified by the entry, usually by substituting into a string template.
This is the most common way to define outputs.

- The rule can have an attribute of type output or output_list. In this case the user explicitly chooses the label for the output when they instantiate the rule.

- (Deprecated) If the rule is marked executable or test, an output is created with the same name as the rule instance itself. (Technically, the file has no label since it would clash with the rule instance's own label, but it is still considered a predeclared output.) By default, this file serves as the binary to run if the target appears on the command line of a bazel run or bazel test command. See Executable rules below.

All predeclared outputs can be accessed within the rule's implementation function under the ctx.outputs struct; see that page for details and restrictions. Non-predeclared outputs are created during analysis using the ctx.actions.declare_file and ctx.actions.declare_directory functions. Both kinds of outputs may be passed along in providers.

Although the input files of a target, those files passed through dependency attributes, can be accessed indirectly via ctx.attr, it is more convenient to use ctx.file and ctx.files. For output files that are predeclared using output attributes (attributes of type attr.output or attr.output_list), ctx.attr will only return the label, and you must use ctx.outputs to get the actual File object.

Consider the case where you build on one machine and target a different architecture. The build can be complex and involve multiple steps. Some of the intermediate binaries, like the compilers and code generators, have to run on your machine (the host); other binaries, such as the final output, have to be built for the target architecture. By default, Bazel builds the dependencies of a target in the same configuration as the target itself, i.e. without transitioning. When a target depends on a tool, the label attribute will specify a transition to the host configuration. This causes the tool and all of its dependencies to be built for the host machine, assuming those dependencies do not themselves have transitions.
For each dependency attribute, you can decide whether the dependency target should be built in the same configuration, or transition to the host configuration (using cfg). If a dependency attribute has the flag executable=True, the configuration must be set explicitly.

A runfiles object, describing the files a target needs available at runtime, can be created manually and/or collected transitively from the rule's dependencies:

def _rule_implementation(ctx):
  ...
  transitive_runfiles = depset(transitive =
    [dep.transitive_runtime_files for dep in ctx.attr.special_dependencies])
  runfiles = ctx.runfiles(
      # Add some files manually.
      files = [ctx.file.some_data_file],
      # Add transitive files from dependencies manually.
      transitive_files = transitive_runfiles,
      # Collect runfiles from the common locations: transitively from srcs,
      # deps and data attributes.
      collect_default = True,
  )
  # Add a field named "runfiles" to the DefaultInfo provider in order to actually
  # create the symlink tree.
  return [DefaultInfo(runfiles=runfiles)]

Note that non-executable rule outputs can also have runfiles. For example, a library might need some external files during runtime, and every dependent binary should know about them. Also note that if an action uses an executable, the executable's runfiles can be used when the action executes.

A rule can use the instrumented_files provider to provide information about which files should be measured when code coverage data collection is enabled:

def _rule_implementation(ctx):
  ...
  return struct(instrumented_files = struct(
      # ...
  ))
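Tying the outputs discussion together, a minimal rule that predeclares an output via the outputs dict and writes it during the analysis phase might look like the sketch below. The rule name example_manifest and its attributes are illustrative only; rule, attr, ctx.actions.write, depset, and DefaultInfo are the actual Starlark APIs being demonstrated:

```python
def _example_impl(ctx):
    # "out" is a predeclared output: its label was derived from the outputs
    # template at loading time, and it is available on ctx.outputs.
    out = ctx.outputs.out
    ctx.actions.write(
        output = out,
        content = "\n".join([src.path for src in ctx.files.srcs]) + "\n",
    )
    # Returning DefaultInfo makes the file a default output of the target.
    return [DefaultInfo(files = depset([out]))]

example_manifest = rule(
    implementation = _example_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
    },
    # Each entry becomes a predeclared output, named via a string template.
    outputs = {"out": "%{name}.manifest"},
)
```

Instantiating example_manifest(name = "hdrs", srcs = [...]) in a BUILD file would then let other targets refer to the generated file by the label :hdrs.manifest.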
https://docs.bazel.build/versions/0.17.2/skylark/rules.html
As we begin to implement more and more tools and concepts into our Java applications, we also open the door for a wider variety of bugs. Throughout this course we'll periodically pause to walk through how to identify and resolve common errors Java students experience.

As we learned earlier, Java is a compiled language. Our .java source code files must be compiled into .class bytecode files the computer is capable of reading. Because the compiler must compile all source code before our programs can run, it is usually the first to spot errors. At this point in the course, most errors you see will come from the compiler. This lesson will walk through how to decipher compiler errors by addressing three of the most common error messages.

While there are many different reasons for the compiler to throw errors, most error messages look fairly similar. Here's an example:

/Users/epicodus_staff/Desktop/perry/Java-Prework-Car-Dealership/src/main/java/App.java
Error:(13, 29) java: constructor Vehicle in class models.Vehicle cannot be applied to given types;
  required: int,java.lang.String,java.lang.String,int,int
  found: int,java.lang.String,java.lang.String,int
  reason: actual and formal argument lists differ in length

Each compiler error message follows this general format. Let's consider each piece closely:

/Users/staff/Desktop/java/car-dealership/src/main/java/App.java:9

Each error message should contain the file path of the source code file containing the error. Use this to hone in on where the issue is occurring. If you see multiple different filenames with differing line numbers that indicate errors, such as one in a testing file and one in a class file, the testing file is likely calling the method that contains the error. A brief explanation of the error should be provided directly after the file path. In the example above, we're provided: error: constructor Vehicle in class Vehicle cannot be applied to given types;. Always read this line very, very carefully.
After this explanation, the offending line of code is usually included. After this line of broken code, there is often a section containing additional details. This content differs slightly depending on the type of error, but will usually contain valuable details. In the message above, the following is included:

required: int,java.lang.String,java.lang.String,int,int
found: int,java.lang.String,java.lang.String,int
reason: actual and formal argument lists differ in length

Remember, our error said constructor Vehicle in class Vehicle cannot be applied to given types. This section is providing more details about why we cannot apply the constructor:

required: Refers to the arguments the constructor requires.
found: Refers to the actual arguments the compiler found us using in the source code.
reason: Provides a reason why these arguments don't work. In our case, actual and formal argument lists differ in length. That is, the number of arguments the constructor requires and the number of arguments we actually gave it are different.

If we check the specific line of code in the file provided by the error, we can see that this usage of the constructor is missing an argument:

public class App {
  public static void main(String[] args) {
    Console myConsole = System.console();
    Vehicle hatchback = new Vehicle(1994, "Subaru", "Legacy", 170000); // <-- missing the fifth argument
    Vehicle suv = new Vehicle(2002, "Ford", "Explorer", 100000, 7000);
    Vehicle sedan = new Vehicle(2015, "Toyota", "Camry", 50000, 30000);
    ...
This error looks something like this:

/Users/epicodus_staff/Desktop/perry/Java-Prework-Car-Dealership/src/main/java/App.java
Error:(17, 66) java: incompatible types: java.lang.String cannot be converted to int

Again, notice the error provides the specific file and line of code. It also says String cannot be converted to int. If you look closely, you'll notice that the fourth argument in the constructor is a string:

Vehicle crossover = new Vehicle(1998, "Toyota", "Rav-4", "200000", 3500);

But, when we defined our constructor, we clearly stated that this argument should be an int:

...
public Vehicle(int year, String brand, String model, int miles, int price) {
  this.year = year;
  this.brand = brand;
  this.model = model;
  this.miles = miles;
  this.price = price;
}
...

If we remove the quotation marks, the issue should be solved. When this error occurs, pay special attention to the declared data types in the line(s) of code referenced. At least one piece of information does not match the intended data type.

"Cannot find symbol" is another common error. It looks something like this:

/Users/epicodus_staff/Desktop/perry/Java-Prework-Car-Dealership/src/main/java/models/Vehicle.java
Error:(19, 22) java: cannot find symbol
symbol: variable rice
location: class models.Vehicle

A symbol is a name in our code that represents something else. When we create variables or methods, their names are symbols. This error usually means we spelled something incorrectly, or didn't define something properly. In our case, we can look closely at the specific file and line of code the error provides:

public Vehicle(int year, String brand, String model, int miles, int price) {
  this.year = year;
  this.brand = brand;
  this.model = model;
  this.miles = miles;
  this.price = rice;
}
...

And look, there's the error! The price = rice; line provided by the message is clearly trying to set the price member variable equal to rice. But we didn't pass in an argument named rice; we passed in an argument named price:

...
public Vehicle(int year, String brand, String model, int miles, int price) {
...

So, Java is throwing an error because we never defined anything named rice. It does not understand what rice is. Or, it 'cannot find the symbol' rice. If we fix our typo by changing rice to the correct price, the error should be resolved.

As you'll discover, there are many different errors the compiler can throw. These are only a few of the most common. But if you follow the process demonstrated here to carefully decipher error messages, you'll be able to squash bugs of any variety.

Using System.out to Log Values

We are already familiar with using System.out.println();. It's a simple way to communicate with the user and ask them questions. But System.out is also useful for catching errors and debugging. We can use it anywhere in our application: inside complex branching, a loop, or even inside a test. This can be especially helpful when working with objects in tests, which we will do later this week. Here is an example test using System.out.println():

@Test
public void runPingPong_replaceMultiplesOf3_ArrayList() throws Exception {
  PingPong testPingPong = new PingPong();
  ArrayList<Object> expectedOutput = new ArrayList<Object>();
  System.out.println(expectedOutput.toString());
  expectedOutput.add(1);
  expectedOutput.add(2);
  expectedOutput.add("ping");
  System.out.println(expectedOutput.toString());
  assertEquals(expectedOutput, testPingPong.runPingPong(3));
}

and here is how this will display in the console:

[]
[1, 2, ping]
Process finished with exit code 0

Before you ask for help from your instructor, be sure to try to resolve the issues you are having by getting more information with a System.out. Similar to the JavaScript debugger we used in Intro to Programming, IntelliJ uses a debugger to help resolve bugs in tests and application code.
If code compiles without an error and doesn't crash, but doesn't give us the result we expect, using System.out and the debugger can help track down what's actually happening. Here's how we use the debugger in IntelliJ. Open a file we wish to debug. Click in the editor's gutter, next to the line number. We should see a red dot appear. This is called a breakpoint. (Note that we cannot set a breakpoint on an empty line.) A breakpoint is like writing the keyword debugger into our JavaScript code. When the debugger hits the breakpoint, code execution will freeze, and we can step through code slowly to inspect it for issues. If we click the Run button, we won't see any changes though. We need to run the app in "debug mode" for the breakpoint to fire. Find the small button next to the Run button that looks suspiciously like a beetle. Now, if we hit this button, our code will execute in debug mode. Find an app to experiment with, set a breakpoint, and try out debug mode now.

When the debugger fires, we'll see controls similar to the JavaScript debugger. The most important areas of this screen are numbered for reference:

1. Currently-active variables and their content. We can pop open anything preceded with an |> to inspect its contents. This is very useful to determine if objects or other variables contain the correct content.
2. Shows the current execution point.
3. Step to the next line in the file. This is the command we'll use most frequently.
4. (Blue button) Step into the next line executed. This means stepping into code, such as a function. In many cases this will be code we did not write, such as a function call. Try stepping out (5) if we get stranded. We can ignore the red "force step into" button, generally speaking.
5. Step Out button. If we stepped into a function, whether our own or a method we did not write, we can step back out with this button.
6. Re-run the application in debug mode, say, after a code change.
7. Resume the app, ignoring the breakpoint.
8. Pause the app.
9. Stop the app completely.

Here we can see the lines being executed, and the variables they are producing. Get comfortable using the debugger for both testing and frontend development. It can save a huge amount of trial, error, and frustration. To run the debugger in a test, set a breakpoint and hit the beetle button, or right-click on the editor tab and select Debug yourFilenameTest. And don't forget to remove breakpoints when you are done debugging!
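To see all three fixes from the lesson together, here is a minimal, runnable sketch of the Vehicle class after repairing the errors discussed above. The constructor signature matches the lesson's code; the describe() method, the hatchback's price value, and the main() usage are made up for illustration, since the lesson never showed a complete, working file:

```java
// Minimal sketch of the lesson's Vehicle class after fixing the three
// compiler errors: all five constructor arguments supplied, miles passed
// as an int (no quotation marks), and the "rice" typo corrected to "price".
class Vehicle {
    private int year;
    private String brand;
    private String model;
    private int miles;
    private int price;

    public Vehicle(int year, String brand, String model, int miles, int price) {
        this.year = year;
        this.brand = brand;
        this.model = model;
        this.miles = miles;
        this.price = price;  // was "this.price = rice;" -- cannot find symbol
    }

    // Hypothetical helper, handy for System.out debugging.
    public String describe() {
        return year + " " + brand + " " + model + ": " + miles + " miles, $" + price;
    }

    public static void main(String[] args) {
        // All five arguments present, and miles is an int, not a String.
        // The price value (4000) is invented; the broken example omitted it.
        Vehicle hatchback = new Vehicle(1994, "Subaru", "Legacy", 170000, 4000);
        // A quick System.out trace to confirm the object holds what we expect.
        System.out.println(hatchback.describe());
        // prints: 1994 Subaru Legacy: 170000 miles, $4000
    }
}
```

Running this prints the assembled description, confirming every field was set from the matching constructor argument.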
Popular programming languages like C, C++, C#, and Java use curly brackets {} to define code blocks for namespaces, for, while, functions, and related constructs. Python uses a different mechanism to define code blocks, called indentation.

Indentation

Indentation creates Python code blocks by using spaces at the start of a line. For example, a developer might use 3 spaces from the start of the line for the first level of indentation, another 3 spaces for the second level, and so on. The space count should be consistent and the same for all indentation in the file. Indentation forces Python code to be more readable and easy to grasp, but it also creates problems if the space count is inconsistent or if tabs are mixed in, which can break the code block and the Python interpretation.

Indentation Example

Let's work through a Python indentation example to understand this better. Every block definition, such as for, while, or a function, requires indentation in order to create a code block for that statement.

for i in range(1,5):
   print("We will check ",i)
   if(i%2==0):
      print(i,"is divisible by 2")

- The for i in range(1,5): line has no indentation, but using the for loop requires an indented block after it.
- The print("We will check ",i) line is indented 3 spaces, which makes it the code block of the preceding for loop.
- The if(i%2==0): line is part of the for loop's code block and introduces a second level of indentation because of the if definition.
- The print(i,"is divisible by 2") line is at the second level of indentation, forming the code block of the preceding if statement.

Indentation Space Count

The space count for indentation is very important because it should be the same and consistent throughout the Python file or script. The most common convention (PEP 8) is 4 spaces per level, though any consistent count works; this article's examples use 3. Using the Tab key can create errors, because indentation requires spaces, which are not the same as tabs even though they may look the same.
If you use tabs for indentation, you should continue to use tabs for every indentation level, but this tends to create problems in general. Python 3 does not allow mixing tabs and spaces for indentation.

Indentation Errors

Newcomers to the Python programming language especially will face indentation errors in the first months of their learning process. Although there are several kinds of indentation errors, the most popular and generic one is "IndentationError: unexpected indent".
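As a sketch of what this error looks like in practice, the snippet below compiles a string containing a needlessly indented line and catches the resulting IndentationError. The two-line source string is a made-up example:

```python
# A two-line program whose second line is indented for no reason.
bad_source = "x = 1\n    y = 2\n"

try:
    # compile() runs the same parser that raises IndentationError
    # when a script file is executed.
    compile(bad_source, "<example>", "exec")
except IndentationError as err:
    print(type(err).__name__ + ":", err.msg)
    # prints: IndentationError: unexpected indent
```

This is the same message Python would print, along with the file name and line number, if the badly indented code lived in a script file.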
StealJS 1.0 is here and represents an important milestone along its mission. This article reiterates that important mission, goes over a few of 1.0's most useful features, explains how to upgrade for 0.16 users, and looks ahead to what's coming on StealJS's roadmap.

StealJS's mission is to make it cheap and easy to do the right thing. Doing the right thing, when building for the web, includes things such as writing tests and breaking your applications into smaller mini-applications (modlets) that can be composed together. Our competitors see themselves as compilers. A compiler is important, and we have one in steal-tools, but a compiler also adds friction to your development workflow and removes the immediacy that the web platform provides. That friction disincentivizes you from doing the right thing. StealJS is about removing the friction; you can have your cake and eat it too!

Some of the new features include:

The ~ lookup scheme

This scheme works like so:

import widget from "~/components/my-widget/my-widget";

This works the same way as using your package name: by finding the module starting at your application's root. We've noticed developers run into problems when they need to rename their project, and using the package name to look up modules creates a refactoring hazard. Using the tilde instead allows you to rename to your heart's content. And it is much shorter to write!

Conditional module syntax

StealJS also now supports conditional module loading. This is available through steal-conditional. It provides a way to load modules based on run-time conditions. Conditionals make it much easier to implement internationalization and to load polyfills only when needed. steal-conditional supports two types of conditionals, substitution and boolean. Here's an example of a substitution conditional:

import words from "~/langs/#{lang}";

This example conditionally loads a language pack based on name substitution.
So you can imagine having a "~/langs/en" and a "~/langs/zh" module that contains the words/phrases for each of the languages your website supports. When built for production, each language pack will be separated out into its own bundle, so your users only ever load their own language. StealJS is about making doing the right thing easy, and loading as little code as possible in production is the right thing.

Easier production usage

Using StealJS in development is as easy as adding a script tag that links to steal.js:

<script src="./node_modules/steal/steal.js"></script>

In the past, using StealJS in production has been a little more complicated, and the more configuration you used (like putting your bundles somewhere other than the default location), the more configuration you needed in your script tag. With 1.0 we wanted to greatly simplify this, so now steal-tools will create a steal.production.js that is preconfigured for you. Now all you need to do is add the script tag (like in development!):

<script src="./dist/steal.production.js"></script>

Additionally, you no longer need to use the "instantiated" configuration to prevent loading your main CSS bundle. Just include your CSS in the <head> like normal:

<head>
  …
  <link rel="stylesheet" href="./dist/bundles/app/index.css">
</head>

Upgrading

While there are a few notable features like those listed above, overall StealJS 1.0 represents a stable base on which to build and an acknowledgement that StealJS has already been used in production for the last several years; 1.0 was overdue! If you've been using StealJS 0.16, upgrading is straightforward and can be done in a couple of hours. See the migration guide for more information on how to upgrade. For all of the changes in 1.0, see the changelog.

Where StealJS Goes From Here

The first version of StealJS (then called Include.js) was released in 2007 as part of JavaScript MVC 0.8.2. We are just one year away from StealJS's 10-year anniversary!
With 1.0 released, StealJS is in a great position to accelerate on its mission of making doing the right thing cheap. Some of the things we have planned include:

- A new smaller and faster production loader that will allow you to load your modules in parallel (using <script async>).
- Support for HTTP2 Server Push and <link rel=preload>, making page loads even faster.
- steal-tools gaining the ability to optimize assets outside of your JavaScript dependency graph.
- Faster development load times using things like caching and bundling of development dependencies.

These are being developed in the open and you can help! StealJS has adopted an RFC process where new ideas and features are discussed in a more formal process. We hope that by creating this process, developers who are interested in seeing new features in StealJS will see a path to getting changes made. Now at nine years, StealJS's mission is as relevant as ever. With a stable 1.0 now released, StealJS, with your help, can put the focus back on realizing its goals.
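To make the substitution conditional described earlier concrete, here is a small sketch of what the loader effectively does at run time. The language-pack contents and the lang values are made up, and the real resolution happens inside steal-conditional's module loader rather than in application code like this:

```javascript
// Stand-ins for the "~/langs/en" and "~/langs/zh" modules mentioned above.
const languagePacks = {
  en: { greeting: "Hello", farewell: "Goodbye" },
  zh: { greeting: "你好", farewell: "再见" }
};

// Simulates resolving `import words from "~/langs/#{lang}"`:
// the #{lang} placeholder is filled in with a run-time value, and only
// the matching module (language pack) is ever loaded.
function loadWords(lang) {
  const moduleName = "~/langs/" + lang; // what the loader would request
  console.log("resolving " + moduleName);
  return languagePacks[lang];
}

console.log(loadWords("en").greeting);
// prints: resolving ~/langs/en
// prints: Hello
```

In a real build, each entry in languagePacks would be its own module file, bundled separately so production users download only their own language.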
As part of .NET Framework 4.0 and Windows Communication Foundation (WCF) 4.0, ADO.NET Entity Framework integration is provided using WCF Data Services. WCF Data Services exposes entity data (provided by ADO.NET EF) as a data service and enables the creation and consumption of OData services for the web. Note that WCF Data Services was formerly known as ADO.NET Data Services.

To design a WCF Data Service, we need to do the following:
- Design the data model using ADO.NET EF
- Create a data service
- Configure access rules for the data service

For this article's demonstration, I have used SQL Server 2008, and the database is 'Company' with the following table:

Step 1: Open VS2010 and create a blank solution; name it 'WCF_DataService_And_Silverlight_4'.

Step 2: In this solution, add a new WCF Service application and call it 'WCF_DataService'. Delete IService1.cs and Service1.svc from the project, as we are going to make use of the WCF Data Service template.

Step 3: Right-click the project and add a new ADO.NET Entity Data Model; name it 'CompanyEDMX'. Follow the wizard, select 'Company' as the database, and select the Department table. The EDMX file should get added to the project as shown below:

Step 4: Right-click the project and add a new WCF Data Service; name it 'WCF_DataService_Company.svc'. Make the following changes in it:

Step 5: Build the project and make sure that it is error free.
Step 6: Right-click WCF_DataService_Company.svc and view it in the browser; you should see the following. The URL will be ':<some Port>/WCF_DataService_Company.svc/'

Step 7: Now change the URL to the one shown below: ':<some port>/WCF_DataService_Company.svc/Departments'. You will find all the rows from the Department table in the form of an XML document, as below:

If for some reason you do not see XML and instead get a feed, make the following changes from Internet Options (in IE):

Step 7: In the same solution, add a new Silverlight application and name it 'SL4_Client_WCF_Data-Service'. Make sure that you host it in the same WCF Service project created above by keeping the defaults shown in the following window. This will add 'SL4_Client_WCF_Data-ServiceTestPage.aspx' and 'SL4_Client_WCF_Data-ServiceTestPage.html' to the 'WCF_DataService' project.

Step 8: In the Silverlight project, add the WCF service reference and name the service namespace MyRef.

Step 9: Open MainPage.xaml and drag and drop a Button and a DataGrid element into it:

Step 10: In MainPage.xaml.cs, use the following namespaces:

Step 11: At the MainPage class level, declare the following objects:

In .NET 4.0, the DataServiceCollection<T> class was introduced in WCF Data Services to support data binding with controls in a client application. This class is especially useful in WPF and Silverlight.

Step 12: In the UserControl_Loaded event, write the following code, which creates an instance of the object context proxy for the WCF Data Service using the service URI and an instance of DataServiceCollection:

Step 13: In the 'Get Data' button click event, make an async call to the WCF Data Service by firing the query shown below. The query is executed asynchronously, and the result is assigned to the ItemsSource property of the DataGrid.

Step 14: Run the application and click the Get Data button.
The following output will be displayed:

Conclusion: WCF Data Services provides integration with data models built using ADO.NET EF. Since the data received is in the form of XML, any client application built on open standards can easily consume it. The entire source code of this article can be downloaded here.

Note: Here the aspx/html test page is in the same WCF service project, and the WCF service is currently hosted on the local port, not on IIS.
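Because the service returns plain XML (an Atom-style feed in the ADO.NET Data Services format), it can be consumed from any platform, not just Silverlight. As an illustrative sketch, the following parses a trimmed, hand-written fragment of the kind of feed the Departments URL returns. The entry contents and property names are made up, but the two namespace URIs are the standard ADO.NET Data Services ones:

```python
import xml.etree.ElementTree as ET

# Hand-written sample resembling one entry of the Departments feed.
sample_feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry>
    <content type="application/xml">
      <m:properties
        xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata"
        xmlns:d="http://schemas.microsoft.com/ado/2007/08/dataservices">
        <d:DepartmentId>1</d:DepartmentId>
        <d:DepartmentName>Sales</d:DepartmentName>
      </m:properties>
    </content>
  </entry>
</feed>"""

# Clark-notation prefixes for the two data-service namespaces.
M = "{http://schemas.microsoft.com/ado/2007/08/dataservices/metadata}"
D = "{http://schemas.microsoft.com/ado/2007/08/dataservices}"

root = ET.fromstring(sample_feed)
for props in root.iter(M + "properties"):
    print(props.find(D + "DepartmentName").text)
    # prints: Sales
```

A real client would fetch the feed over HTTP from the service URL first; the parsing step is the same.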
On 4/7/06, Mike Sparr - <mike@goomzee.com> wrote: > Hi Rahul, > > I have consolidated questions that have been looming below, plus response to > your input on variable scoping. > > > WILL CHECK SYNTAX BASED ON SUGGESTIONS. > Tomorrow I will upgrade our snapshot with your changes. This message got > filtered as junk mail so I didn't see it until now. Thanks for the input. > Yes, based upon spec I should've been able to lazily instantiate the vars > w/o expr="" but will test that out with this snapshot. I did try the > <onentry><var...> and got exceptions but that could've been because I didn't > declare them as blank. I think I used expr="''" and other variations to get > it to work. > <snip/> An empty expr attribute is no longer needed if you just want to define a variable with <var> (with a null associated value) -- fixed earlier in the week. The messages you are seeing in the logs are coming from the namelist in the <send> element, when the value of one or more variables in the namelist is null (one log message per variable) -- that is just to flag that a value of null was sent over the wire, which *might* be of interest for error reporting purposes. However, the error message is quite misleading, it says "UNDEFINED_VARIABLE" where it should really say something to the effect of "NULL_PARAMETER". If you want, you can file a report in bugzilla (details here [1]) for a better error message, so we don't forget this. > > SVN CHECKOUT? > The datamodel seems like best option so I'm looking forward to updating our > snapshot. Offhand, do you know the svn command/url to check out your > latest? First I'll test against your nightly binary tomorrow, but I have an > ANT build script for packaging everything so I want to pull down source. > <snap/> Sure, the Commons SCXML website [2] has all the information, including the SVN checkout URL and page on building Commons SCXML (see navigation bar on left). 
Building should be straightforward, since even with ant the dependencies are downloaded (which may not always be the most optimal approach, though its often easier). If you have trouble building, please ping the list. > > XMLNS ON/OFF IN DIGESTER? > In addition, I recall a while back that we modified your source to exclude > the xmlns handling in the digester - what was your final solution to that? > Did you turn it off, leave it on or make it configurable via the factory > suggestion? > <snip/> Yes, I remember that discussion. The outcome then was to allow digester instances configured for SCXML parsing to be available for further customization. Take a look at the SCXMLDigester class [3], specifically the newInstance() method. This page [4] from the user guide talks about customizing the Digester per individual needs, and though it talks about custom actions, the idea for any kind of customization is similar. The default SCXML digester is namespace aware. > > EL QUESTION (substring): > The voice browser being used (Nuance) had built-in grammars and leveraging > the currency grammar returns "USD20" for 20 dollars. In the EL I've tried > various attempts to parseInt or substring or replace to no avail. How would > I remove the currency code prefix using the EL in the framework? Do you > have a document on available functions - I've tried JSP 2.0 functions and > they didn't exist and documentation on Commons EL usage resorts to JavaDoc > but little/no function reference. Suggest Commons EL team copy PHP.net and > list all available functions with description/examples... > <snap/> Function invocation with JSP 2.0 EL (or why there was a lack thereof) has been a fun topic. JSP 2.0 EL does allow you to invoke "namespaced" functions, which are mapped to static Java methods. There are no "available functions". There is JSTL, which provides a functions tag library, and it does have the substringAfter() which is probably what you'd need here. 
The FunctionMapper in the Commons SCXML distro is not mapped to recognise that. It will indeed be useful, and for that, please open an enhancement request in bugzilla. The list of JSTL functions, BTW, is here [5]. <sidebar> While EL was the only expression language available in the early days of Commons SCXML, we can now use JEXL [6] as well. JEXL should be preferred since it allows method invocation and the ability to define custom functions would probably be easier given that JEXL 1.1 (not yet released) will support static method invocation. Migration from EL-based SCXML documents to JEXL-based SCXML documents, in many cases, is not much work. EL based documents make sense in servlet container based usecases, specifically those that may be tied to JSP/JSF technologies. Not sure if your usecase is so tied. </sidebar> > > SUGGESTION/QUESTION (toggle hot deploy/performance): > I think it would be beneficial to add in configurable hot-deploy/performance > modes. Hot-deploy would reload the engine document upon change and > performance would not. This likely would not be part of the commons > contrib. but do you know how I could implement that. The flexibility of > scripting new cases in the XML is enticing but continual server reboots to > clear memory hassle, especially for development. Suggestions? > <snip/> These are good points. If you're looking for hints, one possibility might be to look at how servlet containers "listen to" updates to web application descriptors (take Tomcat for instance, since we have the source ;-). If this can be done concisely, it can definitely be part of Commons SCXML, but its probably not on the 1.0 roadmap. If you have ideas about this (or even otherwise), you can open another enhancement request in bugzilla. 
Furthermore, in theory, it is also possible to programmatically alter the state machine's in-memory representation (using the Commons SCXML object model), and this opens up the topic of enabling "adaptive behavior" -- and I find that quite interesting. To make that practical, some utilities will need to be added to Commons SCXML so it's fairly straightforward for the user, such that most internal wirings for the object model are taken care of automagically. This should also be on the long term roadmap for Commons SCXML (another enhancement request, I might open this one myself instead of asking you to open all these ;-). > > SUGGESTION/QUESTION (performance/memory for executor store): > If app runs a long time, executor store may become rather large. Suggested > implementation? Currently Hashmap w/o worker for cleanup, ttl, timestamp. > Curious if best with timestap, ttl and perhaps LRU LinkedList > implementation? > <snap/> We now use a stateless model for Commons SCXML, see issue 38311 [7] for the details. This means the document needs to be parsed only once (only one SCXML object is created), and the SCXMLExecutors should be made candidates for being collected when you're done with them (in the worst case, they'll be lost as each session terminates/expires). So am not sure if anything beyond that is needed. Are you having any memory issues when using the latest nightlies? -Rahul (long, possibly fragmented, URLs below) [1] [2] [3] [4] [5] [6] [7] > > Mike > > <snip/> --------------------------------------------------------------------- To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org For additional commands, e-mail: commons-user-help@jakarta.apache.org
Performance improvements can help you achieve real-time execution. The following lists contain performance improvements you can make to models that contain the RTL-SDR Receiver block.

- Run your application in Accelerator or Rapid Accelerator mode instead of Normal mode. Be aware that some scopes do not plot data when run in Rapid Accelerator mode. When using Accelerator or Rapid Accelerator mode, on the Model Configuration Parameters dialog box, set Compiler optimization level to Optimizations on (faster runs).
- Use frame-based processing. With frame-based processing, the model processes multiple samples during a single execution call to a block. Consider using frame sizes from roughly 100 to several thousand samples.
- In Model Configuration Parameters > Data Import/Export, turn off all logging.
- The model must be single-rate. If the model requires resampling, then choose rational coefficients that keep the model single-rate.
- Do not add any Buffer blocks to the model. If you want to create convenient frame sizes, do it in your data sources. Using a Buffer block typically degrades performance.
- Avoid feedback loops. Typically, such loops imply scalar processing, which slows the model considerably.
- Avoid using scopes. To visualize your data, send it to a workspace variable and post-process it.
- If your model has Constant blocks with values that do not change during simulation, make sure that the sample time is set to inf (default).
- If you are generating code from the model, set the Solver setting to Fixed-step/discrete.
- Set the tasking mode to SingleTasking.

To improve performance, you can generate a standalone executable for your Simulink® model. The generated code runs without Simulink in the loop. To perform any code generation, you must have an appropriate compiler installed; see the list of supported compilers for the current release. You can generate generic real-time target (GRT) code if you have a Simulink Coder™ license.
To do so, set Model Configuration Parameters > Code Generation > System target file to grt.tlc (Generic Real-Time Target). When you select the option to generate code for any target (not just GRT), clear the Model Configuration Parameters > Hardware Implementation > Test hardware > Test hardware is the same as production hardware check box. Then, set the Device type to MATLAB Host Computer.

You can create generated code with a smaller stack than the GRT code if you have an Embedded Coder® license. To do so, set Model Configuration Parameters > Code Generation > System target file to ert.tlc (Embedded Coder). Then, add the following lines to Model Configuration Parameters > Code Generation > Custom Code > Include custom C code in generated: > Source file:

#include <stdio.h>
#include <stdlib.h>

See also: RTL-SDR Receiver | comm.SDRRTLReceiver
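If you prefer to script these settings rather than click through the dialog, the same solver and tasking options can be applied from the MATLAB command line with set_param. This is a sketch, not part of the documented procedure above: the model name is hypothetical, and the parameter names reflect common Simulink configuration parameters, so verify them against your release before relying on them:

```matlab
% Hypothetical model name -- replace with your own model.
model = 'my_rtlsdr_model';
load_system(model);

% Fixed-step, discrete solver (as recommended above for code generation).
set_param(model, 'SolverType', 'Fixed-step');
set_param(model, 'Solver', 'FixedStepDiscrete');

% SingleTasking mode.
set_param(model, 'EnableMultiTasking', 'off');

% Run in Accelerator mode instead of Normal mode.
set_param(model, 'SimulationMode', 'accelerator');
```

Scripting the configuration makes it easy to apply the same performance settings consistently across several models.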
Chapter 19: XML

Contents:
Parsing with JAXP and SAX 1
Parsing with SAX 2
Parsing and Manipulating with JAXP and DOM
Traversing a DOM Tree
Traversing a Document with DOM Level 2
The JDOM API
Exercises

XML, or Extensible Markup Language, is a meta-language for marking up text documents with structural tags, similar to those found in HTML and SGML documents. XML has become popular because its structural markup allows documents to describe their own format and contents. XML enables "portable data," and it can be quite powerful when combined with the "portable code" enabled by Java. Because of the popularity of XML, there are a number of tools for parsing and manipulating XML documents. And because XML documents are becoming more and more common, it is worth your time to learn how to use some of those tools to work with XML. The examples in this chapter introduce you to simple XML parsing and manipulation. If you are familiar with the basic structure of an XML file, you should have no problem understanding them. Note that there are many subtleties to working with XML; this chapter doesn't attempt to explain them all. To learn more about XML, try Java and XML, by Brett McLaughlin, or XML Pocket Reference, by Robert Eckstein, both from O'Reilly & Associates.

Parsing with JAXP and SAX 1

The first thing you want to do with an XML document is parse it. There are two commonly used approaches to XML parsing: they go by the acronyms SAX and DOM. We'll begin with SAX parsing; DOM parsing is covered later in the chapter. At the very end of the chapter, we'll also see a new, but very promising, Java-centric XML API known as JDOM.
SAX is the Simple API for XML. SAX is not a parser, but rather a Java API that describes how a parser operates. When parsing an XML document using the SAX API, you define a class that implements various "event" handling methods. As the parser encounters the various element types of the XML document, it invokes the corresponding event handler methods you've defined. Your methods take whatever actions are required to accomplish the desired task. In the SAX model, the parser converts an XML document into a sequence of Java method calls. The parser doesn't build a parse tree of any kind (although your methods can do this, if you want). SAX parsing is typically quite efficient and is therefore your best choice for most simple XML processing tasks.

The SAX API was created by David Megginson. The Java implementation of the API is in the package org.xml.sax and its subpackages. SAX is a de facto standard but has not been standardized by any official body. SAX Version 1 has been in use for some time; SAX 2 was finalized in May 2000. There are numerous changes between the SAX 1 and SAX 2 APIs. Many Java-based XML parsers exist that conform to the SAX 1 or SAX 2 APIs.

With the SAX API, you can't completely abstract away the details of the XML parser implementation you are using: at a minimum, your code must supply the classname of the parser to be used. This is where JAXP comes in. JAXP is the Java API for XML Parsing. It is an "optional package" defined by Sun that consists of the javax.xml.parsers package. JAXP provides a thin layer on top of SAX (and on top of DOM, as we'll see) and standardizes an API for obtaining and using SAX (and DOM) parser objects. The JAXP package ships with default parser implementations but allows other parsers to be easily plugged in and configured using system properties. At this writing, the current version of JAXP is 1.0.1; it supports SAX 1, but not SAX 2.
By the time you read this, however, JAXP 1.1, which will include support for SAX 2, may have become available. Example 19.1 is a listing of ListServlets1.java, a program that uses JAXP and SAX to parse a web application deployment descriptor and list the names of the servlets configured by that file. If you haven't yet read Chapter 18, Servlets and JSP, you should know that servlet-based web applications are configured using an XML file named web.xml. This file contains <servlet> tags that define mappings between servlet names and the Java classes that implement them. To help you understand the task to be solved by the ListServlets1.java program, here is an excerpt from the web.xml file developed in Chapter 18:

<servlet>
  <servlet-name>hello</servlet-name>
  <servlet-class>com.davidflanagan.examples.servlet.Hello</servlet-class>
</servlet>
<servlet>
  <servlet-name>counter</servlet-name>
  <servlet-class>com.davidflanagan.examples.servlet.Counter</servlet-class>
  <init-param>
    <param-name>countfile</param-name>        <!-- where to save state -->
    <param-value>/tmp/counts.ser</param-value> <!-- adjust for your system-->
  </init-param>
  <init-param>
    <param-name>saveInterval</param-name>     <!-- how often to save -->
    <param-value>30000</param-value>          <!-- every 30 seconds -->
  </init-param>
</servlet>
<servlet>
  <servlet-name>logout</servlet-name>
  <servlet-class>com.davidflanagan.examples.servlet.Logout</servlet-class>
</servlet>

ListServlets1.java includes a main() method that uses the JAXP API to obtain a SAX parser instance. It then tells the parser what to parse and starts the parser running. The remaining methods of the class are invoked by the parser. Note that ListServlets1 extends the SAX HandlerBase class. This superclass provides dummy implementations of all the SAX event handler methods. The example simply overrides the handlers of interest. The parser calls the startElement() method when it reads an XML tag; it calls endElement() when it finds a closing tag.
characters() is invoked when the parser reads a string of plain text with no markup. Finally, the parser calls warning(), error(), or fatalError() when something goes wrong in the parsing process. The implementations of these methods are written specifically to extract the desired information from a web.xml file and are based on a knowledge of the structure of this type of file.

Note that web.xml files are somewhat unusual in that they don't rely on attributes for any of the XML tags. That is, servlet names are defined by a <servlet-name> tag nested within a <servlet> tag, instead of simply using a name attribute of the <servlet> tag itself. This fact makes the example program slightly more complex than it would otherwise be. The web.xml file does allow id attributes for all its tags. Although servlet engines are not expected to use these attributes, they may be useful to a configuration tool that parses and automatically generates web.xml files. For completeness, the startElement() method in Example 19.1 looks for an id attribute of the <servlet> tag. The value of that attribute, if it exists, is reported in the program's output.

Example 19.1: ListServlets1.java

package com.davidflanagan.examples.xml;
import javax.xml.parsers.*;  // The JAXP package
import org.xml.sax.*;        // The main SAX package
import java.io.*;
/**
 * Parse a web.xml file using JAXP and SAX1.  Print out the names
 * and class names of all servlets listed in the file.
 *
 * This class extends the HandlerBase helper class, which means
 * that it defines all the "callback" methods that the SAX parser will
 * invoke to notify the application.  In this example we override the
 * methods that we require.
 *
 * This example uses full package names in places to help keep the JAXP
 * and SAX APIs distinct.
 **/
public class ListServlets1 extends org.xml.sax.HandlerBase {
    /** The main method sets things up for parsing */
    public static void main(String[] args)
        throws IOException, SAXException, ParserConfigurationException {
        // Create a JAXP "parser factory" for creating SAX parsers
        javax.xml.parsers.SAXParserFactory spf = SAXParserFactory.newInstance();
        // Configure the parser factory for the type of parsers we require
        spf.setValidating(false);  // No validation required
        // Now use the parser factory to create a SAXParser object
        // Note that SAXParser is a JAXP class, not a SAX class
        javax.xml.parsers.SAXParser sp = spf.newSAXParser();
        // Create a SAX input source for the file argument
        org.xml.sax.InputSource input = new InputSource(new FileReader(args[0]));
        // Give the InputSource an absolute URL for the file, so that
        // it can resolve relative URLs in a <!DOCTYPE> declaration, e.g.
        input.setSystemId("file://" + new File(args[0]).getAbsolutePath());
        // Create an instance of this class; it defines all the handler methods
        ListServlets1 handler = new ListServlets1();
        // Finally, tell the parser to parse the input and notify the handler
        sp.parse(input, handler);
        // Instead of using the SAXParser.parse() method, which is part of the
        // JAXP API, we could also use the SAX1 API directly.  Note the
        // difference between the JAXP class javax.xml.parsers.SAXParser and
        // the SAX1 class org.xml.sax.Parser
        //
        // org.xml.sax.Parser parser = sp.getParser(); // Get the SAX parser
        // parser.setDocumentHandler(handler);         // Set main handler
        // parser.setErrorHandler(handler);            // Set error handler
        // parser.parse(input);                        // Parse!
    }

    StringBuffer accumulator = new StringBuffer(); // Accumulate parsed text
    String servletName;   // The name of the servlet
    String servletClass;  // The class name of the servlet
    String servletId;     // Value of id attribute of <servlet> tag

    // When the parser encounters plain text (not XML elements), it calls
    // this method, which accumulates them in a string buffer
    public void characters(char[] buffer, int start, int length) {
        accumulator.append(buffer, start, length);
    }

    // Every time the parser encounters the beginning of a new element, it
    // calls this method, which resets the string buffer
    public void startElement(String name, AttributeList attributes) {
        accumulator.setLength(0);  // Ready to accumulate new text
        // If it's a servlet tag, look for id attribute
        if (name.equals("servlet")) servletId = attributes.getValue("id");
    }

    // When the parser encounters the end of an element, it calls this method
    public void endElement(String name) {
        if (name.equals("servlet-name")) {
            // After </servlet-name>, we know the servlet name saved up
            servletName = accumulator.toString().trim();
        } else if (name.equals("servlet-class")) {
            // After </servlet-class>, we've got the class name accumulated
            servletClass = accumulator.toString().trim();
        } else if (name.equals("servlet")) {
            // Assuming the document is valid, then when we parse </servlet>,
            // we know we've got a servlet name and class name to print out
            System.out.println("Servlet " + servletName +
                ((servletId != null) ? " (id=" + servletId + ")" : "") +
                ": " + servletClass);
        }
    }

    /** This method is called when warnings occur */
    public void warning(SAXParseException exception) {
        System.err.println("WARNING: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
    }

    /** This method is called when errors occur */
    public void error(SAXParseException exception) {
        System.err.println("ERROR: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
    }

    /** This method is called when non-recoverable errors occur.
     */
    public void fatalError(SAXParseException exception) throws SAXException {
        System.err.println("FATAL: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
        throw(exception);
    }
}

Compiling and Running the Example

To run the previous example, you need the JAXP package from Sun. You can download it by following the download links from Sun's Java web site. Once you've downloaded the package, uncompress the archive it is packaged in and install it somewhere convenient on your system. In Version 1.0.1 of JAXP, the download bundle contains two JAR files: jaxp.jar, the JAXP API classes, and parser.jar, the SAX and DOM APIs and default parser implementations. To compile and run this example, you need both JAR files in your classpath. If you have any other XML parsers, such as the Xerces parser, in your classpath, remove them or make sure that the JAXP files are listed first; otherwise you may run into version-skew problems between the different parsers. Note that you probably don't want to permanently alter your classpath, since you'll have to change it again for the next example. One simple solution with Java 1.2 and later is to temporarily drop copies of the JAXP JAR files into the jre/lib/ext/ directory of your Java installation.

With the two JAXP JAR files temporarily in your classpath, you can compile and run ListServlets1.java as usual. When you run it, specify the name of a web.xml file on the command line. You can use the sample file included with the downloadable examples for this book or specify one from your own servlet engine.

There is one complication to this example. Most web.xml files contain a <!DOCTYPE> tag that specifies the document type (or DTD). Despite the fact that Example 19.1 specifies that the parser should not validate the document, a conforming XML parser must still read the DTD for any document that has a <!DOCTYPE> declaration.
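One way around this complication (not part of Example 19.1) is to register an EntityResolver that hands the parser an empty DTD, so it never tries to fetch the real one. The sketch below does this with the SAX2 DefaultHandler class, which implements the EntityResolver interface; the class name and the placeholder DTD URL are invented for illustration.

```java
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;
import java.io.ByteArrayInputStream;
import java.io.StringReader;

public class NoDtdFetch {
    static boolean resolvedLocally = false;

    public static void main(String[] args) throws Exception {
        // A document whose <!DOCTYPE> points at a DTD we don't want fetched;
        // the URL here is a placeholder, not a real DTD location
        String xml = "<!DOCTYPE web-app SYSTEM 'http://example.com/web-app.dtd'>" +
                     "<web-app></web-app>";
        DefaultHandler handler = new DefaultHandler() {
            // The parser asks this EntityResolver method for the DTD;
            // hand back an empty document instead of going to the network
            public InputSource resolveEntity(String publicId, String systemId) {
                resolvedLocally = true;
                return new InputSource(new StringReader(""));
            }
        };
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")), handler);
        System.out.println("parsed; DTD resolved locally: " + resolvedLocally);
    }
}
```

With this resolver installed, the parser still "reads" the DTD as the specification requires, but the read costs nothing.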
Most web.xml files have a declaration like this:

<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.2//EN"
    "">

If the parser is unable to read the DTD (if your system is not connected to the network, for example), you may need to remove the <!DOCTYPE> declaration from the web.xml file you process with ListServlets1.

Parsing with SAX 2

Example 19.1 showed how you can parse an XML document using the SAX 1 API, which is what is supported by the current version of JAXP (at this writing). The SAX 1 API is out of date, however. So Example 19.2 shows how you can accomplish a similar parsing task using the SAX 2 API and the open-source Xerces parser available from the Apache Software Foundation.

Example 19.2 is a listing of the program ListServlets2.java. Like the ListServlets1.java example, this program reads a specified web.xml file and looks for <servlet> tags, so it can print out the servlet name-to-servlet class mappings. This example goes a little further than the last, however, and also looks for <servlet-mapping> tags, so it can also output the URL patterns that are mapped to named servlets. The example uses two hashtables to store the information as it accumulates it, then prints out all the information when parsing is complete.

The SAX 2 API is functionally similar to the SAX 1 API, but a number of classes and interfaces have new names and some methods have new signatures. Many of the changes were required for the addition of XML namespace support in SAX 2. As you read through Example 19.2, pay attention to the API differences from Example 19.1.

Example 19.2: ListServlets2.java

package com.davidflanagan.examples.xml;
import org.xml.sax.*;          // The main SAX package
import org.xml.sax.helpers.*;  // SAX helper classes
import java.io.*;              // For reading the input file
import java.util.*;            // Hashtable, lists, and so on
/**
 * Parse a web.xml file using the SAX2 API and the Xerces parser from the
 * Apache project.
 *
 * This class extends DefaultHandler so that instances can serve as SAX2
 * event handlers, and can be notified by the parser of parsing events.
 * We simply override the methods that receive events we're interested in
 **/
public class ListServlets2 extends org.xml.sax.helpers.DefaultHandler {
    /** The main method sets things up for parsing */
    public static void main(String[] args) throws IOException, SAXException {
        // Create the parser we'll use.  The parser implementation is a
        // Xerces class, but we use it only through the SAX XMLReader API
        org.xml.sax.XMLReader parser = new org.apache.xerces.parsers.SAXParser();
        // Specify that we don't want validation.  This is the SAX2
        // API for requesting parser features.  Note the use of a
        // globally unique URL as the feature name.  Non-validation is
        // actually the default, so this line isn't really necessary.
        parser.setFeature("", false);
        // Instantiate this class to provide handlers for the parser and
        // tell the parser about the handlers
        ListServlets2 handler = new ListServlets2();
        parser.setContentHandler(handler);
        parser.setErrorHandler(handler);
        // Create an input source that describes the file to parse.
        // Then tell the parser to parse input from that source
        org.xml.sax.InputSource input = new InputSource(new FileReader(args[0]));
        parser.parse(input);
    }

    HashMap nameToClass;       // Map from servlet name to servlet class name
    HashMap nameToPatterns;    // Map from servlet name to url patterns
    StringBuffer accumulator;  // Accumulate text
    String servletName, servletClass, servletPattern;  // Remember text

    // Called at the beginning of parsing.  We use it as an init() method
    public void startDocument() {
        accumulator = new StringBuffer();
        nameToClass = new HashMap();
        nameToPatterns = new HashMap();
    }

    // When the parser encounters plain text (not XML elements), it calls
    // this method, which accumulates them in a string buffer.
    // Note that this method may be called multiple times, even with no
    // intervening elements.
    public void characters(char[] buffer, int start, int length) {
        accumulator.append(buffer, start, length);
    }

    // At the beginning of each new element, erase any accumulated text.
    public void startElement(String namespaceURL, String localName,
                             String qname, Attributes attributes) {
        accumulator.setLength(0);
    }

    // Take special action when we reach the end of selected elements.
    // Although we don't use a validating parser, this method does assume
    // that the web.xml file we're parsing is valid.
    public void endElement(String namespaceURL, String localName, String qname) {
        if (localName.equals("servlet-name")) {          // Store servlet name
            servletName = accumulator.toString().trim();
        } else if (localName.equals("servlet-class")) {  // Store servlet class
            servletClass = accumulator.toString().trim();
        } else if (localName.equals("url-pattern")) {    // Store servlet pattern
            servletPattern = accumulator.toString().trim();
        } else if (localName.equals("servlet")) {        // Map name to class
            nameToClass.put(servletName, servletClass);
        } else if (localName.equals("servlet-mapping")) { // Map name to pattern
            List patterns = (List) nameToPatterns.get(servletName);
            if (patterns == null) {
                patterns = new ArrayList();
                nameToPatterns.put(servletName, patterns);
            }
            patterns.add(servletPattern);
        }
    }

    // Called at the end of parsing.  Used here to print our results.
    public void endDocument() {
        List servletNames = new ArrayList(nameToClass.keySet());
        Collections.sort(servletNames);
        for(Iterator iterator = servletNames.iterator(); iterator.hasNext();) {
            String name = (String) iterator.next();
            String classname = (String) nameToClass.get(name);
            List patterns = (List) nameToPatterns.get(name);
            System.out.println("Servlet: " + name);
            System.out.println("Class: " + classname);
            if (patterns != null) {
                System.out.println("Patterns:");
                for(Iterator i = patterns.iterator(); i.hasNext(); ) {
                    System.out.println("\t" + i.next());
                }
            }
            System.out.println();
        }
    }

    // Issue a warning
    public void warning(SAXParseException exception) {
        System.err.println("WARNING: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
    }

    // Report a parsing error
    public void error(SAXParseException exception) {
        System.err.println("ERROR: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
    }

    // Report a non-recoverable error and exit
    public void fatalError(SAXParseException exception) throws SAXException {
        System.err.println("FATAL: line " + exception.getLineNumber() + ": " +
                           exception.getMessage());
        throw(exception);
    }
}

Compiling and Running the Example

The ListServlets2 example uses the Xerces-J parser from the Apache XML Project. You can download this open-source parser by following the download links from the Apache XML Project web site. Once you have downloaded Xerces-J, unpack the distribution in a convenient location on your system. In that distribution, you should find a xerces.jar file. This file must be in your classpath to compile and run the ListServlets2.java example. Note that the xerces.jar file and the parser.jar file from the JAXP distribution both contain versions of the SAX and DOM classes; you should avoid having both files in your classpath at the same time.

Parsing and Manipulating with JAXP and DOM

The first two examples in this chapter used the SAX API for parsing XML documents.
We now turn to another commonly used parsing API, the DOM, or Document Object Model. The DOM API is a standard defined by the World Wide Web Consortium (W3C); its Java implementation consists of the org.w3c.dom package and its subpackages. The current version of the DOM standard is Level 1. As of this writing, the DOM Level 2 API is making its way through the standardization process at the W3C. The Document Object Model defines the API of a parse tree for XML documents. The org.w3c.dom.Node interface specifies the basic features of a node in this parse tree. Subinterfaces, such as Document, Element, Entity, and Comment, define the features of specific types of nodes.

A program that uses the DOM parsing model is quite different from one that uses SAX. With the DOM, you have the parser read your XML document and transform it into a tree of Node objects. Once parsing is complete, you can traverse the tree to find the information you need. The DOM parsing model is useful if you need to make multiple passes through the tree, if you want to modify the structure of the tree, or if you need random access to an XML document, instead of the sequential access provided by the SAX model.

Example 19.3 is a listing of the program WebAppConfig.java. Like the first two examples in this chapter, WebAppConfig reads a web.xml web application deployment descriptor. This example uses a DOM parser to build a parse tree, then performs some operations on the tree to demonstrate how you can work with a tree of DOM nodes. The WebAppConfig() constructor uses the JAXP API to obtain a DOM parser and then uses that parser to build a parse tree that represents the XML file. The root node of this tree is of type Document. This Document object is stored in an instance field of the WebAppConfig object, so it is available for traversal and modification by the other methods of the class. The class also includes a main() method that invokes these other methods.
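Before looking at WebAppConfig in detail, the basic parse-then-traverse pattern can be sketched in miniature. (This standalone sketch is not part of Example 19.3; the class name and the one-element document are invented for illustration.)

```java
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Text;
import java.io.ByteArrayInputStream;

public class DomSketch {
    static String servletName;

    public static void main(String[] args) throws Exception {
        String xml = "<servlet><servlet-name>hello</servlet-name></servlet>";
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setValidating(false);
        DocumentBuilder parser = dbf.newDocumentBuilder();
        // The parser returns the entire document as a tree of Node objects
        Document doc = parser.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        // Random access: look up an element by tag name, then read its text child
        Element name = (Element) doc.getElementsByTagName("servlet-name").item(0);
        servletName = ((Text) name.getFirstChild()).getData();
        System.out.println(servletName);
    }
}
```

Unlike the SAX examples, the whole document is in memory here, so the lookup could just as easily be repeated, performed in any order, or followed by modifications to the tree.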
The getServletClass() method looks for <servlet-name> tags and returns the text of the associated <servlet-class> tags. (These tags always come in pairs in a web.xml file.) This method demonstrates a number of features of the DOM parse tree, notably the getElementsByTagName() method. The addServlet() method inserts a new <servlet> tag into the parse tree. It demonstrates how to construct new DOM nodes and add them to an existing parse tree. Finally, the output() method uses an XMLDocumentWriter to traverse all the nodes of the parse tree and convert them back into XML format. The XMLDocumentWriter class is covered in the next section and listed in Example 19.4.

Example 19.3: WebAppConfig.java

package com.davidflanagan.examples.xml;
import javax.xml.parsers.*;  // JAXP classes for parsing
import org.w3c.dom.*;        // W3C DOM classes for traversing the document
import org.xml.sax.*;        // SAX classes used for error handling by JAXP
import java.io.*;            // For reading the input file
/**
 * A WebAppConfig object is a wrapper around a DOM tree for a web.xml
 * file.  The methods of the class use the DOM API to work with the
 * tree in various ways.
 **/
public class WebAppConfig {
    /** The main method creates and demonstrates a WebAppConfig object */
    public static void main(String[] args)
        throws IOException, SAXException, ParserConfigurationException {
        // Create a new WebAppConfig object that represents the web.xml
        // file specified by the first command-line argument
        WebAppConfig config = new WebAppConfig(new File(args[0]));
        // Query the tree for the class name associated with the specified
        // servlet name
        System.out.println(config.getServletClass(args[1]));
        // Add a dummy <servlet> mapping to the tree
        config.addServlet("dummy", "DummyServlet");
        // Write an XML version of the modified tree to standard output
        config.output(new PrintWriter(new OutputStreamWriter(System.out)));
    }

    org.w3c.dom.Document document;  // This field holds the parsed DOM tree

    /**
     * This constructor method is passed an XML file.  It uses the JAXP API to
     * obtain a DOM parser, and to parse the file into a DOM Document object,
     * which is used by the remaining methods of the class.
     **/
    public WebAppConfig(File configfile)
        throws IOException, SAXException, ParserConfigurationException {
        // Get a JAXP parser factory object
        javax.xml.parsers.DocumentBuilderFactory dbf =
            DocumentBuilderFactory.newInstance();
        // Tell the factory what kind of parser we want
        dbf.setValidating(false);
        // Use the factory to get a JAXP parser object
        javax.xml.parsers.DocumentBuilder parser = dbf.newDocumentBuilder();
        // Tell the parser how to handle errors.  Note that in the JAXP API,
        // DOM parsers rely on the SAX API for error handling
        parser.setErrorHandler(new org.xml.sax.ErrorHandler() {
            public void warning(SAXParseException e) {
                System.err.println("WARNING: " + e.getMessage());
            }
            public void error(SAXParseException e) {
                System.err.println("ERROR: " + e.getMessage());
            }
            public void fatalError(SAXParseException e) throws SAXException {
                System.err.println("FATAL: " + e.getMessage());
                throw e;  // re-throw the error
            }
        });
        // Finally, use the JAXP parser to parse the file.  This call returns
        // a Document object.  Now that we have this object, the rest of this
        // class uses the DOM API to work with it; JAXP is no longer required.
        document = parser.parse(configfile);
    }

    /**
     * This method looks for specific Element nodes in the DOM tree in order
     * to figure out the classname associated with the specified servlet name
     **/
    public String getServletClass(String servletName) {
        // Find all <servlet> elements and loop through them.
        NodeList servletnodes = document.getElementsByTagName("servlet");
        int numservlets = servletnodes.getLength();
        for(int i = 0; i < numservlets; i++) {
            Element servletTag = (Element) servletnodes.item(i);
            // Get the first <servlet-name> tag within the <servlet> tag
            Element nameTag = (Element)
                servletTag.getElementsByTagName("servlet-name").item(0);
            if (nameTag == null) continue;
            // The <servlet-name> tag should have a single child of type
            // Text.  Get that child, and extract its text.  Use trim()
            // to strip whitespace from the beginning and end of it.
            String name = ((Text) nameTag.getFirstChild()).getData().trim();
            // If this <servlet-name> tag has the right name
            if (servletName.equals(name)) {
                // Get the matching <servlet-class> tag
                Element classTag = (Element)
                    servletTag.getElementsByTagName("servlet-class").item(0);
                if (classTag != null) {
                    // Extract the tag's text as above, and return it
                    Text classTagContent = (Text) classTag.getFirstChild();
                    return classTagContent.getNodeValue().trim();
                }
            }
        }
        // If we get here, no matching servlet name was found
        return null;
    }

    /**
     * This method adds a new name-to-class mapping in the form of
     * a <servlet> sub-tree to the document.
     **/
    public void addServlet(String servletName, String className) {
        // Create the <servlet> tag
        Element newNode = document.createElement("servlet");
        // Create the <servlet-name> and <servlet-class> tags
        Element nameNode = document.createElement("servlet-name");
        Element classNode = document.createElement("servlet-class");
        // Add the name and classname text to those tags
        nameNode.appendChild(document.createTextNode(servletName));
        classNode.appendChild(document.createTextNode(className));
        // And add those tags to the servlet tag
        newNode.appendChild(nameNode);
        newNode.appendChild(classNode);
        // Now that we've created the new sub-tree, figure out where to put
        // it.  This code looks for another servlet tag and inserts the new
        // one right before it.  Note that this code will fail if the document
        // does not already contain at least one <servlet> tag.
        NodeList servletnodes = document.getElementsByTagName("servlet");
        Element firstServlet = (Element) servletnodes.item(0);
        // Insert the new node before the first servlet node
        firstServlet.getParentNode().insertBefore(newNode, firstServlet);
    }

    /**
     * Output the DOM tree to the specified stream as an XML document.
     * See the XMLDocumentWriter example for the details.
     **/
    public void output(PrintWriter out) {
        XMLDocumentWriter docwriter = new XMLDocumentWriter(out);
        docwriter.write(document);
        docwriter.close();
    }
}

Compiling and Running the Example

The WebAppConfig class uses the JAXP and DOM APIs, so you must have the jaxp.jar and parser.jar files from the JAXP distribution in your classpath. You should avoid having the Xerces JAR file in your classpath at the same time, or you may run into version mismatch problems between the DOM Level 1 parser of JAXP 1.0 and the DOM Level 2 parser of Xerces. Compile WebAppConfig.java in the normal way. To run the program, specify the name of a web.xml file to parse as the first command-line argument and provide a servlet name as the second argument. When you run the program, it prints the class name (if any) that is mapped to the specified servlet name. Then it inserts a dummy <servlet> tag into the parse tree and prints out the modified parse tree in XML format to standard output. You'll probably want to pipe the output of the program to a paging program such as more.

Traversing a DOM Tree

The WebAppConfig class of Example 19.3 parses an XML file to a DOM tree, modifies the tree, then converts the tree back into an XML file. It does this using the class XMLDocumentWriter, which is listed in Example 19.4. The write() method of this class recursively traverses a DOM tree node by node and outputs the equivalent XML text to the specified PrintWriter stream. The code is relatively straightforward and helps illustrate the structure of a DOM tree. Note that XMLDocumentWriter is just an example. Among its shortcomings: it doesn't handle every possible type of DOM node, and it doesn't output a full <!DOCTYPE> declaration.

Example 19.4: XMLDocumentWriter.java

package com.davidflanagan.examples.xml;
import org.w3c.dom.*;  // W3C DOM classes for traversing the document
import java.io.*;
/**
 * Output a DOM Level 1 Document object to a java.io.PrintWriter as a simple
 * XML document.
 * This class does not handle every type of DOM node, and it
 * doesn't deal with all the details of XML like DTDs, character encodings and
 * preserved and ignored whitespace.  However, it does output basic
 * well-formed XML that can be parsed by a non-validating parser.
 **/
public class XMLDocumentWriter {
    PrintWriter out;  // the stream to send output to

    /** Initialize the output stream */
    public XMLDocumentWriter(PrintWriter out) { this.out = out; }

    /** Close the output stream. */
    public void close() { out.close(); }

    /** Output a DOM Node (such as a Document) to the output stream */
    public void write(Node node) { write(node, ""); }

    /**
     * Output the specified DOM Node object, printing it using the specified
     * indentation string
     **/
    public void write(Node node, String indent) {
        // The output depends on the type of the node
        switch(node.getNodeType()) {
        case Node.DOCUMENT_NODE: {  // If it's a Document node
            Document doc = (Document) node;
            out.println(indent + "<?xml version='1.0'?>"); // Output header
            Node child = doc.getFirstChild();        // Get the first node
            while(child != null) {                   // Loop 'till no more nodes
                write(child, indent);                // Output node
                child = child.getNextSibling();      // Get next node
            }
            break;
        }
        case Node.DOCUMENT_TYPE_NODE: {  // It is a <!DOCTYPE> tag
            DocumentType doctype = (DocumentType) node;
            // Note that the DOM Level 1 does not give us information about
            // the public or system ids of the doctype, so we can't output
            // a complete <!DOCTYPE> tag here.  We can do better with Level 2.
            out.println("<!DOCTYPE " + doctype.getName() + ">");
            break;
        }
        case Node.ELEMENT_NODE: {  // Most nodes are Elements
            Element elt = (Element) node;
            out.print(indent + "<" + elt.getTagName()); // Begin start tag
            NamedNodeMap attrs = elt.getAttributes();   // Get attributes
            for(int i = 0; i < attrs.getLength(); i++) { // Loop through them
                Node a = attrs.item(i);
                out.print(" " + a.getNodeName() + "='" + // Print attr. name
                          fixup(a.getNodeValue()) + "'"); // Print attr. value
            }
            out.println(">");                     // Finish start tag
            String newindent = indent + "    ";   // Increase indent
            Node child = elt.getFirstChild();     // Get child
            while(child != null) {                // Loop
                write(child, newindent);          // Output child
                child = child.getNextSibling();   // Get next child
            }
            out.println(indent + "</" +           // Output end tag
                        elt.getTagName() + ">");
            break;
        }
        case Node.TEXT_NODE: {  // Plain text node
            Text textNode = (Text) node;
            String text = textNode.getData().trim();  // Strip off space
            if ((text != null) && text.length() > 0)  // If non-empty
                out.println(indent + fixup(text));    // print text
            break;
        }
        case Node.PROCESSING_INSTRUCTION_NODE: {  // Handle PI nodes
            ProcessingInstruction pi = (ProcessingInstruction) node;
            out.println(indent + "<?" + pi.getTarget() + " " +
                        pi.getData() + "?>");
            break;
        }
        case Node.ENTITY_REFERENCE_NODE: {  // Handle entities
            out.println(indent + "&" + node.getNodeName() + ";");
            break;
        }
        case Node.CDATA_SECTION_NODE: {  // Output CDATA sections
            CDATASection cdata = (CDATASection) node;
            // Careful!  Don't put a CDATA section in the program itself!
            out.println(indent + "<" + "![CDATA[" + cdata.getData() +
                        "]]" + ">");
            break;
        }
        case Node.COMMENT_NODE: {  // Comments
            Comment c = (Comment) node;
            out.println(indent + "<!--" + c.getData() + "-->");
            break;
        }
        default:  // Hopefully, this won't happen too much!
            System.err.println("Ignoring node: " + node.getClass().getName());
            break;
        }
    }

    // This method replaces reserved characters with entities.
    String fixup(String s) {
        StringBuffer sb = new StringBuffer();
        int len = s.length();
        for(int i = 0; i < len; i++) {
            char c = s.charAt(i);
            switch(c) {
            default:   sb.append(c);        break;
            case '<':  sb.append("&lt;");   break;
            case '>':  sb.append("&gt;");   break;
            case '&':  sb.append("&amp;");  break;
            case '"':  sb.append("&quot;"); break;
            case '\'': sb.append("&apos;"); break;
            }
        }
        return sb.toString();
    }
}

Traversing a Document with DOM Level 2

Example 19.5 is a listing of DOMTreeWalkerTreeModel.java, a class that demonstrates DOM tree traversal using the DOM Level 2 TreeWalker class. TreeWalker is part of the org.w3c.dom.traversal package. It allows you to traverse, or walk, a DOM tree using a simple API. More importantly, however, it lets you specify what type of nodes you want and automatically filters out all other nodes. It even allows you to provide a NodeFilter class that filters nodes based on any criteria you want. The DOMTreeWalkerTreeModel class implements the javax.swing.tree.TreeModel interface, which enables you to easily display a filtered DOM tree using a Swing JTree component. Figure 19.1 shows a filtered web.xml file being displayed in this way. What is interesting here is not the TreeModel methods themselves (refer to Chapter 10, Graphical User Interfaces, for an explanation of TreeModel), but how the implementations of those methods use the TreeWalker API to traverse the DOM tree.

Figure 19.1: DOMTreeWalkerTreeModel display of a web.xml file

The main() method parses the XML document named on the command line, then creates a TreeWalker for the parse tree. The TreeWalker is configured to show all nodes except for comments and text nodes that contain only whitespace. Next, the main() method creates a DOMTreeWalkerTreeModel object for the TreeWalker. Finally, it creates a JTree component to display the tree described by the DOMTreeWalkerTreeModel. Note that this example uses the Xerces parser because of its support for DOM Level 2 (which, at the time of this writing, is not supported by JAXP).
Because the example uses Xerces, you must have the xerces.jar file in your classpath in order to compile and run the example. At the time of this writing, DOM Level 2 is reasonably stable but is not yet an official standard. If the TreeWalker API changes during the standardization process, it will probably break this example.

Example 19.5: DOMTreeWalkerTreeModel.java

package com.davidflanagan.examples.xml;
import org.w3c.dom.*;                // Core DOM classes
import org.w3c.dom.traversal.*;      // TreeWalker and related DOM classes
import org.apache.xerces.parsers.*;  // Apache Xerces parser classes
import org.xml.sax.*;                // Xerces DOM parser uses some SAX classes
import javax.swing.*;                // Swing classes
import javax.swing.tree.*;           // TreeModel and related classes
import javax.swing.event.*;          // Tree-related event classes
import java.io.*;                    // For reading the input XML file
/**
 * This class implements the Swing TreeModel interface so that the DOM tree
 * returned by a TreeWalker can be displayed in a JTree component.
 **/
public class DOMTreeWalkerTreeModel implements TreeModel {
    TreeWalker walker;  // The TreeWalker we're modeling for JTree

    /** Create a TreeModel for the specified TreeWalker */
    public DOMTreeWalkerTreeModel(TreeWalker walker) { this.walker = walker; }

    /**
     * Create a TreeModel for a TreeWalker that returns all nodes
     * in the specified document
     **/
    public DOMTreeWalkerTreeModel(Document document) {
        DocumentTraversal dt = (DocumentTraversal) document;
        walker = dt.createTreeWalker(document, NodeFilter.SHOW_ALL, null, false);
    }

    /**
     * Create a TreeModel for a TreeWalker that returns the specified
     * element and all of its descendant nodes.
     **/
    public DOMTreeWalkerTreeModel(Element element) {
        DocumentTraversal dt = (DocumentTraversal) element.getOwnerDocument();
        walker = dt.createTreeWalker(element, NodeFilter.SHOW_ALL, null, false);
    }

    // Return the root of the tree
    public Object getRoot() { return walker.getRoot(); }

    // Is this node a leaf?  (Leaf nodes are displayed differently by JTree)
    public boolean isLeaf(Object node) {
        walker.setCurrentNode((Node) node);  // Set current node
        Node child = walker.firstChild();    // Ask for a child
        return (child == null);              // Does it have any?
    }

    // How many children does this node have?
    public int getChildCount(Object node) {
        walker.setCurrentNode((Node) node);  // Set the current node
        // TreeWalker doesn't count children for us, so we count ourselves
        int numkids = 0;
        Node child = walker.firstChild();  // Start with the first child
        while(child != null) {             // Loop 'till there are no more
            numkids++;                     // Update the count
            child = walker.nextSibling();  // Get next child
        }
        return numkids;  // This is the number of children
    }

    // Return the specified child of a parent node.
    public Object getChild(Object parent, int index) {
        walker.setCurrentNode((Node) parent);  // Set the current node
        // TreeWalker provides sequential access to children, not random
        // access, so we've got to loop through the kids one by one
        Node child = walker.firstChild();
        while(index-- > 0) child = walker.nextSibling();
        return child;
    }

    // Return the index of the child node in the parent node
    public int getIndexOfChild(Object parent, Object child) {
        walker.setCurrentNode((Node) parent);  // Set current node
        int index = 0;
        Node c = walker.firstChild();         // Start with first child
        while((c != child) && (c != null)) {  // Loop 'till we find a match
            index++;
            c = walker.nextSibling();         // Get the next child
        }
        return index;  // Return matching position
    }

    // Only required for editable trees; unimplemented here.
public void valueForPathChanged(TreePath path, Object newvalue) {} // This TreeModel never fires any events (since it is not editable) // so event listener registration methods are left unimplemented public void addTreeModelListener(TreeModelListener l) {} public void removeTreeModelListener(TreeModelListener l) {} /** * This main() method demonstrates the use of this class, the use of the * Xerces DOM parser, and the creation of a DOM Level 2 TreeWalker object. **/ public static void main(String[] args) throws IOException, SAXException { // Obtain an instance of a Xerces parser to build a DOM tree. // Note that we are not using the JAXP API here, so this // code uses Apache Xerces APIs that are not standards DOMParser parser = new org.apache.xerces.parsers.DOMParser(); // Get a java.io.Reader for the input XML file and // wrap the input file in a SAX input source Reader in = new BufferedReader(new FileReader(args[0])); InputSource input = new org.xml.sax.InputSource(in); // Tell the Xerces parser to parse the input source parser.parse(input); // Ask the parser to give us our DOM Document. Once we've got the DOM // tree, we don't have to use the Apache Xerces APIs any more; from // here on, we use the standard DOM APIs Document document = parser.getDocument(); // If we're using a DOM Level 2 implementation, then our Document // object ought to implement DocumentTraversal DocumentTraversal traversal = (DocumentTraversal)document; // For this demonstration, we create a NodeFilter that filters out // Text nodes containing only space; these just clutter up the tree NodeFilter filter = new NodeFilter() { public short acceptNode(Node n) { if (n.getNodeType() == Node.TEXT_NODE) { // Use trim() to strip off leading and trailing space. 
// If nothing is left, then reject the node if (((Text)n).getData().trim().length() == 0) return NodeFilter.FILTER_REJECT; } return NodeFilter.FILTER_ACCEPT; } }; // This set of flags says to "show" all node types except comments int whatToShow = NodeFilter.SHOW_ALL & ~NodeFilter.SHOW_COMMENT; // Create a TreeWalker using the filter and the flags TreeWalker walker = traversal.createTreeWalker(document, whatToShow, filter, false); // Instantiate a TreeModel and a JTree to display it JTree tree = new JTree(new DOMTreeWalkerTreeModel(walker)); // Create a frame and a scrollpane to display the tree, and pop them up JFrame frame = new JFrame("DOMTreeWalkerTreeModel Demo"); frame.getContentPane().add(new JScrollPane(tree)); frame.setSize(500, 250); frame.setVisible(true); } } The JDOM API Until now, this chapter has considered the official, standard ways of parsing and working with XML documents: DOM is a standard of the W3C, and SAX is a de facto standard by virtue of its nearly universal adoption. Both SAX and DOM were designed to be programming language-independent APIs, however. This generality means they can't take full advantage of the features of the Java language and platform, however. As I write this chapter, there is a new (still in beta release) but promising API targeted directly at Java programmers. As its name implies, JDOM is an XML document object model for Java. Like the DOM API, it creates a parse tree to represent an XML document. Unlike the DOM, however, the API is designed from the ground up for Java and is significantly easier to use than the DOM. JDOM is an open-source project initiated by Brett McLaughlin and Jason Hunter, who are the authors of the O'Reilly books Java and XML and Java Servlet Programming, respectively. 
Example 19.6 shows how the JDOM API can be used to parse an XML document, to extract information from the resulting parse tree, to create new element nodes and add them to the parse tree, and, finally, to output the modified tree as an XML document. Compare this code to Example 19.3; the examples perform exactly the same task, but as you'll see, using the JDOM API makes the code simpler and cleaner. You should also notice that JDOM has its own built-in XMLOutputter class, obviating the need for the XMLDocumentWriter shown in Example 19.4.

Example 19.6: WebAppConfig2.java

package com.davidflanagan.examples.xml;
import java.io.*;
import java.util.*;
import org.jdom.*;
import org.jdom.input.SAXBuilder;
import org.jdom.output.XMLOutputter;

/**
 * This class is just like WebAppConfig, but it uses the JDOM (Beta 4) API
 * instead of the DOM and JAXP APIs
 **/
public class WebAppConfig2 {
    /** The main method creates and demonstrates a WebAppConfig2 object */
    public static void main(String[] args) throws IOException, JDOMException {
        // Create a new WebAppConfig object that represents the web.xml
        // file specified by the first command-line argument
        WebAppConfig2 config = new WebAppConfig2(new File(args[0]));

        // Query the tree for the class name associated with the servlet
        // name specified as the 2nd command-line argument
        System.out.println(config.getServletClass(args[1]));
        config.output(System.out);
    }

    /**
     * This field holds the parsed JDOM tree. Note that this is a JDOM
     * Document, not a DOM Document.
     **/
    protected org.jdom.Document document;

    /**
     * Read the specified File and parse it to create a JDOM tree
     **/
    public WebAppConfig2(File configfile) throws IOException, JDOMException {
        // JDOM can build JDOM trees from a variety of input sources. One
        // of those input sources is a SAX parser.
        SAXBuilder builder =
            new SAXBuilder("org.apache.xerces.parsers.SAXParser");
        // Parse the specified file and convert it to a JDOM document
        document = builder.build(configfile);
    }

    /**
     * This method looks for specific Element nodes in the JDOM tree in order
     * to figure out the classname associated with the specified servlet name
     **/
    public String getServletClass(String servletName) throws JDOMException {
        // Get the root element of the document.
        Element root = document.getRootElement();

        // Find all <servlet> elements in the document, and loop through them
        // to find one with the specified name. Note the use of java.util.List
        // instead of org.w3c.dom.NodeList.
        List servlets = root.getChildren("servlet");
        for(Iterator i = servlets.iterator(); i.hasNext(); ) {
            Element servlet = (Element) i.next();
            // Get the text of the <servlet-name> tag within the <servlet> tag
            String name = servlet.getChild("servlet-name").getContent();
            if (name.equals(servletName)) {
                // If the names match, return the text of the <servlet-class>
                return servlet.getChild("servlet-class").getContent();
            }
        }
        return null;
    }

    /**
     * This method adds a new name-to-class mapping in the form of
     * a <servlet> sub-tree to the document.
     **/
    public void addServlet(String servletName, String className)
        throws JDOMException {
        // Create the new Element that represents our new servlet
        Element newServletName = new Element("servlet-name");
        newServletName.setContent(servletName);
        Element newServletClass = new Element("servlet-class");
        newServletClass.setContent(className);
        Element newServlet = new Element("servlet");
        newServlet.addChild(newServletName);
        newServlet.addChild(newServletClass);

        // find the first <servlet> child in the document
        Element root = document.getRootElement();
        Element firstServlet = root.getChild("servlet");

        // Now insert our new servlet tag before the one we just found.
        Element parent = firstServlet.getParent();
        List children = parent.getChildren();
        children.add(children.indexOf(firstServlet), newServlet);
    }

    /**
     * Output the JDOM tree to the specified stream as an XML document.
     **/
    public void output(OutputStream out) throws IOException {
        // JDOM can output JDOM trees in a variety of ways (such as converting
        // them to DOM trees or SAX event streams). Here we use an "outputter"
        // that converts a JDOM tree to an XML document
        XMLOutputter outputter = new XMLOutputter("  ",  // indentation
                                                  true); // use newlines
        outputter.output(document, out);
    }
}

Compiling and Running the Example

In order to compile and run Example 19.6, you must download the JDOM distribution, which is freely available from. This example was developed using the Beta 4 release of JDOM. Because of the beta status of JDOM, I'm not going to try to give explicit build instructions here. You need to have the JDOM classes in your classpath to compile and run the example. Additionally, since the example relies on the Xerces SAX 2 parser, you need to have the Xerces JAR file in your classpath to run the example. Xerces is conveniently bundled with JDOM (at least in the Beta 4 distribution). Finally, note that JDOM is undergoing rapid development, and the API may change somewhat from the Beta 4 version used here. If so, you may need to modify the example to get it to compile and run.

Exercises

- Many of the examples in this chapter were designed to parse the web.xml files that configure web applications. If you use the Tomcat servlet container to run your servlets, you may know that Tomcat uses another XML file, server.xml, for server-level configuration information. In Tomcat 3.1, this file is located in the conf directory of the Tomcat distribution and contains a number of <Context> tags that use attributes to specify additional information about each web application.
Write a program that uses a SAX parser (preferably SAX 2) to parse the server.xml file and output the values of the path and docBase attributes of each <Context> tag.
- Using a DOM parser instead of a SAX parser, write a program that behaves identically to the program you developed in Exercise 19-1.
- Rewrite the server.xml parser again, using the JDOM API this time.
- Write a Swing-based web application configuration program that can read web.xml files, allow the user to modify them, and then write out the modified version. The program should allow the user to add new servlets to the web application and edit existing servlets. For each servlet, it should allow the user to specify the servlet name, class, initialization parameters, and URL pattern.
- Design an XML grammar for representing a JavaBeans component and its property values. Write a class that can serialize an arbitrary bean to this XML format and deserialize, or recreate, a bean from the XML format. Use the Java Reflection API or the JavaBeans Introspector class to identify the properties of a bean. Assume that all properties of the bean are either primitive Java types, String objects, or other bean instances. (You may want to extend this list to include Font and Color objects as well.) Further assume that all bean classes define a no-argument constructor and all beans can be properly initialized by instantiating them and setting their public properties.

Back to: Java Examples in a Nutshell, 2nd Edition
© 2001, O'Reilly & Associates, Inc.
webmaster@oreilly.com
Announcing TypeScript 3.9

Daniel

Today we’re excited to announce the release of TypeScript 3.9! To learn more, check out our website! But if you’re already using TypeScript in your project, you can either get it through NuGet or use npm with the following command:

npm install typescript

You can also get editor support by

- Downloading for Visual Studio 2019/2017
- Installing the Insiders Version of Visual Studio Code or following directions to use a newer version of TypeScript
- Using PackageControl with Sublime Text 3

For this release, our team has been focusing on performance, polish, and stability. We’ve been working on speeding up the compiler and editing experience, getting rid of friction and papercuts, and reducing bugs and crashes. We’ve also received a number of useful and much-appreciated features and fixes from the external community!

- Improvements in Inference and Promise.all
- Speed Improvements
- // @ts-expect-error Comments
- Uncalled Function Checks in Conditional Expressions
- Editor Improvements
- Breaking Changes

Improvements in Inference and Promise.all

Recent versions of TypeScript (around 3.7) have had updates to the declarations of functions like Promise.all and Promise.race. Unfortunately, that introduced a few regressions, especially when mixing in values with null or undefined.

interface Lion {
    roar(): void
}

interface Seal {
    singKissFromARose(): void
}

async function visitZoo(lionExhibit: Promise<Lion>, sealExhibit: Promise<Seal | undefined>) {
    let [lion, seal] = await Promise.all([lionExhibit, sealExhibit]);
    lion.roar(); // uh oh
//  ~~~~
// Object is possibly 'undefined'.
}

This is strange behavior! The fact that sealExhibit contained an undefined somehow poisoned the type of lion to include undefined.

Thanks to a pull request from Jack Bates, this has been fixed with improvements in our inference process in TypeScript 3.9. The above no longer errors.
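The fixed inference can be exercised directly. Here is a minimal runnable sketch; the Cat type and demo function are my own illustration, not from the release notes:

```typescript
// Under TypeScript 3.9, destructuring a Promise.all() over a mix of
// defined and possibly-undefined promises no longer "poisons" the
// defined elements with `undefined`.
interface Cat { meow(): string }

async function demo(): Promise<string> {
  const cat: Promise<Cat> = Promise.resolve({ meow: () => "meow" });
  const maybeDog: Promise<{ bark(): string } | undefined> =
    Promise.resolve(undefined);

  const [c, d] = await Promise.all([cat, maybeDog]);
  // `c` is inferred as `Cat` (not `Cat | undefined`), so calling a
  // method on it type-checks without a non-null assertion:
  return c.meow();
}

demo().then(console.log);
```

Only `d` carries `undefined` in its type here, which matches the runtime behavior of the two promises.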
If you’ve been stuck on older versions of TypeScript due to issues around Promises, we encourage you to give 3.9 a shot!

What About the awaited Type?

If you’ve been following our issue tracker and design meeting notes, you might be aware of some work around a new type operator called awaited. The goal of this type operator is to accurately model the way that Promise unwrapping works in JavaScript.

We initially anticipated shipping awaited in TypeScript 3.9, but as we’ve run early TypeScript builds with existing codebases, we’ve realized that the feature needs more design work before we can roll it out to everyone smoothly. As a result, we’ve decided to pull the feature out of our main branch until we feel more confident. We’ll be experimenting more with the feature, but we won’t be shipping it as part of this release.

Speed Improvements

TypeScript 3.9 ships with many new speed improvements. Our team has been focusing on performance after observing extremely poor editing/compilation speed with packages like material-ui and styled-components. We’ve dived deep here, with a series of different pull requests that optimize certain pathological cases involving large unions, intersections, conditional types, and mapped types. Each of these pull requests gains about a 5-10% reduction in compile times on certain codebases. In total, we believe we’ve achieved around a 25% reduction in the material-ui-styles project’s compile time.

Furthermore, we’ve gotten feedback from teams at Microsoft that TypeScript 3.9 has reduced their compile time from 26 seconds to around 10 seconds.

We also have some changes to file renaming functionality in editor scenarios. We heard from the Visual Studio Code team that when renaming a file, just figuring out which import statements needed to be updated could take between 5 to 10 seconds. TypeScript 3.9 addresses this issue by changing the internals of how the compiler and language service cache file lookups.
While there’s still room for improvement, we hope this work translates to a snappier experience for everyone!

// @ts-expect-error Comments

Imagine that we’re writing a library in TypeScript and we’re exporting some function called doStuff as part of our public API. The function’s types declare that it takes two strings so that other TypeScript users can get type-checking errors, but it also does a runtime check (maybe only in development builds) to give JavaScript users a helpful error.

function doStuff(abc: string, xyz: string) {
    assert(typeof abc === "string");
    assert(typeof xyz === "string");
    // do some stuff
}

So TypeScript users will get a helpful red squiggle and an error message when they misuse this function, and JavaScript users will get an assertion error. We’d like to test this behavior, so we’ll write a unit test.

expect(() => {
    doStuff(123, 456);
}).toThrow();

Unfortunately if our tests are written in TypeScript, TypeScript will give us an error!

doStuff(123, 456);
//      ~~~
// error: Type 'number' is not assignable to type 'string'.

That’s why TypeScript 3.9 brings a new feature: // @ts-expect-error comments. When a line is prefixed with a // @ts-expect-error comment, TypeScript will suppress that error from being reported; but if there’s no error, TypeScript will report that // @ts-expect-error wasn’t necessary.

As a quick example, the following code is okay

// @ts-expect-error
console.log(47 * "octopus");

while the following code

// @ts-expect-error
console.log(1 + 1);

results in the error

Unused '@ts-expect-error' directive.

We’d like to extend a big thanks to Josh Goldberg, the contributor who implemented this feature. For more information, you can take a look at the ts-expect-error pull request.

ts-ignore or ts-expect-error?

In some ways // @ts-expect-error can act as a suppression comment, similar to // @ts-ignore. The difference is that // @ts-ignore will do nothing if the following line is error-free.
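One point worth making explicit: the directive only silences the compile-time diagnostic; the flagged line still executes as written. A small sketch (the variable name is my own illustration):

```typescript
// @ts-expect-error -- we deliberately violate the annotation below
const n: number = "not a number";

// Only the compile-time error was suppressed; at runtime the string
// assignment happened anyway:
console.log(typeof n); // "string"
```

Had the assignment been error-free, the compiler would instead report `Unused '@ts-expect-error' directive.` on the comment line.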
You might be tempted to switch existing // @ts-ignore comments over to // @ts-expect-error, and you might be wondering which is appropriate for future code. While it’s entirely up to you and your team, we have some ideas of which to pick in certain situations.

Pick ts-expect-error if:

- you’re writing test code where you actually want the type system to error on an operation
- you expect a fix to be coming in fairly quickly and you just need a quick workaround
- you’re in a reasonably-sized project with a proactive team that wants to remove suppression comments as soon as affected code is valid again

Pick ts-ignore if:

- you have a larger project and new errors have appeared in code with no clear owner
- you are in the middle of an upgrade between two different versions of TypeScript, and a line of code errors in one version but not another
- you honestly don’t have the time to decide which of these options is better

Uncalled Function Checks in Conditional Expressions

In TypeScript 3.7 we introduced uncalled function checks to report an error when you’ve forgotten to call a function.

function hasImportantPermissions(): boolean {
    // ...
}

// Oops!
if (hasImportantPermissions) {
//  ~~~~~~~~~~~~~~~~~~~~~~~
// This condition will always return true since the function is always defined.
// Did you mean to call it instead?
    deleteAllTheImportantFiles();
}

However, this error only applied to conditions in if statements. Thanks to a pull request from Alexander Tarasyuk, this feature is also now supported in ternary conditionals (i.e. the cond ? trueExpr : falseExpr syntax).

declare function listFilesOfDirectory(dirPath: string): string[];
declare function isDirectory(): boolean;

function getAllFiles(startFileName: string) {
    const result: string[] = [];
    traverse(startFileName);
    return result;

    function traverse(currentPath: string) {
        return isDirectory ?
        //     ~~~~~~~~~~~
        // This condition will always return true
        // since the function is always defined.
        // Did you mean to call it instead?
            listFilesOfDirectory(currentPath).forEach(traverse) :
            result.push(currentPath);
    }
}

Alexander further improved the uncalled function checking experience by adding a quick fix!

Editor Improvements

The TypeScript compiler not only powers the TypeScript editing experience in most major editors, it also powers the JavaScript experience in the Visual Studio family of editors and more.

CommonJS Auto-Imports in JavaScript

One great new improvement is in auto-imports in JavaScript files using CommonJS modules. In older versions, TypeScript always assumed that regardless of your file, you wanted an ECMAScript-style import like

import * as fs from "fs";

However, not everyone is targeting ECMAScript-style modules when writing JavaScript files. Plenty of users still use CommonJS-style require(...) imports like so

const fs = require("fs");

TypeScript now automatically detects the types of imports you’re using to keep your file’s style clean and consistent. For more details on the change, see the corresponding pull request.

Code Actions Preserve Newlines

TypeScript’s refactorings and quick fixes often didn’t do a great job of preserving newlines. As a really basic example, take the following code.

const maxValue = 100;

/*start*/
for (let i = 0; i <= maxValue; i++) {
    // First get the squared value.
    let square = i ** 2;

    // Now print the squared value.
    console.log(square);
}
/*end*/

If we highlighted the range from /*start*/ to /*end*/ in our editor to extract to a new function, we’d end up with code like the following.

const maxValue = 100;

printSquares();

function printSquares() {
    for (let i = 0; i <= maxValue; i++) {
        // First get the squared value.
        let square = i ** 2;
        // Now print the squared value.
        console.log(square);
    }
}

That’s not ideal – we had a blank line between each statement in our for loop, but the refactoring got rid of it! TypeScript 3.9 does a little more work to preserve what we write.
const maxValue = 100;

printSquares();

function printSquares() {
    for (let i = 0; i <= maxValue; i++) {
        // First get the squared value.
        let square = i ** 2;

        // Now print the squared value.
        console.log(square);
    }
}

You can see more about the implementation in this pull request.

Quick Fixes for Missing Return Expressions

There are occasions where we might forget to return the value of the last statement in a function, especially when adding curly braces to arrow functions.

// before
let f1 = () => 42

// oops - not the same!
let f2 = () => { 42 }

Thanks to a pull request from community member Wenlu Wang, TypeScript can provide a quick-fix to add missing return statements, remove curly braces, or add parentheses to arrow function bodies that look suspiciously like object literals.

Support for “Solution Style” tsconfig.json Files

Editors need to figure out which configuration file a file belongs to so that it can apply the appropriate options and figure out which other files are included in the current “project”. By default, editors powered by TypeScript’s language server do this by walking up each parent directory to find a tsconfig.json.

One case where this slightly fell over is when a tsconfig.json simply existed to reference other tsconfig.json files.

// tsconfig.json
{
    "files": [],
    "references": [
        { "path": "./tsconfig.shared.json" },
        { "path": "./tsconfig.frontend.json" },
        { "path": "./tsconfig.backend.json" },
    ]
}

This file that really does nothing but manage other project files is often called a “solution” in some environments. Here, none of these tsconfig.*.json files get picked up by the server, but we’d really like the language server to understand that the current .ts file probably belongs to one of the mentioned projects in this root tsconfig.json.

TypeScript 3.9 adds support to editing scenarios for this configuration. For more details, take a look at the pull request that added this functionality.
Breaking Changes

Parsing Differences in Optional Chaining and Non-Null Assertions

TypeScript recently implemented the optional chaining operator, but we’ve received user feedback that the behavior of optional chaining (?.) with the non-null assertion operator (!) is extremely counter-intuitive.

Specifically, in previous versions, the code

foo?.bar!.baz

was interpreted to be equivalent to the following JavaScript.

(foo?.bar).baz

In the above code the parentheses stop the “short-circuiting” behavior of optional chaining, so if foo is undefined, accessing baz will cause a runtime error.

The Babel team who pointed this behavior out, and most users who provided feedback to us, believe that this behavior is wrong. We do too! The thing we heard the most was that the ! operator should just “disappear” since the intent was to remove null and undefined from the type of bar.

In other words, most people felt that the original snippet should be interpreted as

foo?.bar.baz

which just evaluates to undefined when foo is undefined.

This is a breaking change, but we believe most code was written with the new interpretation in mind. Users who want to revert to the old behavior can add explicit parentheses around the left side of the ! operator.

(foo?.bar)!.baz

} and > are Now Invalid JSX Text Characters

The JSX Specification forbids the use of the } and > characters in text positions. TypeScript and Babel have both decided to enforce this rule to be more conformant. The new way to insert these characters is to use an HTML escape code (e.g. <div> 2 &gt; 1 </div>) or insert an expression with a string literal (e.g. <div> 2 {">"} 1 </div>).

Luckily, thanks to the pull request enforcing this from Brad Zacher, you’ll get an error message along the lines of

Unexpected token. Did you mean `{'>'}` or `&gt;`?
Unexpected token. Did you mean `{'}'}` or `&rbrace;`?

For example:

let directions = <div>Navigate to: Menu Bar > Tools > Options</div>
//                                          ~       ~
// Unexpected token. Did you mean `{'>'}` or `&gt;`?
That error message came with a handy quick fix, and thanks to Alexander Tarasyuk, you can apply these changes in bulk if you have a lot of errors.

Stricter Checks on Intersections and Optional Properties

Generally, an intersection type like A & B is assignable to C if either A or B is assignable to C; however, sometimes that has problems with optional properties. For example, take the following:

interface A {
    a: number; // notice this is 'number'
}

interface B {
    b: string;
}

interface C {
    a?: boolean; // notice this is 'boolean'
    b: string;
}

declare let x: A & B;
declare let y: C;

y = x;

In previous versions of TypeScript, this was allowed because while A was totally incompatible with C, B was compatible with C.

In TypeScript 3.9, so long as every type in an intersection is a concrete object type, the type system will consider all of the properties at once. As a result, TypeScript will see that the a property of A & B is incompatible with that of C:

Type 'A & B' is not assignable to type 'C'.
  Types of property 'a' are incompatible.
    Type 'number' is not assignable to type 'boolean | undefined'.

For more information on this change, see the corresponding pull request.

Intersections Reduced By Discriminant Properties

There are a few cases where you might end up with types that describe values that just don’t exist. For example

declare function smushObjects<T, U>(x: T, y: U): T & U;

interface Circle {
    kind: "circle";
    radius: number;
}

interface Square {
    kind: "square";
    sideLength: number;
}

declare let x: Circle;
declare let y: Square;

let z = smushObjects(x, y);
console.log(z.kind);

This code is slightly weird because there’s really no way to create an intersection of a Circle and a Square – they have two incompatible kind fields. In previous versions of TypeScript, this code was allowed and the type of kind itself was never because "circle" & "square" described a set of values that could never exist.
In TypeScript 3.9, the type system is more aggressive here – it notices that it’s impossible to intersect Circle and Square because of their kind properties. So instead of collapsing the type of z.kind to never, it collapses the type of z itself (Circle & Square) to never. That means the above code now errors with:

Property 'kind' does not exist on type 'never'.

Most of the breaks we observed seem to correspond with slightly incorrect type declarations. For more details, see the original pull request.

Getters/Setters are No Longer Enumerable

In older versions of TypeScript, get and set accessors in classes were emitted in a way that made them enumerable; however, this wasn’t compliant with the ECMAScript specification, which states that they must be non-enumerable. As a result, TypeScript code that targeted ES5 and ES2015 could differ in behavior.

Thanks to a pull request from GitHub user pathurs, TypeScript 3.9 now conforms more closely with ECMAScript in this regard.

Type Parameters That Extend any No Longer Act as any

In previous versions of TypeScript, a type parameter constrained to any could be treated as any.

function foo<T extends any>(arg: T) {
    arg.spfjgerijghoied; // no error!
}

This was an oversight, so TypeScript 3.9 takes a more conservative approach and issues an error on these questionable operations.

function foo<T extends any>(arg: T) {
    arg.spfjgerijghoied;
    //  ~~~~~~~~~~~~~~~
    // Property 'spfjgerijghoied' does not exist on type 'T'.
}

export * is Always Retained

In previous TypeScript versions, declarations like

export * from "foo"

would be dropped in our JavaScript output if foo didn’t export any values. This sort of emit is problematic because it’s type-directed and can’t be emulated by Babel. TypeScript 3.9 will always emit these export * declarations. In practice, we don’t expect this to break much existing code, but bundlers may have a harder time tree-shaking the code. You can see the specific changes in the original pull request.
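The non-enumerability required by the ECMAScript specification (the getters/setters change above) is easy to observe directly. A small sketch, with a made-up class for illustration:

```typescript
class Thermostat {
  private temp = 20;

  // Class accessors are defined on the prototype and, per the
  // ECMAScript spec, must be non-enumerable.
  get temperature(): number {
    return this.temp;
  }
}

const desc = Object.getOwnPropertyDescriptor(
  Thermostat.prototype,
  "temperature"
);

console.log(desc && desc.enumerable); // false

// Because the accessor is non-enumerable (and lives on the prototype),
// it does not show up among the instance's enumerable own keys:
console.log(Object.keys(new Thermostat())); // [ 'temp' ]
```

With TypeScript 3.9 this holds whether you target ES5 (where the accessor is emitted via Object.defineProperty) or ES2015 and later (where native class syntax already guarantees it).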
Exports Now Use Getters for Live Bindings

When targeting module systems like CommonJS in ES5 and above, TypeScript will use get accessors to emulate live bindings so that changes to a variable in one module are witnessed in any exporting modules. This change is meant to make TypeScript’s emit more compliant with ECMAScript modules. For more details, see the PR that applies this change.

Exports are Hoisted and Initially Assigned

TypeScript now hoists exported declarations to the top of the file when targeting module systems like CommonJS in ES5 and above. This change is meant to make TypeScript’s emit more compliant with ECMAScript modules.

For example, code like

export * from "mod";
export const nameFromMod = 0;

previously had output like

__exportStar(exports, require("mod"));
exports.nameFromMod = 0;

However, because exports now use get-accessors, this assignment would throw because __exportStar now makes get-accessors which can’t be overridden with a simple assignment. Instead, TypeScript 3.9 emits the following:

exports.nameFromMod = void 0;
__exportStar(exports, require("mod"));
exports.nameFromMod = 0;

See the original pull request for more information.

What’s Next?

We hope that TypeScript 3.9 makes your day-to-day coding fun, fast, and an overall joy to use. To stay in the loop on our next version, you can track the 4.0 Iteration Plan and our Feature Roadmap as it comes together.

Happy hacking!

– Daniel Rosenwasser and the TypeScript Team

I still better use javascript that TypeScript..

To each their own, but have you tried TypeScript? Anything in particular you don’t like? Having started with JavaScript, and done PHP, ColdFusion, and finally C# work, when I came back to doing more with JavaScript I really wished I could bring over some of the intelligence of typed languages like Java and C#. With TypeScript I’m writing code that’s less likely to have errors, and is more understandable after I’ve left it for a while.
It doesn’t make sense for projects with small amounts of JavaScript, but I work with a couple projects with significant JavaScript functionality that I’d love to migrate over. I just upgraded this morning and saw a consistent 20% improvement in our webpack bundle build times. Fantastic! Awesome! Thanks for the excellent continued support for TypeScript. It gets heavy use at my company. I love this dev blogs, so exiting!!
https://devblogs.microsoft.com/typescript/announcing-typescript-3-9/
Please note the forum guidelines, which forbid simply asking people to do all your work for you. Thread closed.

This assumption is the reason why singletons are used. But simple parameter passing and/or dependency injection make for better program structure IMO. So instead of being able to procure a resource...

It's not Microsoft-compatible. Cygwin tries to be Linux-compatible.

#include <boost/cstdint.hpp>
boost::int32_t i32;
Doesn't look that tough to me ;)

You could just get Code::Blocks for Windows. The other option is to write a small wrapper program that does what the Code::Blocks wrapper does (grab return code and execution time and write them...

Stroustrup's "The C++ Programming Language" has been released in its C++11 version.

If the other program has been on the market for 25 years - well, patents only last 20. So any patents that protect the old program should in theory be expired or about to expire. Of course, the US...

Getting instances for decltype/sizeof checks via casting nullptr is unnecessarily verbose. Just use std::declval, or if that doesn't exist in your implementation, declare it yourself: template...

Boost.Container has flat_set and flat_map, which are implemented in terms of a dynamic array but provide the interface of an ordered associative container, which means that iterator validity...

Sorry, can't reproduce on my computer. My Clang is some pre-3.3 custom build, but that shouldn't really make a difference.

Strange, that doesn't look like something Clang should have a problem with. I'm currently trying to install GCC 4.8 on my Mac; if I succeed, I can try this out. It's all really just one error,...

For 1 and 2 this should actually be easy. Every combination here is ordered, so if 1 and 2 both appear, they have to appear in the first two places. In other words, all the qualifying combinations...
Closing, this looks way too much like a homework quiz.

Try MacPorts or Homebrew or something like that for Macs; they often have -dev packages as well.

Do you have to use Qt5?

The "normal" way is uncompressed RGB values, row by row, pixel by pixel, going left to right, top to bottom. This is the most suitable for fast blitting to the screen or...

Test with another compiler. This looks very much like a bug to me. However, I'm pretty sure that the full specialization of PropagatorLoop isn't allowed inside the outer template by pickier...

Because C is weird. If you have
int a;
a = 7;
at the global scope, the first line is a so-called "tentative definition". This construct is an unholy hack in the C standard to maintain...

This is pretty much the same question as with just a slightly different context.

Why would you want to use C++ as your UI description language? Are you some kind of masochist? :p Just describe the UI with QML and write the important part of your application (the logic) in C++.

Be extremely wary of the C course you're enrolled in. Here's an interesting review of the book you're supposed to use: C: The Complete Nonsense

As long as you don't instantiate the container's copy constructor (i.e., never try to copy the container), that doesn't matter.

You can, in theory, use dynamic_cast to cast the Foo reference down to a Poo or Boo reference, but try to avoid it. It is rather poor style and inhibits extensibility. (It's not always avoidable.)

While you can use move-only objects (such as unique_ptr) in containers, some methods of the containers have more extensive requirements, i.e., they require the element type to be copyable and not just...

This thread is clearly not useful anymore.

No embedded-system JVM is doing any relevant dynamic profiling-based optimization that I've ever heard of. They don't have the memory for that.
https://cboard.cprogramming.com/search.php?s=125e59e5fb5f669003dd70df551f6a43&searchid=5722181