text stringlengths 454 608k | url stringlengths 17 896 | dump stringclasses 91 values | source stringclasses 1 value | word_count int64 101 114k | flesch_reading_ease float64 50 104 |
|---|---|---|---|---|---|
Data Science Like a Pro: Anaconda and Jupyter Notebook on Visual Studio Code. Setting up Anaconda on Visual Studio Code (VS Code) opens up the option for advanced integrations and workflows that accept multiple frameworks, far more programming languages, continuous development, and continuous integration pipelines.
With Anaconda on Visual Studio Code, you can package backends, frontends, machine learning, and Jupyter Notebooks without having to deal with several projects. For example, in one Visual Studio Code project I have Django, React, Jupyter Notebooks, and use Jenkins for automated deployment and QA.
Benefits of Anaconda on Visual Studio Code
- Full-Stack Development — machine learning models, Jupyter Notebooks, backends, frontends, APIs, and automated QA script all from one project on one editor!
- Virtual Environment Management — handling virtual environments in Jupyter Notebook/Labs is possible but very limited and disconnected from the source code. The Visual Studio Code way of handling virtual environments is more OS-native and easier to access and control.
- Language Support — as a general-purpose editor, Visual Studio Code can support more languages and includes language-specific extensions like linting.
- Extensions and Customizations — take it to the next level with extensions and customizations: Linting, AWS, and deployment, hundreds of themes, common code snippets, icon packs, IntelliSense autocomplete, HTTP servers, comments/documentation templates, git, the list goes on…
- View Kernel Variables — instead of peeking at a DataFrame with
df.head()you can use the Visual Studio Code option "Show Variables Active in Jupyter Kernel". This feature will show you all variables, even the ones that you forgot were active.
- Feel Like a Pro — set it to a dark theme, learn a few shortcuts, work with the terminal, and watch your teammates drool while you do data science like a pro!
Let’s get started!
Install Anaconda
Head over to Anaconda and download the installer that corresponds to your operating system. I recommend installing the latest Python version unless you have a compelling reason to use an older one. For example, if you’re building a solution for a client with an older version of Python and you need to match their version then this would be a valid reason to choose an older Python version.
For detailed step-by-step instructions, I defer to the Official Anaconda Documentation.
Install Visual Studio Code
The free editor, Visual Studio Code can be downloaded from here. For instructions on how to install it head over to the official Visual Studio Code Documentation. I won’t cover these steps since the official documentation does a great job.
Set up Visual Studio Code
Start by creating a new folder for your project. For example Documents/Notebooks. Then open Visual Studio Code and select Start > Open Folder…
Go into the folder that you created and click “Select Folder”. Visual Studio Code will load your folder.
Next, select a Python Interpreter. Hit Ctrl+Shift+P and select Python: Select Interpreter.
A Python Interpreter reads your script and translates them into Python byte code. You may have multiple Python interpreters if you’ve installed the vanilla Python from the official Python website and if you just installed Anaconda. Also, if you’re on macOS Catalina for example, your OS already has Python 2.7.16 installed as shown below:
Back to Visual Studio Code, select Python 3.7.6 64-bit (‘base’:conda). Your version may be different if you downloaded a different version of Anaconda.
Now let’s create our virtual environment. You can think of an environment as a bubble that holds your packages and dependencies exclusively for the project that you are working on. By using a virtual environment “bubble” you can have multiple projects each using different versions of the same package. For example, one project can be on Pandas v0.25.3 while another project can be on Pandas 1.0.5. Another benefit of virtual environments is that deploying the project on a server is easier as the virtual environment keeps a concise list of packages and versions that you have installed.
To create a virtual environment, enter Ctrl+Shift+`, a Visual Studio Code terminal should open up. Type in the following command:
conda create --name myenv
Go through the process of creating the environment then enter the following command to activate the new environment:
conda activate myenv
Now the path on your terminal should change to something like:
(myenv) C:\Users\Miguel\Documents\Notebooks>
Now let’s install the Anaconda Visual Studio Code Extension. Enter Ctrl+Shift+X, search for the Anaconda Extension Pack, and Install it.
We’re done with the setup, and we can try creating a notebook. Go back to the Explorer (Ctrl+Shift+E), create a new file by clicking on the New File Icon, then enter main.ipynb and hit Enter.
A new notebook should open up on the side:
When the new file opens up it might take a few seconds, but it will eventually look somewhat like a Jupyter Notebook.
Now try typing in:
Then enter Shift+Enter to run the cell.
The notebook will print the test statement.
We’re just about done. If you want to do a more thorough test then you can type something like:
import pandas as pd
test_dataframe = pd.DataFrame({'a':[1,2,3], 'b':['x', 'y', 'z']}) test_dataframe
From here on out you can create more notebooks or write custom packages to use throughout your project. To install new packages install them with the
conda install command in the Visual Studio Code terminal and the virtual environment will take care of the rest.
Ready to start predicting the stock market? See my Predicting The Stock Market Post.
Originally published at on July 11, 2020. | https://analystadmin.medium.com/data-science-like-a-pro-anaconda-and-jupyter-notebook-on-visual-studio-code-85b54f1778ab?responsesOpen=true&source=---------5---------------------------- | CC-MAIN-2021-21 | refinedweb | 946 | 54.63 |
- Silverlight
- .NET
- C# Elements
- C# Constructs
- C# Data Access
- C# Security/Debug
- C# Code
- C# Videos
- Windows Phone
- About
C# Parallel For Loop
Run Time Comparison between Sequential For Loop and Parallel For Loop
"This video demonstrates the C# Parallel For loop is three times faster than a Regular For loop on the task of counting to 1 Billion, 10 Times. The Parallel For loop saturates all four cores for 100% total CPU usage as compared to 33% CPU usage for the Regular For loop."
Run times were recorded for the Regular For loop, followed by the Parallel For loop. The C# code used in this demonstration is listed below.
The Experiment
A Regular (sequential) For loop and a Parallel For loop where both coded in C# using Visual Studio 2012. The Stopwatch class was used to measure the elapsed times. Both For loops performed the task of counting to 1 Billion in the inner loop with the outer loop executing 10 times. The performance of the processor and its individual cores were shown using the All CPU Meter gadget from AddGadgets.com along with the Windows 7 resource monitor. I also used the CPUID utility to obtain information about the CPU and to monitor the temperatures of the CPU cores. The screen was captured and encoded with Microsoft Expression Encoder 4 Pro into an .mp4 format. The final editing of the video was performed with Microsoft Movie Maker.
The Parallel.For method is part of the Task Parallel Library (TPL) which supports data parallelism. Data parallelism partitions the source data so that multiple threads can operate on different segments concurrently. In this experiment the Parallel For was used to parallelize the outer loop which executed 10 times. The inner loop incremented a variable from 1 to 1 billion.
The Parallel For loop in this experiment used thread-local variables for each task. When the tasks were complete the values from the thread-local variables were use to create the final result. A generic version of the Parallel For was used with a type parameter of <long>. The first two parameters of the Parallel For are the beginning and ending iteration values. The third parameter initializes the local state. In this code the third parameter initialized the thread-local variable to zero. The forth parameter used a lambda expression to define the loop logic. The fifth parameter defined a method that is called one time after all the threads have completed. Note the Interlocked class was used to support multiple thread usage of the subtotal variable. In this experiment when using single-thread addition on the Work variable resulted in a value of only 2,500,000,000, or 1/4 the true 1E10 work value. The Interlocked.add method was required to add the values from each thread into the final total.
Parallel.For<long>(0, 10, () => 0, (myInt, loop, subtotal) =>
{
for (long myLong = 0; myLong < 1E9; myLong++) subtotal++;
return subtotal;
},
(x) => Interlocked.Add(ref work, x)
);
This experiment was performed more than ten times with nearly identical results. When the work load was increased from 1E10 to 1E11 the Parallel For loop was over 3.5 times faster. It would be interesting to vary the work type, work load, and Parallel For parameter values and determine the degree of processing improvement in different scenarios.
C# Code Used in Demonstration Video
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Globalization;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
namespace ParallelTests
{
class Program
{
static void Main(string[] args)
{
long work = 0;
Stopwatch stopWatch = new Stopwatch();
stopWatch.Start();
#region Regular For Loop Region
//Console.WriteLine("Regular for Loop is running ...\n");
//for (int myInt = 0; myInt < 10; myInt++)
//{
// for (long myLong = 0; myLong < 1E9; myLong++) { work++; }
//}
#endregion
#region Parallel For Loop Region
Console.WriteLine("Parallel for Loop is running ...\n");
Parallel.For<long>(0, 10, () => 0, (myInt, loop, subtotal) =>
{
for (long myLong = 0; myLong < 1E9; myLong++) subtotal++;
return subtotal;
},
(x) => Interlocked.Add(ref work, x)
);
#endregion
// Calculate and Display Execution Time
stopWatch.Stop();
TimeSpan ts = stopWatch.Elapsed;
string elapsedTime = String.Format("{0:00}:{1:00}:{2:00}.{3:00}",
ts.Hours, ts.Minutes, ts.Seconds,
ts.Milliseconds / 10);
Console.WriteLine("RunTime " + elapsedTime + "\n");
// Display Work Value
Console.WriteLine("Work is: " + work.ToString("E", CultureInfo.InvariantCulture));
Console.WriteLine("\n\n");
}
}
}
The Run Time Environment
CoreTemp Running with Individual Core Temperatures Displayed in Task Bar
Saturating the cores for a prolonged period of time can increase the core temperatures, especially if the CPU is overclocked. The test computer contains an AMD Phenom II 940 BE overclocked from 3.0 to 3.5 GHz. I use the CoreTemp utility to display the individual core temperatures in the task bar and also have it's overheat alarm set to 70 C. I monitor the core temperatures when the computer is under a heavy load.
Before starting the experiment I cleaned the radiator on the CPU cooler for the first time in about three years. I noticed the core temperatures dropped a few degrees, especially when under a heavy load. Wiki How to Do Anything has a good article on How to Build a Powerful Quiet Computer that covers the principles of computer cooling.
| http://www.kcshadow.net/wpdeveloper/?q=parallelfor | CC-MAIN-2018-22 | refinedweb | 876 | 57.57 |
You must use form 2553. I'm giving links along with the answer.
Form 2553 must be signed and dated by the president, vice president, treasurer, assistant treasurer, chief accounting officer, or any other corporate officer (such as tax officer) authorized to sign. If Form 2553 is not signed, it will not be considered timely filed.
Required form to be filed to change filing status:
Step 1. Timely file a paper copy of the Form 2553 with the appropriate Service Center as directed in the Form 2553 instructions. You may mail or fax this form.
Step 2. The corporation will receive an acknowledgement and approval of the S corporation election. If the notification of approval is not received, the corporation should follow-up with the Service Center where the Form 2553 was filed.
Step 3. File the last C Corporation return (Form 1120) by the due date or extended due date. Note: Some taxpayers are required to file electronically. For additional information on which Form 1120 filers are required to file electronically, please see T.D. 9363, I.R.B. 2007-49.
Step 4. File the S Corporation return (Form 1120S) by the due or extended due date. Note: Some taxpayers are required to file electronically. For additional information on which Form 1120S filers are required to file electronically, please see T.D. 9363, I.R.B. 2007-49.
The filing of the initial Form 1120S return will finalize the change of the entity’s filing requirement on the Internal Revenue Service’s). | http://www.justanswer.com/business-law/4an64-change-corporation-corporation.html | CC-MAIN-2014-23 | refinedweb | 253 | 64.61 |
Constraints on Type Parameters (C# Programming Guide). These restrictions are called constraints. Constraints are specified by using the where contextual keyword. The following table lists the six types of constraints:
If you want to examine an item in a generic list to determine whether it is valid or to compare it to some other item, the compiler must have some guarantee that the operator or method it has to call will be supported by any type argument that might be specified by client code. This guarantee is obtained by applying one or more constraints to your generic class definition. For example, the base class constraint tells the compiler that only objects of this type or derived from this type will be used as type arguments. Once the compiler has this guarantee, it can allow methods of that type to be called in the generic class. Constraints are applied by using the contextual keyword where. The following code example demonstrates the functionality we can add to the GenericList<T> class (in Introduction to Generics (C# Programming Guide)) by applying a base class constraint.
public class Employee {); because all items of type T are guaranteed to be either an Employee object or an object that inherits from Employee.
Multiple constraints can be applied to the same type parameter, and the constraints themselves can be generic types, as follows:
By constraining the type parameter, you increase the number of allowable operations and method calls to those supported by the constraining type and all types in its inheritance hierarchy. Therefore, when you design generic classes or methods, if you will be performing any operation on the generic members beyond simple assignment or calling any methods not supported by System.Object, you will have to apply constraints to the type parameter.
When applying the where T : class constraint, avoid the == and != operators on the type parameter because these operators will test for reference identity only, not for value equality. This is the case even if these operators are overloaded in a type that is used as an argument. The following code illustrates this point; the output is false even though the String class overloads the == operator.
The reason for this behavior is that, at compile time, the compiler only knows that T is a reference type, and therefore must use the default operators that are valid for all reference types. If you must test for value equality, the recommended way is to also apply the where T : IComparable<T> constraint and implement that interface in any class that will be used to construct the generic class.
Type parameters that have no constraints, such as T in public class SampleClass<T>{}, are called unbounded type parameters. Unbounded type parameters have the following rules:
The != and == operators cannot be used because there is no guarantee that the concrete type argument will support these operators.
They can be converted to and from System.Object or explicitly converted to any interface type.
You can compare to null. If an unbounded parameter is compared to null, the comparison will always return false if the type argument is a value type.. | https://msdn.microsoft.com/en-us/library/d5x73970(v=vs.110).aspx | CC-MAIN-2015-18 | refinedweb | 520 | 51.38 |
KIMAP2::FetchJob::FetchScope
#include <fetchjob.h>
Detailed Description
Used to indicate what message data should be fetched.
This doesn't provide the same fine-grained control over what is fetched that the IMAP FETCH command normally does, but the common cases are catered for.
Definition at line 74 of file fetchjob.h.
Member Enumeration Documentation
Used to indicate what part of the message should be fetched.
Definition at line 82 of file fetchjob.h.
Member Data Documentation
Specify to fetch only items with mod-sequence higher then
changedSince.
The server must have CONDSTORE capability (RFC4551).
Default value is 0 (ignored).
Definition at line 173 of file fetchjob.h.
Enables retrieving of Gmail-specific extensions.
The FETCH response will contain X-GM-MSGID, X-GM-THRID and X-GM-LABELS
Do NOT enable this, unless talking to Gmail servers, otherwise the request may fail.
Definition at line 183 of file fetchjob.h.
Specify what message data should be fetched.
Definition at line 163 of file fetchjob.h.
Specify which message parts to operate on.
This refers to multipart-MIME message parts or MIME-IMB encapsulated message parts.
Note that this is ignored unless
mode is Headers or Content.
If
mode is Headers, this sets the parts to get the MIME headers for. If this list is empty, the headers for the whole message (the RFC-2822 headers) are fetched.
If
mode is Content, this sets the parts to fetch. Parts are fetched wholesale. If this list is empty, the whole message body is fetched (all MIME parts together).
Definition at line 158 of file fetchjob.h.
The documentation for this class was generated from the following files:
Documentation copyright © 1996-2020 The KDE developers.
Generated on Wed Jul 1 2020 23:05:58 by doxygen 1.8.11 written by Dimitri van Heesch, © 1997-2006
KDE's Doxygen guidelines are available online. | https://api.kde.org/kdepim/kimap2/html/classKIMAP2_1_1FetchJob_1_1FetchScope.html | CC-MAIN-2020-29 | refinedweb | 311 | 69.38 |
{-# OPTIONS_GHC -cpp #-} ----------------------------------------------------------------------------- -- | -- Module : Test.QuickCheck.Batch -- Copyright : (c) Andy Gill 2001 -- License : BSD-style (see the file libraries/base/LICENSE) -- -- Maintainer : libraries@haskell.org -- Stability : experimental -- Portability : non-portable (uses Control.Exception, Control.Concurrent) -- -- A batch driver for running QuickCheck. -- -- /Note:/ in GHC only, it is possible to place a time limit on each test, -- to ensure that testing terminates. -- ----------------------------------------------------------------------------- {- - Here is the key for reading the output. - . = test successful - ? = every example passed, but quickcheck did not find enough good examples - * = test aborted for some reason (out-of-time, bottom, etc) - # = test failed outright - - We also provide the dangerous "isBottom". - - Here is is an example of use for sorting: - - testOptions :: TestOptions - testOptions = TestOptions - { no_of_tests = 100 -- number of tests to run - , length_of_tests = 1 -- 1 second max per check - -- where a check == n tests - , debug_tests = False -- True => debugging info - } - - prop_sort1 xs = sort xs == sortBy compare xs - where types = (xs :: [OrdALPHA]) - prop_sort2 xs = - (not (null xs)) ==> - (head (sort xs) == minimum xs) - where types = (xs :: [OrdALPHA]) - prop_sort3 xs = (not (null xs)) ==> - last (sort xs) == maximum xs - where types = (xs :: [OrdALPHA]) - prop_sort4 xs ys = - (not (null xs)) ==> - (not (null ys)) ==> - (head (sort (xs ++ ys)) == min (minimum xs) (minimum ys)) - where types = (xs :: [OrdALPHA], ys :: [OrdALPHA]) - prop_sort6 xs ys = - (not (null xs)) ==> - (not (null ys)) ==> - (last (sort (xs ++ ys)) == max (maximum xs) (maximum ys)) - where types = (xs :: [OrdALPHA], ys :: [OrdALPHA]) - prop_sort5 xs ys = - (not (null xs)) ==> - (not (null ys)) ==> - (head (sort (xs ++ ys)) == max (maximum xs) (maximum ys)) - where types = (xs :: [OrdALPHA], ys :: [OrdALPHA]) - - test_sort = runTests "sort" testOptions - [ run prop_sort1 - , run prop_sort2 - , run prop_sort3 - , run prop_sort4 - , run prop_sort5 - ] - - When run, this gives - Main> test_sort - sort : ..... - - You would tie together all the test_* functions - into one test_everything, on a per module basis. - -} module Test.QuickCheck.Batch ( run -- :: Testable a => a -> TestOptions -> IO TestResult , runTests -- :: String -> TestOptions -> -- [TestOptions -> IO TestResult] -> IO () , defOpt -- :: TestOptions , TestOptions (..) , TestResult (..) , isBottom -- :: a -> Bool , bottom -- :: a {- _|_ -} ) where import Prelude import System.Random #ifdef __GLASGOW_HASKELL__ import Control.Concurrent #endif import Control.Exception hiding (catch, evaluate) import qualified Control.Exception as Exception (catch, evaluate) import Test.QuickCheck import System.IO.Unsafe data TestOptions = TestOptions { no_of_tests :: Int, -- ^ number of tests to run. length_of_tests :: Int, -- ^ time limit for test, in seconds. -- If zero, no time limit. -- /Note:/ only GHC supports time limits. debug_tests :: Bool } defOpt :: TestOptions defOpt = TestOptions { no_of_tests = 100 , length_of_tests = 1 , debug_tests = False } data TestResult = TestOk String Int [[String]] | TestExausted String Int [[String]] | TestFailed [String] Int | TestAborted Exception tests :: Config -> Gen Result -> StdGen -> Int -> Int -> [[String]] -> IO TestResult tests config gen rnd0 ntest nfail stamps | ntest == configMaxTest config = return (TestOk "OK, passed" ntest stamps) | nfail == configMaxFail config = return (TestExausted "Arguments exhausted after" ntest stamps) | otherwise = do (if not (null txt) then putStr txt else return ()) case ok result of Nothing -> tests config gen rnd1 ntest (nfail+1) stamps Just True -> tests config gen rnd1 (ntest+1) nfail (stamp result:stamps) Just False -> do return (TestFailed (arguments result) ntest) where txt = configEvery config ntest (arguments result) result = generate (configSize config ntest) rnd2 gen (rnd1,rnd2) = split rnd0 batch n v = Config { configMaxTest = n , configMaxFail = n * 10 , configSize = (+ 3) . (`div` 2) , configEvery = \n args -> if v then show n ++ ":\n" ++ unlines args else "" } -- | Run the test. -- Here we use the same random number each time, -- so we get reproducable results! run :: Testable a => a -> TestOptions -> IO TestResult run a TestOptions { no_of_tests = n, length_of_tests = len, debug_tests = debug } = #ifdef __GLASGOW_HASKELL__ do me <- myThreadId ready <- newEmptyMVar r <- if len == 0 then try theTest else try (do -- This waits a bit, then raises an exception in its parent, -- saying, right, you've had long enough! watcher <- forkIO (Exception.catch (do threadDelay (len * 1000 * 1000) takeMVar ready throwTo me NonTermination return ()) (\ _ -> return ())) -- Tell the watcher we are starting... putMVar ready () -- This is cheating, because possibly some of the internal message -- inside "r" might be _|_, but anyway.... r <- theTest -- Now, we turn off the watcher. -- Ignored if the watcher is already dead, -- (unless some unlucky thread picks up the same name) killThread watcher return r) case r of Right r -> return r Left e -> return (TestAborted e) #else Exception.catch theTest $ \ e -> return (TestAborted e) #endif where theTest = tests (batch n debug) (evaluate a) (mkStdGen 0) 0 0 [] -- | Prints a one line summary of various tests with common theme runTests :: String -> TestOptions -> [TestOptions -> IO TestResult] -> IO () runTests name scale actions = do putStr (rjustify 25 name ++ " : ") f <- tr 1 actions [] 0 mapM fa f return () where rjustify n s = replicate (max 0 (n - length s)) ' ' ++ s tr n [] xs c = do putStr (rjustify (max 0 (35-n)) " (" ++ show c ++ ")\n") return xs tr n (action:actions) others c = do r <- action scale case r of (TestOk _ m _) -> do { putStr "." ; tr (n+1) actions others (c+m) } (TestExausted s m ss) -> do { putStr "?" ; tr (n+1) actions others (c+m) } (TestAborted e) -> do { putStr "*" ; tr (n+1) actions others c } (TestFailed f num) -> do { putStr "#" ; tr (n+1) actions ((f,n,num):others) (c+num) } fa :: ([String],Int,Int) -> IO () fa (f,n,no) = do putStr "\n" putStr (" ** test " ++ show (n :: Int) ++ " of " ++ name ++ " failed with the binding(s)\n") sequence_ [putStr (" ** " ++ v ++ "\n") | v <- f ] putStr "\n" bottom :: a bottom = error "_|_" -- |) isBottom :: a -> Bool isBottom a = unsafePerformIO (do a' <- try (Exception.evaluate a) case a' of Left _ -> return True Right _ -> return False) | http://www.haskell.org/ghc/docs/6.8.2/html/libraries/QuickCheck/src/Test-QuickCheck-Batch.html | CC-MAIN-2014-35 | refinedweb | 887 | 54.15 |
Presentation References
AMIGO Program
Deployment Portal: Planning an Upgrade
Remedy 9.x Upgrade Enablement
Upgrading Atrium Core – What you should know to prevent upgrade failures
Knowledge Article 000097520
Recommended Hotfixes
Webinar Q&A
________________________________________________________________
Q: Is there a possibility to install a clean 9.x environment and import Customizations and Data from a 8.1 system afterwards? What is the best way to achieve this?
A: Yes, Customers can either upgrade existing env to preserve transactional data or setup Fresh OOB 9.x env and migrate, overlays, foundation and config data and go live with new version if they don’t want to preserve transactional data. If customer want to migrate both foundation/config and transactional data and go live with Fresh 9.1, yes, they can use this approach but they need to be technically strong and understand the upgrade process internals. We will publish a white paper on this with various scenarios by end of next month.
________________________________________________________________
Q: Currently on ARS 7.5/ITSM 7.6 and want to upgrade to 9.1. We have 2 options available Upgrade or Fresh implementation. Is there a recommendation between the two options?
A: Choosing option between upgrade vs Fresh implementation is based on your decision of keeping Transactional data or not. As you are coming from overlay unaware env, you need to upgrade to 8.1 and follow overlay conversion and all other tasks then upgrade to 9.x. If you want to preserve Transactional data then you need to upgrade otherwise Fresh install and migrate foundation/config data.
Upgrade from ARS 7.5/ITSM 7.6 to 9.1 should be done in two step process, upgrade to 7604 SP5, Create overlays, then upgrade to 9.1
Fresh install and migrate data from old env : setup 9.1 env, migrate foundation/config data. Migrating transactional data is little challenging as you need to define the mappings and changes to DDM XML files
________________________________________________________________
Q: Details on setting up upgrade servers using staged vs accelerated (database-only) approach?
A: This is documented here:. There are 2 options for setting up the staged server; we have removed the "accelerated" option and replaced it with the second option of simply installing the original ARServer version only. And then proceeding with the upgrade.
________________________________________________________________
Q: Want to upgrade from Remedy 8.1 to 9.1 with SQLServer 2016 and Windows 2016 R2, but 8.1 was released prior to the Release of Microsoft 2016 products. Is this supported?
A: BMC policy on "XYZ or higher" is with an assumption of maintaining previous compatibility, but has not explicitly been tested. It also is not recommended to perform OS, DB, and Remedy upgrade at the same time.
________________________________________________________________
Q: In the past upgrade failed on case insensitive setup. Do upgrades 9.1 to SP02 or SP03 support Oracle 12c setup case insensitive (all indexes are converted to functional indexes)?
A: The install will now succeed but it will not convert regular indexes to Linguistic. This still has to be done manually.
________________________________________________________________
Q: In a single AR System Server environment how does one ensure that escalations will not start up after upgrading from 7.6.04 to 9.1.02?
A: There are several steps to ensure that escalations are disabled once the upgrade is complete. If the server is configured as a member of a server group then first remove it from the group, set the escalation ranking to blank and select the Disable Escalations option.
________________________________________________________________
Q: Never have done an upgrade, but have detailed documentation of all customizations done. How can I approximate how much time it will take to reconcile these customizations? Trying to get an idea of total upgrade time.
A: The time needed to reconcile customizations will vary depending on their scale, complexity and relevance when taking into account functional changes in the new versions of the applications. The best way to estimate the required time is to test the upgrade and reconcile a representative group of customizations to get an idea of how long each will take and scale from there.
________________________________________________________________
Q: We are upgrading from 7.6.04 to 9.1 this year. Can we continue to use our Remedy User fat client? We have CTI integrations with ININ/I3 that will not work as web based calls.
A: Yes, the User Tool client will continue to work with 9.1 servers in the same way that it did with earlier versions. Although the client has been deprecated there have not been any changes which will prevent it from working or reduce its functionality. You may use the fat client for accessing simple forms such as server information page, user form etc.
________________________________________________________________
Q: Saw webinar advertised for ITSM 9.1 Service Pack 3 and Smart IT 1.6, when will they be available?
A: The products will be available June 2017.
________________________________________________________________
Q: Are SR migrating possible and simple from version 7.6.04 to version 9 or is it necessary to perform some manual treatment?
A: If you are upgrading the environment all SRDs would be upgraded. For Advanced Interface Forms (AIFs) there are differences between these versions that require update to your custom forms. This process is documented in the SRM documentation. You can migrate from 7604 to 9.x but you need to understand the behind the scene logic to fix some issues or open each SRD and navigate through wizard, save and publish. BMC Best practice is to migrate SRDs from same version.
________________________________________________________________
Q: Question about the new hierarchical group security in v9.1x. We currently have multiple companies defined for our different business units, ex Corporate, Research, IT etc. All of these companies are in same physical company though and should be able to submit/view change records. What is best way to facilitate this with V9.1? It seems that following the initial install support groups are set to have the parent company as parent group but that isolates each company’s data. How can we easily have a top level group that is above all of the other companies?
A: You can create a "Super Company", give everyone access to it and then the "Super Company" a parent group of your other companies as needed.
________________________________________________________________
Q: we are planning an upgrade from ITSM 8.1 to ITSM 9.1 and want to break the upgrade into small tasks, based on compat info:
- Upgrade midtier to 9.x
- Migrate from atriumSSO to RSSO 9
- Upgrade (ARS, ITSM, and reconcile customizations)
A quick first attempt with MT and RSSO showed some problems with Atrium CMDB
A: Java and Tomcat versions should be reviewed, as long as updated versions are used on both components it should be fine. Agents should change, Midtier authenticator should change it is possible that the AR configuration changes minimally if the Identity provided didn’t change only the connector. Recommend opening a support case to review this situation.
________________________________________________________________
Q: Smart IT: Is it possible to use end user filter by default?
A: You can set any filter, even those created by a user, to be the default filter for that user. It is controlled by each user... they can set their default filter. Also, they can create their own filters.
________________________________________________________________
Q: Smart IT: Can the REQ number be part of the filters and columns displayed?
A: if you are viewing Incidents and what to see the Request ID that might have caused that Incident to be created we do have a column for Request ID... click on the three dots on the right and add it to the view.
________________________________________________________________
Q: Smart IT: When shall we be able to search by product and operation cat?
A: For Assets, we can search by Product Cat. We don't have Product Cat/Op Cat as a search option for Incidents on the Ticket Console. Parent ID field: can it be used for WorkOrders to display the REQ number (currently looks to work only for tasks).
________________________________________________________________
Q: AR System 9.1.01 to 9.1.02 (latest service pack). Information about doing this type of upgrade.
A: Upgrading a Service Pack is a standard process. You would choose an upgrade path and perform the upgrade. The AR Server can be done as part of a ZDT (Zero Downtime Upgrade). For more information review the documentation -
________________________________________________________________
Q: Biggest difference between 9.1 and 8.1 upgrade?
A: 8.1 had many improvements to upgrading in comparison to earlier versions. With 9.1 there have been additional improvements: Configuration Checker run as part of the installer to identify problems prior to upgrading, installation times have been improved in comparison to earlier versions, and there have been many application performance improvements and automated upgrade tasks. Full Text Search (FTS) has been improved in 9.1 as well.
________________________________________________________________
Q: Have 8.1 with 3 servers in a server group, and 2 load balanced midtier, with ASSO. As we are planning to upgrade to 9.1 should we do a staged upgrade?
A: If you can accept downtime, then In Place would be ok, but if they cannot accept much downtime a staged server upgrade would be the better option. Detailed information is available in the documentation - and sample upgrade plans are available via the AMIGO collateral in KA#11571 -
________________________________________________________________
Q: Currently on AR System 8.1 Sybase and want to migrate to AR System 9.1 SQLServer or Oracle DB (decision pending) on different hardware. Do we load AR System 9.1 on the new hardware, then migrate to the new database? or Load AR System 8.1 on the new hardware, then migrate to the new database, and then upgrade to 9.1.
A: Although there are 3rd party database tools that exist to migrate data from one database type to another they may not convert/represent the data properly. Better to install 9.1 clean on your new hardware and new database, then use tools such as Migrator, DDM, etc. to migrate the data. Also depends on amount of data, you can choose DDM tool or you can use 3rd party DB migrations to migrate data and then you need to run all MSM:Migration tasks related to 9.1 and POST DDM scripts. You need to export overlays and import on 9.1 and use 3-way recon to reconcile overlays. You can build SLAs using SLM Console. You need to build FTS. We will publish a white paper with detailed steps by end of June 2017. If you are targeting your upgrade and need immediate help, Please submit support ticket for additional help on this.
________________________________________________________________
Q: AMIGO Program has useful information. We are a partner and want to get KA's that can be reviewed to help prepare for upgrades.
A: The AMIGO Program KA#11571 has links, sample test & upgrade plans, and other documentation resources. These can be found in the KA and are pdfs attached to the KA you can download.
________________________________________________________________
Q: Doing a staged server installation after restoring the database does anything else beyond AR need to be installed?
A: No, only "current" production AR System server needs to be installed prior to performing the 9.1 upgrade.
________________________________________________________________
Q: Different database architecture migration - Sybase to SQLServer, are there suggested tools?
A: No suggestion, but Engineering team helped one of the customer where they used DB Best tool to migrate 4TB data from DB2 database to SQL Server. It truncates the tables in the destination and migrate data. No specific recommendation, but your choice on the tool you use. Also, Alderstone is a 3rd party service that does database level migration.
________________________________________________________________
Q: Currently at Version 8.1.02 with MyIT and SmartIT, what is the impact upgrading to 9.1 on MyIT and/or SmartIT?
A: After upgrading or applying a ITSM patch you need to re-apply the User Experience patch again. This will take care of the integrations between AR and MyIT/Smart IT.
________________________________________________________________
Q: To upgrade from 8.1 to 9.x it should be across the board - midtier, arserver, etc. or is it okay to upgrade just the midtier initially?
A: If you are not using CMDB, then you can upgrade midtier so it is on a different version from the AR Server.
________________________________________________________________
Q: Two separate 8.1.02 Environment (one for IT Users and another business users) and would like to understand the multitenancy feature in terms of upgrading.
A: As they are separate environments they cannot be upgraded and combined into one environment. As there maybe ID conflicts of record data. The multi-tenancy feature is an application feature similar to previous versions. It does not allow support of duplicate forms (e.g. user forms, incident, etc.). That isn't the way the feature works.
________________________________________________________________
Q: Migrating ITSM 7.6.04 to ITSM 9.1 new hardware environment. The customizations are done via overlays. Is it supported to migrate 7.6.04 overlays to 9.1 using migrator?
A: There is a major architectural changes with reference to AST forms hence exporting overlays from 7604 and importing on 9.1 will be a challenging task. In the Normal upgrade, installer scripts delete or modify the fields on the form to match target version of forms. If your 7604 overlay points to the field that doesn’t exist in 9.1 then overlay may not be imported. You need to spend lot of time in troubleshooting. You can setup 7604 OOB env , import overlays from your prod, upgrade to 9.1, convert overlays to Granular overlays, do 3-way recon and then export these overlays and import on new 9.1 env. Adjusting customizations when upgrading -
________________________________________________________________
Q: See a lot of entries in ft_pending for over a year that were not processed.
A: Review, many of entries maybe pointing to incorrect servers based on configuration changes you made. So, these can safely be removed.
________________________________________________________________
Q: AR System database backup from source system, it was recommended to stop all services first. Is that still a requirement?
A: There is a concern when taking a backup of a "running" database is in regards to transactional consistency. When doing this for upgrade, it is recommended to ensure all services are stopped to ensure the "state" of the database. This also depends on the type of database you are using SQL Server, Oracle, etc. In Oracle, it will backup tables in alphabetic order Bx table, Hx table, Tx table, etc. if data was added each of the tables for each form would have different record count. This would cause a problem when performing DDM to move the new/changed data. For SQLServer have not observed a problem.
________________________________________________________________
Q: Is there somebody we can contact if we have further questions after the webinar?
A: You can open a support case based on the product component (Server, midtier, apps, etc.) and they can help.
________________________________________________________________
Q: In our 8.1 Atrium, we never performed phase 3 of the installer.....how does that impact moving to 9.1?
A: ITSM installer will give you error message if Phase 3 is not completed in prior upgrade.
Here are manual steps to complete phase 3 / Attribute deletion:
1. Run ITSM_DeleteAttributes.drm using cmdbdriver executable
Eg: One can launch ITSM installer and cmdbdriver will be present in TEMP\Utilities or in \rik path
• Open command prompt and set required SYSTEM variable like LD_LIB path to TEMP\Utilities
• ./cmdbdriver -s ar_server -u AR_USER -p “password” -t 0 -x //bmc/BMCRemedyITSMSuite/Workflow/phase3/applications/raf/workflow/en/ITSM_DeleteAttributes.drm
<Path to CMDB Driver>\cmdbdriver.exe -s <Server Name> -u <User Name> -p <Password> -t <TCD PORT> -x "C:\Program Files\BMC Software\BMCRemedyITSMSuite\Workflow\phase3\applications\raf\workflow\en\ITSM_DeleteAttributes.drm"export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/arsys/ARSystem/bin
2. Update SHARE:App properties to 3-Complete for RAF.
3. Delete objects from itsm_phase3_post_bulkobject_deletion. This xml can be found in raf\workflow\en path.
________________________________________________________________
Q: What is the default number of escalation pools in 9.x and does the out of the box escalation take A:
A: Some of the applications now include escalations set to run on additional pools but the server defaults to only one. The thread count for escalations will need to be increased to take advantage of the pool configuration changes.
________________________________________________________________
Q: Upgrading from 8.1 with no SP. What sort of impact to Web Service Integrations should we expect? And to other types of integrations? Are there types of integrations that you would expect to have issues and likely to have to be rebuilt?
A: If your question specific to AtriumWS? the answer is no. AtriumWS is the same as before. However, we have REST API functional for ARS and the CMDB
________________________________________________________________
Q: Regarding the BMC encryption, is there an issue with upgrading to 9.x when it is enabled?
A: Is this the built-in or Performance/Premium encryption? In the case of the former it should be transparent. The latter may need to be disabled during the upgrade but we will confirm this in the final version of these questions.
________________________________________________________________
Q: We have other systems, like monitoring and others, that create incidents in remedy and we return information, so bi-directional. Would you expect those types of web service integrations to have any issues? We also have some AI Jobs too?
A: Once the webservices are configured they function well and transfer data without major issues. There has been major changes on Webservices integrations, I would recommend not only considering upgrading to 9.x but to install the latest SP and patch available at the time.
________________________________________________________________
Q: Does this approach work with Remedy 7.1 which is a lot older.
A: The docs contain a section on upgrading from earlier versions 000097520 - Upgrading AtriumCore from 7.5 to 8.1 and 9.0 - What you should know to prevent upgrade failures. You can also check KA#11571 the AMIGO Program and open a case to discuss alternatives to upgrading, such as setting up a new 9.1 environment.
________________________________________________________________
Q: could you post that KB number to fix the install issue of 8.x installer against MS SQL 2016 / MS win 2016 to the chat?
A: It's in the Communities at the moment -- it will be added to the Knowledge Base soon
________________________________________________________________
Q: We have AR 8.1 and all customized applications. If we upgrade to AR 9.1, do you see any impact to our customized application? Is there any built in functionality available in AR 8.1 but not available in AR 9.1?
A: No features have been removed but new ones have been added and there have been some changes to functionality – e.g. web services support for SOAP 1.1 - that should be tested to ensure they work as expected. Please see the documentation for details -
________________________________________________________________
Q: Within the new R9 Smart IT we miss the possibility to search by product and operational catalogue. Is this possible?
A: There is current no pre-defined filter for product or operational categorization. That's something we are considering to add as part of the product roadmap. Today you can add those fields as columns in the console, and you could sort by these columns. There's ideas in the BMC Community for this enhancement. Feel free to vote on the idea to make sure that it reflects the demand in the customer base.
________________________________________________________________
Q: Is it a good practice to import the 7.6 Custom workflows after upgrade on a 9.1 Workflows And also is it possible to compare Custom Workflows in 3 Way Recond ?
A: If the custom workflow has been created as overlays they will be preserved during the upgrade and you will reconcile them after the upgrade. There is no separate step to import them. If you are exporting manually from 7604 and importing on 9.1 might overwrite base objects with older version and you would see functional issues.
________________________________________________________________
Q: We use Interactive Intelligence (I3) for our CIC solution. Our Tech Support Engineers do "cherry picking" from the queue. So, that is custom and different than most companies.
A: Suggest you open an AMIGO case to get some additional help as it a complex case. I would consider the Oracle upgrade as a separate step and consider adding the Redhat system as a new member of a server group on the current version. Then upgrade
________________________________________________________________
Q: We changed some native BMC objects (Active Links, Filters, etc) in Best Practice Mode (we created Overlays). When we upgrade our environment (7.6.04 to 9.x), will we be able to identify these custom native objects during the upgrade phases?
A: Yes, the overlays will be preserved and you can identify them as part of the reconciliation process
________________________________________________________________
Q: what was the KB article for AMIGO links and attachments? I think KB11711 was mentioned, but can't find it
A: Article 000011571
________________________________________________________________
Q: What about the REQ number related to INCs, WOs,... can it be configured to display it as a column?
A: You can show service requests as records in the Ticket Console, but for a backend record (incident, workorder, change) you cannot show the REQ ID as a column in the console. We currently only show parent request ID for tasks as column in the console. Overall, we are looking to expand both the filter and the column configuration options in the Smart IT console to give our customers more flexibility.
________________________________________________________________
Q: Is there some native functionality in Remedy 9 that allows the user to respond to a satisfaction survey or approve a change request through a button directly from the email?
A: Email based approval for change request: yes, this is an OOB feature. This was added in v8.1 and is still available in Remedy v9. And yes, we have options to send our surveys to users, after the completion of a service request. You can configure survey questions, and the answers of the user are recorded and can be reported on. Note that with our latest MyIT / Digital Workplace version, we also introduce a simple satisfaction survey - allowing user just to click on some smiley faces to select, how satisfied they were with the completion of the request. For change approval, I should also point out that you can do this now on a mobile device via MyIT / Digital Workplace or via Smart IT.
________________________________________________________________
Q: Today we have 8.1 and use SAP Crystal Enterprise for reporting. Does this change with 9.1?
A: No, the functionality remains the same. While this continue to work for backwards compatibility, we highly recommend that you use Smart Reporting, once you have migrated to 9.1. This provides a highly modern, very easy-to-use web reporting capability. And as long as you use Smart Reporting to report on ITSM data in Remedy, you don't need a separate license for it. The Remedy ITSM user licenses entitle to access Smart Reporting in that use case.
________________________________________________________________
Q: In SAP Crystal enterprise there is Universe type functionality to make more relational connections to Remedy. Has BMC worked with this universe connection and SAP previously? I could not find any data points about how to use it with Remedy.
A: If you can access/report against the universe using the AR ODBC driver within Crystal Report Designer, Remedy should be able to run this as well. There has not been any concerted effort toward this type of integration.
________________________________________________________________
Q: In version 9.x, is it possible for an SR from SRM to directly open a Release or Problem record? Or only the records available today in version 7.6.04 (INC, WO, CRQ)?
A: Not OOTB. But SRM had always provided an option to build your own AOT that do custom activity, and as far as I know, you could use it to create e.g. a problem record. Per ITIL best practice Release or Problem requests are not originated by end user.
________________________________________________________________
Q: Again, upgrading from 8.1 no SP and upgrading to 9.1 latest SP, are there any modifications to the CMDB? Class modifications? Or any fields being moved from CDMB to Asset Management again?
A: I think you're referring to the Phase 3 that changed most of AM namespace attributes to AST:Attributes. No, there is no additional changes like that. We have added some new attributes to CMDB in 9.1 and have flattened (denormalized) the Common Data Model
________________________________________________________________
Q: I see. No we cannot use the AR ODBC driver with Crystal Enterprise. The new SAP Business Objects Business Intelligence tool connects via a MS SQL Server OLE DB ODBC connection.
A: That would not be supported. Remedy only supports integration to Crystal using the AR ODBC driver
________________________________________________________________
Q: we are in upgrade process from 7.6 to 9.1 (ARS and CMDB only) not ITSM. There are many class created under CMDB and so huge data called Golden Data sets. What will be best approach to keep the same instance id while migration. Is it possible?
A: During upgrade your data will not be modified, Installer will auto populate data to newly added columns. So instance ids will not be changed during upgrade.
________________________________________________________________
Q: We are upgrading from 6.3 version to 9x. Need to understand if user views need to be recreated or the old views will create any inconsistency ?
A: The recommended approach is to upgrade to 8.1 first, converting your customizations to overlays, and upgrade to 9.1 from there
________________________________________________________________
Q: Where can I find the system architecture diagram of remedy 9.1 which has all of the layers, and how they are connected? Has this been change significantly from v7.1?
A: we don’t have any data model diagram
________________________________________________________________
Q: It was recommended not to run an upgraded midtier on 9 without upgrading AR Server. What was the impact of running an environment like this?
A: Consider that the recommendation to upgrade midtier and AR server is linked to the use of CMDB. If you are not using CMDB you can upgrade midtier or AR server independently in a supported way.
________________________________________________________________
Q: Can you give some info on SSO?
A: Start here:
________________________________________________________________
Q: Do we need a dedicated server for both (MyIT and Smart IT) applications?
A: Yes the recommendation is to have a dedicated server for MyIT/SmartIT than your ITSM Server to get performance benefit
________________________________________________________________
Q: Has there ever been a data dictionary available under confidentiality to Customers in order to build Adhoc type reports?
A: In Remedy v9, when you go to Smart Reporting, you see a more consumption-/reporting-friendly data model. We make use of the association concept in Remedy 9. So, building ad-hoc reports is very easy in Remedy v9.
________________________________________________________________ | https://communities.bmc.com/docs/DOC-95002 | CC-MAIN-2018-30 | refinedweb | 4,513 | 66.94 |
#include <iostream>without the .h and main should be declared as int, not void,
int main(). (Perhaps you are using an outdated compiler, I'd recommend using a current version).
ch=ch+1;
'A' + 1will always evaluate to
'B'(or that it will never result in undefined behaviour).
charis unsigned.
int('A')has a numeric value less than
std::numeric_limits<int>::max()
int('A')would be > int's maximum is if all of the following are true:
(1<<2) == 4is not guaranteed.
int('A')(or any
intfor that matter) can never be grater than the maximum value that an
intcan hold.
some_char + 1here. Integral promotions in C++ have always been value-preserving; under those conditions, a
charwould be promoted to an
unsigned int.
some_char + 1:
(1<<2) == 4is guaranteed since
std::numeric_limits<unsigned char>::digitsis guaranteed to be >= 8.
int_min <= 'A' <= int_maxto be false was a combination of all of those conditions.
int_min <= 'A' <= int_max, that char is unsigned is neither a necessary or a sufficient condition.
int_min <= 'A' <= int_maxis guaranteed if and only if the default char is signed.
sizeof(char) == sizeof(int), a char is promoted to an unsigned int;
some_char + 1does not result in undefined behaviour.
sizeof(char) == sizeof(int), a char is promoted to an int;
some_char + 1can result in undefined behaviour.
char_min <= 'A' <= char_maxmust be true because 'A' is a char.
intis guaranteed to be as big or bigger than a
char. This means that:
signed_int_min <= signed_char_minand
signed_int_max >= signed_char_max
char_max > int_maxto be true is if int is signed, and char is unsigned.
'A' > int_maxmust also be true.
'A' == char_max, that I know of anyway. 'A'-'Z' are stored sequentially pretty much universally (barring reaaally old systems whose software had custom tables for their text) | http://www.cplusplus.com/forum/beginner/105406/ | CC-MAIN-2017-26 | refinedweb | 289 | 57.47 |
This:
How GDI+ compares to GDI
How GDI+ is defined and used in the .NET Framework
How to draw, paint, and fill graphics objects
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Viewing and manipulating images
Transforming graphics objects, images, and colors
Printing in .NET
How to develop GDI+ Web applications
How to optimize drawing quality and performance
Interactive color blending and transparent colors
GDI interoperability
Answers to frequently asked GDI+ questions
Graphics Programming in GDI+ is the most in-depth treatment available on writing effective graphics applications for the .NET Framework.
[ Team LiB ]
.
This
Praise for Graphics Programming with GDI+
Microsoft .NET Development Series
Figures
Tables
Acknowledgments
Introduction
Who Is This Book For?
Prerequisites
What's in This Book That I Won't See in Other Books?
Chapter Organization
Example Source Code
Exception and Error Handling in the Samples
SUMMARY
Chapter 1. GDI+: The Next-Generation Graphics Interface
Section 1.1. Understanding GDI+
Section 1.2. Exploring GDI+ Functionality
Section 1.3. GDI+ from a GDI Perspective
Section 1.4. GDI+ Namespaces and Classes in .NET
Summary
Chapter 2. Your First GDI+ Application
This document was created by an unregistered ChmMagic, please go to to register it. Thanks.
Section 2.1. Drawing Surfaces
Section 2.2. The Coordinate System
Section 2.3. Tutorial: Your First GDI+ Application
Section 2.4. Some Basic GDI+ Objects
SUMMARY
Chapter 3. The Graphics Class
Section 3.1. Graphics Class Properties
Section 3.2. Graphics Class Methods
Section 3.3. The GDI+Painter Application
Section 3.4. Drawing a Pie Chart
SUMMARY
Chapter 4. Working with Brushes and Pens
Section 4.1. Understanding and Using Brushes
Section 4.2. Using Pens in GDI+
Section 4.3. Transformation with Pens
Section 4.4. Transformation with Brushes
Section 4.5. System Pens and System Brushes
Section 4.6. A Real-World Example: Adding Colors, Pens, and Brushes to the GDI+Painter Application
SUMMARY
Chapter 5. Colors, Fonts, and Text
Section 5.1. Accessing the Graphics Object
Section 5.2. Working with Colors
Section 5.3. Working with Fonts
Section 5.4. Working with Text and Strings
Section 5.5. Rendering Text with Quality and Performance
Section 5.6. Advanced Typography
Section 5.7. A Simple Text Editor
Section 5.8. Transforming Text
SUMMARY
Chapter 6. Rectangles and Regions
Section 6.1. The Rectangle Structure
Section 6.2. The Region Class
Section 6.3. Regions and Clipping
Section 6.4. Clipping Regions Example
Section 6.5. Regions, Nonrectangular Forms, and Controls
SUMMARY
Chapter 7. Working with Images
Section 7.1. Raster and Vector Images
Section 7.2. Working with Images
Section 7.3. Manipulating Images
Section 7.4. Playing Animations in GDI+
Section 7.5. Working with Bitmaps
Section 7.6. Working with Icons
Section 7.7. Skewing Images
Section 7.8. Drawing Transparent Graphics Objects
This document was created by an unregistered ChmMagic, please go to to register it. Thanks.
Section 7.9. Viewing Multiple Images
Section 7.10. Using a Picture Box to View Images
Section 7.11. Saving Images with Different Sizes
SUMMARY
Chapter 8. Advanced Imaging
Section 8.1. Rendering Partial Bitmaps
Section 8.2. Working with Metafiles
Section 8.3. Color Mapping Using Color Objects
Section 8.4. Image Attributes and theImageAttributes Class
Section 8.5. Encoder Parameters and Image Formats
SUMMARY
Chapter 9. Advanced 2D Graphics
Section 9.1. Line Caps and Line Styles
Section 9.2. Understanding and Using Graphics Paths
Section 9.3. Graphics Containers
Section 9.4. Reading Metadata of Images
Section 9.5. Blending Explained
Section 9.6. Alpha Blending
Section 9.7. Miscellaneous Advanced 2D Topics
SUMMARY
Chapter 10. Transformation
Section 10.1. Coordinate Systems
Section 10.2. Transformation Types
Section 10.3. The Matrix Class and Transformation
Section 10.4. The Graphics Class and Transformation
Section 10.5. Global, Local, and Composite Transformations
Section 10.6. Image Transformation
Section 10.7. Color Transformation and the Color Matrix
Section 10.8. Matrix Operations in Image Processing
Section 10.9. Text Transformation
Section 10.10. The Significance of Transformation Order
SUMMARY
Chapter 11. Printing
Section 11.1. A Brief History of Printing with Microsoft Windows
Section 11.2. Overview of the Printing Process
Section 11.3. Your First Printing Application
Section 11.4. Printer Settings
Section 11.5. The PrintDocument and Print Events
Section 11.6. Printing Text
Section 11.7. Printing Graphics
Section 11.8. Print Dialogs
Section 11.9. Customizing Page Settings
Section 11.10. Printing Multiple Pages
Section 11.11. Marginal Printing: A Caution
Section 11.12. Getting into the Details: Custom Controlling and the Print Controller
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
SUMMARY
Chapter 12. Developing GDI+ Web Applications
Section 12.1. Creating Your First ASP.NET Web Application
Section 12.2. Your First Graphics Web Application
Section 12.3. Drawing Simple Graphics
Section 12.4. Drawing Images on the Web
Section 12.5. Drawing a Line Chart
Section 12.6. Drawing a Pie Chart
SUMMARY
Chapter 13. GDI+ Best Practices and Performance Techniques
Section 13.1. Understanding the Rendering Process
Section 13.2. Double Buffering and Flicker-Free Drawing
Section 13.3. Understanding the SetStyle Method
Section 13.4. The Quality and Performance of Drawing
SUMMARY
Chapter 14. GDI Interoperability
Section 14.1. Using GDI in the Managed Environment
Section 14.2. Cautions for Using GDI in Managed Code
SUMMARY
Chapter 15. Miscellaneous GDI+ Examples
Section 15.1. Designing Interactive GUI Applications
Section 15.2. Drawing Shaped Forms and Windows Controls
Section 15.3. Adding Copyright Information to a Drawn Image
Section 15.4. Reading and Writing Images to and from a Stream or Database
Section 15.5. Creating Owner-Drawn List Controls
SUMMARY
Appendix A. Exception Handling in .NET
Section A.1. Why Exception Handling?
Section A.2. Understanding the try...catch Block
Section A.3. Understanding Exception Classes
SUMMARY
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
[ Team LiB ]
Chand, Mahesh
Graphics programming with GDI+ / Mahesh Chand.
p. cm.
ISBN 0-321-16077-0 (alk. paper)
1. Computer graphics. 2. User interfaces (Computer systems) I. Title
T385.C4515 2003
006.6—dc22
2003057705:
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Pearson Education, Inc.
Rights and Contracts Department
75 Arlington Street, Suite 300
Boston, MA 02116
Text printed on recycled paper
1 2 3 4 5 6 7 8 9 10—CRS—0706050403
First printing, October 2003
Dedication
To Mel and Neel
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Praise for Graphics Programming with GDI+
"This is the most comprehensive book about graphics programming using GDI+ so far. A lot of useful sample code
inside this book reveals that Mr. Chand apparently has done a fair amount of research on GDI+. This book will be a
very useful handbook for everyone who does graphics programming for Windows."
—Min Liu, Software Design Engineer of GDI+, Microsoft Corporation
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Microsoft .NET Development Series
John Montgomery, Series Advisor
Don Box, Series Advisor
Martin Heller, Series Editor.
Titles in the Series
Keith Ballinger, .NET Web Services: Architecture and Implementation, 0-321-11359-4
Don Box with Chris Sells, Essential .NET Volume 1: The Common Language Runtime, 0-201-73411-7
Mahesh Chand, Graphics Programming with GDI+, 0-321-16077-0
Anders Hejlsberg, Scott Wiltamuth, Peter Golde,C# Language Specification, 0-321-15491-6
Alex Homer, Dave Sussman, Mark Fussell,A First Look at ADO.NET and System.Xml v. 2.0, 0-321-22839-1
Alex Homer, Dave Sussman, Rob Howard, A First Look at ASP.NET v. 2.0, 0-321-22896-0
Microsoft Common Language Runtime Team,The Common Language Runtime Annotated Reference and Specification, 0-321-15493-2
Microsoft .NET Framework Class Libraries Team, The .NET Framework CLI Standard Class Library Annotated Reference, 0-321-15489-4
Microsoft Visual C# Development Team, The C# Annotated Reference and Specification, 0-321-15491-6
James S. Miller and Susann Ragsdale,The Common Language Infrastructure Annotated Standard, 0-321-15493-2
Fritz Onion, Essential ASP.NET with Examples in C#, 0-201-76040-1
Fritz Onion, Essential ASP.NET with Examples in Visual Basic .NET, 0-201-76039-8
Ted Pattison and Dr. Joe Hummel, Building Applications and Components with Visual Basic .NET, 0-201-73495-8
Chris Sells and Justin Gehtland, Windows Forms Programming in Visual Basic .NET, 0-321-12519-3
Chris Sells, Windows Forms Programming in C#, 0-321-11620-8
Damien Watkins, Mark Hammond, Brad Abrams, Programming in the .NET Environment, 0-201-77018-0
Shawn Wildermuth, Pragmatic ADO.NET: Data Access for the Internet World, 0-201-74568-2
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
[ Team LiB ]
Figures
Figure 1.1:
The role of GDI+ 2
Figure 1.2:
The managed GDI+ class wrapper 5
Figure 1.3:
The GDI+ namespaces in the .NET Framework library 14
Figure 2.1:
Color components in GDI+ 29
Figure 2.2:
The Cartesian coordinate system 31
Figure 2.3:
The GDI+ coordinate system 32
Figure 2.4:
Figure 2.5:
Figure 2.6:
Figure 2.7:
Figure 2.8:
Figure 2.9:
[*]
Drawing a line from point (0, 0) to point (120, 80) 33
[*]
Creating a Windows application 35
[*]
Adding a reference to System.Drawing.dll 36
[*]
The System.Drawing namespace in a project 36
[*]
Adding the Form_Paint event handler 38
[*]
Your first GDI+ application 44
Figure 2.10:
Figure 2.11:
Figure 2.12:
Figure 2.13:
Figure 2.14:
[*]
Using Point to draw a line 48
[*]
Using PointF to draw a line 49
[*]
Using Rectangle to create rectangles 53
[*]
Using RectangleF to create rectangles 54
[*]
Using the Round, Truncate, Union, Inflate, Ceiling, and Intersect methods of Rectangle 57
Figure 3.1:
Using DrawLine to draw lines 67
Figure 3.2:
Using DrawLines to draw connected lines 68
Figure 3.3:
Drawing individual rectangles 69
Figure 3.4:
Drawing a series of rectangles 70
Figure 3.5:
An ellipse 71
Figure 3.6:
Drawing ellipses 72
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 3.7:
Drawing text 74
Figure 3.8:
Drawing text with different directions 76
Figure 3.9:
The line chart application 76
Figure 3.10:
The line chart application with a chart 77
Figure 3.11:
The line chart with rectangles to mark points 78
Figure 3.12:
Arcs in an ellipse 82
Figure 3.13:
A sample arc application 83
Figure 3.14:
The default arc, with start angle of 45 degrees and sweep angle of 90 degrees 84
Figure 3.15:
An arc with start angle of 90 degrees and sweep angle of 180 degrees 85
Figure 3.16:
An arc with start angle of 180 degrees and sweep angle of 360 degree 86
Figure 3.17:
Two curves 87
Figure 3.18:
Open and closed curves 87
Figure 3.19:
Drawing a curve 88
Figure 3.20:
[*]
A curve-drawing application 89
Figure 3.21:
Drawing a curve with a tension of 0.0F 91
Figure 3.22:
Drawing a curve with a tension of 1.0F 91
Figure 3.23:
Drawing a closed curve 94
Figure 3.24:
A Bézier curve 95
Figure 3.25:
Drawing Bézier curves 96
Figure 3.26:
[*]
Drawing a polygon 98
Figure 3.27:
Drawing icons 99
Figure 3.28:
A path 100
Figure 3.29:
Drawing a path 102
Figure 3.30:
Four pie shapes of an ellipse 103
Figure 3.31:
A pie shape–drawing application 103
Figure 3.32:
A pie shape with start angle of 0 degrees and sweep angle of 90 degrees 104
Figure 3.33:
A pie shape with start angle of 45 degrees and sweep angle of 180 degrees 104
Figure 3.34:
A pie shape with start angle of 90 degrees and sweep angle of 45 degrees 105
Figure 3.35:
Drawing an image 107
Figure 3.36:
Filling a closed curve 109
Figure 3.37:
Filling ellipses 110
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 3.38:
Filling a graphics path 112
Figure 3.39:
Filling a polygon 115
Figure 3.40:
Filling rectangles 115
Figure 3.41:
Using MeasureString when drawing text 119
Figure 3.42:
The GDI+Painter application 122
Figure 3.43:
A pie chart–drawing application 128
Figure 3.44:
The Draw Chart button click in action 130
Figure 3.45:
The Fill Chart button click in action 131
Figure 4.1:
Classes inherited from the Brush class 135
Figure 4.2:
Brush types and their classes 135
Figure 4.3:
Graphics objects filled bySolidBrush 137
Figure 4.4:
Figure 4.5:
Figure 4.6:
Figure 4.7:
Figure 4.8:
Figure 4.9:
[*]
A sample hatch brush application 142
[*]
The default hatch style rectangle 146
[*]
The LightDownwardDiagonal style with different colors 146
[*]
The DiagonalCross style 147
[*]
The texture brush application 148
[*]
Using texture brushes 151
Figure 4.10:
Figure 4.11:
Figure 4.12:
Figure 4.13:
Figure 4.14:
Figure 4.15:
Figure 4.16:
Figure 4.17:
Figure 4.18:
Figure 4.19:
[*]
Clamping a texture 151
[*]
The TileFlipY texture option 152
[*]
A color gradient 153
[*]
A gradient pattern with pattern repetition 153
[*]
Our linear gradient brush application 156
[*]
The default linear gradient brush output 160
[*]
The Vertical linear gradient mode 161
[*]
Using a rectangle in a linear gradient brush 162
[*]
Using LinearGradientBrush properties 163
[*]
Creating and using pens 166
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 4.20:
Figure 4.21:
Figure 4.22:
Figure 4.23:
[*]
Displaying pen types 171
[*]
Our pen alignment application 172
[*]
Drawing with center pen alignment 175
[*]
Drawing with inset pen alignment 175
Figure 4.24:
Figure 4.25:
Figure 4.26:
Figure 4.27:
Line cap and dash styles 176
[*]
Drawing dashed lines with different cap styles 179
[*]
Graphics shapes with cap and dash styles 181
[*]
Rotation and scaling 183
Figure 4.28:
Transformation in TextureBrush 186
Figure 4.29:
Transformation in linear gradient brushes 187
Figure 4.30:
Figure 4.31:
Figure 4.32:
Figure 4.33:
Figure 5.1:
Figure 5.2:
Figure 5.3:
Figure 5.4:
[*]
Transformation in path gradient brushes 189
[*]
Using system pens and system brushes 194
[*]
GDI+Painter with pen and brush support 195
[*]
GDI+Painter in action 200
[*]
Creating colors using different methods 208
[*]
Getting brightness, hue, and saturation components of a color 210
[*]
Using system colors to draw graphics objects 213
[*]
Converting colors 215
Figure 5.5:
Fonts available in Windows 217
Figure 5.6:
Font icons represent font types 219
Figure 5.7:
An OpenType font 220
Figure 5.8:
A TrueType font 220
Figure 5.9:
Font components 221
Figure 5.10:
Font metrics 225
Figure 5.11:
Figure 5.12:
Figure 5.13:
[*]
Getting line spacing, ascent, descent, free (extra) space, and height of a font 226
[*]
Using the FromHFont method 229
Fonts with different styles and sizes 232
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 5.14:
Figure 5.15:
Figure 5.16:
[*]
Alignment and trimming options 235
[*]
Drawing tabbed text on a form 237
[*]
Using FormatFlags to draw vertical and right-to-left text 240
Figure 5.17:
Figure 5.18:
Using different TextRenderingHint settings to draw text 243
[*]
Figure 5.19:
Figure 5.20:
Figure 5.21:
Figure 5.22:
Figure 5.23:
Using a private font collection 247
A simple text editor application 248
[*]
Drawing text on a form 251
[*]
Using ScaleTransform to scale text 252
[*]
Using RotateTransform to rotate text 252
[*]
Using TranslateTransform to translate text 253
Figure 6.1:
A rectangle 256
Figure 6.2:
A rectangle with starting point (1, 2), height 7, and width 6 256
Figure 6.3:
Figure 6.4:
[*]
Using Rectangle methods 260
[*]
Hit test using the Contains method 262
Figure 6.5:
Complementing regions 266
Figure 6.6:
Excluding regions 266
Figure 6.7:
Applying Union on regions 267
Figure 6.8:
Using the Xor method of the Region class 268
Figure 6.9:
Using the Intersect method of the Region class 269
Figure 6.10:
[*]
Figure 6.11:
Figure 6.12:
Figure 6.13:
Bounds of an infinite region 270
ExcludeClip output 272
[*]
Using Clip methods 274
[*]
Using TranslateClip 274
Figure 6.14:
Result of the Xor method 275
Figure 6.15:
Result of the Union method 276
Figure 6.16:
Result of the Exclude method 276
Figure 6.17:
Result of the Intersect method 277
Figure 6.18:
[*]
Client and nonclient areas of a form 278
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 6.19:
Figure 6.20:
Figure 6.21:
Figure 6.22:
Figure 7.1:
Figure 7.2:
Figure 7.3:
Figure 7.4:
Figure 7.5:
Figure 7.6:
Figure 7.7:
Figure 7.8:
Figure 7.9:
[*]
A nonrectangular form and controls 279
[*]
The nonrectangular forms application 280
[*]
A circular form 284
[*]
A triangular form 284
[*]
A zoomed raster image 289
[*]
A zoomed vector image 289
[*]
A simple image viewer application 295
[*]
Browsing a file 299
[*]
Viewing an image 300
[*]
Reading the properties of an image 304
[*]
A thumbnail image 306
[*]
Rotate menu items 308
[*]
Flip menu items 308
Figure 7.10:
Figure 7.11:
Figure 7.12:
Figure 7.13:
Figure 7.14:
Figure 7.15:
Figure 7.16:
Figure 7.17:
Figure 7.18:
Figure 7.19:
Figure 7.20:
Figure 7.21:
[*]
An image with default settings 310
[*]
The image of Figure 7.10, rotated 90 degrees 310
[*]
The image of Figure 7.10, rotated 180 degrees 311
[*]
The image of Figure 7.10, rotated 270 degrees 311
[*]
The image of Figure 7.10, flipped in the x direction 312
[*]
The image of Figure 7.10, flipped in the y direction 313
[*]
The image of Figure 7.10, flipped in both the x and the y directions 314
[*]
Fit menu items 315
[*]
An image in ImageViewer 318
[*]
The image of Figure 7.18 after Fit Width 319
[*]
The image of Figure 7.18 after Fit Height 319
[*]
The image of Figure 7.18 after Fit Original 320
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks 7.42:
Figure 7.43:
Figure 8.1:
Figure 8.2:
Figure 8.3:
[*]
The image of Figure 7.18 after Fit All 320
[*]
Zoom menu items 321
[*]
An image in ImageViewer 323
[*]
The image of Figure 7.24 with 25 percent zoom 323
[*]
The image of Figure 7.24 with 50 percent zoom 324
[*]
The image of Figure 7.24 with 200 percent zoom 324
[*]
The image of Figure 7.24 with 500 percent zoom 325
[*]
An animated image with three frames 325
[*]
An image animation example 327
[*]
The first frame of an animated image 329
[*]
The second frame of an animated image 330
[*]
A bitmap example 333
[*]
Changing the pixel colors of a bitmap 336
[*]
Viewing icons 338
[*]
A skewing application 339
[*]
Normal view of an image 341
[*]
Skewed image 342
[*]
Drawing transparent graphics objects 343
[*]
Drawing multiple images 345
[*]
Viewing an image in a picture box 348
[*]
Saving images with different sizes 349
[*]
New image, with width of 200 and height of 200 351
[*]
Using BitmapData to set grayscale 359
[*]
Changing the pixel format of a partial bitmap 361
[*]
Viewing a metafile 363
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 8.4:
Figure 8.5:
Figure 8.6:
Figure 8.7:
Figure 8.8:
Figure 8.9:
[*]
A metafile created programmatically 365
[*]
Reading metafile records 368
[*]
Reading metafile header attributes 371
[*]
Applying a color remap table 373
[*]
Wrapping images 377
[*]
Drawing semitransparent images 380
Figure 8.10:
Figure 8.11:
[*]
Applying SetGamma and SetColorKey 381
[*]
Using the SetNoOp method 382
Figure 8.12:
The relationship among Encoder, EncoderCollection, and Image 385
Figure 9.1:
Lines with different starting cap, ending cap, and dash styles 395
Figure 9.2:
Line dash style 396
Figure 9.3:
Line dash caps 396
Figure 9.4:
Figure 9.5:
Figure 9.6:
Figure 9.7:
Figure 9.8:
Figure 9.9:
[*]
Reading line caps 400
[*]
Reading line dash styles 401
[*]
Getting line dash caps 402
[*]
A rectangle, an ellipse, and a curve with different line styles 404
[*]
A line with custom caps 404
[*]
The line join test application 406
Figure 9.10:
Figure 9.11:
Figure 9.12:
Figure 9.13:
Figure 9.14:
Figure 9.15:
Figure 9.16:
Figure 9.17:
[*]
The Bevel line join effect 408
[*]
The Miter line join effect 408
[*]
The Round line join effect 409
[*]
Customized starting and ending caps 409
[*]
Setting customized starting and ending caps 411
[*]
Adjustable arrow caps 412
[*]
A simple graphics path 416
[*]
A filled graphics path 416
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 9.18:
Figure 9.19:
Figure 9.20:
Figure 9.21:
Figure 9.22:
Figure 9.23:
Figure 9.24:
Figure 9.25:
Figure 9.26:
Figure 9.27:
Figure 9.28:
Figure 9.29:
Figure 9.30:
Figure 9.31:
Figure 9.32:
Figure 9.33:
Figure 9.34:
Figure 9.35:
Figure 9.36:
Figure 9.37:
Figure 9.38:
Figure 9.39:
Figure 9.40:
Figure 9.41:
Figure 9.42:
[*]
A shaped form 417
[*]
Three subpaths 422
[*]
Nested containers 425
[*]
Drawing with different PageUnit values 428
[*]
Saving and restoring graphics states 431
[*]
Using graphics containers to draw text 433
[*]
Using graphics containers to draw shapes 435
[*]
Reading the metadata of a bitmap 437
[*]
Color blending examples 438
[*]
Transparent graphics shapes in an image using alpha blending 439
[*]
Mixed blending effects 440
[*]
Using linear gradient brushes 443
[*]
Using a rectangle in the linear gradient brush 444
[*]
Using the SetBlendTriangularShape method 445
[*]
Using the SetSigmaBellShape method 446
[*]
Comparing the effects of SetBlendTriangularShape and SetSigmaBellShape 447
[*]
Setting the center of a gradient 448
[*]
A multicolor gradient 450
[*]
Using blending in a linear gradient brush 452
[*]
Blending using PathGradientBrush 454
[*]
Setting the focus scale 455
[*]
Blending multiple colors 456
[*]
Using the InterpolationColors property of PathGradientBrush 457
[*]
Multicolor blending using PathGradientBrush 459
[*]
Drawing semitransparent graphics shapes 461
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 9.43:
Figure 9.44:
Figure 9.45:
Figure 9.46:
Figure 9.47:
Figure 9.48:
Figure 10.1:
Figure 10.2:
Figure 10.3:
Figure 10.4:
Figure 10.5:
Figure 10.6:
Figure 10.7:
Figure 10.8:
Figure 10.9:
[*]
Drawing semitransparent shapes on an image 463
[*]
Using CompositingMode.SourceOver 466
[*]
Blending with CompositingMode.SourceCopy 467
[*]
A mixed blending example 469
[*]
Drawing with SmoothingMode set to Default 472
[*]
Drawing with SmoothingMode set to AntiAlias 473
[*]
Steps in the transformation process 476
[*]
Transformation stages 477
[*]
Drawing a line from point (0, 0) to point (120, 80) 477
[*]
Drawing a line from point (0, 0) to point (120, 80) with origin (50, 40) 479
[*]
Drawing with the GraphicsUnit.Inch option 480
[*]
Drawing with the GraphicsUnit.Inch option and a pixel width 481
[*]
Combining page and device coordinates 482
[*]
Drawing a line and filling a rectangle 487
[*]
Rotating graphics objects 488
Figure 10.10:
Figure 10.11:
Figure 10.12:
Figure 10.13:
Figure 10.14:
Figure 10.15:
Figure 10.16:
Figure 10.17:
Figure 10.18:
Figure 10.19:
[*]
Using the RotateAt method 490
[*]
Resetting a transformation 490
[*]
Scaling a rectangle 492
[*]
Shearing a rectangle 493
[*]
Translating a rectangle 494
[*]
Composite transformation 499
[*]
Local transformation 500
[*]
Rotating images 502
[*]
Scaling images 503
[*]
Translating images 503
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 10.20:
[*]
Shearing images 504
Figure 10.21:
An identity matrix 505
Figure 10.22:
A matrix whose components have different intensities 506
Figure 10.23:
A color matrix with multiplication and addition 506
Figure 10.24:
Figure 10.25:
Figure 10.26:
[*]
Translating colors 509
[*]
Scaling colors 511
[*]
Shearing colors 512
Figure 10.27:
RGB rotation space 513
Figure 10.28:
RGB initialization 514
Figure 10.29:
Figure 10.30:
Figure 10.31:
Figure 10.32:
Figure 10.33:
Figure 10.34:
Figure 10.35:
[*]
Rotating colors 515
[*]
Using the transformation matrix to transform text 516
[*]
Using the transformation matrix to shear text 517
[*]
Using the transformation matrix to reverse text 518
[*]
Scale
[*]
Translate
Rotate
Scale composite transformation with Append 521
[*]
Translate
Rotate
Scale composite transformation with Prepend 522
Rotate
Translate composite transformation 520
Figure 11.1:
A simple drawing process 528
Figure 11.2:
A simple printing process 528
Figure 11.3:
Conceptual flow of the printing process 530
Figure 11.4:
A flowchart of the printing process 532
Figure 11.5:
Process A 533
Figure 11.6:
Figure 11.7:
Figure 11.8:
Figure 11.9:
[*]
Creating a Windows application 534
[*]
Your first printing application 535
[*]
The printer settings form 547
[*]
Reading printer properties 551
Figure 11.10:
Figure 11.11:
Print events 553
[*]
The print events application 555
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 11.12:
Figure 11.13:
Figure 11.14:
Figure 11.15:
Figure 11.16:
Figure 11.17:
Figure 11.18:
Figure 11.19:
Figure 11.20:
Figure 11.21:
Figure 11.22:
Figure 11.23:
Figure 11.24:
Figure 11.25:
Figure 11.26:
Figure 11.27:
[*]
The form with text file printing options 558
[*]
A graphics-printing application 563
[*]
Drawing simple graphics items 564
[*]
Viewing an image 567
[*]
Print dialogs in the Visual Studio .NET toolbox 569
[*]
The print dialog application 574
[*]
Viewing an image and text 579
[*]
The print preview dialog 579
[*]
The page setup dialog 580
[*]
The print dialog 580
[*]
The custom page settings dialog 584
[*]
The PageSetupDialog sample in action 588
[*]
A form for printing multiple pages 591
[*]
Print preview of multiple pages 595
[*]
Setting a document name 595
[*]
Marginal-printing test application 596
Figure 11.28:
Figure 11.29:
Figure 11.30:
PrintController-derived classes 600
[*]
Print controller test form 601
[*]
Print controller output 604
Figure 12.1:
Drawing in Windows Forms 608
Figure 12.2:
Drawing in Web Forms 608
Figure 12.3:
Figure 12.4:
Figure 12.5:
Figure 12.6:
Figure 12.7:
[*]
The FirstWebApp project 610
[*]
The default WebForm1.aspx page 611
[*]
The HTML view of WebForm1.aspx 611
[*]
An ASP.NET document's page properties 612
[*]
The WebForm1.aspx design mode after the addition of Web Forms controls 613
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Figure 12.8:
Figure 12.9:
[*]
Viewing an image in an Image control 614
[*]
Drawing simple graphics objects on the Web 617
Figure 12.10:
Figure 12.11:
Figure 12.12:
Figure 12.13:
Figure 12.14:
Figure 12.15:
Figure 12.16:
Figure 12.17:
Figure 12.18:
[*]
Drawing various graphics objects 621
[*]
Drawing an image 623
[*]
Using LinearGradientBrush and PathGradientBrush 625
[*]
Drawing semitransparent objects 626
[*]
Entering points on a chart 630
[*]
A line chart in ASP.NET 632
[*]
A pie chart–drawing application in ASP.NET 633
[*]
The Draw Chart button click in action 636
[*]
The Fill Chart button click in action 637
Figure 13.1:
Figure 13.2:
Figure 13.3:
Figure 13.4:
Figure 13.5:
Figure 13.6:
Figure 13.7:
Figure 15.1:
Figure 15.2:
Figure 15.3:
Figure 15.4:
Figure 15.5:
Figure 15.6:
Figure 15.7:
The Form class hierarchy 641
[*]
Drawing on a form 643
[*]
Drawing on Windows controls 644
[*]
Drawing lines in a loop 651
[*]
The same result from two different drawing methods 657
[*]
Using DrawRectangle to draw rectangles 658
[*]
Using system pens and brushes 661
[*]
An interactive GUI application 677
[*]
Designing transparent controls 680
[*]
Drawing a circular form and Windows controls 682
[*]
A graphics copyright application 683
[*]
Thumbnail view of an image 684
[*]
An image after copyright has been added to it 688
[*]
Users table schema 689
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Figure 15.8:
Figure 15.9:
[*]
Reading and writing images in a database form 690
[*]
Displaying a bitmap after reading data from a database 694
Figure 15.10:
Figure 15.11:
Figure A.1:
Figure A.2:
[*]
An owner-drawn ListBox control 699
[*]
An owner-drawn ListBox control with images 701
[*]
An error generated from Listing A.1 705
[*]
An exception-handled error message 706
[*]
A color version of this figure is available on the Addison-Wesley Web site at.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
[ Team LiB ]
Tables
Table 1.1:
System.Drawing classes 15
Table 1.2:
System.Drawing.Design classes 19
Table 1.3:
System.Drawing.Design interfaces 20
Table 1.4:
System.Drawing.Drawing2D classes 20
Table 1.5:
System.Drawing.Imaging classes 22
Table 1.6:
System.Drawing.Printing classes 23
Table 1.7:
System.Drawing.Text classes 25
Table 2.1:
Color properties 45
Table 2.2:
Color methods 46
Table 2.3:
Rectangle and RectangleF properties 51
Table 2.4:
Rectangle and RectangleF methods 55
Table 3.1:
Graphics properties 62
Table 3.2:
Graphics draw methods 64
Table 3.3:
Icon properties 98
Table 3.4:
Icon methods 99
Table 3.5:
Graphics fill methods 108
Table 3.6:
Some miscellaneous Graphics methods 116
Table 4.1:
HatchStyle members 139
Table 4.2:
TextureBrush properties 147
Table 4.3:
LinearGradientMode members 154
Table 4.4:
LinearGradientBrush properties 155
Table 4.5:
LinearGradientBrush methods 155
Table 4.6:
PathGradientBrush properties 164
Table 4.7:
WrapMode members 164
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Table 4.8:
Pen properties 168
Table 4.9:
Pen methods 169
Table 4.10:
PenType members 169
Table 4.11:
PenAlignment members 171
Table 4.12:
LineCap members 177
Table 4.13:
DashCap members 177
Table 4.14:
DashStyle members 178
Table 4.15:
TextureBrush methods 184
Table 4.16:
SystemPens properties 190
Table 4.17:
SystemBrushes properties 191
Table 5.1:
SystemColors properties 210
Table 5.2:
Common TypeConverter methods 214
Table 5.3:
ColorTranslator methods 216
Table 5.4:
FontStyle members 223
Table 5.5:
FontFamily properties 223
Table 5.6:
FontFamily methods 224
Table 5.7:
GraphicsUnit members 227
Table 5.8:
Font properties 228
Table 5.9:
StringAlignment members 233
Table 5.10:
StringTrimming members 233
Table 5.11:
StringFormatFlags members 238
Table 5.12:
StringDigitSubstitute members 240
Table 5.13:
TextRenderingHint members 242
Table 6.1:
Region methods 265
Table 6.2:
CombineMode members 273
Table 7.1:
Number of bits and possible number of colors per pixel 290
Table 7.2:
Image class properties 293
Table 7.3:
Image class methods 294
Table 7.4:
ImageFormat properties 301
Table 7.5:
RotateFlipType members 307
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
Table 7.6:
PictureBoxSizeMode members 348
Table 8.1:
ImageLockMode members 355
Table 8.2:
PixelFormat members 356
Table 8.3:
BitmapData properties 358
Table 8.4:
MetafileHeader methods 369
Table 8.5:
MetafileHeader properties 370
Table 8.6:
ColorPalette.Flags values 375
Table 8.7:
WrapMode members 376
Table 8.8:
ColorAdjustType members 378
Table 8.9:
The clear methods of ImageAttributes 383
Table 8.10:
Encoder fields 386
Table 8.11:
EncoderParameter properties 387
Table 8.12:
ImageCodecInfo properties 388
Table 9.1:
System.Drawing.Drawing2D classes 394
Table 9.2:
Line cap styles 395
Table 9.3:
Pen Class Members for Setting Line Caps and Styles 397
Table 9.4:
CustomLineCap properties 405
Table 9.5:
LineJoin members 405
Table 9.6:
PathPointType members 415
Table 9.7:
GraphicsPath properties 418
Table 9.8:
Some GraphicsPath methods 420
Table 9.9:
GraphicsUnit members 427
Table 9.10:
Id values 436
Table 9.11:
Format of Type property values 436
Table 9.12:
CompositingQuality members 464
Table 9.13:
SmoothingMode members 471
Table 9.14:
PixelOffsetMode members 473
Table 10.1:
Matrix properties 484
Table 10.2:
Transformation-related members defined in the Graphics class 495
Table 11.1:
Duplex members 540
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Table 11.2:
Other PrinterSettings properties 543
Table 11.3:
PrinterResolutionKind members 545
Table 11.4:
PrintDocument properties 551
Table 11.5:
PrintDocument methods 552
Table 11.6:
PrintPageEventArgs properties 554
Table 11.7:
PrintDialog properties 570
Table 11.8:
PageSetupDialog properties 571
Table 11.9:
Some PrintPreviewDialog properties 573
Table 11.10:
PageSettings properties 582
Table 11.11:
PaperSourceKind members 583
Table 11.12:
PrintRange members 590
Table 13.1:
ControlStyle members 652
Table 14.1:
DllImportAttribute field members 665
Table 14.2:
CallingConvention members 666
Table 15.1:
DrawItemEventArgs properties 695
Table 15.2:
MeasureItemEventArgs properties 696
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Acknowledgments
First of all, I would like to thank a great team at Addison-Wesley, including Stephane Thomas, John D. Ruley, Michael Mullen, Stephanie
Hiebert, and Tyrrell Albaugh, all of whom were very helpful from time to time.
Technical reviewers played a vital role in improving the technical aspects of this book. Their comments and suggestions made me think from
various different programming perspectives. I would like to thank technical reviewers Charles Parker, Min Liu, Gilles Khouzam, Jason
Hattingh, Chris Garrett, Jeffery Galinovsky, Darrin Bishop, and Deborah Bechtold.
I would also like to thank John O'Donnell for his contribution to the printing chapter of the bookChapter
(
11).
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Introduction Forms
development. System.Data and its subnamespaces define classes that are used for database development (ADO.NET).
GDI+ is the next-generation graphics device interface, defined inSystem.Drawing and its subnamespaces. This book focuses on how to write
graphical Windows and Web applications using GDI+ and C# for the Microsoft .NET Framework.
[
[ Team LiB ]
This document was created by an unregistered ChmMagic, please go to to register it. Thanks.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks.
This document was created by an unregistered ChmMagic, please go to to. register it.
[ Team LiB ]
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Example Source Code
Complete source code for the examples in this book (in both C# and Visual Basic .NET) is available for download at.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]Appendix A and apply exception and
error handling techniques in your applications.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
SUMMARY
This introduction explained the book's organization and answered basic questions about the book. In Chapter 1, you will learn the basics of
GDI+. Topics we will cover include
What is GDI+, and why it is a better programming interface than its predecessors?
How is GDI+ designed and used in the .NET Framework?
What are the major advantages of GDI+ over GDI?
How do you write your first graphics application in .NET using GDI+?
What are some of the basic graphics concepts?
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Chapter 1. GDI+: The Next-Generation Graphics
Interface
Welcome to the graphics world of GDI+, the next-generation graphics device interface. GDI+ is the gateway to interact with graphics device
interfaces in the .NET Framework. If you're going to write .NET applications that interact with graphics devices such as monitors, printers, or
files, you will have to use GDI+.
This chapter will introduce GDI+. First we will discuss the theoretical aspects of GDI+, which you should know before starting to write a
graphics application.
After reading this chapter, you should understand the following topics:
What GDI+ is
How GDI+ is defined
How to use GDI+ in your applications
What's new in GDI+
What the major programming differences between GDI and GDI+ are
Which major namespaces and classes in the .NET Framework library expose the functionality of GDI+
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
1.1 Understanding GDI+
If.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks.
GDI Interoperability
You can use GDI in managed applications with GDI+. GDI interoperability allows you to use GDI functionality in managed
applications with GDI+, but you need to take some precautions. We will discuss GDI interoperability in Chapter 14..
This document was created by an unregistered ChmMagic, please go to to register it. ThanksSection 1.4.
1.1.3.3 GDI+ Revisited
In brief,
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
GDI+ is a component that sits between an application and graphical devices. It converts data into a form compatible with a
graphical device, which presents the data in human-readable form.
GDI+ is implemented as a set of C++ classes that can be used from unmanaged code.
In the .NET Framework library, GDI+ classes are exposed through System.Drawing (and its subnamespaces), which provides a
managed class wrapper around the GDI+ C++ classes.GPoint,
Size, and Rectangle objects also have overloaded methods that can use thePoint.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
[ Team LiB ]
1.2 Exploring GDI+ Functionality
Microsoft's managed GDI+ documentation divides its functionality into three categories: 2D vector graphics, imaging, and typography. This
book divides the GDI+ functionality into five categories:
1.
2D vector graphics
2.
Imaging
3.
Typography
4.
Printing
5.
Design inChapter 2.
In the .NET Framework library, 2D vector programming is divided into two categories: general and advanced. General 2D vector graphics
programming functionality is defined in the System.Drawing namespace; advanced functionality is defined in theSystemBitmap and Metafile classes. The
Image class provides members to load, create, and save images.
The Bitmap and Metafile classes define functionality for displaying, manipulating, and saving bitmaps and metafiles.Chapters 7 and 8 cover
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
.ters.
[ Team LiB ]
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
1.3 GDI+ from a GDI Perspective
This section is for GDI programmers. To build on your existing knowledge, we will compare and contrast GDI and GDI+. If you've never
worked with GDI, we recommend that you skip this section.
We have already mentioned the first and major difference between the two versions: Whereas GDI+ exposes its functionality as both
unmanaged and managed classes (through the System.Drawing namespace), GDI is unmanaged only. Besides this major difference, some
of the important changes in GDI+ are as follows:
No handles or device contexts
Object-oriented approach
Graphics object independence
Method overloading
Separate methods for draw and fill
Regions and their styles
1.3.1 Elimination of Handles and Device Contexts theMoveToEx and LineTo functions.
Listing 1.1 C++ code to draw a line
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
LRESULT APIENTRY MainWndProc(
HWND hwnd, UINT message, WPARAM wParam,
LPARAM lParam)
{
PAINTSTRUCT ps;
switch (message)
{
case WM_PAINT:
HDC
handle;
PAINTSTRUCT pstruct;
HPEN
hPen;
...
....PaintEventArgs.Graphics. After that we create a
Pen object and pass it as an argument to theDrawLine method. The DrawLine method takes a Pen object and the starting and ending points
of a line, and draws a line on the form. Notice also in Listing 1.2 that there is no MoveTo call.
Listing 1.2 GDI+ code in C# to draw a line
private void Form1_Paint(object sender,
System.Windows.Forms.PaintEventArgs e)
{
Graphics g = e.Graphics;
Pen pn = new Pen(Color.Red, 3);
g.DrawLine(pn, 20, 20, 200, 200);
}
Note
There are other ways to get a Graphics object in your application. We will look at these options in more detail inChapter 3.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
1.3.2 Object-Oriented Approach
If you compare Listings.
1.3.3 Graphics Object Independence.
1.3.4 Method Overloading
GDI+ methods provide many overloaded forms to provide more flexibility to developers. For example, the DrawRectangle method has three
overloaded forms:
1.
public void DrawRectangle(Pen, Rectangle);
2.
public void DrawRectangle(Pen, int, int, int, int);
3.
public void DrawRectangle(Pen, float, float, float, float);. We will discuss these methods in more detail and see them in action inChapter 3.
1.3.5 Draw and Fill Methods
Drawing theRectangle method, which draws and fills
the rectangle. Listing 1.3 shows a code snippet that draws and fills a rectangle.
Listing 1.3 GDI code to draw and fill a rectangle
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
h. For example, theDrawRectangle method takes a Pen object and draws
an outline of a rectangle, and the FillRectangle method takes a Brush object and fills the rectangle with the specified brush, asListing 1.4
shows.
Listing 1.4 GDI+ code to draw and fill a rectangle
Graphics g = e.Graphics;
Pen pn = new Pen(Color.Red, 3);
HatchBrush htchBrush = new HatchBrush(HatchStyle.Cross,
Color.Red, Color.Blue);
g.DrawRectangle(pn, 50, 50, 100, 100);
g.FillRectangle(htchBrush, 20, 20, 200, 200);
We will discuss the draw and fill methods in more detail inChapter 4.
1.3.6 Regions and Their Styles
Regions are another area where a GDI developer may find minor changes in GDI+. GDI provides several functions for creating elliptical,
round, and polygonal regions. As a GDI programmer, you are probably familiar with the CreateRectRgn, CreateEllipticRgn,
CreateRoundRectRgn, CreatePolygonRgn, and CreatePolyPolygonRgn functions.
In GDI+, the Region class represents a region. TheRegion class constructor takes an argument of typeGraphicsPath, which can have a
polygon, a circle, or an ellipse to create a polygonal, round, or elliptical region, respectively. We will discuss regions in more depth in Chapter
6.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
1.4 GDI+ Namespaces and Classes in .NET.
Figure
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
If you are already aware of the .NET Framework library's GDI+ objects and class hierarchy, you may want to skip the rest of
this chapter.
1.4.1 The System.Drawing Namespace
The System.Drawing namespace defines basic GDI+ functionality. This namespace contains theGraphics.
Table 1.1. System.Drawing classes
Class
Description
Bitmap
Encapsulates a bitmap, which is an image (with its properties) stored in pixel format.
Brush
An abstract base class that cannot be instantiated directly. The Brush class provides functionality used by its
derived brush classes colors Windows icon. The Icon class provides members to define the size, width, and height of an icon.
IconConverter
Provides members to convert an Icon object from one type to another.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Class
Image
Description through its derived classes: Bitmap, Icon, and Metafile.
ImageAnimator
Provides methods to start and stop animation, and to update frames for an image that has time-based frames.
ImageConverter
Provides members to convert Image objects from one type to another.
ImageFormatConverter
Defines members that can be used to convert images from one format to another.
Pen
DefinesPoint objects from one type to another.
RectangleConverter
Defines members that can be used to convertRectangle objects from one type to another.
Region
Represents a region in GDI+, which describes the interior of a graphicsHighlight,.
This document was created by an unregistered ChmMagic, please go to to register it. ThanksSystem.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Table 1.2. System.Drawing.Design classes
Class
Description
BitmapEditor
User interface (UI) for selecting bitmaps using aProperties window.
CategoryNameCollection
Collection of categories.
FontEditor
UI for selecting and configuring fonts.
ImageEditor
UI for selecting images in aProperties window.
PaintValueEventArgs
Provides data for thePaintValue event.
PropertyValueUIItem
Provides information about the property value UI for a property.
ToolboxComponentsCreatedEventArgs
Provides data for the ComponentsCreated event, which occurs when components are added to
the toolbox.
ToolboxComponentsCreatingEventArgs
Provides data for the ComponentsCreating event, which occurs when components are added to
the toolbox.
ToolboxItem
Provides a base implementation of a toolbox item.
ToolboxItemCollection
Collection of toolbox items.
UITypeEditor
Provides a base class that can be used to design value editors.
Table 1.3. System.Drawing.Design interfaces
Interface
Description
IPropertyValueUIService
Manages the property list of the Properties window.
IToolboxService
Provides access to the toolbox.
IToolboxUser
Tests the toolbox for toolbox item support capabilities and selects the current tool.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Table 1.4. System.Drawing.Drawing2D classes
Class
Description
AdjustableArrowCap
Represents an adjustable arrow-shaped line cap. Provides members to define the properties to fill, and to set the
height and width of an arrow cap.
Blend
Gradient blends are used to provide smoothness and shading to the interiors of shapes. A blend pattern contains
factor and pattern arrays, which define the position and percentage of color of the starting and ending colors. The
Blend class defines a blend pattern, which usesLinearGradientBrush to fill the shapes. The Factors and Positions
properties represent the array of blend factors and array of positions for the gradient, respectively.
ColorBlend
DefinesContainer.Iterator+.
Linear aGraphicsPath
object with a gradient.
RegionData
Represents the data stored by a Region object. The Data property of this class represents the data in the form of an
array of bytes.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Table 1.5. System.Drawing.Imaging classes
Class
BitmapData
Description encoder parameter. Encoder is used by the EncoderParameter class.
EncoderParameter
An encoder parameter, which sets values (for more information, see
Chapter 7).
ImageCodecInfo
Retrieves information about the installed image codecs..
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Table 1.6. System.Drawing.Printing classes
Class
Margins
Description
Specifies the margins of a printed page. The Bottom, Left, Right, and Top properties are used
to get and set the bottom, left, right, and top margins, respectively, of a page in hundredths of
an inch.
MarginsConverter
Provides methods to convert margins, including CanConvertFrom, CanConvertTo,
ConvertFrom, and ConvertTo.
PageSettings
Specifies settings of a page, including properties such as Bounds, Color, Landscape, Margins,
PaperSize, PaperSource, PrinterResolution,, MaximumPage, Copies,
MaximumCopies, PrinterName, and so on.
PrinterSettings.PaperSizeCollection
Collection of PaperSize objects.
PrinterSettings.PaperSourceCollection
Collection of PaperSource objects.
PrinterSettings.PrinterResolutionCollection Collection of PrinterResolution objects.
PrinterUnitConvert
Specifies a series of conversion methods that are useful when interoperating with the Win32
printing application program interface (API).
PrintEventArgs
Provides data for theBeginPrint and EndPrint events.
PrintingPermission
Controls access to printers.
PrintingPermissionAttribute
Allows declarative printing permission checks.
PrintPageEventArgs
Provides data for thePrintPage event.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
Class
Description
QueryPageSettingsEventArgs
Provides data for theQueryPageSettings event.
StandardPrintController
Specifies a print controller that sends information to a printer.
Table 1.7. System.Drawing.Text classes
Class
FontCollection
Description.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Summary
GDI+ is an improved version of Microsoft's graphics device interface (GDI) API. In this chapter we learned how GDI+ is designed for use in
both managed and unmanaged code. System.Drawing and its helper namespaces defined in the .NET Framework library provide a managed
class wrapper to write managed GDI+ applications. We also learned the basics and definition of GDI+ and what major improvements are
offered by GDI+ in comparison to GDI. At the end of this chapter, we took a quick look at the System.Drawing namespace and its
subnamespaces, and classes defined in these namespaces.
Now that you've learned the basics of GDI+, the next step is to write a fully functional graphics application. In Chapter 2 you will learn how to
write your first graphics application using GDI+ in a step-by-step tutorial format.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
Chapter 2. Your First GDI+ Application
In this chapter we move to the more practical aspects of writing graphics applications using GDI+ in the .NET Framework. This chapter is the
foundation chapter and discusses vital concepts, including the life cycle of a graphics application. After reading this chapter, you should
understand the basics of the GDI+ coordinate system, basic graphics structures used by GDI+, drawing surfaces, and how to write a graphics
application using GDI+.
To write a graphics application, a good understanding of drawing surfaces and coordinate systems chapter we will discuss some basic graphics structures and their members. These structures are used in examples
throughout this book and include the following:
Color
Point and PointF
Rectangle and RectangleF
Size and SizeF
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks
[ Team LiB ]
2.1 Drawing Surfaces
Every drawing application (regardless of the operating system), consists of three common components: a canvas, a brush or pen, and a
process.
1.
The canvas is the space on which objects will be drawn. For example, in a Windows application, a Windows Form is a canvas.
2.
A brush or a pen represents the texture, color, and width of the objects to be drawn on the canvas.
3., 1,280 horizontal pixels and 1,024 the number of possible colors (seeFigure 2.1). Each
8
component in RGB has 256 (2 ) color combinations. Hence all three components of GDI+ color represent 256x256x256 possible colors. The
alpha component determines the transparency of the color, which affects how the color mixes with other colors.
Figure 2.1. Color components in GDI+
.
This document was created by an unregistered ChmMagic, please go to to register it. Thanks. We will discuss colors in more detail in Chapter 5.
Note
The color depth of a surface is different from the color depth of a particular display device, such as a monitor or a printer.
Most monitors can support over a million colors, and some printers may support only black and white.
GDI+ provides three types of drawing surfaces: forms, printers, and bitmaps.
2.1.1 Forms as a Surface
When you write a Windows application that draws something on a form, the form acts as a drawing surface and supports all the properties
required by a drawing surface.
2.1.2 Printers as a Surface
When you print from an application, the printer acts as a drawing surface. You can set a printer's resolution and color depth, as well as the
height and width of the paper. We will discuss printer-related functionality in Chapter 11.
2.1.3. We will discuss Web graphics applications in more detail in Chapter 12.
[ Team LiB ]
.
This document was created by an unregistered ChmMagic, please go to to register
. it. Thanks
[ Team LiB ]
2.2 The Coordinate System
Understanding the coordinate system is another important part of graphics programming. The coordinate system represents the positions of
graphic objects on a display device such as a monitor or a printer.
2.2.1 The Cartesian Coordinate System
The Cartesian coordinate system (shown in Figure 2.2) divides a two-dimensional plane into four regions, also called quadrants, and two
axes: x and y. The x-axis is represented by a horizontal line and they-axis by a vertical line. An ordered pair ofx and y positions defines a point
in a plane. The origin of the plane is a point with x = 0 and y = 0 values, and the quadrants divide the plane relative to the origin.
Figure 2.2.
This document was created by an unregistered ChmMagic, please go to to register
.
it. Thanks
in quadrant III, and a point with +x and –y values will fall in quadrant IV. For example, a point at coordinates (2, –3) will fall in quadrant IV, and
a point at coordinates (–3, 2) will fall in quadrant II.
2.2.2 The Default GDI+ Coordinate System
Unlike the Cartesian coordinate system, the default GDI+ coordinate system starts with the origin in the upper left corner. The defaultx-axis
points to the right, and the y-axis points down. As Figure 2.3 shows, the upper left corner starts with pointsx = 0 and y = 0. Points to the left of
x = 0 are negative values in thex-direction, and points above y = 0 are negative values in they-direction.
Figure 2.3. The GDI+ coordinate system
Because the default GDI+ coordinate system starts with (x = 0, y = 0) in the upper left corner of the screen, by default you can see only the
points that have positive x and y values. Objects with either –x or –y values will not be visible on the screen. However, you can apply
transformations to move objects with negative values into the visible area.
GDI+ provides three types of coordinate systems: world coordinates, page coordinates, and device coordinates.
1.
The coordinate system used in an application is called world coordinates. Suppose that your application draws a line from point
A (0, 0) to point B (120, 80), as shown in Figure 2.4..
Figure 2.4. Drawing a line from point (0, 0) to point (120, 80)
This document was created by an unregistered ChmMagic, please go to | https://b-ok.org/book/634613/87389a | CC-MAIN-2019-13 | refinedweb | 9,066 | 53.58 |
There are 2 databases A and B; Each one has two tables: Table1 and Table2, both were created with the same script, defining for both: fields, versioning and archiving.
It is required to delete some rows in both tables and, of course in both databases.
A script was prepared:
def remove_items(from_table, ids): log.add("Start to clean orphan objects from: {0}".format(from_table)) total_to_delete = 0 fields = arcpy.ListFields("{0}.{1}".format(user_do, from_table)) if "INSPECTION_ID" in [f.baseName for f in fields]: with arcpy.da.SearchCursor("{0}.{1}".format(user_do, from_table), ["INSPECTION_ID"]) as count_cursor: for row in count_cursor: if row[0] not in ids: total_to_delete += 1 row_counter = 0 with arcpy.da.UpdateCursor("{0}.{1}".format(user_do, from_table), ["INSPECTION_ID"]) as table_cursor: for row in table_cursor: if row[0] not in ids: table_cursor.deleteRow() row_counter += 1 log.add("In table {0} deleted rows {1} / {2}".format(from_table, row_counter, total_to_delete)) log.add("Total rows deleted from {0} : {1}".format(from_table, row_counter)) with arcpy.da.Editor(arcpy.env.workspace) as edit: log.add("This script may take a while to finish.") remove_items("Table1", orphan_ids) remove_items("Table2", orphan_ids)
The script is run against Database A, and it runs with out any issue. The script is run against Database B, and it generates the following error:
['RuntimeError: Insufficient permissions [Database_B.Table_2][STATE_ID = 2880]\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', ' File "C:/T/Repos/RemoveOrphanInspections.py", line 136, in <module>\n main(argv[1:])\n', ' File "C:/T/Repos/RemoveOrphanInspections.py",'SystemError: <built-in method __exit__ of Workspace Editor object at 0x0000019EDF973180> returned a result with an error set\n']
Permissions were checked, and the data owner user (the one used to run the script), has the permissions of Select, Insert, Update and Delete.
I make a change to the script:
with arcpy.da.Editor(arcpy.env.workspace) as edit: log.add("This script may take a while to finish.") remove_items("Table1", orphan_ids) edit = arcpy.da.Editor(arcpy.env.workspace) edit.startEditing(False, True) edit.startOperation() remove_items("Table2", orphan_ids) edit.stopOperation() edit.stopEditing(True)
Now the script runs with out issue.
With this modification I am using two edit sessions, therefore there is two transaction scopes (That was not the original plan).
Versioning, archiving, permissions and fields are the same in both Databases.
How could check the differences between the Database_A.Table2 and Database_B.Table2? Which property of the table should be checked to choose a proper Edit session management? | https://community.esri.com/t5/arcgis-api-for-python-questions/how-to-choose-proper-edit-session-in-arcpy/td-p/800378 | CC-MAIN-2022-21 | refinedweb | 418 | 52.76 |
Hi, I'm trying to figure out how to use a SIGCHLD handler to execute reliably when a child worker process exits for use in larger projects, but this simple test isn't working. Since sigchld++ is executed every time the sig handler is called, and it fork()'s twice, I would expect it to print 2 every time, but it doesn't. Usually it's 2, but maybe 1/5 times it prints 1.
Code:
#include <iostream>
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
using namespace std;
int sigchld = 0;
void sigchld_handler(int signal){
sigchld++;
}
int main(){
signal(SIGCHLD, sigchld_handler);
if(fork() & fork()){
unsigned int leftover = 1;
while((leftover = sleep(leftover))); // The sleep is to allow both children time to exit.
cout << sigchld << endl;
}
return 0;
} | http://cboard.cprogramming.com/linux-programming/155448-sigchld-handler-not-executing-dependably-printable-thread.html | CC-MAIN-2014-23 | refinedweb | 134 | 75.2 |
I have a string literal containing one or more lines of (trusted) Python code, and I would like to
exec()
exec_then_eval
code = """
x = 4
y = 5
x + y
"""
assert exec_then_eval(code) == 9
def exec_then_eval(code):
first_block = '\n'.join(code.splitlines()[:-1])
last_line = code.splitlines()[-1]
globals = {}
locals = {}
exec(first_block, globals, locals)
return eval(last_line, globals, locals)
code = """
x = 4
y = 5
z = x + y
"""
globals = {}
locals = {}
exec(code, globals, locals)
assert locals['z'] == 9
exec_and_eval
You could use
ast to find the location of the last expression in a code block, divide the code into two parts, and execute them separately.
import ast def exec_then_eval(code): tree, lines = ast.parse(code), code.splitlines() # Assuming the last node is an expression *stmt, expr = ast.iter_child_nodes(tree) # Note: .lineno is 1-indexed line, col, _globals, _locals = expr.lineno - 1, expr.col_offset, {}, {} # Split at `line` and `col` ex = lines[:line] + [lines[line][:col]] ev = [lines[line][col:]] + lines[line + 1:] exec('\n'.join(ex), _globals, _locals) return eval('\n'.join(ev), _globals, _locals)
Some test cases:
exec_then_eval('''x = 4 y = 5 x + y''')) # 9 exec_then_eval('''x = 4 y = 5;x + y''')) # 9 exec_then_eval('''x = 4 y = 5;( x + y * 2)''') # 14 | https://codedump.io/share/LNseUltqivSL/1/python-exec-a-code-block-and-eval-the-last-line | CC-MAIN-2018-17 | refinedweb | 198 | 55.13 |
ResQSoft is hiring 竜 TatSu developers. Contact Tom Bragg at tbragg@resqsoft.com for more information.
def WARNING(): """ TatSu>=5.6 requires Python>=3.8 TatSu>=5.7 will require Python>=3.9 Python 3.8 and 3.9 introduced new language features that allow writing better programs more clearly. All code written for Python 3.7 should run fine on Python 3.9 with minor, or no changes. Python has adopted an annual release schedule (PEP-602). Python 3.10 will be released in Oct 2021 Python 3.9 was released on Oct 2020 Python 3.8 bugfix releases final in May 2021 Python 3.7 bugfix releases final in mid 2020 Python 3.6 had its last bugfix release on Dec 2019 Python 2.7 reached its end of life on Jan 2020 Compelling reasons to upgrade projects to the latest Python """ pass
竜 TatSu using the algorithm by Laurent and Mens. The generated AST has the expected left associativity.
$ pip install TatSu
竜 TatSu can be used as a library, much like Python's
re, by embedding grammars as strings and generating grammar models instead of generating Python code.
tatsu.compile(grammar, name=None, **kwargs)
Compiles the grammar and generates a model that can subsequently be used for parsing input with.
tatsu.parse(grammar, input, **kwargs)
Compiles the grammar and parses the given input producing an AST as result. The result is equivalent to calling:
model = compile(grammar) ast = model.parse(input)
Compiled grammars are cached for efficiency.
tatsu.to_python_sourcecode(grammar, name=None, filename=None, **kwargs)
Compiles the grammar to the Python sourcecode that implements the parser.
This is an example of how to use 竜 TatSu as a library:
GRAMMAR = ''' @@grammar::CALC start = expression $ ; expression = | expression '+' term | expression '-' term | term ; term = | term '*' factor | term '/' factor | factor ; factor = | '(' expression ')' | number ; number = /\d+/ ; ''' if __name__ == '__main__': import json from tatsu import parse from tatsu.util import asjson ast = parse(GRAMMAR, '3 + 5 * ( 10 - 20 )') print(json.dumps(asjson(ast), indent=2))
竜 TatSu will use the first rule defined in the grammar as the start rule.
This is the output:
[ "3", "+", [ "5", "*", [ "10", "-", "20" ] ] ]
For a detailed explanation of what 竜 TatSu is capable of, please see the documentation.
Please use the [tatsu] tag on StackOverflow for general Q&A, and limit Github issues to bugs, enhancement proposals, and feature requests.
See the CHANGELOG for details.
You may use 竜 TatSu under the terms of the BSD-style license described in the enclosed LICENSE.txt file. If your project requires different licensing please email. | https://openbase.com/python/TatSu | CC-MAIN-2021-39 | refinedweb | 426 | 60.11 |
I find myself frequently using Python's interpreter to work with databases, files, etc -- basically a lot of manual formatting of semi-structured data. I don't properly save and clean up the useful bits as often as I would like. Is there a way to save my input into the shell (db connections, variable assignments, little for loops and bits of logic) -- some history of the interactive session? If I use something like script I?
script
UPDATE: I am really amazed at the quality and usefulness of these packages. For those with a similar itch:
I am converted, these really fill a need between interpreter and editor.
IPython is extremely useful if you like using interactive sessions. For example for your usecase there is the save.
In[48]
save filename 1-48
You can also add this to get autocomplete for free:
readline.parse_and_bind('tab: complete')
Please note that this will only work on *nix systems. As readline is only available in Unix platform.
Also, reinteract gives you a notebook-like interface to a Python session.
In addition to IPython, a similar utility bpython has a "save the code you've entered to a file" feature
On windows, PythonWin is a lot more productive than that the default python terminal. It has a lot of features that you usually find in IDEs:
You can download it as part of Python for Windows extensions
import readline
readline.write_history_file('/home/ahj/history')
Just putting another suggesting in the bowl:
Spyderlib | http://jaytaylor.com/notes/node/1360777603000.html | CC-MAIN-2017-09 | refinedweb | 249 | 64 |
I am trying to make an automatic lawn mover
when I see objects that are not green I want to back away, when I see green grass I want to cut
The blob handling is very shaky, the frame size and position vary even when the mower is stationary.
Grass is not completely green but can contain other colors such as brown.
How can I recognize objects that are mainly green and where the light from the sun also varies.
I also want to find non green objects.
In thresholds 30, 100, -64, -8, -32, 32 is set but I don't understand what it setting?
can you give me some tip
Regards Ulf Andersson
from pyb import UART
import time
# UART 3, and baudrate.
uart = UART(3, 19200)
import sensor, image, time
threshold_index = 1 # green
# Color Tracking Thresholds (L Min, L Max, A Min, A Max, B Min, B Max)
# The below thresholds track in general red/green/blue things. You may wish to tune them...
thresholds = [ (30, 100, 15, 127, 15, 127),
(30, 100, -64, -8, -32, 32),
(0, 15, 0, 40, -80, -20)]# generisk röd grön blå
MesurementNo = 0
MesurementX = 320
MesurementW = 0
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.set_windowing((0, 40, 320, 30))
sensor.skip_frames(20)
sensor.set_auto_gain(False) #of for tracing
sensor.set_auto_whitebal(False) # of for tracing
while(True):
#clock.tick()
img = sensor.snapshot()
for blob in img.find_blobs(thresholds, pixels_threshold=20, area_threshold=20, merge=True):
if blob.code() == 2: #green
img.draw_rectangle(blob.rect())
# Get largest rectangle and the left most
MesurementNo = MesurementNo + 1
if blob.x() < MesurementX:
MesurementX = blob.x()
if blob.w() > MesurementW:
MesurementW = blob.w()
if MesurementNo == 10:
#print(MesurementX, MesurementW , sep=', ')
xx = "%d"%MesurementX + ", " + "%d"%MesurementW + "R"
uart.write(xx)
MesurementNo = 0
MesurementX = 320
MesurementW = 0
cutting grass
Discussion related to "under the hood" OpenMV topics.
2 posts • Page 1 of 1
- Posts: 7
- Joined: Sun Sep 22, 2019 10:48 am
Re: cutting grass
Hi, I recommend you change approaches. Use the dataset editor of OpenMV IDE to collect a dataset of grass and not grass and train a CNN using Edge Impulse (see video details on our front page and product pages) to detect grass and not grass. It will work much better and is very little coding.
Nyamekye,
2 posts • Page 1 of 1
Return to “Technical Discussion”
Who is online
Users browsing this forum: No registered users and 6 guests | https://forums.openmv.io/viewtopic.php?f=6&t=2000 | CC-MAIN-2020-40 | refinedweb | 409 | 65.52 |
Primitives and Object Wrappers previous columns, you covered the concept of object wrappers. This is actually one of the most important, and interesting, object-oriented design methodologies. Perhaps the most elegant aspect of object-oriented design is when it comes to integrating with other technologies, specifically older, more established technologies. In fact, one of the hottest job opportunities in Information Technologies is that of integrating legacy applications with object-oriented technologies.
For example, because much of today's business data is kept on mainframes, and it will most likely be staying put, the ability to use a Web front end to interface with this legacy data is very important. Wrapping mainframe technologies in objects is a powerful mechanism form combining the two methodologies. Likewise, as you saw with the client-server systems of earlier articles in this series, using object wrappers to hide the hardware implementation provides a great advantage.
Interestingly, one of the best examples of object wrappers is also one of the simplest. Because the main purpose of this series is to explore the underlying concepts of object-oriented technologies, it is quite helpful to explore the relationship between primitives and objects.
Primitives
When experienced programmers begin to learn object-oriented techniques, one of the first stumbling bocks encountered is the concept that everything is an object—well almost everything. In some languages, such as Java, the universe is really divided into two camps, objects and primitives (in other o-o language architectures, although the primitives are there, the programmer must access them via object wrappers).
Sun Microsystems's Java tutorial defines a primitive as:.
The primitives are called built-in types, and there are eight of them:
- boolean
- int
- long
- byte
- short
- float
- double
- char
One of the problems that C programmers had was that the byte size of the primitive types differed among the various platforms. This led to a great amount of grief, or at least a lot of work, for a programmer wanting to port a program from one machine to another.
For example, there were times when I was porting C/C++ programs to different machines that had integer sizes of 16, 32, and 64 bits. This led to a lot of conditional compiling and made the code more vulnerable to bugs. In short, it was the programmer's responsibility to adjust to the variety of the platforms. This was not the optimal solution. The Java architecture allows the programmer to treat all primitives the same on all platforms. Table 1 shows the storage requirements for the various primitives.
Table 1
These primitives fall into three basic categories, the boolean type, the common numeric types (byte, short, int, long, float, and double) and the char type. I like to categorize them in groupings because the boolean and char types require a bit more explanation.
boolean
In the case of the boolean type, the explanation is really due to the fact that the boolean is not really what it appears to be. Because a boolean only has two possible values, true and false, it may seem obvious that a single bit is all that is required to implement the boolean. Yet, as most C programmers know, there was never an actual boolean type in the original C specification.
Yet, the concept of a boolean type was quite common in C programs. When a C programmer needed a boolean type, the programmer simply used an integer to do the job. To mimic the functionality of the boolean values true and false, a simple coding trick was performed:
#define TRUE 1 #define FALSE 0
In effect, this code defines TRUE and FALSE to the values 1 and 0 respectively. With this in place, the programmer can now use traditional Boolean logic.
if (flag == TRUE) { ...do something }
Although this workaround provides the functionality that we need, it has one drawback—it wastes space. This may well be a trivial problem, but the fact remains that you are using at minimum 8 bits, where only a single bit is required (this assumes that the compiler is implementing the boolean as a byte).
In fact, Java uses the same approach, although behind the scenes. There are efficiency reasons why a single bit is not used to implement a boolean. The compiler/virtual machine/operating systems are not designed to access individual bits. Thus, while you might be seeing bits, you are actually getting bytes simulating bits. As you can see in the code in Listing 1, you actually define a boolean type.
public class Primitives { public static void main(String[] args) { boolean flag = true; if (flag = = true)System.out.println("flag is true"); } }
Listing 1
char
The char type is the only non-numeric primitive, and is stored as a 16-bit unsigned value. Despite the fact that a char represents a single character, each char is stored as a numeric code that represents a specific character. Java represents characters in a 16-bit Unicode whereas earlier languages have used 8-bit codes such as ASCII. Unicode allows programming languages to handle a wider character set and support several languages using various alphabets. You can get a great idea of the various alphabet choices, as well as their definitions, by visiting:.
The code in Listing 2 is quite interesting.
public class Primitives { public static void main(String[] args) { char c = 'a'; short num = 97; System.out.println("c = " + (short)c); System.out.println("num = " + (char)num); } }
Listing 2
There are two primitives declared:
char c = 'a'; int num = 97;
The numeric code for the character 'a' is 97. When you cast the char to an int and print it, you get the value 97.
System.out.println("c = " + (int)c);
Conversely, you can assign an int to the value of 97 and then cast it to a char. When you print this char, you do indeed get the character 'a'.
System.out.println("num = " + (char)num);
When executed, the output produced by the code in Listing 2 is displayed in Figure 1.
Figure 1
Page 1 of 3
| http://www.developer.com/design/article.php/3611496/Primitives-and-Object-Wrappers.htm | CC-MAIN-2015-48 | refinedweb | 1,009 | 52.6 |
RSS, Lucene, and REST
Sorry for the horrible title. I struggled trying to come up with a worthy title, but after a few minutes I decided to not let perfection get in the way of.
Approach multiple reasons it just didn't work out.
So our first decision had to answer: how we were going to implement it......again, but better. Before the sprint began, a few of us got together and hashed out a potential solution: how about we use the Search REST Service, which is backed by Lucene, to support Advanced searches and return RSS?
Why does this excite me so much? To understand that I need to explain our application at a high level. It's a completely javascript-based application using ExtJS (now sencha), backed by REST Services using Jersey. Consequently, we have a lot of REST Services. Right now those REST Services support returning XML or JSON using a custom Response Builder we have created internally.
I'm excited because this single user story could have a huge improvement on the entire system:
- If we modified the Search Service to return RSS, then all our REST Services could support RSS.
- The REST Service would now support Advanced searches. Previously, it only really supported basic keyword searches.
- Any search they perform could now be subscribe to via RSS.
I'm not going to go into every detail on how it was done. I wasn't even actually the one who implemented it (see Matt White. He did a fantastic job.). We did have one major hurdle we had to overcome, and that was how to index items to enable advanced searches like Status=New.
Previously this wasn't possible given how we were indexing our items. We were basically indexing the item by building up a large String containing all the item information like the following:
import org.apache.lucene.document.Document import org.apache.lucene.document.Field def Document createDocument(item) { Document d = new Document() doc.add(new Field("content", getContent(item), Field.Store.NO, Field.Index.ANALYZED)) return d } def String getContent(item) { def s = new StringBuilder() s.append(item.getTitle()).append(" ") s.append(item.getStatus()).append(" ") s.append(item.getPriority()).append(" ") s.append(item.getDescription()).append(" ") return s.toString() }
The problem with this is performing a search for "New" would have returned any item with a status of New as well as any items that contained the word New. The solution was to just add another Field to the Document.
doc.add(new Field("Status", item.getStatus(), Field.Store.NO, Field.Index.NOT_ANALYZED));.
Start to Finish HttpBuilder executing the REST Service just as our javascript client would.
Finally, once the main work was finished, we uploaded a diff file to our internal instance of Review Board. From there I was able to perform a peer review where we found a minor bug in the changes.
Summary
I am sure it's not an original idea, but I thought it was a fun User Story that hopefully will provide a lot of value beyond what was originally estimated. Ideally, this might help others who are in similar situations. | https://jlorenzen.blogspot.com/2010/07/rss-lucene-and-rest.html?widgetType=BlogArchive&widgetId=BlogArchive1&action=toggle&dir=open&toggle=MONTHLY-1225515600000&toggleopen=MONTHLY-1277960400000 | CC-MAIN-2020-16 | refinedweb | 524 | 57.37 |
31 mouse buttons - a proposal for Qt 5.0; IT'S DONE, IT'S IN THERE :))
This replaces my 4.x Proposal, which was titled "I have a way to support ALL MOUSE BUTTONS in Qt, while staying compatible with the 4.x series!"
Here are the main changes:
(1) Qt 5.0 eliminates a very large number of plugins :)) On Linux (my platform), I propose to implement only XCB and Wayland.
(2) Since Qt 5.0 will require re-compilation of Applications written against earlier versions of Qt, binary compatibility is a non-issue. SOURCE Compatibility remains critical- and so, I will not try to add the button number into the event signatures. Instead, the range of values which Qt::MouseButton (and the corresponding flags) can take will be expanded with new name-value pairs.
low-level events, and acquire the current mask field when we're preparing to create the QWheelEvent.)
The need for this 'enhancement' seems to be even more severe than it appeared to be a year ago -- our current. ;)
I've done it! My updates for the xlib and xcb plugins (files xlibwindow.cpp and xcbwindow.cpp), together with a small update to qguiapplication.cpp and the expanded enum in qnamespace.h, handle every button I have. (And I have 14, plus the 4 tilt-wheel "buttons").
Qt5 only, of course -- I'm breaking BC. And I haven't begun to think about the mask -- Wayland support of the buttons comes next.
- sierdzio Moderators last edited by
Great, thanks!
commit was accepted today.
[quote author="rickst29" date="1321331455"]commit was accepted today.[/quote]
Cool! Congrats on that!
thanks, Andre. After the Wayland Platform Plugin for 5.0 again becomes compatible with Wayland GIT, I'll be working through that one as well. It's fallen a bit behind Wayland, some of the library calls are incompatble -- and that's something which should, IMO, be fixed by Trolls who can discuss their options during the day.
If I tried "playing with it", without any discussion, I'd probably end up with a rewrite which wouldn't fit the overall Qt strategy. But I know, almost exactly, how to write my part -- after the rest of the plugin receives an overhaul. | https://forum.qt.io/topic/10703/31-mouse-buttons-a-proposal-for-qt-5-0-it-s-done-it-s-in-there | CC-MAIN-2022-40 | refinedweb | 374 | 67.96 |
Find your next favorite book
Editor's Note
Overcome your financial fears…
If you’ve been leaving your money in a standard savings account because the last recession has left you rattled, let business guru Tony Robbins and financial advisor Peter Mallouk assuage your doubts. Here are facts you need to know to grow and secure your savings.
Description
From Scribd: About the BookThey say there are two things that are dependable in life: death and taxes. What they fail to mention is that there really are three: death, taxes, and market corrections. Considering that there have been more than 30 such corrections in the past 30 years, it is absurd that no one has, until now, developed an action plan for not only surviving, but thriving during such a correction.
Building upon the principles he laid out in his book Money: Master the Game, Tony Robbins continues providing essential steps that readers can implement to protect their investments while maximizing their wealth. Unshakeable: Your Financial Freedom Playbook is a detailed guide designed for investors, written with common sense, and deployable in a practical manner for all to utilize.
Few have been able to navigate the turbulence of the stock market with the same efficiency and success as Tony Robbins, and his proven and consistent achievements make him singularly qualified to help investors with this issue. Friendly for both seasoned and first-timers alike, Unshakeable stands to be a timeless book about preserving and increasing investment wealth during a time of market correction..
Related to Unshakeable
Related Books
Related categories
Book Preview
Unshakeable - Tony Robbins
WHAT THE WORLD’S GREATEST FINANCIAL LEADERS ARE SAYING ABOUT TONY ROBBINS . . .
It’s rare that an outsider steals the spotlight and becomes a respected voice of impact in the financial industry. Robbins does it again with a new book to prepare us and help us profit from the inevitable crashes and corrections to come.
—Anthony Scaramucci, founder, SkyBridge Capital; cohost of Wall Street Week
Remarkably, Robbins has produced a book that will appeal to both the beginner and the most sophisticated money jockey overseeing multibillions of dollars in assets. If there were a Pulitzer Prize for investment books, this one would win, hands down.
—Steve Forbes, publisher of Forbes magazine and CEO of Forbes Inc.
Robbins is the best economic moderator that I’ve ever worked with. His mission to bring insights from the world’s greatest financial minds to the average investor is truly inspiring.
—Alan Greenspan, former Federal Reserve chairman under four sitting presidents
Tony came to my office for a 45-minute interview that.
—John C. Bogle, founder, the Vanguard Group, which has over $3 trillion in assets under management, and the #1 hedge fund investor in the world
Tony Robbins needs no introduction. He is committed to helping make life better for every investor. Every investor will find this book extremely interesting and illuminating.
—Carl Icahn, billionaire activist investor
You can’t meet Tony Robbins and listen to his words without being inspired to act. This book will give you the strategies to create financial freedom for yourself and your family.
—T. Boone Pickens, founder, chairman, and CEO of BP Capital Management and TBP Investments Management; predicted oil prices accurately 18 out of 21 times on CNBC
Tony masterfully weaves anecdote and expertise to simplify the process of investing for readers—priming their financial education and helping them effectively plan for their future.
—Mary Callahan Erdoes, CEO, JPMorgan Asset Management; $2.4 trillion in assets under management
Tony Robbins is a human locksmith—he knows how to open your mind to larger possibilities. Using his unique insights into human nature, he’s found a way to simplify the strategies of the world’s greatest investors so that anyone can have the financial freedom they deserve.
—Paul Tudor Jones II, founder, Tudor Investment Corporation, and one of the top ten traders in history
Robbins’ Corporation and bestselling author of Stacking the Deck: How to Lead Breakthrough Change Against Any Odds
WHAT LEADERS FROM OTHER INDUSTRIES ARE SAYING ABOUT TONY ROBBINS . . .
He has a great gift. He has the gift to inspire.
—Bill Clinton, former president of the United States
Tony’s power is superhuman. . . . He is a catalyst for getting people to change.
—Oprah Winfrey, Emmy Award–winning media magnate
"We’ve been selected by Forbes as the most innovative company in the world for four consecutive years. Our revenues are now over $7 billion annually. Without access to Tony and his teachings, Salesforce.com wouldn’t exist today."
—Marc Benioff, founder, chairman, and CEO of Salesforce.com
Tony Robbins’ coaching has made a remarkable difference in my life both on and off the court. He’s helped me discover what I’m really made of, and I’ve taken my tennis game—and my life—to a whole new level!
—Serena Williams, 22 and producer
If you want to change your state, if you want to change your results, this is where you do it: Tony is the man.
—Usher, Grammy Award–winning singer, songwriter, entrepreneur
Tony Robbins is a genius . . . His ability to strategically guide people through any challenge is unparalleled.
—Steve Wynn, CEO and founder of Wynn Resorts six-foot-seven phenomenon!
—Diane Sawyer, former ABC World News and Good Morning America anchor
To those souls who will never settle for less than they can be, do, share, and give.
Legal disclosure: Tony Robbins is a board member and chief of investor psychology at Creative Planning Inc., an SEC registered investment advisor (RIA) with wealth managers serving all fifty states. Mr. Robbins receives compensation for serving in this capacity based on increased business derived by Creative Planning from his services. Accordingly, Mr. Robbins has a financial incentive to refer investors to Creative Planning. More information regarding rankings and/or accolades for Creative Planning can be found at:.
Waste no more time arguing about what a good man should be. Be one.
—MARCUS AURELIUS
Money is only a tool. It will take you wherever you wish, but it will not replace you as the driver.
—AYN RAND the fundamental objective is the same. In fact, successful businesspeople often become successful philanthropists. Bill Gates is only one example of many.
Tony Robbins demonstrates that by creating resources, by producing something, you gain the means to help others. His book will be your invaluable guide to enabling you to do the same—and on a scale you may never have thought possible.
FOREWORD
John C. Bogle, founder of Vanguard, which has more than $3 trillion in assets under management
As 2016 began, I started my Saturday morning reading the New York Times while eating breakfast. After scanning the front page (and pulling out the crossword puzzle for later), I turned my attention to the business section. Displayed prominently at the top of section B1 was Ron Lieber’s Your Money column, which featured essential money management strategies written on index cards by six personal finance experts.
Ron’s point was to show that effective money management does not need to be complicated, with the key points of managing your money fitting on a single index card. Five out of the six index cards addressed the topic of how to invest your savings, and each gave the same simple advice: invest in index funds.
That message is getting through to investors. In 1975 I created the world’s first index mutual fund, and I’ve been singing its praises ever since. In those early days, I was a lone voice without much of an audience. Today an enormous choir has developed to help me spread the word. Investors are hearing our voices loud and clear, and are voting with their feet—in other words, their dollars.
Since the end of 2007, mutual fund investors have added almost $1.65 trillion to their holdings of equity index funds while reducing their holdings of actively managed mutual funds by $750 billion. That swing of $2.4 trillion in investor preferences over the last nine years is, I believe, unprecedented in the history of the mutual fund industry.
Over the past seven years, Tony Robbins has been on a mission to help the average investor win the game, preach the message of index funds, and tell investors to stop overpaying for underperformance. In his journey, he has spoken to some of the greatest minds in finance. Although I’m not sure I belong in that category, Tony came to my office at Vanguard to get my thoughts on investing. Let me tell you, Tony is a force of nature! After spending just a few minutes with Tony, I completely understand how he’s been able to inspire millions of people all over the world.
We had such a great time speaking with each other that our scheduled 45-minute interview.
But even I underestimated just how big an impact Tony would have. His first book on investing, Money: Master the Game, has sold over one million copies and spent seven months at the top of the New York Times Business Best Sellers list. Now he returns with Unshakeable, which is sure to add even more value to readers. Unshakeable presents insights from some of the most important figures in the investing world, such as Warren Buffett and Yale endowment fund manager David Swensen. Both Warren and David have said time and again that index funds are the best way for investors to maximize their chances of investment success. This book will help that message reach even more investors.
Index funds are simple. Rather than try to time the market or outguess other professional money managers about the prospects of individual stocks, index funds simply buy and hold all of the stocks in a broad market index such as the S&P 500. Index funds work by paring the costs of investing to the bare-bones minimum. They pay no fees to expensive money managers and have minimal trading costs, as they follow the ultimate buy-and-hold strategy. We can’t control what the markets will do, but we can control how much we pay for our investments. Index funds allow you to invest, at minimal cost, in a portfolio diversified to the nth degree.
Think about it this way: all investors as a group own the market and therefore share the market’s gross return (before costs). By simply owning the entire market, index funds also earn the market’s return at minimal annual cost: as low as 0.05% of the amount you invest. The rest of the market is active, with investors and money managers furiously trading back and forth with one another, trying to outperform the market. Yet they too, as a group, own the entire market and earn the market’s gross return. All of that trading is enormously expensive. The fund managers demand (and receive) huge fees, while Wall Street takes a cut from all that frenzied trading. These and other hidden fees can easily add up to over 2% each year.
So index fund investors receive the gross market return minus fees as low as 0.05% or less, while active investors as a group will receive the same gross return minus 2% or more. The gross return of the market minus the cost of investing equals the net return to investors. This
cost matters hypothesis is all you need to know to understand the benefits of index investing. Over an investment lifetime, this annual difference really adds up. Most young people just starting their careers will be investing for 60 years or more. Compounded over that time frame, the high costs of investing can confiscate an astounding 70% of your lifetime returns!
This cost differential substantially understates the costs incurred by so many investors—especially investors in 403(b) and 401(k) retirement plans. As Tony points out in chapter 3, this extra layer of fees (often largely hidden) confiscates an additional staggering proportion of the returns delivered by your funds.
I’m excited to add my small contribution to this book and support Tony in being a voice for good. I’m thrilled to have spent a wonderful afternoon conversing with him. I’m humbled to have the opportunity to spread the gospel of indexing, to help the honest-to-God, down-to-earth human beings who are saving for a secure retirement or for their children’s education.
With flair and depth, Tony covers the history of investment risks and returns, and successful investors should understand this history. That said, history, as the British poet Samuel Taylor Coleridge wrote, is but
a lantern on the stern, which shines only on the waves behind us, and not on where we are headed. The past is not necessarily prologue to the future.
We live in an uncertain world, and face not only the risks of the known unknowns but also the unknown unknowns: the ones that
we don’t know we don’t know. Despite these risks, if we are to have any chance for meeting our long-term financial goals, invest we must. Otherwise we’re certain to fall short. But we don’t have to put up 100% of the capital and take 100% of the risk only to receive 30% of the reward (often far less). By buying low-cost, broad-market index funds (and holding them
forever), you can guarantee that you will receive your fair share of whatever returns the financial markets provide over the long term.
SECTION 1
WEALTH: THE RULE BOOK
CHAPTER 1
UNSHAKEABLE
Reviews
What people think about Unshakeable4.3
Critic reviews
I would divide "Unshakeable" into two books, really. The first half is a rehashing of "Money: Master the Game," but done so in a very creative way. "Money" is a long book at around 600 pages, and Robbins wanted to be able to provide the key information from that book in a more user accessible option; hence "Unshakeable." It takes all the key elements, the important parts, and the shocking data from "Money" and breaks it down into simplified version that still deliver the same punch. The other half seems to be a lead in to a book Robbins plans to write soon, or has already written and plans to release soon. It gets away from investment a bit and focuses more on fulfillment and happiness. Combining these two parts together gives you "Unshakeable," which is unique in its own right. Certainly a great read, but if you haven't read "Money" you may be a bit lost.Scribd Editors
Tony Robbins has done it again, and this book provides the typical insightful information that we've come to expect, and appreciate, from Robbins. Known for his financial genius, philanthropic work, and passion for helping small businesses and individuals with financial goals, Robbins writes Unshakeable from a standpoint that many can understand. The style he writes it in is simple to understand, far less complex than his massive Money: Master the Game, but still containing useful information. It is almost like a distilled version of Money, which is fine, since many people don't want to digest such a massive financial book. Robbins has some important points I'd like to highlight, like removing emotions from the equation when dealing with market corrections (and investment in general). His personal stories are compelling, and he provides enough examples of good and bad investments that the reader can create their own opinion effectively.Scribd Editors
Reader reviews
- (4/5)I have thought the whole book is about investing, but the end part focuses on happiness, so the core concept is that you should give more when you make more.
- (1/5)He is so pretentious. It feels like it's only about himself
- (4/5)He is amazing as always. This time he tried to explain the stock market and the self-restrained needs to create a fortune.
- (5/5)I like this book, very interesting, Another great treasure, Tony is very inspiring personality.
- (4/5)
2 people found this helpfulWhat about no-USA residents, no USA citizen, gotta reframe all
2 people found this helpful
- (4/5)
1 person found this helpfulI read this book in a day. I decided to read it after listening to Tony's interview with Tim Ferriss. This book has practical information on how to invest your money and also has a key message on happiness. I read this after reading chapters 8 and 20 of The Intelligent Investor by Benjamin Graham and hearing of various other snippets of information from Mr Money Moustache and Tim Ferriss. Worth a read for anyone wondering if they are doing the best thing with their savings.
1 person found this helpful
- (5/5)
2 people found this helpfulHighly recommended books. It changed my life. Thank you,Mr. Robbins.
2 people found this helpful
- (1/5)WHO IS TONY HE IS A PORNSTAR I DONT KNOW JAJAJAJJAJAJAJAJAJAAJAJAJAJAJAJAJAJJAJAJJAAJAJAJAJAJJAJAJAJAJAJAJAJAJAJJAJAJAJAJAJAJAJAJAJJAJAJJAJAJJAJAJAJJAJAJAJAJAJAJAJAJAJAJJAJAJAJAJAJJAJAJAJJAJAAJAJJAAJAJJAJAJJAJAJAJAJJAJAJAJAJAJAJAJA I DON HAVE A DAD JAJAJAJAJAJAJAJAAJJAJAJAJAJAJJAJAJAJAAAJAJ THIS IS VERY SAD HNSBCBGDCSBSDVCDAJHVCDSFCVFDVDSVCCHDC. 666 | https://www.scribd.com/book/339352691/Unshakeable-Your-Financial-Freedom-Playbook?utm_medium=cpc&utm_source=slideshare&utm_campaign=pmp_recs-top_right-mixed-v1 | CC-MAIN-2021-25 | refinedweb | 2,842 | 60.85 |
Everyone likes InfoPath’s email data connection because it lets you collect forms using email only, no other infrastructure required (no need for Windows SharePoint Services, SQL Server, or even a file share). We’ve built even more Outlook integration in InfoPath 2007 Beta, but since most of you don’t have that yet, let me share a tip that will work in both InfoPath 2003 and 2007.
The basics: Single dynamic email address
As your probably know, the To and CC line of the email data connection can come from a textbox in the form by using a formula. To do that, just use the Fx button next to the To line in the data connection wizard:
The trick: Multiple email addresses from repeating controls
Some forms have a list of names they want to send to, but the simple formula above won’t work for that.
For example, consider a repeating table that looks like this:
With this data source (note that “person” is repeating):
So you want to produce this semicolon-separated list of e-mails:
A good instinct is to use the “concat” function, but unfortunately that only works on the first element in a repeating structure.
So then comes the team insight: Our “eval” function returns a list of nodes which actually share an anonymous parent. That means you can use one eval functions to create a list of the email addresses, then wrap it in another eval function that gets the parent of that list.
Voila, here’s the formula to solve the problem:
eval(eval(person, “concat(my:email, ‘;’)”), “..”)
(Note that “person” can be inserted from the data source, but “my:email” needs to be typed by hand or you’ll get an error.)
For the curious: Here’s how it’s done
Let’s break down that XPath formula from the inside out:
- “concat(my:email, ‘;’)” – Adds a semicolon to each email address.
- eval(person, “concat(my:email, ‘;’)”) – Loops through each person to create a list of email addresses
- eval(eval(person, “concat(my:email, ‘;’)”), “..”) – Gets the anonymous parent of the email addresses, and converts them to a string.
So the end result returns the contents of that anonymous parent, which is a series of semicolon-delimited email addresses. Phew!
In summary
We are using two tricks here:
- The fields returned by eval() all have the same anonymous parent (feature of InfoPath’s function)
- The string value of a parent is the concatenation of all its children (W3C spec’ed)"], ‘concat(my:Skill_Name, ";")’), "..")
It works if I take out the Skill_Level filter. The exact same code works if I’m referencing a secondary data source dfs:.
Ay ideas?
Problem with french infopath 😉, ‘;’)"), "..").
Hi debb66,.
Scott
Hi debb66,
I decided to create the same data structure to see if this helps – here is what I have:
myFields, ‘concat(my:Email, ";")’), ".."), 1, string-length(xdMath:Eval(xdMath:Eval(../my:Contributors/my:People/my:Person, ‘concat(my:Email, ";")’), "..")) – 1)
When I Preview the form and add e-mail addresses to the "Email" field, each name in each row is added to the text box.
Let me know if this helped!
Scott
Scott:
Thank you so much for your quick response however, it didn’t work… Here’s the error:
MSXML5.DLL
Reference to undeclared namespace prefix: ‘my’.
Error occurred during a call to property or method ‘Eval’.
I verified that the .dll is in place and sure enough it is.
Thanks again for your help.
Hi debb66,
Did you create the data source yourself or is it being created from an existing XML/XSD, database, etc. file?
I just tested the same form using InfoPath 2003 and it worked fine.
Scott
Hi Scott:
I have to apologize I’m not a novice or programmer so you may have lost me.
I’ve created the form and have posted to a sharepoint site that collects the information. I’ve not created a database, etc.
I was hoping it was a matter of just copying and pasting the code. Arrrggghh! 🙁
Debbie
Hi Debbie,
No problem – and you actually answered my question! 🙂
If you would, do this for me:
– Open your XSN in Design view
– Display the Data Source Task Pane
– Right-click on your "Email" node and choose Copy XPath
Paste the XPath here so I can see the complete structure.
Thanks!
Scott
Hi again Scott:
I’m using 2003 InfoPath – it doesn’t give me the Copy XPath option.
Deb.
Hi Debbie…just consider me thick…I forgot you were using 2003. 🙂
How about this:
– On the View, drag an Expression Box and drop it outside of the section and repeating table (i.e. just somewhere in a clear area on the view)
– Click the "Fx" button
– Click Insert Field or Group
– Drill down and select your Email field and click Ok
– Enable the "Edit XPath" box on the Expression Box
– Now, select and copy what is in the "Formula" box and post that here.
Scott
You are a very patient person! 🙂 Thank you!
my:Contributors/my:People/my:Person/my:Email
That is what I have.
Thanks Debbie!
Well – what I gave you should definitely work. Unfortunately at this point, the best solution may be to open a support case so we can take a look at your XSN (and possibly at your machine) to try and see why this is failing.
That XPath expression that you provided is identical to the sample I created on my 2003 machine and it works without issue…so something is going on in your environment.
Do you have another machine where you can test this?
Scott
I will try it on another machine and see what happens. Again thank you for your help! It is greatly appreciated!
Hi Scott:
I tried a different machine unfortunately does not work. So put in a helpdesk ticket to see what my IT group can do.
Thank you again for you patience and help!
Debbie
Hi Debbie…
That is quite odd…if your Help Desk runs into a wall, please don’t hesitate to open a support incident with Microsoft so we can look at this for you.
Scott
I have designed a form with a send button who has this formula: eval(eval(Adresslist; ‘concat(], ‘concat(my:Contact_Name, ";")’), "..")
Any help would be greatly appreciated!
Amber
Just wanted to say Thanks! I am a non-pragramming InfoPath newbie and this blog has been awesome!
Hello,!
What if you do not need to do any concating
I need to compare what I have in a field with what is in a custom list. I also need to translate both into lower case so they match. Here is what I have:
translate(my:myFields/my:MyCurrentUser, "ABCDEFGHIJKLMNOPQRSTUVWYXZ", "abcdefghijklmnopqrstuvwyxz") = eval(eval(/dfs:myFields/dfs:dataFields/dfs:SustainApprovers, "translate(my:myFields/my:dataFields/my:Person, ‘ABCDEFGHIJKLMNOPQRSTUVWYXZ’, ‘abcdefghijklmnopqrstuvwyxz’)",".."))
I get an error when the form loads.
Thanks for any help Derek,
If you just want the e-mail addresses to be on separate lines in a text box, then take a look at this post: blogs.msdn.com/…/385577.aspx
You will need to create that file and add it to your XSN. Then in the expression for concatenating the emails, you will remove the ";" from the expression and select the "crlf" option from this new data connection. It is important that you actually select that item and not just hand enter it. In the end, your expression will look like this when you initially select it:
eval(eval(eval(attendee[selected = string(true())]/attendeeEmail, "."), 'concat(., @crlf)'), "..")
Once you click the Verify formula button, it will look like this:
eval(eval(eval(attendee[selected = string(true())]/attendeeEmail, "."), 'concat(., xdXDocument:GetDOM("Characters")/characters/@crlf)'), "..")
NOTE: This will only work if you are using the InfoPath client to open your forms. If you are using the browser, the only way to do this is with code.
One last item: in order for your text box to reflect these on different lines, you will need to enable the "Multi-Line" property of the text box.
Scott
Very helpful – works like a charm! In my case, I was using a repeating table with only one column, and needed all the values chosen concatenated into a separate field. This solution worked well. Thanks,
Thanks Scott, I'll give it a go. I was aware of the carriage return xml file (and am using it in non-repeating tables), but I couldn't get it to work with this eval function.
I have tried this method however the value returned is only the first value of the repeating array.
Example, Name = Mary, John, Ken
Value returned = Mary; Mary; Mary
Its note worthy that the field the value is returned to is within the footer of the repeating table. I'm not sure if this makes a difference. I will test some more.
Thanks,
Greg
Hi there, and thank you for a fantastic post! The only thing is that I also, just like Greg, get the first value repeated instead of the individual values… Any ideas?
Dear Scott,
Thanks a lot for your suggestion on how to eliminate the last ";" in one of the replies able. You really saved me.
Regards,
Ramya.
I've tried to replace the repeating group with one from a SharePoint list. Can anyone shed some light on this?
Good day Scott,
I have found your description above to be exactly what I was looking for. I also read through the blog below, and discovered I encounter the same error as Debb66 has. You have referred her to ther IT team.
Was there ever any resolution found to this?
Unfortunatley my IT team won't support InfoPath queries and troubleshooting. So I am at the mercy of my findings here.
Can you assist me, please?
Hi Charles M,
Can you share with me what "error" you are referring to?
Scott
My DataSource Looks like this:
myFields
RecipientColumn
RecipientRepeatingTable
CCRecipient
On this form, I have a one-cell table. This cell contains a repeating table with a single textbox field. Emails are typed in this field.
Below this repeating table, I have a single text box that I wish to concatenate all the emails entered in the repeating table above.
This is the formula entered as the DefaultValue of this textbox field:
eval(eval(RecipientRepeatingTable, "concat(CCRecipient, ';')"), "..")
This is the error returned when this formula is validated:
"../my:RecipientColumn/my:RecipientColumn/my:RecipientRepeatingTable/my:CCRecipient" does not point to a valid location path of a field or group.
Thank you.
Hi Charles M,
Thank you for the clarification. So I setup a sample XSN like this:
myFields
RecipientColumn (Group, non-repeating)
RecipientRepeatingTable (Group, repeating)
CCRecipient (Field, Text data type, non-repeating)
If your data source is setup like this, here is the XPath expression you will need as the Default Value for the "EmailRecipients" field (note: I clicked the "Edit XPath" check box in the formula box so you could see the entire expression):
xdMath:Eval(xdMath:Eval(xdMath:Eval(../my:RecipientColumn/my:RecipientRepeatingTable/my:CCRecipient, "."), 'concat(., ";")'), "..")
What this looks like without the "Edit XPath" checked is this:
eval(eval(eval(CCRecipient, "."), 'concat(., ";")'), "..")
So if your data source is indeed exactly what I have described above, you can copy/paste the expanded version of the expression above directly into your XSN.
I hope this helps!
Scott
I was just looking for this formula to send multiple email from repeating table and this information is very useful. Thank you.
Hi,
First of all, thanks for very useful post.
It works very good, but I have just a little lack.
The structure of my secundary dataconnection is like that:
myFields
dataFields
ns1:GetAdmittingDiagnosisResponse
GetAdmittingDiagnosisResult
AdmDiagnosis
Code
ClinicalPriorityDesc
…
My formula is:
eval(eval(AdmDiagnosis[ClinicalPriorityDesc != "Secundario"]; 'concat(ns1:Code, "~", ns1:Description, "¬")'); "..")
In XPath:
xdMath:Eval(xdMath:Eval(xdXDocument:GetDOM("GetAdmittingDiagnosis")/dfs:myFields/dfs:dataFields/ns1:GetAdmittingDiagnosisResponse/ns1:GetAdmittingDiagnosisResult/ns1:AdmDiagnosis[ns1:ClinicalPriorityDesc != "Secundario"], 'concat(ns1:Code, "~", ns1:Description, "¬")'), "..")
This formual works perfect, but if I try the equal filter
[ClinicalPriorityDesc = "Secundario"]
This error raised:
"msxml5.dll
Reference to undeclared namespace prefix: 'ns1'.
Error occurred during a call to property or method 'Eval'."
I workaround this issue using the not equal comparision instead the equal, but I'm curious if there is a way to fix that…
Thanks
I am receiving the same issue with the formula: "../my:RepeatingTable/my:NomineesHidden/my:Person" does not point to a valid location path of a field or group.
I first tried the formula in the main body of the post. When that didn't work, I copied the formula that was given to someone else and adjusted the group names to fit my .xsn. Here is the XPath formula:
substring(xdMath:Eval(xdMath:Eval(../my:RepeatingTable/my:NomineesHidden/my:Person, 'concat(my:AccountID, ";")'), ".."), 1, string-length(xdMath:Eval(xdMath:Eval(../my:RepeatingTable/my:NomineesHidden/my:Person, 'concat(my:AccountID, ";")'), "..")) – 1)
My data structure is this:
NomineeTopGroup (non-repeating Group)
RepeatingTable (repeating table)
NomineeHidden (non-repeating Group)
pc:Person (repeating person group)
display name (string)
accountid (string)
accounttype (string)
I have tried everything I can think of to fix the formula so that it recognizes the AccountID field, but I have failed.
I'd appreciate any help you could provide.
does not work with people picker sp 2010
Hi, thanks for this post, I managed to solve one of my problems :).
I was trying to get te same result from a multiple-selection listbox, but can't get it to work.
This is the datastructure :
group1
repeating field1
field2
I would like to see the checked item from field1 in field2, ';' seperated.
all help appreciatied 🙂
Jan
I have been looking for answer to send email to multiple people using InfoPath function ""Person/Group Picker" and data connection.
Finally, using this formula and adding some tricks I was able to do it.
My Data Source Looks like this:
Notification-group
Notification-send-to
Notification-send-to-group (Person/Group Picker)
pc:Person
DisplayName
AccountId
AccountType
Set default value for "Notification-send-to"
eval(eval(Person, 'concat(pc:AccountId, ";")'), "..")
Setup data connection for email and enter following formula to remove domain name in “To” using “fx”
translate(Notification-send-to, "DOMAIN NAME", "")
Thank you,
first of all hi!.i have a forma and a people list.
Structure:
myFields
Group
pc:Person (repeating group)
DisplayName
AccountId
AccountType
i want so send emails to several DisplayName and cannot make it work 🙁
I am having a form with different sections and the momment the user hits the submit button I need each section in this form to be sent to different e-mails, I mean If I am having 3 sections in the form I need each of these 3 sections to be sent to 3 different e-mails. Any ideas how can I achieve this goal ?
Does anybody besides me thinks that InfoPath is not the great tool they promote?
Exactly what I needed. Thanks Bunch!!!
hi scott
I really admire you for your good answer
tanx a lot
It worked for me. I used my secondary data source instead of "person" and I used "." (current field value) instead of "my:email". Thanks
Solved it with this formula
eval(eval(Person, 'concat(concat(substring-after(pc:AccountId, ""), "@domain.com"), ";")'), "..")
I am facing a weird issue and do not know how to solve. Please help.
I am using Infopath and I have one multi-selection listbox with secondary data connection with total 4 columns. There are ID, Title, Email Address, Description.
The listbox is displaying Title value.
Purpose: user need to check the listbox and decided which listbox to select. Once tick, the eval formula will look up Email Address column and send email out. This is the formula – eval(eval(Email, 'concat(., ";")'), "..")
Question: why the email display without check any check box?
Expectation: User should check the box then email address only show in email To list
Thank you.
Hi Scott,
I have a master/detail repeating table in my form, and one of fields is "EmailAddress" which shows customer's email address. I created a submit button and put the "EmailAddress" field to "TO" object. When I tested it, the submit button only would return the first row of emailaddress from the repeating table, but not the rest of it.. Even thought I put the XML code like this: current()/dfs:dataFields/d:vw_HZLeadLists/@EmailAddress
It still just return the first record of emailaddress… Could you give me some ideas? Thanks a lot!
-Lun
Thank you very much! It was a lot of help
Hi My formula has not errors but is not working 🙁
Hi to all, currently my formula has not errors but is not working does not concat nothing
eval(eval(FORMATIONGROUP, 'concat(../my:FORMATIONGROUP/my:group4/my:ContactFormations, "")'), "..")
Would you happen to have a solution where I don't send an email to multiple people in a repeating table, but the last line item in the table.
Some info, I have a form with repeating table that has Primary & Secondary analysts. The first line item (run 1) fires off the email to the primary & secondary when Status equals Completed. Now the next line item (run 2) has different people associated with Primary & Secondary and need to send the email to only them when the status condition is met, but not the folks in run 1.
Possible?
Hi, I've been going round in circles with this. My xpath doesn't return any errors but only pulls through the first entry in the repeating table, not all of them. Xpath is:
xdMath:Eval(xdMath:Eval(../my:InjuredPersonsReportingManager/pc:Person, 'concat(pc:DisplayName, ";")'), "..")
Any suggestions as to what I'm doing wrong.
Worked PERFECT! Thank you so much!
What is the 2013 version for this. There is no "Myemaill" for 2013?
Instead of an email 'To' field, I am trying to populate a text field with a concatenated set of people's id's
However, it
Approver Name – Secondary data source which has the Person name in a people picker field.
Any help on this will be appreciated.
Thanks,
Venkatesh
The formula didnt show correctly in the previous post.
eval(eval(Person, ‘concat(xdXDocument:GetDOM(“FIM Approvers”)/dfs:myFields/dfs:dataFields/d:SharePointListItem_RW/d:Approver_x0020_Name/pc:Person/pc:AccountId, “;”)’), “..”)
Sorry to drag up another old thread, but I am having issues with this particular function.
When I create the loop, and publish, I get the following effect:
The first email address is listed twice in the CC field.
I am using the people picker option, however, I created a manual table and manually entered emails and got the same result.
Here is my xpath:
xdMath:Eval(xdMath:Eval(/my:myFields/my:email/my:Change_Implementer_Primary/pc:Person, 'concat(/my:myFields/my:email/my:Change_Implementer_Primary/pc:Person/pc:DisplayName, ";")'), "..")
My repeating table is set as follows: (people picker)
myFields
Change_Implementer_Primary (this is a required field)
pc:Person
DisplayName
AccountId
AccountType
I am way stuck as to why the loop is picking the same email name twice when there are two email names selected.
I have tried this in two forms:
Repeating table with emails in separate boxes
One box allowing multiple selections.
Both do not work for me.
Thx for any help. | https://blogs.msdn.microsoft.com/infopath/2006/04/05/email-submit-to-line-loops-in-formulas/ | CC-MAIN-2016-50 | refinedweb | 3,210 | 54.73 |
How To Pass Context In Standard Way - Without ThreadLocal
javax.transaction.TransactionSynchronizationRegistry holds a Map-like structure and can be used to pass state inside a transaction. It works perfectly since the old J2EE 1.4 days and is thread-independent.
Because an Interceptor is executed in the same transaction as the ServiceFacade, the state can be even set in a @AroundInvoke method. The TransactionSynchronizationRegistry (TSR) can be directly injected into an Interceptor:
public class CurrentTimeMillisProvider {
@Resource
private TransactionSynchronizationRegistry registry;
@AroundInvoke
public Object injectMap(InvocationContext ic) throws Exception{
registry.putResource(KEY, System.currentTimeMillis());
return ic.proceed();
}
}
A ServiceFacade don't even has to inject the TSR. The state is automatically propagated to the invoked service:
@Stateless
@WebService
@Interceptors(CurrentTimeMillisProvider.class)
public class ServiceFacadeBean implements ServiceFacade {
@EJB
private Service service;
public void performSomeWork(){
service.serviceInvocation();
}
}
Everything, what is invoked in the scope of a ServiceFacade - and so its transaction has access to the state stored in the injected TSR:
@Stateless
public class ServiceBean implements Service {
@Resource
private TransactionSynchronizationRegistry tsr;
public void serviceInvocation() {
long timeMillis = (Long)tsr.getResource(KEY);
//...
System.out.println("Content is " + timeMillis);
}
}
TransactionSynchronizationRegistry works (should work) even in case you had assigned different thread pools to EJBs, which participate in the same transaction. The state would get lost with a simple ThreadLocal.
Because we are already in the lightweight Java EE 5 / 6 world - XML and other configuration plumbing are fully optional :-).
A deployable, working example (ContextHolder) was tested with Glassfish v3 and NetBeans 6.8m2 and pushed into.
[See Context Holder pattern, page 247 in "Real World Java EE Patterns Rethinking Best Practices" book for more in-depth discussion] seems not to work on WebSphere...
regards gustav
Posted by gustav on October 15, 2009 at 12:24 PM CEST #
dependency injection of TransactionSynchronizationRegistry fails in Interceptor and Entity-Listener.
JNDI Lookup works.
regards gustav
Posted by gustav on October 15, 2009 at 12:27 PM CEST #
But "KEY" itself is also context, which also needs to be passed around, so there is always something that you need to pass either via the callstack or via a thread-local kind of thing, no?
(Unless it's global - but then, the context itself becomes global, and passing data this way goes somewhat against information hiding).
Posted by Dimitris Andreou on October 15, 2009 at 01:49 PM CEST #
Hi Adam,
sounds cool and seems to work, so we might get rid of our ThreadLocal dependecies! How the **** did you find this class?
Thanks,
Norbert
@Dimitris: The key is local to a transaction. When using the same key within concurrent transactions it will reference different values.
Posted by Norbert Seekircher on October 15, 2009 at 07:19 PM CEST #
@Gustav,
1. "This seems not to work on WebSphere..." - then you will have to open an issue. It is a part of the spec.
2. "dependency injection of TransactionSynchronizationRegistry fails in Interceptor and Entity-Listener.
JNDI Lookup works." Works as designed - you cannot inject anything into JPA. A JNDI-lookup should work.
thanks for your feedback!,
adam
Posted by Adam Bien on October 15, 2009 at 11:10 PM CEST #
@Dimitris,
the key is just a constant. The value of the key is local to a transaction. Information hiding between layers is another issue. This technique is often used to pass additional information like security information or handle to a transaction-specific resource...
thanks!,
adam
Posted by Adam Bien on October 15, 2009 at 11:13 PM CEST #
@Norbert,
"How the **** did you find this class?"
A customer asked me during a review about my opinion about this approach :-). I found that in J2EE 1.4 spec and it worked well even at that time.
With Java EE 5 it not only works, but is really nice!,
thanks for your feedback,
adam
Posted by Adam Bien on October 15, 2009 at 11:14 PM CEST #
Annotation @Resource don`t work in jboss. The TransactionSynchronizationRegistry isn't made available in JNDI.
Posted by Sergey Kiselev on May 17, 2010 at 07:00 AM CEST #
Sergey,
Actually you can make this work on JBoss if you do:
@javax.annotation.Resource( mappedName = "java:comp/TransactionSynchronizationRegistry" )
private TransactionSynchronizationRegistry mRegistry;
Hope that helps!
Posted by Richard Kennard on August 18, 2010 at 07:41 AM CEST #
I can achieve the same thing with a requestscoped bean (javax.enterprise.context.RequestScoped).
example :
1. Counter service
@Stateless
public class CounterService {
@Inject
Context context;
public int getCount(){
return context.getCount();
}
}
2. The counter is stored in the 'context' class
@RequestScoped
public class Context {
private int count=0;
public int getCount() {
return ++count;
}
With this, I can inject my context in every bean that need to access/update the context, and the context is scoped to the request that initiated the call, whatever it is a webapp, a web service or a mdb.
What is the benefit to use the TSR instead of a specific context bean ?
Is there a difference between using the TSR and using a requestscoped bean ?
Am I wrong somewhere ?
Thanks,
Nicolas
Posted by Nicolas NOEL on December 21, 2011 at 03:51 PM CET #
does EJBContext.getContextData work the same way with respect to thread safety?
Posted by Bill Schneider on December 28, 2011 at 06:51 PM CET #
@Nicolas
You're almost right but the request scope in CDI doesn't extend to some places you might be using in your app like:
* org.hibernate.Interceptor implementations,
* old EJB interceptors which are called before CDI creates the request scope
Tomasz
Posted by Tomasz Nikiel on January 03, 2012 at 06:51 PM CET #
@Nicolas,
you are almost right. You can achieve exactly the same if you have a JSF / HTTP frontend. If you are coming through a IIOP or JCA channel, you will have to use TSR or ThreadLocal for the context/request binding.
thanks for your constructive comment!,
adam
Posted by Adam Bien on January 04, 2012 at 05:53 PM CET #
@Adam
Yes, we were using remote calls through IIOP. Anyway, I have one more observation: the state cannot be conveyed to org.hibernate.Interceptor with ThreadLocal because the interceptor methods are called back in a different thread. Only TSR can do here and that's the approach we had to take.
To recap everything:
1. If you use JSF/HTTP frontend the CDI request scope is created early enough to be available through all EJB calls. The org.hibernate.Interceptor, however, doesn't support injection.
2. If you use IIOP (remote business interface calls) or JCA channel the CDI request scope is created after classical EJB interceptor calls.
3. In either case 1. or 2. you cannot inject a @RequestScoped object into the org.hibernate.Interceptor because it doesn't support injection.
4. In case 2. you could theoretically use ThreadLocal to pass state but it's not conveyed to org.hibernate.Interceptor, whose methods are called in a different thread.
5. In case 2. TSR is the way to go as it's available in all the stages.
Posted by Tomasz Nikiel on January 05, 2012 at 05:18 PM CET #
Thank you ;)
Posted by Manfred Hoess on May 23, 2012 at 05:02 PM CEST #
I kind of know that this is late but :
@Manfred
ad 1 and 3: It doesn't support injection, but supports retrieving the beanmanager thru jndi (from which you can get requested bean)
ad 4: we do use Thread local right now to do exactly this (didn't know about TSR when implemented this - I will probably refactor it.)
Posted by psychollek on December 19, 2014 at 10:40 AM CET #
Hi,?
Thanks,
Ant
Posted by Ant Kutschera on December 31, 2014 at 03:14 PM CET #
@Ant,
a good question, I'm going to cover it at the next:,
thanks for asking and commenting!
cheers,
adam
Posted by Adam Bien on January 11, 2015 at 08:05 PM CET #
I've tried the @RequestScoped approach. Unfortunately the Scope isn't propagated to @Asynchronous methods: :(
Posted by Bastian on January 21, 2015 at 10:47 AM CET #
I have the problems that with wildfly 8.2 it seems the TSR is not propagated to @asynchronous calls..
So when I call from one Bean a method on another EJB, the TSR contains the same data in both.. if I call from the same Bean a @asynchronous method, in that method the TSR is empty. Both Beans require Transactions. Any idea?
Posted by Peter Sellers on January 23, 2015 at 07:20 PM CET #
Wow, it looks like ultimate way to propagate user's language to remote EJB calls (and retrieve it in MessageInterpolator).
Posted by PeterW on March 03, 2015 at 08:36 PM CET #
@Bastian @Peter Sellers
I think @Asynchronous methods by their nature don't take place in the current transaction. Thats why there cannot be any state propagated to them via TSR.
Posted by Heli on May 14, 2015 at 09:01 PM CEST #
Hi Adam,
recently (by accident) i stumbled upon the standard CDI scope called @TransactionScoped:
This seems to be available since JEE7 but isn't even mentioned in the official JEE Tutorial. Maybe you want to elaborate on this on your next airhacks.tv as an alternative to using TSR directly? Think it's worth spreading...
Great show by the way :-)
Br,
Heli
Posted by Heli on May 18, 2015 at 10:48 PM CEST #
Hi Adam, still a possibility limited to strings and the vm: System.get / setProperty () with the thread ID as key.
Posted by Gerd Tuschke on June 24, 2016 at 07:32 PM CEST #
Hi Adam,
Is this approach also supposed to work for remote EJB calls to remote VM over IIOP? I can't get that working (WebSphere 8.5).
Posted by Jimmy Praet on December 19, 2017 at 12:14 PM CET # | http://adambien.blog/roller/abien/entry/how_to_pass_context_in | CC-MAIN-2018-39 | refinedweb | 1,638 | 55.34 |
Query explanation:
I know that global variables in C sometimes have the
extern keyword. What is an
extern variable? What is the declaration like? What is its scope?
This is related to sharing variables across source files, but how does that work precisely? Where do I use
extern?
How to use extern to share variables between source files? Answer #1:
Using
extern is only of relevance when the program you’re building consists of multiple source files linked together, where some of the variables defined, for example, in source file
file1.c need to be referenced in other source files, such as
file2.c.
It is important to understand the difference between defining a variable and declaring a variable:
- A variable is declared when the compiler is informed that a variable exists (and this is its type); it does not allocate the storage for the variable at that point.
- A variable is defined when the compiler allocates the storage fortern declaration.c and
file2.c:
file3.h
extern int global_variable; /* Declaration of the variable */
file1.c
#include "file3.h" /* Declaration made available here */ #include "prog1.h" /* Function declarations */ /* Variable defined here */ int global_variable = 37; /* Definition checked against declaration */ int increment(void) { return global_variable++; }
file2.c
#include "file3.h" #include "prog1.h" #include <stdio.h> void use_it(void) { printf("Global variable: %d\n", global_variable++); } in front of function declarations in headers for consistency — to match the
extern in front of variable declarations in headers. Many people prefer not to use
extern in front of function declarations; the compiler doesn’t care — and ultimately, neither do I as long as you’re consistent, at least within a source file.
prog1.h
extern void use_it(void); extern int increment(void);
prog1.c
#include "file3.h" #include "prog1.h" #include <stdio.h> int main(void) { use_it(); global_variable += 19; use_it(); printf("Increment: %d\n", increment()); return 0; }
prog1uses
prog1.c,
file1.c,
file2.c,
file3.hand
prog1.h.
The file
prog1.mk is a makefile for
prog1 only. It will work with most versions of
make produced since about the turn of the millennium. It is not tied specifically to GNU Make.
prog1.mk
# Minimal makefile for prog1 PROGRAM = prog1 FILES.c = prog1.c file1.c file2.c FILES.h = prog1.h file3.h FILES.o = ${FILES.c:.c=.o} CC = gcc SFLAGS = -std=c11 GFLAGS = -g OFLAGS = -O3 WFLAG1 = -Wall WFLAG2 = -Wextra WFLAG3 = -Werror WFLAG4 = -Wstrict-prototypes WFLAG5 = -Wmissing-prototypes WFLAGS = ${WFLAG1} ${WFLAG2} ${WFLAG3} ${WFLAG4} ${WFLAG5} UFLAGS = # Set on command line only CFLAGS = ${SFLAGS} ${GFLAGS} ${OFLAGS} ${WFLAGS} ${UFLAGS} LDFLAGS = LDLIBS = all: ${PROGRAM} ${PROGRAM}: ${FILES.o} ${CC} -o $@ ${CFLAGS} ${FILES.o} ${LDFLAGS} ${LDLIBS} prog1.o: ${FILES.h} file1.o: ${FILES.h} file2.o: ${FILES.h} # If it exists, prog1.dSYM is a directory on macOS DEBRIS = a.out core *~ *.dSYM RM_FR = rm -fr clean: ${RM_FR} ${FILES.o} ${PROGRAM} ${DEBRIS}
Guidelines
Rules to be broken by experts only, and only with good reason:
- A header file only contains
externdeclarations of variables — never
staticor unqualified variable definitions.
- For any given variable, only one header file declares it (SPOT — Single Point of Truth).
- A source file never contains
externdeclarations of variables — source files always include the (sole) header that declares them.
- For any given variable, exactly one source file defines the variable, preferably initializing it too. (Although there is no need to initialize explicitly to zero, it does no harm and can do some good, because there can be only one initialized definition of a particular global variable in a program).
- The source file that defines the variable also includes the header to ensure that the definition and the declaration are consistent.
- A function should never need to declare a variable using
extern.
- Avoid global variables whenever possible — use functions instead. ‘common’ definition of a variable too. ‘Common’,
#include "prog2.h" long l; /* Do not do this in portable code */ void inc(void) { l++; }
file11.c
#include "prog2.h" long l; /* Do not do this in portable code */ void dec(void) { l--; }
file12.c
#include "prog2.h" #include <stdio.h> long l = 9; /* Do not do this in portable code */ void put(void) { printf("l = %ld\n", l); }
This technique does not conform to the letter of the C standard and the ‘one definition rule’ — it is officially undefined behaviour:
An identifier with external linkage is used, but in the program there does not exist exactly one external definition for the identifier, or the identifier is not used and there exist multiple external definitions for the identifier (6.9).
§6.9 External definitions ¶5
An external definition is an external declaration that is also a definition of a function (other than an inline definition) or an object. If an identifier declared with external linkage is used in an expression (other than as part of the operand of a
sizeofor
_Alignofoperator whose result is an integer constant), somewhere in the entire program there shall be exactly one external definition for the identifier; otherwise, there shall be no more than one.161)
161) Thus, if an identifier declared with external linkage is not used in an expression, there need be no external definition for it.
However, the C standard also lists it in informative Annex J as one of the Common extensions.
J.5.11 Multiple external definitions
There may be more than one external definition for the identifier of an object, with or without the explicit use of the keyword extern; if the definitions disagree, or more than one is initialized, the behavior is undefined (6.9.2).
Because this technique is not always supported, it is best to avoid using it, especially if your code needs to be portable. Using this technique, you can also end up with unintentional type punning.
If one of the files above declared
l as a
double instead of as a
long, C’s type-unsafe linkers probably would not spot the mismatch. If you’re on a machine with 64-bit
long and
double, you’d not even get a warning; on a machine with 32-bit
long and 64-bit
double, you’d probably get a warning about the different sizes — the linker would use the largest size, exactly as a Fortran program would take the largest size of any common blocks.
Note that GCC 10.1.0, which was released on 2020-05-07, changes the default compilation options to use
-fno-common, which means that by default, the code above no longer links unless you override the default with
-fcommon (or use attributes, etc — see the link).
The next two files complete the source for
prog2:
prog2.h
extern void dec(void); extern void put(void); extern void inc(void);
prog2.c
#include "prog2.h" #include <stdio.h> int main(void) { inc(); put(); dec(); put(); dec(); put(); }
prog2uses
prog2.c,
file10.c,
file11.c,
file12.c,
prog2.h.
Warning ‘ex
int some_var; /* Do not do this in a header!!! */
Note 1: if the header defines the variable without the
extern keyword, then each file that includes the header creates a tentative definition of the variable. As noted previously, this will often work, but the C standard does not guarantee that it will work.
broken_header.h
int some_var = 13; /* Only one source file in a program can use this */
Note 2: if the header defines and initializes the variable, then only one source file in a given program can use the header. Since headers are primarily for sharing information, it is a bit silly to create one that can only be used once.
seldom_correct.h
static int hidden_global = 3; /* Each source file gets its own copy */
Note 3: if the header defines a static variable (with or without initialization), then each source file ends up with its own private version of the ‘global’ ‘declarations ‘main
#ifdef DEFINE_VARIABLES #define EXTERN /* nothing */ #else #define EXTERN extern #endif /* DEFINE_VARIABLES */ EXTERN int global_variable;
file1a.c
#define DEFINE_VARIABLES #include "file3a.h" /* Variable defined - but not initialized */ #include "prog3.h" int increment(void) { return global_variable++; }
file2a.c
#include "file3a.h" #include "prog3.h" #include <stdio.h> void use_it(void) { printf("Global variable: %d\n", global_variable++); }
The next two files complete the source for
prog3:
prog3.h
extern void use_it(void); extern int increment(void);
prog3.c
#include "file3a.h" #include "prog3.h" #include <stdio.h> int main(void) { use_it(); global_variable += 19; use_it(); printf("Increment: %d\n", increment()); return 0; }
#ifdef DEFINE_VARIABLES #define EXTERN /* nothing */ #define INITIALIZER(...) = __VA_ARGS__ #else #define EXTERN extern #define INITIALIZER(...) /* nothing */ #endif /* DEFINE_VARIABLES */ EXTERN int global_variable INITIALIZER(37); EXTERN struct { int a; int b; } oddball_struct INITIALIZER({ 41, 43 });
file1b.c
#define DEFINE_VARIABLES #include "file3b.h" /* Variables now defined and initialized */ #include "prog4.h" int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
file2b.c
#include "file3b.h" #include "prog4.h" #include <stdio.h> void use_them(void) { printf("Global variable: %d\n", global_variable++); oddball_struct.a += global_variable; oddball_struct.b -= global_variable / 2; }
Clearly, the code for the oddball structure is not what you’d normally write, but it illustrates the point. The first argument to the second invocation of
INITIALIZER is
{ 41 and the remaining argument (singular in this example) is
43 }. Without C99 or similar support for variable argument lists for macros, initializers that need to contain commas are very problematic.
The next two files complete the source for
prog4:
prog4.h
extern int increment(void); extern int oddball_value(void); extern void use_them(void);
prog4.c
#include "file3b.h" #include "prog4.h" #include <stdio.h> int main(void) { use_them(); global_variable += 19; use_them(); printf("Increment: %d\n", increment()); printf("Oddball: %d\n", oddball_value()); return 0; }:
#ifndef FILE3B_H_INCLUDED #define FILE3B_H_INCLUDED ...contents of header... #endif /* FILE3B_H_INCLUDED */
The header might be included twice indirectly. For example, if
file4b.h includes
file3b.h for a type definition that isn’t shown, and
file1b.c needs to use both header
file4b.h and before including
file3b.h to generate the definitions, but the normal header guards on
file3b.h would prevent the header being reincluded.
So, you need to include the body of
file3b.h at.c and
file6c.c directly include the header
file2c.h several times, but that is the simplest way to show that the mechanism works. It means that if the header was indirectly included twice, it would also be safe.
The restrictions for this to work are:
- The header defining or declaring the global variables may not itself define any types.
- Immediately before you include a header that should define variables, you define the macro DEFINE_VARIABLES.
- The header defining or declaring the variables has stylized contents.
external.h
/* ** This header must not contain header guards (like <assert.h> must not). ** Each time it is invoked, it redefines the macros EXTERN, INITIALIZE ** based on whether macro DEFINE_VARIABLES is currently defined. */ #undef EXTERN #undef INITIALIZE #ifdef DEFINE_VARIABLES #define EXTERN /* nothing */ #define INITIALIZE(...) = __VA_ARGS__ #else #define EXTERN extern #define INITIALIZE(...) /* nothing */ #endif /* DEFINE_VARIABLES */
file1c.h
#ifndef FILE1C_H_INCLUDED #define FILE1C_H_INCLUDED struct oddball { int a; int b; }; extern void use_them(void); extern int increment(void); extern int oddball_value(void); #endif /* FILE1C_H_INCLUDED */
file2c.h
/* Standard prologue */ #if defined(DEFINE_VARIABLES) && !defined(FILE2C_H_DEFINITIONS) #undef FILE2C_H_INCLUDED #endif #ifndef FILE2C_H_INCLUDED #define FILE2C_H_INCLUDED #include "external.h" /* Support macros EXTERN, INITIALIZE */ #include "file1c.h" /* Type definition for struct oddball */ #if !defined(DEFINE_VARIABLES) || !defined(FILE2C_H_DEFINITIONS) /* Global variable declarations / definitions */ EXTERN int global_variable INITIALIZE(37); EXTERN struct oddball oddball_struct INITIALIZE({ 41, 43 }); #endif /* !DEFINE_VARIABLES || !FILE2C_H_DEFINITIONS */ /* Standard epilogue */ #ifdef DEFINE_VARIABLES #define FILE2C_H_DEFINITIONS #endif /* DEFINE_VARIABLES */ #endif /* FILE2C_H_INCLUDED */
file3c.c
#define DEFINE_VARIABLES #include "file2c.h" /* Variables now defined and initialized */ int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
file4c.c
#include "file2c.h" #include <stdio.h> void use_them(void) { printf("Global variable: %d\n", global_variable++); oddball_struct.a += global_variable; oddball_struct.b -= global_variable / 2; }
file5c.c
#include "file2c.h" /* Declare variables */ #define DEFINE_VARIABLES #include "file2c.h" /* Variables now defined and initialized */ int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
file6c.c
#define DEFINE_VARIABLES #include "file2c.h" /* Variables now defined and initialized */ #include "file2c.h" /* Declare variables */ int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
The next source file completes the source (provides a main program) for
prog5,
prog6 and
prog7:
prog5.c
#include "file2c.h" #include <stdio.h> int main(void) { use_them(); global_variable += 19; use_them(); printf("Increment: %d\n", increment()); printf("Oddball: %d\n", oddball_value()); return 0; }.h into
file2d.h:
file2d.h
/* Standard prologue */ #if defined(DEFINE_VARIABLES) && !defined(FILE2D_H_DEFINITIONS) #undef FILE2D_H_INCLUDED #endif #ifndef FILE2D_H_INCLUDED #define FILE2D_H_INCLUDED #include "external.h" /* Support macros EXTERN, INITIALIZE */ #include "file1c.h" /* Type definition for struct oddball */ #if !defined(DEFINE_VARIABLES) || !defined(FILE2D_H_DEFINITIONS) /* Global variable declarations / definitions */ EXTERN int global_variable INITIALIZE(37); EXTERN struct oddball oddball_struct INITIALIZE({ 41, 43 }); #endif /* !DEFINE_VARIABLES || !FILE2D_H_DEFINITIONS */ /* Standard epilogue */ #ifdef DEFINE_VARIABLES #define FILE2D_H_DEFINITIONS #undef DEFINE_VARIABLES #endif /* DEFINE_VARIABLES */ #endif /* FILE2D_H_INCLUDED */
The issue becomes ‘should the header include
#undef DEFINE_VARIABLES?’ If you omit that from the header and wrap any defining invocation with
#define and
#undef:
#define DEFINE_VARIABLES #include "file2c.h" #undef DEFINE_VARIABLES
in the source code (so the headers never alter the value of
DEFINE_VARIABLES), then you should be clean. It is just a nuisance to have to remember to write the the extra line. An alternative might be:
#define HEADER_DEFINING_VARIABLES "file2c.h" #include "externdef.h"
externdef.h
/* ** This header must not contain header guards (like <assert.h> must not). ** Each time it is included, the macro HEADER_DEFINING_VARIABLES should ** be defined with the name (in quotes - or possibly angle brackets) of ** the header to be included that defines variables when the macro ** DEFINE_VARIABLES is defined. See also: external.h (which uses ** DEFINE_VARIABLES and defines macros EXTERN and INITIALIZE ** appropriately). ** ** #define HEADER_DEFINING_VARIABLES "file2c.h" ** #include "externdef.h" */ #if defined(HEADER_DEFINING_VARIABLES) #define DEFINE_VARIABLES #include HEADER_DEFINING_VARIABLES #undef DEFINE_VARIABLES #undef HEADER_DEFINING_VARIABLES #endif /* HEADER_DEFINING_VARIABLES */
This is getting a tad convoluted, but seems to be secure (using the
file2d.h, with no
#undef DEFINE_VARIABLES in the
file2d.h).
file7c.c
/* Declare variables */ #include "file2d.h" /* Define variables */ #define HEADER_DEFINING_VARIABLES "file2d.h" #include "externdef.h" /* Declare variables - again */ #include "file2d.h" /* Define variables - again */ #define HEADER_DEFINING_VARIABLES "file2d.h" #include "externdef.h" int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
file8c.h
/* Standard prologue */ #if defined(DEFINE_VARIABLES) && !defined(FILE8C_H_DEFINITIONS) #undef FILE8C_H_INCLUDED #endif #ifndef FILE8C_H_INCLUDED #define FILE8C_H_INCLUDED #include "external.h" /* Support macros EXTERN, INITIALIZE */ #include "file2d.h" /* struct oddball */ #if !defined(DEFINE_VARIABLES) || !defined(FILE8C_H_DEFINITIONS) /* Global variable declarations / definitions */ EXTERN struct oddball another INITIALIZE({ 14, 34 }); #endif /* !DEFINE_VARIABLES || !FILE8C_H_DEFINITIONS */ /* Standard epilogue */ #ifdef DEFINE_VARIABLES #define FILE8C_H_DEFINITIONS #endif /* DEFINE_VARIABLES */ #endif /* FILE8C_H_INCLUDED */
file8c.c
/* Define variables */ #define HEADER_DEFINING_VARIABLES "file2d.h" #include "externdef.h" /* Define variables */ #define HEADER_DEFINING_VARIABLES "file8c.h" #include "externdef.h" int increment(void) { return global_variable++; } int oddball_value(void) { return oddball_struct.a + oddball_struct.b; }
The next two files complete the source for
prog8 and
prog9:
prog8.c
#include "file2d.h" #include <stdio.h> int main(void) { use_them(); global_variable += 19; use_them(); printf("Increment: %d\n", increment()); printf("Oddball: %d\n", oddball_value()); return 0; }
file9c.c
#include "file2d.h" #include <stdio.h> void use_them(void) { printf("Global variable: %d\n", global_variable++); oddball_struct.a += global_variable; oddball_struct.b -= global_variable / ‘avoiding.h` and .c and
prog8.c is the name of one of the headers that are included. It would be possible to reorganize the code so that the
main() function was not repeated, but it would conceal more than it revealed.)
How to share variables between source files using extern in C? Answer #2:
An
extern variable is a declaration (thanks to sbi for the correction) of a variable which is defined in another translation unit. That means the storage for the variable is allocated in another file.
Say you have two
.c-files
test1.c and
test2.c. If you define a global variable
int test1_var; in
test1.c and you’d like to access this variable in
test2.c you have to use
extern int test1_var; in
test2.c.
Complete sample:
$ cat test1.c int test1_var = 5; $ cat test2.c #include <stdio.h> extern int test1_var; int main(void) { printf("test1_var = %d\n", test1_var); return 0; } $ gcc test1.c test2.c -o test $ ./test test1_var = 5
Answer #3:.
Answer #4:
declare | define | initialize | ---------------------------------- extern int a; yes no no ------------- int a = 2019; yes yes yes ------------- int a; yes yes no -------------
Declaration won’t allocate memory (the variable must be defined for memory allocation) but the definition will. This is just another simple view on the extern keyword since the other answers are really great.
Hope you learned something from this post.
Follow Programming Articles for more! | https://programming-articles.com/how-to-use-extern-to-share-variables-between-source-files-in-c-answered/ | CC-MAIN-2022-40 | refinedweb | 2,739 | 51.85 |
I am having a hard time processing after reading the DAT data with f = open (file_text.get (),'r').
What i am looking for ...
1. I want to skip the first line
2. The content of DAT data is a list of such numbers.
[20395089035130095 0258387190000927700000520090015]
I want to extract the bold part from this like the first and second columns
3. Finally, I want to output as CSV
I am wandering into an unfamiliar DAT file. Thanking you in advance.
import tkinter as tk #GUI library import tkinter.messagebox as tkm from pathlib import Path # Library that can handle file system paths import pandas as pd #csv Library to handle import datetime as dt import cx_Oracle from tkinter import file dialog now = dt.datetime.now () #time time = now.strftime ('% Y% m% d-% H% M% S') def OpenFileDlg (tbox): ftype = [('','*')] dir ='.' #File dialog display filename = filedialog.askopenfilename (filetypes = ftype, initialdir = dir) #Display file path in text box tbox.insert (0, filename) def btn_click (): #button function cc = str (txt1.get ()) # Substitute text box characters for cc sc = str (txt2.get ()) if not len (cc) == 6: tkm.showerror ("Enter error", "Customer number is 6 digits") return elif not len (sc) == 7: tkm.showerror ("Enter error", "Employee number is 7 digits") return elif file_text.get () =='': tkm.showerror ('error','specify data file') return else: else: tkm.showinfo ("Information", "Saving CSV") f = open (file_text.get (),'r') root = tk.Tk () #Tk class generation root.geometry ('500x400 + 600 + 300') #screen size + position root.title ('input screen') # screen title #Customer number lbl1 = tk.Label (text ='customer number', font = (u'MS Gothic', 11,'bold')) lbl1.place (x = 85, y = 145) txt1 = tk.Entry (width = 30) txt1.place (x = 160, y = 150) btn = tk.Button (root, text ='CSV output', width = 20, font = ("Menlo", 11), command = btn_click) btn.place (x = 155, y = 220) #employee number lbl2 = tk.Label (text ='employee number', font = (u'MS Gothic', 11,'bold')) lbl2.place (x = 85, y = 175) txt2 = tk.Entry (width = 30) txt2.place (x = 160, y = 180) btn1 = tk.Button (root, text = "end", width = 20, font = ("Menlo", 11), command = root.destroy) btn1.place (x = 155, y = 250) #Excel file dialog label = tk.Label (root, text ='data file', font = (u'MS Gothic', 10,'bold')) label.place (x = 100, y = 95) file_text = tk.Entry (root, width = 40) file_text.place (x = 100, y = 115) fdlg_button = tk.Button (root, text ='file selection', command = lambda: OpenFileDlg (file_text)) fdlg_button.place (x = 360, y = 110) root.mainloop () # screen display
- Answer # 1
Related articles
- reading and displaying npy format files in python
- python - read multiple audio files (wav files)
- processing using the len function when an integer value is obtained from python standard input
- python - about multiple processing and loop processing in discordpy
- parallel processing using python multiprocessingpool and multiprocessingqueue does not work well
- python:about processing such as timesleep and wxpython
- python google drive api 100mb or more files cannot be uploaded
- python - avoid processing when duplicated
- emacs - (python) i want to automate the work of reading files in order
- python iterative processing num is not defend
- python - while syntax processing
- python - i want to handle files with the path obtained by ospathjoin
- python about iterative processing with specified numbers
- python - speech processing typeerror:'int' object is not subscriptable
- reading python files
- i want to add processing to the python library
- python - about range and int type processing
- python 3x - update processing and multi-process with pyqtgraph
- about reading xls files with python pandas
Related questions
- python : Add data to existing CSV file for each row
- python : Replacing repetitive ID in dataset
- python : CSV is incorrectly read
- python : How to divide the readable string on the separating symbol without CSV?
- python : Transform Excel Date Format
- python : Saving changes to the CSV file
- Scikit Learn Python
- Put the first line in the CSV file using Python
- python : A certain number of columns from the CSV file for schedule
- python : How to import denormalized data from CSV in PostgreSQL with conservation of relations one to many
You can flexibly read fixed-length files using pandas.read_fwf. | https://www.tutorialfor.com/questions-324127.htm | CC-MAIN-2021-25 | refinedweb | 672 | 57.16 |
You can subscribe to this list here.
Showing
1
results of 1
Juergen Edner <juergen.edner <at> telejeck.de> writes:
>
> Hello,
> I've just installed Squirrelmail v1.5.2 on my server to check
> if it now supports public (#PUBLIC) and/or shared (#SHARED)
> folders of an UW-IMAPD. I had in mind that this feature should
> be added to the development branch.
> Unfortunately I couldn't find a way to get these folders
> displayed. Can someone please tell me if I'm doing something
> wrong or if this feature has not yet been implemented in SM.
In SquirrelMail 'uw' preset default folder prefix is set to 'mail/'.
This limits mail folders to user's home directory and disables all
namespaces. Set it to empty string.
In SquirrelMail 1.5.x mailboxes must be listed in LIST "*" "*" command
response. This disables all public and shared mailboxes. You will have
to modify sqimap_get_mailboxes function to generate listing of shared
and public namespaces.
SquirrelMail 1.4.x should be able to see subscribed shared and public
mailboxes, if you set default folder prefix to empty string.
If you set $no_list_for_subscribe to true in SquirrelMail 1.4.x
configuration, users should be able to subscribe shared and public
mailboxes, if they know mailbox name.
--
Tomas | http://sourceforge.net/p/squirrelmail/mailman/squirrelmail-users/?viewmonth=201012&viewday=5 | CC-MAIN-2016-07 | refinedweb | 212 | 68.87 |
Difference between revisions of "Common Build Infrastructure/Athena Progress Report"
Revision as of 08:08, 28 March 2010
Contents
- 1 Recent Changes
- 2 2010-03-38
- 3 2010-02-01
- 4 2009-11-27
- 5 2009-10-13
- 6 2009-09-01
- 7 2009-08-07
- 8 2009-07-23
- 9 2009-07-15
- 10 2009-07-10
- 11 2009-06-05
- 12 2009-05-26
- 13 2009-05-18
- 14 2009-05-09
- 15 2009-04-22
- 16 2009-03-30
- 17 2009-03-15
- 18 2009-03-05
- 19 2009-03-01
- 20 2009-02-26
- 21 2009-02-22
- 22 2009-02-16
- 23 2009-01-16
- 24 2008-11-26
- 25 2008-11-19
- 26 2008-11-06
- 27 2008-10-28
Recent Changes
- Bugs changed: this week, this month
- RSS feed: weekly changes
2010-03-38
- Ease of use
- build.xml is now simpler. Sample project template updated. You can now use the same build.xml script to run a build in Eclipse or in Hudson from an Ant-based job. Note that the page [In Hudson/Ant Script is no longer current. See also bug 304800.
- Packaging Support
- bug 306300 Athena removes .jar files and only contains pack 200 files in update site - new default setting (keep both artifact types) is
removeUnpackedJars=false, but can revert to old behaviour (and smaller update site) with
removeUnpackedJars=true
- bug 307016)
- Testing Support
- bug 296352 Can't connect to vnc server - fixed using Xnvc option in Hudson job and improvements to testLocal task
- Publishing Support
- bug 302170 Work around Hudson's missing lastS*Build folders - promote.xml will now recurse into Hudson job tree looking for correct build to publish
- Bugs
- bug 304800 Temporary regression caused by adopting new build.xml script with too-aggressive cleanup default
- Documentation & Branding
- EclipseCon 2010 Presentation, "Dash Athena Exposed" is available here.
- Common Build Infrastructure/Getting Started and Common Build Infrastructure/Getting Started/Build In Eclipse updated.
- Sample project template & generic build.xml updated.
- bug 272723 Logo design contest for Athena under way: vote early, vote often!
2010-02-01
There are now 63 Athena jobs on build.eclipse.org, building 41 projects using 6 job templates (Bash/Ant, CVS/SVN, Nightly/Integration).
- Infrastructure Changes
- bug 287013 - Support generation of Helios .build files - see Common Build Infrastructure/Publishing/Helios
- bug 272991 - Support user-defined site.xml for p2 metadata generation in Update zip
- Cross-Platform / Ease of Use
- bug 294927 - Remove hard coding of /tmp directory
- getCompilerResults.sh is now getCompilerResults.xml, so that this information can be generated on Windows now too. No more perl and egrep, just loadfile, replaceregex, and linecontainsregexp.
- Bug Fixes
- bug 296326 - subprojectName should == projectName if projectid only contains 1 part (foo, not foo.bar)
- Documentation
- Getting Started - Build In Eclipse updated w/ more .releng project example links
- Testing/VMArgs How to configure VM args and define a custom library.xml for your tests
2009-11-27
- Infrastructure
2009-10-13
- Infrastructure changes
- There are now 43 Athena jobs on build.eclipse.org! Of those, 30 are green, 1 is yellow, and 6 have not yet been enabled. These jobs represent 29 different projects' builds! 6 of them use SVN sources instead of CVS.
- bug 257074 comment 12 build.eclipse.org now has SVN 1.6.5 on it; if your map files no longer work (your build complains it can't resolve plugin or features sources) then read this.
- Bug fixes
- bug 291446 Provide hook for extra actions after fetching code from repo and before compiling it (e.g. code generation, parser generators, etc.)
- bug 275529 Athena is now a full Project rather than a Component! Now if we could just get someone to design a logo... Do I need to offer up prizes?
- Documentation
- Tips for Building on Windows - community contributed! Thanks to Nicolas Bros!
- New category created for User-contributed build stories: Category:Athena Common Build Users
- Bugzilla updated to move
product=Dash&component=Common+Builderto
product=Dash+Athena; all old bugs moved too.
2009-09-01
- Bug fixes
- bug 284593 improved documentation and sample code for setting up a crontab script for publishing your project's bits.
- bug 285359 Add option to ignore existing test.xml file if it exists.
- bug 287240 Add more entries ${JAVA14_HOME}/lib/*.jar entries to J2SE-1.4 to support a wider audience
- bug 284968 SDK zips do not include sources - they do now!
- In the past, to install extra Orbit bundles into the base Eclipse platform before building, you needed a buildExtra.xml (and/or testExtra.xml), with a task like this:
<target name="getDependencies"> <mkdir dir="${buildDirectory}/../eclipse/plugins/" /> <get dest="${buildDirectory}/../eclipse/plugins/org.apache.xalan_2.7.1.v200905122109.jar" src="" /> </target>
- Now you can now install plugins (not just features) into the target platform prior to building. For example, to install orbit jars for which there are no features, you can add the following into your build.properties file. Note that the update site MUST BE A p2 REPO, not simply a "Classic" Update Site.
repositoryURLs= pluginIDsToInstall=org.apache.xml.resolver+javax.xml+org.apache.xml.serializer+org.apache.xerces+org.apache.xalan
- If you want to install features and their included plugins, simply use the
featreIDsToInstallproperty:
repositoryURLs= featureIDsToInstall=org.eclipse.emf+org.eclipse.gef
2009-08-07
- Ease of use / inter-build dependencies
- bug 285519 support static URLs for "latest build" zips in Hudson so builds can depend on each other's output -- set the
- bug 284959 hudson shell script should be completely generic
When launching your build via shell script in Hudson, use this instead of your own custom shell script. You'll get N-SNAPSHOT.zip instead of N200908071234.zip so that others can more easily depend on your latest stable build.
export SNAPSHOT="true" . /opt/public/cbi/build/org.eclipse.dash.common.releng/hudson/run.sh
- New builds
2009-07-23
- Ease of Use
- bug 283776 create dir2svnmap task to generate a map file from checked out SVN sources (use with build.steps=dir2svnmap in build.properties)
- bug 284055 Build seems not to unpack all dependencies - see FAQ
- bug 284516 make optional pre-build dependency unpacking and post-build packaging more optional
- Better testing support
- bug 284331 Support test plugins as jars, not just folders
- Better documentation / error handling
- New Getting Started FAQ
- bug 284509 warn users of possible linux + SVN problems when checking out
- New builds
- EMF Query, Validation and Transaction now have a 3-part build
- Ajax Tools Framework has been rebooted with its first nightly build - tests coming soon.
2009-07-15
- Improve signing recovery w/ better log output and better trapping for errors; also, using Ant-Contrib we can avoid the stack overflow issue w/ tasks that call themselves indefinitely.
- bug 280642 Improve recovery from infinitely long signing queue
- bug 254205 Signing should fail more gracefully
- Also updated GEF releng example to build from Eclipse 3.5.0 w/ latest map sources, individual source plugins, and these properties in build.properties:
flattenDependencies=true parallelCompilation=true generateFeatureVersionSuffix=true individualSourceBundles=true
2009-07-10
- Some minor bug fixes:
- bug 280747 testPluginsToRun property doesn't ignore whitespace
- bug 282593 Signing should not be default-enabled for non-eclipse.org builds
- bug 282910 start.sh help says one can pass projRelengName but the arg is not parsed
- Several non-Eclipse.org build examples are available, including maps and generated psfs (using map2psf build.step):
- Discussion starting re: different ways to build - using RCP vs. SDK, using default-specified SDKs for user's platform (rather than needing to be explicit in dependencyURLs)
2009-06-05
- bug 272991 Documented generating custom update site categories and hiding unwanted features.
- bug 279056 Added support for excluding some jars from packing or signing; solved issue w/ VE's "corrupt" uninstallable jars.
- bug 278687 Added a new optional build.step for generating Project Set Files (.psf) from PDE .map files.
2009-05-26
- Documented p2 repo and SDK zip input options.
- buildExtra.xml and testExtra.xml are now optional; by default all zips will be unpacked, all repos+features will be installed w/o explicitly needing to direct the build how and where
- bug 252774 improve support for CVS/SVN repo dumps so building from local sources is easier/smarter (to ease local setup when repo tree is non-standard): testing complete; released to HEAD.
2009-05-18
- Started Hudson Management Documentation. Includes HOWTO for restarting server to install updates.
- Started Jar Signing Documentation. Includes code sample.
- SWTBot builds added to Hudson
- bug 276574 generate default test.xml if not present in test plugin (to ease migration to Athena builder - less boilerplate crap you have to write/maintain)
- bug 252774 improve support for CVS/SVN repo dumps so building from local sources is easier/smarter (to ease local setup when repo tree is non-standard)
- bug 273518 Support building against p2 repos using both Eclipse 3.4 and new 3.5 versions of p2.director (work in progress in
bug272774-bug273518branch)
- Began testing Athena for use with projects outside the org.eclipse namespace.
2009-05-09
- Builds now using Athena in Hudson include: GEF, Linux Tools, Visual Editor (VE), Voice Tools (VTP), and Nebula:
- Wiki docs refactored to use Category:Athena Common Build category.
- Other closed bugs since 2009-04-22:
- bug 273299 index.php should link to to use mirrors
- bug 253277 Local build should warn/fail gracefully in the absence of specific tools
- bug 266374 design process for publishing builds from Hudson to download.eclipse.org
- bug 273302 Create crontab'able script for rsync/copy, fix perms, and unpack update zip
- bug 251926 support setting feature/plugin qualifiers to buildId (instead of map tags) when running nightly builds
- bug 272965 support packaging for namespaces other than org.eclipse.*
2009-04-22
- The 'buildZips' step is now optional; you can omit it entirely and still run your tests from the generated update site. This will also verify that your update site can be installed into Eclipse SDK + your runtime requirements using the p2 director (bug 272403).
- As with PDE SVN support and ant4eclipse, Athena can now fetch ant-contrib from Sourceforge if it's not already installed (bug 253277).
- Working on a way to input a custom site.xml so that your own categories will be used in the generated p2 repo / update site (bug 272991).
- Anonymous access to Hudson on build.eclipse.org is now available! Log in using your committerid for more control. To create a new job, you must be in the callisto-dev group (or ask someone who's in the group to create the job and add you to its ACL).
- Nebula and VE are now building using the Athena system (bug 237588, bug 270849).
- Other closed bugs since 2009-03-30:
- bug 272404 build.step "cleanup" should purge Master + All zips
- bug 252401 Make unpacking doc.isv optional
- bug 252030 standalone buildserver-in-a-box (vmware/vbox/qemu/kvm)
- bug 252049 Separate exploded SDK and sources to build
- bug 256212 verify default & extra packaging steps are producing correct zips
- bug 271145 Document build types
- bug 271211 optional lib folder in project.releng should be optional
- bug 271322 Need another bootclasspath location
- bug 271524 run (not runEclipse) task can't find ant4eclipse
- bug 271549 Should we be able to run tests w/ an archived p2 repo instead of an SDK zip?
- bug 271661 if map contains svn ref and pde-svn plugin not found in basebuilder, spit out a smarter error message
- bug 271849 Document requirement to have version variable in script launching build
- bug 272906 Features with fragments don't build properly
- bug 252035 Provide a p2 repo after the master feature is built
- bug 256192 build folder's index.php contains broken links
- bug 264579 define a better workflow for build steps
2009-03-30
- Had a great week at EclipseCon '09 talking to people about Athena, particularly during Hands-On: Using the new Common Builder for Push-Button PDE Builds. Slides (supporting material & exercise instructions) + exercises are posted here, which include sample properties files for building on Mac, Linux, and Windows. Two build projects are included along with one simple PDE feature build as an appetizer before the main course. :)
- Due to popular demand, we'll be adding support for simple product builds (bug 271186) using the Athena system. Anyone want to help beta-test it?
2009-03-15
- progress made on building locally in Eclipse on Mac OS X 10.5 and on Windows XP. Tests do not yet run on these platforms due to shell script and Xvfb/Xvnc requirements.
- prototyping work done using ant4eclipse to fetch org.eclipse.test and org.eclipse.ant.optional.junit from a .psf file so they need not be included in project's map file
2009-03-05
- progress made on building from local sources. It's a hack for now, but it's workable (bug 252774)
- working on a virtual server image on which Hudson and Eclipse can be run for testing
2009-03-01
- verify build can run in Hudson via shell script
- verify build can run in Eclipse via build.xml (bug 253269), or commandline via start.sh, including signing
- refactor relative paths to absolute so build can be configured to run in more ways
- verify alternate path options: /tools/gef/d/d/$v/$bid/ vs. in /$bid/; /opt/public/cbi/build vs. /tmp/build
- verify cleanup step actually works for eclipse/ and testing/ folders, locally and in Hudson
- bugzilla cleanup (previously completed items are now closed); currently 27 closed Athena bugs, 39 open
2009-02-26
- Getting Started Guide updated with details about running in Eclipse, running commandline, and running in Hudson.
- signing works when run from local build (headless, in Eclipse) and Hudson build
- build directory path is more configurable than ever, with (hopefully) no more relative paths
2009-02-22
Numerous items in progress this week thanks to help from Andrew O and Bjorn F-B, including:
- test running build system on Mac OS X (problems with bootclasspath & JVM still to be solved)
- Getting Started Guide now includes information about running in eclipse (bug 252041
- run build in Eclipse / convert start.sh to build.xml (bug 252028, bug 253269)
- it seems that the new branch won't run in Hudson, so this is still in progress
- investigated SVN tagging and SVN change tracking in Hudson; the former does not seem to be working, but the latter is
- reported Hudson problems w/ CVS checkouts and use of Xvnc for UI testing (bug 265750, bug 265751)
2009-02-16
- Defined a better way to control flow of build steps, so ant targets "runWithoutTest" and "runWithoutBuildTest" are now obsolete (bug 264579)
- Improved startup process for build. Migrated several chunks of bash script to Ant (bug 253269)
- Started working with Hudson on build.eclipse.org for controlling and administering builds' logs and artifacts (bug 251945)
- Installed several new plugins and got server updated to latest version (bug 264778)
- However, Hudson cannot currently sign jars (bug 251945)
- See also next week's workshop plans
2009-01-16
Due to other commitments, not much is new.
- Linuxtools is now running scheduled builds using Dash Athena CBI via crontab
- Building with PPC Eclipse, GEF's automated JUnit tests PASS. See bug 253114 for followup problems on another server.
2008-11-26
- Build System Test Cases document started to capture tests, invocations, and problems
- Getting Started Guide drafted
- Tests (for GEF project build) are working with some success locally, but fail with "Java Result: 13" when run on build.eclipse.org
java.lang.UnsatisfiedLinkError: no swt-gtk-3448 or swt-gtk in swt.library.path, java.library.path or the jar file
- More refactoring and simplification (less shell, more Ant)
- System now uses ant-contrib for looping and conditional processing
- bug 256379 opened to optionally clean cvs exports out of the log (using <cvs reallyquiet="true">)
2008-11-19
2008-11-06
- SVN support implemented in setup script (to get svn-pde-plugin into the cached copy of basebuilder), startup script (to get a .releng project from svn repo or cvs repo), and generated fetch scripts (map files can specify cvs and svn entries interchangeably) (bug 251923).
- clean up customTargets.xml (for both ALL and Tests builders) (bug 251879).
- Support for archived update sites and SDK zips as input to build :: in progress (bug 252423).
- Testing code refactored and almost working for GEF.
2008-10-28
- Server config simplified on build.eclipse.org; documented and scripted for reproduction on localhost. Testing through cygwin on Windows TBD.
- basebuilder.releng from both 3.4 (tag: RC2_34) and 3.5M2 (R35_M2) checked out on build.eclipse.org; builds work equally.
- GEF build on build.eclipse.org and localhost has succeeded in producing GEF "Master" zip, as well as SDK, runtime, and examples.
- Signing fails, but can be skipped by running an N build.
- Test assembly fails; test run fails because tests are not yet built. builder/tests/customTargets.xml needs to be updated/fixed.
- Creation of Update site, p2 metadata, site digest, etc. still TBD.
- Web UI to run builds has been decoupled from the build system in case we decide to use Hudson or Cruise Control instead. Paths in web UI still broken (expect /gef/ but have /cbi/gef; downloads page shows builds on download.eclipse.org, not local builds on build.eclipse.org)
--
- To set up a build, run this script.
- For example: | http://wiki.eclipse.org/index.php?title=Common_Build_Infrastructure/Athena_Progress_Report&diff=194204&oldid=194203 | CC-MAIN-2016-26 | refinedweb | 2,902 | 54.73 |
Feedback
Getting Started
Discussions
Site operation discussions
Recent Posts
(new topic)
Departments
Courses
Research Papers
Design Docs
Quotations
Genealogical Diagrams
Archives
Apparently Knockout is old news, and I'm behind the times. But: it is a Javascript library that apparently gives declarative binding, automatic ui refresh based on that, including dependency tracking. All written in 29k minified Javascript I guess. Scary. I liked this overview.
Very nice. Sounds like JavaFX and my own Bling project. I hope declarative binding takes off. One of the reasons I was disappointed with Silverlight is that they never supported fully general declarative binding.
Very appealing, I concur. Coincidentally, I'm going to need the help from this kind of thing for a project to begin shortly. So, thank you for sharing about this.
The number of high quality JS libraries is somewhat astounding, with new ones popping up almost daily it seems...
Amazing how a simple scripting language designed for non-programmers to manipulate DOM objects in a web page has tunred into the de facto general purpose library language for the web (both client and server - check out what's going on with node.js and the libraries forming around it...). There's also interesting work going on with JS AST shaping. See for example JSShaper and burrito
It is true that many high quality libraries have been developed. And node.js is nice because people don't like learning more than one language.
But I get the impression that this has happened mostly because developers do not have other viable options. Flash and Silverlight and applets tie developers to a puny little rectangle and put a large rendering/layout burden on developers, so that's no fun. I don't believe NPAPI or NPRuntime plugins currently allow us to extend the set of script types supported by a browser.
Someone could, of course, run off and create their own new browser model, and some web-services for it. But it's a lot of work to catch up to what browsers already accomplish today, especially if efficiency is a concern, and competition is fierce. It would take a real disruptive technology to succeed that way, which doesn't leave much room for incremental improvements... except via well-written JS libs.
We may be "stuck" with JS as the de facto in-browser programming language, but I don't think that's the case... It proved it's worth more than we were just not given any other options. The flexibility of JS enables one to build higher level abstractions or "languages" on top of it - JS becomes, as Erik Meijer says, a web assembly language, a compiler target. Look at CoffeeScript, for one example of something-higher-level-to-JavaScript. CS removes the rough edges of JS (and all the curly braces!), adds simplified higher level abstractions, etc. It compiles to JS, which is the "machine code" of the web in some sense.
What's astounding to me is that JS is BOTH a powerful and expressive human-composable language AND an efficient compiler target (though, it is far from perfect in this context... Humans of all skill levels need to be able to write JS programs effectively - this was Brendan's primary goal when he designed the language - so adding things like GOTO to make it a better compiler target would be counterproductive and just a bad idea...).
JavaScript is the language that just keeps on proving useful in astonishing ways or in contexts that make us go "wow... it does THAT, too?!".
Nice job, Brendan et al.
JS has many outstanding properties that make it remarkable as a web assembly language:
But, even with these properties in mind, I think we could do better if we were to design a language for purpose of web assembly - and not just for the compiler; the humans can also benefit. For one example, see Curl initially from MIT. I don't agree with all of Curl's decisions, but many of them are well justified.
I haven't heard of Curl. Thanks for the link. Interesting, though it represents yet another mark-up to learn (HTML is already the capable, standardized mark-up of the web...), which is OK, but if you are proficient with JS today (and HTML), then you can apply it to a wide range of web programming - browser, server and in Windows 8, desktop applications (HTML5 + JS), as revealed at this year's D9 conference by Windows executives Sinofsky and Larsen-Green).
I agree that there is plenty of room for the advent of a better language (maybe it just needs to be a more modern, object-oriented JavaScript? People seem to like classes and the like...) to program web application logic that's guaranteed to run in any modern browser host (it just needs to compile to JS...).
CoffeeScript is one example of SomethingSimplerOrCleaner-to-JS (there are others and there will be more...). Doug Crockford made an excellent suggestion in a recent conversation: Why not take all the CoffeScript-like implementations, throw them together on a stage and let's vote on the best one; essentially a beauty contest for languages that compile to JS and hide its sharp edges/bad parts, add more OO concepts, etc. I think he is spot on. This is something that should happen!
Now, if JS is to actually become a standard Web Assembly, then the TC39 folks will need to add this problem space to their current list of work items for serious consideration. What does the language need to make it an industrial strenth, reliable, efficient web assembly? How will these additions impact the language for general purpose use? What are the trade-offs? Are they worth it?
I've spoken with some of the TC39 folks about this and either they don't agree at all that JS should be used for this purpose or they feel that adding new constructs to the language to support JS-as-compiler-output will add new (and potentially dangerous or counterproductive) complexities higher up the stack for web developers who can today write robust web apps powered by JS without understanding the language (and the sharp knives) to the degree folks like you do...
JavaScript is supposed to be, and I'd argue in fact is, the language that democratizes web programming for the masses. JS is the only language I know of that can wear two very different hats simultaneously (high level language for general purpose composition and efficient compiler target for consumption by the web (virtual)machine).
Really interesting times.
JS is hardly unique in the role you name, if you include the many various other 'web machines' out there. Cf. Croquet project, Second Life, Oz/Mozart, distributed Alice, various cloud compute services (many of which use Python) and mobile agent languages, et cetera.
JS just happens to be the de-facto scripting language for the most popular class of web services, and thus better known than the others. And it certainly isn't the only language that could fulfill that role, though it now holds an incumbency advantage that can cow most competition.
What does the language need to make it an industrial strenth, reliable, efficient web assembly? How will these additions impact the language for general purpose use? What are the trade-offs? Are they worth it?
I've been enamored with open, federated, distributed systems programming for years, so I have a lot of advice to offer about what not to do and one very promising model on what I think we should do. But I think most people asking your questions won't be able to hear me. Too many people desire easy answers more than valid ones.
I don't think the other examples you share are the same, though. Which of these have had a democratizing effect at scale (none of this really has to do with which programming language is better designed - that doesn't factor into the mass success equation to the degree that language designers hope...)?
I don't think anybody could argue that SmallTalk is a language for the masses. Well, at least this has been proven empirically by the masses...
Theoretical validity doesn't really matter in this case does it? Users make programming languages successful, not programming language creators, implementors, experts. Well, yes... Corporations push the hardest and have a great deal of impact, but no corporation pushed JS as anything more than web page scripting technology. Now look where we are. I wonder what users wil discover next about the "little language that wasn't supposed to"...
I do not believe JavaScript has any 'democratizing effect'. Rather, I believe you misattribute the cause. If Brendan Eich chose SmallTalk for the Netscape browser, that's probably what you'd be gushing about today.
Also, most users are trapped by the system. Saying that 'users make languages successful' connotes that users are making an informed decision against other viable options. I think it would be more accurate to say that the success is self perpetuating, which is a common property for many platform technologies (gas, electric, roadways, gaming consoles, internet protocols, programming languages). It's difficult to directly compete with a large system - it takes a disruptive technology, a transition strategy, and a healthy dose of rage against the machine.
I agree that language design, theoretical validity, even outright in-your-face proofs of superiority are insufficient for success. They all pale in the face of market forces and inertia.
Brendan choose a syntax that was in vogue at the time, C-like, just like Java, which JS has no real relation to (what a bad name you chose, Brendan... :)
I'm not a JS pimp, but rather come from a background of actually disliking the language. I never took it seriously when I was professionally programming, but did enjoy it's ease of use when having to code in it (some of my work in Windows XP was written entirely in JavaScript...). Was it weird coming from a "real" language like C++? Yes. Yes it was! Did I find myself dumbfounded by certain aspects of the language? Yes. I still do, in fact. That said, perhaps Crockford has successfully brainwashed me, but I don't think so. What he really did was lead me to further investigation and experimentation with the language and it's left me impressed (and sometimes confused).
Regardless of the language's accidental-forced success it's an interesting language with some great characteristics that make it suited for the web and the varied skill levels of human web developers. It's these very attributes that are pushing the boundaries of JavaScript into new territories, outside of the browser, the DOM and web pages.
Today, I spend more time writing C++ code than JavaScript (I'm actually not affiliated with web programming in my day job - I pimp languages like C++ and C#, professionally), but I still believe it is a more advanced language than it was initially designed to be, weird as that sounds. Who knows, maybe Brendan is even smarter than we think he is.
We can agree on one thing for sure: JavaScript is here to stay. Reach wins every time. There's no substitute for adoption. And, it's a language deserving of respect from folks like myself who used to make fun of it. It's got mine. That's for sure.
When you say 'JavaScript has some great characteristics that make it suited for the web', what are you comparing it to? I would certainly agree that it's less painful than C++, but I sort of feel like you're comparing a sledgehammer to a ball-peen hammer when the problem calls for some twine and a gluestick.
I'm not inclined to agree that any imperative, insecure, non-concurrent, non-incremental language is 'suited to the web'.
I wasn't comparing JS and C++...
JavaScript is also functional and object oriented (prototypes). It has warts, as I said. It's not a perfect language but it is the language of the web as a matter of fact. The addition of strict mode fixes many of the blatant security/reliability issues you allude to. More work will be done here. Suited for the web? Again, I think the web has proven this to be the case...
JavaScript's highly dynamic, functional nature also makes it a good fit for web programming. Yes, the single-threaded view of the world is unfortunate. I doubt it makes sense to add threads to JavaScript, but you can imagine an async/await model (like C# 5 will have) added to the language at some point. This would obviously be very useful for responsive UI-based programming over distributed data sources, which is what the web client world is all about... Not sure what to say about concurrency and JavaScript. You'll need to ask one of the JS language curators.
JavaScript is not a good fit for web programming. But it takes a study of alternatives, and a long list of failed desiderata, to understand that this is the case.
Please refrain from further arguments of the form "it's pervasive, therefore it's effective". Unproven, inefficient, or even harmful systems can easily be widespread and self-perpetuating - consider war, prejudice, tobacco, religion, the American health system, and JavaScript.
When I say I want concurrency, I'm not suggesting that we 'add threads to JavaScript'. It does seem that 'use strict' will patch a few of JS's more obvious vulnerabilities, but there is still plenty to be done.
My perspective is one of pragmatic experience and looking around at what's happening in the real world (all the JS libs, the novel ways it's being used to solve a wide variety of problems, unexpectedly). I agree that adoption != effective or perfect (as I have said a few times in this discourse - it is far from a perfect language, yet it continues to prove capable).
It's clear that both JS and C++ are not among this group's favorite languages :) Sean, I can't agree with your representation of C++. Let's leave it at that.
I won't argue with you (I can't really. I'm just a user, somebody who uses these tools to build things. I also like them.)
This has been an enlightening and fun conversation. JS (and modern C++ for that matter) has much growth and evolution in front of it. We'll see what happens. I hope language folks such as yourselves get involved since the likelyhood of the advent of some new language for the web is pretty small. JS should become better and also not break the web as a consequence. It's a tricky problem and this is where the language designers come in.
Your 'perspective of pragmatic experience' is extremely biased: you only see success. That's all you'll find by "looking around at what's happening in the real world (all the JS libs, the novel ways it's being used to solve a wide variety of problems, unexpectedly)". Cost, failure, and lost opportunity are simply not on your radar.
An accounting of costs, failures, and lost opportunities would require you severely lower your estimate of just how 'capable' JavaScript is proving.
We could certainly do worse than JavaScript, but that doesn't mean it's well suited to its purpose. We could do a lot better.
Please see my
blog. Also this hacker news comment.
In the last, I suggest that perhaps the syntax chose me, as much as I chose to learn C, C++, and early Java, or to work at SGI (very much my choice getting out of UIUC in '85).
I could not have chosen Smalltalk. History has reason and rhyme as well as chance, it is not all and only random. For my part, there was little "arbitrary" in what I did, including the mistakes -- some of those weirdly recapitulated early LISP mistakes.
JS's influences: Self, Scheme (barely), AWK, HyperTalk, Java, C.
/be
I agree. We should not be quick to pin blame (or credit) for decisions shaped heavily by external pressures. And it is only as of 1997 that we truly began to develop programming models that support a more declarative approach to dynamic UIs and rich internet applications.
However, do you have any good reasons that JavaScript was provided in the core rather than shifted to NPAPI? Even now, we have a MIME field in <script type="text/javascript"> that is pretty much unusable. Use of <object> for the newer NPRuntime does allow a plugin some limited access to the DOM these days, but we still lack good integration with different parts of the page.
<script type="text/javascript">
<object>
It seems that decision has managed to stifle any real competition.
Even now, we have a MIME field in <script type="text/javascript"> that is pretty much unusable. Use of for the newer NPRuntime does allow a plugin some limited access to the DOM these days, but we still lack good integration with different parts of the page.
I agree and I often wondered about the same. I did also often regret that the so-called "script components" and "behaviours"-related scheme of things never had enough success/didn't attract enough attention for it to leave that unwelcoming realm of proprietary Microsoft-specific extensions, more specifically in scripting-with-rendering cooperation(*) in the browser, for this matter.
((*) and yes, ideally, I suppose, while also keeping the option of the scripting be done in Javascript or... something else)
There's nothing unusable about the script element's type attribute. See RFC 4329, and also note how people write <script type="application/python">...</script> and dig out the code to interpret using, e.g. Skulpt.
What's more, no off-the-shelf language engine has distribution, web-facing safety properties, or DOM integration such that any browser would just load, e.g., C-Python as a DLL or DSO due to such a type attribute value. That would be the road to security ruin, for users who had the DLL (on Windows and Mac, most won't).
The <object> tag came in 1998 with HTML4 (Dave Raggett invented the unregistered text/javascript type there, too). It did not exist in 1995. I invented script because the JS vision (marca and I agreed on this) started with inline scripts -- code in the page. That vision required a CDATA content model and rules about fallback that <object> lacks.
Again this is not just bad history. <object> is a failure relative to <script>, and for good reasons including security, opacity, and other costs of native-code plugins and the hairy, hard-to-evolve NPAPI. Not to mention the EOLAS patent.
I don't think you realize how hard it is to standardize something like JS. There's little energy and no economics among browser vendors to do a second language, even abstracted by a plugin API.
NPAPI is a big interface, and with custom, per-browser integration, one could do such a script engine extension mechanism. But the browser vendors cannot agree on one native-code interface standard expressing all the DOM and browser APIs. Google tried with Pepper, and failed. Apple will never follow that, nor will Microsoft.
/be
RFC 4329 describes four options as of 2006 (ten years after your initial work), all of which refer to the same language, and two of which are obsoleted: text/javascript (obsolete; also the default in HTML5), text/ecmascript (obsolete), application/ecmascript, and application/javascript (which IE still doesn't support).
text/javascript
text/ecmascript
application/ecmascript
application/javascript
Skulpt is hardly a positive example. Sure, we can interpret Python in JavaScript. But that comes at the cost of an extra layer of interpretation, and works moderately well only due to similar semantics (e.g. with respect to concurrency, dispatch, state, side-effects). The fact that the <script> field is used there is almost incidental.
Since you call this 'usable', I would hate to see what you call 'unusable'.
We will have practical support for multiple scripting languages if ever we can put Curl, E, Oz, LabView, or whatever into both script and various event-handler fields, without sacrificing the safety, performance, parallelism, security, et cetera advantages featured by these languages.
no off-the-shelf language engine has distribution, web-facing safety properties, or DOM integration such that any browser would just load
There are many safe language engines, even at the time you wrote JavaScript. And DOM integration is not a very difficult problem - it is easy to model the DOM, and manipulate that model, from almost any language. Browser integration has been the real stopper.
I don't think you realize how hard it is to standardize something like JS.
In this case, I believe it was the DOM that needed standardizing, not the scripting language.
Imagine we did not standardize on JavaScript. Would we still have de-facto common scripting languages that work in every browser? Yes, we would. But a site that needs to provide a richer experience would be able to provide a few plugins to major browser vendors.
The <applet> tag was the predecessor to <object>. The idea of 'applet' was to give some code a sandbox to play in. The idea of 'script' was to give some code the whole page as a sandbox to play in. If a different turn was taken back in 1998, these two concerns might have been reconciled.
Skulpt is young. Compiling and even trace-compiling JS eliminates double-interpretation costs on current optimizing JS VMs. It can even yield nearly native performance.
Your complaint about the script type attribute was misstated. You really meant "why didn't all browser vendors provide multiple script engines?" (No extension API could credibly work without multiple engines in multiple browsers under test by the vendors.) The answer to that question, I already gave: because doing so is enormously expensive, not only in one-time costs.
Contrary to your assertion, there were not any cross-platform (remember Windows 3.1?), open source, web-security rather than Unix-command-security ready language engines in sight.
C-Python of the time was strictly less memory-safety hardened. It relied on an unsafe FFI. And it was poised to evolve dramatically, so any browser-embedded version would be a forked fly-in-amber. Same situation for TCL and Perl.
You may now cite an obscure language that might have made it through the gauntlet, but the reality was not only a paucity of candidates, but high costs to implementors, and on top of that, the convicted monopolist bundling IE with Windows 95.
If JS were not "on first", we would not have a multi-vendor extensible script engine standard. We would have VBScript.
I agree with your replies to Charles stressing that we do not live in the best of all possible worlds. Dr. Pangloss aside, in the real world, path dependence in networks does breed monopolies. Netscape was one but it underinvested in the browser after Netscape 2 and thereby helped MS to the market.
The new "better is better" hope for multiple languages is NaCl. It was the driver for the Pepper API that failed by being too chromium-specific. Hope springs eternal, but IMHO the C/C++ toolchains will have CFI enforcement for portable safe binary code sooner than the browser vendors will agree on NaCl and Pepper2 or 3.
The "worse is better" hope is compilation to JS combined with language and VM evolution. That is my bet. And I am putting money on it at Mozilla.
Compiling and even trace-compiling JS eliminates double-interpretation costs on current optimizing JS VMs. It can even yield nearly native performance.
I agree that it is possible to leverage runtime specialization, tracing, staging and the like to achieve near-native performance for a tower of languages.
Unfortunately, JS makes a poor foundation for such a tower.
Among other problems, JS is not well designed for cached compilation of dependencies due to issues of namespace shadowing, global state, and load-time side-effects. If I go to a site and it loads a 100kB JS library, even state-of-the-art browsers will re-load, re-parse, and potentially re-JIT that library once per page I open, with corresponding space and time costs.
The JS benchmarks that seem to approach 'native performance' simply do not reflect concerns for snappy performance when loading a new page.
And there are many performance opportunities that are simply not addressed by JavaScript. Brokering of web services could put more processing on the client, but is undermined by JavaScript's single origin restriction, which is from its 'web-security' issues. Leveraging parallelism or heterogeneous memory (CUDA, GPGPU, DSP, FPGA) is also feasible, but hindered by use of shared state in the global namespace and the lack of standard collections-oriented operations.
The techniques developed to eek performance from JS have potential to apply even more effectively to languages designed to discover invariants suitable for optimization.
You really meant "why didn't all browser vendors provide multiple script engines?"
I did not. But I do respect your concerns that VBScript would be "on first" if JS was not already there. (shudder)
The new "better is better" hope for multiple languages is NaCl.
NaCl is a promising technology for supporting new languages, but I've yet to experiment with it or form a solid opinion on it. Hardware virtualization is another useful technology in the same vein.
I do prefer the JS approach of compiling/interpreting the code on the client side.
My 'dream' involves declarative automated sharding and code-distribution of fragments of a secure language, albeit one much more optimizable than JS. I would also like to trade out HTML for a document model better suited to zoomability, accessibility, CSCW, continuous real-time, multiple views, mashups, transclusion, flexible user input, disruption tolerance, augmented reality, and other nice properties. And I am making fair progress on it.
"JS makes a poor foundation...", "... global state, and load-time side-effects", "... will re-load, re-parse, and potentially re-JIT...".
You write as if nothing is changing, but JS is part of the evolving client side, and since it still has first-mover advantage, it is in fact evolving to address every one of these concerns. It's easier to evolve than to start over.
ES.next is based on ES5 strict mode, and eliminates the global object as top level scope. It has lexical scope all the way up, so free variable references are early errors. Developers opt in via script type="...;version=..." and an in-language pragma.
ES.next also has a static module system that caches module instances and prefetches all static dependencies. (Even now, thanks to Steve Souders and others, developers know to avoid side effects and global collisions, but by convention; ES.next modules make this foolproof.)
Source, parse tree, bytecode, and JITted code are all cached in leading edge browsers, and content-addressed caching across origins is under way. Nothing in JS today, never mind ES.next, prevents this kind of caching, and browsers are doing it.
As for data parallelism, WebCL (the low road) is coming fast, and we are working with Intel and NVIDIA on higher-level functional APIs based on immutable typed arrays with the usual combinators, which enable compilation (JIT or AOT) from JS to OpenCL or better.
Yes, starting fresh would be nice. It's not realistic in the current market. If a new monopoly arises, possibly -- but even then shortest path may prevail.
Meanwhile, right now JS evolution is accelerating.
I have very little faith in language evolution, except to the extent it is achievable from within the language (via libraries, frameworks, ajustable syntax). I'm not saying it's impossible, just that I've heard such claims then been let down too many times.
Starting fresh is undesirable, I agree. Any new technology should integrate well enough with the existing systems, e.g. via mutual embedding.
If you fix the performance issues, JS will be better as an 'assembly' language. Just remember that this means caching not just the code, but also intermediate results (such as from Skulpt compiling Python into JavaScript).
Lots of languages evolve successfully. In fact, basically every language in use today has undergone substantial evolution (Java, Ruby, C, C++, Python, Scheme, Perl, JavaScript, C#, Fortran, Scala, OCaml, ...).
In particular, the future evolution of JavaScript is described here:
If you have feedback on anything specific, please do let us know.
If I look only at the successful changes after they occur, that hardly qualifies as 'faith'.
I'm not sure I would qualify any of those languages as having undergone 'substantial' evolution, though perhaps I just measure against a much wider scale for how different languages could be (IMO, Ruby, Python, and JavaScript are nearly the same language, modulo libraries and community). Substantial changes tend to break code or require new idioms. But I do benefit from the minor improvements that offer convenience, performance, or discipline for something developers were previously hacking.
If you improve JavaScript to a degree where I can maintain proofs about security, safety, consistency, performance isolation, and nearly maintain bounded-space and real-time properties of the 'substantially' different languages I might compile to it, then I can be satisfied with that. I am happy to see Mark Miller's influence on the proposals.
Why JS became the web assembly language of choice is purely accidental. It wasn't designed as such, it was just supposed to be a way of building richer web pages in lieu of using Java. But here we are, and look at all the work going on in industry and research to patch JS up so that it is more suitable...its actually quite amazing and somewhat sad as all those cycles could have been used elsewhere.
The ideal web assembly language would be secure, portable, flexible, ... maybe something like JVM or CLR bytecode but of course we could do much better.
One could argue that any successful (at scale) language happens upon success "accidentally". JavaScript is no different in this regard than Java, C, C++, Ruby, PHP, etc. The better designed language, the perfect language, doesn't mean anything in realistic usuage context.
Humans play a far more important role in determining language success than the design of the language - we don't give users enough credit. In fact, it is the unpredictable and chaotic human part of the success equation that determines a language's success at scale.
JavaScript was designed for non-programmers, yet it is used today as a compiler target, annointed a web assembly, de facto language for web page logic, used for writing http servers, etc. Astounding.
There is no way Brendan Eich designed the language with anything besides simple web page programming in mind. This is why he was able to implement the first version in 10 days...
The big accident here is that humans took the language and ran with it, pushed it into areas that on the surface seem insane - yet the language consistenly proves itself useful in new contexts. So, accident or not, JavaScript will most likely become even more used (and useful) in the next 10 years. It needs more attention from the language community than it gets today - the TC39 people have a large responsibility to ensure they don't mess it up now that it has proven worthy of shedding it's web scritping clothes. I believe it will.
The accident is that JS was not designed to be a web assembly language, and in fact is fairly unsuited to being a web assembly language. But JS was pervasive, and so it became a web assembly language inspite of not being suitable for it. Rather, we (mainly Microsoft, Google) invest lots of money in making JS suitable via clever engineering.
JS wasn't designed for non-programmers, it was designed for low end programmers in much the same way that VB was. The semantic difference is very important. One of the crazy things that JS did well was borrow a lot from Scheme and Self, hence it has great extensibility and you can define crazy dynamic-meta-style libraries for it. This made it easier for programmers to use and extend, but ironically that also makes it even more less suited as a web assembly language.
JS is really in danger of becoming the next C++: a widely used language that is universally shunned by language enthusiasts. Much like C++, it will get more attention from system researchers than PL researchers given its incredible system challenges, as well as compiler people looking for good perf challenges.
Today's C++ is very different than yesterday's... Which C++ do you mean?
Today's C++ or yesterdays, they are both still in wide use. C++ is still an incredibly complicated and dangerous language. C++ is still a perennial pariah for language researchers, an example of how not to do things. But then C++ basically guarantees full stable employment for tool researchers, while companies like Coverity can earn major bucks on "solving" C++ problems.
Dear Sean,
I can't agree with your characterization of C++. Modern C++ is not terribly complicated (yes, it's very big, there's too much ceremony sometimes and it can be very unwieldy in certain contexts. C++ can be really hard to debug(nested templates anyone?) and you can, as a user of the language, use it in dangerous or unsafe ways.
This danger shouldn't be blamed on the language itself - it doesn't write it's own bugs, after all! We developers do that, and we do it very well sometimes... You can write type safe C++. You can write code that won't pollute the ambient environment. Nothing forces you into the sketchy world of C casting, for example (re type safe C++). You can write highly structured, object oriented, highly generic C++ as well. If you choose to do so.
C++ is just a tool. How you use it is really up to you (you can write the most dangerous code in the world, but you can also not write the most dangerous code in the world...).
Don't hate the player. Hate the game. :)
I hear it is common in an abuse scenario for a victim to blame himself instead of correctly attributing the cause to the perpetrator or environment. We should be prepared to blame our tools, our languages, our development systems when they are inadequate, inefficient, unsafe, or have other systemic issues.
See my reply to Andreas.
Charles, unfortunately, you are very mistaken.
Modern C++ is not terribly complicated -- It is. It is the most complicated mess of a language ever put forth. I don't know anything that would even come close.
You can write type safe C++. -- You can't. Just about every relevant feature of C++ is inherently unsafe. Casts are just the tip of the iceberg. There is no useful subset of the language that would be safe.
This danger shouldn't be blamed on the language itself -- It should. Seriously. Just like you would blame it on the car if you have a terrible accident because the breaks are not working on Tuesdays, and it has the cigarette lighter installed next to the gas tank, which is an open tub under the driver's seat (so that it can be refilled more quickly).
Andreas, unfortunately, you are very mistaken. You need only look to the intentionally obtuse languages such as INTERCAL or Whirl to find messes even more complicated and error-prone than C++.
So you should at least offer this glowing recommendation of C++: they could certainly do worse if they tried.
I learned valuable lessons from studying just how terribly awful language design can be - lessons that give me strength and hope as a language designer. The lessons are: The space of meaningfully distinct languages and programming models is vast, subtle, and largely unexplored. The most useful features come from what we eliminate from our languages, rather than what we add to them. If we can build a language worse in every way than Blub, we can probably build a language that is better in every way. All we need to do is see it.
These languages are perhaps messier, but I doubt that their creators accomplished creating something more complicated than C++.
Why did you choose C++ to implement V8 if you hate it so much? Does Lars feel the same way? You could have implemented it in some other powerful systems language. Why didn't you?
I understand that C++ is a language that you either love or hate; there is no middle ground. It's a language that makes it super easy to blow your arm off, shoot yourself in the foot and upset users when memory buffers overflow and the digital mob takes control of the users' systems... In all of this, however, it's the developer who is at fault and not the tool. If you pound a hole in your living room wall when you decide to hammer a nail to hang a picture in an area of the wall that is just sheet rock and you swing the hammer as hard as you can, do you blame the hammer, the nail, the wall, the picture or yourself?
The fact remains that when you want to build highly performant, low level systems using high level abstractions (or not), C++ is the overwhelming choice - in the real world. Why is this the case if the language is as terrible as you assert? Your car analogies are somewhat off base: the car's design is not dependent on semantics of C++, though it's implementation is. You can't blame C++ if, for example, you design an algorithm with unchecked buffers. The problem is with your design and how you implemented it using a tool that enables you to directly allocate memory to accomplish your computational tasks... C++ is not the problem. Bad design and inexperience are - both correctable human problems.
You don't like C++. That's fine. Millions of developers do. That's fine, too.
Just because another language exists does not mean it is a viable option. When choosing a language for a component of a system, one must consider: what is the rest of system written in? What other libraries are we integrating? What's the toolchain for getting everything compiled? Can we afford the configuration management and portability concerns of adding yet another language to the dependencies? How will it affect IDE integration and debugging? How many other developers on my team know or are willing to learn this language? (Will the developers brave their fear of the unknown?) What is the downtime and warm-up time, both on the developers and the toolchain, for switching to the new language?
Developers rarely have the privilege of choosing the best language for a job, Charles Torre. The tooling decisions we make are driven more heavily by circumstance and inertia than by suitability for the purpose.
Having been a pro dev earlier in my career (working on large projects like Windows), it's certainly true that there was no option for developers to pick and choose languages and toolchains... This was even true on smaller projects (like web sites or web services). You are right -> Circumstance and inertia are hard to overcome and in some cases it just doesn't make technical sense to try and force a new model into an unrelated world full of incompatible code (like shoving .NET into the Longhorn OS - it just didn't work...).
C++ doesn't need to be replaced. It will continue to evolve to meet the needs of modern times... It's a great tool for a number of unrelated jobs. I just don't agree that it's fundamentally a bad programming language or a terribly designed one. It just doesn't feel that way to me. My comment about picking a different language for V8 was less than serious and was an emotional response to the C++ bashing commentary... Obviously, the rest of the Chrome browser is written in C++ and runs in operating system user mode environments also written in C++ and the team of engineers are all seasoned C++ developers... The C++ compilers and debuggers are mature. The libraries are tried and true (and standardized in some cases).
Interestingly, this all applies equally to the topic of this thread: JS. Let's say that the best possible language emerges that is perfectly suited for the web. Easy to use. Powerful. Concurrent. Safe. Modular. Dynamic. Functional. Easy to build productive tools around. Performant. General purpose. Capable. Will it work with legacy JS code and state of the art design time tools? Will you be able to compile it down to JS so the JSVM wizards don't have to reengineer their machines? Do we expect a billion web pages to be re-written in this new language to gain all the benefits it provides for both developers and web application users? Of course not. Even in this hypothetical scenario JS will still reign supreme as a result of massive inertia. You are correct.
A badly designed tool remains a bad tool, even if you think you should blame me for using it wrongly. A good tool significantly reduces the risk of me having an accident, and supports and encourages good use.
As for your other points: choice of language isn't primarily driven by its technical merits -- David made some good points. And popularity certainly isn't a technical argument either (though unlike you, I don't seem to know many developers who have much love for C++). Re V8: I'd argue that VMs are a bit of a special case anyway, because you are doing inherently unsafe stuff.
I'd argue that the only things inherently unsafe are things that have some chance of crashing the program, corrupting data, etc. Since you presumably don't want your VM doing those things, I think it'd be better to replace this phrase with "stuff that's difficult to prove safe." Inability to implement tricky low level stuff in a provably safe way is a language limitation. </pedant>
Point taken, but I would still call implementing, say, a runtime code generator "inherently unsafe" because proving it safe is so utterly unrealistic -- especially with all the massively dirty tricks an engine like V8 has to do to get some performance out of JavaScript -- that by all practical means it becomes indistinguishable from impossible.
Which provably safe languages are suited for the low-level problem space of directly programming the machine (the "tricky low level stuff") besides the usual suspects in wide use today?
Illusion of choice is not an illusion at all if there are no viable alternatives.
BitC? Actually, Shapiro was working on Midori for awhile, but that didn't last long...
Your point is well taken. We choose C++ not because its the best language, or even a good language, but the only language suited to the task.
Adam Chlipala has done a lot of interesting work on this subject, such as verifying compiler optimizations and the Ynot project for validating side-effects.
I hope most of our 'safe' high-performance low-level code will eventually be written by high-level code. We achieve this by targeting an 'abstract machine' that (not coincidentally) happens to match the real one.
I would suggest ATS as a potential candidate, though I find the added complexity of statically formalizing this low-level stuff a bit off-putting.
Curiously, ATS has not be thoroughly discussed on LtU, though it has been mentioned a few time now.
In UX, there is this common saying that there are only bad designs, never bad users. This doesn't generalize to programming languages very well, a programmer can abuse even the best language. But it still has some relevance: languages are meant to be thought shapers, they are meant to guide their programmers, and C++ doesn't really do that in a constructive way.
There are a few things going on here:
And as a language for non-programmers, the "look like Java" directive deoptimized it immediately, compared to my earliest wishes and sketches which were based on Logo and HyperTalk (HyperCard's language).
Really, functional + prototypal was not aimed squarely at beginners or non-programmers. I put in what I hoped would be power enough for real programmers to finish the job. That's why I made everything malleable. I knew monkey-patching would be needed.
Brendan, why did you choose prototypal?
He did mention monkey patching. It is easier to monkey patch in a class-based language, though of course class-based languages like Python and Ruby support it.
One of my designer colleagues found ActionScript pre-3 easier to use post-3, one of the main differences in 3 was an emphasis on more class use for "more structured" programming (in Flash Studio). However, it comes at the cost of usability/tinkerability that prototypes enabled.
i'd guess somebody must have explored middle grounds, like being able to dynamically attach 'traits' or something?
You really can't/shouldn't mix classes and prototypes because it becomes weird very quickly and usually different extension methods will conflict. I like the ideas of traits, its something I'm doing in YinYang.
What is crazy is that classes had won (Treaty of Orlando be damned), and then...bam...JS comes out and prototypes are back in the game again. But since most people in industry hate prototypes, they are trying to suppress them again with classes (a plausible conspiracy theory :) ).
Er, the introduction of prototype-based classes is one of the very premises of dodo. I don't think you shouldn't mix them.
Are there specific issues you see in having both in the same language?
For me, the main issue I had was to figure if an object derived from a prototype is the same class as its parent (in dodo it is the same if it adds no members) and how to type prototypes (I use tuple types with specific rules for assignment).
Bridging between the class world and prototype world, even in the same language, is much like bridging between two different languages with two different programming models. And its not just a technical problem, its a problem of thought.
Abstract Factory?
It seems to me that classes and prototypes are mostly orthogonal:
* Each 'Class' is an object. We can clone and inherit from it. Doing so creates a 'subclass'. I.e. the prototype for a class is its super class.
* Each 'Class' is a factory. We can create 'new' objects from it.
Something like this might work, though we'd need to use special methods to mutate the object created by the factory, separate from extending the factory...
JS needed objects. To talk to Java, at least; more important: for its own HyperTalk-inspired DOM objects with their onclick, etc. methods. But as I've written elsewhere, JS back then could never have had classes as declarative forms -- it had to be Java's boy-hostage sidekick, it could not be too big for that batcave. Plus, there wasn't time.
I admired Self and saw prototypal inheritance as a better way to build object abstractions, especially in a dynamic language. And it was easy to implement. And it was non-threatening to Java -- indeed prototypes and first-class functions (and closures, which weren't in JS1.0) flew under the radar for much of the first year.
Thanks, Brendan.
JS is so far removed from the JavaMan bat cave that it's sometimes hard to fathom why it hasn't really evolved very much over the years.
Re classes -> Many folks coming (and more will come soon...) to JS from Java or C# are used to classes - they've come to expect them. They're used to _thinking_ in a more traditional OO-structured way (and programming, like any programming language, can be defined as a formalized expression of structured thought, right?).
Fact is, for many practicing developers classes are simple to use, simple to understand, etc. They are _implicit_ in their everyday coding lives. Prototypes, on the other hand, feel awkward in comparison. So, why not add classes to JS?
Then again, it's not like you really have to add classes to the language itself, right? JS is an assembly language, after all (or is it? What do you think about this notion, Brendan?) so just add classes to some JS variant that transpiles to proper JS (or compiles... It's sometimes hard to know the difference with so much of this sort of thing going on).
Versions of Javascript (aka EMCAScript) DO have classes, consider Adobe's ActionScript.
I thought that Javascript wasn't meant to be an assembly language, it was a language that programmers actually use. I would love to hear Brendan's thoughts on that also.
I thought that Javascript wasn't meant to be an assembly language, it was a language that programmers actually use.
Pretty interesting to me is it's been wearing both hats actively for half a decade or so now (if not longer, in some uses of it(*), and/or to a looser extent of 'assembly') when you look at it, actually, and all this progressively occurring de-facto, as opposed to per one of its main original design's objectives, it seems.
Anyway, Thank You, Brendan: in my book, Javascript didn't do bad (at all) until now, given the initial resources allocated to its design, implementation, etc.
I can easily imagine myself asking a junior (web) developer "Just out of curiosity, do you really enjoy coding in Javascript or have you tried looking for something else you'd like better?" and if the reply is "Well, yes, I quite like it." I'd probably try come back with a:
"Same here, for some things. So, you do find coding in an assembly language fun too? Nice..." and then, maybe, enjoy the funny look I'd get in return.
But then, again, it's not just about the language's design and semantics properties at play, here, but is also I suppose, as much as about the place occupied by JS in the whole WWW client (and recently, server) platform's role and playground and opportunities they met over time, and the progress achieved in performances of its various execution environments competing with each other, that have to deal with "the other guys"... DOM, rendering/server page engines, etc.
(*) Ok, shameless plug, there; but hopefully not off-topic, though.
There's a cross-compiler for AS to JS (Jangaroo). So, you're right, Sean. JS has classes today, but you have to compose in AS first (and what's the status of this AS->JS translator these days, anyway?). JS also now supports a Lisp dialect(Clojure), but you need to write in ClojureScript first.
Everything-to-JavaScript seems to be the pattern emerging here. It IS an assembly language! It's also a gp programming language that's expressive and high level.
Would love to hear Brendan's thoughts on JS classes and JS as web assembly language :-)
I think that "JS is the web's assembly language" is a silly notion. Yes, JavaScript a popular target for compilers these days, but it's not an assembly language and compilers have been targeting languages other than assembly/machine language for 50 years.
I think the lack of a prescribed VM for JavaScript is a strength, not a weakness, and perhaps a key reason for its success. I think the lessons from the JVM and the CLR is that Virtual Machines are pretty much tied to a single language; step much outside the original language and you have a lot of difficulty implementing things efficiently, if at all.
So what we've observed new languages that are designed around these VMs; and attempts of implementing existing languages on these VMs falter and fail.
So I think the better question is to ask what makes JavaScript a difficult language to target. And granted I haven't tried to implement a compiler that targets JavaScript, but it sounds like it's posed approximately the same level of incidental difficulty as the JVM or the CLR. And three issues come to mind that I've heard from multiple sources:
1. A lack of tail calls, explicit or otherwise.
2. A lack of standardization with regard to the initial JavaScript environment.
3. Numerical difficulties of implementing pretty much anything other than floating-point arithmetic.
I won't speak for Erik (I can't), but it seems he's using this terminology to reflect the working notion that JavaScript can be defined as a low level language that only VMs need to parse and compile. So, in this sense you don't program JS as a human programmer - you use some other higher level abstraction that compiles down to JS (just as we have today in various formats).
The question is, what will be the "C of the web"? C effectively "killed" assembly language in terms of its required usage by human developers programming computing devices. Higher level languages like CoffeeScript are most likely the direction we're going. Then again, maybe JS will just evolve to be both a better high level language for the web and a better web assembly, regardless of the terminology used to express what it is or isn't in context.
I don't think CoffeeScript qualifies as "higher level" than JavaScript -- it's just an alternative surface syntax with some additional sugar. Consequently, it doesn't prove anything about JS's suitability as a compilation target.
From a purely technical perspective, using JS as a "web assembly" has almost satirical value. Not because it is hard to compile to JS -- it isn't. But because you gonna throw away all the valuable knowledge a compiler has about a program in a "real" language, and then have the JS engine jump through hoops and hoops to reconstruct it at runtime, in a rather incomplete, unpredictable, and expensive fashion.
I happen to work on V8 these days. Like most other contemporary JS engines, it is amazing high-tech. But it also is somewhat sad that about 80% of the invested cleverness would be unnecessary if it wasn't for the peculiarities of JavaScript.
It seems to me that JS has plenty of room to get better. Today, it's not design-time tool friendly. There's no reason that it can become a better language with support for object orientation with classes, for example or more structured genericity (compile-time genericity al a templates in C++), safer (removal of eval or strict mode by default, etc), removal of dumb stuff like null as an object, better numerical support, on and on... Languages also evolve as a consequence of how they are used in the real world. If we want better tools, more control at design time, compilation before deployment, etc, then the language must evolve. I believe it will (but not as fast as all would like). It may be that JS is not really suited for a role as web assembly today, but this doesn't mean it won't be in the future. Clearly, this is one significant way it's being used today - and effectively.
Keep on cranking out killer engineering. We all appreciate it (the low end developers who rely on the brilliant hacks inside all modern browsers' JSVMs). Hopefully, in the future you can scratch your heads less.
Regarding your numbered list, 1 and 2 are addressed in ES.next, which requires proper tail calls, and whose module system helps clean up the hopeless top level. DOM and Web API standards continue to evolve, but at least they don't all need to inject global names.
Point 3 is a good one. For storage of machine integers along with 32- and 64-bit floats, typed arrays and binary data (ES.next's embrace-and-extend answer to typed arrays) provide fine-grained programmer control.
But for arithmetic operators, nothing beyond the status quo (bitwise for int32 and uint32 implicit conversions, all else are double). This is a problem we're working on for "Harmony" but not ES.next. It will take a bit longer. | http://lambda-the-ultimate.org/node/4308 | CC-MAIN-2018-47 | refinedweb | 9,470 | 63.19 |
Is it possible to terminate a running thread without setting/checking any flags/semaphores/etc.?
It is generally a bad pattern to kill a thread abruptly, in Python and in any language. Think of the following cases:
The nice way of handling this if you can afford it (if you are managing your own threads) is to have an exit_request flag that each threads checks on regular interval to see if it is time for it to exit.
For example:
import threading class StoppableThread(threading.Thread): """Thread class with a stop() method. The thread itself has to check regularly for the stopped() condition.""" def __init__(self): super(StoppableThread, self).__init__() self._stop_event = threading.Event() def stop(self): self._stop_event.set() def stopped(self): return self._stop_event.is_set()(ctypes.c_long(tid), None) )
(Based on Killable Threads by Tomer Filiba. The quote about the return value of
PyThreadState_SetAsyncExc appears to be from an old version of Python.). | https://pythonpedia.com/en/knowledge-base/323972/is-there-any-way-to-kill-a-thread- | CC-MAIN-2020-16 | refinedweb | 155 | 67.35 |
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Wed, Sep 18, 2019 at 2:28 PM Lukasz Majewski <lukma@denx.de> wrote: > > Hi Alistair, > > > On Tue, Sep 17, 2019 at 9:51 AM Joseph Myers > > <joseph@codesourcery.com> wrote: > > > > > > On Tue, 17 Sep 2019, Lukasz Majewski wrote: > > > > > > > - New 32 bits glibc ports (like RISC-V 32) will get __TIMESIZE == > > > > 64 (__WORDSIZE == 32) and no need to define the -D_TIME_BITS=64 > > > > during the compilation. They will just get 64 bit time API > > > > support from the outset. > > > > > > Yes, at least if such ports wish to use 64-bit time; I don't think > > > we've really discussed if we want to *require* 64-bit time for > > > future ports (e.g. the next revised resubmissions of the ARC and > > > NDS32 ports). Certainly the work required right now for ARC or > > > NDS32 to use 64-bit time would be significantly more than the work > > > for RV32 (because they also support older kernel versions without > > > the 64-bit-time syscalls, so all the Y2038 work for fallback at > > > runtime to older syscalls becomes relevant), unless they decide on > > > 5.1 or later as minimum kernel version. > > > > - Already supported 32 bits architectures (like armv7-a with > > > > __WORDSIZE == 32) will keep __TIMESIZE == 32 and require > > > > -D_TIME_BITS=64 for compilation. > > > > > > Yes. > > > > > > > After glibc sets the minimal supported kernel version to 5.1 > > > > and all conversions for syscalls to support 64 bit time API are > > > > done the __TIMESIZE will be set to 64 and -D_TIME_BITS=64 will > > > > not be required anymore for compilation. > > > > > > No. __TIMESIZE means the size of time_t in the unsuffixed ABIs in > > > glibc, not the _TIME_BITS-dependent size of time_t in the current > > > compilation. We hope in future to make _TIME_BITS=64 the default > > > and only API supported for new compilations (which is independent > > > of what the minimum kernel version is), but __TIMESIZE would still > > > be 32, because the unsuffixed ABIs would remain compatible with > > > existing binaries using 32-bit time. > > > > Ok. So then we shall keep the condition: > > > > > > > > #if __WORDSIZE == 64 \ > > > > || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) > > > > # define __timespec64 timespec > > > > #else > > > > > > No. __timespec64 should be defined to timespec whenever __TIMESIZE > > > == 64. The timespec to which it is defined, in the public header, > > > would gain padding. > > > > > > The condition > > > > > > #if __WORDSIZE == 64 \ > > > || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) > > > > > > is correct as a condition for struct timespec (in the public > > > header) *not* to have padding. > > > > Are you going to incorporate this into your series Lukasz? > > > > I currently have this diff which fixes the build failures for me: > > > > diff --git a/include/time.h b/include/time.h > > index 7ed3aa61d1d..91f6280eb4d 100644 > > --- a/include/time.h > > +++ b/include/time.h > > @@ -50,8 +50,7 @@ extern void __tzset_parse_tz (const char *tz) > > attribute_hidden; > > extern void __tz_compute (__time64_t timer, struct tm *tm, int > > use_localtime) __THROW attribute_hidden; > > > > -#if __WORDSIZE == 64 \ > > - || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) > > +#if __TIMESIZE == 64 > > I've prepared v8 of __clock_settime64 conversion patches with the above > change: > > > I've tested it with meta-y2038: > > > as well as > ../src/scripts/build-many-glibcs.py > > It seems to work as expected. > > > # define __timespec64 timespec > > #else > > /* The glibc Y2038-proof struct __timespec64 structure for a time > > value. diff --git a/time/bits/types/struct_timespec.h > > b/time/bits/types/struct_timespec.h > > index 5b77c52b4f0..48405c4f08a 100644 > > --- a/time/bits/types/struct_timespec.h > > +++ b/time/bits/types/struct_timespec.h > > @@ -3,13 +3,25 @@ > > #define _STRUCT_TIMESPEC 1 > > > > #include <bits/types.h> > > + > > }; > > I did not incorporated the above change to v8 of __clock_settime64 as > there are some issues raised by Joseph. That's fine, I can fix up his comments and include that in my series. > > Last but not least - we can get away with the above change as the > implicit padding works for RV32, and ARM32 (which both are LE). RV32 is actually both BE and LE. The spec allows it to be either. At the moment there are only LE implementations, but we should try to handle both. Alistair > > > > > #endif > > > > As well as that the timeval struct has the same issue. I'll have to > > look into that and see what the solution is there. > > > > Alistair > > > > > > > > -- > > > Joseph S. Myers > > > joseph@codesourcery.com > > > > > | https://sourceware.org/legacy-ml/libc-alpha/2019-09/msg00283.html | CC-MAIN-2021-04 | refinedweb | 692 | 64 |
Hi All , My query was is there any way we can reload the changed python script without restarting the splunk everytime?
I understand we have _bump for javascript and CSS & debug/refresh for .conf files but do we have a similiar command for changed python scripts as well?
Thanks & Regards
AG.
Please describe your use case. I change the Python code in my apps often and never need to restart Splunk.
I have the same problem.
To specify it a little further:
I am developing a custom REST Endpoint using
from splunk.persistconn.application import PersistentServerConnectionApplicationPython
When I run the endpoint I get the expected result. However if I then go on to change the code, the changes sometimes impact the output and sometimes they don't no matter what I do to the code. Even if I deliberately break the code, the endpoint still works.
As far as I could figure out, I have about 5-10 iterations that will impact the output, afterwards some cached version of my script seems to be executed. If I reach that point, the only thing, that makes new versions of the code show up, is restarting splunk, as initially suggested.
So if there is something similar to _bump or debug/refresh for Python, that would be a HUGE help.
I have experienced this behavior with Splunk 6, 7, and 8. Previous versions I have not done much development on, so I can't say.
Any suggestions would be immensely appreciated.
Even better question, I recently learned python, how do I take what I learned and use it in splunk? Explanation links to articles are fine answers! Yes, I give Karma!
Please ask a new question rather than hijack a thread.
Please describe your use case. I change the Python code in my apps often and never need to restart Splunk.
Hi @richgalloway ! Thanks for this. So does it mean that you don't have to restart the Splunk server whenever you make any changes to your python code in "bin" folder? You can save the code and it should run fine whenever we are triggering this code through JS ?
In my use case, I am trying to encrypt password using python script. My JS is calling this python script & sending that encrypted password to Splunk to display.
Thanks
AG. | https://community.splunk.com:443/t5/Developing-for-Splunk-Enterprise/Reload-changed-python-code-without-restarting-splunk/m-p/558511 | CC-MAIN-2022-05 | refinedweb | 387 | 74.69 |
The XMLConnector class is the script version of the XMLConnector component. You can use this class to work with external XML documents using ActionScript. You can use ActionScript to bind the contents to other components .
You might want to script the XMLConnector component if youre adding components to the Stage dynamically. Before you can get started, youll need to have a copy of the XMLConnector in your library. The easiest way to do this is to drag the component to the Stage and delete it again.
You can include the XMLConnector class within your file using the following code line. Using an import statement means you wont have to use mx.data.components.XMLConnector each time you want to refer to the XMLConnector class.
import mx.data.components.XMLConnector;
Youve already seen some of the scripting that you can use with the XMLConnector component. Earlier, you saw the trigger method of the XMLConnector class and you displayed the results property in an Output window.
Well get started by scripting an XMLConnector that loads an external XML document. You can create a new XMLConnector with this line:
var myXMLConnector:XMLConnector = new XMLConnector();
You can set the parameters for the XMLConnector with the following code. These are equivalent to the parameters within the Component Inspector; you can see a description of each parameter in Table 8-2.
myXMLConnector.URL = "xmlDoc.xml"; myXMLConnector.direction = "receive"; myXMLConnector.ignoreWhite = true; myXMLConnector.multipleSimultaneousAllowed = true; myXMLConnector.suppressInvalidCalls = true;
Interestingly, even though the ignoreWhite parameter is set to true by default in the Component Inspector, it still defaults to false when you script the XMLConnector. Make sure you dont forget this line if youre scripting the XMLConnector; otherwise Flash will include white space in the results.
Once youve configured the parameters, youll need to trigger the component using this line:
myXMLConnector.trigger();
The XMLConnector class broadcasts the result event when it completes the call to the XML document. You can use a listener to display the XML content as shown here:
var xmlListener:Object = new Object(); xmlListener.result = function(evtObj:Object):Void { trace (evtObj.target.results); }; myXMLConnector.addEventListener("result", xmlListener);
You can access the XML document through the results property of the XMLConnector. The name is very similar to the result event, so make sure you dont confuse the two.
The best way to get started with the XMLConnector class is through an example.
In this exercise, well load the file address.xml into Flash using the XMLConnector class.
Open Flash and create a new movie. Add a new layer, actions , to the movie.
Save the file as scriptLoadAddress.fla in the same folder as the address.xml file.
Add and then delete an XMLConnector component. This adds the XMLConnector component to the library.
Select frame 1 of the actions layer and add the following code:
import mx.data.components.XMLConnector; var addressXC:XMLConnector = new XMLConnector(); var xCListener:Object = new Object(); xCListener.result = function(evtObj:Object):Void { trace (evtObj.target.results); }; addressXC.addEventListener("result", xCListener); addressXC.URL = "address.xml"; addressXC.direction = "receive"; addressXC.ignoreWhite = true; addressXC.multipleSimultaneousAllowed = true; addressXC.suppressInvalidCalls = true; addressXC.trigger();
This code imports the XMLConnector class and creates a new XMLConnector object. It assigns a listener that traces the results when the XMLConnector broadcasts the result event. The XMLConnector loads the address.xml file and sets the relevant properties. The last line triggers the connection.
Test the movie and you should see the contents of the XML document in an Output window. You can see the completed file scriptLoadAddress.fla saved with your resources.
In exercise 5, you saw how to use ActionScript with the XMLConnector component. You loaded the contents of an XML document into an XMLConnector object called addressXC , set the parameters, and displayed the results in an Output window.
You could work with the loaded XML using the XML class. For example, you can set an XML variable to the value of evtObj.target.results in the listener function, as shown here:
xCListener.result = function(evtObj:Object):Void { var myXML:XML = evtObj.target.results; var RootNode:XMLNode = myXML.firstChild; trace (RootNode.nodeName); }; addressXC.addEventListener("result", xCListener);
You can then work with the resulting XML object using the methods and properties of the XML class. However, this defeats the purpose of the XMLConnector class. One of the main benefits of the XMLConnector class is that you can bind the results to other components. You can do this using the Component Inspector, or you can use the DataBindingClasses component to achieve the same result with ActionScript.
The DataBindingClasses component allows you to use ActionScript to set the bindings between components. This is useful if youre dynamically adding components to your Flash movies using the createClassObject method. You need to use ActionScript because the components dont exist until you compile the movie.
To start with, youll need to include the DataBindingClasses component in your movie. This component is available from the Classes library. You can open this library by choosing Window
Other Panels
Common Libraries
Classes . Drag the DataBindingClasses component to the Stage of your movie and then delete it again. This will add the component to your library. You can check this by opening your library with the CTRL-L ( CMD-L for Macintosh) shortcut key. Unlike the data components, the DataBindingClasses component has a visual appearance, so dont forget to delete it from the Stage before you publish your movie.
You can import the relevant classes from the mx.data.binding package with this code:
import mx.data.binding.*;
Table 8-5 shows the classes included in the package. These classes are only available with the Professional version of Flash.
In this section, Ill introduce you to the Binding and EndPoint classes. The Binding class creates the binding but it uses the EndPoint class to specify the details for each side of the binding. Youll need two EndPoint objectsone for each component involved in the binding. Each EndPoint object needs information about the component, as well as a component property for binding.
You can create the two EndPoint objects using this code:
var fromBinding:EndPoint = new EndPoint(); var toBinding:EndPoint = new EndPoint();
Youll need to set two properties for each EndPoint with the following code:
EndPoint.component = componentName; EndPoint.property = "componentProperty";
The code sets the component property to the instance name of the component. The property is a string value that refers to the bindable component property. For example, in a TextInput component, you would set the property to text . Youd normally use the dataProvider property for data-aware components.
Depending on how you trigger the binding, you may need to set the event property of the source EndPoint as shown here. This code line lists the name of the event that will trigger the binding.
EndPoint.event = "triggerEventName";
You can leave out this line if youre going to trigger the binding at another time, for example, within an event-handler function.
To create the binding, youll need to call the constructor for the Binding class, as shown here:
new Binding(fromBinding, toBinding);
When you create bindings using ActionScript, they are one-way by default. The component that triggers the binding has an out direction while the other component has an in direction.
As you saw a little earlier, you can set the event that will trigger the binding:
EndPoint.event = "triggerEventName";
This is useful where updating one component should immediately update another, for example, two TextInput components. You could use the change , focusOut , or keyDown events to trigger the updating.
The second approach is to call the execute method of the binding in an event-handler function. For example, you could call this method in the result event handler of an XMLConnector component. The execute method looks like this:
var bindingResults:Array = newBinding.execute(reverse);
The execute method has one Boolean parameter, reverse , that indicates whether to apply the binding in both directions. You can assign the execute method to an array as shown. The array will contain any errors that occur when executing the binding. If there are no errors, it will contain the value null .
In the next exercise, Ill show you a simple example of how to script the XMLConnector class.
In this exercise, well script an XMLConnector and display the results in a TextArea component.
Open Flash if necessary and create a new movie. Save it in the same folder as the address.xml file.
Rename the first layer actions .
Add the DataBindingClasses component to your library by choosing Window
Other Panels
Common Libraries
Classes and dragging the DataBindingClasses component to the Stage. Delete the symbol from the Stage so it doesnt show in the completed movie.
Add a TextArea component to the library. Do the same with an XMLConnector component. The library should contain the three items shown in Figure 8-41.
Figure 8-41: The library contents
Click on frame 1 of the actions layer and add the following code in the Actions panel. The code imports the binding, TextArea, and XMLConnector classes.
import mx.data.binding.*; import mx.controls.TextArea; import mx.data.components.XMLConnector;
Importing the classes means that we dont have to use a fully qualified name to refer to the class. We can avoid using mx.data.component.XMLConnector each time we refer to the XMLConnector class.
Add the following code to the actions layer to create a new XMLConnector object and set values for its properties:
var addressXC:XMLConnector = new XMLConnector(); addressXC.URL = "address.xml"; addressXC.direction = "receive"; addressXC.ignoreWhite = true; addressXC.multipleSimultaneousAllowed = true; addressXC.suppressInvalidCalls = true;
Create a TextArea component using ActionScript by adding the following code at the bottom of the actions layer:
createClassObject(TextArea," content_ta", this.getNextHighestDepth()); content_ta.moveTo(10,10); content_ta.setSize(400, 200); content_ta.wordWrap = true;
This code uses the createClassObject method to add a TextArea component called content_ta to the Stage at 10,10 . The code also sets the size and wordWrap property of the TextArea.
Add this binding at the bottom of the actions layer:
var fromBinding:EndPoint = new EndPoint(); var toBinding:EndPoint = new EndPoint(); fromBinding.component = addressXC; fromBinding.property = "results"; toBinding.component = content_ta; toBinding.property = "text"; var newBinding:Binding = new Binding(fromBinding, toBinding);
In this code, weve created two EndPoint objects. The first is fromBinding , which will send the results to the toBinding EndPoint. The code sets the component properties of each EndPoint by specifying the name of the component to use. It also identifies which component property should be bound. In this case, we use the results property of the addressXC XMLConnector and bind that to the text property of the TextArea component.
Create a listener with the following code. The listener responds to the result event of the XMLConnector object. The XMLConnector broadcasts the event after it completes the call to an external XML document.
var xCListener:Object = new Object(); xCListener.result = function(evtObj:Object):Void { var bindingResults:Array = newBinding.execute(false); trace (bindingResults); }; addressXC.addEventListener("result", xCListener);
When the results are received, we call the execute method to apply the binding. The false parameter specifies that the binding is one way. We assign the outcome of the execute method to an array variable called bindingResults . This array contains any errors that Flash encounters when trying to execute the binding. If there are no errors, the word null displays in the Output window.
Add the following line at the bottom of the actions layer. This code triggers the XMLConnector component.
addressXC.trigger();
Test the movie and youll see the results from the binding within an Output window. You should see the word null in the window if there are no errors. When you close the Output window, you should see the same interface as that shown in Figure 8-42.
Figure 8-42: The completed interface
You can find the completed resource file saved as simpleScriptXMLConnector.fla .
In exercise 6, we loaded an external XML document into an XMLConnector object and bound it to a TextArea component. It was a very simple example, but as you can see it took a lot of code to create the bindings.
In the next exercise, well look at an alternative method of binding the results of an XMLConnector to a component. Instead of using bindings, well use ActionScript to process the results and add them to the dataProvider of a List component.
In this exercise, well use the results from an XMLConnector to populate the dataProvider of a List component. The steps outlined in this exercise provide an alternative to the approach in exercise 6, scripting the bindings.
Open Flash if necessary and create a new movie. Save it in the same folder as the address.xml file and rename Layer 1 as actions .
Drag the DataBindingClasses component from Window
Other Panels
Common Libraries
Classes onto the Stage. Delete the symbol so it appears only in the library.
Add a List and XMLConnector to the library.
Click frame 1 of the actions layer and add the following code to import the List and XMLConnector classes:
import mx.controls.List; import mx.data.components.XMLConnector;
Add the following code to the actions layer. These lines create a new XMLConnector object that loads the file address.xml .
var addressXC:XMLConnector = new XMLConnector(); addressXC.URL = "address.xml"; addressXC.direction = "receive"; addressXC.ignoreWhite = true; addressXC.multipleSimultaneousAllowed = true; addressXC.suppressInvalidCalls = true;
Use the following code to add a List component to the Stage:
createClassObject(List,"name_list", this.getNextHighestDepth()); name_list.moveTo(10,10); name_list.setSize(200,100);
Create an event listener that listens for the result event of the XMLConnector object:
var xCListener:Object = new Object(); xCListener.result = function(evtObj:Object):Void { trace (evtObj.target.results); }; addressXC.addEventListener("result", xCListener);
Add the following line at the bottom of the actions layer. The code triggers the XMLConnector object.
addressXC.trigger();
Test the movie and youll see the results from the XMLConnector in an Output window.
So far, weve used the same steps as in the previous example. Now, instead of using the EndPoint and Binding classes, well populate the dataProvider with ActionScript.
Change the event listener function as shown here. The new lines, which appear in bold, add the XML content to an array. We use the array to populate the dataProvider of the List component. Note that we can use evtObj.target.results to refer to the XML tree loaded into the XMLConnector object.
xCListener.result = function(evtObj:Object):Void { var len:Number = evtObj.target.results.firstChild.childNodes.length; var dp_arr:Array = new Array(); var childNode:XMLNode, theName:String, theID:Number; for (var i:Number = 0; i < len; i++) { childNode = evtObj.target.results.firstChild.childNodes[i]; theName = childNode.childNodes[1].firstChild.nodeValue; theID = childNode.childNodes[0].firstChild.nodeValue; dp_arr.push({label: theName, data: theID }); } name_list.dataProvider = dp_arr; };
The code creates a variable len that finds the number of contacts in the address book. We can find this by counting the number of childNodes in the firstChild . The code creates an array for the List items as well as supporting variables. We loop through each contact and find the name and id. These values are stored in the variables theName and theID and added to the array dp_arr . The function sets the array as the dataProvider for the name_list List box.
Test the movie and youll see the interface shown in Figure 8-43. You can also see the completed scriptLoadAddressNoBinding.fla in your resource files.
Figure 8-43: The completed interface
Exercise 7 has shown how to add data from an XMLConnector object to a List component without scripting the bindings. It is an alternative method for working with complicated data structures in bindings.
The ActionScript to create multiple bindings can get very involved, especially where you are using an XML document with complicated data structures. The EndPoint class has another property, location , which allows you to navigate data structures within the XML document. You can specify the path in many different ways, such as by using XPath statements or ActionScript paths. You then have to use Rearrange Fields, Compose String, or Custom formatters to manipulate the data.
Given the amount of effort involved in creating simple bindings with ActionScript, its much easier to create these bindings visually using the Component Inspector. I strongly suggest that you configure your bindings through the Component Inspector wherever possible. | https://flylib.com/books/en/1.350.1.70/1/ | CC-MAIN-2021-39 | refinedweb | 2,706 | 50.73 |
Hi,
How to change display line width of one object?
I want to change different line width with different objects.
How can I do?
Kind regards,
Vaker
Hi,
How to change display line width of one object?
I want to change different line width with different objects.
How can I do?
Kind regards,
Vaker
not sure which language you use. Below is a python example on how to change the print width for one or more curves. To see the results in the viewport, make sure to enable print display using the
_PrintDisplay command before running the script.
import Rhino import scriptcontext import rhinoscriptsyntax as rs def ChangePlotWeight(plot_weight=0): ids = rs.GetObjects("Select curves", rs.filter.curve, False, True, False) if not ids: return pws = Rhino.DocObjects.ObjectPlotWeightSource.PlotWeightFromObject for id in ids: rh_obj = rs.coercerhinoobject(id, True, True) rh_obj.Attributes.PlotWeightSource = pws rh_obj.Attributes.PlotWeight = plot_weight rh_obj.CommitChanges() scriptcontext.doc.Views.Redraw() if __name__=="__main__": ChangePlotWeight(plot_weight=2.0)
Note that running the function above without providing the argument
plot_weight will reset the plot weight to the default value 0 (hairline). A negative value will set the curve plot weight to “no print”.
c.
Hi Vaker,
See if this helps.
cmdSampleObjectPlotWeight
– Dale
Hi Dale,
I change plot weight for two lines.
“0.13” for left and “2.0” for right.
Vaker
Did you use the
PrintDisplay command like Clement said?
You might have to play with the Thickness setting to get the result you would like.
Hi clement, wim,
Thanks for your help.
It works.
I have another question.
How to open “Print Display Mode” by function without script?
Vaker
you can just script the
_PrintDisplay command and enable / disable it via it’s command line options. Eg. in Python RhinoScript this would be:
import rhinoscriptsyntax as rs rs.Command("_PrintDisplay _State=On")
c.
Hi clement,
Thanks for your help.
I know how to using script.
I just want to know way with Rhino API(if or not).
Vaker
@Vaker, i guess you need to envoke it using
RunConmand. I do not see a PrintDisplay method in RhinoCommon API.
c.
There isn’t. I’ve added the wish to the pile.
– Dale
Hi Dale,
Thanks for your help
Vaker | https://discourse.mcneel.com/t/how-to-change-display-line-width/36485 | CC-MAIN-2022-27 | refinedweb | 371 | 79.36 |
There is a big memory leak inside GPU when using Starling filters.
The leak is manifested in such way that Starling displays correct GPU memory while actual memory inside GPU is very different as can be seen with Task Manager or MSI Afterburner.
I don't think the bug is withing Starling but I need help from people that understand AGAL and Starling better to discover what is causing it.
I have created a test project that shows this memory leak. When running the test memory keeps rising and rising until the app crashes.
While running this test open Task Manger and select your GPU tab to see current GPU memory usage or you can use MSI afterburner to see current GPU memory usage. You will see that that memory is very different when compared to what Starling stats is showing.
This is only happening on WIndows. On Mac OS everything is fine, also on iOS and Android.
I am using latest AIR SDK 32 and this was happening on AIR 31 and 30 as well.
Also I have tested with 32bit and 64bit captive and this happens on both as well.
Also this is happening with all filters not just Starling default filters, but this is NOT happening when filter is cashed.
I have tested this with nvidia graphics card and Intel integrated graphics card and it is behaving the same on both. I did not tested with radeon graphics card.
Is someone able to run this test to confirm it?
On MacOS so I can’t test it, but how exactly is it crashing? I would guess it’s failing to allocate memory for a Texture or something else on the GPU. How long does it take to crash normally too?
I ran the test on GeForce GTX 1060 6GB.
The GPU memory filled completely in about 10s, while the app showed 147 MB GPU memory used.
After about 3 min, the app froze and closed itself.
Thanks @htmiel so the bug really exists.
@JohnBlackburne On my game it happens very quickly because I have filters on my game global sprite.
@Daniel are you able to help us figure out what part of the AGAL is responsible for this so that we can submit bug to Adobe or Harman. I have very low knowledge about AGAL so can anybody help in figuring what is the cause of this because this makes filters unusable on Windows?
Thanks,
Caslav
I doubt very much it’s AGAL. Every time you draw something it uses AGAL, whether the defaults in MeshEffect.as or AGAL supplied by a filter or by a custom mesh effect.
Changing the AGAL used can change the output, that’s all. It does not allocate memory, is not responsible for textures, vertex buffers or anything like that. It can have a performance impact but normally other things are more important for performance.
To confirm this you could try changing the filter, as the main effect of that is it changes the AGAL used. It should therefore have a similar memory problem (though if not it narrows things down considerably).
Also try just using a FragmentFilter, i.e. an empty filter. This has no visible effect but still allocates texture memory, so would be a further test which might help narrow it down.
Yes agal itself will have almost zero impact on gpu memory in this case.
Had a super quick skim of the code.. are you disposing of all the temp images used in the 2 for loops?
bwhiting Had a super quick skim of the code.. are you disposing of all the temp images used in the 2 for loops?
Yes I am disposing all the images. I have tested a few more times and actually it even happens if filters are cached.
I have improved the test a little bit. Here is the all the code for the test:
package dfpck
{
import flash.utils.setInterval;
import starling.core.Starling;
import starling.display.Image;
import starling.display.Sprite;
import starling.filters.BlurFilter;
import starling.filters.ColorMatrixFilter;
public class FilterMemoryLeak extends Sprite
{
public static const NUMBER_OF_IMAGES:int = 200;
protected var _atlas:StarlingAssets;
protected var _filterImages:Vector.<Image>;
protected var _container:Sprite;
public function FilterMemoryLeak()
{
super();
_filterImages = new Vector.<Image>;
this.filter = new BlurFilter(4, 4, 1);
}
public function get atlas():StarlingAssets
{
return _atlas;
}
public function set atlas(value:StarlingAssets):void
{
_atlas = value;
}
public function startExample():void
{
_container = new Sprite();
addChild(_container);
createFilterImages();
setInterval(updateImagePositions, 32);
}
public function updateImagePositions():void
{
var filterImage:Image;
for(var i:int = 0, len:int = _filterImages.length; i < len; i++)
{
filterImage = _filterImages[i];
filterImage.x = randomNumberRange(Starling.current.stage.stageWidth, 0, false);
filterImage.y = randomNumberRange(Starling.current.stage.stageHeight, 0, false);
}
}
public function randomNumberRange(maxNum:Number, minNum:Number, floor:Boolean = true):Number
{
var randomNumber:Number = 0;
if(floor)
{
randomNumber = Math.random() * (maxNum - minNum + 1);
randomNumber = Math.floor(randomNumber);
}
else
{
randomNumber = Math.random() * (maxNum - minNum);
}
return randomNumber + minNum;
}
public function createFilterImages():void
{
var filterImage:Image;
var filter:ColorMatrixFilter;
for(var i:int = 0; i < NUMBER_OF_IMAGES; i++)
{
filterImage = new Image(atlas.atlas.getTexture('ball_pool_1_full'));
filterImage.scale = 0.14;
filterImage.alignPivot();
filterImage.readjustSize();
filterImage.name = i.toString();
filterImage.x = randomNumberRange(Starling.current.stage.stageWidth, 0, false);
filterImage.y = randomNumberRange(Starling.current.stage.stageHeight, 0, false);
// if I does not include this filter memory usage stays normal
filter = new ColorMatrixFilter();
filter.tint(0x000DFA, 1);
filterImage.filter = filter;
_container.addChild(filterImage);
_filterImages[i] = filterImage;
}
}
}
}
So now I am just creating images once and on every 32 milliseconds I am only updating their position and still memory keeps rising in the GPU until the app crashes. This only happens on Windows machines.
Does anyone understands why is this happening, how can we submit bug about this problem?
I have also tried just using FragmentFilter like @JohnBlackburne suggested but nothing changes, the memory is still allocated endlessly until app crashes.
Yes!!! I have already created this topic! The problem is in the filters! And the problem occurs when using Direct 11, if you use Direct 9 - the problem disappears.
Thank you @denisgladkiy I have also tested this with Adobe Scout and Adobe Scout sees exactly what Starling sees normal memory consumption while reality is very different. Here is the Adobe Scout profile:
But how can we avoid using DirectX 11 on Windows Machines if we can at all?
hardcoremore
Apparently setting the value
-swf-version=37
But the difference in graphics is visible to the naked eye! anti-aliasing is much better on direct 11
denisgladkiy Apparently setting the value
-swf-version=37
You were right, when I use -swf-version=37 and run the test the memory is not allocated any more. Hope we can extract more data from this simple test that show what is the cause of this to present to Adobe or Harman.
The problem in the implementation of direct 11. At the moment adobe will support desktop applications?
@denisgladkiy I really hope so. Is there anywhere a ticket where we can vote. Also can we get more info from this test I have created which could help Adobe? Does the test shows what exactly is the problem because the test is really small. I hope @Daniel can help with extracting more details from this test that could help Adobe or Harman fix it.
Sorry for not chiming in earlier - I'm currently on a vacation.
Yes, I remember this issue. The only solution I found was to change the DirectX version via that "swf-version" flag, which is far from optimal. Maybe we now have a chance to get this fixed by Harman? Your sample is probably a good start. Does anyone know if there's a new issue tracker available now? Or are we supposed to still report on the old one?
Daniel I don't think that Harman has announced any kind of issue tracker for AIR yet. I don't know for sure, of course, but if I had to guess, I'd expect Adobe's issue tracker to be shut down. I know I wouldn't want to use it, if I were Harman.
Until we hear something more definitive from Harman, I suggest that we log issues here, since it's already an established practice:
Then, also send an email to Adobe.Support@harman.com with the issue's description and a link. | https://forum.starling-framework.org/d/21763-gpu-memory-leak-when-using-starling-filters-on-windows/1 | CC-MAIN-2019-26 | refinedweb | 1,392 | 57.67 |
So my questions are geared directly to my homework. Before you ask, yes I've looked at other questions and I have looked at the java docs to try and help me but I only understand so much..
You have become a restaurant mogul. You own several fast food chains. However, you now need to set a standard that all of your fast food chain must follow in order to have your software be uniform across the board. There will be some rules that will be the same for all restaurants.
Create an Abstract Class named Restaurant
Create a function/method that will print the name of the restaurant when called.
Create an abstract function/method named total price
Create an abstract function/method named menu items
Create an abstract function/method name location
Create a Class called McDonalds that extends Restaurant
Implement all abstract methods
Add logic so that the total price method/function will give the total price of the meal including a 6% tax
Add a method that returns a Boolean named hasPlayPlace. Which returns true when this location has a playplace
Create a Constructor that will set the name of the Mcdonalds, location, and hasPlayPlace
This is just a portion of my homework. I'm not looking for anyone to do it for me, just some insight and help in the right direction.
Here's what I've written in only one of the classes:
public class McDonalds extends Restaurant { private String name; private String location; private boolean hasPlayPlace; Scanner input = new Scanner(System.in); public McDonalds (String name, String location, boolean hasPlayPlace) { setName(name); setLocation(location); setHasPlayPlace(hasPlayPlace); } McDonalds location1 = new McDonalds("McDonalds", "Kirkman", false); McDonalds location2 = new McDonalds("McDonalds 2", "International Dr.", true); public String getName() { return name; } public void setName(String name) { this.name = name; } public String getLocation() { return location; } public void setLocation(String location){ this.location = location; } public boolean isHasPlayPlace() { return hasPlayPlace; } public void setHasPlayPlace(boolean hasPlayPlace) { this.hasPlayPlace = hasPlayPlace; } public void totalPrice() { double totalPrice = 0; double tax = 0.06; totalPrice += (totalPrice * tax); } public void menuItems() { double mcChicken = 1; double fries = 1.25; System.out.println("1. Mc Chicken $1"); System.out.println("2. Fries $1.25"); int choice = input.nextInt(); switch (choice){ case 1: mcChicken *= tax; case 2: fries *= tax; } } public void location() { //Don't know what's supposed to go in here. //But I've implemented the method as I was supposed to. } }
So I'm just confused as what goes in each abstract method(totalPrice, menuItems, location) and how I get them to do what they're supposed to do. I've written what I thinks fits into it so far but now I'm stuck.
Here is also another part of the homework:
• Add a method/function that returns a String of what hot sauce you would like..ie hot, fire, mild
• Create a Constructor that will set the name of the TacoBell and location
Here's what I thought worked?
public String TacoBellSauce(String fire, String hot, String mild) { System.out.println("What sauce would you like to have?"); System.out.println("1. Fire"); System.out.println("2. Hot"); System.out.println("3. Mild"); int choice = input.nextInt(); switch(choice) { case 1: return fire; case 2: return hot; case 3: return mild; } return null; }
Can someone point out my mistakes and point me in the right directions? | http://www.javaprogrammingforums.com/whats-wrong-my-code/19910-beginner-programmer-abstract-concept-assignment-few-confusions.html | CC-MAIN-2013-48 | refinedweb | 560 | 53.92 |
The.
The handpose package detects hands in an input image or video stream, and returns twenty-one 3-dimensional landmarks locating features within each hand. Such landmarks include the locations of each finger joint and the palm.
Once you have one of the packages installed, it’s really easy to use. Here’s an example using facemesh:
import * as facemesh from '@tensorflow-models/facemesh; // output will be a], ... ], ... } }
Both packages run entirely within the browser so data never leaves the user’s device.
Be sure to check the demos as they’re quite nice. I did notice that the handpose demo only shows one hand, even though the library can detect more than one.
Face and hand tracking in the browser with MediaPipe and TensorFlow.js →
facemesh Demo →
handpose Demo → | https://www.bram.us/2020/03/11/realtime-face-and-hand-tracking-in-the-browser-with-tensorflow/ | CC-MAIN-2021-17 | refinedweb | 130 | 66.33 |
Docs |
Forums |
Lists |
Bugs |
Planet |
Store |
GMN |
Get Gentoo!
Not eligible to see or edit group visibility for this bug.
View Bug Activity
|
Format For Printing
|
XML
|
Clone This Bug
Hi,
please see
And Secunia Adv. SA28596
due to CVE-2008-01-28 this vuln is also fixed in 2.0.4 - maintainer please
provide an updated ebuild.
could someone please add "CVE-2008-01-28" to the summary, i dont have the
needed permissions
there is another CVE:
CVE-2008-0467 this one is only fixed in 2.1RC1, maintainers - please advice
(could someone also add that CVE-Name to the summary?)
Need to update to 2.0.4 for this one, 2.1.x is ok
This needs 2.0.4 and 2.1RC1
2.0.4 isn't even on the horizon. Same with 1.5.6, but we have no 1.5.x in
tree., So not sure what to say about 2.0.4. I will see about bumping 2.1.x to
2.1RC1 ASAP. Likely later today or tomorrow. But that's a pre-release version
so really is kinda moot. Shouldn't be used in production, won't go stable, etc.
I don't think we should mask Firebird at this time. But really have no way to
address 2.0.3.x short of a backport/patch.
Willaim any news on this one?
The patches are linked within the Firebird bug report (see URL) and they should
apply cleanly to 2.0.3. Please patch.
Commited 2.1.0 rc1, which is not subject to this vulnerability. Removed past
2.1.0 version that was vulnerable. Still have to make patch for 2.0.3, and will
do so ASAP. Couldn't find a unified on from bug link, so will have to fetch
files/patches and create my own unified one.
I admit it's a little hidden. On these overview pages:
You find every changed file. Either use the CVS revisions to extract a patch,
or click "(+X -Y lines)" and the link named "Patch" at the top. This will give
you one unified diff. Merging those into one patch should work too.
Will get to this before end of my day, sometime in the next 8 hours or so.
Thanks for the pointers on fetching the patches/diffs.
Working on this. Made two patches, the one for CVE-2008-0387 is good to go. The
one for CVE-2008-0467 makes compile fail. So working on that atm. Might commit
the one then the other worse case. Sorry for the delay been busy.
Created an attachment (id=143904) [edit]
firebird-2.0.3.12981.0 CVE-2008-0467 patch
Here is the patch for CVE-2008-0467. Need some help with this one. It applies
fine, but makes compile fail :(
make[2]: Entering directory
`/tmp/portage/dev-db/firebird-2.0.3.12981.0-r5/work/Firebird-2.0.3.12981-0/gen'
x86_64-pc-linux-gnu-g++ -O2 -msse -msse2 -msse3 -march=k8 -mtune=k8
-minline-all-stringops -DSUPERSERVER -pthread -I../src/include/gen
-I../src/include -I../src/vulcan -DNAMESPACE=Vulcan -ggdb -O3
-fno-omit-frame-pointer -DNDEBUG -DLINUX -DAMD64 -pipe -MMD -fPIC
-fmessage-length=0 -DPROD_BUILD -c ../src/remote/inet_server.cpp -o
../temp/superserver/remote/inet_server.o
In file included from ../src/include/../jrd/gdsassert.h:24,
from ../src/include/../common/classes/tree.h:34,
from ../src/include/../common/classes/alloc.h:45,
from ../src/remote/../jrd/../common/classes/fb_string.h:39,
from ../src/remote/../jrd/isc_proto.h:28,
from ../src/remote/inet_server.cpp:40:
../src/include/../jrd/../jrd/gds_proto.h:37: warning: large integer implicitly
truncated to unsigned type
../src/remote/inet_server.cpp:566: error: 'SignalSafeSemaphore' in namespace
'Firebird' does not name a type
../src/remote/inet_server.cpp: In function 'void* shutdown_thread(void*)':
../src/remote/inet_server.cpp:583: error: 'shutSem' was not declared in this
scope
../src/remote/inet_server.cpp: In function 'void signal_term(int)':
../src/remote/inet_server.cpp:621: error: 'shutSem' was not declared in this
scope
../src/remote/inet_server.cpp: In function 'void shutdown_fini()':
../src/remote/inet_server.cpp:650: error: 'shutSem' was not declared in this
scope
make[2]: *** [../temp/superserver/remote/inet_server.o] Error 1
make[2]: Leaving directory
`/tmp/portage/dev-db/firebird-2.0.3.12981.0-r5/work/Firebird-2.0.3.12981-0/gen'
make[1]: *** [fbserver] Error 2
make[1]: Leaving directory
`/tmp/portage/dev-db/firebird-2.0.3.12981.0-r5/work/Firebird-2.0.3.12981-0/gen'
make: *** [firebird] Error 2
If someone can help out with the patch. And/or inform me of what I did wrong.
Or need to do to fix. Would help out allot. Kinda stuck on this atm. Thanks
Just drop the file in firebird/files and add a line above the other patches in
a 2.0.3 ebuild. Re-digest and emerge. Will allocate some more time to it
tomorrow if no one beats me to it :)
Ok went upstream for help on this. Damyan Ivanov <dmn@debian.org> was kind
enough to provide the patch they are using on Debian. I just tested that it
applied and compiled filed. I just committed it to tree along with patch for
CVE-2008-0387. So we should be good to go now :)
Although the Debian patch is a little smaller than mine. So not sure what's up
with that. (There is a patch for a file for windows or etc in mine, but not
sure that accounts for size diff )
I did also find out from upstream about the compile error
"SignalSafeSemaphore is surely from another fix - it was needed when porting to
Solaris, Darwin or may be something else that does not support timeouts in
posix semaphores. Rename it bak to Semaphore and compile error will be gone."
So I might try that with my patch and swap out patches. Maybe going to ask
about the differences with upstream. But either way is address. I guess we can
look to stabilize this one. Or wait a day or so to see if I change out patches.
Just wanted to get a fix in tree sooner than later. Since I was already
slacking on this.
Thx William. Could you clarify which versions are targets for stable?
firebird-2.0.3.12981.0-r5 is patched, also doesn't used hard coded cflags like
-r4. Main differences between that version and current stable.
Haven't had a chance to diff patches yet, but if I do that will be -r6 and will
comment accordingly. Will see about looking into that now.
Thx.
Arches please test and mark stable. Target keywords are:
firebird-2.0.3.12981.0-r5.ebuild:KEYWORDS="amd64 -ia64 x86"
x86 stable
I fixed the multilib issues best I could on the one ebuild, amd64 stable
Fixed in release snapshot.
Request filed.
GLSA 200803-02 | http://bugs.gentoo.org/208034 | crawl-002 | refinedweb | 1,152 | 70.09 |
Debugging Tools and Techniques - JUnit test gives ClassCastException
I'm trying to migrate some JUnit 3.8 tests to 4.3. At the moment my tests run fine in Eclipse, but when I try to run them from the command line they fall over with a "ClassCastException" error. This happens even when all the actual tests are ignored, e.g.:
package ac.nott.chem.lattice;
import java.util.ArrayList;
import ac.nott.chem.lattice.Helper;
import org.junit.Test;
import org.junit.Ignore;
importstatic org.junit.Assert.*;
publicclass HelperTest{
@Ignore
@Testpublicvoid testFileIO(){
ArrayList<String> lines = Helper.readFromFile("search.inp");
assertEquals(76, lines.size());
assertEquals("A 3 -4 0", lines.get(0));
assertEquals("TEMP 1.5", lines.get(75));
Helper.printToScreen(lines);
Helper.printToFile("test.out", lines);
}
If anyone has any suggestions as to what might be causing the problem, I'd be very grateful! I'm running the JUnit tests from the command line like so (obviously without the lines breaks, just introduced here for ease of viewing):
java -classpath /opt/junit4.3/:/opt/junit4.3/junit-4.3.jar:
/home/haydnw/workspace/JavaLatticeSearch/build/:
/home/haydnw/workspace/JavaLatticeSearch/testbuild/
junit.textui.TestRunner ac.nott.chem.lattice.HelperTest
I've read through the Javadoc for "ClassCastException" and can't see what in the test code might be the issue? This happens with all my tests, which worked fine in JUnit 3.8, and all work fine in JUnit 4.3 through Eclipse. Thanks in advance. | http://www.java-index.com/java-technologies-archive/515/debugging-techniques-5153426.shtm | crawl-001 | refinedweb | 245 | 54.39 |
[ earn a 660 challenge coin (which includes a cool cipher, natch).
But, when you teach a bunch of skills like that and hold a CtF on the last day, sometimes, a few students get a little too rambunctious in applying their new-found skills. At the risk of being indelicate, I'll come out and say it — they try to cheat. By using their Python skills along with their MiTM capabilities, they try to snarf flags from other teams attempting to send them to the score server. What's an enterprising course author to do? Well, Steve Sims has some clever things up his sleeve, turning the tables on such shenanigans using the concepts taught in the course with a little Python magic of his own.
I recommend you read through Steve Sims' script to see how he uses Python with Scapy to call Nmap, call the underlying OS, formulate HTTP requests, and more. Check it out! -Ed.]
By Stephen Sims
Here is a short blog article about an attack that students were attempting to pull off in some of the Capture the Flag (CtF) events as part of SANS SEC660: Advanced Penetration Testing, Exploits, and Ethical Hacking. To thwart their attempts, I wrote a python script. In this article, I'd like to review the skills and techniques students use to try to undermine the CtF, and tell you my technical approach to address it in class.
The Source
During Day 1 of class, which is focused mostly on network attacks, we spend a lot of time looking at various ways to pull off a Man-in-the-Middle (MitM) attack, and then what you can accomplish by having that position. We cover techniques such as attacking SSL, routers, switches, and Network Access Control (NAC) solutions. During Day 3 of class, we spend a lot of time on Python, and various Python-based tools such as Scapy (by Philippe Biondi) and the Sulley Fuzzing Framework (by Pedram Amini / Aaron Portnoy / Ryan Sears).
The Attack
Armed with this information taught in class, every so often a CtF team attempts to steal key submissions from other teams. Now, one could certainly argue that there is technically no cheating in a CtF; however, this does not mean it should be really easy to pull the attack off. To score in the SEC660 CtF, SHA-1 hashes, which act as keys, are submitted into the scoring system by each team. If a hash/key matches a challenge, points are awarded to the team. Regardless of whether SSL or simple HTTP is being used as the transport protocol to the scoring server, the aforementioned teams were attempting to, and sometimes successfully, performing ARP cache poisoning and SSL stripping. This would allow the teams performing the attack to potentially read valid key submissions from other teams and get the points without completing the challenge. Ouch.
The Solution
The script you are about to read was written in about 90 minutes during a live CtF, so please forgive the stylistic issues and cut corners, such as not putting in the full paths to binaries when using the system() function. One of the solutions I designed to thwart this type of attack, and note that I am only sharing just one of them, was to create a script that would make a lot of noise on the wire. The script is not well-commented (again with the quick turnaround during the game), but it's easy to read as it's in Python. I decided to use Scapy together with Python to do the following:
- Scan the student subnets to look for inactive IP addresses within the valid range assigned during class, using Nmap. This way it doesn't stand out as an IP address that is obviously part of the script.
- Use one of these addresses very briefly and also use a random MAC address in the VMware OUI range.
- Automatically configure my interface with these addresses and perform a valid TCP_HTTP session to the scoring server.
- Submit a pseudo-random SHA-1 hash as a key submission and use a pseudo-random PHP session ID.
- Loop through this script until terminated.
The bottom line here is that my script injects false flags into the network, so anyone looking to steal a flag will likely get a non-valid flag delivered by my script. Instead of stealing a valid flag from a legitimate student, they will have stolen a false flag from my script, netting them NOTHING, except some wasted time.
Getting an automated script like this working with Scapy, that shows no errors when sniffing with a tool like Wireshark, can sometimes be challenging. There are multiple ways to get it working. Feel free to read through the script and use it to improve your Scapy skills, or even better, improve it and send it to me at stephen@deadlisting.com. I will totally buy you a beer! Don't forget to change the interface listed in the script if necessary.
-Stephen Sims
p.s. Josh Wright, Jake Williams, and I will be teaching SEC 660 using SANS on-line training system, vLive, from March 4 through April 17. No travel is required, as you can take the class from the comfort of your home or office. We meet twice a week, and we'll be sharing our best tips and tricks for advanced pen testing. Details are here:.
from scapy.all import *
from time import sleep
from hashlib import sha1
from random import random, sample, randint
import string
from os import system
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
conf.verb=0
os.system("clear")
print "\nPlease stand by while NMAP results are collected... This could take a minute...\n"
f = os.popen("nmap -n -PA -p0 10.10.75,76,77,78.1-254 | grep 'scan report for'") #Grab IP Addr from student range
z = []
for lines in f:
y = lines.split("\n") #Split \n from extra possible host addr's shorter than 3 digits.
x = [] # Empty list
x.append(y[0]) #Append the IP addr from y, and ignore the possible \n's
r = y[0] #Assign the list element (IP ADDR) from y to r
z.append(r[21:33]) #Grab only the IP ADDR from the NMAP scan results
print "Collected %d IP Addresses... Standby..." % len(z)
while True:
print "Spoofing process started..."
sp = RandNum(1025,65535) #Random number for ephemeral port assignment.
char_set = string.ascii_lowercase + string.digits #Random string for PHPSESSID
w = ''.join("10.10."+str(randint(75,78))+"."+str(randint(1,254)))
for x in z:
if w == x:
w = ''.join("10.10."+str(randint(75,78))+"."+str(randint(1,254)))
system("ifconfig eth1 down") #You may have to change interface number...
sleep(.5)
system("ifconfig eth1 hw ether " + str(RandMAC("00:0c:29:*:*:*")))
sleep(.5)
system("ifconfig eth1 " + w + " " + "netmask 255.255.0.0")
sleep(.5)
system("ifconfig eth1 up")
sleep(.1)
system("iptables -A OUTPUT -p tcp --destination-port 80 --tcp-flags RST RST -s " + str(w) + " -d 10.10.10.100 -j DROP")
sleep(1)
ah = os.popen("ifconfig eth1 | grep 00:0c:29") #Grab IP Addr
for lines in ah:
x = lines.split("\n")
y = []
y.append(x[0])
ah = x[0]
ah = ah[-19:]
print "Using MAC Address: " + ah
p = IP(src=w,dst="10.10.10.100") #Random IP from student subnets.
saveip = p[IP].src
print "Saved IP IS: " + str(saveip)
key = sha1(str(random())).hexdigest()
print "Using key: " + key
myseq = 1000
q= TCP(sport=sp, dport=80, flags="S", seq=myseq)
SYNACK = sr1(p/q)
sleep(.1)
SPORT2=SYNACK.dport
my_seq = myseq+1
my_ack = SYNACK.seq+1
ACK = TCP(sport=SPORT2, dport = 80, flags="A", seq=my_seq, ack=my_ack)
derp = send(p/ACK)
ACK = TCP(sport=SPORT2, dport = 80, flags="PA", seq=my_seq, ack=my_ack)
b = ''.join(sample(char_set,26)) #Joining 26 random chars from char_set for SESSID.
spoof = "HTTP/1.1 Host: 10.10.10.100"+\
"User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15) "+\
"Gecko/2009102814 Ubuntu/8.10 (intrepid) Firefox/3.0: PHPSESSID="
r = "GET /checkscore.php?key=" + key + spoof + b
getReq = sr1(p/ACK/r)
my_seq = myseq+507
ACK = TCP(sport=SPORT2, dport = 80, flags="FA", seq=my_seq, ack=my_ack)
derp = sr1(p/ACK)
ACK = TCP(sport=SPORT2, dport = 80, flags="A", seq=my_seq+1, ack=my_ack+1)
derp = send(p/ACK)
print "Successfully spoofed packet, no errors..."
sleep(3)
os.system("clear")
Posted February 11, 2014 at 11:12 AM | Permalink | Reply
san tran
Hah, came across this today so i thought i would also quickly brush up on my scapy..
Here is my answer to your "noisy" code It's quite simple, I noticed your code use random mac address
system("ifconfig eth1 hw ether " str(RandMAC("00:0c:29:*:*:*"))) " str(ipsrc) " ''"> " raw[24:64]
if(packet.load.find("GET /checkscore.php?key=")!=-1):
try:
meh=trust.index(macsrc)
## if this packet is already part of trust, just print it out
print fields
except:
if collection.has_key(macsrc): #if macaddress has been observed previously, it is valid!
trust.append(macsrc) #append to trust array
print collection[macsrc] #print the previously observed fields
print fields #print the current fields
else:
collection[macsrc]=fields #otherwise, add it to collection dictionary to observe it later.
except:
return
sniff(filter="tcp port 80 and dst " SUBMITSERVER,prn=customAction)
Posted February 12, 2014 at 9:15 PM | Permalink | Reply
san tran
wow'' I wonder why half of my comment (in the middle) there did not work.. and the code looks horrendous so why don't we try again with pastebin:
And yes, all this code does (after you have execute arp poison) is to look for 2 submission with the same mac address to add the machine to "trust" list. | https://pen-testing.sans.org/blog/2014/02/05/mission-impossible-thwarting-cheating-in-advanced-pen-test-class-ctf-the-sans-sec660-experience | CC-MAIN-2019-39 | refinedweb | 1,640 | 71.55 |
This guide will show you how to deploy HTML5 Applications to Openshift.
Prerequisites
- Java JDK6
- JBoss Developer Studio 5.0.0.M5
- Openshift client tools ()
Creating the Sample Project
Now, fill in the info for your project. We'll use myapp on this tutorial.
Openshit Setup
An Email containing a validation link should arrive at your mailbox.
You should accept the Openshift Terms and Conditions.
Voilà - your account should be ready.
We'll create a namespace for our apps (deploydemo on this tutorial) - When asked for a passphrase, just hit enter.
qmx@gondor ~ » rhc-create-domain -n deploydemo -l <your openshift account email> -a WARNING: Unable to find '/Users/qmx/.ssh/libra_id_rsa.pub' Your SSH keys are created either by running ssh-keygen (password optional) or by having the rhc-create-domain command do it for you. If you created them on your own (or want to use an existing keypair), be sure to paste your public key into the express console at. The client tools use the value of 'ssh_key_file' in express.conf to find your key followed by the defaults of libra_id_rsa[.pub] and then id_rsa[.pub]. Password: Generating OpenShift Express ssh key to /Users/qmx/.ssh/libra_id_rsa Generating public/private rsa key pair. Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /Users/qmx/.ssh/libra_id_rsa. Your public key has been saved in /Users/qmx/.ssh/libra_id_rsa.pub. The key fingerprint is: 29:3e:82:37:b7:e3:c8:a7:eb:77:3f:13:e3:f2:81:71 qmx@gondor.local The key's randomart image is: +--[ RSA 2048]----+ | | | | | | | . | | . S E | | . . . +o | | . + + ...o | | o =+oo +. | | .**oo +oo | +-----------------+ Checking ~/.ssh/config Found rhcloud.com in ~/.ssh/config... No need to adjust Alteration successful.
Now let's create the application at Openshift.
You need to uncheck the "Create New Project" checkbox...
....
We need to publish our application to Openshift to trigger a new deployment. OpenShift is convenientely represented by a WTP Server entry.
There we go! Our shiny new mobile webapp is successfully deployed! | https://community.jboss.org/wiki/DeployingHTML5ApplicationsToOpenshift/version/7 | CC-MAIN-2014-15 | refinedweb | 342 | 61.63 |
Is there any potential problems associated with building/compiling my Java classes with Sun JDK 1.5 for my EE (Seam) app into an EAR and deploying on 32-bit Windows OS JBoss AS dev box using 32-bit Windows JDK 1.5 runtime (JVM) and then deploying the same EAR into 32-bit RHEL OS JBoss AS (UAT/QA box)?
The dev envmt is not clustered and the UAT RHEL envmt is a 2 node horizontal cluster without state replication (only mod_jk load balancing and failover).
Perhaps the entity (and other) classes that implement java.io.Serializable will have issues?
Is it advisable to do another build for the RHEL envmt using Sun JDK 1.5 for Linux 32-bit platform or am I safe going this route?
thx.
Here is an example:
@Entity @Table(name = "ApplicationSite", schema = "dbo", catalog = "EquipmentRecovery") public class ApplicationSite implements java.io.Serializable { private static final long serialVersionUID = -8318603160581183431L; ... }
I've never heard of any problems with using a build on multiple platforms and don't expect there to be any. The AS itself doesn't produce separate builds.
ok thanks! | https://developer.jboss.org/thread/68751 | CC-MAIN-2018-39 | refinedweb | 187 | 57.06 |
signal, sigset, sighold, sigrelse, sigignore, sigpause - signal management
#include <signal.h> void (*signal(int sig, void (*func)(int)))(int); int sighold(int sig); int sigignore(int sig); int sigpause(int sig); int sigrelse(int sig); void (*sigset(int sig, void (*disp)(int)))(int);
Use of any of these functions is unspecified in a multi-threaded process. must point to a function to be called when that signal occurs. Such a function is called a signal handler.
When a signal occurs, if func points to a function, first the equivalent of a:
signal(sig, SIG_DFL);
is executed or an implementation-dependent blocking of the signal is performed. (If the value of sig is SIGILL, whether the reset to SIG_DFL occurs is implementation-dependent.) Next the equivalent of.
(*func)(sig);
If the signal occurs other than as the result of calling abort(), kill() or raise(), the behaviour is undefined if the signal handler calls any function in the standard library other than one of the functions listed on the sigaction() page or refers to any object with static storage duration other than by assigning a value to a static storage duration variable of type volatile sig_atomic_t. Furthermore, if such a call fails, the value of errno is indeterminate.
At program startup, the equivalent of:
signal(sig, SIG_IGN);
is executed for some signals, and the equivalent of:
signal(sig, SIG_DFL);
is executed for all other signals (see exec).
The sigset(), sighold(), sigignore(), sigpause() and sigrelse() functions provide simplified signal management.
The sigset() function is used to will add sig to the calling process' signal mask before executing the signal handler; when the signal handler returns, the system will restore the calling process' signal mask to its state prior the delivery of the signal. In addition, if sigset() is used, and disp is equal to SIG_HOLD, sig will be added to the calling process' signal mask and sig's disposition will remain unchanged. If sigset() is used, and disp is not equal to SIG_HOLD, sig will be removed from the calling process' signal mask.
The sighold() function adds sig to the calling process' signal mask.
The sigrelse() function removes sig from the calling process' signal mask.
The sigignore() function sets the disposition of sig to SIG_IGN.
The sigpause() function removes sig from the calling process' signal mask and suspends the calling process until a signal is received. The sigpause() function restores the process' signal mask to its original state before returning.
If the action for the SIGCHLD signal is set to SIG_IGN, child processes of the calling processes will not be transformed into zombie processes when they terminate. If the calling process subsequently waits for its children, and the process has no unwaited for children that were transformed into zombie processes, it will block until all of its children terminate, and wait(), wait3(), waitid() and waitpid() will fail and set errno to [ECHILD].
If the request can be honoured, signal() returns the value of func for the most recent call to signal() for the specified signal sig. Otherwise, SIG_ERR is returned and a positive value is stored in errno.
Upon successful completion, sigset() returns SIG_HOLD if the signal had been blocked and the signal's previous disposition if it had not been blocked. Otherwise, SIG_ERR is returned and errno is set to indicate the error.
The sigpause() function suspends execution of the thread until a signal is received, whereupon it returns -1 and sets errno to [EINTR].
For all other functions, upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.
The signal() function will sigset(), sighold(), sigrelse(), sigignore() and sigpause() functions will fail if:
- [EINVAL]
- The sig argument is an illegal signal number.
The sigset(), and sigignore() functions will fail if:
- [EINVAL]
- An attempt is made to catch a signal that cannot be caught, or to ignore a signal that cannot be ignored.
None.
The sigaction() function provides a more comprehensive and reliable mechanism for controlling signals; new applications should use sigaction() rather than signal()..
exec, pause(), sigaction(), sigsuspend(), waitid(), <signal.h>.
Derived from Issue 1 of the SVID. | http://pubs.opengroup.org/onlinepubs/007908775/xsh/sigset.html | CC-MAIN-2014-41 | refinedweb | 684 | 59.53 |
Please help me correct my code, I don't know if there's something wrong with it but I've got a successful compilation.
I've tried to make this code, the code is successfully compiled but I'm sill getting a wrong (compiler error issue) when submission. Here's my full code, I'm completely lost and I don't know if my code contains an error :
Code:#include <iostream> using namespace std; int main (int argc, char** argv){ int N = 100; int M = 10; cin >> N >> M; for(int i = 0; i < M; i++){ cout << N << endl; N++; } return 0; }
I appreciate for anyone who helps me cause I'm still learning. | https://cboard.cprogramming.com/cplusplus-programming/180127-c-plusplus-program-compilation-result-result-still-error.html?s=195624ebbc0c0ba229d709fc8ebeb9de | CC-MAIN-2021-25 | refinedweb | 114 | 62.04 |
Josh Justice
As developers, we love having quality tools like git, available to us.
Using git, we can easily navigate through all the code ever “committed” or added to a project throughout its history. We do this via git treeishes. Treeishes are git’s way of referencing commits and relations between commits. Treeishes can improve your workflow immensely if you’re a frequent git user.
In this post, we’ll cover some of the more basic treeishes and then work into the advanced ones, with some real-world examples.
You’ll probably be familiar with the types of treeishes in this first section if you’ve ever used git before. The following section will cover some more advanced uses of treeishes that even long-time git users may not know about.
Every commit in git is identified by its SHA — a unique hash. These hashes are pretty long (40 characters in fact), but git also allows you to reference any commit with any truncated version of its hash so long as the truncated portion is unique (generally, this means at least 5 characters long).
Branches, remotes, and tags are another kind of treeish. Each of these is actually just a pointer to a git commit.
Now we’re getting to the really good stuff!
It’s important to have a good understand of the git reference log (reflog for short) which enables most of the more advanced treeishes.
The git reflog is a running log of recent changes to tips of branches. In practice, this means that every time you commit to a branch, pull commits down from a remote repository, or checkout a new branch, your reflog will update. It’s important to recall that reflogs are specific to individual checkouts of a repository so the reflogs for the same project can differ across machines.
Here’s an example reflog from one of my personal projects:
› git reflog 0a4faaa HEAD@{0}: checkout: moving from 0a4faaaf0081e2a5e439e79f48e236cdfbcb687b to i-herd-u-liek-chef 0a4faaa HEAD@{1}: commit: Remove unnecessary comment. d4fdbdb HEAD@{2}: commit: Real namespaces in the app code now. fc16081 HEAD@{3}: commit: Suddenly, NAMESPACES b36e3a8 HEAD@{4}: clone: from git@github.com:wfarr/censored.git
This reflog shows that some of the more recent changes to this project include:
0a4faaa(which just happens to be
master)
Using the reflog, git can infer some interesting relative context for commits that will allow us some more flexibility in what we can get at with treeishes.
We use the date spec here at Highgroove as part of our weekly code reviews. With the date spec, I can easily look at the activity on a feature branch over the course of the last week:
git diff master@{1 week ago}
Similar to the ordinal spec, the tilde spec allows you to reference the Nth grandparent of a commit.
This is useful just to go back 2 or more commits in time in one command.
master~2
Unlike the tilde spec, the caret parent spec allows you to reference the Nth parent of a commit. The distinction here is important as the caret parent spec can walk backwards through commits that have more than one parent (merge commits), while the ordinal spec cannot.
The caret parent spec looks like:
master^2
The caret parent spec and the tilde spec can also be chained together, so
master^^ would actually point to the same commit as
master~2.
The range spec allows you to express a range of commits as a first-class object in git. These are frequently used for things like diffs where you may want to view the changes across a number of commits.
For example, the following command would show a diff of all commits between master and my current feature branch:
git diff master..i-herd-u-liek-chef
You can also refer to the range from the current commit through all commits after it by just leaving off the other ref after the “..”.
All of these treeishes are available from the git command line interface – though if you’re using a GUI to interact with git your particular application it may support some or all of these. I know that Github’s compare view can be used with treeishes as well. Hopefully if you’re not already using treeishes you’ll find that they’re as useful to you as they are to us.
How are you using git treeishes to improve your workflow?
Image credit: @moonlightbul | https://www.bignerdranch.com/blog/git-treeishes-considered-awesome/ | CC-MAIN-2018-09 | refinedweb | 748 | 68.5 |
So I have something of an integration problem, I guess. In my current Java course, we have an assignment involving Graphics and Graphics2D. What we have to do is draw 4 "fans" in a frame, oriented such that there's one at each of the main compass points. Each fan is a series of 5 equal sized blades, represented by arcs, and enclosed in a circle. Also, the fan blades have to spin. In addition, we have to put in the center a star that sort of jumps around its little section. That is, it disappears and then reappears 90 degrees away. So it's all animated.
Anyway, the good news for you guys and gals is that I have all those individual pieces already done. I have two classes--one named FanBlades for the fans, and one named SpinningStar for the star. I can run one instance of either of these objects and it looks great. But if I try to put them together in one frame, I can't get it to look right. There's something I'm missing about BorderLayout. And it's aggravating because I've used BorderLayout on my GUI windows before.
So here's the AnimationFrame class, which attempts to tie them together. At present, it just shows two of the fans.
public class AnimationFrame { //Total frame width and height. protected static final int FRAME_WIDTH = 600; protected static final int FRAME_HEIGHT = 600; public static void main(String[] args) { JFrame mainFrame = new JFrame(); mainFrame.setSize(FRAME_HEIGHT,FRAME_WIDTH); mainFrame.setTitle("Spinning thingies!"); mainFrame.setVisible(true); mainFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); mainFrame.add(new FanBlades(),BorderLayout.NORTH); mainFrame.add(new FanBlades(),BorderLayout.SOUTH); }
That code shows nothing but a blank gray screen. HOWEVER, if I set the layout to GridLayout, I can see both fans. But that's not the layout I want.
I thought this would be the easy part of the project, but now it's like the solution is dangling just beyond my grasp.
Here is the code for the FanBlades class.
public class FanBlades extends JPanel { private final int PANEL_WIDTH = AnimationFrame.FRAME_WIDTH / 4; private final int PANEL_HEIGHT = AnimationFrame.FRAME_HEIGHT / 4; //Starting position of the container box for the fan blades. private int start_x; private int start_y; //Angle of the arc. private int arcAngle; //Starting angle for the blades. private int startAngle; //Width and length of the arcs. private int arcWidth; private int arcLength; //Integer count of the animation state, basically 1 of 8 drawings that loop. private int animationState = 0; //Speed in milliseconds of the animation. private int animationSpeed = 100; //Timer to control the fan animation. private Timer animationTimer = new Timer(animationSpeed,new TimerListener()); //Color palette for the fan blades. private Color[] palette = new Color[5]; FanBlades() { //Build the color palette. for(int i = 0; i < 5; i++) palette[i] = buildPalette(); this.setSize(PANEL_WIDTH, PANEL_HEIGHT); this.setLayout(new FlowLayout(FlowLayout.LEFT)); animationTimer.start(); } protected void paintComponent(Graphics g) { int colorIndex = 0; super.paintComponent(g); //Start the fan's enclosing panel 10% off from the side. //Round off to nearest whole number. start_x = (int)(PANEL_WIDTH * 0.1); start_y = (int)(PANEL_WIDTH * 0.1); //Each fan blade is an arc of 1/10th of a circle, 36 degrees. arcAngle = 36; //Starting angle for the blades. startAngle = animationState * 10; //Width and length of the arcs. arcWidth = 110; arcLength = 110; //Draw the fan blades. while(startAngle < 360) { g.setColor(palette[colorIndex]); g.fillArc(start_x, start_y, arcWidth, arcLength, startAngle, arcAngle); startAngle += (2 * arcAngle); colorIndex++; } g.drawOval(start_x, start_y, arcWidth, arcLength); } private Color buildPalette() { Random rand = new Random(); int redBalance = rand.nextInt(255); int greenBalance = rand.nextInt(255); int blueBalance = rand.nextInt(255); return new Color(redBalance,greenBalance,blueBalance); } class TimerListener implements ActionListener { public void actionPerformed(ActionEvent e) { if(animationState < 7) animationState++; else animationState = 0; repaint(); } }
And the SpinningStar class.
public class SpinningStar extends JPanel { //Delay for the timer. private int animationDelay = 500; //Timer to control the animation. private Timer animationTimer = new Timer(animationDelay,new TimerListener()); //Variable to indicate the position of the star, in radians. private double starPosition = 0; private final int CLOCKWISE = 1; private final int COUNTERCLOCKWISE = -1; //Direction of the rotation. private int rotationDirection; SpinningStar() { //Set the rotation direction. rotationDirection = COUNTERCLOCKWISE; //Start the timer. animationTimer.start(); } // draw general paths public void paintComponent( Graphics g ) { super.paintComponent( g ); // call superclass's paintComponent Random random = new Random(); // get random number generator //Points on the star represented by two integer arrays. int[] xPoints = { 55, 67, 109, 73, 83, 55, 27, 37, 1, 43 }; int[] yPoints = { 0, 36, 36, 54, 96, 72, 96, 54, 36, 36 }; //The coordinates of approximate center of the star. int centerX; int centerY; //Sums of the x and y coordinates of the star points. Used to calculate the center. int xSum = 0; int ySum = 0; Graphics2D g2d = ( Graphics2D ) g; GeneralPath star = new GeneralPath(); // create GeneralPath object //Two points on opposite ends of the star are //(1,36) and (109,36) also known as //(xPoints[8],yPoints[8]) and (xPoints[2],yPoints[2]) //Use these to get an approximate diameter of the surrounding circle. double diameter = getDiameter(xPoints[8],xPoints[2],yPoints[8],yPoints[2]); //Use these same points to get the midpoint of the circle. //double midpoint = ( (double)xPoints[8] + xPoints[2]) / 2 ; //Find the approximate center of the star. for(int i = 0; i < xPoints.length; i++) { xSum = xSum + xPoints[i]; ySum = ySum + yPoints[i]; } //Divide sums by number of points to get the approximate center. Drop the decimal point. centerX = xSum / xPoints.length; centerY = ySum / yPoints.length; // set the initial coordinate of the General Path star.moveTo( xPoints[ 0 ], yPoints[ 0 ] ); // create the star--this does not draw the star for ( int count = 1; count < xPoints.length; count++ ) star.lineTo( xPoints[ count ], yPoints[ count ] ); star.closePath(); // close the shape g2d.translate( 150, 150 ); // translate the origin to (150, 150) Shape myCircle = new Ellipse2D.Double(centerX-(diameter/2), centerY-(diameter/2), diameter, diameter); //Rotate the position of the star. g2d.rotate( starPosition ); // set random drawing color g2d.setColor( new Color( random.nextInt( 256 ), random.nextInt( 256 ), random.nextInt( 256 ) ) ); //Draw one star at the indicated position g2d.fill( star ); g2d.draw(myCircle); //Rotate the position for the next star by Pi/2 radians, or 90 degrees. //Multiply by the value of the rotationDirection variable to decide direction. starPosition += (Math.PI / 2) * rotationDirection; } // end method paintComponent private double getDiameter(int x1, int x2, int y1, int y2) { //Use the distance formula to calculate the distance between two //points on opposite ends of the star. //First get (x2-x1)^2 and (y2-y1)^2. double xDist = Math.pow((x2 - x1), 2); double yDist = Math.pow((y2 - y1), 2); //Get the total diameter. double diameter = Math.sqrt((xDist + yDist)); //Return just the radius. Cast to integer to approximate. return diameter; } public static void main(String[] args) { JFrame starFrame = new JFrame("Spinny star!"); starFrame.setVisible(true); starFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); starFrame.add(new SpinningStar()); starFrame.setSize(300,300); } class TimerListener implements ActionListener { public void actionPerformed(ActionEvent e) { repaint(); } } } | http://www.javaprogrammingforums.com/awt-java-swing/31370-borderlayouts-graphics-panels-such.html | CC-MAIN-2014-15 | refinedweb | 1,159 | 52.36 |
Wikibooks:Duplicate modules
From Wikibooks, the open-content textbooks collection
WB:DM redirects here. For Decision making see Wikibooks:Decision_making
Wikibooks:Community Portal : Wikibooks:Utilities : Wikibooks:Pages needing attention : Duplicate modules
This is a list of duplicate modules that have been created mostly by mistake. They have to be merged into a single piece of work and one title has to be redirected to the other (or a completely new page is created) in accordance with Wikibooks:Naming conventions. After a pair has been merged, it can be removed from the list.
Although this page does not automatically update as does Category:Books to be merged, this page has nevertheless been kept because unlike the category page, this page allows editors to make comments about the books or modules to be merged.
If you disagree with a "merge" indication then you can remove it.
The creation of duplicate modules, and the wasted effort this causes, can be avoided by search for existing modules on the same subject.
[edit] Mark current duplicates
If you find a pair of modules which appear to be duplicates, merge them! If you can't carry out the merge yourself, it is suggested that you put the following at top of each:
{{merge|Other module}}
or
{{mergeto|Other module}}
This marks the pages so that future viewers will see that it needs to be merged. If you know which way the merge should go, you can put the following at the top of the article where the merged material should end up:
{{mergefrom|Other module}}
[edit] How modules should be merged
Here is a suggested process for merging:
- Decide which module is the source, and which is the target. The target should be the module with the more appropriate title and content.
- Merge the content by copying/pasting from one window to another. Be sure not to allow any of the good content to be lost in the transfer.
- Upon completion, it is critical to place a redirect from the source page to the target page. For example, if you move the content of BSD bookshelf into Guide to UNIX/BSD, then you would put a redirect on BSD bookshelf. This preserves the edit history of the module and avoids the need to delete a module.
- You must explain in your edit summary for the target article that you have merged content from the source article. For example: "merge content from [[BSD bookshelf]]" in the Guide to UNIX/BSD module. This satisfies a GFDL requirement that the contributors to the source module be credited for the material moved to the target module.
The articles don't necessarily have to be merged at once. You can let others collaborate with the merging process, by placing a note at the top of the page below the {{mergeto}} mark.
[edit] Modules with subpages
Merging modules when some of them have subpages (Programming:Java and Programming:The way of the program, for example) is more difficult.
[edit] Alternatives to merging
In some cases it might work to move one module so that it is a submodule of another module. In this case, merging is not necessary. See Wikibooks:How to rename (move) a page. For example, the merger of Guide to UNIX and BSD consisted of moving BSD to Guide to UNIX/BSD. This type of merge might be called annexation.
Alternatively, you could also leave the two modules distinct (without a redirect), but complete the text of one of the pages so that it conforms to Wikibooks:Forking policy.
[edit] Modules to be merged
- How about merging Sun with The_Sun_(Astronomy) and getting the resulting thing into an "approved" namespace --SV Resolution 13:41, 23 August 2005 (UTC)
- See also Category:Books to be merged. | http://en.wikibooks.org/wiki/WB:DM | crawl-001 | refinedweb | 626 | 58.92 |
My family and I stayed at HIFWB at the end of June 2010. We arrived a few days before my wedding and were pleasantly met by the front desk staff members. My dad was able to upgrade for free, and our suite was beautiful. The full-size fridge came in very handy as well as the sink area so we could eat in! Although we didn't use the stovetop, it was great to have as well. Our rooms were thoroughly cleaned, even when I came in for a mid-afternoon nap! There was no problem requesting additional towels. The view from the suite was spectacular - overlooking the rock slide pool and just high enough to see the ocean!
Our wedding was on the beach of the resort, and everyone was MORE than accommodating - everyone from Angie in Sales to Jessica in catering to the folks at the tiki bar! Everyone seemed to know I was the bride-to-be! My husband and I were upgraded for free, without even having asked to do so, and our entire stay was enjoyable.
All of our guests loved the hotel... we loved having 2 pools so that my husband and I could avoid seeing each other before the wedding. There were always plenty of chairs available and the pools were so clean.
I cannot say enough about the way our wedding and reception flowed so smoothly and perfectly. Jessica Young is by far the best coordinator I have ever come in contact with. She consistently went above and beyond what might be in her job description to assist me in finding local vendors, cleaners, etc. Angie also did the same in suggesting local spas and other services for me and my guests. The food at our reception was a big hit - we are still getting compliments on it!
My husband and I definitely plan to return (hopefully sooner rather than later) and make FWB a regular vacation spot - even if only for a long weekend. Holiday Inn was one of the friendliest hotels we've stayed at and you can't be the location - right on the white sandy beach and emerald green waters!
I would highly recommend this spot as a destination wedding location and hands-down say you will not find a better place to host your reception or other event!
Thank you Jessica, Angie, and all of the Holiday Inn staff!
- Booking.com, Hotels.com, Expedia, Wyndham, Travelocity, Hotwire, Orbitz, Odigeo, Agoda and Priceline | http://www.tripadvisor.com/ShowUserReviews-g34234-d84557-r71041586-Wyndham_Garden_Fort_Walton_Beach_Destin-Fort_Walton_Beach_Florida.html | CC-MAIN-2015-32 | refinedweb | 415 | 71.24 |
import "golang.org/x/mobile/geom".
Point is a point in a two-dimensional plane.
String returns a string representation of p like "(1.2,3.4)".
Pt is a length.
The unit Pt is a typographical point, 1/72 of an inch (0.3527 mm).
It can be be converted to a length in current device pixels by multiplying with PixelsPerPt after app initialization is complete.
Px converts the length to current device pixels.
String returns a string representation of p like "3.2pt".
A Rectangle is region of points. The top-left point is Min, and the bottom-right point is Max.
String returns a string representation of r like "(3,4)-(6,5)".
Package geom imports 1 packages (graph) and is imported by 15 packages. Updated 2017-10-12. Refresh now. Tools for package owners. | https://godoc.org/golang.org/x/mobile/geom | CC-MAIN-2017-43 | refinedweb | 138 | 69.79 |
Hello,
my problem is quite simple to explain.
I have the following string:
"table+camera"
and I want to remove the + sign:
"tablecamera".
How do i do that ?
Thanks for your help.
This is a discussion on how to remove a character from a string within the C Programming forums, part of the General Programming Boards category; Hello, my problem is quite simple to explain. I have the following string: "table+camera" and I want to remove the ...
Hello,
my problem is quite simple to explain.
I have the following string:
"table+camera"
and I want to remove the + sign:
"tablecamera".
How do i do that ?
Thanks for your help.
char *p = string;
int ofs = 0;
while(*(p+ofs)) {
while(*(p+ofs) == '+') ++ofs;
*p = *(p+ofs); ++p;
} *p = 0;
fixing the many errors i have made in this code might be a good exercise
.sect signature
thanks.
Isn't there another way to do that using a combination of the String functions provided by the C ANSI ?
like strstr(), strtok() ...
thanks
if you have a single +, then you can use strstr(string, "+") to find a point in the string w/ a + and then use strcpy to write all the characters past this point back one, but if you have multiple characters to replace rolling your own replacement function to do it in a single pass might be worth it
.sect signature
can you write the code in details ?
I do not have a compiler at the moment, so I cannot make tests for now.
Thanks
Since + is a CHARACTER and not a string you should use strchr not strstr.
Note: since I'm using strncat you have to put the null character when you declare the destantion.Note: since I'm using strncat you have to put the null character when you declare the destantion.Code:#include <string.h> #include <stdio.h> int main (void) { char str[30]="H+e+l+l+o+W+o+r+l+d"; char dest[30]="\0"; char *front, *end; front = str; while ( (end = strchr(front, '+')) != NULL) { strncat(dest, front, end-front); front = end + 1; } /* Take care of the rest of the string */ end = front + strlen(front); strncat(dest, front, end-front); puts(dest); return 0; }
Hopefully this will give you enough information to modify it to suit your needs.
thanks, that is exactly what i needed
//hi try this one
#include <string.h>
#include <stdlib.h>
#define NULL 0
main()
{
char string[]="table+camera";
char seps[]= "+";
char* token1;//found by strtok
char* token2;
token1 = strtok(string,seps);
token2 = strtok(NULL,seps);
strcat(token1,token2);
//look in token1
}
by the way while i was trying to create string as char* i got some breaks, whats the problem
> #define NULL 0
Include the correct header file, don't define NULL in your own code
> main()
Whilst its much better than saying void main, saying int main() is preferred. Especially since C99 (the latest C standard) makes implicit types illegal, it's best to get into the habit of being specific.
> strcat(token1,token2);
Bad news - strcat() had undefined behaviour if the source and destination strings overlap (as they do in this case).
> by the way while i was trying to create string as char* i got some breaks
Most modern compilers on most modern operating systems make string "constants", really constant, by placing them in read-only memory. Since strtok modifies the string by writing \0 into it, your OS will step in and kill your program as soon as you try it.
If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support as the first necessary step to a free Europe.
Code:void remove_character(char * dst, char * src, char ch) { char chCur; size_t iSrc = 0, iDst = 0; do { if ((chCur = src[iSrc++]) != ch) dst[iDst++] = chCur; } while (chCur != '\0'); } int main(void) { char tst[] = "++++ta+ble+cam+era++++"; char tst2[sizeof(tst)]; /* To another buffer... */ puts(tst); remove_character(tst2, tst, '+'); puts(tst2); /* and in place... */ puts(tst); remove_character(tst, tst, '+'); puts(tst); getchar(); return 0; } | http://cboard.cprogramming.com/c-programming/51534-how-remove-character-string.html | CC-MAIN-2014-52 | refinedweb | 697 | 71.65 |
It is obvious that there can't exist number with unique digits with more than 10. So, its not necessary to count for n more than 10.While generating number's digits, in each step the quantity of available digits decreases by 1. The multiplication of available digits for each prefix gives us all possible variants for length n. However we need to calculate all possible variants for all length from i till n we need to sum up all the prefix's variants.
public class Solution { public int countNumbersWithUniqueDigits(int n) { if (n==0) return 1; n = Math.min(10, n); int variants = 10; int cur = 9; for (int i=1; i<n; i++) { cur*=(10-i); variants+=cur; } return variants; } } | https://discuss.leetcode.com/topic/70391/o-1-with-explanation | CC-MAIN-2018-05 | refinedweb | 122 | 64.1 |
This chapter provides planning information and guidelines specific to a Sun Cluster 3.2 11/09 section provides the following guidelines for planning Solaris software installation in a cluster configuration.
Guidelines for Selecting Your Solaris Installation Method
Solaris OS Feature Restrictions.
Consider the following points when you plan the use of the Solaris OS in a Sun Cluster configuration:
Solaris 10 Zones – Install Sun Cluster framework software only in the global zone.
To determine whether you can install a Sun with the Solaris 9 version of Sun Cluster software , LOFS capability is disabled by default. During cluster creation with the Solaris 10 version of Sun Cluster software, LOFS capability is not disabled by default.
If the cluster meets both of the following conditions, you must disable LOFS to avoid switchover problems or other failures:
Sun Cluster..
Add,.
Sun Cluster software offers two choices of locations to host the global-devices namespace:
A lofi device (Solaris 10 only)
A dedicated file system on one of the local disks (Solaris 9 or 10) - On the Solaris 10 OS,.
This section provides guidelines for planning and preparing the following components for Sun Cluster software installation and configuration:
Public-Network IP Addresses
Network Time Protocol (NTP) 11, 11 Sun Cluster Concepts Guide for Solaris OS for further information about cluster time. See the /etc/inet/ntp.cluster template file for additional guidelines about how to configure NTP for a Sun Cluster configuration..
This section provides the following guidelines for planning global devices and for planning cluster file systems:
Choosing Mount Options for Cluster File Systems
Mount Information for Cluster File Systems
For information about the purpose and function of global devices, see Shared Devices, Local Devices, and Device Groups in Sun Cluster Overview for Solaris OS and Global Devices in Sun Cluster Concepts Guide for Solaris OS.
Sun is accessible from a non-global zone.
For.
Zone clusters – You cannot configure cluster file systems that use UFS or VxFS for use in a zone cluster. Use highly available local file systems instead. You can use a QFS shared file system in a zone cluster, but only to support Oracle RAC. voting. Administering Cluster File Systems in Sun Cluster System Administration Guide for Solaris OS for more information about Vx Sun Cluster Concepts Guide for Solaris OS and Device Groups Solaris hosts must have Solaris Volume Manager mediators configured for the disk set. A disk string consists of a disk enclosure, its physical disks, cables from the enclosure to the host or hosts, and the interface adapter cards. Observe the following rules to configure dual-string mediators:
You must configure each disk set with exactly two hosts that act as mediator hosts.
You must use the same two hosts for all disk sets that require mediators. Those two hosts must master those disk sets.
Mediators cannot be configured for disk sets that do not meet the two-string and two-host requirements.
See the mediator(7D) man page for details.
/kernel/drv/md.conf settings – SPARC: On the Solaris 9 OS, Solaris Volume Manager volumes that are 9 OS:
All voting cluster nodes must have identical /kernel/drv/md.conf files, regardless of the number of disk sets that are served by each node. Failure to follow this guideline can result in serious volume name that will exist in the cluster. For example, if the highest value of the volume names that are used in the first 15 disk sets of a cluster is 10, but the highest value of volume name can be unique throughout the cluster.
The highest allowed value of voting Sun), EMC PowerPath, or Hitachi HDLM,. Sun. | http://docs.oracle.com/cd/E19575-01/820-7356/z40000f557a/index.html | CC-MAIN-2016-30 | refinedweb | 606 | 50.16 |
I have a List[Option[Int]] and want to sum over it using applicative functors.
From [1] I understand that it should be something like the following
import scalaz._
import Scalaz._
List(1,2,3).map(some(_)).foldLeft(some(0))({
case (acc,value) => (acc <|*|> value){_+_}
})
If you have
Option[T] and if there's a
Monoid for
T, then there's a
Monoid[Option[T]]:
implicit def optionTIsMonoid[T : Monoid]: Monoid[Option[T]] = new Monoid[Option[T]] { val monoid = implicitly[Monoid[T]] val zero = None def append(o1: Option[T], o2: =>Option[T]) = (o1, o2) match { case (Some(a), Some(b)) => Some(monoid.append(a, b)) case (Some(a), _) => o1 case (_, Some(b)) => o2 case _ => zero } }
Once you are equipped with this, you can just use
sum (better than
foldMap(identity), as suggested by @missingfaktor):
List(Some(1), None, Some(2), Some(3), None).asMA.sum === Some(6)
UPDATE
We can actually use applicatives to simplify the code above:
implicit def optionTIsMonoid[T : Monoid]: Monoid[Option[T]] = new Monoid[Option[T]] { val monoid = implicitly[Monoid[T]] val zero = None def append(o1: Option[T], o2: =>Option[T]) = (o1 |@| o2)(monoid.append(_, _)) }
which makes me think that we can maybe even generalize further to:
implicit def applicativeOfMonoidIsMonoid[F[_] : Applicative, T : Monoid]: Monoid[F[T]] = new Monoid[F[T]] { val applic = implicitly[Applicative[F]] val monoid = implicitly[Monoid[T]] val zero = applic.point(monoid.zero) def append(o1: F[T], o2: =>F[T]) = (o1 |@| o2)(monoid.append(_, _)) }
Like that you would even be able to sum Lists of Lists, Lists of Trees,...
UPDATE2
The question clarification makes me realize that the UPDATE above is incorrect!
First of all
optionTIsMonoid, as refactored, is not equivalent to the first definition, since the first definition will skip
None values while the second one will return
None as soon as there's a
None in the input list. But in that case, this is not a
Monoid! Indeed, a
Monoid[T] must respect the Monoid laws, and
zero must be an identity element.
We should have:
zero |+| Some(a) = Some(a) Some(a) |+| zero = Some(a)
But when I proposed the definition for the
Monoid[Option[T]] using the
Applicative for
Option, this was not the case:
None |+| Some(a) = None None |+| None = None => zero |+| a != a Some(a) |+| None = zero None |+| None = zero => a |+| zero != a
The fix is not hard, we need to change the definition of
zero:
// the definition is renamed for clarity implicit def optionTIsFailFastMonoid[T : Monoid]: Monoid[Option[T]] = new Monoid[Option[T]] { monoid = implicitly[Monoid[T]] val zero = Some(monoid.zero) append(o1: Option[T], o2: =>Option[T]) = (o1 |@| o2)(monoid.append(_, _)) }
In this case we will have (with
T as
Int):
Some(0) |+| Some(i) = Some(i) Some(0) |+| None = None => zero |+| a = a Some(i) |+| Some(0) = Some(i) None |+| Some(0) = None => a |+| zero = zero
Which proves that the identity law is verified (we should also verify that the associative law is respected,...).
Now we have 2
Monoid[Option[T]] which we can use at will, depending on the behavior we want when summing the list: skipping
Nones or "failing fast". | https://codedump.io/share/PBuekFUXnKSU/1/summing-a-list-of-options-with-applicative-functors | CC-MAIN-2017-51 | refinedweb | 539 | 57.2 |
C9 Lectures: Dr. Erik Meijer - Functional Programming Fundamentals Chapter 6 of 13
Actual format may change based on video formats available and browser capability.
We have reached the halfway point!
The equational reasoning part on the append operator (++) is wrong. This is what Erik wrote:
xs ++ ys = foldr (:) ys xs ≡ { 1 } (++) ys xs = foldr (:) ys xs ≡ { 2 } (++) ys = foldr (:) ys ≡ { 3 } (++ ys) = foldr (:) ys ≡ { 4 } (++) = foldr (:)
A correct way to define append would be:
(++) = flip (foldr (:))
Having said all that, I really like this series.
Keep on going Erik!
A function that returns a function? That is clear: It's a curried function!
a -> b // function a -> b -> c // curried function (a -> b) -> c // but a -> (b -> c) // vs:
import Test.QuickCheck prop_Concat :: [Int] -> [Int] -> Bool prop_Concat xs ys = xs ++ ys == foldr (:) ys xs:
*Main> quickCheck prop_Concat OK, passed 100 tests
Right
takeWhile' :: (a -> Bool) -> [a] -> [a] takeWhile' p = snd . foldl (\(e,v) x -> if e then (e,v) else if p x then (False,v++[x]) else (True,v)) (False,[]) dropWhile' :: (a -> Bool) -> [a] -> [a] dropWhile' p = snd . foldl (\(e,v) x -> if e then (e,v++[x]) else if p x then (True,v) else (False,v++[])) (False,[])
I'd say its easier to implement takeWhile using foldr:
takeWhile p = foldr (\x xs -> if p x then x : xs else []) []
foldr :: (a -> b -> b) -> b -> [a] -> b foldr f z [] = z foldr f z (x:xs) = f x (foldr f z xs)
fwith
xand the result of a recursive call.
fgets executed before the result of the recursive call is computed. If
fdecides to never inspects its second argument, the recursive call will never be evaluated. So that's why you can do:
takeWhile (<4) [0..]
However, you are right about foldl.
foldl :: (a -> b -> a) -> a -> [b] -> a foldl f z [] = z foldl f z (x:xs) = foldl (f z x) xs
foldlfirst recurses, before executing the
ffunction that produces the result value.
So calling the
takeWhile', defined below, with an infinite list will result in an infinite computation.
takeWhile' p = foldl (\ys x -> if p x then ys ++ [x] else ys) []
Nice post Tom.
So much to learn.
I much prefer this for reverse:
reverse = foldl (flip (:)) [].
mapfilter :: (a -> b) -> (a -> Bool) -> [a] -> [b] mapfilter f p = map f . filter p
3) Redefine map f and filter p using foldr.
map' :: (a -> b) -> [a] -> [b] map' f = foldr (\x v -> f x:v) [] filter' :: (a -> Bool) -> [a] -> [a] filter' p = foldr (\x v -> if p x then x:v else v) []
I *think* Erik said that using the sum . map variant on length would be less efficient but it runs faster for me. (using timing method from here). len1 = sum . map ( \ _ -> 1 ) len2 = foldr ( \ _ n -> n+1) 0 limit = 500000 main = do putStrLn "Len1:" time $ len1 [1..limit] `seq` return () putStrLn "Len2:" time $ len2 [1..limit] `seq` return ()
I get
*Main> :run main Len1: Computation time: 0.719 sec Len2: Computation time: 1.141 sec
Comments have been closed since this content was published more than 30 days ago, but if you'd like to continue the conversation, please create a new thread in our Forums, or Contact Us and let us know. | https://channel9.msdn.com/Series/C9-Lectures-Erik-Meijer-Functional-Programming-Fundamentals/C9-Lectures-Dr-Erik-Meijer-Functional-Programming-Fundamentals-Chapter-7-of-13?format=smooth | CC-MAIN-2017-09 | refinedweb | 539 | 80.01 |
SYNOPSISpackage require Tcl 8.6
package require Tk 8.6
package require menubar ?0.5?
menubar new ?options?
mBarInst define body
mBarInst install pathName body
mBarInst menu.configure option tag-settings ?option tag-settings ...?
mBarInst menu.namespace tag namespace
mBarInst menu.hide tag
mBarInst menu.show tag
mBarInst tag.add tag value
mBarInst tag.configure pathName tag ?option value ...option value?
mBarInst tag.cget pathName tag ?option?
mBarInst group.add tag label ?cmd? ?accel? ?sequence? ?state?
mBarInst group.delete tag label
mBarInst group.move direction tag label
mBarInst group.configure tag label ?option value ...option value?
mBarInst group.serialize tag
mBarInst group.deserialize tag stream
mBarInst notebook.addTabStore pathname
mBarInst notebook.deleteTabStore pathname
mBarInst notebook.setTabValue pathname tag
mBarInst notebook.restoreTabValues pathname
DESCRIPTION
- menubar new ?options?
-
Create and return a new instance of the menubar class. The menubar class encapsulates the definition, installation and dynamic behavior of a menubar. The class doesn't depend on a widget framework and therefore can be used with or without a framework (e.g. Bwidget, IWidget, Snit, etc.). Unlike other Tk widget commands, the menubar command doesn't have a pathName argument because menubars are handled by the window manager (i.e. wm) and not the application.
OPTIONSThe following options can be passed to the menubar new command.
These options are inherited from the Tk menu command, their effect is platform specific.
- -activebackground []
-
- -activeborderwidth []
-
- -activeforeground []
-
- -background []
-
- -borderwidth []
-
- -cursor []
-
- -disabledforeground []
-
- -font []
-
- -foreground []
-
- -relief []
-
INTRODUCTION
An instance of the menubar class provides methods for compiling a description of the menubar, configuring menu items and installing the menubar in toplevel windows.
A menubar can be thought of as a tree of cascading menus. Users define a menubar using a language that results in a human readable description of a menubar. The description of the menubar is then compiled by an instance of the menubar class after which it can be installed in one or more toplevel windows.
The menubar class provides many unique capabilities that are not found in other tcl/tk menubar implementation. Some of these are:
- A tagging system that simplifies access to menu entries in the menu tree.
- Support for user defined tags that depend on the toplevel window context.
- A simplified and uniform interface for all callback commands.
- Namespace support for all callback commands so callback commands can be easily grouped into namespaces.
- Support for hiding and exposing menus on the menubar.
- A simplified method for creating radiobutton groups.
- Automatic management of state variables for checkbuttons and radiobuttons.
- Scope control for the state variables of checkbuttons and radiobuttons.
- Tear-off menu management that ensures only one tearoff menu is created.
- Support for dynamic menu extension to simplify the creation of recent document menus.
- Support for saving and restoring dynamic menu extensions.
TERMINOLOGY
- MENUBAR
- The visible rendering of a menubar in a toplevel window is a horizontally group of cascading Tk menus.
- A menu is an ordered list of items that is rendered vertically. Menus are not visible until a user preforms some action (normally a <ButtonPress-1> event). A menu may contain any number of child menus that are rendered as cascading menus. Cascading menus are rendered next to the parent menu when they are activated.
- MENU ENTRY
- A menu contains an ordered list of items called entries. Menu entries have a type and the menubar class supports the following 6 entry types: Command, Checkbutton, Radiobutton, Separator, Group and Menu.
- ENTRY LABEL
- Each menu entry has a visible string that is called the entry label.
- TAG
- A tag is name that is normally used to referr to an item in a menu tree. A tag name is an alphanumeric character string that may include the underscore character. Menu tree tags are defined for all nodes and leafs in a menu tree. This provides a flat abstraction of the tree and simplifies item referencing in menubar methods. Without this abstraction it would be necessary to reference menu elements using a tree path which could change at run-time. The menubar class also has a method that can create a user defined tag. User defined tags store values that change based on the currently active toplevel window. User defined tags can be used to store widget pathnames use by callback code so that output can be routed to the appropriate toplevel window.
METHODS
- mBarInst define body
- Compiles body into a tree of menu entries which define the visual layout of the menubar. The body argument describes the layout using the following syntax, where the elements of the syntax are described below.
body == definitions
definitions ::= { <ignore> | <definition> | <definition> <definitions> } ignore ::= { <nl> | <white-space> <nl> | # <comment> <nl> } definition ::= { <command> | <checkbutton> | <radiobutton> | <separator> | <group> | <menu> } command ::= <label> C <tag> <nl> checkbutton ::= <label> X<scope> { <tag> | <tag>+ } <nl> radiobutton ::= <label> R<scope> { <tag> | <tag>+ } <nl> separator ::= <stext> S <tag> <nl> group ::= <dummy> G <tag> <nl> menu ::= <label> { M:<tag> | M:<tag>+ } <nl> <definitions> stext ::= '--' | <label> scope ::= '' | '@' | '='
- C - Command
- The C type entry is the most common type of entry. This entry executes a command when it is invoked.
- X - Checkbutton
- A X type entry behaves much like a Tk checkbutton widget. When it is invoked it toggles back and forth between a selected and deselected states. The value of a checkbutton is a boolean (i.e. 1 or 0). By default all checkbuttons are deselected. If you want the checkbutton to be initially selected then include a trailing plus (+) with the tag name. See SCOPE CONTROL below for a description of the scope indicator.
- R - Radiobutton
- A R type menu entry behaves much like a Tk radiobutton widget. Each radiobutton entry is a member of a radiobutton group that controls the behavior of the radiobuttons in the group. All radiobuttons in a group are given the same tag name. In the example below Red, Green and Blue all have the same tag and are therefore all in the same radiobutton group. A trailing plus (+) on the tag name of a radiobutton entry will cause the entry to be the initially selected entry. See SCOPE CONTROL below for a description of the scope indicator.
- S - Separator
- A S type menu entry is an entry that is displayed either as a horizontal dividing line or a label. Separators are not active elements of a menu and have no associated behavior if they are invoked. If <stext> is two dashes (i.e. '--') then the separator will be displayed as a horizontal line otherwise <stext> will be displayed as a bold label surrounded by double dashes (e.g. "-- <stext> --") with a lightgray background.
- G - Command Group
- The G type menu entry marks a location in the menu tree where entries can be dynamically added and removed. Menu extension can only occur at the end of a menu so G type entries must be the last item on a menu. A G type entry is rendered as a separator line. The group.<xxx>
sub-commands are used to manipulate command group entries.
- M - Menu
- An M type entry is used to define both menubar menus and cascading menus. Menu entries are the most complicated of the 6 menu types. A menu entry is composed of three list elements. The first element of the list is its label. The second element of the list is a composite string consisting of a type identifier (M) followed by an optional tag (beginning with a ':' separator) and finally an optional plus (+) which indicates that the menu is a tear-off menu. The final element of the list is a LIST VALUE.
- mBarInst install pathName body
- The install method installs the menubar created with the define method into toplevel window pathName. The body argument of the command contains a tcl script which is used to initialize the installed menubar. Normally the tcl script will contain calls to various menubar methods to perform the initialization. The initialization code is only run once when the menubar is installed. The namespace in which the install method is executed becomes the default namespace for callback commands (see menu.namespace below for more details).
METHODS - MENU.XXX
- mBarInst menu.configure option tag-settings ?option tag-settings ...?
- Configures the tags of a menubar and returns an empty string. This method provides a convenient way to configure a larger number of tags without the verbosity of using the tag.configure method.
- option
- Option may have any of the values accepted by the tag.configure method.
- tag-settings
- The tag-settings argument is a string that is converted to a list of tag-value pairs using the following syntax.
Syntax for tag-settings.
tag-settings ::= { <ignore> | <value> | <value> <tag-settings> } ignore ::= { <nl> | <white-space> <nl> | # <comment> <nl> } value ::= <tag> <option-value> <nl>
- mBarInst menu.namespace tag namespace
- Change the namespace for a sub-tree of the menubar starting at entry tag. The new value will be namespace. Each entry in the menubar tree has an associated namespace which will be used for its callback procedure. The default namespace is the namespace where the install method was executed. The namespace method can be used to change the namespace that will be used for callbacks in a sub-tree of the menubar. This method can only be used in the context of an install script.
- mBarInst menu.hide tag
- Remove (hide) a menubar entry. When a menubar tree is defined all entries are visible by default. This method can be used to hide a menubar entry. The hide methods can be used in the context of an install script so that a menu will be initially hidden at application start up. The tag argument is the tag name of the menu to be hidden.
- mBarInst menu.show tag
- Exposes (shows) a hidden menubar entry. When a menubar tree is defined all entries are visible by default. If a entry is hidden from the user (using the menu.hide method) then it can be exposed again using the show method. The tag argument is the tag name of the menu to be shown.
METHODS - TAG.XXX
- mBarInst tag.add tag value
- Add a user defined tag value. The tag.add method adds a new tag-value pair to the the tags defined for a menubar. User defined tags are different from the tags created by the define method. The tag.add method can only be used in an install script and its value is associated with the toplevel where the menubar is installed. This makes the tag context sensitive so callback code that queries the tag value will receive a value that is associated with the window that performed the callback.
- mBarInst tag.configure pathName tag ?option value ...option value?
- Given the pathName of a toplevel window and a tag this method configures the menu entry associated with the tag and return an empty string.
- Standard Options
- These option are the same as those described for menu entries in the Tk menu documentation.
- -activebackground
-
- -activeforeground
-
- -background
-
- -bitmap
-
- -columnbreak
-
- -compound
-
- -font
-
- -foreground
-
- -hidemargin
-
- -image
-
- -indicatoron
-
- -label
-
- -selectcolor
-
- -selectimage
-
- -state
-
-
- Class Specific Options
- -bind {uline accel sequence}
- The value of the -bind option is three element list where the values are as follows.
- uline
- An integer index of a character to underline in the entry. This value performs the same function as the Tk menu -underline option. If this value is an empty string then no underlining is performed.
- accel
- A string to display at the right side of the menu entry. The string normally describes an accelerator keystroke sequence that may be typed to invoke the same function as the menu entry. This value performs the same function as the Tk menu -accelerator option. If this value is an empty string then no accelerator is displayed.
- sequence
- A bind sequence that will cause the entries associated command to fire.
- -command cmdprefix
- The value of the -command option a command prefix that is evaluated when the menu entry is invoked. By default the callback is evaluate in the namespace where the install method was executed. Additional values are appended to the cmdprefix and are thus passed to the callback command as argument. These additional arguments are described in the list below.
- command entry
- 1) The pathname of the toplevel window that invoked the callback.
- checkbutton entry
- 1) The pathname of the toplevel window that invoked the callback.
2) The checkbutton's tag name
3) The new value for the checkbutton
- radiobutton entry
- 1) The pathname of the toplevel window that invoked the callback.
2) The radiobutton's tag name
3) The label of the button that was selected
- group entry
- 1) The pathname of the toplevel window that invoked the callback.
- mBarInst tag.cget pathName tag ?option?
- Returns the value of the configuration option given by option or the value of a user defined tag. The option argument may be any of the options accepted by the tag.configure method for the tag type. User defined tags are queried without an option value.
METHODS - GROUP.XXX
- mBarInst group.add tag label ?cmd? ?accel? ?sequence? ?state?
- Add a command to the group with tag name tag. This method appends a new command entry to the end of a command group. The order of the arguments is fixed but arguments to the right can be ignored. Arguments to this method have the following meaning.
- tag (string)
- The tag name of the command group.
- label (string)
- The displayed label for the menu entry.
- cmd (string)
- A command prefix that will be used for callback command.
- accel (string)
- An accelerator string that will be displayed next to the entry label.
- sequence (string)
- A bind sequence that will be bound to the callback command.
- state (enum)
- Sets the active state of the command. One of: normal, disabled, active
- mBarInst group.delete tag label
- Delete a command from a group with tag name tag. This method deletes command label from a command group.
- mBarInst group.move direction tag label
- Change the position of an entry in a group with tag name tag. The direction argument is the direction ('up' or 'down') the entry will be moved. The entry that is moved has the name label.
- mBarInst group.configure tag label ?option value ...option value?
- Configure the options of an entry in the command group with tag name tag. This method is similar to the tag.configure method except that it works on entries in a command group. Set documentation for the tag.configure method (above) for more details on command entry options.
- mBarInst group.serialize tag
- Return a string serialization of the entries in a command group. The argument tag is the tag name for the group that is to be serialized. The resulting serialization is a list containing three element (1) the tag name of the group (2) a dictionary containing group level options (3tk) a list of zero or more similar three element lists that describe the entries in the group.
- mBarInst group.deserialize tag stream
- Replace the contents of group tag tag with the commands defined in the serialization stream. The original contents of the group are lost.
METHODS - NOTEBOOK.XXX
- mBarInst notebook.addTabStore pathname
- This method should be used in code that creates a new notebook tab. Execution of this method will cause state storage to be allocated for the new notebook tab. The pathname for the notebook tab is passed as an argument to the method.
- mBarInst notebook.deleteTabStore pathname
- This command deallocates the state store for a notebook tab. The pathname for the notebook tab is passed as an argument to the method.
- mBarInst notebook.setTabValue pathname tag
- This method should be used in the callback for menubar checkbuttons or radiobuttons that have notebook tab scope control. When this method is executed it will move the value associated with tag into the tab store for the tab identified by pathname.
- mBarInst notebook.restoreTabValues pathname
- This method should be place in a bind script that is triggered by a notebooks <<NotebookTabChanged>> event.
SCOPE CONTROL
By default a menubar instance looks the same in all installed toplevel windows. As changes are made to one instance of a menubar all the other instances are immediately updated. This means the internal state of all the menu entries for the instances are synchronized. This behavior is called global scope control of the menubar state.
The menubar class allows finer scope control on check and radio buttons. The scope of these entry types can be modified by adding a modifier character to their type character. Two modifier characters are supported as show in the table below.
When the local scope character (@) is added to the definition of a button, the button is given a new variable for each installed toplevel window. This has the effect of making the button's state local to the window (i.e. local scope). An example use case for this behavior might be a status bar that can be toggled on an off by a checkbutton. The developer may want to allow the user to control the visibility of the status bar on a per window basis. In this case a local modifier would be added to the status bar selector so the callback code would receive an appropriate value based on the current toplevel window.
The notebook tab scope character (=) is similar in effect to the local scope character but it allows a notebook tab selection to also manage the state of of a button. Adding the notebook tab scope modifier enables notebook tab scope control but the developer must then make use of the notebook.xxxx sub-commands to actively manage state values as tabs are added, deleted and selected.
EXAMPLE
package require Tcl package require Tk package require menubar set tout [text .t -width 25 -height 12] pack ${tout} -expand 1 -fill both set mbar [menubar new \ -borderwidth 4 \ -relief groove \ -foreground black \ -background tan \ ] ${mbar} define { File M:file { Exit C exit } Edit M:items+ { # Label Type Tag Name(s) # ----------------- ---- --------- "Cut" C cut "Copy" C copy "Paste" C paste -- S s2 "Options" M:opts { "CheckList" M:chx+ { Coffee X coffee+ Donut X donut Eggs X eggs } "RadioButtons" M:btn+ { "Red" R color "Green" R color+ "Blue" R color } } } Help M:help { About C about } } ${mbar} install . { ${mbar} tag.add tout ${tout} ${mbar} menu.configure -command { # file menu exit {Exit} # Item menu cut {CB Edit cut} copy {CB Edit copy} paste {CB Edit paste} # boolean menu coffee {CB CheckButton} donut {CB CheckButton} eggs {CB CheckButton} # radio menu color {CB RadioButton} # Help menu about {CB About} } -bind { exit {1 Cntl+Q Control-Key-q} cut {2 Cntl+X Control-Key-x} copy {0 Cntl+C Control-Key-c} paste {0 Cntl+V Control-Key-v} coffee {0 Cntl+A Control-Key-a} donut {0 Cntl+B Control-Key-b} eggs {0 Cntl+C Control-Key-c} about 0 } -background { exit red } -foreground { exit white } } proc pout { txt } { global mbar set tout [${mbar} tag.cget . tout] ${tout} insert end "${txt}\n" } proc Exit { args } { puts "Goodbye" exit } proc CB { args } { set alist [lassign ${args} cmd] pout "${cmd}: [join ${alist} {, }]" } wm minsize . 300 300 wm geometry . +4+4 wm protocol . WM_DELETE_WINDOW exit wm title . "Example" wm focusmodel . active pout "Example started ..."
CAVEATS
This implementation uses TclOO so it requires 8.6. The code has been tested on Windows (Vista), Linux and OSX (10.4). | https://manpages.org/menubar/3 | CC-MAIN-2022-27 | refinedweb | 3,211 | 65.62 |
If you've ever jammed with the console cowboys in cyberspace,.
The query portion of this URL has three keys,
q,
src, and
f.
q represents the text we type into Twitter's search bar,
src tells Twitter how we did it (via typing into the search bar), and
f filters the results of the query by "Latest".
What's nice about this is it's sharable. You could copy and paste that URL into your browser right now and it would work. All the data Twitter needs to properly render the UI is right there in the URL.
With all that said, odds are you're not here to learn what query strings are but instead how to use them with React Router. The good news is that if you're already comfortable with React Router, there are just a few small details you need to know.
Let's say we were Twitter and we were building the
Route for the URL above. It would probably look something like this.
<Route path="/search" element={<Results />} />
Notice at this point there's nothing new. We don't account for the query string when we create the
Route. Instead, we get and parse it inside the component that is being rendered when that path matches - in this case,
Results.
Now the question becomes, how do we actually do this? Before we can answer that question, we first need to learn about the
URLSearchParams API.
URLSearchParams
The
URLSearchParams API is built into all browsers (except for IE) and gives you utility methods for dealing with query strings.
When you create a new instance of
URLSearchParams, you pass it a query string and what you get back is on object with a bunch of methods for working with that query string.
Take our Twitter query string for example,
const queryString = "?q=ui.dev&src=typed_query&f=live";const sp = new URLSearchParams(queryString);sp.has("q"); // truesp.get("q"); // ui.devsp.getAll("src"); // ["typed_query"]sp.get("nope"); // nullsp.append("sort", "ascending");sp.toString(); // "?q=ui.dev&src=typed_query&f=live&sort=ascending"sp.set("q", "bytes.dev");sp.toString(); // "?q=bytes.dev&src=typed_query&f=live&sort=ascending"sp.delete("sort");sp.toString(); // "?q=bytes.dev&src=typed_query&f=live"
useSearchParams
As of v6, React Router comes with a custom
useSearchParams Hook which is a small wrapper over the browser's
URLSearchParams API.
useSearchParams returns an array with the first element being an instance of
URLSearchParams (with all the properties we saw above) and the second element being a way to update the query string.
Going back to our example, (...)}
Then if we needed to update the query string, we could use
setSearchParams, passing it an object whose key/value pair will be added to the url as
&key=value.
import { useSearchParams } from 'react-router-dom'const Results = () => {const [searchParams, setSearchParams] = useSearchParams();const q = searchParams.get('q')const src = searchParams.get('src')const f = searchParams.get('f')const updateOrder = (sort) => {setSearchParams({ sort }). | https://ui.dev/react-router-query-strings | CC-MAIN-2022-21 | refinedweb | 498 | 64.3 |
Forum:Help Bring Dawn Back
From Uncyclopedia, the content-free encyclopedia
Welcome to the *OFFICIAL* "Help Save Dawn" thread.
Introduction/Rant/Rules
Okay. Here's the problem. Dawn is a very well-known Pokemon article with many references and a long history on Uncyclopedia. However, it's been huffed (and after I took on the mantle of restoring it to its former glory, CVP'd) and the mantle is too much for me to bear alone, mostly due to the fact that everyone has differing opinions on how the article should ideally be. I think I've really come far, but I just can't think of how to add extra "perfect Uncyclopedia article" sparkle to the article (which, as you've all made perfectly clear, is the only way this article will be saved). Therefore, I'm enlisting help.
Update: The Pokemon version has been moved. In accordance with advice below, I'm making an entirely different Dawn article. And requesting that the Pokemon Dawn be entirely edit-blocked. (Rules updated on 20:06, 8 October 2007 (UTC))
Official Rules for Helping
1. If you want to contribute something along the lines of generic Pokemon crap, do so here. I have created a special version of the page identical to the first version I submitted to Pee Review in order to keep the Uncyclopedia Dawn in-joke/meme (which is roughly "she's the sexiest girl ever") from nuking my page spontaneously.
2. You must have good ideas. Anything that is not a good idea will be nuked.
3. NO ANONYMOUS IP EDITS. I cannot stress this enough. Anonymous edits tend to be bullsh*t, and one of them was the cause for the Dawn CVP. If you want to help, log in first.
4. Do NOT create a major overhaul without a good reason. Even if you have a good reason (and I mean "This page does NOT exist" good) don't do anything without going to the talk page first. (Man, I'm so happy that page isn't a red link anymore...*happy sob*
5. I have the final word. It's on MY user page, after all.
6. There is no category and no userbox for you all. This is out of the goodness of your hearts. (I'd make a category and userbox, but lately my userboxes are getting nuked for no apparent reason, and I don't like boxless categories when we're talking about userpages.)
7. Sign Up Below Or Die. I have an army of Starcraft units (and a Grue) at my disposal, and will not hesitate to use them.
--User:Banjo2e/SIGGY 04:17, 29 September 2007 (UTC)
Helpers
--Narf, the Wonder Puppy/I support Global Warming and I'm 100% proud of it! 05:51, 29 September 2007 (UTC)
Well I don't like dying, it's like, when I get there all God does is make fun of my pants.
Additional Discussion Begins Here
- I already gave it a Pee Review, but sure, I'll help out... After finishing several others. Beans!--Narf, the Wonder Puppy/I support Global Warming and I'm 100% proud of it! 05:51, 29 September 2007 (UTC)
- /me dies. Urk. --Whhhy?Whut?How? *Back from the dead* 10:14, 29 September 2007 (UTC)
- I have to state that the current state of events are, frankly, disturbing. Whenever the topic Bring Dawn back and the topic Vampire template are next to each other or in close proximity I tend to find this quite suspicious, but who to blame ? high school chicks or evil vampire overlords ? --Vosnul 10:58, 29 September 2007 (UTC)
- High school chicks? Dawn is in elementary school! Those tricky ten year olds... Fresh Stain Serq Fet of Pokemon (At your service) 14:25, 29 September 2007 (UTC)
- Actually, she's an adult actress. You know, Uncyclopedia-wise. --User:Banjo2e/SIGGY 17:08, 29 September 2007 (UTC)
- Surely vampires would not want to bring dawn back, what with the hating sunlight thing and all? --Whhhy?Whut?How? *Back from the dead* 20:53, 29 September 2007 (UTC)
- Dawn should die. Also, starcraft units suck against Dune units, and I have complete and utter control over grues. --Lt. High Gen. Grue The Few The Proud, The Marines 03:41, 30 September 2007 (UTC)
- If this is aboutthe Dawn Astroid Orbiter, which dosen't exist, that none of you should know about I will call the CIA. (It was public, but we haven't had the CIA in Uncyclopedia for about 2 seconds. They're 1 second late.) --Lt. High Gen. Grue The Few The Proud, The Marines 03:43, 30 September 2007 (UTC)
- I've said it before, I'll say it again just to be annoying: Concept, concept, concept. It needs one. Badly. – Sir Skullthumper, MD (criticize • writings • SU&W) 03:59 Sep 30, 2007
- The best help I can give is this: READ THIS LINK. I read it all the time, just for ideas. It really, really helps. Also there's this. Both of those are very helpful when you're writing. If you follow what those tell you, I guarantee that the page will at least be decent. P.M., WotM, & GUN, Sir Led Balloon
(Tick Tock) (Contribs) 04:22, Sep 30
- So tell us, if this is ever taken off of CVP, exactly what format will you follow? What will the plot be? It better not just center around a "super-hot ten-year-old" either. Try incorporating other uses of the word "Dawn" into the article, like the sunrise, or the Hammer of Dawn from Gears of War. The article shouldn't be limited to one thing. Conniving 02:53, 7 October 2007 (UTC)
- *Sigh.* You just had to bump this, didn't you. Well then, I'd say that I'd have written a page named Dawn about sunrise, perhaps with a few subtle references to Pokemon. But, the page would still not be about pokemon, it'd be about sunrise. Now, the only reason I say this is because I never watched the show beyond the first few episodes(the ones about the "first generation" games) and I've only played the 1st generation games. Still, if you know about pokemon and think you can pull off a poke-centric page that is original and humorous, then by all means do. As I said above, read HTBFANJS. It'll make you write:11, Oct 7
New Comment Section
In accordance with the advice of a smart guy I have performed a major overhaul. Dawn is now going to be a completely different page. Dawn (Pokemon) is going to stay how it is (and I mean it; I formally request that once it's in the main namespace it gets completely edit-blocked) and so is the entirely-editable Sexy Version. However, the main Dawn article is now going to be new and shiny. However, if the Pokemon Dawn article dies, I'm going to nuke you all with my army of Starcraft units.
Input would be appreciated. Thanks. --User:Banjo2e/SIGGY 20:06, 8 October 2007 (UTC)
- What about Dawn cleaning fluid? Where's the love? Unsolicited conversation Extravagant beauty PEEING 00:31, 9 October 2007 (UTC)
- I thought about it right after I had to save and leave to do important stuff. Don't worry, it won't be neglected. And even if it were, it's still under the Pokemon version of Dawn. Both of them. So it doesn't really matter that muchly. Or maybe it does. I don't know. I'm just ranting for no apparent reason. --User:Banjo2e/SIGGY 12:46, 9 October 2007 (UTC)
ERROR 403 - REQUEST DENIED!. We are not going to do another Pokemon character that doesn't cover What is funny and what is not. --Jtaylor1 14:17, 14 October 2007 (UTC)
- Uh... you're a little late... by a few months... Unsolicited conversation Extravagant beauty PEEING 16:50, 14 October 2007 (UTC)
- I'm not an expert on the subject matter, but
does this mean the project is officially dead? Pentium5dot1 06:09, 17 October 2007 (UTC)Um, oops - I didn't realize that Banjo2e got a username change to Administrator. Hence the links in this topic need fixing. Pentium5dot1 06:27, 17 October 2007 (UTC) | http://uncyclopedia.wikia.com/wiki/Forum:Help_Bring_Dawn_Back | CC-MAIN-2016-50 | refinedweb | 1,392 | 74.49 |
SYNOPSIS
#include <time.h>
int nanosleep(const struct timespec *rqtp, struct timespec *rmtp);
DESCRIPTION
The nanosleep() function shall cause sys-
tem. But, except for the case of being interrupted by a signal, the
suspension time shall not be less than the time specified by rqtp, as
measured by the system clock CLOCK_REALTIME.
The use of the nanosleep() function has no effect on the action or
blockage of any signal.
RETURN VALUE
If the nanosleep() function returns because the requested time has
elapsed, its return value shall be zero.
If the nanosleep() function returns because it has been interrupted by
a signal, it shall return a value of -1 and set shall return a value of -1 and set errno to
indicate the error.
signal number. This volume of.
FUTURE DIRECTIONS
None.
SEE ALSO
sleep() , . | http://www.linux-directory.com/man3/nanosleep.shtml | crawl-003 | refinedweb | 134 | 65.01 |
Hi,
As far as I can see in the current ANT1 code, there seems to be no provisions on what happens
if I define
a task and a datatype using the same name. The definer tasks do not check across different
namespaces.
It seems that Datatype definitions always take precedence over task definitions. Since under
<antlib>
now we have the means to treat this issue in a proper way, I would like to know what do you
think
should be the correct thing to do..
Should the new definition be (a) rejected; (b) override the definition on A; (c) ignored.
Jose Alberto | http://mail-archives.apache.org/mod_mbox/ant-dev/200203.mbox/%3C000f01c1c313$b3a933d0$0100a8c0@jose%3E | CC-MAIN-2016-26 | refinedweb | 102 | 67.69 |
Managing general properties
Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
Managing general properties
When you add a zone using the DNS console, you can manage these general properties for it:
- Pause or start the zone to interrupt or restore service for it.
- Change or convert the type for the zone.
- Disable or enable dynamic updates for the zone.
For Active Directory-integrated zones, you can enable the use of secure dynamic updates. This enables you to restrict updates to only a specific set of authorized users or systems. When a secure update policy is enabled for the zone, only users, systems, or groups authorized through Active Directory and included in the access control list (ACL) for each directory-integrated zone, are permitted to update the zone or specific resource records used in it.
In addition to these general zone properties, you can configure or manage the following zone properties using the DNS console:
- Start of authority (SOA) properties These include properties supported by the SOA resource record, which is used to initialize the zone and indicate zone authority for a DNS domain name (and any of its subdomains not delegated away to other servers) to others in the DNS namespace. This record affects how often the zone must be renewed and transferred by other servers that load the zone and how long clients can cache resource records (RRs) when returned in answered queries for names in the zone. For more information on this record and configuring it, see Managing authority records.
- Name server (NS) properties These include all fields supported by the NS resource record (RR) for the zone. The NS RR is used for designating the names of DNS servers authoritative for the zone to others. For more information on this record and configuring it, see Managing authority records.
- Zone Transfer properties With this feature, you can configure how the zone permits transfers to be performed.
You can choose to deny all requests to the server for transfer of this zone, to allow them only for other DNS servers configured on the Name Servers tab, or to transfer the zone only to DNS servers that you specify by IP address in a configured list.
Using Notify located here, you can also enable and configure DNS notification for secondary servers of the zone. When notification is used, other servers (either those configured on the Name Servers tab or on a list that you specify) are notified of zone changes. These servers can then pull changes by initiating a zone transfer to update the zone.
For more information, see Configuring notify lists.
Note
- By default, the DNS server will only allow a zone transfer to authoritative DNS servers listed in the name server (NS) resource records for the zone.
- WINS lookup properties The Windows Internet Name Service (WINS) lookup feature can be used to provide an expanded DNS name resolution path for zones when a queried name is not found in the zone. If WINS lookup is enabled for the zone, a WINS server (or list of WINS servers) can be contacted to assist in resolving a host name in the WINS-managed NetBIOS namespace. This feature is supported only by Microsoft DNS servers. For more information, see Using WINS lookup. | https://technet.microsoft.com/en-us/library/cc785876(v=ws.10).aspx | CC-MAIN-2015-18 | refinedweb | 556 | 55.58 |
Microservices hosted in Docker
This tutorial details the tasks necessary to build and deploy an ASP.NET Core microservice in a Docker container. During the course of this tutorial, you'll learn:
- How to generate an ASP.NET Core application.
-.
You can view or download the sample app for this topic. will work for an ASP.NET Core application.
Prerequisites
You’ll need to setup.
You'll also need to install the Docker engine. See the Docker Installation page for instructions for your platform. Docker can be installed in many Linux distributions, macOS, or Windows. The page referenced above contains sections to each of the available installations.
Create the Application
Now that you've installed all the tools, create a new ASP.NET Core application in a directory called "WeatherMicroservice" by executing the following command in your favorite shell:
dotnet new web -o WeatherMicroservice
The
dotnet command runs the tools necessary for .NET development. Each verb executes a different command.
The
dotnet new command is used to create .Net Core projects.
The
-o WeatherMicroservice option after the
dotnet new command is used to give the location to create the ASP.NET Core application.
For this microservice, we want the simplest, most lightweight web application possible, so we used the "ASP.NET Core Empty" template, by specifying its short name,
web.
The template creates four files for you:
- A Startup.cs file. This contains the basis of the application.
- A Program.cs file. This contains the entry point of the application.
- A WeatherMicroservice.csproj file. This is the build file for the application.
- A Properties/launchSettings.json file. This contains debugging settings used by IDEs.
Now you can run the template generated application:
dotnet run
This command will first restore dependencies required to build the application and then it will build the application.
The default configuration listens to. You can open a browser and navigate to that page and see a "Hello World!" message.
When you're done, you can shut down the application by pressing Ctrl+C.
Anatomy of an ASP.NET Core application
Now that you've built the application, let's look at how this functionality is implemented. There are two of the generated files that are particularly interesting at this point: WeatherMicroservice.csproj and Startup.cs.
The .csproj file contains information about the project. The two nodes that are most interesting are
<TargetFramework> and
<PackageReference>.
The
<TargetFramework> node specifies the version of .NET that will run this application.
Each
<PackageReference> node is used to specify a package that is needed for this application. microservice, so it doesn't need to configure any dependencies. The
Configuremethod configures the handlers for incoming HTTP Requests. The template generates a simple handler that responds to any request with the text 'Hello World!'.
Build a microservice
The service you're going to build will deliver weather reports from anywhere around the globe. In a production application, you'd call some service to retrieve weather data. For our sample, we'll generate a random weather forecast. form:
All the changes you need to make are in the lambda expression defined as the argument to
app.Runin your startup class.
The argument on the lambda expression is the
HttpContext for the request. One of its properties is the
Request object. The
Request object has a
Query property that contains a dictionary of all the values on the query string for the request. The first addition is to find the latitude and longitude values:
var latString = context.Request.Query["lat"].FirstOrDefault(); var longString = context.Request.Query["long"].FirstOrDefault();
The
Query dictionary values are
StringValue type. That type can contain a collection of strings. For your weather service, each value is a single string. That's why there's the call to
FirstOrDefault() in the code above.
Next, you need to convert the strings to doubles. The method you'll use to convert the string to a double is
double.TryParse():
bool TryParse(string s, out double result);
This method leverages C# out parameters to indicate if the input string can be converted to a double. If the string does represent a valid representation for a double, the method returns true, and the
result argument contains the value. If the string does not represent a valid double, the method returns false.
You can adapt that API with the use of an extension method that returns a nullable double. A nullable value type is a type that represents some value type, and can also hold a missing, or null value. A nullable type is represented by appending the
? character to the type declaration.
Extension methods are methods that are defined as static methods, but by adding the
this modifier on the first parameter, can be called as though they are members of that class. Extension methods may only be defined in static classes. Here's the definition of the class containing the extension method for parse:
public static class Extensions { public static double? TryParse(this string input) { if (double.TryParse(input, out var result)) { return result; } else { return null; } } }
Before calling the extension method, change the current culture to invariant:
CultureInfo.CurrentCulture = CultureInfo.InvariantCulture;
This ensures that your application parses numbers the same on any server, regardless of its default culture.
Now you can use the extension method to convert the query string arguments into the double type:
var latitude = latString.TryParse(); var longitude = longString.TryParse();
To easily test the parsing code, update the response to include the values of the arguments:
await context.Response.WriteAsync($"Retrieving Weather for lat: {latitude}, long: {longitude}");
At this point, you can run the web application and see if your parsing code is working. Add values to the web request in a browser, and you should see the updated results.
Build a random weather forecast
Your next task is to build a random weather forecast. Let's start with a data container that holds the values you'd want for a weather forecast:
public class WeatherReport { private static readonly string[] PossibleConditions = { "Sunny", "Mostly Sunny", "Partly Sunny", "Partly Cloudy", "Mostly Cloudy", "Rain" }; public int HighTemperatureFahrenheit { get; } public int LowTemperatureFahrenheit { get; } public int AverageWindSpeedMph { get; } public string Condition { get; } }
Next, build a constructor that randomly sets those values. This constructor uses the values for the latitude and longitude to seed the
Random number generator. That means the forecast for the same location is the same. If you change the arguments for the latitude and longitude, you'll get a different forecast (because you start with a different seed).
public WeatherReport(double latitude, double longitude, int daysInFuture) { var generator = new Random((int)(latitude + longitude) + daysInFuture); HighTemperatureFahrenheit = generator.Next(40, 100); LowTemperatureFahrenheit = generator.Next(0, HighTemperatureFahrenheit); AverageWindSpeedMph = generator.Next(0, 45); Condition = PossibleConditions[generator.Next(0, PossibleConditions.Length - 1)]; }
You can now generate the 5-day forecast in your response method:
if (latitude.HasValue && longitude.HasValue) { var forecast = new List<WeatherReport>(); for (var days = 1; days <= 5; days++) { forecast.Add(new WeatherReport(latitude.Value, longitude.Value, days)); } }
Build the JSON response
The final code task on the server is to convert the
WeatherReport list into JSON document, and send that back to the client. Let's start by creating the JSON document. You'll add the Newtonsoft JSON serializer to the list of dependencies. You can do that using the following
dotnet command:
dotnet add package Newtonsoft.Json document. After you've constructed the response document, you set the content type to
application/json, and write the string.
The application now runs and returns random forecasts.
Build a Docker image
Our final task is to run the application in Docker. We'll create a Docker container that runs a Docker image that represents our application.
A Docker Image is a file that defines the environment for running the application.
A Docker Container represents a running instance of a Docker Image.
By analogy, you can think of the Docker Image as a class, and the Docker Container as an object, or an instance of that class.
The following Dockerfile will serve for our purposes:
FROM microsoft/dotnet:2.1-sdk AS build WORKDIR /app # /app/out . ENTRYPOINT ["dotnet", "WeatherMicroservice.dll"]
Let's go over its contents.
The first line specifies the source image used for building the application:
FROM microsoft/dotnet:2.1-sdk AS build
Docker allows you to configure a machine image based on a source template. That means you don't have to supply all the machine parameters when you start, you only need to supply any changes. The changes here will be to include our application.
In this sample, we'll use the
2.1-sdk version of the
dotnet image. This is the easiest way to create a working Docker environment. This image includes the .NET Core runtime, and the .NET Core SDK. That makes it easier to get started and build, but does create a larger image, so we'll use this image for building the application and a different image to run it.
The next lines setup and build your application:
WORKDIR /app # Copy csproj and restore as distinct layers COPY *.csproj ./ RUN dotnet restore # Copy everything else and build COPY . ./ RUN dotnet publish -c Release -o out
This will copy the project file from the current directory to the Docker VM, and restore all the packages. Using the dotnet CLI means that the Docker image must include the .NET Core SDK. After that, the rest of your application gets copied, and the
dotnet publish command builds and packages your application.
Finally, we create a second Docker image that runs the application:
# Build runtime image FROM microsoft/dotnet:2.1-aspnetcore-runtime WORKDIR /app COPY --from=build /app/out . ENTRYPOINT ["dotnet", "WeatherMicroservice.dll"]
This image uses the
2.1-aspnetcore-runtime version of the
dotnet image, which contains everything necessary to run ASP.NET Core applications, but does not include the .NET Core SDK. This means this image can't be used to build .NET Core applications, but it also makes the final image smaller.
To make this work, we copy the built application from the first image to the second one.
The
ENTRYPOINT command informs Docker what command starts the service.
Building and running the image in a container
Let's build an image and run the service inside a Docker container. You don't want all the files from your local directory copied into the image. Instead, you'll build the application in the container. You'll create a
.dockerignore file to specify the directories that are not copied into the image. You don't want any of the build assets copied. Specify the build and publish directories in the
.dockerignore file:
bin/* obj/* out/*
You build the image using the
docker build command. Run the following command from the directory containing your code.
docker build -t weather-microservice .
This command builds the container image based on all the information in your Dockerfile. The
-targument provides a tag, or name, for this container image. In the command line above, the tag used for the Docker container is
weather-microservice. When this command completes, you have a container ready to run your new service.
Run the following command to start the container and launch your service:
docker run -d -p 80:80 --name hello-docker weather-microservice
The
-d option means to run the container detached from the current terminal. That means you won't see the command output in your terminal. The
-p option indicates the port mapping between the service and the host. Here it says that any incoming request on port 80 should be forwarded to port 80 on the container. Using 80 matches the port your service is listening on, which is the default port for production applications. The
--name argument names your running container. It's a convenient name you can use to work with that container.:
Attaching to a running container
When you ran your service in a command window, you could see diagnostic information printed for each request. You don't see that information when your container is running in detached mode. The Docker attach command enables you to attach to a running container so that you can see the log information. Run this command from a command window:
docker attach --sig-proxy=false hello-docker
The
--sig-proxy=false argument means that Ctrl+C commands do not get sent to the container process, but rather stop the
docker attach command. The final argument is the name given to the container in the
docker run command. the attached running container.
Press Ctrl+C to stop the attach process.
When you are done working with your container, you can stop it:
docker stop hello-docker
The container and image is still available for you to restart. If you want to remove the container from your machine, you use this command:
docker rm hello-docker
If you want to remove unused images from your machine, you use this command:
docker rmi weather-microservice
Conclusion
In this tutorial, you built an ASP.NET Core microservice, and added a few simple features.
You built a Docker container image for that service, and ran that container on your machine. You attached a terminal window to the service, and saw the diagnostic messages from your service.
Along the way, you saw several features of the C# language in action. | http://semantic-portal.net/tutorials-microservices | CC-MAIN-2022-27 | refinedweb | 2,222 | 58.58 |
Wildfly 10 timing issue ?Lasse Petersen Jul 25, 2016 7:07 AM
I am currently migrating Jboss-4.2.1 to Wildfly-10.0.0, and I have what I think is a timing issue during startup. I have a number of what Jboss called a 'lightweight' services, i.e each deployment described with a .xml file describing what this deployment contains. One of these deployment is to initiate a queue during startup using the jndi name java:/ConnectionFactory. Sometimes the startup of this deployment go well and sometimes it does not. In the situations where the startup fails it is always with this exception javax.naming.NameNotFoundException: ConnectionFactory -- service jboss.naming.context.java.ConnectionFactory. So sometimes the jndi name exists and some time it does not. I have dumped the contents of various namespaces before initiating the queue, and this is perfectly aligned with the exception. When the startup fails, the name java:/ConnectionFactory does not exist. When the startup succeeds the name java;/ConnectionFactory exists.
.
Because of the multithreaded deployment used by Wildfly I used to think that my deployment tried to access ActiveMQ before it was ready, i.e before 'Server is now live' written in the log, but this is not the case .I have seen situations where the startup fails after 'Server is now live' is written to the log. In the situation where the startup has failed I can undeploy the deployemt. If I deploy it afterwards the deployment starts nicely up and showing the existence of java:/ConnectionFactory.
I have looked through the forums for similar behaviour but no one seem to match my problem. I start Wildfly using the file standalone-full.xml.
Does anyone have an idea of what causes this behaviour and does anyone have a suggestion for a fix ?
Regards
Lasse Petersen .
1. Re: Wildfly 10 timing issue ?Justin Bertram Jul 25, 2016 11:07 AM (in response to Lasse Petersen)
The simplest way to resolve this issue is likely just to retry the JNDI lookup when you hit a failure. Insert a small delay (e.g. 200ms) between retries and after X number of retries then throw an exception. That should deal with the timing issue.
2. Re: Wildfly 10 timing issue ?Lasse Petersen Jul 26, 2016 3:17 AM (in response to Justin Bertram)
The workaround you suggest has also been in my mind, but I will not use this yet. I had hoped for another answer for this problem To me it seems odd that WildFly's ActiveMQ component in the application log writes 'Server is now live' and then later on in the log it writes 'trying to deploy a queue' or 'resource adaptor started'. I will dig deeper into the problem, and if I find an answer, I will post it here.
3. Re: Wildfly 10 timing issue ?Tomaz Cerar Jul 26, 2016 6:05 AM (in response to Lasse Petersen)
WildFly uses new core architecture where whole boot process is multi threaded.
So unless you have proper dependencies defined between components you could see problem like you see now.
How do you do jndi lookup? by using InitialContext.lookup? or by injection?
if you would do something like
@Resource(mappedName="java:/ConnectionFactory")
private ConnectionFactory factory;
it should work. as server knows about relationship and it will wait till service is ready to inject it.
4. Re: Wildfly 10 timing issue ?Justin Bertram Jul 26, 2016 9:36 AM (in response to Lasse Petersen)
To me it seems odd that WildFly's ActiveMQ component in the application log writes 'Server is now live' and then later on in the log it writes 'trying to deploy a queue' or 'resource adaptor started'...
This is the expected behavior, I believe. You should establish proper dependencies like Tomaz suggested or implement a retry as I suggested if you're not using a component which support resource injection.
5. Re: Wildfly 10 timing issue ?Lasse Petersen Jul 27, 2016 3:21 AM (in response to Tomaz Cerar)
I do have a dependency between some components so they need to start in the right sequence.
Being an older application it uses InitialContext. I will try the use of @Resource | https://developer.jboss.org/thread/271638 | CC-MAIN-2017-47 | refinedweb | 700 | 66.13 |
Writing a blog is a great excuse to explore some new and unfamiliar technology. In this post I will explore two new(er) JavaScript frameworks, Stencil and Svelte.
As of writing this post. Svelte is at version 3.4.4 and Stencil is at version 1.0.0. Both projects seem actively worked on based on GitHub activity.
Both frameworks are web compiler frameworks. Meaning, they take some source input and generate some minified optimized version in JavaScript, HTML, and CSS.
Stencil
Stencil was created and is maintained by the Ionic Framework team. The focus is on using web standards, like custom web components, and not the opinions of a particular framework or build tools.
Since it generates standard web components, the components can be used in any JavaScript framework. It leverages modern browser APIs like Custom Elements. It supports IE11 and up.
Stencil also provides support for TypeScript and JSX. Here is an example component.
Example component. TypeScript + JSX = TSX
import { Component, Prop } from '@stencil/core'; @Component({ tag: 'my-first-component', }) export class MyComponent { // Indicate that name should be a public property on the component @Prop() name: string; render() { return ( <p> My name is {this.name} </p> ); } }
Usage
<my-first-component</my-first-component>
See learning resources for more guides and tutorials.
Svelte
Svelte seems like it has been around longer since it is at version 3. Some of the features of Svelte are:
- No virtual DOM
- No runtime (all work done at compile time)
.svelte files are very similar to Vue single file components. A
.svelte file can have 3 sections a script tag with the business logic, a style tag with CSS, and finally markup.
The markup, or template section, differs from a Vue component because you don't need a root level element.
Here is an example component. I went through the tutorial in their documentation and combined all the parts I found useful or interesting into a compact example.
<script> import Nested from './Nested.svelte'; let msg = 'A string with <strong>HTML</strong>'; let things = ['dog', 'cat', 'bear', 'frog']; let count = 0; function handleClick() { count += 1; } // reactive statement // code is run when count changes $: console.log(`the count is ${count}`); </script> <style> button { color: white; background-color: blue; } </style> <p>{@html msg}</p> <button on:click={handleClick}> Clicked {count} {count === 1 ? 'time' : 'times'} </button> {#if count > 10} <p>count > 10</p> {:else if count > 5} <p>count > 5</p> {:else} <p>count < 5</p> {/if} <ul> {#each items in item} <li>{item}</li> {/each> </ul> <Nested title="nested"/>
<!-- Nested.svelte --> <script> // export props and give it a default (optional) export let title = 'Title'; </script> <p>{title}</p>
Svelte works with the following build tools.
- Rollup
- Webpack
- Parcel
For generating larger projects, similar to the Vue CLI, see Sapper. It supports routing, server-side rendering, and code-splitting.
Bundle Size Comparisons
I thought it would be interesting to compare the outputs of each of these frameworks with the Real World App. I went to the demo page of each implementation and compared the network statistics in the network tab in my browser's dev tools (Firefox).
Network Charts From Dev Tools
A great future side project would be to generate these statistics for all the implementations of the RealWorld App. After scraping the project's REAMDE for the projects, you could use something like Selenium to hit each demo page and gather all the stats.
Conclusion
The new generation of JS frameworks seem more focused on bundle size. I thought nothing would be able to beat Elm's bundle size. Svelte proved me wrong.
After a brief look at these two frameworks, I would use Svelte as a replacement for Vue. It seems to provide a similar API.
I would use Stencil if I was concerned about sharing my component with the JS community and needed it to work across any JS framework. | https://pianomanfrazier.com/post/comparing-svelte-stencil/ | CC-MAIN-2020-24 | refinedweb | 647 | 64.91 |
I need to convert "void*" to int, but compiler keeps giving me warning.
Wonder if there is a way to change the code so that compiler will not complain. This occurs a lot in the code base, especially when passing an argument to starting a new thread.
$ g++ -fpermissive te1.cc
te1.cc: In function ‘void dummy(void*)’:
te1.cc:4:15: warning: cast from ‘void*’ to ‘int’ loses precision [-fpermissive]
int x = (int)p;
^
#include <stdio.h>
extern void someFunc(int);
void dummy(int type, void *p) {
if (type == 0) {
int x = (int)p;
someFunc(x);
} else if (type == 1) {
printf("%s\n", (char*)p);
}
}
int main(int argc, char *argv[]) {
void *p = (void*)5;
dummy(p);
return 0;
}
You are probably looking for something along the lines of
int x = static_cast<int>(reinterpret_cast<std::uintptr_t>(p));
This is not strictly guaranteed to work: perhaps surprisingly, the standard guarantees that a pointer converted to a large enough integer and back to a pointer results in the same value; but doesn't provide a similar guarantee for when an integer is converted to a pointer and back to the integer. All it says about the latter case is
[expr.reinterpret.cast]/4 A pointer can be explicitly converted to any integral type large enough to hold it. The mapping function is implementation-defined. [ Note: It is intended to be unsurprising to those who know the addressing structure of the underlying machine. —end note ]
Hopefully, you know the addressing structure of your machine, and won't be surprised. | https://codedump.io/share/F7sZhig89BUz/1/convert-quotvoidquot-to-int-without-warning | CC-MAIN-2017-04 | refinedweb | 257 | 60.65 |
NAME
PerlIO::via - Helper class for PerlIO layers implemented in perl
SYNOPSIS
use PerlIO::via::Layer; open($fh,"<:via(Layer)",...); use Some::Other::Package; open($fh,">:via(Some::Other::Package)",...);
DESCRIPTION
The PerlIO::via module allows you to develop PerlIO layers in Perl, without having to go into the nitty gritty of programming C with XS as the interface to Perl..
EXPECTED METHODS.
- $class->PUSHED([$mode,[$fh]])
Should return an object or the class, or -1 on failure. (Compare TIEHANDLE.) The arguments are an optional mode string ("r", "w", "w+", ...) and a filehandle for the PerlIO layer below. Mandatory.
When the layer is pushed as part of an
opencall,
PUSHEDwill be called before the actual open occurs, whether that be via
OPEN,
SYSOPEN,
FDOPENor by letting a lower layer do the open.
- $obj->POPPED([$fh])
Optional - called when the layer is about to be removed.
- $obj->UTF8($bellowFlag,[$fh])
Optional - if present it will be called immediately after PUSHED has returned. It should return a true value if the layer expects data to be UTF-8 encoded. If it returns true, the result is as if the caller had done
":via(YourClass):utf8"
If not present or if it returns false, then the stream is left with the UTF-8 flag clear. The $bellowFlag argument will be true if there is a layer below and that layer was expecting UTF-8.
- $obj->OPEN($path,$mode,[$fh])
Optional - if not present a lower layer does the open. If present, called for normal opens after the layer is pushed. This function is subject to change as there is no easy way to get a lower layer to do the open and then regain control.
- $obj->BINMODE([$fh])
Optional - if not present the layer is popped on binmode($fh) or when
:rawis pushed. If present it should return 0 on success, -1 on error, or undef to pop the layer.
- $obj->FDOPEN($fd,[$fh])
Optional - if not present a lower layer does the open. If present, called after the layer is pushed for opens which pass a numeric file descriptor. This function is subject to change as there is no easy way to get a lower layer to do the open and then regain control.
- $obj->SYSOPEN($path,$imode,$perm,[$fh])
Optional - if not present a lower layer does the open. If present, called after the layer is pushed for sysopen style opens which pass a numeric mode and permissions. This function is subject to change as there is no easy way to get a lower layer to do the open and then regain control.
- $obj->FILENO($fh)
Returns a numeric value for a Unix-like file descriptor. Returns -1 if there isn't one. Optional. Default is fileno($fh).
- $obj->READ($buffer,$len,$fh)
Returns the number of octets placed in $buffer (must be less than or equal to $len). Optional. Default is to use FILL instead.
- $obj->WRITE($buffer,$fh)
Returns the number of octets from $buffer that have been successfully written.
- $obj->FILL($fh)
Should return a string to be placed in the buffer. Optional. If not provided, must provide READ or reject handles open for reading in PUSHED.
- $obj->CLOSE($fh)
Should return 0 on success, -1 on error. Optional.
- $obj->SEEK($posn,$whence,$fh)
Should return 0 on success, -1 on error. Optional. Default is to fail, but that is likely to be changed in future.
- $obj->TELL($fh)
Returns file position. Optional. Default to be determined.
- $obj->UNREAD($buffer,$fh)
Returns the number of octets from $buffer that have been successfully a function of the return value of FILL or READ.
EXAMPLES
Check the PerlIO::via:: namespace on CPAN for examples of PerlIO layers implemented in Perl. To give you an idea how simple the implementation of a PerlIO layer can look, a simple example is included here.
Example - a Hexadecimal Handle"); | https://metacpan.org/pod/release/JESSE/perl-5.14.0-RC2/ext/PerlIO-via/via.pm | CC-MAIN-2017-09 | refinedweb | 645 | 65.32 |
.
(1) You will need a notes URL to a categorized.
Click on OK. This will fill in the add component dialog with the Notes url. Cut and paste this off into a blank copy of Notepad or something..
1. Constants.Field 1 -> Document Viewer.Show Document
2. Constants.Field 1 -> Notes View to Tag Cloud.URL
3. Notes View to Tag Cloud.TagCloudData -> Tag Cloud.Primary Data
4. Tag Cloud.Focused Entity -> Document Viewer.Column Filter.
The Smart Assistant sample application demonstrates a composite application that takes advantage of LanguageWare's text analysis libraries and displays information from an external information system, such as SAP.
There are three core components in the application. The user's mailbox, the language analysis component; called Smart Assistant and the Payroll component; a front end for an external information system.
When a user selects an email message header, the URL of that message is published to the Property Broker and the Smart Assistant component is wired for this property.
Smart Assistant reads the body of the selected message and using LanguageWare libraries, analyzes the text and extracts any entities that it can identify, displaying them in a tree-like fashion. Entities are arbitrary concepts such as people, places, tone of email, organizations etc. These libraries are quite neat, for example, they can disambiguate which person is being referred to in the email body based on the context of the email. The screen shot shows how the name 'Chris' was identified as Christopher Lambert, an employee managed by Mattew Paulson.
When a user selects a person from that tree, the person's ID is published as a property to the Property Broker.
The Payroll component, another Eclipse-based component is wired to respond to the person's ID and fetch that person's payroll information from an external system and display the information to the user. In this demonstration we used SAP as the external system.
More information about LanguageWare can be found at.
Jo Grant created another YouTube video about the Lead Manager sample..
Kannepalli Sreekanth from the Notes composite application development team describes in this blog entry a use case for the new feature he implemented for 8.0.1. The text below is from Sreekanth.
The feature Open Notes Document in CA Page can be used to open Documents associated with a particular keyword to be opened in a particular page. For e.g.This allows us to open all Documents that are related to Issue Tracker to be opened in a page containing the Issue Tracker database and all documents pertaining to company “MyCompany” to be opened in a page comprising MyCompany's teamroom along with other components etc.The below snapshot shows a mail Inbox with different mails
Double click on any document related to MyCompany and you can open it in a page along with the MyCompany Teamroom as shown below
We use @ Formula here to allow documents based on a single Form ( here Memo) to be opened in different pages based on a subject analysis.
Create a copy of the Memo form and name it as Issue Tracker ( Alias: IssueTracker). You may write intelligent code behind Form's PostOpen to publish any available fields.
In the Inbox folder write a formula in the FormFormula field. This will allow the calculation of a form name dynamically.
here i have used the @Contains method to check for the keyword “MyCompany” in the Subject field of the Memo. I return the value same as the Alias of the newly created Form “MyCompany” if SPR is found.
Create a new Page in CAE with the the page alias set to the form Name as returned by the FormFormula i.e. MyCompany
Voila you have a perfect example of the new feature. Double click on any incoming with the name MyCompany on it and see the document open in a new page along with the MyCompany Teamroom.
Composite applications are all about reuse of components and mix & match of components. In order to prove this model we're working on several components that can be used in custom applications. In 8.0.1 we've planned to ship a new 'recent collaborations component' and a 'side calendar view component'. Qian Liang is a technical lead in the Notes PIM team and Hui BJ Li is a developer in his team. They've implemented this feature and wrote the text below.
We add two new PIM components in Notes 8.0.1: Notes Recent Collaborations View and Notes SideCalendar View. You can find them in component palette in CAE: 1. Notes Recent Collaborations View: Recent Collaborations View is a view that show recent collaborations with someone, it searches recent collaborations history according to collaborator name. You can set collaborator name through two way: 1. Set component property (com.ibm.rcp.csiviews.viewpart.collaboratorname) in CAE, when view is opening, view will read this property and search recent collaborations by value of this property; 2. Recent collaboration view listening the property changed event, the value of property is the collaborator name. User can fire a property changed event to recent collaborations view, if recent collaboration view received a property changed event, it will search recent collaborations according to value of the property. The collaborator name's format should be one of following: a. A common name, such as "**** ***" b. An email address, such as "*****@***" c. Canonical name, such as "CN=***/OU=***/O=***" d. Notes ID name, "***/***/***" Recent Collaborations View providers one input action:
Pr.
Kannepalli Sreekanth from the Notes composite application development team has implemented with some other developers a new feature for Notes 8.0.1. This allows opening Notes documents in composite application pages together with other components. In 8.0.1 you could put forms on pages, but there was no way to open multiple documents at the same time. The text below is from Sreekanth.
In Notes 8, when you double click on a Notes Document it opens up in a new page which just the document opened in it. With this new feature in 8.0.1 you will be able to open Notes Documents in a page within a Composite Application. What does it mean to the end user ? The new page can again be a composite page which has wires to other components. Opening a document can lead to a few actions being fired which can show composite information on the page which was unavailable earlier.
How to use the feature ?
Let us use the Sales Lead Application for demonstrating this feature.
Create a new Composite Application and place the Sales Lead Core->Company View
The main idea behind this feature is that each Document is associated with a Form. A new page is created in CA associating it with a single form. When a document associated with such a form is opened the respective Page will be opened.
Note the form name associated with a Document from the Infobox. In the below e.g. The form name is equivalent to “CompanyForm”. Note: Form alias is used always.
Create a new page in the CA with any name. In the Page Properties->Advanced tab set a new property by the name “com.ibm.rcp.alias” and value equal to the form alias name. This property will be used to identify a page associated with a form. You may hide the page in the navigator also.
In the new page it is now important that we create a Notes Component that will act as a Placeholder component. This component will serve as a location markup as to where a document should be opened. Only one placeholder component can exist in a page. A Component is designated as a placeholder component using the property com.ibm.notes.isDocumentPlaceholder=true
NOTE: It is necessary to point the URL of the Notes Component to the Form from the database.
Note the new property com.ibm.notes.isDocumentPlaceholder
Add two other components and wire them as needed. This will demonstrate the composite nature of the page. For demonstration i have added the Closed and Pending Leads views and wired them as shown below. Also my Document's form is written to do a Publish of CompanyName property in the PostOpen event.
Now you are ready to try it out.
Open the CA and navigate to the first page. Double-Click any document to open the document within the Placeholder component in a new page along with other Components.
Documents belonging to a particular form always open in the same page bringing uniformity in the CA
Different target pages can be created for Documents using different forms
Document is still within the context of the Composite Application
I've been pointed recently to some other blogs where discussions about composite applications in Notes occur. I wish these discussions would happen in this blog but as long as someone points me to these other blogs that's fine too.
In these discussions I've partly seen some statements about composite applications that are just not true and it surprised me a lot that these wrong statements are still floating around. It seems we haven't made the best job so far to sell composite applications to our development community. Let me try to clarify a couple of points.
In order to use composite applications you do not need a portal server. There are NSF based composite applications that allow aggregations of different components and wiring between them.
In order to implement composite applications you do not need necessarily an Eclipse IDE. You can create NSF components in Domino Designer as previously and comp apps can be created and modified with Notes and the CAE (Composite Application Editor) which is part of Notes.
The Eclipse IDE can be used to launch Notes from the IDE in which you can implement Eclipse components and then directly debug from there. The setup instructions are not easy right now, but the plan for Notes 8.0 GA is to give developers access to the Expeditor toolkit which is an extension to the Eclipse IDE. This toolkit will allow much easier to start Notes from the Eclipse IDE by basically only pointing to the Notes install directory. We've tried to make this available already as part of the public betas but different non technical reasons prevented this unfortunately.
Not every Notes developer has to learn Eclipse. If you are happy with the Notes 7 capabilities, you'll be happy with Notes 8 as well since we do not remove any capabilities. Composite applications are not a revolution of the programming model, but an evolution/extension. When we added LotusScript or web features, you could still use @Formulas. Same here, just another set of capabilities.
With Eclipse components we give new options to build Notes applications. These new options can be used to build platform independent applications and applications with rather sophisticated user interfaces. I'm not saying you couldn't do these things differently as well, but we give a new option based on an open source platform with a big community around it including samples, re-usable components, doc etc.
I understand not every Notes developer wants to learn Eclipse, but again not everyone has to. I compare this with writing C extensions to Notes/Domino. I know that we have BPs who have specialized on this since many customers want to use the extensions but don't have the skills and time to use them.
And as I wrote about in another entry the biggest benefit of composite applications are re-usable components. We want to establish a model in which people can contribute components to a public catalog so that other people can reuse them from there. The people who will re-use these components don't have to worry about Eclipse either. They just reuse Eclipse components and don't even have to know that these are not NSF components.
Granted, at this point (8.0) you can use certain extensibility and customizability features only when writing Eclipse components. We are however already planning how to expose these features to Notes developers who don't want to leave their well known Domino Designer environment. As always we'd like to get your input as to what and how exactly you want to leverage the new Notes 8.0 "Eclipse" features in a more typical way for Notes developers. I don't want to force every Notes developer to learn Eclipse but we had to start in 8.0 to add new capabilities and now have the chance to make them more convenient to use.
Stephen Auriemma is one of the key composite application developers in the Notes team. Below he describes an important new feature that we're trying to get in 8.0.1 and that he implemented. This feature allows refering to different NSFs from composite applications in different environments as I blogged about some time ago. While the tooling aspect might not be optimal yet (e.g. we could have @Formula editor in CAE), the new capability is very powerful at runtime.
The links to the NSF components (the Notes:// URLs) in a composite application (CA) typically change between development environment and production environment. As a result the CA breaks when the CA is deployed to production environment. In addition there are time when a dynamic NotesURL can be used to customize a CA at runtime based on the role or rights of the user. For Notes 8.01 we we plan to add two new preferences CA XML com.ibm.notes.ComputedNotesURL and com.ibm.notes.ProcessOnlyOnUpdate described below.1. The preference com.ibm.notes.ComputedNotesURL can be set in the advance tab of CAE to a value that is a macro of @functions that resolves to a Notes URL that will be added as a notesurl preference to the cached CA XML.Example:The following preference: <preference name="com.ibm.notes.ComputedNotesURL"> <base:value </preference>Would result in the addition of this preference to the cache: <preference name="notesurl"> <base:value </preference>A better example: The following preference demonstrates how access a profile document: <preference name="com.ibm.notes.ComputedNotesURL"> <base:value </preference> *The examples are the raw CA XML and contain encoding. The Advanced Component Properties dialog in CAE will encode the characters ("e;&) for you. 2. This preference com.ibm.notes.ProcessOnlyOnUpdate can be set in the advance tab of CAE to a value of true or false. A value of true (default) would indicate that the CA XML cache will be recomputed only when the CA XML design note is updated. A value of false would indicate that the CA XML cache will be recomputed each time it is requested. By default CA XML cache is only updated when the CA XML design note is modified. It is worth noting, that moving the Composite Application CA to a new location (such as to deploy the CA) will result in a new CA XML cache for the user.The preference 'page level access' will override the preference com.ibm.notes.ProcessOnlyOnUpdate. For example if 'pagelevel access' is turned on we always return the CA XML even if the setting com.ibm.notes.ProcessOnlyOnUpdate is set to true. Examples: <preference name="com.ibm.notes.ProcessOnlyOnUpdate"> <base:value </preference> Would result in the CA XML cache getting recomputed and updated each time a request for the CA XML was made..
Cra
The Composite Application Editor will be part of the Notes kit and the All Client kit. The main reason why the CAE will be part of the Notes kit is so that power users can use it to define composite applications as opposed to developers only. Also the Notes kit will be available for Windows and Linux but components from the All Client kit like Domino Designer will only be supported on Windows. In order to install the CAE you need to select this component (default is don't install):
Customers have asked me about more ways to make the CAE available to certain users only. Essentially they want a role based policy. However we don't have this yet. What you could do instead however is to modify the Notes kit. I've been told by our install team that this is easily possible by changing the CAE entry in the file install.xml to this:</installfeature> <installfeature default="false" description="Composite Application Editor feature. Select this feature to install Composite Application Editor." id="CAE" name="Composite Application Editor" required="true" show="false" version="8.0.0.20070521.0613">So different kits could be made available to different users. You can find more documentation in the Domino Administrator help under the topic "Enabling and using third-party feature installation and update in Notes".In Notes 8.0.1 I'd like to add a Domino policy to define which users can see/open the Composite Application Editor if they have installed it. This would improve our current user experience where every user can open the application in CAE but then gets errors when trying to save the application after it has been modified if the user has not Designer access to the NSF based application.Niklas
Here are the answers to the crossword from last week.
Notes and Domino 8 BETA 3 is *LIVE*. You can get it from here. As always you can provide composite application feedback in this blog or in the Notes/Domino 8 Public Feedback Forum.
I posted an entry recently describing how composite application definitions are stored in NSFs. Once you have put a definition (CA XML) in a NSF you need to define to actually open the database as composite application rather than as classic standalone Notes database. There are two options:
In the database launch options can be defined to open a database as composite application. You can select which of the composite applications should be opened and which page of the app should be opened if it has more than one page.
The second option is to use a Notes frameset as 'proxy' to actually open a composite application. This is important so that existing links don't break. For example the PIM applications are opened via bookmark pointing to framesets (MailFS, CalendarFS, etc). If you have already classic Notes applications out there you still want to be able to use them in older Notes clients. However if some of your users have already Notes 8.0 you want them to open a composite application instead of the frameset. Another reason for this feature is so that you can open different composite applications in one database.
Lotus Notes 8.0 allows to store composite application definitions (CA XML) in Notes databases. A new design view and design note has been added to Domino Designer under 'Composite Applications-Applications':
As you can see one NSF can contain between 0 and N composite application definitions. Each composite application is stored in a separate design note. In order to edit this note you cannot doubleclick as on other design notes. Instead you need to open the composite application in Lotus Notes first and then start from there the Composite Application Editor.
There are different ways to create NSF based composite applications. Creating new NSF based apps means creating empty composite application definitions with only one empty page 'blank page' in them and no components. This is the same thing that happens when you create a new composite application instance on WebSphere Portal based on the blank template.
- There is a new 'virtual' Notes template that can be used to create a new NSF that contains one empty composite application definition.
- The 'New Comp App' button creates a new composite application definition in a database.
- The 'Import' button allows importing a CA XML file that has previously been exported from another database. The CA XML has not been published though.
- As with other design notes you can also copy and paste notes from other databases.
Since composite application definitions are stored as Notes design notes you can also put them in NTFs and inherit them in NSFs when the NTFs change. This essentially also allows you to have composite application templates as on Portal by just using NTFs. However there is no notion of composite application variables for NSF based apps that you can set in Portal based apps when initializing new instances.
Andre Guirard suggested to post a crossword as he did it recently in his blog. So here is an opportunity for you to test you composite applications skills.
ACROSS4. New LotusScript class for inter component communication5. Using standard datatypes allows for more "..."8. Actions that don't require custom code are "..."10. A "..." conntects properties and actions14. Service oriented architecture17. Tool to define the component interfaces (3w)19. Components are placed on a "..."20. Composite applications add "..."22. A server side component view is technically a "..."28. Components in a composite applications can be "..." coupled31. A wire is connected to a "..." component32. Composite Application Editor33. Prefix for WPLC namespace for datatypes like url34. Composite applications allow "..." of coarse grained components35. The topology manager is a "..." service36. Each datatype has a name and a "..."37. What you can publish from Notes without writing codeDOWN1. Java package in Eclipse with Notes.jar2. Composite applications allow building a "..." (2w)3. Eclipse documentation is located in the Expeditor "..."6. Execution rights for property broker can be defined in the "..."7. Product supporting composite applications (2w)9. One of the standard datatypes with prefix11. Service to access context in components12. Key benefit of composite applications (2w)13. Product supporting composite applications (2w)15. URL command to display Notes view only16. New LotusScript class for inter component communication18. Composite applications have many "..."21. Lotus Notes 8.0 bases on "..."23. Eclipse actions are implemented as an "..."24. Property "..."25. WebSphere "..."26. Ecipse code is put in a "..."27. Composite applications allow integration on the "..."29. A wire connects a "..." component with a target component30. UI service is called "..."Niklas
In composite applications on Lotus Expeditor and Lotus Notes you can use existing Eclipse components in your own composite applications as long as these components have been implemented as Eclipse ViewPart and define 'allowMultiple=true' in their plugin.xml.Sometimes however you want to run existing Eclipse applications on Expeditor/Notes that use Eclipse editors or use views with allowMutliple=false and that come in their own perspective. There is currently no way to integrate components of these applications in our type of composite applications. For Expeditor 6.1.2 and Notes 8.0.1 we're evaluating whether we can fix these limitations.For now you can run existing Eclipse perspectives in Expeditor and Notes. However this is not what we call composite applications in the sense Portal defines composite applications. In other words you cannot use CAE to define the application or create wires. What you can do instead is just to add a link to the perspective to the launcher via Eclipse extension point "com.ibm.rcp.ui.launcherSet". See the blog entry from Bob Balfe for more information.
As you might have heard we're working on a more sophisticated composite application sample that can be run on Lotus Notes 8.0. The idea is not only to show a technically brilliant sample, but also a real business application with a nice UI. So we choose a sales lead application to manage leads, contact information, contracts etc. It also shows the integration capabilities of composite applications. It uses mainly NSF components but also Eclipse components and an embedded browser.
I saw a first version of the actual implementation last week and I'm excited. The enablement team, designers and the development team have done a great job. I hope we can publish this sample soon after M5 (second public beta). It should give people more ideas what can be done with composite applications business vice and it also shows techniques that we haven't documented anywhere else yet..
There is a new redpiece about Composite Applications. The planned publication date is 05/15 which will then make it a redbook. The book has more than 700 pages and contains information about Portal and NSF based composite applications and NSF components, Eclipse components and portlet components.
For NSF components and NSF based composite applications check out the chapters 21 - 25 and appendix A. Michael Zink wrote these chapters and I want to thank him again for his outstanding job on this.
Today I want to describe how to use the Property Broker Editor for NSF components. The PBE is a tool that comes with Domino Designer. The same tool is used as part of Lotus Component Designer for LCD components and will be part of the Expeditor toolkit for Eclipse components. The PBE completely hides the fact that WSDL (web service description language) is used as the IDL for component interfaces. So it is more than just a WSDL editor since it is more convenient to use.To understand the PBE you need to understand the following concepts:- An output property is published by a source component.- A property has a datatype which has a name and namespace.- An action consumes an input property and is provided by a target component. - The datatype of an output property and the input property of an action have to match to define a wire.- A wire connects an output property and an action. Wires are defined in the composite application only which allows loosely coupling of components which define the properties, datatypes and actions.PBE can be launched from Domino Designer from the view 'Wiring Properties'. Just create a new WSDL design note via 'New Wiring Properties' and then use 'Open File' to launch the PBE've. Don't get confused by the names 'parameters' vs 'properties'. Some people in our team want this differentiation, but I just use the term properties.The checkbox 'default action ...' can be ignored. I hope we can remove it from the GA version. The output section should also be a collapsed section or completely removed. Actions can have output properties but that is completely identical to actions without output properties that publish 'normal' output properties.
In order to use the property broker in Notes databases to do inter component communication there are two new LotusScript classes NotesPropertyBroker and NotesProperty in Lotus Notes 8.0. Check out the help that comes with Domino Designer.
This code snippet shows how to publish an output property:Dim s As New NotesSessionDim pb As NotesPropertyBrokerSet pb = s.GetPropertyBroker()Call pb.setPropertyValue("Track", newCategory$)Call pb.Publish()This code snippet shows how to consume an input property in a Notes action:Dim s As New NotesSessionDim pb As NotesPropertyBrokerSet pb = s.GetPropertyBroker()Dim pbInputProperty As NotesPropertyDim pbContext As VariantpbContext = pb.InputPropertyContext Set pbInputProperty = pbContext(0)Dim PropName As StringPropName= pbInputProperty.NameDim pbValue As VariantSet pbValue = pb.GetPropertyValue(PropName) Messagebox "The value is " + pbValue(0), MB_OK, "GetPropertyValue"
Note that these APIs are slightly different from the first public beta since we removed an unnecessary 'namespace' parameter from different methods. The new APIs will be part of the next public beta and GA..
We've added two ECL settings in Lotus Notes 8.0 (already in first public beta) to control what code is allowed to execute property broker functionality:- 'Read from property broker' - defines whether the LotusScript code can read input properties from other components- 'Write to property broker' - defines whether the LotusScript code can publish output properties to other components
These settings are necessary so that you can control whether code with a certain signature is allowed to interact via LotusScript with other components. This is similar to the existing ECL settings which control what code can perform actions outside of the current database (e.g. access to file system).These settings are only applied to code in Notes databases and not supported for Eclipse components.Niklas.
Composite application definitions (CA XML) can be stored in design notes in Notes databases. You can have databases that don't contain anything else than just this composite application definition and an ACL on the database. From this composite application you can refer to other Notes databases, the NSF components, via Notes URLs. As another alternative you can store the CA XML and the NSF component in the same database. This is what we do for our mail application and the contacts application. Both mail and contacts are composite applications in Lotus Notes 8.0 and they refer to components in the same databases. In this special case you don't want to refer to the NSF components via Notes URL that contains a replica id or file path. Whenever you move the NSF to another server or whenever you create a new NSF based on an NTF you would have to change the CA XML with the new Notes URL. So for this special case we've introduced a new special type of Notes URL. If you use '0000000000000000' in the Notes URL Notes knows to open the NSF components from the same database it read the CA XML from.In this sample I've created a new NSF based on the virtual template '- Blank Composite Application' and then I added a view 'MyView1' to it. Then I used CAE to put the NSF component in the application. You can type in the '0000000000000000' either manually or you can use the NSF component picker and choose 'current database' (but in the current build there are some issues with the NSF component picker).
As a result when you open the composite application you get the same UX as if you opened the NSF standalone (in the classic way) but now your app is actually a composite application that can be extended with other components via CAE.There are also some other special replica ids that you can use.notes:///0000000000000E00 - this opens the current user's mail databasenotes:///0000000000000E01 - this opens the current user's contacts databaseWe didn't use these URLs in our mail and contacts applications though since they wouldn't support delegation.Niklas.
A typical composite application use case in Lotus Notes is that you select something in a Notes view and then want to update some other component based on the currently selected document. You can do this easily in a declarative way by publishing the value of a column in a property. However this only allows publishing a column's value, not a Notes URL of the currently selected document. In order to publish the Notes URL including the document UNID of the currently selected view entry we've added in M4 (first public beta) a so called 'built-in property'. You can configure to use this property without having to implement any code. Esentially you only need to import a certain WSDL part into your NSF and then you have to publish any other column declaratively from the same view you want to publish the Notes URL from.Here is the WSDL:<definitions name="Poperty Broker WSDL" ... xmlns: <types> <xsd:schema <xsd:simpleType <xsd:restriction </xsd:simpleType> </xsd:schema> </types> <message name="OnViewEntrySelectionChange_Property_Operation_Output3"> <part name="urlPart" type="idt:url"/> </message> <portType name="NotesDB_Operations"> <operation name="OnViewEntrySelectionChange_Property_Operation3"> <output message="tns:OnViewEntrySelectionChange_Property_Operation_Output3"/> </operation> </portType> <binding name="Notes_Binding" type="tns:NotesDB_Operations"> <portlet:binding/> <operation name="OnViewEntrySelectionChange_Property_Operation3"> <portlet:action <output> <portlet:param </output> </operation> </binding></definitions>As you can see in the WSDL you need to define the type 'idt:url' that I described in this blog entry. Then you need to use an output property and call it 'SelectedNotesDocumentUrlChanged'. You can also define this property via the Property Broker Editor so that you don't have to worry about the complexity of the WSDL. In that case you only need to use the name of the property.
The value of the property is a Notes URL pointing to a Notes document, e.g. "Notes://".Then other components can access the document and fields in the document via the LotusScript API or Java API 'NotesSession.resolve'. This is a very powerful mechanism since you don't have to publish every field of a document any other component might be interested in. However if you use this feature your components will be tightly coupled. So you need to make a decision as to when you want to use this feature in your scenarios.The WSDL that I've described above does not work in the first public beta since we only introduced the new type 'idt:url' recently. You can use this feature in the first public beta but need to use another datatype 'std:NotesURL' (see here).
"cn=Robert Smith,ou=people,dc=example,dc=com".
AD407 - How to Build IBM Lotus Notes Components for Composite Applications AD406 - Building Composite Applications for IBM Lotus Notes 8AD405 - Improve Your IBM Lotus Notes Application ROI Through Composite Applications with Lotus Notes and Domino 8AD202 - Developing with the IBM Lotus Expeditor Toolkit for IBM Lotus Expeditor and IBM Lotus Notes 8
Jo Grant asked me to post this entry since he is out on vacation this week. He describes how you can develop ViewParts that can be put as components in composite applications and in the sideshelf at the same time.
This.
The team around Ludwig Nastansky and Ingo Erdmann from GCC (Groupware Competence Center) has implemented their own activity manager application as composite application running on Lotus Notes 8 (first public beta).
In this document GCC writes that "Composite Applications in Lotus Notes 8 are innovative and add a new dimension of efficiency for software re-usablitiy". As strategic aspects GCC sees the "eclipse based, open UI architecture", "portal-like architecture for your workplace" and "very different information components, wired by LOB user".
Find out more here.
How to provision Eclipse features and plugins from Domino (from Thomas Gumz)
How to enable Eclipse update manager to install features and plugins manually (from Jay Roshenthal)
When..
Many Notes customers who are building composite applications with Eclipse components need to be able to access NSFs from the Java code running in Eclipse. In Notes 8, the Notes backend classes can be called by Eclipse plug-ins you write by using the normal "notes.jar" APIs. As a convenience, we have bundled notes.jar in the "com.ibm.notes.java.api" plug-in to allow easy access, security, dependency, and path management. In Beta 2, you will need to set an notes.ini variable to enable seamless security around the use of this plug-in. That is, with the setting, you will be able to access the backend classes without multiple password prompts. This "roping off" was done because we have not performed sufficient testing at this point to enable the feature by default. We expect to have this enabled by default at the final release of Lotus Notes 8.0, obviating the need for the notes.ini setting, but of course, everything is subject to change.
Ray Rosenthal has posted more details about this in the Notes/Domino 8 Public Beta Forum. | https://www.ibm.com/developerworks/mydeveloperworks/blogs/CompApps/tags/heidloff?maxresults=100&sortby=0&lang=en | CC-MAIN-2018-22 | refinedweb | 5,860 | 55.95 |
14.5. Computing the Voronoi diagram of a set of
The Voronoi diagram of a set of seed points divides space into several regions. Each region contains all points closer to one seed point than to any other seed point.
The Voronoi diagram is a fundamental structure in computational geometry. It is widely used in computer science, robotics, geography, and other disciplines. For example, the Voronoi diagram of a set of metro stations gives us the closest station from any point in the city.
In this recipe, we compute the Voronoi diagram of the set of metro stations in Paris using SciPy.
Getting ready
You need the Smopy module to display the OpenStreetMap map of Paris. You can install this package with
pip install git+.
How to do it...
1. Let's import the packages:
import numpy as np import pandas as pd import scipy.spatial as spatial import matplotlib.pyplot as plt import matplotlib.path as path import matplotlib as mpl import smopy %matplotlib inline
2. Let's load the dataset with pandas (which had been obtained on the RATP open data website, the public transport operator in Paris, at):
df = pd.read_csv('' 'cookbook-2nd-data/blob/master/' 'ratp.csv?raw=true', sep='#', header=None)
df[df.columns[1:]].tail(3)
3. The
DataFrame object contains the coordinates, name, city, district, and type of station. Let's select all metro stations:
metro = df[(df[5] == 'metro')]
metro[metro.columns[1:]].tail(3)
4. We are going to extract the district number of Paris' stations. With pandas, we can use vectorized string operations using the str attribute of the corresponding column.
# We only extract the district from stations in Paris. paris = metro[4].str.startswith('PARIS').values
# We create a vector of integers with the district # number of the corresponding station, or 0 if the # station is not in Paris. districts = np.zeros(len(paris), dtype=np.int32) districts[paris] = metro[4][paris].str.slice(6, 8) \ .astype(np.int32) districts[~paris] = 0 ndistricts = districts.max() + 1
5. We also extract the coordinates of all metro stations:
lon = metro[1] lat = metro[2]
6. Now, let's retrieve Paris' map with OpenStreetMap. We specify the map's boundaries with the extreme latitude and longitude coordinates of all our metro stations. We use Smopy to generate the map:
box = (lat[paris].min(), lon[paris].min(), lat[paris].max(), lon[paris].max()) m = smopy.Map(box, z=12) m.show_ipython()
7. We now compute the Voronoi diagram of the stations using SciPy. A
Voronoi object is created with the points coordinates. It contains several attributes we will use for display:
vor = spatial.Voronoi(np.c_[lat, lon])
8. We create a generic function to display a Voronoi diagram. SciPy already implements such a function, but this function does not take infinite points into account. The implementation we will use is available at:
def voronoi_finite_polygons_2d(vor, radius=None): """Reconstruct infinite Voronoi regions in a 2D diagram to finite regions. Source: []() """ if vor.points.shape[1] != 2: raise ValueError("Requires 2D input") new_regions = [] new_vertices = vor.vertices.tolist() center = vor.points.mean(axis=0) if radius is None: radius = vor.points.ptp().max() # Construct a map containing all ridges for a # given point all_ridges = {} for (p1, p2), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices): all_ridges.setdefault( p1, []).append((p2, v1, v2)) all_ridges.setdefault( p2, []).append((p1, v1, v2)) # Reconstruct infinite regions for p1, region in enumerate(vor.point_region): vertices = vor.regions[region] if all(v >= 0 for v in vertices): # finite region new_regions.append(vertices) continue # reconstruct a non-finite region ridges = all_ridges[p1] new_region = [v for v in vertices if v >= 0] for p2, v1, v2 in ridges: if v2 < 0: v1, v2 = v2, v1 if v1 >= 0: # finite ridge: already in the region continue # Compute the missing endpoint of an # infinite ridge t = vor.points[p2] - \ vor.points[p1] # tangent t /= np.linalg.norm(t) n = np.array([-t[1], t[0]]) # normal midpoint = vor.points[[p1, p2]]. \ mean(axis=0) direction = np.sign( np.dot(midpoint - center, n)) * n far_point = vor.vertices[v2] + \ direction * radius new_region.append(len(new_vertices)) new_vertices.append(far_point.tolist()) # Sort region counterclockwise. vs = np.asarray([new_vertices[v] for v in new_region]) c = vs.mean(axis=0) angles = np.arctan2( vs[:, 1] - c[1], vs[:, 0] - c[0]) new_region = np.array(new_region)[ np.argsort(angles)] new_regions.append(new_region.tolist()) return new_regions, np.asarray(new_vertices)
9. The
voronoi_finite_polygons_2d() function returns a list of regions and a list of vertices. Every region is a list of vertex indices. The coordinates of all vertices are stored in
vertices. From these structures, we can create a list of cells. Every cell represents a polygon as an array of vertex coordinates. We also use the
to_pixels() method of the
smopy.Map instance. This function converts latitude and longitude geographical coordinates to pixels in the image.
regions, vertices = voronoi_finite_polygons_2d(vor)
cells = [m.to_pixels(vertices[region]) for region in regions]
10. Now, we compute the color of every polygon:
cmap = plt.cm.Set3 # We generate colors for districts using a color map. colors_districts = cmap( np.linspace(0., 1., ndistricts))[:, :3] # The color of every polygon, grey by default. colors = .25 * np.ones((len(districts), 3)) # We give each polygon in Paris the color of # its district. colors[paris] = colors_districts[districts[paris]]
11. Finally, we display the map with the Voronoi diagram, using the
show_mpl() method of the Map instance:
ax = m.show_mpl(figsize=(12, 8)) ax.add_collection( mpl.collections.PolyCollection( cells, facecolors=colors, edgecolors='k', alpha=.35))
How it works...
Let's give the mathematical definition of the Voronoi diagram in a Euclidean space. If \((x_i)\) is a set of points, the Voronoi diagram of this set of points is the collection of subsets \(V_i\) (called cells or regions) defined by:
The dual graph of the Voronoi diagram is the Delaunay triangulation. This geometrical object covers the convex hull of the set of points with triangles.
SciPy computes Voronoi diagrams with Qhull, a computational geometry library in C++.
There's more...
Here are further references:
- Voronoi diagram on Wikipedia, available at
- Delaunay triangulation on Wikipedia, available at
- The documentation of scipy.spatial.voronoi available at
- The Qhull library available at
See also
- Manipulating geospatial data with Cartopy | https://ipython-books.github.io/145-computing-the-voronoi-diagram-of-a-set-of-points/ | CC-MAIN-2019-09 | refinedweb | 1,042 | 52.46 |
QuickTime for Java: A Developer's Notebook/Editing Movies
From WikiContent
Revision as of 00:15, 7 March 2008
Playback is nice, but you have nothing to play if you lack tools to create media, and the most critical of these are editing tools. If you've ever used iMovie with your home movies, you know what I'm talking about: there's a huge difference between watching a cute collection of scenes of your kids playing, set to music, and watching the two hours of unedited raw footage you started with. Sometimes, less is more.
Copying and Pasting
The most familiar form of editing is copy-and-paste, which many users already are familiar with from the "pro" version of QuickTime Player. The metaphor is identical to how copy-and-paste works in nonmedia applications such as text editors and spreadsheets: select some source material of interest, do a "copy" to put it on the system clipboard, select an insertion point in this or another document, and do a "paste" to put the contents of the clipboard into that target.
In the simplest form of a QuickTime copy-and-paste, the controller bar (from MovieController) is used to indicate where copies and pastes should occur. By shift-clicking, a user can select a time-range from the current time (indicated by the play head) to wherever the user shift-clicks (or, if he is dragging, wherever the mouse is released).
Note
QuickTime Pro costs money ($29.99 as of this writing), but it allows you to exercise much of the QuickTime API from QuickTime Player, which can be a useful debugging tool.
How do I do that?
BasicQTEditor, shown in Example 3-1, will be the basis for the examples in this chapter. It offers a single empty movie window (with the ability to open movies from disk in new windows or to create new empty movie windows), and an Edit menu with cut, copy, and paste options.
Example 3-1. A copy-and-paste movie editor
package com.oreilly.qtjnotebook.ch03; import quicktime.*; import quicktime.qd.QDRect; import quicktime.std.*; import quicktime.std.movies.*; import quicktime.app.view.*; import quicktime.io.*; import java.awt.*; import java.awt.event.*; import com.oreilly.qtjnotebook.ch01.QTSessionCheck; public class BasicQTEditor extends Frame implements ActionListener { Component comp; Movie movie; MovieController controller; Menu fileMenu, editMenu; MenuItem openItem, closeItem, newItem, quitItem; MenuItem copyItem, cutItem, pasteItem; static int newFrameX = -1; static int newFrameY = -1; static int windowCount = 0; /** no-arg constructor for "new" movie */ public BasicQTEditor ( ) throws QTException { super ("BasicQTEditor"); setLayout (new BorderLayout( )); QTSessionCheck.check( ); movie = new Movie(StdQTConstants.newMovieActive); controller = new MovieController (movie); controller.enableEditing(true); doMyLayout( ); } /** file-based constructor for opening movies */ public BasicQTEditor (QTFile file) throws QTException { super ("BasicQTEditor"); setLayout (new BorderLayout( )); QTSessionCheck.check( ); OpenMovieFile omf = OpenMovieFile.asRead (file); movie = Movie.fromFile (omf); controller = new MovieController (movie); controller.enableEditing(true); doMyLayout( ); } /** gets component from controller, makes menus */ private void doMyLayout( ) throws QTException { // add movie component QTComponent qtc = QTFactory.makeQTComponent (controller); comp = qtc.asComponent( ); add (comp, BorderLayout.CENTER); // file menu fileMenu = new Menu ("File"); newItem = new MenuItem ("New Movie"); newItem.addActionListener (this); fileMenu.add (newItem); openItem = new MenuItem ("Open Movie..."); openItem.addActionListener (this); fileMenu.add (openItem); closeItem = new MenuItem ("Close"); closeItem.addActionListener (this); fileMenu.add (closeItem); fileMenu.addSeparator( ); quitItem = new MenuItem ("Quit"); quitItem.addActionListener (this); fileMenu.add(quitItem); // edit menu editMenu = new Menu ("Edit"); copyItem = new MenuItem ("Copy"); copyItem.addActionListener(this); editMenu.add(copyItem); cutItem = new MenuItem ("Cut"); cutItem.addActionListener(this); editMenu.add(cutItem); pasteItem = new MenuItem ("Paste"); pasteItem.addActionListener(this); editMenu.add(pasteItem); // make menu bar MenuBar bar = new MenuBar( ); bar.add (fileMenu); bar.add (editMenu); setMenuBar (bar); // add close-button handling addWindowListener (new WindowAdapter( ) { public void windowClosing (WindowEvent e) { doClose( ); } }); } /** handles menu actions */ public void actionPerformed (ActionEvent e) { Object source = e.getSource( ); try { if (source = = quitItem) doQuit( ); else if (source = = openItem) doOpen( ); else if (source = = closeItem) doClose( ); else if (source = = newItem) doNew( ); else if (source = = copyItem) doCopy( ); else if (source = = cutItem) doCut( ); else if (source = = pasteItem) doPaste( ); } catch (QTException qte) { qte.printStackTrace( ); } } public void doQuit( ) { System.exit(0); } public void doNew( ) throws QTException { makeNewAndShow( ); } public void doOpen( ) throws QTException { QTFile file = QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes); Frame f = new BasicQTEditor (file); f.pack( ); if (newFrameX >= 0) f.setLocation (newFrameX+=16, newFrameY+=16); f.setVisible(true); windowCount++; } public void doClose( ) { setVisible(false); dispose( ); // quit if no windows now showing if (--windowCount = = 0) doQuit( ); } public void doCopy( ) throws QTException { Movie copied = controller.copy( ); copied.putOnScrap(0); } public void doCut( ) throws QTException { Movie cut = controller.cut( ); cut.putOnScrap(0); } public void doPaste( ) throws QTException { controller.paste( ); pack( ); } /** Force frame's size to respect movie size */ public Dimension getPreferredSize( ) { System.out.println ("getPreferredSize"); if (controller = = null) return new Dimension (0,0); try { QDRect contRect = controller.getBounds( ); Dimension compDim = comp.getPreferredSize( ); if (contRect.getHeight( ) > compDim.height) { return new Dimension (contRect.getWidth( ) + getInsets( ).left + getInsets( ).right, contRect.getHeight( ) + getInsets( ).top + getInsets( ).bottom); } else { return new Dimension (compDim.width + getInsets( ).left + getInsets( ).right, compDim.height + getInsets( ).top + getInsets( ).bottom); } } catch (QTException qte) { return new Dimension (0,0); } } /** opens a single new movie window */ public static void main (String[ ] args) { try { Frame f = makeNewAndShow( ); // note its x, y for future calls newFrameX = f.getLocation( ).x; newFrameY = f.getLocation( ).y; } catch (Exception e) { e.printStackTrace( ); } } /** creates "new" movie frame, packs and shows. used by main( ) and "new" */ private static Frame makeNewAndShow( ) throws QTException { Frame f = new BasicQTEditor( ); f.pack( ); if (newFrameX >= 0) f.setLocation (newFrameX+=16, newFrameY+=16); f.setVisible(true); windowCount++; return f; } }
Note
With the downloaded book code, compile and run this with ant run-ch03-basicqteditor.
Figure 3-1 shows the BasicQTEditor class in action, with two windows open. The window on the left is the original empty movie window, with the user about to paste in some contents. The window on the right is a movie that was opened from a file. Note the small stretch of darker gray in the timeline, under the play head, which indicates the selected segment that was copied from the movie to the system clipboard.
Figure 3-1. BasicQTEditor with two movies open
Image:QuickTime for Java: A Developer's Notebook I 3 tt39.png
Also note that when running in Windows, as pictured here, the menus are inside the windows. On Mac OS X, the usage of AWT means the File and Edit menus will be at the top of the screen in the Mac's "One True Menu Bar."
One usability note: for simplicity, I haven't tried to make this particularly smart about what the user "really wants," and that can be bad on the paste . The paste will replace whatever is selected in the target movie, and if there is no selection, it will paste to the beginning of the movie. It's probably more typical to add clips either to the end of the movie, or to the current time as indicated by the play head (i.e., to behave as if a lack of a selection should be interpreted as a zero-length selection beginning and ending at the movie's current time). It's simple enough to add this kind of intelligence to doPaste() and find a behavior that feels better.
What just happened?
This is a big example, so here's an overview.
The no-arg constructor, BasicQTEditor( ), initializes QuickTime with Chapter 1s QTSessionCheck, then creates a new empty Movie, gets a MovieController for it, and calls doMyLayout. A second constructor, BasicQTEditor (QTFile), is essentially identical, except that instead of creating an empty movie, it gets a movie from the provided QTFile. The movie and the controller instance variables are used by many methods throughout the application.
The doMyLayout( ) method sets up the menus and their ActionListeners and reminds us that building GUIs in code is a pain.
actionPerformed (ActionEvent) is used to farm out method calls from clicks on the various menu items.
doQuit( ) is a trivial call to System.exit(0). Remember that the QTSessionCheck call has set up a shutdown handler to close QuickTime when Java goes away.
doNew( ) trivially calls makeNewAndShow(), which is a convenience method to call the no-arg constructor (which creates an empty Movie), pack the frame, and move it down and to the right 16 pixels from the last place a new window was created.
Tip
Note that there's nothing here to keep new windows from going off-screen if the user creates enough of them. In a more polished application, you'd check the proposed x and y against the screen size reported by the AWT's Toolkit.getScreenSize() .
doOpen( ) brings up a file-open dialog and calls the file-aware constructor. It then pack( ) s the window and positions it in the same way makeNewAndShow() does.
doClose( ) closes the frame and, if it is the last open window, quits the application via doQuit( ) (yes, this is Windows-like behavior, as opposed to the typical Mac application which can hang around with no open windows).
doCopy( ) and doCut( ) are practically identical, and each needs only two lines to do its thing. They make a call to the MovieController to cut or copy the current selection and return the result as a new Movie. Then they put this movie on the system clipboard with the movie's putOnScrap( ) call.
doPaste( ) is even simpler: it just calls the controller's paste( ) method and then re-pack( )s the window.
The getPreferredSize() method overrides the default by indicating that the window needs to be large enough to contain the movie, its control bar, and any insets that might be set. This is why you should pack( ) after each paste: the original empty movie has no size other than its control bar, so when you paste into it, the size of the movie (and thus its controller) changes to accommodate the pasted contents, and you need the frame to adjust to that.
Warning
This really should be taken care of automatically in Java, because the use of a BorderLayout should allow the contents to achieve their preferred size on a pack( ). Unfortunately, on Mac OS X, the QTComponent exhibits a bizarre behavior where its preferred size is set once, when it's packed, and never again. So, a component built from an empty movie always thinks it's supposed to be zero pixels high by 160 pixels wide, even if you paste in contents much larger than that. Fixing this reveals the opposite problem on Windows: sometimes there's a good preferred size and a zero-height controller bound. The version here prefers whichever set of bounds has a greater height.
What about...
...that weird play head? That is odd, isn't it? The call to enableEditing(true) has changed the play head ball to an hourglass shape. Figure 3-2 shows it at an enlarged size.
Figure 3-2. MovieController scrubber bar with editing enabled
Image:QuickTime for Java: A Developer's Notebook I 3 tt40.png
My guess is that the shape is supposed to help you select the exact point for making a selection, instead of burying it under the center of the ball. That said, there's a reason you don't see this elsewhere: this default widget isn't terribly well-suited to editing. The QuickTime Player application that comes with QuickTime has a custom controller widget with two little triangles under the timeline to mark in and out points. But that control, like this one, shares the flaw that the accuracy of your edit is limited by the on-screen size of your movie. More serious editing applications, like Premiere and Final Cut Pro, have custom GUI components for editing, usually based on a timeline that can be "zoomed" to an arbitrary accuracy. Of course, one could do the same with AWT or Swing, tracking MouseEvents, paint()ing as necessary, and making programmatic calls to QTJ to perform actions.
Performing "Low-Level" Edits
Low-level edits are a separate set of editing calls that don't involve the clipboard or selection metaphors. They're called "low level" because instead of operating at the conceptual level of "paste the contents of the clipboard into the user's current selection," they work at the level of "insert a segment from movie M1, ranging from time A to time B, into movie M2 at time C."
Note
By way of comparison, although QuickTime has two sets of editing functions, Sun's Java Media Framework has no editing API at all.
How do I do that?
This version reimplements doCopy( ), doCut( ), and doPaste( ) to use low-level editing calls on the Movie instead of cut/copy/paste-type calls on the MovieController.
First, LowLevelQTEditor needs a static Movie, called copiedMovie, to keep track of what's on its virtual "clipboard" so that it can be shared across the new doCopy( ), doCut(), and doPaste( ) methods:
public void doCopy( ) throws QTException { copiedMovie = new Movie( ); TimeInfo selection = movie.getSelection( ); movie.insertSegment (copiedMovie, selection.time, selection.duration, 0); } public void doCut( ) throws QTException { copiedMovie = new Movie( ); TimeInfo selection = movie.getSelection( ); movie.insertSegment (copiedMovie, selection.time, selection.duration, 0); movie.deleteSegment (selection.time, selection.duration); controller.movieChanged( ); } public void doPaste( ) throws QTException { if (copiedMovie = = null) return; copiedMovie.insertSegment (movie, 0, copiedMovie.getDuration( ), movie.getSelection( ).time); controller.movieChanged( ); pack( ); }
Note
You can make ant compile and run this example with ant run-ch03-lowlevelqteditor.
The only thing the user might see as being different or odd in this example is that the cut or copied clip does not get put on the system clipboard because low-level edits don't touch the clipboard.
Tip
For what it's worth, this example was intended originally to be a drag-and-drop demo, for which these low-level, segment-oriented calls are particularly well-suited. Unfortunately, the QTComponent won't generate an AWT "drag gesture." I suppose it would be a little unnatural to drag the current image as a metaphor for copying a segment of a movie. Anyway, if you decide to do your own controller GUI, you can use this low-level stuff for your drag-and-drop.
What just happened?
The doCut( ), doCopy( ), and doPaste( ) methods all call Movie.insertSegment() ; either to put some part of a source movie into the clipboard-like copiedMovie or to put the copiedMovie into the target movie. This method takes four arguments:
- The Movie to insert into
- The start time of the segment, in the movie's time scale
- The end time of the segment, in the movie's time scale
- The time in the target movie when the segment should be inserted
In the case of a cut, the deleteSegment() call removes the segment that was just copied out. This method simply takes the beginning and end times of the segment to delete.
Note
Time scales are covered in Chapter 2, in the section Section 2.5."
In the doPaste( ) and doCut( ) methods, a call to MovieController.movieChanged( ) lets the controller know that the movie was changed in a way that didn't involve a method call on the controller, and that the controller now needs to update itself to adjust to the changed duration, current time, etc.
What about...
...any other low-level calls? There is an interesting method in the Movie class, called scaleSegment() , which changes the duration of a segment, meaning it either slows it down or speeds it up to suit the specified duration. This could be handy for creating a " slow-motion" or "fast-motion" effect from a normal-speed source, or stretching it out to fit a piece of audio.
Undoing an Edit
Critical to any kind of editing is the ability to back out of a change that had unintended or undesirable effects. Fortunately, controller-based cuts and pastes can be undone with some fairly simple calls.
How do I do that?
UndoableQTEditor builds on the original BasicQTEditor by adding an "undo" menu item. The doUndo( ) method it calls has an utterly trivial implementation:
public void doUndo( ) throws QTException { controller.undo( ); }
Note
Compile and run this example with ant run-ch03-undoableqteditor.
What just happened?
With a simple call to MovieController.undo() , the program gained the ability to undo a cut or paste, or any other destructive change made through the controller.
What about...
...multiple undoes? Or redoes? Ah, there's the rub. Hit undo again and the cut or paste is redone, in effect undoing the undo.
Sadly, this is your dad's "undo"...the undo from back in 1990, when a single level of undo was a pretty cool thing. Today, when users expect to perform multiple destructive actions with impunity, it's not too impressive.
Undoing and Redoing Multiple Edits
Fortunately, QTJ offers a unique opportunity to combine Swing's thoughtfully designed undo API, javax.swing.undo, with QuickTime's support for reverting a movie to a previous state. Combined, these features provide the ability to support a long trail of undoes and redoes.
How do I do that?
RedoableQTEditor again builds on BasicQTEditor, adding a Swing UndoManager that is used by both the doUndo( ) and doRedo( ) methods:
Note
Compile and run this example with ant run-ch03-redoableqteditor.
public void doUndo( ) throws QTException { if (! undoanager.canUndo( )) { System.out.println ("can't undo"); return; } undoManager.undo( ); } public void doRedo( ) throws QTException { if (! undoManager.canRedo( )) { System.out.println ("can't redo"); return; } undoManager.redo( ); }
The information about a destructive edit is encapsulated by an inner class called QTEdit :
class QTEdit extends AbstractUndoableEdit { MovieEditState previousState; MovieEditState newState; String name; public QTEdit (MovieEditState pState, MovieEditState nState, String n) { previousState = pState; newState = nState; this.name = n; } public String getPresentationName( ) { return name; } public void redo( ) throws CannotRedoException { super.redo( ); try { movie.useEditState (newState); controller.movieChanged( ); } catch (QTException qte) { qte.printStackTrace( ); } } public void undo ( ) throws CannotUndoException { super.undo( ); try { movie.useEditState (previousState); controller.movieChanged( ); } catch (QTException qte) { qte.printStackTrace( ); } } public void die( ) { previousState = null; newState = null; } }
Finally, doCut( ) and doPaste() are amended to create suitable QTEdits and hand them to the UndoManager:
public void doCut( ) throws QTException { MovieEditState oldState = movie.newEditState( ); Movie cut = movie.cutSelection( ); MovieEditState newState = movie.newEditState( ); QTEdit edit = new QTEdit (oldState, newState, "Cut"); undoManager.addEdit (edit); cut.putOnScrap(0); controller.movieChanged( ); } public void doPaste( ) throws QTException { MovieEditState oldState = movie.newEditState( ); Movie pasted = Movie.fromScrap(0); movie.pasteSelection (pasted); MovieEditState newState = movie.newEditState( ); QTEdit edit = new QTEdit (oldState, newState, "Paste"); undoManager.addEdit (edit); controller.movieChanged( ); pack( ); }
When clicked, the Undo menu item now undoes a cut or paste. Redo redoes the edit, while a second "undo" will undo the previous edit, etc.
What just happened?
Obviously, the fun parts involve the destructive actions and how they save enough information to be undoable and redoable. In each case, they call Movie.newEditState() to create a MovieEditState , a QuickTime object that contains the information needed to revert the movie to the current state at some point in the future. Then they do the destructive action and create another MovieEditState to represent the post-edit state. These objects are passed to the QTEdit, which is then sent to the UndoManager to join its stack of edits.
Note
.
When the UndoManager.undo() method is called, it takes the first undoable edit, if there is one, and calls its undo( ) method. In this case, that means the manager is calling the QTEdit.undo( ) method, which takes the pre-edit MovieEditState and passes it to Movie.useEditState( ) to return the movie to that state. Similarly, a post-undo call to QTEdit.redo( ) also uses useEditState( ) to get to the post-edit state.
Saving a Movie to a File
Once a user has performed a number of edits and has a finished project, she presumably needs to save the movie to disk. In QuickTime, many different actions can be thought of as "saving" a movie. Perhaps the simplest and most flexible option is to let the user decide.
How do I do that?
The SaveableQTEditor uses a QTFile to keep track of where a movie was loaded from (null in the case of a new movie). This is used by the doSave( ) method to indicate where the saved file goes:
public void doSave( ) throws QTException { // if no existing file, then prompt for one if (file = = null) { file = new QTFile (new File ("simplemovie.mov")); } int flags = StdQTConstants.createMovieFileDeleteCurFile | StdQTConstants.createMovieFileDontCreateResFile | StdQTConstants.showUserSettingsDialog; movie.convertToFile (file, // file StdQTConstants.kQTFileTypeMovie, // filetype, StdQTConstants.kMoviePlayer, // creator IOConstants.smSystemScript, // scriptTag flags); }
Note
Compile and run this example with ant run-ch03-saveableqteditor.
When the user hits the Save menu item, she'll see the QuickTime Save As dialog as shown in Figure 3-3.
Figure 3-3. QuickTime Save As dialog
Image:QuickTime for Java: A Developer's Notebook I 3 tt47.png
This dialog's Export selector gives the user four choices:
- Movie
- Saves a QuickTime reference movie , a tiny (typically 4 or 8 KB) file that contains just references (pointers) to the media in their original locations
- Movie, self-contained
- Copies all the media, in their original encodings, into a new QuickTime movie file
- Movie to Hinted Movie
- Creates a self-contained movie but lets the user adjust the hinting settings for use in a streaming server
- Movie to QuickTime Movie
- Creates a self-contained movie, but lets the user choose different compressors and settings to re-encode the audio and video
Some of these options give the user additional choices. Saving a "self-contained" movie presents an Options... button that lets the user specify the audio and video codecs to be used in the saved movie, their quality and bitrate settings, etc. A "Use" pop up contains canned settings with appropriate choices for distributing the movie on CD-ROM, over dial-up, etc.
Once the user clicks Save, the program saves the movie to disk. This is a very fast operation for the reference movie option and a potentially slow operation for the other options because the media might be re-encoded into a new format as part of the save.
What just happened?
The key is the Movie.convertToFile() method. The version shown here takes five parameters:
- The QTFile to save to.
- An int to represent the old Mac OS file "type." Use the constant kQTFileTypeMovie , which gives it the QuickTime movie type moov.
- An int to represent the old Mac OS file "creator." The boilerplate option is kMoviePlayer , which associates it with the default QuickTime Player application.
- An int to represent the old Mac OS "scriptTag," which indicates what kind of "script system" (character encoding, writing direction, etc.) is to be used. Common practice is to use the constant smSystemScript to use whatever the operating system's current script is.
- Behavior flags to affect the save operation, logically ORed together. The most important flag for this example is the showUserSettingsDialog ; without it, the program would silently save the file with Apple's ancient "Video" codec and uncompressed sound. This example also uses the flag createMovieFileDeleteCurFile to delete any file already at the target location and createMovieFileDontCreateResFile to force the file to exist in a single data "fork," instead of using the old Mac OS' "resource" fork. This is required for making QuickTime movies that run on multiple platforms.
Note
Most of the time, it's appropriate to use boilerplate code for things like type, creator, and system script, and not to have to read some Inside Macintosh book from 10 years ago.
What about...
...other interesting behavior flags? The docs for the native ConvertMovieToFile function offer two that aren't shown here because they seem to indicate behavior that is already the default:
- movieFileSpecValid indicates that the file passed in actually exists and should be shown as the default save location.
- movieToFileOnlyExport restricts the dialog to showing only the data export components that are actually present.
Can anything be done about the interminable wait when saving "Movie to QuickTime Movie"? One thing that helps is to provide a "progress function," which provides a visual representation of the progress being made on the long save operation. You can set up the default progress function with a one-line call right before convertToFile():
movie.setProgressProc( )
This will bring up a progress dialog like the one shown in Figure 3-4.
Figure 3-4. Default QuickTime progress dialog
Image:QuickTime for Java: A Developer's Notebook I 3 tt49.png
The Movie class also has a setProgressProc( ) method that takes a MovieProgress object as a parameter. The idea here is that of a typical callback arrangement—during a long save, MovieProgress.execute() is called repeatedly with four parameters: the movie being monitored, a "message" int, a "what operation" int, and a float that represents the percentage done on a scale from 0.0 to 1.0. Unfortunately, this interface has a couple of problems. First, the constants for the "message" aren't defined in QTJ (a few printlns here and there show that the values are 0 for start, 1 for update, and 2 for done). More importantly, using this callback seems extremely unstable in QTJ 6.1—I find I often get an exception with an "Unknown Error Code," and the movie doesn't save. So, maybe the default behavior is the safe choice for now.
Flattening a Movie
Saving a movie can mean different things in QuickTime: saving a reference movie, saving a self-contained movie, or exporting to a different format. Typically, though, the idea of creating a self-contained movie is what users think of as "saving"—they want a single file that doesn't depend on any others, so they can put it on a server, email it to mom, etc. This process is called "flattening."
Note
"Flattening" is also an old Mac OS term for turning a file with both a resource fork and a data fork into a single-fork file, suitable for use on non-Mac disk formats. In this book, we use "flatten" only in its QuickTime sense.
How do I do that?
The FlattenableQTEditor is similar to the SaveableQTEditor, adding the menu item and its typical GUI and action-handling support. The flattening is done in a doFlatten( ) method:
public void doFlatten( ) throws QTException { // always attempts to save to a new location, // so prompt for filename FileDialog fd = new FileDialog (this, "Flatten...", FileDialog.SAVE); fd.setVisible(true); // blocks if ((fd.getDirectory( ) = = null) || (fd.getFile( ) = = null)) return; QTFile flatFile = new QTFile (new File (fd.getDirectory( ), fd.getFile( ))); if (flatFile.exists( )) { // JOptionPane is a bit of cheat-for-clarity here, // building a working AWT dialog would be punitive int choice = JOptionPane.showConfirmDialog (this, "Overwrite " + flatFile.getName( ) + "?", "Flatten", JOptionPane.OK_CANCEL_OPTION); if (choice != JOptionPane.OK_OPTION) return; } movie.flatten(StdQTConstants.flattenAddMovieToDataFork | StdQTConstants.flattenForceMovieResourceBeforeMovieData, flatFile, // fileOut StdQTConstants.kMoviePlayer, // creator IOConstants.smSystemScript, // scriptTag StdQTConstants.createMovieFileDeleteCurFile, StdQTConstants.movieInDataForkResID, // resID null); // resName }
Note
Compile and run this example with ant run-ch03-flattenableqt-editor.
When run, this creates a self-contained QuickTime movie file at the specified location, using whatever video and audio encoding was used in the original sources. This can result in some playback jitters if the user has mixed in different kinds of codecs—for example, pasting in some MPEG-4 video with some Sorenson 3 video. Flattening doesn't change encoding; it just resolves references and puts all the media into one file.
What just happened?
The Movie.flatten( ) call creates the self-contained movie file, taking seven parameters to control its behavior:
Note
Many of these are the same parameters used by Movie.convertToFile( ), covered in the previous lab.
- Behavior flags for the flatten operation, logically ORed together. This example uses flattenAddMovieToDataFork to create a single-fork movie that is more suitable for non-Mac operating systems. Using flattenForceMovieResourceBeforeMovieData creates a "quick start" movie, so named because all its metadata comes before its media samples, which allows QuickTime to start playing the movie from a stream, even an URL, before all the data is loaded, because all the information QuickTime needs (what tracks are present, what size the video is, how loud the audio is, etc.) is loaded first.
- The file to flatten to.
- The Mac OS "creator," typically kMoviePlayer.
- The Mac OS script tag, typically smSystemScript.
- The behavior flags that are used for the create file operation. createMovieFileDeleteCurFile is used here to delete any file already at the target file location.
- Resource ID. For cross-platform reasons, it's usually best to use movieInDataForkResID instead of old Mac OS-style resources.
- Resource name. Irrelevant here, so null will do.
What about...
...behavior flags for the flatten operation? The native docs for FlattenMovie define a bunch, but the ones not used here are largely esoteric.
- flattenDontInterleaveFlatten
- Turns off "interleaving," an optimization that mixes audio and video samples together so that they're easier to read at playback time (if a movie had a couple of megabytes' worth of video samples, followed by a couple of megabytes' worth of audio samples, the hard drive would have a difficult time zipping back and forth between the two; interleaving puts the samples for the same time period in the same place so that they can be read together). The default behavior is a good thing, so this constant isn't used often.
- flattenActiveTracksOnly
- Doesn't include disabled tracks from the movie in the flattened file.
- flattenCompressMovieResource
- Compresses the movie's resource, and its organizational and metadata structure, if stored in the data fork. Like you care.
- flattenFSSpecPtrIsDataRefRecordPtr
- This is meaningless in QTJ.
Saving a Movie with Dependencies
The opposite of flattening is saving a movie with dependencies. In this type of a save, the resulting file just contains pointers to the sources of the media in each track. The file typically is tiny, usually just 8 KB or less.
How do I do that?
The RefSaveableQTEditor example extends the FlattenableQTEditor with a "Save w/Refs" menu item that calls doRefSave():
public void doRefSave( ) throws QTException { // if no home file, then prompt for one if (file = = null) { FileDialog fd = new FileDialog (this, "Save...", FileDialog.SAVE); fd.setVisible(true); // blocks if ((fd.getDirectory( ) = = null) || (fd.getFile( ) = = null)) return; file = new QTFile (new File (fd.getDirectory( ), fd.getFile( ))); } // save ref movie to file if (! file.exists( )) { file.createMovieFile(StdQTConstants.kMoviePlayer, StdQTConstants.createMovieFileDontCreateResFile); } OpenMovieFile outFile = OpenMovieFile.asWrite(file); movie.updateResource (outFile, StdQTConstants.movieInDataForkResID, null); }
Note
Compile and run this example with ant run-ch03-refsaveableqt-editor.
When run, this creates a movie file that, despite its tiny size, behaves exactly like any other movie file. Double-click it and it will open in QuickTime Player, just like a self-contained movie. QuickTime completely isolates the user from the fact that the file contains nothing more than metadata and pointers to the source media files.
Of course, there are limits to what QuickTime can do if those pointers cease to be valid. A user can move the source files and the movie still will play, but if the source movies are deleted, or if the reference movie is transferred to another system, QuickTime won't be able to resolve the references. This typically will result in a "searching..." dialog, followed by a dialog asking the user to locate the missing media, as shown in Figure 3-5.
Figure 3-5. Unresolvable media reference dialog
Image:QuickTime for Java: A Developer's Notebook I 3 tt52.png
What just happened?
First, a call to QTFile.createMovieFile() creates the file on disk, if it doesn't exist already. This method takes two parameters:
- A Mac OS "creator," for which StdQTConstants.kMoviePlayer is the typical boilerplate value.
- Behavior flags. The constant createMovieFileDontCreateResFile commonly is used to create cross-platform, single-fork files.
With the file created, the reference movie data can be put into the file with the updateResource() method. This method takes three parameters:
Note
The name updateResource( ) seems to be another Classic Mac OS legacy that doesn't make much sense today.
- An OpenMovieFile, opened for writing.
- A resource ID, for which the appropriately cross-platform, no-resource-fork value is movieInDataForkResId.
- An updated name for the resource; null is appropriate here.
What about...
...the fragility of reference movies? Because a reference movie is fragile, why would anyone ever create one? This technique is very handy for the saving state in editing applications because it allows the user to quickly save his edited movie without the I/O grinding of flattening. Editing, after all, can be seen as a process of arranging pointers to source materials; in the professional realm, a document called an Edit Decision List (EDL) is a simple list of "in" and "out" points from source media that you can use to produce the edited media. The reference movie is equivalent to the EDL: it's just a collection of pointers, with the nice advantage that it continues to behave as a normal QuickTime movie. So, the reference movie can be used to save the progress of the user's editing work, and when finished, a final self-contained movie can be generated via flattening or exporting (see Chapter 4).
Editing Tracks
Often, it makes sense to perform edits on all tracks of a movie. But for serious editing applications, sometimes you need to work at the track level, to add and remove tracks, or to work on just one track in isolation from the others. This task will provide a taste of that by adding a second audio track to a movie.
How do I do that?
The AddAudioTrackQTEditor builds on FlattenableQTEditor by adding another Add Audio Track... menu item, calling the doAddAudioTrack( ) method:
public void doAddAudioTrack( ) throws QTException { // ask for an audio file QTFile audioFile = QTFile.standardGetFilePreview (QTFile.kStandardQTFileTypes); OpenMovieFile omf = OpenMovieFile.asRead (audioFile); Movie audioMovie = Movie.fromFile (omf); // find the audio track, if any Track audioTrack = audioMovie.getIndTrackType (1, StdQTConstants.audioMediaCharacteristic, StdQTConstants.movieTrackCharacteristic); if (audioTrack = = null) { JOptionPane.showMessageDialog (this, "Didn't find audio track", "Error", JOptionPane.ERROR_MESSAGE); return; } // now make new audio track and insert segment // from the loaded track Track newTrack = movie.newTrack (0.0f, // width 0.0f, // height audioTrack.getVolume( )); // ick, need a dataref for our "new" media // SoundMedia newMedia = new SoundMedia (newTrack, audioTrack.getMedia( ).getTimeScale( ), new DataRef (new QTHandle( ))); newTrack.getMedia( ).beginEdits( ); audioTrack.insertSegment (newTrack, 0, audioTrack.getDuration( ), 0); controller.movieChanged( ); }
Note
Compile and run this example with ant run-ch03-addaudiotrackqteditor.
This method is admittedly contrived—it prompts the user to open another file, and if an audio track can be found in the file, the program adds that track to the movie, starting at time 0. If the user has done only a few short pastes and then adds an audio track from a typical iTunes MP3 or AAC, the result probably will be a movie in which the new soundtrack is much longer than the pasted contents.
Also, QuickTime will eat more CPU cycles playing this movie, because it has to decode two compressed soundtracks at once. Like I said, it's a contrived example, but it covers some interesting ground.
What just happened?
The program tries to find an audio track with Movie.getIndTrackType( ) , passing audioMediaCharacteristic as the search criterion. Assuming an audio track is found in this movie, the program needs to create a new track in the movie being edited. Movie.newTrack( ) creates the new track, taking as parameters the width, height, and volume of the new track.
This new track is useless without a Media object to hold the actual sound data, so the next step is to construct a new SoundMedia object. The constructor takes the track that the media is to be associated with, a time scale, and a DataRef to indicate where media samples can be stored.
Interestingly, although the edit methods this program uses are in the Track class, first I have to call Media.beginEdits( ) to inform the track's underlying media that it's about to get edited. Having done this, the program then can call Track.insertSegment() , which is identical to its low-level-editing Movie equivalent, taking a target track, source in and out times, and a destination-in time. Following this, the program calls movieChanged( ) on the movie controller to let it know that a change was made to the movie behind the controller's back.
The result is an additional audio track in the movie. If the user then flattens the movie and opens it up with QuickTime Player, a "Get Info" shows the extra audio track, as seen in Figure 3-6. In this case, I imported clips from an MPEG-4 file and added an MP3 soundtrack.
Note
No, I'm not swearing in this filename. I combined a video of my son in an inflatable boat with an MP3 of a song called "Dam Dariram" from the video game "Dance Dance Revolution"; thus, "dam-boat.mov".
Figure 3-6. QuickTime Player "Get Info" for movie with multiple audio tracks
Image:QuickTime for Java: A Developer's Notebook I 3 tt54.png
What about...
...that crazy-looking new DataRef (new QTHandle( )) parameter in the SoundMedia constructor? OK, scary edge case—here's the story. Zoom out for a second: movies have tracks, tracks have media, media have samples. Those samples need to live somewhere. It's not a problem when you open a movie from disk, but when you create new media in a new movie, QuickTime has no idea where it's supposed to put any samples that you add, whether by way of inserting segments from other tracks or by adding individual samples one by one (which will be covered in Chapters Chapter 7, Chapter 8, and Chapter 9). So, this example uses the SoundMedia constructor that takes a DataRef, which represents a location to store the samples. This DataRef can be practically anything, even a zero-length buffer in memory, which is pretty much what this example passes in by constructing a new DataRef out of a new, empty QTHandle.
Tip
For more on this icky little gotcha, and if you don't mind a C-oriented technote, see "BeginMediaEdits -2050 badDataRefIndex error after calling NewMovie" at.
Also, what about the control bar? It tells the user nothing about the tracks in the movie. You're absolutely right. Being playback-oriented, the provided GUI is weak for editing movies, and utterly useless for editing tracks. It gives the user no idea how many tracks a movie has, where there's video without sound or vice versa, etc. Moreover, there's no default widget in QTJ to replace it. If you want to provide track-oriented editing, you'll need to develop your own GUI components to display tracks and their contents. I haven't provided one here, because the appearance and behavior of such a component would vary wildly with the kind of application it was needed for (a home movie editor, an MP3 playlist builder, etc.) and because it easily could contain more than 1,000 lines of AWT code with maybe a dozen lines of QuickTime...not exactly ideal for the format of this book.
What about other track-editing methods? Fortunately, many of the concepts from the low-level Movie editing lab from earlier in the chapter apply to tracks. Along with Track.insertSegment() are a deleteSegment() and a scaleSegment() that work like their Movie equivalents. The insertEmptySegment() does what its name implies, and could be useful for building a track in nonconsecutive segments. There's also a Track.insertMedia() that will be used in later chapters to build up a Media object from raw samples.
As for how the tracks relate to their parent movies, this example uses Movie.newTrack( ) , though it also is possible to use addEmptyTrack() , which takes a prototype track and a DataRef. Tracks can be removed with Movie.removeTrack( ) and temporarily turned on and off with Track.setEnabled() . | http://commons.oreilly.com/wiki/index.php?title=QuickTime_for_Java:_A_Developer's_Notebook/Editing_Movies&diff=4306&oldid=4155 | CC-MAIN-2014-52 | refinedweb | 6,606 | 56.05 |
I’.
Grow the nanobots up
Grow them in the cracks in the sidewalk
Wind the nanobots up
Wind them up and wish them away
— They Might Be Giants
What The Framework Does For You
The bots generated with this framework are little command-line apps that are meant to be invoked periodically (mine are run once a minute by
cron) and each time they run:
- decide whether or not to generate a tweet (and if so, generate it)
- look to see if any users have @mentioned it, and if so, do something (by default, a bot written with
nanobotwill like any tweet that mentions it)
- handle any events that were received via Twitter’s streaming API
- send any tweets that were created as a result of the above actions.
The core logic that runs all of those steps remains constant, so custom bots only need to add the small bits that make them unique.
What You Need to Add
Factoring all of the common logic out into a framework means that your bot only needs to implement a small bit of code that makes it do its unique thing. The
Tockbot demo that’s included here is only around 100 lines long, and about 25% of those lines are comments and docstrings.
Twitter Setup
Obviously, before you can do anything, you need to create a new Twitter account and Twitter application for your bot to use. The instructions regarding this in my original post are still on point here, so consult that for more information.
Derive a Python class from nanobot.Nanobot
The
nanobot code comes with a quick demo bot called
TockBot. It will generate a tweet each hour at the top of the hour, and will reply to any @mention that includes the word ‘tick’ with the current time.
Create Tweets
First, the bot needs to decide if it should generate a tweet, which happens in the method
IsReadyForUpdate(). There’s a default version of this built into the framework that uses the logic from my tmbotg bot:
If:
- a random floating point number is less than a configurable tweet probability value
- …and it’s been at least a configurable number of minutes since our last tweet (don’t send them too frequently)
- …or it’s been more than some configurable number of minutes since our last tweet (don’t stay quiet for too long)
- or the bot was launched with the
--forcecommand line argument
…then we generate a new tweet.
The Tockbot has its own logic: If the current minute is zero (top of the hour) or we were invoked with
--force, it’s time to tweet.
def IsReadyForUpdate(self): ''' Overridden from base class. We're ready to create an update when it's the top of the hour, or if the user is forcing us to tweet. ''' now = datetime.now() return 0 == now.minute
When it’s time to make a tweet, the framework will call your
CreateUpdateTweet() method, which does whatever it needs to do to create some text that’s a tweetable length, and then adds a dict with that text as the value for a key named
status to the object’s list of tweets:
def CreateUpdateTweet(self): ''' Chime the clock! ''' now = datetime.now() # figure out how many times to chime; 1x per hour. chimeCount = (now.hour % 12) or 12 # create the message to tweet, repeating the chime # NowString() defined elsewhere, it just formats the # current time. msg = "{0}\n\n{1}".format("\n".join(["BONG"] * chimeCount), NowString(now)) # add the message to the end of the tweets list self.tweets.append({'status': msg}) # add an entry to the log file. self.Log("Tweet", ["{} o'clock".format(chimeCount)])
Handle a Mention
If other users @mention your bot’s account, the framework will pass a dict with the data representing that mention to your
HandleOneMention() method. The default handler for this just likes/favorites each mention, but we’d like to do more here. If someone mentions the Tockbot and includes the word ‘tick’, we’ll also reply to them with the current time:
def HandleOneMention(self, mention): ''' Like the tweet that mentions us. If the word 'tick' appears in that tweet, also reply with the current time. ''' who = mention['user']['screen_name'] text = mention['text'] theId = mention['id_str'] eventType = "Mention" # we favorite every mention that we see if self.debug: print "Faving tweet {0} by {1}:\n {2}".format(theId, who, text.encode("utf-8")) else: self.twitter.create_favorite(id=theId) if 'tick' in text.lower(): # reply to them with the current time. now = datetime.now() replyMsg = "@{0} {1}".format(who, NowString(now)) if self.debug: print "REPLY: {}".format(replyMsg) else: self.tweets.append({'status': replyMsg, 'in_reply_to_status_id': theId}) eventType = "Reply" self.Log(eventType, [who])
Handle Streaming API Events
As we discussed in this earlier post, much of the data that Twitter provides isn’t available through their REST API, only via a real-time streaming API. If you launch an instance of your bot using the
--stream command line argument, it will connect to that streaming API and sit forever waiting for streaming events to be sent for it to process.
When this stream-handling instance of the bot receives a message containing event data, it writes the message out into a file with a unique name and the extension
.stream. The next time that your bot is launched periodically, part of its general processing flow will look to see if there are any files with that extension; if there are, it attempts to find a handler function in your bot for that event type, and if it finds one, will call that handler with the event data.
The event types that Twitter supports at the time of writing are: access_revoked, block, unblock, favorite, unfavorite, follow, unfollow, list_created, list_destroyed, list_updated, list_member_added, list_member_removed, list_user_subscribed, list_user_unsubscribed, quoted_tweet, and user_update. You can find details on the purpose and content of each at.
Your handler method needs to be named using the pattern
Handle_[event_name]. Our example bot only looks for events of type
quoted_tweet, which we treat like an @mention — if someone quotes one of our tweets, we like that tweet:
def Handle_quoted_tweet(self, data): '''Like any tweet that quotes us. ''' tweetId = data['target_object']['id_str'] if self.debug: print "Faving quoted tweet {0}" .format(tweetId) else: try: self.twitter.create_favorite(id=tweetId) except TwythonError as e: self.Log("EXCEPTION", str(e))
Customize Your Expected Configuration
Nanobot bots get their runtime configuration from a text file containing JSON data. To simplify the creation of this file, if you run a bot that can’t load its config file, it will create a new file that has placeholder data in the correct format so you can edit an existing file instead of worrying about creating a new one with all the correct key names, etc. To add default key/value pairs that are specific to your bot, override the base class method
GetDefaultConfigOptions() to return a dict that contains your default configuration data.
Add Custom Command Line Flags
The framework supports three command line options:
--debug: Don’t generate any twitter output, just print to the console for debugging.
--force: Override the bot’s logic to decide whether to generate a tweet.
--stream: Make this instance of the bot listen to the Twitter streaming API instead of executing its regular logic.
If your bot needs additional command line arguments, create a function that accepts an
argparser.ArgumentParser object, and pass it to the
GetBotArguments() function as part of startup. Your function can call any of the methods that the
ArgumentParser object supports.
Get Your Bot Running
The framework defines a code>@classmethod called
CreateAndRun() that accepts a dict of arguments to pass to the bot, and attempts to launch it. If you’ve created a function to add additional command line arguments, the source file for you bot will end with code like:
def MyArgAdder(parser): parser.add_argument(...) if __name__ == "__main__": MyCoolBot.CreateAndRun(GetBotArguments(MyArgAdder))
If you don’t need any additional arguments, just omit that
argAdder bit.
Why You Might Not Want To Use This
I don’t know that I’d use this framework as-is to implement a realtime conversational interface like the cool kids are talking about this year. The challenges there are a little different than I’ve focused on here.
I definitely wouldn’t use this framework to write bots intended to spam people with unwanted sales pitches — not because it’s not suited to that, but because PLEASE DON’T DO THAT WITH MY FRAMEWORK.
I don’t know that I agree 100% with all of the points in this post by Darius Kazemi on bot ethics, but if you’re going to write a bot, please consider the impact that it might have. Make something that amuses and delights people, not something that annoys people without them soliciting it.
Get The Code
Once you’ve cloned the repository, you can install it locally (probably into a
virtualenv) with the standard Python
python setup.py install. It’s not yet available through the Cheeseshop, but I’ll probably upload it there at some point so it can be installed properly through
pip.
If you build something using this, please reach out on twitter @bgporter and point me at your bot. I’ve enjoyed seeing what other folks have already done with earlier versions of this that weren’t as easy to work with.
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. | https://artandlogic.com/2016/06/nanobot-tiny-little-twitterbot-framework/?shared=email&msg=fail | CC-MAIN-2021-43 | refinedweb | 1,586 | 58.92 |
I realize I'm being vague,
but generally what causes the board to up and restart?
What's an "L" light?
I can get more specific as necessary
#include <LedDisplay.h>// Define pins for the LED display. #define dataPin 6 // connects to the display's data in#define registerSelect 7 // the display's register select pin #define clockPin 8 // the display's clock pin#define enable 9 // the display's chip enable pin#define reset 10 // the display's reset pin#define displayLength 8 // number of characters in the display// create an instance of the LED display library:LedDisplay d1 = LedDisplay(dataPin, registerSelect, clockPin, enable, reset, displayLength);int brightness = 15; // screen brightnessvoid setup() { // initialize the display library: d1.begin(); // set the brightness of the display: d1.setBrightness(brightness); }void loop() {//set cursor to 0 position d1.home();//print cursor position in each respective position for (int i = 0; i < 8; i++){ d1.print(d1.getCursor()); delay(500); }//clear display d1.clear(); }
Pin 13's built-in LED has the legend "L" printed next to it on the PCB.
shouldn't print be able to handle this?
any idea why print(); would fail, but sending each character to write() explicitly would work?
QuotePin 13's built-in LED has the legend "L" printed next to it on the PCB.Must be young, us old beggers with aging eyes can't see such detail............
Please enter a valid email to subscribe
We need to confirm your email address.
To complete the subscription, please click the link in the
Thank you for subscribing!
Arduino
via Egeo 16
Torino, 10131
Italy | http://forum.arduino.cc/index.php?topic=48004.msg344499 | CC-MAIN-2016-07 | refinedweb | 267 | 65.93 |
This article is outdated and may not work correctly for current operating systems or software.
Composer is an extremely popular PHP management tool for dependencies, used to make installation and updates easier for projects. It also checks what other packages a project needs, and automatically obtains them for you, with the correct version.
In this doc, we will install and start using Composer on a Vultr Ubuntu 14.04 VPS.
Sudoaccess to that VPS.
First of all, we must ensure our VPS has all of Composer's requirements successfully installed and working.
Update the package list.
sudo apt-get update
Next, actually install Composer's requirements. You'll need
curl for the download, and
php5-cli for the installation and usage of it.
git is also used by Composer for project requirement downloads.
Install the requirements.
sudo apt-get install curl php5-cli git
Installing Composer is very simple.
curl -sS | sudo php -- --install-dir=/usr/local/bin --filename=composer
That downloads and installs Composer as a global command, called
composer, located in
/usr/local/bin. You will get this output.
#!/usr/bin/env php All settings correct for using Composer Downloading... Composer successfully installed to: /usr/local/bin/composer Use it: php /usr/local/bin/composer
Run the following to test the installation.
composer
The output will be as follows.
______ / ____/___ ____ ___ ____ ____ ________ _____ / / / __ \/ __ `__ \/ __ \/ __ \/ ___/ _ \/ ___/ / /___/ /_/ / / / / / / /_/ / /_/ (__ ) __/ / \____/\____/_/ /_/ /_/ .___/\____/____/\___/_/ /_/ Composer version 1.0-dev (9859859f1082d94e546aa75746867df127aa0d9e) 2015-08-17 14:57:00 Usage: command [options] [arguments]
To use Composer, you need a file called
composer.json to tell Composer what requirements your project has and what version of those requirements to install. Don't create this manually to avoid doing something incorrectly - Composer makes the file for you when you add dependencies. Additional dependencies are also added automatically.
To use Composer for dependency installations:
composer requireto include and install the dependency.
We will now go through this process with a simple example app, which will take a sentence and make it a friendly string, called a slug. This is used frequently to convert page names to URLs, to make it easier to generate URLs and paths.
We will start by making a folder for the app, called
slugit.
mkdir ~/slugit cd ~/slugit
We will now go on
packagist.org and find a package to help generate slugs. Searching for
slug on Packagist should show some of these packages.
easy-slug/easy-slug, muffin/slug, ddd/slug, zelenin/slug, webcastle/slug, anomaly/slug-field_type
We need to find a string to slug converter, so
cocur/slugify looks good, with many installations and stars.
After choosing the package, we run
composer require to include it as a dependency, generate
composer.json, and install it.
composer require cocur/slugify
As seen in the output generated, Composer selected the most recent package version and used it. Checking
~/slugit, you should see 2 files,
composer.lock and
composer.json, plus a folder named
vendor.
composer.lock is used to store info about package versions, and keep them the same.
The
vendor folder is used to install the dependencies. Do not commit this folder into a Git repository or GitHub.
If a project you've download already contains
composer.json, use
composer install to download its dependencies.
If you check what
composer.json includes, you should see something similar to this block.
{ "require": { "cocur/slugify": "^1.2" } }
Composer has many different formats, and constraints, to define a package's version, to allow flexibility coupled with stability.
2.0.
^before a version number makes that version the minimum, and allows all versions below
You shouldn't normally need to change version constraints, but if you do, check Composer's official documentation for more information and guidelines on how it all works.
Composer provides an autoload script, which makes it much easier to work with your dependencies and namespaces.
Just include
vendor/autoload.php in your PHP before any class instantiation.
Back to our
slugit example. Let's create a test script, called
example.php, using
cocur/slugify.
vim example.php
Put the following into
example.php.
<?php require __DIR__ . '/vendor/autoload.php'; use Cocur\Slugify\Slugify; $slugify = new Slugify(); echo $slugify->slugify('Hello World, this is a long sentence and I need to make a slug from it!');
Run the script.
php example.php
It will output the following text:
hello-world-this-is-a-long-sentence-and-i-need-to-make-a-slug-from-it
To update project dependencies.
composer update
If updates are found, and compatible with the constraint given in
composer.json, it'll replace the previous version and update
composer.lock.
To update one or more specific libraries.
composer update vendor1/package1 vendor2/package2
In this tutorial, we went through installing, configuring, and an example of using Composer for PHP application dependency management. | https://www.vultr.com/docs/downloading-installing-and-using-composer-on-ubuntu-14-04/ | CC-MAIN-2022-21 | refinedweb | 825 | 51.14 |
I didn't get any response on debian-mentors, so I repost this question here: Situation: Two source packages collide in the namespace. The second one gets rather awkward name. Later, the first package dies and is removed from unstable, testing, and (after release) stable, but still remains in oldstable. Question: Can the second source package take the first source package's (less awkward) name, or does it have to wait until oldstable is archived? Concrete example: lsh/lsh-utils (see bug 340354). -- Magnus Holmgren holmgren@lysator.liu.se (No Cc of list mail needed, thanks)
Attachment:
pgpDevft5M3vW.pgp
Description: PGP signature | https://lists.debian.org/debian-devel/2007/05/msg01141.html | CC-MAIN-2015-18 | refinedweb | 102 | 68.16 |
In C programming, data types are declarations for variables. This determines the type and size of data associated with variables. For example,
int myVar;
Here, myVar is a variable of
int (integer) type. The size of
int is 4 bytes.
Basic types
Here's a table containing commonly used types in C programming for quick access.
int
Integers are whole numbers that can have both zero, positive and negative values but no decimal values. For example,
0,
-5,
10
We can use
int for declaring an integer variable.
int id;
Here, id is a variable of type integer.
You can declare multiple variables at once in C programming. For example,
int id, age;
The size of
int is usually 4 bytes (32 bits). And, it can take
232 distinct states from
-2147483648 to
2147483647.
float and double
float and
double are used to hold real numbers.
float salary; double price;
In C, floating-point numbers can also be represented in exponential. For example,
float normalizationFactor = 22.442e2;
What's the difference between
float and
double?
The size of
float (single precision float data type) is 4 bytes. And the size of
double (double precision float data type) is 8 bytes.
char
Keyword
char is used for declaring character type variables. For example,
char test = 'h';
The size of the character variable is 1 byte.
void
void is an incomplete type. It means "nothing" or "no type". You can think of void as absent.
For example, if a function is not returning anything, its return type should be
void.
Note that, you cannot create variables of
void type.
short and long
If you need to use a large number, you can use a type specifier
long. Here's how:
long a; long long b; long double c;
Here variables a and b can store integer values. And, c can store a floating-point number.
If you are sure, only a small integer (
[−32,767, +32,767] range) will be used, you can use
short.
short d;
You can always check the size of a variable using the
sizeof() operator.
#include <stdio.h> int main() { short a; long b; long long c; long double d; printf("size of short = %d bytes\n", sizeof(a)); printf("size of long = %d bytes\n", sizeof(b)); printf("size of long long = %d bytes\n", sizeof(c)); printf("size of long double= %d bytes\n", sizeof(d)); return 0; }
signed and unsigned:
- bool Type
- Enumerated type
- Complex types
Derived Data Types
Data types that are derived from fundamental data types are derived types. For example: arrays, pointers, function types, structures, etc.
We will learn about these derived data types in later tutorials. | https://cdn.programiz.com/c-programming/c-data-types | CC-MAIN-2020-40 | refinedweb | 446 | 74.9 |
The Pittsburgh Penguins (colloquially known as the Pens) are a professional ice hockey team based in Pittsburgh, Pennsylvania. They compete in the National Hockey League (NHL) as a member of the Metropolitan Division of the Eastern Conference.
Founded during the 1967 expansion, the Penguins the salary cap era. Several former members of the team. The club are presently affiliated with two minor league teams, the Wilkes-Barre/Scranton Penguins of the American Hockey League, and the Wheeling Nailers of the ECHL.
Team history
Early years (1967–1984).9 million today) for their entry and $750,000 ($5.8]),[6][7] a logo was chosen that had a penguin in front of a triangle, which symbolized the "Golden Triangle" of downtown Pittsburgh."[6][8] The Penguins' first general manager, Jack Riley, opened the first pre-season camp for the franchise in Brantford, Ontario,.[10].[5].
Triumph of playoff berths and tragedy of Briere (1969–1974).[5].[11] Around the same time, rumors had begun to circulate that the Penguins and California Golden Seals were to be relocated to Seattle and Denver respectively, the two cities that were to have been the sites of an expansion for the 1976–77 season.[12] (1974–1982).[5]..
Lemieux–Jagr era (1984–2005).[14]. Penguins, for the second time in a row, missed the playoffs by one game..[10]
Back-to-back Stanley Cup titles (1989–1997).[17] The following season, the team lost coach Bob Johnson to cancer, and Scotty Bowman took over as coach. Under Bowman, they swept the Chicago Blackhawks to repeat as Stanley Cup champions in 1991–92.[5][10]
Cancer revisited the Penguins in 1993 when Lemieux was tragically diagnosed with Hodgkin lymphoma..[10].[10]
Lemieux's retirement and return (1997–2001)
The franchise was rocketed forward..[5]
Rebuilding (2001–2005)
The Penguins' attendance had dwindled in the late 1990s. In 1998–99, the Penguins had an average attendance of 14,825 at home games, the lowest it had been since Lemieux's rookie year.[18] Reducing revenue on top of the previous bankruptcy necessitated salary shedding. The biggest salary move was the trading of superstar Jaromir Jagr to the Washington Capitals in the summer of 2001. The Penguins missed the playoffs for the first time in 12 years in 2002, finishing in a tie for third-to-last in the conference. The following season they finished second-last. In the 2003 NHL Entry Draft, the Penguins selected goaltender Marc-Andre Fleury with the first overall pick.[19][20]
The 2003–04 season was an ordeal with Lemieux missing all but 24 regular-season games with a hip injury, and attendance dipping to an average of 11,877 (the lowest average out of any NHL team), with just one sellout.[18] As the season progressed, the Penguins signed new head coach (and former Penguins player and commentator) Eddie Olczyk and opted not to include Fleury in the lineup for the bulk of the (Alexander Ovechkin), which went to the Washington Capitals. However, Ovechkin's countryman, center Evgeni Malkin, was similarly highly regarded, and Pittsburgh took him with the second overall pick. However, a transfer dispute between the NHL and the International Ice Hockey Federation (IIHF) delayed his Pittsburgh debut.[21]
By this point, the Penguins had collapsed financially since the Stanley Cup-winning years of the early 1990s. Their home venue, the Civic Arena, had become the oldest.[22] The 2004–05 NHL season was canceled due to a lockout. One of the many reasons for the lockout included disagreements on the resolution of the financial struggles of teams like the Penguins and the Ottawa Senators, which had filed for bankruptcy protection.[23] In the midst of the lockout, the Penguins dispersed between the club's American Hockey League (AHL) affiliate, the Wilkes-Barre/Scranton Penguins, and to European leagues.[5]
Crosby–Malkin era (2005–present)
With the lockout resolved in 2005, the NHL organized an unprecedented draft lottery to set the 2005 NHL Entry Draft selection order. The draft lottery, which was held behind closed doors in a "secure location", resulted in the Penguins being awarded the first overall pick.[24] This was the second time in NHL history the Penguins had won the first overall pick outright, their first overall selection in the 2003 NHL Entry Draft having come as the result of a trade with the Florida Panthers.[25][26] The draft that year was being touted as having the greatest rookie class since Lemieux, himself, had been drafted. Quebec Major Junior Hockey League (QMJHL) superstar Sidney Crosby (who had been training with Lemieux over the summer)[24].[10].[18].[27][28]
Despite the team's various struggles, Crosby established himself as a star in the league,.
Runner–up and third Stanley Cup title (2006–2009).[29].[30].[31] Malkin was awarded the Conn Smythe Trophy as the MVP of the playoffs.[10]
Contenders and new arena (2009–2015)
The Penguins opened the 2009–10 season against the New York Rangers. It was the last home opener at the Mellon Arena and it was also the night the team raised the Stanley Cup championship banner to the arena's rafters.[32]. On February 11, 2011, the Pittsburgh Penguins–New York Islanders brawl took place.[33].[34].[35] Malkin was later awarded the Hart Memorial Trophy and Lester B. Pearson award. Following the Penguins' disappointing playoff exit, general manager Ray Shero made sweeping changes to the team at the 2012 NHL Entry Draft for the upcoming 2012–13 season.[36][37]
During the lockout-shortened 2012–13 season, the Penguins again fought through serious injury..[38]. The Pens would lose in five games to the New York Rangers in the first round of the playoffs. In the off-season, Rutherford traded a number of players and picks to acquire Phil Kessel, Nick Bonino and Matt Cullen.[39][40]
Back-to-back Stanley Cup titles and 50th anniversary (2015–2017)
After acquiring the star winger Kessel, the Penguins had high expectations for the 2015–16 season. But.[41] This move was followed by a series of trades by Jim Rutherford.[42][43].[44] On June 12, 2016, the Penguins defeated the Sharks in a 4–2 series to win their fourth Stanley Cup title. Captain Sidney Crosby was awarded the Conn Smythe Trophy.[45]
The Penguins opened their 50th anniversary season in the NHL as defending Stanley Cup champions, raising their commemorative banner on October 13, 2016, in a shootout victory over Washington.[46].[5]
After Cup titles (2017–present). In the following season which was shortened by the COVID-19 pandemic, the Penguins advanced to the 2020 playoffs, but were defeated by the Montreal Canadiens in the Qualifying Round.[47] On February 9, 2021, the Penguins named Ron Hextall as their new general manager, after Jim Rutherford resigned from his post on January 27, 2021, due to personal reasons. Brian Burke was hired as president of hockey operations.[48][49] On February 21, 2021, Crosby became the first player to reach 1,000 NHL games for the Pens.[50]
Team culture
Fanbase,[51] once noted in his autobiography that upon his arrival at KDKA-TV from WTAE-TV in 1985 that the station cared more about the Pittsburgh Spirit of the Major Indoor Soccer League than the Penguins.[52].[53])[54] ranking much lower on the list from its peers. The Penguins popularity has at times even rivaled that of the Steelers at the local level.[55]
Rivalries
Philadelphia Flyers
Considered by some to be the best rivalry in the NHL,[56][57][58].[59] However, the Penguins eliminated the Flyers from the playoffs in 2008 and 2009 and were eliminated from the playoffs in 2012 by the Flyers, strengthening the rivalry.[60]).
Washington Capitals
The two teams have faced-off 11 times in the playoffs, with the Penguins winning nine of the 11.[61][62][63]
Team information
Crest and sweater look. The circle encompassing the logo was later removed.[64] The team's colors were originally powder blue, navy blue, and white. The powder blue was changed to royal blue in 1973, but returned in 1977. The team adopted the current black and gold color scheme in 1980,[65] hence beginning the black and gold sports tradition in the city.[64]
This would remain unchanged until the 1992–93 season, when the team unveiled new uniforms and a new logo. The logo featured a modern-looking, streamlined penguin.[66] Although the "Robo-Penguin" logo survived in various forms for 15 years, it received mixed responses from fans and was never as accepted as the "skating penguin" logo. When Mario Lemieux purchased the team, he added back the "skating penguin" logo.[67] After winning their second Stanley Cup in 1992, the team redesigned their uniforms and introduced the "flying penguin" logo. The team's away uniforms were. When the new jerseys were unveiled for the 2007–08 season leaguewide, the Penguins made major striping pattern changes and removed the "flying penguin" logo from the shoulders. They also added a "Pittsburgh 250" gold circular patch to the shoulders to commemorate the 250th anniversary of the city of Pittsburgh.[64]
While the Penguins have worn their black jersey at home since the league made the initiative to do so starting with the 2003–04 NHL season, the team wore their white jerseys in some home games during the 2007–08 season, as well as wearing their powder blue, 1968–1972 "throwbacks" against the Buffalo Sabres in the 2008 NHL Winter Classic. This throwback was supposedly retired with the introduction of a new dark blue third jersey that made its debut at the 2011 NHL Winter Classic at Heinz Field,[68] but it was worn at several games after the 2011 Winter Classic. For the 2011–12 season, the 2011 Winter Classic jersey was the team's official third uniform, with the 2008 Winter Classic uniform being retired.[69] Called the "Blue Jerseys of Doom" by the Pittsburgh Tribune-Review, the alternate jerseys were worn when Sidney Crosby sustained a broken jaw injury and when he received a concussion in the 2011 Winter Classic. Evgeni Malkin was also concussed during a game when the Penguins donned the alternate uniforms.[64][70][71]
In 2014, the Penguins released their new alternate uniforms. The new black uniforms are throwbacks to the early part of Lemieux's playing career, emulating the uniforms worn by the Penguins' 1991 and 1992 Cup-winning teams. The new alternate uniform featured "Pittsburgh gold", the particular shade of gold which had been retired when the Penguins switched to the metallic gold full-time in 2002.[72] After the 2015–16 season, the team returned to using the "Pittsburgh gold" jerseys as the primary uniforms. The new home and away "Pittsburgh gold" jerseys were unveiled in 2016 and first presented at the 2016 NHL Entry Draft. A commemorative patch was added to the uniforms throughout the 2016–17 season to celebrate the team's 50th anniversary.[73] During the 2017 NHL Stadium Series against the archrival Philadelphia Flyers, the Penguins wore a special gold uniform featuring military-inspired lettering, a "City of Champions" patch and a variation of the "skating penguin" logo.[74]
Media
Radio
The Penguins currently have their radio home on WXDX-FM and their television home on AT&T SportsNet Pittsburgh. The Pittsburgh Penguins Radio Network consists of a total of 34 stations in four states.[75] Twenty three of these are in Pennsylvania, four in West Virginia, three in Ohio, and three in Maryland. The network also features an FM High-Definition station in Pittsburgh.
Broadcasters
The Penguins were broadcast by local ABC affiliate WTAE-TV during the 1967–68 season, with station Sports Director Ed Conway handling the play-by-play during both the television and radio broadcasts.[76][77]. Lange and Steigerwald remained a constant in the broadcast booth from 1985 until 1999.
With Steigerwald's departure in 1999, Mike Lange shared the broadcast booth with former Penguins' defenseman Peter Taglianetti. Taglianetti remained in the position for one season before being replaced by Eddie.[78] With Olczyk's vacancy, the Penguins hired Bob Errey as their new color commentator for the start of the 2003–04.
Arenas;[79] the Philadelphia 76ers also used the Civic Arena as a second home in the early 1970s).[80],[81].[82] The twin rink facility replaced both the IceoPlex at Southpointe and the 84 Lumber Arena as the Penguins' regular practice facility, freeing up the Consol Energy Center for other events on days the Penguins are not scheduled to play.[83].[84][85]
Minor league affiliates.[86]
Season-by-season record
This is a partial list of the last five seasons completed by the Penguins.
Note: GP = Games played, W = Wins, L = Losses, T = Ties, OTL = Overtime Losses, Pts = Points, GF = Goals for, GA = Goals against
Players and personnel
Current roster
Updated April 7, 2021[87][88]
Honored members
Retired numbers
- Notes
-.[91]
- The NHL retired Wayne Gretzky's No. 99 for all its member teams at the 2000 NHL All-Star Game.[92]
Hockey Hall of Fame
The Pittsburgh Penguins presently acknowledge an affiliation with a number of inductees to the Hockey Hall of Fame. Inductees affiliated with the Penguins include 14 former players and five builders of the sport.[a][93].[94] In 2009, Dave Molinari, a sports journalist for the Pittsburgh Post-Gazette was awarded the Elmer Ferguson Memorial Award from the Hall of Fame.[95]
Team captains, 1995–1997, 2001–2006
- Ron Francis, 1995,[96] 1997–1998
- Jaromir Jagr, 1998–2001
- Sidney Crosby, 2007–present
Franchise individual records
These are the top-ten point-scorers in franchise history.[97] Figures are updated after each completed NHL regular season.
- * – current Penguins player
Franchise goaltending leaders
These are the top-ten goaltenders in franchise history by wins.[98] Figures are updated after each completed NHL regular season.
- * – current Penguins player
Front office and coaching staff
- Executive Committee
- Owner(s) – Mario Lemieux, Ron Burkle
- Chairman – Mario Lemieux
- President/Chief Executive Officer – David Morehouse
- Hockey Operations
- President of Hockey Operations - Brian Burke
- General Manager – Ron Hextall
- Assistant General Manager – Patrik Allvin
- Director of Hockey Operations and Hockey Research – Sam Ventura
- Hockey Operations Assistant – Erik Heasley
- Hockey Operations Advisor - Trevor Daley
- Head Coach – Mike Sullivan
- Assistant Coach – Todd Reirden
- Assistant Coach – Mike Vellucci
-
- Scouting
- Director of Player Personnel – Derek Clancey
- Professional Scout – Craig Patrick
- Director of Professional Scouting – Ryan Bowness
In the community
The Pittsburgh Penguins Foundation conducts numerous community activities to support both youth and families through hockey education and charity assistance.
References
Footnotes
- ^ The Penguins also recognizes an affiliation with Hall of Famer Red Kelly, who served as the Penguins' head coach from 1969–73. However, he was inducted in the Hockey Hall of Fame in the players' category in 1969, not its builder category; and had never played for the Penguins. However, the team continues to acknowledge an affiliation as a Penguins Hall of Famer.[93]
Citations
- ^ "Penguins Make The Move to 'Pittsburgh Gold'". PittsburghPenguins.com. NHL Enterprises, L.P. June 24, 2016. Archived from the original on April 18, 2017. Retrieved April 18, 2017.
- ^ "Penguins Uniform History". PittsburghPenguins.com. NHL Enterprises, L.P. Archived from the original on May 9, 2018. Retrieved May 8, 2018.
- ^ Pickens, Pat (June 24, 2016). "Penguins go back to Pittsburgh gold in uniforms". NHL.com. NHL Enterprises, L.P. Archived from the original on January 31, 2021. Retrieved January 25, 2021.
- ^ a b "Steel City Legend: Sen. Jack McGregor". Pittsburgh Hockey.net. Archived from the original on 2017-06-30. Retrieved 2012-05-01.
- ^ a b c d e f g h i j "Timeline: The History of the Pittsburgh Penguins". pittsburghmagazine.com. 2016. Archived from the original on 2021-02-04. Retrieved 2021-03-18.
- ^ a b Stainkamp, Michael (August 25, 2010). "A brief history: Pittsburgh Penguins". National Hockey League. Archived from the original on June 2, 2016. Retrieved April 23, 2016.
- ^ "Why the name Pittsburgh Penguins?". LetsGoPens.com. September 19, 2002. Archived from the original on March 3, 2016. Retrieved April 23, 2016.
- ^ "Uniform History". Pittsburgh Penguins. Archived from the original on April 28, 2016. Retrieved April 23, 2016.
- ^ "Penguins Start Training Sessions". Pittsburgh Post-Gazette. September 14, 1967.
- ^ a b c d e f g h "A brief history: Pittsburgh Penguins". NHL.com. 2010. Archived from the original on 2016-06-02. Retrieved 2016-04-23.
- ^ "Penguins File For Chapter 11". CBS News. 1998-10-14. Archived from the original on 2019-07-25. Archived 2010-02-01 at the Wayback Machine Pittsburgh Post-Gazette
- ^ Finder: Lessons can be learned from Angotti and 1984 Archived 2011-03-11 at the Wayback Machine Pittsburgh Post-Gazette
- ^ "It's a Great Day for Hockey: Remembering "Badger" Bob Johnson". PBleacher Report. Retrieved 2021-03-25.
- ^ "Pittsburgh Hockey History". PenguinsJersey.com. Archived from the original on 2008-07-06. Retrieved 2008-06-24.
- ^ a b c Hockey Central Archived 2012-06-09 at the Wayback Machine Penguins attendance records
- ^ "Fleury has history against him". Pittsburgh Tribune-Review. Archived from the original on June 15, 2009. Retrieved November 25, 2008.
- ^ "Fleury shines debut; Penguins still lose". Canadian Broadcasting Corporation. October 10, 2003. Retrieved November 25, 2008.
- ^ "NHL Entry Draft Year by Year Results". National Hockey League.
- ^ "It Was a Great Night For Hockey – in Kansas City". National Hockey League. Archived from the original on 2011-12-05. Retrieved 2012-05-29.
- ^ "Judge grants Ottawa Senators bankruptcy protection". Archived from the original on 2012-01-10. Retrieved 2012-05-29.
- ^ a b Burnside, Scott (July 22, 2005). "Penguins, league hit jackpot with lottery". ESPN.com. Archived from the original on March 18, 2021. Retrieved April 9, 2019.
- ^ "NHL Draft Lottery History". TSN.ca. April 8, 2019. Archived from the original on April 8, 2019. Retrieved April 9, 2019.
- ^ "NHL Entry and Amateur Draft History". Hockey-Reference.com. Archived from the original on March 31, 2019. Retrieved April 9, 2019.
- ^ "Lemieux announces retirement". ESPN. 2006-01-25. Archived from the original on 2006-02-10. Retrieved 2006-01-24.
- ^ Allen, Kevin (2006-01-25). "Lemieux says goodbye for final time". USA Today. Archived from the original on 2012-07-07. Retrieved 2017-09-08.
- ^ "Game Summary". National Hockey League. 2007-02-19. Archived from the original on 2009-06-11. Retrieved 2007-02-20.
- ^ "Penguins to open new arena in 2010–11 season". National Hockey League. 2007-08-02.[dead link]
- ^ Allen, Kevin (2009-06-13). "Penguins ride Talbot to 2–1 Game 7 win over Red Wings". USA Today. Archived from the original on 2009-06-16. Retrieved 2009-07-02.
- ^ Aaron Beard (2010-10-14). "Penguins beat Hurricanes 3–2 in shootout". Yahoo! Sports. Archived from the original on 2009-10-18. Retrieved 2010-12-29.
- ^ "NHL levies suspensions to Penguins and Isles". National Hockey League. 2011-02-12. Retrieved 2011-03-04.
- ^ "Home ice may be dividing line between Pens, Bolts". NHL.com. April 10, 2011. Retrieved April 12, 2011.
- ^ Gelston, Dan (April 23, 2012). "Penguins humbled, disappointed after being ushered from playoffs by rival Flyers". National Hockey League. Archived from the original on 2013-06-17. Retrieved 2012-06-23.
- ^ Masisak, Corey (June 22, 2012). "Penguins deal Jordan Staal to 'Canes". National Hockey League. Archived from the original on 2012-06-23.. Archived from the original on 2014-05-17. Retrieved 2014-05-16.
- ^ "Canucks acquire Sutter & 3rd rounder from Pens". Vancouver Canucks. Archived from the original on March 4, 2016. Retrieved July 28, 2015.
- ^ "Penguins sign Matt Cullen to 1-year deal". Pittsburgh Penguins. 2015-08-06. Archived from the original on 2015-08-09. Retrieved 2015-08-06.
- ^ "Mike Sullivan Named Head Coach of Pittsburgh Penguins". 2015-12-12. Archived from the original on 2015-12-15. Retrieved 2016-06-13.
- ^ "Penguins notebook: Scuderi traded to Blackhawks for Daley". Pittsburgh Tribune-Review. 2015-12-14. Archived from the original on 2015-12-16. Retrieved 2015-12-15.
- ^ "Penguins acquire forward Carl Hagelin from the Ducks". Pittsburgh Penguins. 2016-01-16. Archived from the original on 2016-01-17. Retrieved 2016-01-16.
- ^ "Penguins top Lightning 2–1 to advance to Stanley Cup final". Associated Press. 2016-05-26. Archived from the original on 2019-03-30. Retrieved 2019-01-27.
- ^ West, Bill (2016-06-13). "Penguins' Crosby tabbed as Conn Smythe winner". Pittsburgh Tribune-Review. Archived from the original on 2016-06-16. Retrieved 2016-06-14.
- ^ Mackey, Jason (October 14, 2016). "Arena, fans aglow as Penguins raise Cup banner". Pittsburgh Post-Gazette. Archived from the original on January 3, 2017. Retrieved January 2, 2017.
- ^ "Canadiens shut out Penguins in Game 4 of Cup Qualifiers, win series". NHL.com. August 7, 2020. Retrieved August 7, 2020.
- ^ "Penguins name Ron Hextall as GM, Brian Burke as President of Hockey Ops". NHL.com. 2020-02-09. Archived from the original on 2021-02-09. Retrieved 2020-02-22.
- ^ "Rutherford resigns as Penguins general manager". TSN.ca. January 27, 2021. Archived from the original on February 1, 2021. Retrieved February 2, 2021.
- ^ "Family, teammates and rivals congratulate Crosby for 1,000th game". National Hockey League. Archived from the original on February 21, 2021. Retrieved February 22, 2021.
- ^ Heyl, Eric (2017-05-16). "Paul Steigerwald Out, Steve Mears In On Penguins Broadcast Team". Patch. Archived from the original on 2019-01-30. Retrieved 2019-01-30.
- ^ Steigerwald, John (2010). Just Watch the Game. Renaissance News, Inc. p. 117.
- ^ Collier, Gene (2008-05-25). "This is Hockeytown?". Pittsburgh Post-Gazette. Archived from the original on 2011-10-15. Retrieved 2008-06-07.
- ^ Anderson, Shelly (2007-11-07). "Penguins Notebook: In this case, No. 20 ranking is huge". Pittsburgh Post-Gazette. Archived from the original on 2011-10-15. Retrieved 2008-06-07.
- ^ "Pittsburgh Sports Report – Can the Penguins challenge the Steelers for popularity in Pittsburgh? By John A. Phillips". pittsburghsportsreport.com. Archived from the original on 2016-08-08. Retrieved 2016-06-14.
- ^ "Flyers-Penguins Is The NHL's Best Rivalry". Deadspin. Archived from the original on August 27, 2016. Retrieved March 10, 2017.
- ^ ""I think this is currently the biggest rivalry in the NHL." – Four former NHL players talk Penguins vs. Flyers". Archived from the original on March 12, 2017. Retrieved March 10, 2017.
- ^ "Ranking the NHL's 10 Best Rivalries". Sports Illustrated. Archived from the original on October 26, 2016. Retrieved March 11, 2017.
- ^ "Philadelphia Flyers Head-to-Head Results". Hockey-Reference. Sports Reference LLC. Archived from the original on 2017-10-15. Retrieved 2018-01-26.
- ^ . Archived from the original on 2018-07-11. Retrieved 2010-01-03.
- ^ Kasan, Sam (April 26, 2017). "The Origin of the Pens-Caps Rivalry". NHL.com. NHL Enterprises, L. P. Retrieved May 28, 2017.
- ^ "Crosby Elevates Game to Lift Pens as Caps Disappear in Lopsided Game 7". ESPN. May 14, 2009. Retrieved June 6, 2017.
- ^ Polacek, Scott (May 12, 2017). "Penguins Beat Capitals in Decisive Game 7 Behind Marc-Andre Fleury Shutout". bleacherreport.com. Bleacher Report. Retrieved June 28, 2017.
- ^ a b c d "Penguins Uniform History". National Hockey League. August 13, 2018. Archived from the original on May 9, 2018. Retrieved May 8, 2018.
- ^ Stark, Logan (3 May 2018). "Hockey History: The Pirates - Pittsburgh's First NHL Team". PensBurgh. Archived from the original on 27 January 2019. Retrieved 26 January 2019.
- ^ "Integrated Marketing Agency – Pipitone Group". vwadesign.com. Archived from the original on 2013-11-10. Retrieved 2018-12-05.
- ^ "Skating penguin logo returns to center ice". Pittsburgh Post-Gazette. August 22, 2002. Archived from the original on August 15, 2016. Retrieved June 18, 2016.
- ^ Anderson, Shelly (May 29, 2010). "Heinz 'in' place to be Jan. 1". Pittsburgh Post-Gazette. Archived from the original on June 2, 2010. Retrieved May 29, 2010.
- ^ Molinari, Dave (September 13, 2011). "Penguins to use Winter Classic sweaters". Pittsburgh Post-Gazette. Archived from the original on March 18, 2021. Retrieved September 13, 2011.
- ^ Rossi, Rob (April 2, 2013). "Penguins notebook: Crosby returns home after jaw surgery". Pittsburgh Tribune-Review. Archived from the original on April 7, 2013. Retrieved April 5, 2013.
- ^ Rossi, Rob (April 4, 2013). "Penguins notebook: Crosby visits team, still no set return date". Pittsburgh Tribune-Review. Archived from the original on April 9, 2013. Retrieved April 5, 2013.
- ^ "Penguins to wear 'Pittsburgh gold' jerseys during playoff home games". Pittsburgh Post-Gazette. April 11, 2016. Archived from the original on February 2, 2017. Retrieved January 23, 2017.
- ^ Muir, Allan (June 24, 2016). "Pittsburgh Penguins reveal new jerseys for 50th season". Sports Illustrated. Archived from the original on January 25, 2017. Retrieved January 23, 2017.
- ^ "Penguins unveil Stadium Series jersey". Pittsburgh Penguins. Archived from the original on October 27, 2019. Retrieved October 27, 2019.
- ^ "Media Affiliates – Schedule". Pittsburgh Penguins. Archived from the original on 2012-03-09. Retrieved 2010-12-29.
- ^ "Sportscaster Ed Conway Dies", The Pittsburgh Press, Pittsburgh, p. 67, May 29, 1974, archived from the original on March 18, 2021, retrieved November 19, 2020
- ^ Neill, Barbara M. (July–August 2008), "Swimming Against The Tide: The Unpredictable Life of Eleanor Schano", Laurel Mountain Post, Pittsburgh, pp. 4–5, archived from the original on 2013-03-14, retrieved 2012-10-03
- ^ Jordan Palmer (April 15, 2003). "Penguins Fire Coach Rick Kehoe". kdsk.com. Retrieved October 2, 2012.
- ^ "Scenes from Pittsburgh". Cleveland Cavaliers. Archived from the original on 2013-12-03. Retrieved 2013-09-04.
- ^ "NBA.com: Regular Season Records: Field Goals". National Basketball Association. Archived from the original on 2013-07-24. Retrieved 2019-01-04.
- ^ "Pittsburgh Penguins Start With Many Goalies on Team". Observer-Reporter. September 13, 1967. p. 4, Section D. Archived from the original on March 18, 2021. Retrieved November 19, 2020.
- ^ Crechiolo, Michelle (August 14, 2015). "UPMC Lemieux Sports Complex Has Grand Opening". The Pittsburgh Penguins. National Hockey League. Archived from the original on August 17, 2015. Retrieved August 16, 2015.
- ^ "New site in Cranberry chosen for UPMC-Penguins joint development – Pittsburgh Post-Gazette". Pittsburgh Post-Gazette. Archived from the original on 2013-08-21. Retrieved 2013-09-04.
- ^ "All about NHL goal horns". Frozen Face Off. Archived from the original on February 21, 2014. Retrieved October 5, 2018.
- ^ "NHL Videos and Highlights". National Hockey League. Archived from the original on 2013-05-12. Retrieved 2013-11-04.
- ^ "Penguins, Nailers renew affiliation agreement – Pittsburgh Sporting News". 22 July 2015. Archived from the original on 14 June 2016. Retrieved 15 June 2016.
- ^ "Pittsburgh Penguins Roster". National Hockey League. Retrieved April 7, 2021.
- ^ "Pittsburgh Penguins Hockey Transactions". The Sports Network. Retrieved April 7, 2021.
- ^ Kovacevic, Dejan (January 6, 2001). "Penguins Report: 01/06/01". Pittsburgh Post-Gazette. Archived from the original on October 7, 2012. Retrieved March 3, 2013.
- ^ Robinson, Alan (November 20, 1997). "Lemieux Teary as His Jersey Retired". Pittsburgh Post-Gazette. Archived from the original on November 1, 2013. Retrieved March 3, 2013.
- ^ Deardo, Bryan (January 27, 2017). "Mario Lemieux: Jaromir Jagr's jersey will be retired". Pittsburgh Steelers. Retrieved April 8, 2019.[permanent dead link]
- ^ . Archived (PDF) from the original on April 29, 2018. Retrieved April 28, 2018.
- ^ "Foster Hewitt Memorial Award winners". Hockey Hall of Fame and Museum. 2018. Archived from the original on June 12, 2018. Retrieved April 28, 2018.
- ^ "Elmer Ferguson Memorial Award Winners". Hockey Hall of Fame and Museum. 2018. Archived from the original on February 8, 2014. Retrieved April 28, 2018.
- ^ "Ron Francis". Legends of Hockey. Archived from the original on 2007-11-12. Retrieved 2008-02-04.
- ^ "Regular Season – All Skaters – Career for Franchise – Career Points – National Hockey League.com – Stats". National Hockey League. Archived from the original on June 17, 2013. Retrieved May 4, 2013.
- ^ "Regular Season – Goalie – Goalie Career for Franchise – Career Wins – NHL.com – Stats". National Hockey League. Archived from the original on June 17, 2013. Retrieved May 4, 2013.
Further reading
- Buker, Rick (2010). Total Penguins: the definitive encyclopedia of the Pittsburgh Penguins. Chicago, Ill: Triumph Books. ISBN 9781600783975.
External links
| https://wiki2.org/en/Pittsburgh_Penguins | CC-MAIN-2021-17 | refinedweb | 4,650 | 69.38 |
Originally posted by Viju: Now, compile the same code after removing "public" before the class name, this time the code will compile and run without any error.
but my question is WHY? What is the logic behind this?
Originally posted by Vishal Pandya: 'I' think there is no logic behind it.It's simply a rule.
Imagine that you have two files, A.java and B.java containing classes A and B. Furthermore, imagine that class A mentions class B. Now, you type "javac A.java". At some point, the compiler is going to look for B.class, and since it doesn't exist yet, what should the compiler do? Of course, what it does is look for "B.java" and expect class B to be defined in it. If B.java instead contained class C, and C.java contained class B, then the compiler would fail to compile anything. So even without the rule about one public class per file, you can see another common sense rule: if a class is referenced outside of the source file in which it is defined, the source file should be named after the class. Many Java compilers will warn about violations of this even for non-public classes. Now, the only reason for a class to be public is for it to be used outside of its own source file, right? So it makes sense to make the rule stronger in this case. The common-sense rule, the one that people follow consciously or not, is that a source file should contain at most one class that is ever mentioned by name outside of the file, and the file should be named after that one class.
Originally posted by Vijay Arora: Thanks Campbell for the hint..searched through the forum and found the above information.
Originally posted by Vijay Arora: Ya sure..following is the url where i found the above quote
Payel Bera wrote:Hi Campbell, Thank you for your clarification!!
1) If no public class, then the name of the java file can be the same as any of the non public class or any other name like abc.java,,, please let me know if my understanding is correct.
2) if we have class sample in file sample1.java then the code inside sample.java will execute whenn we run sample1.java,,, please let me know if my understanding is correct. Thanks in advance,. | http://www.coderanch.com/t/410062/java/java/Compiling-java-code-file-class | CC-MAIN-2014-41 | refinedweb | 404 | 76.22 |
Problem in running first hibernate program.... - Hibernate
Problem in running first hibernate program.... Hi...I am using...,
It seems that your class has not been found.There may be problem in your...
*
*
* Hibernate example to inset data into Contact table
Problem in Blazeds with Jboss Clustering ( Mod_JK with SSL )
Problem in Blazeds with Jboss Clustering ( Mod_JK with SSL ) Hi,
We are running our flex application in jboss clustering environment with the help... the RemoteObject for communicating with Java.
The Application is running fine when we
problem running a servlet on tomcat.
problem running a servlet on tomcat. i have followed the steps given... suppose that should mean that my tomcat server is up and running.
but whn i try... be the problem?
please help me out
jsf application war file not running in my jboss 6.0 on linux 5.5
jsf application war file not running in my jboss 6.0 on linux 5.5 17:15:47,647 INFO [config] Initializing Mojarra (1.2_08-b06-FCS) for context '/MFPMSDeployment'
17:15:48,354 INFO [TomcatDeployment] deploy, ctxPath=/
17
Jboss 3.2 EJB Examples
we attempt to start Tomcat4.1 server separately while JBoss is running... Jboss 3.2 EJB Examples
... be required by different
customers
Part 1
Stateless session bean jboss 3.2
JBoss and Sevlet - JSP-Servlet
JBoss and Sevlet I am trying to get familar with JBoss. I package up a file, gg.war, as follows: (note, the same directory structure I have running...
IsItWorking
/gg/servlets
I have JBoss installed and put my gg.war into
J2EE Tutorial - Running RMI Example
J2EE Tutorial - Running RMI Example
greeter.java
import java.rmi....?
1) We require jndi package for running this
program.
Problem with loginbean.jsp
Problem with loginbean.jsp... is some error regarding .What is hello in this?
Also in this example how is login.java is accessed or executed while running
JBoss Application Server
and running the JBoss as an installer.
For more information, now visit...JBoss Application Server
JBoss is an open source Java EE-based application server
Welcome to the Jboss 3.0 Tutorial
Welcome to the Jboss 3.0 Tutorial
... to Ant
Comprehensive description of Ant with example.
Building Web Application With Ant and Deploying on Jboss 3.0
EJB JNDI LOOK UP PROBLEM - EJB
a sessionbean using EJB3
but while running client code I am finding...://
Hope that it will be helpful for you
Problem
();
echoSocket.close();
}
}
this is my code,
compiled with no errors, but when running giving... that the server (something like EchoServer) is up and running.
Thanx i got the answer,
i just turned off the firewall & its running fine
Problem
with no errors, but when running giving "java.net.connectexception connection...) is up and running server
While running jsp
While running jsp I found this error when i run the client.jsp can anyone help me
javax.xml.ws.WebServiceException: Failed to access the WSDL... the following link:
Build Simple Web services example
Running and deploying Tomcat Server
Running and deploying Tomcat Server HI
Somebody has given the solution but it doesn't work out. kindly tell the solution for my problem which i... the problem to get the correct solution
C++ program not running
C++ program not running Hi, this program need to ask 10 random questions with a random month name.
Example:
RUN
How many days are there in the month of March? 28
No March has 31 days.
How many days are there in the month
Problem in card demo example.
Problem in card demo example. Hi,
I have successfully shows... help me, I am using Eclipse and card demo example by following link
http...://
Server running Button - Java Beginners
Server running Button Hi there,
I have a created a GUI using NetBeans IDE 6.1. The aim of this interface is to display the messages that have....
The problem is with the display , when I run the GUI and click on the play button
jboss sever
jboss sever Hi
how to configure data source in jboss server and connection pooling
Thanks
Kalins naik
JBoss Tutorials
jsp-jboss
jsp-jboss where to keep jsp in jboss
Java Compilation and running error - RMI
Java Compilation and running error The following set of programs... some problem and gives following on compilation"java uses unchecked or unsafe... showing problem. Please tell me why I am unable to run this program with Java
Problem in show card in applet.
other code will be add in the example or how can I solve my problem. please...Problem in show card in applet. The following link contained... the Cards contained) in suitable place.
4) now I am running CardDemo.java file
Running and testing the example
Running And Testing The Example
An example of testing the HelloWorld Application using give below. This
application uses the junit framework to test the application.
index.jsp
<%@ taglib prefix="s" uri="/struts
app crashed in ipad 2 running iOS 5.0.1
app crashed in ipad 2 running iOS 5.0.1 i developed a universal... in ipad 2 running iOS 5.0.1, witch is not in compliance with the App Store Review Guidelines."
Can anyone tell me how to solve this problem?.. or do i need iPad
Running threads in servlet only once - JSP-Servlet
Running threads in servlet only once Hi All,
I am developing... process while mail thread is running. these two separate threads has to run... them self immediately. how can resolve this problem.
Thanks
error while running the applet - Java Beginners
);
++num;
}
}
}
}
i have problem while running the code , error... is correct but u have problem in running the program. You follow the following steps...error while running the applet import java.applet.Applet;
import
Running and Testing Struts 2 Login application
Running and Testing Struts 2 Login application
Running Struts 2 Login Example
In this section we will run the example on Tomcat 6.0 server and check how it
works.
Running Tomcat
To run the Tomcat
Problem in EJB application creation - EJB
Problem in EJB application creation Hi,
I am new to EJB 3.0... NetBeans 6.0 IDE. Detailed steps with an example would be benaficial.
2. I am getting the following error message when running an enterprise application
how to install jboss drools
how to install jboss drools how to install jboss drools
Running and testing the example
Running And Testing The Example
Before running the application, you will have to build it to build the
application using ant at first download the ant... the hello world example press on the link HelloWorld in English
tomcat problem
tomcat problem error like requested(rootdirectory/urlpattern) resources are not available getting while running servlet on tomcatserver
Compiling and Running Java program from command line
Compiling and Running Java program from command line - Video tutorial... of compiling and running java
program from command line. We have also video instruction which makes learning
of Java very easy.
In this example we have a simple
how to install jboss - EJB
how to install jboss when i installed jboss at startup jboss generating errors . unable to run ejb3
JBoss Administrator
JBoss Administrator
Position Vacant: JBoss Administrator
Job Description
We are looking for JBoss Administrator to manage the JBoss based
Running faces config file without using internet - Java Server Faces Questions
Running faces config file without using internet how i can run faces..." file at local path
while i run the jboss,it will throw exception of fileNotFoundException
i m using jboss 3.2.7
JavaFX deployment in Jboss
JavaFX deployment in Jboss Hi,
How to deploy javafx application into JBoss
jsp - excel generating problem - JSP-Servlet
excel through jsp, which is the first example in this tutorial (generateExcelSheet.jsp). while running the program, the excel sheet is opening in download mode... is result excel file.
If you have more problem then give details
Running First Hibernate 3.0 Example
Running First Hibernate 3.0 Example
... of database.
After running the code example you may check your MySQL database... must be created before
running the example.
You can download
hibernate problem
:12)
i am getting this prob while running my hibernate program in Eclips
please
server problem - Hibernate
server problem dear sir please give me best soloution how run hibernate and spring application on jboss and weblogic10.1.0 sever and compile
thanks
Problem in Array
Problem in Array Hi, Can you help me with this problem?
Run a program that check if the input string contains equal number of A's and B's.
Hoping for your answer.Thank you.
Here is an example that check
error of HTTP Status 404 while running servlet on apache tomcat server
error of HTTP Status 404 while running servlet on apache tomcat server ... the apache tomcat page. It mean that my tomcat server is up and running. but whn i... be the problem
Running First Hibernate 3.0 Example
Running First Hibernate 3.0 Example
... provided very thing in one zip file. Download the example code and library from...:\hibernateexample". Download file contains the Eclipse project. To run the example you
jboss and struts - Struts
Deploy JBoss and Struts how to deploy struts 2 applications in jboss 4... the WAR file to the /server/default/deploy folder. Hello,You...;JBoss_home>/server/default/deploy folder.Jboss will automatically deploy
Problem to print from my properties file - Java Server Faces Questions
Problem to print from my properties file Hi,
I am a new user of this site. It is very interesting. So this is my problem:
I have a jsp file where... it with JBOSS application server, by my navigator it shows
MISSING:titreAchat
The Currently Running Servlet is showing some Previously run Servlet's Output
The Currently Running Servlet is showing some Previously run Servlet's Output My question is clearly stated in the Title part of this page. Suppose for example, I have executed the ParameterServlet servlet where the output
Jboss related linkage error
Jboss related linkage error Please check below error when i run in jboss
java.lang.LinkageError:
loader constraint violation in interface itable... loader (instance of org/jboss/classloader/spi/base/BaseClassLoader
James Server and JBoss - JavaMail
James Server and JBoss Hi Sir/Madam,
How to set class path for James server and Jboss servers to run programs with clear examples. Please send the reply very urgent
What is JBoss>
what is the Difference between weblogic and jboss?
what is the Difference between weblogic and jboss? what is the Difference between weblogic and jboss
An Entity Bean Example
Persistence Example
.... For example, in a banking application, Customer and BankAccount
can be treated... and a database.
For example, consider a bank entity bean that is used
EJB2.0 in jboss - EJB
EJB2.0 in JBoss How to Install EJB 2.0 in JBoss Answer: You do this by using following steps such as :1. import jBoss into Visual Age for Java (VAJ) to develop and debug EJBs using Enterprise Java Beans standard)
developing a Session Bean and a Servlet and deploy the web application on JBoss 3.0
and deploy the web application on JBoss 3.0 Server.
Our application is thin...
between clients calls. Example of Stateful session bean may be Shopping Cart... to the client. In our example we have defined the
SayHello() method for calling from
Struts - Jboss - I-Report - Struts
Struts - Jboss - I-Report Hi i am a beginner in Java programming and in my application i wanted to generate a report (based on database) using Struts, Jboss , I Report
Stateless Session Bean Example Error
Stateless Session Bean Example Error Dear sir,
I'm getting following error while running StatelessSessionBean example on Jboss. Please help me...
22:56:40,141 INFO [Server] JBoss (MX MicroKernel) [4.2.0.GA (build: SVNTag
Problem in uploading java application
Problem in uploading java application I have uploaded my java... is running in local machine properly.But while trying to access the hosted... this problem
Jsp include page problem
should we do for JBoss server
Crystal Reports with Spring: problem
with:
1)JBoss 4.2.3,
2)Oracle 11,
3)EJB 2,
4)Crystal Reports 2008 as reporting... - version 12.2.209,
2)JBoss 5.1,
3)Oracle 11,
4)EJB 3,
5)Crystal Reports 2008
which is better server for java applications? apache or jboss?
which is better server for java applications? apache or jboss? which is better server? apache or jboss
Run time problem
Run time problem when i run a project,it shows an exception like "unable to create MIDlet".It also shows
"running with locale:English_united States.1252
running in the identified third party security domain"
"please help
iterator display problem - Struts
friend,
Code to help in solving the problem :
Iterator Tag Example!
Iterator Tag Example
Running Jar file in Windows
Running Jar file in Windows Running Jar file in Windows
PHP SQL Injection Example
PHP SQL Injection Example
This Example illustrates how to injection... on the fly and then running
them, it's straightforward to create some real... no
problem in execution, as our MySQL statement will just select everything from
Session Bean Example
Session Bean Example I want to know that how to run ejb module by jboss 4.2.1 GA (session bean example by jboss configuration )?
Please visit the following link:
Hibernate code problem - Hibernate
Hibernate code problem Hi
This is Raju.I tried the first example........How can i solve this problem.
Am i need to Kept any other jars in path. ... problem please send me code.
Visit for more information.
http | http://www.roseindia.net/tutorialhelp/comment/85533 | CC-MAIN-2014-49 | refinedweb | 2,260 | 58.58 |
AS3WavSound (AWS) extends the generic Sound class in Flash and adds support for playing back WAVE data. AWS uses a Wav decoder that converts ByteData into mono/stereo, 44100/22050/11025 samplerate, 8/16 bitrate sample data, that is playable by the Sound class using the SampleDataEvent technique.
So you embed a .wav file as a ByteArray with mimetype ‘application/octet-stream’ and AWS will be able to decode this and playback this sound.
Sample
public class Demo { [Embed(source = "assets/drum_loop.wav", mimeType = "application/octet-stream")] public const DrumLoop:Class; public function foo():void { var drumLoop:WavSound = new WavSound(new DrumLoop() as ByteArray); drumLoop.play(); } }
was cool buddy,
thanks
Thanks, man!
Nice blog, by the way, I just added it to my feeds 😉
Hello guys,
Can you help me a little bit with this library. I try to play audio files with different samples rate, but it allways plays it at 44.1kHz. How can i play with different samples rate ?
Hi, there! I never used AS3WaveSound in my projects so far, so I am not able to help you. I think you can find some answers in the project’s docs or you can report a bug if you really think it is broken. | https://www.as3gamegears.com/sound/as3wavsound/ | CC-MAIN-2021-31 | refinedweb | 206 | 72.46 |
Please help!
I don't know what to do for this grades project. Any help would be great!!!
These are the directions:
You are not to modify any code provided. The program is to compute the average of several grades and find the letter grade equivalent. All the grades are in the grades table. The program is to process any number of grades in the table without the user entering anything. The program is to use two loops which are nested.
For each grade:
Find the location of the letter grade in the letter_grades table.
The numeric equivalent is in the grades table.
Total the numeric equivalent values.
After all grades processed:
Compute the grade point average.
Look up the equivalent letter grade using another loop.
For loops will work.
The output should look like this:
Average: 0.7140000000000001
Grade: C-
public class Project2 { public static void main(String[] args) { String[] letter_grades = new String[12]; double[] grades = new double[12]; letter_grades[0] = "A"; letter_grades[1] = "A-"; letter_grades[2] = "B+"; letter_grades[3] = "B"; letter_grades[4] = "B-"; letter_grades[5] = "C+"; letter_grades[6] = "C"; letter_grades[7] = "C-"; letter_grades[8] = "D+"; letter_grades[9] = "D"; letter_grades[10] = "D-"; letter_grades[11] = "F"; grades[0] = .93; grades[1] = .90; grades[2] = .87; grades[3] = .83; grades[4] = .80; grades[5] = .77; grades[6] = .73; grades[7] = .70; grades[8] = .67; grades[9] = .63; grades[10] = .60; grades[11] = .0; String[] grades_entered = new String[10]; grades_entered[0] = "B"; grades_entered[1] = "C+"; grades_entered[2] = "D-"; grades_entered[3] = "C+"; grades_entered[4] = "D-"; //Your code goes here } } | http://www.javaprogrammingforums.com/collections-generics/8125-grades-project-arrays-loops.html | CC-MAIN-2017-39 | refinedweb | 256 | 74.39 |
SelectMany: Probably The Most Powerful LINQ Operator
SelectMany: Probably The Most Powerful LINQ Operator
Join the DZone community and get the full member experience.Join For Free
Hi there back again. Hope everyone is already exploiting the power of LINQ on a fairly regular basis. Okay, everyone knows by now how simple LINQ queries with a where and select (and orderby, and Take and Skip and Sum, etc) are translated from a query comprehension into an equivalent expression for further translation:
from p in products where p.Price > 100 select p.Name
becomes
products.Where(p => p.Price > 100).Select(p => p.Name)
All blue syntax highlighting has gone; the compiler is happy with what remains and takes it from there in a left-to-right fashion (so, it depends on the signature of the found Where method whether or not we take the route of anonymous methods or, in case of an Expression<…> signature, the route of expression trees).
But let’s make things slightly more complicated and abstract:
from i in afrom j in bwhere i > jselect i + j
It’s more complicated because we have two from clauses; it’s more abstract because we’re using names with no intrinsic meaning. Let’s assume a and b are IEnumerable<int> sequences in what follows. Actually what the above query means in abstract terms is:
(a X b).Where((i, j) => i > j).Select((i, j) => i + j)
where X is a hypothetical Cartesian product operator, i.e. given a = { 1, 4, 7 } and b = { 2, 5, 8 }, it produces { (1,2), (1,5), (1,8), (4,2), (4,5), (4,8), (7,2), (7,5), (7,8) }, or all the possible pairs with elements from the first sequence combined with an element from the second sequence. For the record, the generalized from of such a pair – having any number of elements – would be a tuple. If we would have this capability, Where would get a sequence of such tuples, and it could identify a tuple in its lambda expression as a set of parameters (i, j). Similarly, Select would do the same and everyone would be happy. You can verify the result would be { 6, 9, 12 }.
Back to reality now: we don’t have the direct equivalent of Cartesian product in a form that produces tuples. In addition to this, the Where operator in LINQ has a signature like this:
IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
where the predicate parameter is a function of one – and only one – argument. The lambda (i, j) => i > j isn’t compatible with this since it has two arguments. A similar remark holds for Select. So, how can we get around this restriction? SelectMany is the answer.
Demystifying SelectMany
What’s the magic SelectMany all about? Where could we better start our investigation than by looking at one of its signatures?
IEnumerable<TResult> SelectMany<TSource, TCollection, TResult>( this IEnumerable<TSource> source, Func<TSource, IEnumerable<TCollection>> collectionSelector, Func<TSource, TCollection, TResult> resultSelector)
Wow, might be a little overwhelming at first. What does it do? Given a sequence of elements (called source) of type TSource, it asks every such element (using collectionSelector) for a sequence of – in some way related – elements of type TCollection. Next, it combines the currently selected TSource element with all of the TCollection elements in the returned sequence and feed it in to resultSelector to produce a TResult that’s returned. Still not clear? The implementation says it all and is barely three lines:
foreach (TSource item in source) foreach (TCollection subItem in collectionSelector(item)) yield return resultSelector(item, subItem);
This already gives us a tremendous amount of power. Here’s a sample:
products.SelectMany(p => p.Categories, (p, c) => p.Name + “ has category “ + c.Name)
How can we use this construct to translate multiple from clauses you might wonder? Well, there’s no reason the function passed in as the first argument (really the second after rewriting the extension method, i.e. the collectionSelector) uses the TSource argument to determine the IEnumerable<TCollection> result. For example:
products.SelectMany(p => new int[] { 1, 2, 3 }, (p, i) => p.Name + “ with irrelevant number “ + i)
will produce a sequence of strings like “Chai with irrelevant number 1”, “Chai with irrelevant number 2”, “Chai with irrelevant number 3”, and similar for all subsequent products. This sample doesn’t make sense but it illustrates that SelectMany can be used to form a Cartesian product-like sequence. Let’s focus on our initial sample:
var a = new [] { 1, 4, 7 };var b = new [] { 2, 5, 8 };from i in afrom j in bselect i + j;
I’ve dropped the where clause for now to simplify things a bit. With our knowledge of SelectMany above we can now translate the LINQ query into:
a.SelectMany(i => b, …)
This means: for every i in a, “extract” the sequence b and feed it into …. What’s the …’s signature? Something from a (i.e. an int) and something from the result of the collectionSelector (i.e. an int from b), is mapped onto some result. Well, in this case we can combine those two values by summing them, therefore translating the select clause in one go:
a.SelectMany(i => b, (i, j) => i + j)
What happens when we introduce a seemingly innocent where clause in between?
from i in afrom j in bwhere i > jselect i + j;
The first two lines again look like:
a.SelectMany(i => b, …)
However, going forward from there we’ll need to be able to reference i (from a) and j (from b) in both the where and select clause that follow but both the corresponding Where and Select methods only take in “single values”:
IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate);IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> projection);
So what can we do to combine the value i and j into one single object? Right, use an anonymous type:
a.SelectMany(i => b, (i, j) => new { i = i, j = j })
This produces a sequence of objects that have two public properties “i” and “j” (since it’s anonymous we don’t care much about casing, and indeed the type never bubbles up to the surface in the query above, because of what follows:
a.SelectMany(i => b, (i, j) => new { i = i, j = j }).Where(anon => anon.i > anon.j).Select(anon => anon.i + anon.j)
In other words, all references to i and j in the where and select clauses in the original query expression have been replaced by references to the corresponding properties in the anonymous type spawned by SelectMany.
Lost in translation
This whole translation of this little query above puts quite some work on the shoulder of the compiler (assuming a and b are IEnumerable<int> and nothing more, i.e. no IQueryable<int>):
- The lambda expression i => b captures variable b, hence a closure is needed.
- That same lambda expression acts as a parameter to SelectMany, so an anonymous method will be created inside the closure class.
- For new { i = i, j = j } an anonymous type needs to be generated.
- SelectMany’s second argument, Where’s first argument and Select’s first argument are all lambda expressions that generate anonymous methods as well.
As a little hot summer evening exercise, I wrote all of this plumbing manually to show how much code would be needed in C# 2.0 minus closures and anonymous methods (more or less C# 1.0 plus generics). Here’s where we start from:
class Q{ IEnumerable<int> GetData(IEnumerable<int> a, IEnumerable<int> b) { return from i in a from j in b where i > j select i + j; }}
This translates into:
class Q{ IEnumerable<int> GetData(IEnumerable<int> a, IEnumerable<int> b) { Closure0 __closure = new Closure0(); __closure.b = b; return Enumerable.Select( Enumerable.Where( Enumerable.SelectMany( a, new Func<int, IEnumerable<int>>(__closure.__selectMany1), new Func<int, int, Anon0<int, int>>(__selectMany2) ), new Func<Anon0<int, int>, bool>(__where1) ), new Func<Anon0<int, int>, int>(__select1) ); } private class Closure0 { public IEnumerable<int> b; public IEnumerable<int> __selectMany1(int i) { return b; } } private static Anon0<int, int> __selectMany2(int i, int j) { return new Anon0<int, int>(i, j); } private static bool __where1(Anon0<int, int> anon) { return anon.i > anon.j; } private static int __select1(Anon0<int, int> anon) { return anon.i + anon.j; }}private class Anon0<TI, TJ> // generics allow reuse of type for all anonymous types with 2 properties, hence the use of EqualityComparers in the implementation{ private readonly TI _i; private readonly TJ _j; public Anon0(TI i, TJ t2) { _i = i; _j = j; } public TI i { get { return _i; } } public TJ j { get { return _j; } } public override bool Equals(object o) { Anon0<TI, TJ> anonO = o as Anon0<TI, TJ>; return anonO != null && EqualityComparer<TI>.Default.Equals(_i, anonO._i) && EqualityComparer<TJ>.Default.Equals(_j, anonO._j); } public override int GetHashCode() { return EqualityComparer<TI>.Default.GetHashCode(_i) ^ EqualityComparer<TJ>.Default.GetHashCode(_j); // lame quick-and-dirty hash code } public override string ToString() { return “( i = “ + i + “, j = ” + j + “ }”; // lame without StringBuilder }}
Just a little thought… Would you like to go through this burden to write a query? “Syntactical sugar” might have some bad connotation to some, but it can be oh so sweet baby!
Bind in disguise
Fans of “monads”, a term from category theory that has yielded great results in the domain of functional programming as a way to make side-effects explicit through the type system (e.g. the IO monad in Haskell), will recognize SelectMany’s (limited) signature to match the one of bind:
IEnumerable<TResult> SelectMany<TSource, TResult>( this IEnumerable<TSource> source, Func<TSource, IEnumerable<TResult>> collectionSelector)
corresponds to:
(>>=) :: M x –> (x –> M y) –> M y
Which is Haskell’s bind operator. For those familiar with Haskell, the “do” notation – that allows the visual illusion of embedding semi-colon curly brace style of “imperative programming” in Haskell code – is syntactical sugar on top of this operator, defined (recursively) as follows:
do { e } = edo { e; s } = e >>= \_ –> do { s }do { x <- e; s } = e >>= (\x –> do { s })do { let x = e; s } = let x = e in do { s }
Rename to SelectMany, replace M x by IEnumerable<x> and assume a non-curried form and you end up with:
SelectMany :: (IEnumerable<x>, x –> IEnumerable<y>) –> IEnumerable<y>
Identifying x with TSource, y with TResult and turning a –> b into Func<a, b> yields:
SelectMany :: Func<IEnumerable<TSource>, Func<TSource, IEnumerable<TResult>>, IEnumerable<TResult>>
and you got identically the same signature as the SelectMany we started from. For the curious, M in the original form acts as a type constructor, something the CLR doesn’t support since it lacks higher-order kinded polymorphism; it’s yet another abstraction one level higher than generics that math freaks love to use in category theory. The idea is that if you can prove laws to be true in some “structure” and you can map that structure onto an another “target structure” by means of some mapping function, corresponding laws will hold true in the “target structure” as well. For instance:
({ even, odd }, +)
and
({ pos, neg }, *)
can be mapped onto each other pairwise and recursively, making it possible to map laws from the first one to the second one, e.g.
even + odd –> oddpos * neg –> neg
This is a largely simplified sample of course, I’d recommend everyone who’s interested to get a decent book on category theory to get into the gory details.
A word of caution
Now that you know how SelectMany works, can you think of a possible implication when selecting from multiple sources? Let me give you a tip: nested foreachs. This is an uninteresting sentence that acts as a placeholder in the time space while you’re thinking about the question. Got it? Indeed, order matters. Writing the following two lines of code produces a different query with a radically different execution pattern:
from i in a from j in b …from j in b from i in a …
Those roughly correspond to:
foreach (var i in a) foreach (var j in b) …
versus
foreach (var j in b) foreach (var i in a) …
But isn’t this much ado about nothing? No, not really. What if iterating over b is much more costly than iterating over a? For example,
from p in localCollectionOfProductsfrom c in sqlTableOfCategories…
This means that for every product iterated locally, we’ll reach out to the database to iterate over the (retrieved) categories. If both were local, there wouldn’t be a problem of course; if both were remote, the (e.g.) SQL translation would take care of it to keep the heavy work on the remote machine. If you want to see the difference yourself, you can use the following simulation:
using System; using System.Collections.Generic; using System.Diagnostics; using System.Linq; using System.Threading; class Q { static void Main() { Stopwatch sw = new Stopwatch(); Console.WriteLine("Slow first"); sw.Start(); foreach (var s in Perf<int,char>(Slow(), Fast())) Console.WriteLine(s); sw.Stop(); Console.WriteLine(sw.Elapsed); sw.Reset(); Console.WriteLine("Fast first"); sw.Start(); foreach (var s in Perf<char,int>(Fast(), Slow())) Console.WriteLine(s); sw.Stop(); Console.WriteLine(sw.Elapsed); } static IEnumerable<string> Perf<S,T>(IEnumerable<S> a, IEnumerable<T> b) { return from i in a from j in b select i + "," + j; } static IEnumerable<int> Slow() { Console.Write("Connecting... "); Thread.Sleep(2000); // mimic query overhead (e.g. remote server) Console.WriteLine("Done!"); yield return 1; yield return 2; yield return 3; } static IEnumerable<char> Fast() { return new [] { 'a', 'b', 'c' }; } }
This produces:
[img_assist|nid=4625|title=|desc=|link=none|align=none|width=259|height=374]
Obviously, it might be the case you’re constructing a query that can only execute by reaching out to the server multiple times, e.g. because order of the result matters (see screenshot above for an illustration of the ordering influence – but some local sorting operation might help too in order to satisfy such a requirement) or because the second query source depends on the first one (from i in a from j in b(i) …). There’s no silver bullet for a solution but knowing what happens underneath the covers certainly provides the necessary insights to come up with scenario-specific solutions.
Happy binding!
Published at DZone with permission of Bart De Smet , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/selectmany-probably-the-most-p | CC-MAIN-2020-24 | refinedweb | 2,444 | 51.07 |
Arrays.sort() method sorts all the array elements in ascending order. It avoids writing tedious sorting algorithms.
Two methods exist with Array class for sorting. Following are the method signatures.
- from Sorting complete array elements sort() uses both the above methods.
import java.util.*; public class ArraysSort { public static void main(String args[]) { // SORTING INT ARRAY int numbers[] = { 90, 10, 50, 60, 30, 40, 20, 80, 70 }; System.out.println("Before sort numbers: " + Arrays.toString(numbers)); Arrays.sort(numbers); System.out.println("After sort numbers: " + Arrays.toString(numbers)); // SORTING FEW ELEMENTS int numbers1[] = { 9, 1, 5, 6, 3, 4, 2, 8, 7 }; System.out.println("\nBefore sort numbers1: " + Arrays.toString(numbers1)); Arrays.sort(numbers1, 2, 7); // SORTING 2ND ELEMENT TO 6TH ELEMENT (7-1) System.out.println("After sort 2 to 7 numbers1:" + Arrays.toString(numbers1)); } }
Ouutput screen of Sorting complete array elements sort()
int numbers[] = { 90, 10, 50, 60, 30, 40, 20, 80, 70 };
System.out.println(“Before sort numbers: ” + Arrays.toString(numbers));
An integer array object numbers is created and elements are printed using toString() method of Arrays class.
Arrays.sort(numbers);
System.out.println(“After sort numbers: ” + Arrays.toString(numbers));
The sort() method of Arrays class sorts the elements of numbers object into ascending order and the sorted elements are printed with Arrays.toString() method.
int numbers1[] = { 9, 1, 5, 6, 3, 4, 2, 8, 7 };
Another integer array object numbers1 is created with a few elements. The earlier sort() method sorts all the elements of the array. The sort() method is overloaded that sorts a few elements of the array.
Arrays.sort(numbers1, 2, 7);
The above sort() method sorts the numbers1 elements starting from 2nd element to 6th (7-1) in ascending order. Remaining elements of numbers1 are not sorted and they remain as it is. Observe the screenshot.
A similar program exists with Collections.sort() that sorts data structure elements.
The Arrays class from java.util package does not have copy() method to copy the elements of one array to another as arraycopy() method already exists with general arrays. | https://way2java.com/collections/sorting-array-elements-with-sort/ | CC-MAIN-2020-50 | refinedweb | 345 | 51.75 |
Open Sound Control (OSC)
The OSC objects are for sharing musical data over a network. OSC is a standard that lets you format and structure messages. OSC enables communication at a higher level than the PureData [netsend] objects and is both more flexible and more precise than MIDI. OSC is network enabled, using common network cables and hardware.
Using OSC you can exchange data with a number of devices, such as Lemur, iPhone (through OSCulator), Monome, or applications such as Ardour, Modul8, Reaktor and many more. Most modern programming languages are OSC enabled, notably Processing, Java, Python, C++, Max/MSP and SuperCollider.
Setting up an OSC connection
There are several OSC implementations in PureData. At the time of writing, the mrpeach implementation is best supported. PureData is in the process of migrating to mrpeach OSC objects, but in the current release you still have to import them explicitly.
Sending a simple message
osc_udpsend.pd
Sending a connect message to an [udpsend] object opens an UDP connection to another computer. As with [netsend], you have to provide an IP address or hostname, and a port number.
The UDP connection you just opened can only really send bytes. In order to send an OSC message over the opened connection, you have to pack it first, using the [packOSC] object.
Receiving a simple message
osc_udpreceive.pd
The [udpreceive] object tells the patch to listen to a given port number.
The OSC message has to be unpacked using the [unpackOSC] object.
IP addresses, hostnames
If both sending and receiving PureData patches are on the same computer, you can use the special loopback interface: the IP address is 127.0.0.1 and the hostname is "localhost".
If both computers are on a local network, you can use their network names, or else, to find out a computers IP address, open a terminal and type "ifconfig" (Mac/Linux) or "ipconfig /all" (Windows).
If you want to open a remote connection to a computer over the internet, consider using TCP instead of UDP (see below) and proceed as with a local connection.
Ports
Every computer has a large number of ports. Each service (such as a webserver, a database etc.) may listen or send data through it's assigned port. Which port is used for what is a matter of configuration, but PureData uses port 9001 by default. You can choose another port if you want to, just make sure the port you choose is not already in use. If you are communicating with another application, you will have to find out which port it is using.
UDP vs. TCP
In all these examples, you can replace the [udpsend] and [udpreceive] objects by their corresponding TCP counterparts [tcpsend] and [tcpreceive]. The TCP protocol is much more reliable than UDP, so if you are connecting to a computer over the internet, or data packets are lost or shuffled underway, use TCP.
The OSC address pattern
The first part of an OSC message is an URL-style address (in the previous example, “/test”). The address lets you route the data on the receiving end.
This example sends 2 different OSC messages. Messages are told apart by their address components (/test/voice and /test/mute).
osc_pathsend.pd
On the receiving end, the messages are routed using the [routeOSC] object and used to control an oscillator.
osc_pathreceive.pd
It is important to understand that OSC does not come with predefined messages, like MIDI does. It is up to you to define the messages you want to send and receive.
OSC arguments
An OSC message can have any number of arguments. This example creates a message with 2 arguments for note (MIDI note number) and amplitude.
osc_argssend.pd
On the receiving patch, the arguments are unpacked using the [unpack] object, and used to control an oscillator's pitch and amplitude.
osc_argsreceive.pd
Types
The previous examples all send typed-guessed messages. It is also possible (and good practice) to set the types of the arguments.
Common types are:
i: integer
f: float
s: string
T: TRUE
F: FALSE
This example uses the [sendtyped] object to send a boolean (true or false), an integer (a MIDI note number) and a float (amplitude).
osc_typesend.pd
Depending on the value of the first argument (the boolean argument), the receiving patch puts out a sine or a sawtooth wave.
osc_typereceive.pd
Note that PureData and OSC use different types. PureData only knows floats, strings and symbols.
Bundles
Sometimes you might want to send several messages at the same time. This example sends one bundle containing 3 notes.
Bundles are enclosed in square brackets. Inside the brackets, you can pack any number of messages.
osc_bundlesend.pd
Receiving a bundle is no different than receiving a single message.
osc_bundlereceive.pd
Designing your namespace
Unlike MIDI, OSC requires you to define your own messages. This is one of OSC's main advantages, and if you are doing anything more complex than the examples above, you should start by getting your set of messages (your namespace) right. There is no single strategy to do this, but here are some ideas to get you started.
Connecting to hardware or external applications
The easiest case, since these will come with their own predefined set of commands. You will find them specified in the documentation. Not much you can do here but stick to the specs.
Connecting to another PureData patch or to your own application written in another language
Avoiding name conflicts: Keep in mind that you, or the person using your patch, are on a network. This network is shared by a number of computers running a number of applications, some of which might be using OSC too. So you should be careful to avoid name conflicts. A conflict happens when two applications use the same address pattern but mean different things. To avoid this, the first part of your address pattern should be unique. A foolproof, albeit pedantic, method is to use your domain as a prefix for all your messages e.g. /net/mydomain/...
Type conversion caveats: PureData and OSC use different data types, so type conversion takes place every time you send or receive anything else than a float or a string. Due to the way data is handled internally, PureData can only work accurately with 24 bit numbers. Above this, integers gradually loose precision. Since OSC can carry 32 bit integers, you will get strange results above 16777216.
Using a predefined namespace
If this is your life's work (or your idée fixe), then using a predefined, domain-specific namespace might be a good move. Examples of these include: SYNoscopy for MIDI style controls (specification and examples) and GDIF, for music related movements and gestures. You can also look at one of the many open source applications listed at opensoundcontrol.org for inspiration. | https://archive.flossmanuals.net/pure-data/network-data/osc | CC-MAIN-2019-09 | refinedweb | 1,137 | 63.8 |
Programming with Python
Instructor’s Guide
Legend
We are using a dataset with records on inflammation from patients following an arthritis treatment.
We make reference in the lesson that this data is somehow strange. It is strange because it is fabricated! The script used to generate the inflammation data is included as
tools.
Analyzing Patient Data
Solutions to exercises:
Sorting out references
What does the following program print out?
first, second = 'Grace', 'Hopper' third, fourth = second, first print(third, fourth)
Hopper Grace
Slicing strings
A section of an array is called a slice. We can take slices of character strings as well:
element = 'oxygen'
What is the value of
element[:4]? What about
element[4:]? Or
element[:]?
oxyg en oxygen
What is
element[-1]? What is
element[-2]?
n e
Given those answers, explain what
element[1:-1] does.
Creates a substring from index 1 up to (not including) the final index, effectively removing the first and last letters from 'oxygen'
Thin slices
The expression
element[3:3] produces an empty string, i.e., a string that contains no characters. If
data holds our array of patient data, what does
data[3:3, 4:4] produce? What about
data[3:3, :]?
print(data[3:3, 4:4]) print(data[3:3, :])
[] []
Check your understanding: plot scaling
Why do all of our plots stop just short of the upper end of our graph? Update your plotting code to automatically set a more appropriate scale (hint: you can make use of the
max and
min methods to help)
Because matplotlib normally sets x and y axes limits to the min and max of our data (depending on data range)
# for example: axes3.set_ylabel('min') axes3.plot(numpy.min(data, axis=0)) axes3.set_ylim(0,6) # or a more automated approach: min_data = numpy.min(data, axis=0) axes3.set_ylabel('min') axes3.plot(min_data) axes3.set_ylim(numpy.min(min_data), numpy.max(min_data) * 1.1)
Check your understanding: drawing straight lines
Why are the vertical lines in our plot of the minimum inflammation per day not perfectly vertical?
Because matplotlib interpolates (draws a straight line) between the points
Make your own plot
Create a plot showing the standard deviation (
numpy.std) of the inflammation data for each day across all patients.
max_plot = matplotlib.pyplot.plot(numpy.std(data, axis=0)) matplotlib.pyplot.show()
Moving plots around
Modify the program to display the three plots on top of one another instead of side by side.
import numpy import matplotlib.pyplot data = numpy.loadtxt(fname='data()
Repeating Actions with Loops
Solutions to exercises:
From 1 to N
Using
range, write a loop that uses
range to print the first 3 natural numbers.
for i in range(1, 4): print(i)
1 2 3
Computing powers with loops
Write a loop that calculates the same result as
5 ** 3 using multiplication (and without exponentiation).
result = 1 for i in range(0, 3): result = result * 5 print(result)
125
Reverse a string
Write a loop that takes a string, and produces a new string with the characters in reverse order.
newstring = '' oldstring = 'Newton' length_old = len(oldstring) for char_index in range(length_old): newstring = newstring + oldstring[length_old - char_index - 1] print(newstring)
'notweN'
After discussing these challenges could be a good time to introduce the
b *= 2 syntax.
Storing Multiple Values in Lists
Solutions to exercises:
Turn a string into a list
Use a
for loop to convert the string
"hello" into a list of letters:
my_list = [] for char in "hello": my_list.append(char) print(my_list)
["h", "e", "l", "l", "o"]
Analyzing Data from Multiple Files
Solutions to exercises:
Plotting Differences
Plot the difference between the average of the first dataset and the average of the second dataset, i.e., the difference between the leftmost plot of the first two figures.
import glob import numpy import matplotlib.pyplot filenames = glob.glob('data(data0.mean(axis=0) - data1.mean(axis=0)) fig.tight_layout() matplotlib.pyplot.show()
Making Choices
Solutions to exercises:
How many paths?
Which of the following would be printed if you were to run this code? Why did you pick this answer?
if 4 > 5: print('A') elif 4 == 5: print('B') elif 4 < 5: print('C')
C gets printed, because the first two conditions,
4 > 5 and
4 == 5 are not true, but
4 < 5 is true.
What is truth?
After reading and running the code below, explain the rules')
First line prints nothing: an empty string is false Second line prints
'word is true': a non-empty string is true Third line prints nothing: an empty list is false Fourth line prints
'non-empty list is true': a non-empty list is true Fifth line prints nothing: 0 is false Sixth line prints
'one is true': 1 is true
Close enough
Write some conditions that print
True if the variable
a is within 10% of the variable
b and
False otherwise.
a = 5 b = 5.1 if abs(a - b) < 0.1 * abs(b): print('True') else: print('False')
Another possible solution:
print(abs(a - b) < 0.1 * abs(b))
This works because the boolean objects
True and
False have string representations which can be
In-place operators
Write some code that sums the positive and negative numbers in a list separately, using in-place operators.
positive_sum = 0 negative_sum = 0 test_list = [3, 4, 6, 1, -1, -5, 0, 7, -8] for num in test_list: if num > 0: positive_sum += num elif num == 0: pass else: negative_sum += num print(positive_sum, negative_sum)
21 -14
Here
pass means “don’t do anything”. In this particular case, it’s not actually needed, since if
num == 0 neither sum needs to change, but it illustrates the use of
elif.
Tuples and exchanges
Explain what the overall effect of this code is:
left = 'L' right = 'R' temp = left left = right right = temp
The code swaps the contents of the variables right and left.
Compare it to:
left, right = right, left
Do they always do the same thing? Which do you find easier to read?
Yes, although it’s possible the internal implementation is different. Answers will vary on which is easier to read.
Creating Functions
Solutions to exercises:
Combining strings
Write a function called
fence that takes two parameters called
original and
wrapper and returns a new string that has the wrapper character at the beginning and end of the original.
def fence(original, wrapper): return wrapper + original + wrapper
Selecting characters from strings
Write a function called
outer that returns a string made up of just the first and last characters of its input.
def outer(input_string): return input_string[0] + input_string[-1])
259.81666666666666 287.15 273.15 0
k is 0 because the
k inside the function
f2k doesn’t know about the
k defined outside the function.
Errors and Exceptions
Solutions to exercises:'
- 3 levels
errors_02.py.")
SyntaxError for missing
(): at end of first line,
IndentationError for mismatch between second and third lines.
Fixed version:)
3
NameErrors for
number being misspelled, for
message not defined, and for
a not being in quotes.
Fixed version:
message = "" for number in range(10): # use a if the number is a multiple of 3, otherwise use b if (number % 3) == 0: message = message + "a" else: message = message + "b" print(message)
abbabbabba
Identifying Item Errors
- Read the code below, and (without running it) try to identify what the errors are.
- Run the code, and read the error message. What type of error is it?
- Fix the error.
seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[4])
IndexError; the last entry is
seasons[3], so
seasons[4] doesn’t make sense.
Fixed version:
seasons = ['Spring', 'Summer', 'Fall', 'Winter'] print('My favorite season is ', seasons[-1])
Defensive Programming
Solutions to exercises:
Pre- and post-conditions?
# a possible pre-condition: assert len(input) > 0, 'List length must be non-zero' # a possible post-condition: assert numpy.min(input) < average < numpy.max(input), 'Average should be between min and max of input values'
Testing assertions
Given a sequence of values, the function
running returns a list containing the running totals at each index.
- The first assertion checks that the input sequence
valuesis not empty. An empty sequence such as
[]will make it fail.
- The second assertion checks that the first value in the list is positive. Input such as
[-1,0,2,3]will make it fail.
- The third assertion checks that the running total always increases. Input such as
[0,1,3,-5,4]will make it fail.
Fixing and testing
Fix
range_overlap. Re-run
test_range_overlap after each change you make.
import numpy def range_overlap(ranges): '''Return common overlap among a set of [low, high] ranges.''' if len(ranges) == 1: # only one entry, so return it return ranges[0] lowest = -numpy.inf # lowest possible number highest = numpy.inf # highest possible number for (low, high) in ranges: lowest = max(lowest, low) highest = min(highest, high) if lowest >= highest: # no overlap return None else: return (lowest, highest)
Debugging
Solutions to exercises:
Debug the following problem
This exercise has the aim of ensuring learners are able to step through unseen code with unexpected output to locate issues. The issues present are that:
The loop is not being utilised correctly.
heightand
weightare always set as the first patient’s data during each iteration of the loop.
The height/weight variables are reversed in the function call to
calculate_bmi(...)
Command-Line Programs
Solutions to exercises:
Arithmetic on the command line
Write a command-line program that does addition and subtraction:
$ python arith.py add 1 2
3
$ python arith.py subtract 3 4
-1
# this is code/arith.py module introduced earlier, write a simple version of
ls that shows files in the current directory with a particular suffix. A call to this script should look like this:
$ python my_ls.py py
left.py right.py zero.py
# this is code/my_ls.py so that it uses
-n,
-m, and
-x instead of
--min,
--mean, and
--max respectively. Is the code easier to read? Is the program easier to understand?
# f in filenames: process(f, action) def process(filename, action): data = numpy.loadtxt(filename, delimiter=',') if action == '-n': values = numpy.min(data, axis=1) elif action == '-m': values = numpy.mean(data, axis=1) elif action == '-x': values = numpy.max(data, axis=1) for m in values: print(m) main() program?
# this is code/check.py f in filenames[1:]: nrow, ncol = row_col_count(f) if nrow != nrow0 or ncol != ncol0: print('File %s does not check: %d rows and %d columns' % (f, nrow, ncol)) else: print('File %s checks' % f) return def row_col_count(filename): try: nrow, ncol = numpy.loadtxt(filename, delimiter=',').shape except ValueError: #get this if file doesn't have same number of rows and columns, or if it has non-numeric content nrow, ncol = (0, 0) return nrow, ncol.
# this is code/line_count.py f in filenames: n = count_file(f) print('%s %d' % (f, n)) sum_nlines += n() | https://cac-staff.github.io/summer-school-2016-Python/instructors.html | CC-MAIN-2022-33 | refinedweb | 1,814 | 55.95 |
This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Hi Eric, Thanks for the review. > +/* Return true if X is a sign_extract or zero_extract from the least > + significant bit. */ > + > +static bool > +lsb_bitfield_op_p (rtx x) > +{ > + if (GET_RTX_CLASS (GET_CODE (x)) == RTX_BITFIELD_OPS) > + { > + enum machine_mode mode = GET_MODE(x); > + unsigned HOST_WIDE_INT len = INTVAL (XEXP (x, 1)); > + HOST_WIDE_INT pos = INTVAL (XEXP (x, 2)); > + > + return (pos == (BITS_BIG_ENDIAN ? GET_MODE_PRECISION (mode) - len : > 0)); > > It seems strange to use the destination mode to decide whether this is the LSB > of the source. Indeed, I think it has to be the mode of loc, but I just wonder if it is not always the same, as in the doc it is written that mode m is the same as the mode that would be used for loc if it were a register. > +/* Return true if X is a shifting operation. */ > + > +static bool > +shift_code_p (rtx x) > +{ > + return (GET_CODE (x) == ASHIFT > + || GET_CODE (x) == ASHIFTRT > + || GET_CODE (x) == LSHIFTRT > + || GET_CODE (x) == ROTATE > + || GET_CODE (x) == ROTATERT); > +} > > ROTATE and ROTATERT aren't really shifting operations though, so are they > really needed here? According to gcc internals, ROTATE and ROTATERT are similar to the shifting operations, but to be more accurate maybe we can rename shif_code_p in shift_and _rotate_code_p rotation are used in arm address calculation, and thus need to be handle in must_be_index_p and set_address_index Thanks, Yvan | https://gcc.gnu.org/legacy-ml/gcc-patches/2013-09/msg01758.html | CC-MAIN-2021-04 | refinedweb | 229 | 57.3 |
Docker.py- python API for Docker
Once upon a time I and my friend decided to write an application that helps us doing code kata. The first problem that we faced was how to run a code provided by the user in a safe manner so our server won't be destroyed. After giving it some thought I decided to write a prototype of an application that runs the code inside Docker container which is immediately destroyed after the code has been run. This blog post is about this prototype.
Assumptions
I need an application that gets a code from the user, executes it and gives output back. As many people before me said output from user cannot be trusted so I need to use some kind of container for user input. To do that I used Docker python API- docker.py. Using that and Flask I created Tdd-app-prototype. Under the hood, this application will work like this: user writes a code on a website, clicks submit. Then Docker creates a container based on python docker image and executes code. I take the output from the container and destroy it afterwards.
As we know what application should do, let's jump into the code.
Code
The first problem that I have is that I don't want to write a code provided by the user to a disk, then read it from the disk and it execute by Docker. I want to store it in memory - perfect case for StringIO. Code that does this looks as follows:
@app.route("/send_code", methods=['POST'])
def execute_code():
data = request.form['source_code']
code = io.StringIO(data)
create_container(code)
output = get_code_from_docker()
return output
Here beside specifying routes in Flask I take data from the form, cast
it to
StringIO and create a container from that code. Function that
does that is below:
def create_container(code):
cli.create_container(
image='python:3',
command=['python','-c', code.getvalue()],
name='tdd_app_prototype',
)
What is
cli here? I can use docker.py with Docker from other than my
own computer location so before I can use any of these functions I need
to specify
Client:
cli = Client(base_url='unix://var/run/docker.sock')
It tells docker.py to use my local Docker. Let's go back to
create_container. I tell docker.py to use official python 3 images.
Then I specify a command to run:
python -c and my code from
StringIO. If you want to run standalone python script you can use
this:
def create_container(code):
cli.create_container(
image='python:3',
command=['python','-c', 'my_code.py'],
volumes=['/opt'],
host_config=cli.create_host_config(
binds={ os.getcwd(): {
'bind': '/opt',
'mode': 'rw',
}
}
),
name='tdd_app_prototype',
working_dir='/opt'
)
volumes and
host_config keywords are for telling Docker to mount
volumes.
It is the same as running
docker run -v "$PWD":/opt. Finally I set up
working_dir so I don't need to provide a full path to
my_code.py. As
we have a container created now it is time to start it:
def get_code_from_docker():
cli.start('tdd_app_prototype')
cli.wait('tdd_app_prototype')
output = cli.logs('tdd_app_prototype')
cli.remove_container('tdd_app_prototype', force=True)
return "From docker: {}".format(output.strip())
I used here
wait so I wait for the container to stop. Then I take
output in form of lists and remove the container.
That's all for today! If you want to see full code grab it here. Do you know other ways of using docker.py?
Special thanks to Kasia for being editor for this post. Thank you. | https://krzysztofzuraw.com/blog/2016/docker-py/ | CC-MAIN-2022-21 | refinedweb | 580 | 59.8 |
It’s easy to forget how different the editing experience for Silverlight applications is in Visual Studio 2010 over and above what we had in Visual Studio 2008 Sp1. This isn’t necessarily Silverlight 4 specific but I suspect a lot of people coming to the Silverlight 4 bits will be doing Silverlight in VS 2010 for the first time so I thought a few things were worth pointing out.
Firstly, there’s multi-targeting. Visual Studio 2010 is the first version of Visual Studio to support building applications for multiple versions of Silverlight ( namely V3 and V4 ). Note that a user ( or developer ) can only have one version of the Silverlight runtime installed so you’d still need to have multiple machines or VPCs in order to test on Silverlight 3 and/or Silverlight 4 but you can explicitly choose which version you’re targeting in the IDE at build time. If you’re doing this you need to be aware that when you create a project you need to ignore the .NET Framework setting on the New Project dialog;
and, instead, on the “New Silverlight Application” dialog that follows you need to select which Silverlight version you’re working with;
Secondly, there’s a graphical designer 🙂 for XAML files and a Toolbox of controls to drag-drop onto that design surface – no small thing.
The designer has some neat tricks around layout. If we drag out a button for instance;
then the XAML view and Visual Studio do the right thing in terms of selection tracking and XAML highlighting below;
and, naturally, the XAML view and the design surface as kept in sync as you’d expect. I actually like working with them both open as I’m a bit keen to see what XAML is being generated by the designer and the Document Outline view also syncs itself up nicely and provides preview too which is pretty useful if you like to work with just the XAML editor open and find yourself lost in there from time to time;
There’s nice features for layout. When you’re looking at that button it’s not immediately obvious what those arrows and dots mean;
and note that in a Canvas moving the button around will set Canvas.Top and Canvas.Left rather than Margin and you won’t see the arrows and dots. These dots/arrows take a bit of getting used to as they do different things. For instance, let’s take this orange grid within a white grid;
Now, if I select the Orange grid and drag its resize handles what properties am I changing?
It looks like I’m setting the margins. So what if I wanted to set width/height? If I go and click the little arrows to turn them into circles;
then the editor changes the margin into Width/Height values and dragging now changes Width/Height ( in this case );
Note also the handy right-mouse-menu;
and my personal favourite ( because margins tend to bug me 🙂 );
which is a quick way of resetting stuff you’ve set around size/alignments/margin and is great when you’ve copied a lump of XAML from somewhere.
If you’ve got the “root” element of a UserControl selected ( i.e. the UserControl ) then you’ll see a little highlight on the bottom right;
what this does is switch between your UserControl not specifying a Width/Height ( which is 99.9% of the time what you want imho ) and your UserControl copying its Width and Height from the current value for d:DesignWidth and d:DesignHeight
If it’s not obvious what these things are – DesignWidth/Height are just what VS/Blend will use to display your control while you work with it. They’re not used at runtime. Width/Height are used at runtime and leaving them unset generally means “stretch to fill” dependent on who your parent is and what they’ve done about your sizing.
Selection works pretty well for me. If I have a couple of grids like the white/orange ones below then clicking the orange one makes it fairly obvious which grid I’m working with and clicking in the white area behind it ( or in the XAML ) makes it clear I’m now dealing with the white grid;
Also, dragging elements from one Grid to another is pretty easy. As I drag this button from the Orange grid to the white grid there’s a clear, blue indicator that shows me which Grid I’m targeting;
Note also that if you just double click on a control in the toolbox, it’ll go into the currently selected layout panel ( just like in Blend ) and so if you have a Grid selected and double click a Button a few times then you’ll get buttons inserted pretty quickly with a little margin put around them ( don’t like the layout? drag select them and right mouse and Reset what you don’t like 🙂 );
You might also notice a little red line with a couple of numbers on it – as you approach the edge of “other objects” this little snap lines appear and tell you how far away you are from those other objects as in;
I use Grids a lot and I find typing in Grid.RowDefinitions and Grid.ColumnDefinitions painful and it’s good to see that VS has decent support for Grid layout. If I take a Grid like this orange one within a white one and hover over the blue area to the left or top of the grid then I see a little triangle and an insertion marker;
If I click then my grid gets a now Row/Column definition;
Now, I’d not usually type in 172* and 80* myself 🙂 What if I’m trying to set up one row which is Auto and the other which is just *? I can affect those choices by hovering over the Grid/Row in the blue area;
and I can quickly change between Auto/Fixed/* sizing by just clicking the relevant button. It’s worth saying that if you do select a RowDefinition (in the XAML editor) then its properties do pop up in the properties window;
Speaking of which, over in the property grid, there’s a nice search feature;
and I can also group properties by where their value is coming from – i.e. is the value set locally or inherited or data-bound or…?
and then there are proper editors for properties such as brushes and masks;
And the editor has support for the ideas of using Resources and Data-Binding. Taking some examples. Let’s say that I want to do a bit of quick Element binding. I drag out a Slider and a Rectangle;
now if I want to set the width of the Rectangle so that it binds to ( say ) the value of the Slider I can go to the properties window where a visual cue gives a hint as to whether the property value is coming from ( resource/binding/local value/inherited/etc );
and I can change this to be data-bound as in;
and I’m done. What if I have some class that I want to bind to? Let’s add a class;
using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.Windows.Documents; using System.Windows.Ink; using System.Windows.Input; using System.Windows.Media; using System.Windows.Media.Animation; using System.Windows.Shapes; using System.ComponentModel; namespace SilverlightApplication48 { public class RectangleData : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public RectangleData() { Width = 96; Height = 96; } public double Width { get { return (width); } set { width = value; FirePropertyChanged("Width"); } } public double Height { get { return (height); } set { height = value; FirePropertyChanged("Height"); } } void FirePropertyChanged(string property) { if (PropertyChanged != null) { PropertyChanged(this, new PropertyChangedEventArgs(property)); } } double width; double height; } }
Ok, so that’s simple enough – let’s go and add one of those as a resource into my XAML;
and then see if I can pick that up as the DataContext for my Grid;
neat – so can I pick up Width/Height for my Rectangle if I go and select that? Well, firstly, I can see that the Rectangle picks up its DataContext by inheritance;
and then I can go to its Height and something pretty clever happened in that this dialog launched with the right stuff pretty much set in it;
Nice, nice, nice – it knows about the inherited DataContext, it knows that it is a RectangleData and it knows what one of those looks like. Like it 🙂
Naturally, I can then do the same thing for Width and my job’s done. I can also specify Converters in this dialog and Options like;
Similarly, in terms of resources – if I have something like my Rectangle and I set its fill to some colour;
then I can go and extract that out into a resource;
which gives me a dialog that lets me choose which resource dictionary we’re targetting;
and then if I draw out another Rectangle and want to use the same resource I can just drop to its Fill property;
and graphically select that thing;
Nice dialogs – will save me a bunch of typing and a bunch of errors around mistakenly typed property names and keep me out of Blend for some things.
Another thing you should notice is that controls have a design-time experience for Visual Studio. So, for instance if you drag a TabControl out onto the design surface then it has sensible additions like “Add Tab”;
and you can easily edit the headers and so controls are doing more work to offer up a design time experience rather than expecting you to just type XAML.
There’s also support for the Data-Sources window so you can drop out to that window and add a new data-source;
and then choose where the data’s coming from;
( I’m not sure what Database does here as it’s not-so-easy to get Silverlight connected to a database directly but it might have a meaning that I’ve not “understood” just yet ). I went for object source then selected a little Customer type;
and that gives me a new source;
where I can alter whether I want a Grid/Details view on the drop-down;
and choose the controls that I want to generate on the other drop-downs for field;
and then drag-drop to my design surface to create UI pretty quickly;
where the Grid version is using DataGrid and the Details version is using a bunch of standard controls in a Grid – the data sources stuff is smarter than this in that it can figure out related properties and help with master-detail kind of scenarios as well.
The other thing that you’re going to notice for Silverlight is new templates when it comes to starting new projects and adding items to projects. For instance, my “New Project” dialog currently includes;
so there’s 3 new templates there including a starting point for a RIA Services Application ( “Silverlight Business Application” ) and one for a RIA Services library along with a new project type for in-browser unit testing with Silverlight.
The add new items dialog also has more items to it for Silverlight applications;
I’m sure there’s a lot more to the tooling support but that’s as far as I’ve gone so far – a big leap forward from what was in VS 2008 Sp1 for Silverlight.
My only plea – could I have XAML snippet support please? 🙂 | https://mtaulty.com/2009/11/18/m_11743/ | CC-MAIN-2017-39 | refinedweb | 1,932 | 58.35 |
Add a New Language
Table of contents
Summary
In order to train a stanza package for a new language, you will need data for the various models, word vectors for all models other than tokenization, and possibly a character model for improving performance.
Data format
Most of the training scripts expect data in conllu format. Each word has its own individual line. For examples of the data format expected, you can download a package such as en-ewt from and run it through the preprocessing scripts such as
prep_tokenize.sh. If your data is not already compatible with this format, you would need to write your own processing script to convert it to this format.
Note that many of the columns in conllu format may not be present in your data. Most of these columns can be represented with a blank “”. One exception to this is the dependency column, which occupies the 7th, 8th, and 9th, columsn of the data. There is some numeric processing involved in these columns, so “” is not sufficient. If these columns are not prsent, you should fake them as follows: set the first row’s values to
0, root, 0:root, set each other row
i to
i-1, dep, i-1:dep. You can look at process_orchid.py for an example.
The classifier model (which is used for sentiment) has a different data format. For this model, the format is one line per sentence or phrase, with the label first and the text as a whitespaced tokenized sentencse after that. For example, see any of the sentiment processing scripts.
Word Vectors
In general we use either word2vec or fasttext word vectors. If none of those are available for the language you want to work on, you might try to use GloVe to train your own word vectors.
More information on using word vectors here.
Character LM
Character LMs are included for a few languages. You can look in resources.json for
forward_charlm and
backward_charlm
For adding a new languages, we provide scripts to automate large parts of the process. Scripts for converting raw text to conllu and conllu to a charlm dataset can be found in stanza/utils/charlm/conll17_to_text.py and stanza/utils/charlm/make_lm_data.py
- Gather a ton of tokenized text. Ideally gigabytes. Wikipedia is a good place to start for raw text, but in that case you will need to tokenize it.
- One such source of text is the conll17 shared task
- If the data you gathered was from the conll17 shared task, we provide a script to turn it into txt files. Run
python3 stanza/utils/charlm/conll17_to_text.py ~/extern_data/finnish/conll17/Finnish/This will convert conllu or conllu.xz files to txt and put them in the same directory.
- Run
python3 stanza/utils/charlm/make_lm_data.py extern_data/charlm_raw extern_data/charlmThis will convert text files in the
charlm_rawdirectory to a suitable dataset in
extern_data/charlm. You may need to adjust your paths.
- Run
python3 -m stanza.models.charlm --train_dir extern_data/charlm/fi/conll17/train --eval_file extern_data/charlm/fi/conll17/dev.txt --direction forward --lang fi --shorthand fi_conll17 --mode train
- This will take days or weeks to fully train.
For most languages, the current defaults are sufficient, but for some languages the learning rate is too aggressive and leads to NaNs in the training process. For example, for Finnish, we used the following parameters:
--lr0 10 --eval_steps 100000
Building models
Once you have the needed resources, you can follow the instructions here to train the models themselves.
Integrating into Stanza
Once you have trained new models, you need to integrate your models into the available resources.
The stanza models are kept in your
stanza_resources directory, which by default is kept in
~/stanza_resources. A json description of the models is needed so that stanza knows which models are prerequisites for other models.
The problem with editing this directly is that if you download more officially released models from stanza, any changes you make will be overwritten. A solution to this problem is to make your own directory with a new json file. For example, if you were to create new Thai tokenizers, you could make a directory
thai_stanza_resources with a file
resources.json in it. You could copy a block with information for the models:
{ "th": { "tokenize": { "orchid": { }, "best": { } }, "default_processors": { "tokenize": "orchid" }, "default_dependencies": { }, "lang_name": "Thai" } }
The resources directory then needs a structure where the first subdirectory is the language code, so in this case
/home/username/thai_resources/th. Each model type then gets a further subdirectory under that directory. For example, the
orchid tokenizer model goes in
/home/username/thai_resources/th/tokenize/orchid.pt and the
best tokenizer model goes in
/home/username/thai_resources/th/tokenize/best.pt
At last, you can load the models via
import stanza pipeline = stanza.Pipeline("th", dir="/home/username/thai_resources")
There are several options for configuring a new pipeline and its use of resources You can see the existing
resources.json for examples of how to build the json entries for other models.
Contributing Back to Stanza
If you feel your finished model would be useful for the wider community, please feel free to share it back with us! We will evaluate it and include it in our distributions if appropriate.
Please describe the data sources used and any options used or modifications made so that the models can be recreated as needed.
You can open an issue on our main page. For example, the Ukrainian NER model we have was provided via an issue. | https://stanfordnlp.github.io/stanza/new_language.html | CC-MAIN-2021-25 | refinedweb | 919 | 56.05 |
Important: Please read the Qt Code of Conduct -
32bit Integer cap on 64bit OS
I'm using Qt Creator 2.2.0 based on Qt 4.7.4 (32bit) on Windows 7 Enterprise x64. I can't remember where to find my g++ compiler version. I'm also an engineer by training, so the obvious answer to a programmer is probably the right one.
Given the following code: [edit]Made nrows an int for clarity[/edit]
@int npts = 71338;
int nrows = npts*(npts - 1);@
I always get the result 794,071,610. The expected value is 5,089,038,906.
Now, I understand that the maximum range of a 32bit integer is 2^32 (or 4,294,967,296). When I subtract that from the expected value using my trusty TI-82, I get the observed result. This suggests to me that this is a type range issue.
But when I try other data types for nrows @long long, qint64, quint64, int_fast64_t, int_least64_t, __int64@ I get the same result! The only data type I've seen work thus far is a double, but since nrows is a counter I'd prefer to keep it as an integer if possible.
Anyone have an explanation or solution? Thanks!
The reason is probably that the compiler is using 32bit ints and you only assign the result to long long or whatever.
This should work
@
long long npts = 71338;
long long nrows = npts*(npts - 1);
@
Sometimes you have to append suffixes. I should be something like:
@long long nrows = 71338LL * 71337LL; @
for example.
Of course, there are numerous other possibilities such as
@qint64 nrows = qint64 ( 71338 ) * qint64 ( 71337 ) @
- mlong Moderators last edited by
This works for me:
@
#include <QtCore/QCoreApplication>
#include <QDebug>
int main(int argc, char *argv[])
{
quint64 val = 71338;
quint64 result = val * (val - 1);
qDebug() << result; return 0;
}
@
I cannot reproduce the problem:
@
int main(int argc, char *argv[])
{
QCoreApplication a(argc, argv);
int v_int = 71338; int v_int_2 = v_int * (v_int-1); qDebug() << "int :" << v_int << " --> " << v_int_2; long v_long = 71338; long v_long_2 = v_long * (v_long-1); qDebug() << "long :" << v_long << " --> " << v_long_2; long long v_longlong = 71338; long long v_longlong_2 = v_longlong * (v_longlong-1); qDebug() << "long long:" << v_longlong << " --> " << v_longlong_2; qint64 v_qint64 = 71338; qint64 v_qint64_2 = v_qint64 * (v_qint64-1); qDebug() << "qint64 :" << v_qint64 << " --> " << v_qint64_2; quint64 v_quint64 = 71338; quint64 v_quint64_2 = v_quint64 * (v_quint64-1); qDebug() << "quint64 :" << v_quint64 << " --> " << v_quint64_2; return 0;
}
@
the output on a 32 bit system is:
@
int : 71338 --> 794071610
long : 71338 --> 794071610
long long: 71338 --> 5089038906
qint64 : 71338 --> 5089038906
quint64 : 71338 --> 5089038906
@
The only difference on a 64bit Linux system is that the long version is correct too.
As koahnig already mentioned, make sure that all variables involved are of the bigger type, otherwise you may suffer from unwanted downcasts.
Kaohnig and Volker hit the nail on the head. Because npts was only an int, the computation was downcast regardless of the type I assigned to nrows. Since I can't change the type of npts (it comes from other code... I showed it adjacent for clarity in my post), I had to recast it during the nrows computation:
@int npts = 71338;
quint64 nrows = quint64(npts) * ( quint64(npts) - 1 ); //5089038906 as expected@
Thanks for the help! I knew it was something obvious. | https://forum.qt.io/topic/8694/32bit-integer-cap-on-64bit-os | CC-MAIN-2021-25 | refinedweb | 530 | 58.11 |
#include <rpc/des_crypt.h> int ecb_crypt(char *key, char *data, unsigned datalen, unsigned mode);
int cbc_crypt(char *key, char *data, unsigned datalen, unsigned mode, char *ivec);
void des_setparity(char *key);
int DES_FAILED(int stat);
ecb_crypt() and cbc_crypt() implement the NBS DES (Data Encryption Standard). These routines are faster and more general purpose than crypt (3C). the DES_ENCRYPT or DES_DECRYPT to specify the encryption direction and DES_HW or DES_SW to specify software or hardware encryption. If DES_HW is specified, and there is no hardware, then the encryption is performed in software and the routine returns DESERR_NOHWDEVICE.
For cbc_crypt(), the parameter ivec is the 8-byte initialization vector for the chaining. It is updated to the next initialization vector upon successful return.
Given a result status stat, the macro DES_FAILED is false only for the first two statuses.
No error.
Encryption succeeded, but done in software instead of the requested hardware.
An error occurred in the hardware or driver.
Bad parameter to routine.
See attributes(5) for descriptions of the following attributes:
When compiling multi-thread applications, the _REENTRANT flag must be defined on the compile line. This flag should only be used in multi-thread applications. | http://docs.oracle.com/cd/E36784_01/html/E36876/cbc-crypt-3ext.html | CC-MAIN-2015-35 | refinedweb | 195 | 55.84 |
ReSharper Ultimate 2016.3 EAP 6 is now available to download. Let’s take a look at what’s in this latest build.
dotTrace
EAP 4 introduced support for profiling .NET Core applications, and now you can profile .NET Core unit tests directly from ReSharper’s test runner. You’ll note that currently, dotTrace hasn’t yet added folding or subsystem definitions for the xUnit.net or .NET Core modules, which can make it a little harder to read, but don’t worry, we’re working on this.
As before, there are certain restrictions – this is Windows only, right now, and targets the CoreCLR, not Mono. It also only supports Sampling mode. However, we previously suggested setting the COMPLUS_ReadyToRun environment variable. This is no longer necessary – dotTrace will do this for you.
Next up, the Timeline view in dotTrace now supports capturing native allocations. This is enabled in the Advanced view when Timeline is selected.
Once enabled, dotTrace will capture events whenever there is a native allocation. Note that this has an overhead, and can have a noticeable impact on the application being profiled.
After profiling is complete, and the data is analysed, dotTrace will add a new filter of “Native Memory Allocations”. When selected, this will filter down to only the events which caused native memory allocations, selecting the time slices on the main Timeline view, and filtering the Call Stack top methods to the methods that were making the allocations.
We’ve also added a new Events tool window that aggregates file, HTTP, SQL or exception events that have taken place during the currently selected set of time slices. This is similar to the filters of the same name, but is designed to allow examination of the events, rather than to use them as the filter. Selecting an event will give details of the event, including timestamp, duration and call stack.
ReSharper
This build sees initial support for C# 7.0’s pattern matching in is expressions and switch cases. This is just the start, and more support is of course coming in future builds.
The JavaScript language injections that we introduced in the last build get a little more polished, as we fix Rename and Find Usages, and syntax highlighting should work more reliably, too.
Our tool for speeding up your solution build time, ReSharper Build, is also getting a bit of an update in this EAP.
Firstly, the results of a build are now displayed in their own Build Results window, which will list build warnings and errors. Toolbar buttons allow hiding warnings, displaying a preview of the code with the error, and exporting the results to a file.
You get a lot of control over how you want to see the results, too. You can show them as a flat list, or enable grouping, to show by a mixture of project, folder and file. Or you can use ReSharper’s semantic knowledge of the file to group by namespace, type and member.
The options are currently being reworked, as well. For example, you can choose when the Build Results window is shown – always, never, only when there are errors, or when there are warnings or errors. We’ve also split the heuristics lists into their own page, and added a NuGet restore log page, to capture output from NuGet restore activities.
The other interesting new option is to log the build to either the Output window, or a file. You get to choose what kinds of messages are output – errors, warnings, messages and standard console output – and of course, the location of the log file.
And ReSharper Build now gets its own implementation of NuGet restore. Previously, we would use Visual Studio’s NuGet integration to perform the restore, but this only worked in Visual Studio 2015. We’ve rewritten this so it now works for all Visual Studio versions. The new NuGet options page allows capturing and viewing log information for restores.
ReSharper C++
The “Third Party Code” options page is proving popular! Introduced for JavaScript and TypeScript in EAP 4, updated to include CSS in EAP 5, and EAP 6 now adds C++ support. Just to recap – this page allows adding files, folders and wildcards to be treated either as “skipped” or “library”. ReSharper will completely ignore “skipped” files, and treat “library” files as read-only – indexed for navigation, but no inspections or quick fixes.
And finally, ReSharper C++ gets a form of postfix completion that is very much like C# extension methods. If you “dot” an expression, ReSharper C++ will now suggest free functions that would accept that expression as the first parameter. When you accept the suggestion, ReSharper C++ will rewrite your code so that the expression is passed as the first argument.
Please download the EAP and give it a go! As ever, if you encounter any issues, please let us know.
Will it be possible to suppress warnings in the r# build result window like it’s possible in the Vs output window?
Br
Hi Rene. ReSharper will hide warnings, but you can’t remove errors. I’m not aware of the feature you mention in Visual Studio’s error list – do you have an example, or a link?
Hi, here you go:
Br
Thanks! That looks like it’s talking about the Code Analysis window, rather than Visual Studio’s Error List window. I don’t think ReSharper’s Build Results would show these warnings at all – do they normally appear in Visual Studio’s Error List after a build?
Yes, it appears in the error/warning window as error, and its a great feature because you don’t have to find the error/warning id you need to suppress.
Br
Sorry I mean as warning.
Br
The free function postfix completion in c++ is a feature I wanted forever, and doesn’t exist in any other IDE for any other language, so this is amazing, and I think that feature also has a place in other languages.
The only thing I would add is completing pointer types, meaning that if i have a Foo, and the function takes Foo*, the completion will show the function, and add the & necessary.
John, thanks for the feedback! We’ll try to support this case as well, please follow for updates. EAP7 will also bring postfix completion for another common case – when you have a variable with type Foo*, and the function accepts Foo or Foo&.
Pingback: Auf dem Weg zu ReSharper Ultimate 2016.3 - entwickler.de
Pingback: Dew Drop - November 2, 2016 (#2358) - Morning Dew Anniversary Edition - Morning Dew
The performance for 2016.3 eap 6 and eap 7 is abysmal. It was ok in eap 5. Significant lag when typing.
Please provide gif in RSS feed. | https://blog.jetbrains.com/dotnet/2016/11/01/resharper-ultimate-2016-3-eap-6/?replytocom=477844 | CC-MAIN-2019-47 | refinedweb | 1,127 | 63.9 |
the following adapter in my Activity. I have set a CheckBox on the GroupView that when touched will check/uncheck the child views (which also have a CheckBox). The state of the CheckBox(true or false) is stored in the DB and initially set to t
I pretty much have the same problem as How to remove specific space between parent and child in an ExpandableListView . I have a ExpandableListView and I want to separate my groups without having a gap between the group and first child when it is exp
After many searches via google I still can't get how to figure it out.So, here is the explanation of my problem. I have an expandablelistview in which I'm using one gridview as child for each group, and I want to select (at runtime) some items and co
This is my first post so please be gentle, I will try to explain in the best way I can. I have an ExpandableListView, with a number of children. When I press one of these children, I want to add an extra LinearLayout (which I have already defined in
I using ExpandableListview ... I am able to set the values retrieved from web service to single Textview of child layout . Now need to set values two different Textviews in child layout from web service I am able to work with single textviews , but n
I apologize for the rather basic question but I am getting myself wrapped around the axel. I have been googling potential solutions for a while and I am not sure what is the best way forward. I have a use case for a ExpandableListView where each row
Why is that when i set a textsize the list items get overlap? Check the images below. This was taken from a sample and by default when we dont set a textsize for the headers the lists shows properly without overlapping one another. After i set a text
i am using expandable listview with json array. i have get sample code my question is how to get child title as string. I need child data while click childlist. expListView.setOnChildClickListener(new OnChildClickListener() { @Override public boolean
This is my fragment with my expandable list: public class Social_ActivitiesFragment extends ListFragment { ExpandableListAdapter listAdapter; ExpandableListView expListView; List<String> listDataHeader; HashMap<String, List<String>>
I have an ExpandableListView, the count of child groups can vary since they are dynamically created. I need the first 3 child groups to be always expanded by default. When the user clicks on a main group, the rest of the child groups will be expanded
I want to combine contextual action bar and expandablelist view in my app. Actually I want when I long click on child of expandable list ,the contextual bar should appear and if i press delete icon on action bar,the child item get deleted. Since I am
I'm working on my theme in android and am having a heck of a time getting my Expandable List Views to look right. Here is my desired effect. Collapsed Expanded So I primarily want the space between each List Group. And since android also adds those d
I need to customize the expandable list view and each parent should have child with different structure,i.e in User Interface design.How can i do this task? Can anyone help me with this. --------------Solutions------------- Use ExpandableListAdapter
I have created an app with a custom ExpandableListView. I have wanted to put an images in groupIndicator to left. But I get that the images show elongated to right. The size of the images is 100x100 px. How I can adjust the width of the images to the
public class CustomExpandableListAdapter extends BaseExpandableListAdapter implements OnCheckedChangeListener { private CoreWorkFlowActivity context; private List<ActivationType> listDataHeader=new ArrayList<ActivationType>(); // header t
I am creating a expandable list which has editText. Everything working fine, But i want to add a button last of the page for search and want to get all fields value which i entered, But i don't know how to do in this code. I found this code on intern
I trying to populate an expandablelistview, with data from a database that is populated my a httprequest. So far I have been able to populate the database without any problems. I get NullPointerException error when I try to set the adapter of the exp
I want to create something like this but I want the user to be able to add to the list and I want them to be able to edit the items. I am a beginner so please don't hate. Any help would be highly appreciated. --------------Solutions------------- You
I am using nhaarman's ListViewAnimations () to implement an expandable list view where each row has one parent and one child view that expands when a user clicks on the parent view. The child row has a cou | http://www.dskims.com/tag/expandablelistview/ | CC-MAIN-2019-13 | refinedweb | 819 | 68.6 |
Introduction
I recently added smart lighting to my home. Specifically, I installed smart light switches: the switches allow my wife (who is not smart home advocate) to control the lights as she normally does (by touch), while giving me the option to use Alexa to control the lights.
Unfortunately, I didn't find a great solution for lamps. There are smart lamp modules that exist, but they are physically cumbersome to use. The module is typically "installed" at the outlet. No one wants to reach down (or behind the couch) to control a lamp.
Smart bulbs aren't much better; they can easily be controlled by voice, but turning off the power at the light source renders the bulb inoperable.
So, I decided to build my own, using a MediaTek Linkit Duo as the board.
Step 1 - Designing the Touch Control
The light bulb is connected to a simple 5V relay - with connections to the Ground, 5V, and GPIO3 pins. There are two ways change the GPIO3 pin state: 1) touch (a switch/button) and 2) Alexa.
Button
I used a push button in my first prototype. The controlling code, written in Python, simply waits to see if the button fires. When fired, the relay pin is set to High or Low (depending on its current state). [Note: Button is connected to the Ground and GPIO43 pins.]
import mraa import time # Refer to the pin-out diagram for the GPIO number #button pin buttonpin = mraa.Gpio(43) buttonpin.dir(mraa.DIR_IN) relaypin = mraa.Gpio(3) relaypin.dir(mraa.DIR_IN) relaypin_out = mraa.Gpio(3) relaypin_out.dir(mraa.DIR_OUT) while True: if buttonpin.read() == 0: #button was pressed time.sleep(0.5) print "button pressed" print "relay:", relaypin.read() if relaypin.read() == 0: relaypin_out.write(1) else: relaypin_out.write(0)
Step 2 - Designing the Alexa Control
The Alexa solution required two components :
- An Alexa skill
- An Python script that would take a command from the Alexa Skill and change the state of the relay pin.
Alexa Skill
Amazon has a specific API created for Smart Home skills (these skills do not require the user to say the skill name when using the skill). There is a great five- part tutorial on their blog with very clear instructions; I created my first draft of the skill within 30 minutes of starting.
Once my base skill was completed, I updated the code for use with my lamp prototype.
Device Discovery Function (getAppliances)
The Smart Home Skill must provide Alexa with a list of valid/approved/installed smart devices. Typically, this code would sit in a device manufacturer's "cloud". Because I was building my own, I hard-coded the device information:
var getAppliances = function(event) { // var accessToken = event.payload.accessToken return [ { "applianceId": "DEVICE-NAME", //office-room-lamp1 "manufacturerName": "Darian Johnson", "modelName": "DIY Office Light", "version": "1", "friendlyName": "Office", "friendlyDescription": "DIY Smart Light", "isReachable": true, "actions": [ "turnOn", "turnOff" ], "additionalApplianceDetails": { "extraDetail1": "This is a light that is reachable" } } ]; };
Device Control Function (callDeviceCloud)
The Smart Home Skill must also provide the ability to control the device. I used AWS IoT to send MQTT messages to the device.
var callDeviceCloud = function(event, command, commandValue) { var deviceId = event.payload.appliance.applianceId; log(deviceId, command + " = " + commandValue); var email = 'user@email.com'; var iotTopic = email + "/" + deviceId + "/" + commandValue; var iotPayload = '{ "message": "Toggle Light"}' publishMessage(iotTopic,iotPayload); };// callDeviceCloud
AWS IoT Setup
I needed to create an AWS IoT "thing" to receive/route the messages. The following guide explains how to create a device. A key part of this is creating the certificates, which you'll need to download and install on your Linkit.
MQTT Script on the Linkit Duo
I created a script similar to the Button code; this script waited for an MQTT message and set the relay pin start to High or Low based on the message.
import mraa import time import paho.mqtt.client as paho import ssl import json # relay pin out relaypin_out = mraa.Gpio(3) relaypin_out.dir(mraa.DIR_OUT) # Parameters topic = "user@email.com" applianceId = ""DEVICE-NAME" #office-room-lamp1 def on_connect(client, userdata, flags, rc): print("Connection returned result: " + str(rc) ) # Subscribing in on_connect() means that if we lose the connection and # reconnect then subscriptions will be renewed. #client.subscribe("#" , 1 ) client.subscribe(topic + "/" + applianceId + "/#" ,1) def on_message(client, userdata, msg): print("payload: " + msg.payload) parsed_json = json.loads(msg.payload) if msg.topic ==topic + "/" + applianceId + "/on": relaypin_out.write(1) if msg.topic ==topic + "/" + applianceId + "/off": relaypin_out.write(0) mqttc = paho.Client() mqttc.on_connect = on_connect mqttc.on_message = on_message #variables to connect to AWS IoT #Note these certs allow access to send IoT messages awshost = "data.iot.us-east-1.amazonaws.com" awsport = 8883 clientId = applianceId # + str(uuid.uuid4()) thingName = applianceId caPath = "certs/verisign-cert.pem" certPath = "certs/Light.certificate.pem.crt" keyPath = "certs/Light.private.pem.key" mqttc.tls_set(caPath, certfile=certPath, keyfile=keyPath, cert_reqs=ssl.CERT_REQUIRED, tls_version=ssl.PROTOCOL_TLSv1_2, ciphers=None) mqttc.connect(awshost, awsport, keepalive=60) mqttc.loop_forever()
Enabling the Alexa Skill
Before I could use skill, I had to enable it and perform device discovery.
Once that was complete, I was able to control my prototype via button and voice.
Final Solution Build
Once I confirmed that the code worked, I went about build a more robust solution.
Step 1 - Wire the switch and run the wires through the lamp
I decided to go with a pole style floor lamp. This would allow be to snake the switch wires from the stop of the lamp to the base.
There were two challenges with this approach.
1) I had to drill two small holes in the back of the lamp to install the button. I used heat wrap to cover the wires.
2) My small push button that I planned to used was defective, so I changed to a toggle switch. This was actually a more aesthetically pleasing solution, but required that I change my "button" code.
import mraa import time # Refer to the pin-out diagram for the GPIO number #button pnb buttonpin = mraa.Gpio(43) buttonpin.dir(mraa.DIR_IN) current_button = buttonpin.read() relaypin = mraa.Gpio(3) relaypin.dir(mraa.DIR_IN) relaypin_out = mraa.Gpio(3) relaypin_out.dir(mraa.DIR_OUT) while True: if buttonpin.read() != current_button: #button was flipped time.sleep(0.25) print "button flipped" print "relay:", relaypin.read() if relaypin.read() == 0: relaypin_out.write(1) else: relaypin_out.write(0) current_button = buttonpin.read()
Step 2 - Solder the device to a Protoboard.
Full disclosure - I haven't soldered anything since my senior year in college and I barely passed by circuits 101 class, so my final solution was never going to be pretty.... but it works.
Step 3 - Connect the relay and assemble the device
I clipped the power cord to the lamp, soldered the ends, and connected them to the relay (I chose the "always open" connection, but either would work). From there, I connected the relay to the Linkit Duo (see the red [5V], black [ground], and yellow [signal] wires in the picture above).
Once connected, I placed the Linkit and relay and and placed both components inside the plastic project box.
Step 4 - Final steps
My final activity was to create two services on the Linkit to automatically start the button and mqtt Python programs on reboot. The scripts can be installed with the following commands:
mv initScript /etc/init.d/ chmod +x /etc/init.d/initScript /etc/init.d/initScript enable
Conclusion
As you can see, the lamp looks like a "dumb" lamp - no crazy wiring or out-of-place indicators. Total price, about $35:
- $16 - Linkit Duo
- $8 - Floor Lamp
- $3 - Relay
- $3 - Toggle Switch
- $5 - Project Box
Attribution/Thanks
- Michael Palermo's posts on Alexa Smart Home Skills
- Alex Glow's Intro to Soldering | https://mediateklabs.hackster.io/darian-johnson/diy-smart-lamp-controlled-by-toggle-switch-and-alexa-7de243 | CC-MAIN-2018-47 | refinedweb | 1,284 | 58.89 |
Opened 5 years ago
Last modified 2 months ago
#28598 assigned Cleanup/optimization
BCC addresses are ignored in the console and file email backends
Description
Hi there,
we noticed during development that the bcc header line is not printed in the console (and filebased since it inherits from console) EmailBackend (
django.core.mail.backends.{console|filebased}.EmailBackend). It seems like this issue has been reported before, e.g. in #18582, however there was no specific solution. It looks like a design decision, however there is no documentation (that I found) about it.
In my opinion, it would be nice to have the BCC line printed in those backends. As the documentation says, those backends are not intended for use in production, which makes it an ideal tool for development and testing. However that requires that the backend behaves just as a regular (smtp) backend would and display the email message exactly as it would have been sent.
If you decide not to fix this, please add a note to the documentation to help developers avoid a sleepless night because they really can't get BCC to work in their mail function ;)
Best,
zngr
Change History (20)
comment:1 Changed 5 years ago by
comment:2 Changed 5 years ago by
comment:3 Changed 4 years ago by
comment:4 Changed 4 years ago by
I think theres's an easy-enough™ fix available here, rather than just documenting that the
BCC list won't be shown.
The current console backend does this:
for message in email_messages: self.write_message(message) ...
The simple addition is to also write
message.recipients() at this point.
If we add and document a
format_message() hook to
console.EmailBackend, with a stub implementation...
def format_message(self, message): return message # ... then in send_messages()... for message in email_messages: self.write_message(self.format_message(message)) ...
... then users would be free to subclass the either the console or file backends in order to display the BCC list.
comment:5 Changed 4 years ago by
I thought about including
message.recipients(), my hesitation is that it will double up the
From and
CC pieces since currently we show the MIME which already includes that component. I wonder if adding
.format_message() isn't a case of YAGNI or over engineering.
I do agree that the proposed documentation patch is ugly, it makes specific call outs to technical details that don't feel necessary.
What do you think about also documenting the fact that
filebased inherits from
console? Having to double up docs is unfortunate.
Currently
write_message also includes
'*' * 79, is that part of the overridable interface as well in your solution?
comment:6 Changed 4 years ago by
I wonder if adding .format_message() isn't a case of YAGNI or over engineering.
Well, I guess most people won't need it, but that this ticket exists suggest some will. Adding a hook to allow subclassing at least allows those people to address their issue.
(Just saying in the docs that "BCCs won't be included" isn't great IMO: it offers me no way forward.)
... it will double up ...
I wouldn't worry about this. If people want to do some set operations in their subclass to get just the BCCs then they're welcome to.
All we're doing is providing the hook.
Currently write_message also includes '*' * 79...
I'd leave that where it is. If someone wants to override
write_message() as well then they're welcome.
comment:7 Changed 4 years ago by
To be clear
write_message takes in an
write_message as the hook given that?
comment:8 Changed 4 years ago by
...write_message takes in an EmailMessage, not a string.
Yes, that's right. So they'd need to be some adjustment.
The trouble with
write_message() as the hook is that you need to essentially reimplement it (or copy and paste the whole thing) in order to adjust the formatting.
As such it's more of a
wontfix solution.
The idea I'm trying to communicate is that we add a
format_message() hook, which does just that, separate from the existing
write_message() method, which would just be responsible for the
write() calls, message dividers (etc).
comment:9 Changed 4 years ago by
There is (of course) the
wontfix scenario.
The "Defining a custom email backend" section begins thus:
If you need to change how emails are sent you can write your own email backend...
People wanting BCCs for console or file based backends could just do this. We might say it's already documented...
comment:10 Changed 4 years ago by.
comment:11 Changed 4 years ago by
I updated the PR with a provisional patch. I am also okaying
wontfix-ing this.
comment:12 Changed 4 years ago by
Hi Josh. Thanks for updating the PR. It's looking good.
... it is confusing when encountered in this context.
Right. OK. I think you might have convinced me. :-)
What's your thought here: are you keen to add the (small?) amount of set logic needed to calculate the BCC list and add that to the formatted message for these backends?
comment:13 Changed 4 years ago by
Right. OK. I think you might have convinced me. :-)
What's your thought here: are you keen to add the (small?) amount of set logic needed to calculate the BCC list and add that to the formatted message for these backends?
Is there any sort of backwards compatibility guarantee on the output?
comment:14 Changed 3 years ago by
comment:15 Changed 3 years ago by
Muesliflyer, please don't reopen old tickets. If you have something to add about displaying Bcc addresses in the console you can leave your comments here.
comment:16 Changed 22 months ago by
comment:17 Changed 2 months ago by
I thought my email sending was not working, stepped code all the way to when
send_mail is called and sure enough its getting a valid BCC address, but its not being printed. Then I starting googling and found this issue. I recommend printing out the the BCC in console/file email backends, as thats what people expect, and they are tools for debugging.
comment:18 Changed 2 months ago by
Michael, feel-free to prepare a patch.
comment:19 follow-up: 20 Changed 2 months ago by
Mariusz Felisiak
Hi Mariusz, is what I came up with, do you the solution is good enough for the Django Core? If it is a will create a patch out of it
import re from io import StringIO from django.core.mail.backends.console import EmailBackend class WithBccEmailBackend(EmailBackend): """Email backend that writes messages to console instead of sending them, by defaul the Django Console back end does not print the Bcc fields, with just predixes the original output with the Bcc field. """ def write_message(self, message): if message.bcc: self.stream_backup = self.stream self.stream = StringIO() super().write_message(message) content = self.stream.getvalue() bcc = f"Bcc: {', '.join(message.bcc)}\n" new_content = re.sub(r'(Reply-To: |Date: )', bcc + r'\g<1>', content, 1) self.stream = self.stream_backup self.stream.write(new_content) else: super().write_message(message)
I'm not sure which solution is best, but I'll accept the ticket as an indication to do something (either a code change or document the limitation). | https://code.djangoproject.com/ticket/28598 | CC-MAIN-2022-40 | refinedweb | 1,213 | 65.93 |
A while back Bob blogged about The Latin1 Transcoding Trick for Java Servlets, etc.
Suppose you have an API that insists on converting an as-yet-unseen stream of bytes to characters for you (e.g. servlets), but lets you set the character encoding if you want.
Because Latin1 (officially, ISO-8859-1) maps bytes one-to-one to Unicode code points, setting Latin1 as the character encoding means that you get a single Java char for each byte.
Another situation where this trick comes in real handy is dealing with the way that Ant compiles its logfiles.
If, like me, you’re fond of debug-by-printf and you use Ant to compile and run your programs, then you might have run into the problem that has given rise to many StackOverflow queries, that is, when you use an Ant task to run the program and instrument your code with print statements to standard out, Ant replaces non-ASCII characters with a question mark. When the problem you’re trying to debug is making sure that non-ASCII characters are being processed correctly, this is both misleading and maddening. The standard advice on StackOverflow is to set the shell environment variable ANT_OPTS using the following incantation (for bash shell):
export ANT_OPTS="-Dfile.encoding=UTF-8"
This works as long as you’re working with UTF-8 encoded character data and your terminal’s encoding is set to UTF-8 as well. Here a solution that works no matter what character encoding is in play:
export ANT_OPTS="-Dfile.encoding=Latin1"
It’s the ol’ Latin1 transcoding trick!
Of course you already know about character encodings . Do you know about Ant’s IO System? Here’s what Ant contributor Conor MacNeill says:
The Ant IO system is designed to associate any output sent to System.out and System.err with the task that generated the output and to log the output accordingly.
Ant’s Main class installs its own output streams into System.out and System.err. These streams are instances of DemuxOutputStreams
Using the source code for Ant 1.9.0, in class
org.apache.tools.ant.Main we see that
System.In,
System.Out, and
System.Err are all reassigned to Ant’s
DemuxInputStream and
DemuxOutputStream, which extend
InputStream and
OutputStream, respectively:
System.setIn(new DemuxInputStream(project)); System.setOut(new PrintStream(new DemuxOutputStream(project, false))); System.setErr(new PrintStream(new DemuxOutputStream(project, true)));
The call to the
PrintStream constructor is the one-arg constructor
PrintStream(OutputStream out). Because no file encoding is specified, the encoding used is the default charset for the JVM that’s running Ant. This is specified by the system property
file.encoding. This property varies depending on your platform and locale. To check this, try this on your machine:
public class GetDefaultEncoding { public static void main(String[] args) { System.out.println(System.getProperty("file.encoding")); } }
On my Mac running OS-X the default is
US-ASCII (because the default locale on this machine is
en_US). On my Windows XP machine the default is
Cp1252 (Windows Latin1 which differs from ISO_8859-1 just enough to be noticeable).
At the point where Ant’s
DemuxInputStream reads in the bytes sent to System.out by a Java task, any character outside of the default character set is replaced by a question mark. When Latin1 is the default encoding, all bytes are valid Latin1 characters and their Unicode code point value is the same as the byte value so the bytes from the Java task pass through the Ant IO system unchanged.
As long as the next process in the chain (e.g. the terminal app) is configured to handle whatever encoding your text data is in, you’re good to go. | http://lingpipe-blog.com/page/2/ | CC-MAIN-2014-41 | refinedweb | 623 | 56.35 |
Note: arasu is in beta ,still not yet ready for deployment.
Arasu : A Lightning Fast Web Framework
Note: arasu development only work on dart enabled browsers like dartium or dart enabled chrome browser.
Arasu is a Next Generation Full Stack Web framework written on Go language & Dart language.
Features
- lightning fast, because of golang and dartlang
- use RDBMS and BIGDATA for serverside store
- use IndexedDB and Angular Dart for clientside store,clientside framework
- use TDD default by golang and dartlang
- use BDD with selenium and Spinach (this is in alpha)
- automatic build system.
Installation
Install <a href="">Golang</a> then add golang binary into system PATH and verify the sucessfull installation by <pre> ~$ go version go version go1.3 linux/amd64 </pre>
Install <a href="">Dartlang(Dart SDK)</a> then add dart-sdk binary into system PATH and verify the sucessfull installation by <pre> ~$ dart --version Dart VM version: 1.5.8 (Tue Jul 29 07:05:41 2014) on "linux_x64" </pre>
Install <a href="">Mysql</a> then add mysql binary into system PATH and verify the sucessfull installation by <pre> ~$ mysql --version mysql Ver 14.14 Distrib 5.5.37, for debian-linux-gnu (x86_64) using readline 6.2 </pre>
Install <a href="">Hbase</a> then add hbase binary into system PATH and verify the installation is sucessfull by <pre> ~$ hbase version //some valid output </pre>
finally * Install Arasu Framework by
~$ go get github.com/arasuresearch/arasu
Creating a New Arasu Project
Part 1
Creating scaffold for relational Database Management System aka RDBMS (mysql)
~$ arasu new demo ~$ cd demo ~$ arasu dstore create ~$ arasu generate scaffold Admin name password:string age:integer dob:timestamp sex:bool ~$ arasu dstore migrate </pre>
Now start the server:
~$ arasu serve
// you will get output like "You don't have a lockfile, so we need to generate that:" by DArt Pub Manager ,this will take few more seconds (this will occur at first time only). // // then //you may get dart-sdk "pub downlad error" for few times , but you can ignore and stop the command by CTRL + C . //and start the same command again until sucessfull start.
~$ arasu serve </pre>
after successfull start....
now visit on <i><a href="">Dartium</a> or dart enabled chrome</i> browser. <pre> To open dartium ~$ ./DART-SDK-INSTALLED-DIRECTORY/chromiun/chrome </pre>
then visit
There you can play !!!
Part 2
Creating scaffold for BigData (hbase)
stop the arasu server by pressing CTRL + C
open another terminal and start bigdata... <pre> ~$ start-hbase.sh ~$ hbase thrift start </pre>
leave this terminal to run thrift deamon. come back to old terminal then~$ arasu dstore create --dstore bigdata
this will result in failure
unfortunately Hbase thrift V1 Binary server is not supporting to create database through API Calls so we have to create it manually . to do that <pre>
~$ hbase shell
create_namespace 'demo_development' quit
</pre>
close hbase shell , then
~$ arasu generate scaffold User Profile:{FirstName:String,LastName:String,Age:int,Dob:DateTime} Contact:{Phone:String,Email:String} --dstore bigdata ~$ arasu dstore migrate --dstore bigdata </pre>
Now start the server: <pre> ~$ arasu serve </pre>
now visit
on dartium browser, there you can play !!!!!
lets dive into Full Tutorial Arasu Framework To learn more...
Contribute
Contribution are welcome Here.
License
Released under the MIT License. | https://www.dartdocs.org/documentation/arasu/0.1.2/index.html | CC-MAIN-2017-13 | refinedweb | 541 | 50.87 |
both how your application works. In case your environment is very specific, you can also roll your own SDK using our document SDK API.
Note
Your platform is not listed? There are more SDKs we support: list of SDKs
The quickest way to get started is to use the CDN hosted version of the JavaScript browser SDK:
<script src="" crossorigin="anonymous"></script>
Don't like the CDN?
You can also NPM install our browser library
Install our SDK using the cordova command:
$ cordova plugin add sentry-cordova@0.12.3
Install the NuGet package:
Package Manager:
Install-Package Sentry -Version 1.1.0
.NET Core CLI:
dotnet add package Sentry -v 1.1.0
Using .NET Framework prior to 4.6.1?
Our legacy SDK supports .NET Framework as early as 3.5.
Install the NuGet package:
Package Manager:
Install-Package Sentry.AspNetCore -Version 1.1.0
.NET Core CLI:
dotnet add package Sentry.AspNetCore -v 1.1.0
To add Sentry to your Rust project you just need to add a new dependency to your
Cargo.toml:
[dependencies] sentry = "0.12.0"
If you are using
yarn you can add our package as a dependency easily:
$ yarn add @sentry/browser@4.4.1
Or alternatively you can npm install it:
$ npm install @sentry/browser@4.4.1
Want a CDN?
You can also use our more convenient CDN version
If you are using
yarn you can add our package as a dependency easily:
$ yarn add @sentry/electron@0.14.0
Or alternatively you can npm install it:
$ npm install @sentry/electron@0.14.0
If you are using
yarn you can add our package as a dependency easily:
$ yarn add @sentry/node@4.4.1
Or alternatively you can npm install it:
$ npm install @sentry/node@4.4.1.
You should
init the Sentry Browser SDK as soon as possible during your page load:
Sentry.init({ dsn: '___PUBLIC_DSN___' });
import sentry_sdk sentry_sdk.init("__ }
Add Sentry to
Program.cs through the
WebHostBuilder:
public static IWebHost BuildWebHost(string[] args) => WebHost.CreateDefaultBuilder(args) .UseStartup<Startup>() // Add this: .UseSentry("___PUBLIC_DSN___") .Build();
extern crate sentry; let _guard = sentry::init("___PUBLIC_DSN___");
You should
init the Sentry browser SDK as soon as possible during your application load up:
import * as Sentry from '@sentry/browser'; Sentry.init({ dsn: '___PUBLIC_DSN___' });
You need to call
init in your
main and every
renderer process you spawn.
For more details about Electron click here
import * as Sentry from '@sentry/electron'; Sentry.init({dsn: '___PUBLIC_DSN___'});
You need to inform the Sentry Node SDK about your DSN:
const Sentry = require('@sentry/node');. | https://docs.sentry.io/error-reporting/quickstart/ | CC-MAIN-2018-51 | refinedweb | 429 | 60.01 |
Miscellaneous callbacks that don't belong to any specific group are to be found here.
There could be various uses for this handy callback.
The initial purpose of it was to be able to quickly check memory requirements for a given set of hyperparamaters like
bs and
size.
Since all the required GPU memory is setup during the first batch of the first epoch see tutorial, it's enough to run just 1-2 batches to measure whether your hyperparameters are right and won't lead to Out-Of-Memory (OOM) errors. So instead of waiting for minutes or hours to just discover that your
bs or
size are too large, this callback allows you to do it seconds.
You can deploy it on a specific learner (or fit call) just like with any other callback:
from fastai.callbacks.misc import StopAfterNBatches [...] learn = cnn_learner([...]) learn.callbacks.append(StopAfterNBatches(n_batches=2)) learn.fit_one_cycle(3, max_lr=1e-2)
and it'll either fit into the existing memory or it'll immediately fail with OOM error. You may want to add ipyexperiments to show you the memory usage, including the peak usage.
This is good, but it's cumbersome since you have to change the notebook source code and often you will have multiple learners and fit calls in the same notebook, so here is how to do it globally by placing the following code somewhere on top of your notebook and leaving the rest of your notebook unmodified:
from fastai.callbacks.misc import StopAfterNBatches # True turns the speedup on, False return to normal behavior tune = True #tune = False if tune: defaults.silent = True # don't print the fit metric headers defaults.extra_callbacks = [StopAfterNBatches(n_batches=2)] else: defaults.silent = False defaults.extra_callbacks = None
When you're done tuning your hyper-parameters, just set
tune to
False and re-run the notebook to do true fitting.
The setting
defaults.silent controls whether
fit calls print out any output.
Do note that when you run this callback, each fit call will be interrupted resulting in the red colored output - that's just an indication that the normal fit didn't happen, so you shouldn't expect any qualitative results out of it. | https://docs.fast.ai/callbacks.misc.html | CC-MAIN-2020-05 | refinedweb | 367 | 52.8 |
CNTL(2) OpenBSD Programmer's Manual FCNTL(2)
NAME
fcntl - file control
SYNOPSIS
#include <fcntl.h>
int
fcntl(int fd, int cmd, ...);
DESCRIPTION:
- Lowest numbered available descriptor greater than or
equal to arg (interpreted as an int).
- Same object references as the original descriptor.
- New descriptor shares the same file offset if the ob-
ject was a file.
- Same access mode (read, write or read/write).
- Same file status flags (i.e., both file descriptors
share the same file status flags).
- The close-on-exec flag associated with the new file
descriptor is set to remain open across execv(3) (interpreted as an int). The flag should be speci-
fied as 0 (do not close-on-exec) or 1 (do close-on-exec).
F_GETFL Get descriptor status flags, as described below (arg is ig-
nored).
F_SETFL Set descriptor status flags de-
scription de-
scriptor ex-
tend100w-
name(3) to retrieve a record, the lock will be lost because getpwname(3)
opens, reads, and closes the password database. The database close will
release all locks that the process has associated with the database, even
if the library routine never requested a lock on the database. Another
minor semantic problem with this interface is that locks are not inherit-
ed by a child process created using the fork(2) function. The flock(2)
interface has much more rational last close semantics and allows locks to
be inherited by child processes. flock(2) is recommended for applica-
tions that want to ensure the integrity of their locks when using library
routines or wish to pass locks to their children. Note that flock(2) and
fcntl(2).
ERRORS writing.
[EMFILE] cmd is F_DUPFD and the maximum allowed number of file de-
scriptors are currently open.
dtable-
size(3)
HISTORY
The fcntl() function call appeared in 4.2BSD.
OpenBSD 2.6 January 12, 1994 4 | http://www.rocketaware.com/man/man2/fcntl.2.htm | crawl-001 | refinedweb | 310 | 65.42 |
WWW::Mechanize::Script::Plugin - plugin base class for check plugins
version 0.100
Instantiates new WWW::Mechanize::Script::Plugin. This is an abstract class.
Retrieves the value for $value_name from the hash %check.
Retrieves the value for $value_name from the hash %check and returns true when it can be interpreted as a boolean value with true content (any object is always returned as it is, (?:(?i)true|on|yes) evaluates to true, anything else to false).
Proves whether this instance can check anything on the current run test. Looks if any of the required "check_value_names" are specified in the check parameters of the current test.
Returns list of check values which are used to check the response.
Each value has a value
_code counterpart which is used to modify the return value of "check_response" when the check upon that value fails.
Checks the response based on test specifications. See individual plugins for specific test information.
Returns the accumulated code for each failing check along with optional messages containing details about each failure.
# no error return (0); # some error return ($code,@messages); # some error but no details return ($code);. | http://search.cpan.org/~rehsack/WWW-Mechanize-Script-0.100/lib/WWW/Mechanize/Script/Plugin.pm | CC-MAIN-2015-18 | refinedweb | 188 | 58.69 |
On Mon, Jan 26, 2004 at 10:44:29PM -0500, Mirian Crzig Lennox wrote: > address@hidden (Tom Lord) writes: > > > > Is the inevitable resulting user confusion really worth it? > > > I think when you start munging names like this, user confusion is > > > inevitable :( > > > > That's the #1 reason why I resist changes like this. We have a > > perfectly servicable namespace right now with a clear structure. > > It's all downhill from here. And, for what? An imperfect support > > for "arbitrary names"? > > Well, here's the thing: I would dearly love to sell my workplace on > the idea of ditching CVS for Arch. However, that won't happen unless > Arch can support our version naming scheme, which is typically > [productname]-x.y-rel-z (for natural numbers x,y and z). For example, > they want to be able to tag something as "2.1-alpha-4", "2.1-beta-3", > "2.1-rc5", and so on. They most definitely are NOT interested in > saying "alpha-2.1.4", "beta-2.1.3", "rc-2.1.5", etc. because those > names have entirely different meanings to them, and to our customers > and beta sites. > > And for what it's worth, I firmly agree. Our version naming scheme > isn't uncommon, complicated or irregular; it's perfectly reasonable to > expect Arch to cope. If it can't, we ought to fix it so it can. This is a fairly good example of why it wasn't a particularly good idea to embed a versioning system into arch in the first place. Because it's there, people think they have to use it (while most of the time, the right thing to do is to stick a 0 in the version field and use the branch name). -- .''`. ** Debian GNU/Linux ** | Andrew Suffield : :' : | `. `' | `- -><- |
signature.asc
Description: Digital signature | http://lists.gnu.org/archive/html/gnu-arch-users/2004-01/msg00904.html | CC-MAIN-2016-30 | refinedweb | 301 | 75 |
KDEUI
#include <kpixmapcache.h>
Detailed Description
General-purpose pixmap cache for KDE.
The pixmap cache can be used to store pixmaps which can later be loaded from the cache very quickly.
Its most common use is storing SVG images which might be expensive to render every time they are used. With the cache you can render each SVG only once and later use the stored version unless the SVG file or requested pixmap size changes.
KPixmapCache's API is similar to that of the QPixmapCache so if you're already using the latter then all you need to do is creating a KPixmapCache object (unlike QPixmapCache, KPixmapCache doesn't have many static methods) and calling insert() and find() method on that object:
The above example illustrates that you can also cache pixmaps created from some data. In case such data is updated, you might need to discard cache contents using discard() method:
As demonstrated, you can use cache's timestamp() method to see when the cache was created. If necessary, you can also change the timestamp using setTimestamp() method.
- Deprecated:
- KPixmapCache is susceptible to various non-trivial locking bugs and inefficiencies, and is supported for backward compatibility only (since it exposes a QDataStream API for subclasses). Users should port to KImageCache for a very close work-alike, or KSharedDataCache if they need more control.
- See also
- KImageCache, KSharedDataCache
Definition at line 85 of file kpixmapcache.h.
Member Enumeration Documentation
Describes which entries will be removed first during cache cleanup.
Definition at line 216 of file kpixmapcache.h.
Constructor & Destructor Documentation
Constucts the pixmap cache object.
- Parameters
-
Definition at line 983 of file kpixmapcache.cpp.
Definition at line 996 of file kpixmapcache.cpp.
Member Function Documentation
- Returns
- maximum size of the cache (in kilobytes). Default setting is 3 megabytes (1 megabyte = 2^20 bytes).
Definition at line 1122 of file kpixmapcache.cpp.
Deletes a pixmap cache.
- Parameters
-
Definition at line 1217 of file kpixmapcache.cpp.
Deletes all entries and reinitializes this cache.
NOTE: If useQPixmapCache is set to true then that cache must also be cleared. There is only one QPixmapCache for the entire process however so other KPixmapCaches and other QPixmapCache users may also be affected, leading to a temporary slowdown until the QPixmapCache is repopulated.
Definition at line 1226 of file kpixmapcache.cpp.
Makes sure that the cache is initialized correctly, including the loading of the cache index and data, and any shared memory attachments (for systems where that is enabled).
- Note
- Although this method is protected you should not use it from any subclasses.
Definition at line 1032 of file kpixmapcache.cpp.
Tries to load pixmap with the specified key from cache.
If the pixmap is found it is stored in pix, otherwise pix is unchanged.
- Returns
- true when pixmap was found and loaded from cache, false otherwise
Reimplemented in KIconCache.
Definition at line 1272 of file kpixmapcache.cpp.
Inserts the pixmap pix into the cache, associated with the key key.
Any existing pixmaps associated with key are overwritten.
Reimplemented in KIconCache.
Definition at line 1366 of file kpixmapcache.cpp.
Cache will be disabled when e.g.
its data file cannot be created or read.
- Returns
- true when the cache is enabled.
Definition at line 1048 of file kpixmapcache.cpp.
- Returns
- true when the cache is ready to be used. Not being valid usually means that some additional initialization has to be done before the cache can be used.
Definition at line 1054 of file kpixmapcache.cpp.
Can be used by subclasses to load custom data from the stream.
This function will be called by KPixmapCache immediately following the image data for a single image being read from stream. (This function is called once for every single image).
- See also
- writeCustomData
- loadCustomIndexHeader
- Parameters
-
- Returns
- true if custom data was successfully loaded, false otherwise. If false is returned then the cached item is assumed to be invalid and will not be available to find() or contains().
Reimplemented in KIconCache.
Definition at line 1361 of file kpixmapcache.cpp.
Can be used by subclasses to load custom data from cache's header.
This function will be called by KPixmapCache immediately after the index header has been written out. (This function is called one time only for the entire cache).
- See also
- loadCustomData
- writeCustomIndexHeader
- Parameters
-
- Returns
- true if custom index header data was successfully read, false otherwise. If false is returned then the cache is assumed to be invalid and further processing does not occur.
Reimplemented in KIconCache.
Definition at line 1039 of file kpixmapcache.cpp.
Loads a pixmap from given file, using the cache.
If the file does not exist on disk, an empty pixmap is returned, even if that file had previously been cached. In addition, if the file's modified-time is more recent than cache's timestamp(), the entire cache is discarded (to be regenerated). This behavior may change in a future KDE Platform release. If the cached data is current the pixmap is returned directly from the cache without any file loading.
- Note
- The mapping between filename and the actual key used internally is implementation-dependent and can change without warning. Use insert() manually if you need control of the key, otherwise consistently use this function.
- Parameters
-
- Returns
- The given pixmap, or an empty pixmap if the file was invalid or did not exist.
Definition at line 1447 of file kpixmapcache.cpp.
Same as loadFromFile(), but using an SVG file instead.
You may optionally pass in a size to control the size of the output pixmap.
- Note
- The returned pixmap is only cached for identical filenames and sizes. If you change the size in between calls to this function then the pixmap will have to be regenerated again.
- Parameters
-
- Returns
- an empty pixmap if the file does not exist or was invalid, otherwise a pixmap of the desired size.
Definition at line 1473 of file kpixmapcache.cpp.
This function causes the cache files to be recreate by invalidating the cache.
Any shared memory mappings (if enabled) are dropped temporarily as well.
- Note
- The recreated cache will be initially empty, but with the same size limits and entry removal strategy (see removeEntryStrategy()).
If you use this in a subclass be prepared to handle writeCustomData() and writeCustomIndexHeader().
- Returns
- true if the cache was successfully recreated.
Definition at line 1156 of file kpixmapcache.cpp.
Removes some of the entries in the cache according to current removeEntryStrategy().
- Parameters
-
- Warning
- This currently works by copying some entries to a new cache and then replacing the old cache with the new one. Thus it might be slow and will temporarily use extra disk space.
Definition at line 1258 of file kpixmapcache.cpp.
- Returns
- current entry removal strategy. Default is RemoveLeastRecentlyUsed.
Definition at line 1146 of file kpixmapcache.cpp.
Sets the maximum size of the cache (in kilobytes).
If cache gets bigger than the limit then some entries are removed (according to removeEntryStrategy()).
Setting the cache limit to 0 disables caching (as all entries will get immediately removed).
Note that the cleanup might not be done immediately, so the cache might temporarily (for a few seconds) grow bigger than the limit.
Definition at line 1127 of file kpixmapcache.cpp.
Sets the removeEntryStrategy used when removing entries.
Definition at line 1151 of file kpixmapcache.cpp.
Sets the timestamp of app-specific cache.
It's saved in the cache file and can later be retrieved using the timestamp() method. By default the timestamp is set to the cache creation time.
Definition at line 1072 of file kpixmapcache.cpp.
Sets whether QPixmapCache (memory caching) should be used in addition to disk cache.
QPixmapCache is used by default.
- Note
- On most systems KPixmapCache can use shared-memory to share cached pixmaps with other applications attached to the same shared pixmap, which means additional memory caching is unnecessary and actually wasteful of memory.
- Warning
- QPixmapCache is shared among the entire process and therefore can cause strange interactions with other instances of KPixmapCache. This may be fixed in the future and should be not relied upon.
Definition at line 1112 of file kpixmapcache.cpp.
Sets whether this cache is valid or not.
(The cache must be enabled in addition for isValid() to return true.
- See also
- isEnabled(),
- setEnabled()).
Most cache functions do not work if the cache is not valid. KPixmapCache assumes the cache is valid as long as its cache files were able to be created (see recreateCacheFiles()) even if the cache is not enabled.
Can be used by subclasses to indicate that cache needs some additional initialization before it can be used (note that KPixmapCache will not handle actually performing this extra initialization).
Definition at line 1060 of file kpixmapcache.cpp.
- Returns
- approximate size of the cache, in kilobytes (1 kilobyte == 1024 bytes)
Definition at line 1103 of file kpixmapcache.cpp.
- Note
- KPixmapCache does not ever change the timestamp, so the application must set the timestamp if it to be used.
- Returns
- Timestamp of the cache, set using the setTimestamp() method.
Definition at line 1066 of file kpixmapcache.cpp.
Whether QPixmapCache should be used to cache pixmaps in memory in addition to caching them on the disk.
NOTE: The design of QPixmapCache means that the entries stored in the cache are shared throughout the entire process, and not just in this particular KPixmapCache. KPixmapCache makes an effort to ensure that entries from other KPixmapCaches do not inadvertently spill over into this one, but is not entirely successful (see discard())
Definition at line 1117 of file kpixmapcache.cpp.
Can be used by subclasses to write custom data into the stream.
This function will be called by KPixmapCache immediately after the image data for a single image has been written to stream. (This function is called once for every single image).
- See also
- loadCustomData
- writeCustomIndexHeader
- Parameters
-
Reimplemented in KIconCache.
Definition at line 1429 of file kpixmapcache.cpp.
Can be used by subclasses to write custom data into cache's header.
This function will be called by KPixmapCache immediately following the index header has being loaded. (This function is called one time only for the entire cache).
- See also
- writeCustomData
- loadCustomIndexHeader
- Parameters
-
Reimplemented in KIconCache.
Definition at line 1044 of file kpixmapcache. | https://api.kde.org/4.x-api/kdelibs-apidocs/kdeui/html/classKPixmapCache.html | CC-MAIN-2019-30 | refinedweb | 1,691 | 58.28 |
NAME | SYNOPSIS | INTERFACE LEVEL | PARAMETERS | DESCRIPTION | RETURN VALUES | EXAMPLES | ATTRIBUTES | SEE ALSO | NOTES
#include <sys/types.h> #include <sys/cred.h> #include <sys/mman.h> #include <sys/ddi.h>int prefixmmap(dev_t dev, off_t off, int prot);
This interface is obsolete. devmap(9E) should be used instead.
Device whose memory is to be mapped.
Offset within device memory at which mapping begins.
A bit field that specifies the protections this page of memory will receive. Possible settings are:
Read access will be granted.
Write access will be granted.
Execute access will be granted.
User-level access will be granted.
All access will be granted.
Future releases of Solaris will provide this function for binary and source compatibility. However, for increased functionality, use devmap(9E) instead. See devmap(9E) for details.
The mmap() entry point is a required entry point for character drivers supporting memory-mapped devices. A memory mapped device has memory that can be mapped into a process's address space. The mmap(2) system call, when applied to a character special file, allows this device memory to be mapped into user space for direct access by the user application.
The mmap() entry point is called as a result of an mmap(2) system call, and also as a result of a page fault. mmap() is called to translate the offset off in device memory to the corresponding physical page frame number.
The mmap() entry point checks if the offset off is within the range of pages exported by the device. For example, a device that has 512 bytes of memory that can be mapped into user space should not support offsets greater than 512. If the offset does not exist, then -1 is returned. If the offset does exist, mmap() returns the value returned by hat_getkpfnum(9F) for the physical page in device memory containing the offset off.
hat_getkpfnum(9F) accepts a kernel virtual address as an argument. A kernel virtual address can be obtained by calling ddi_regs_map_setup(9F) in the driver's attach(9E) routine. The corresponding ddi_regs_map_free(9F) call can be made in the driver's detach(9E) routine. Refer to Example 1 below for more information.
mmap() should only be supported for memory-mapped devices. See the segmap(9E) and ddi_mapdev(9F) reference pages for further information on memory-mapped device drivers.
If a device driver shares data structures with the application, for example through exported kernel memory, and the driver gets recompiled for a 64-bit kernel but the application remains 32-bit, the binary layout of any data structures will be incompatible if they contain longs or pointers. The driver needs to know whether there is a model mismatch between the current thread and the kernel and take necessary action. ddi_mmap_get_model(9F) can be use to get the C Language Type Model which the current thread expects. In combination with ddi_model_convert_from(9F) the driver can determine whether there is a data model mismatch between the current thread and the device driver. The device driver might have to adjust the shape of data structures before exporting them to a user thread which supports a different data model. See ddi_mmap_get_model(9F) for an example.
If the protection and offset are valid for the device, the driver should return the value returned by hat_getkpfnum(9F), for the page at offset off in the device's memory. If not, -1 should be returned.
The following is an example of the mmap() entry point. If offset off is valid, hat_getkpfnum(9F) is called to obtain the page frame number corresponding to this offset in the device's memory. In this example, xsp->regp->csr is a kernel virtual address which maps to device memory. ddi_regs_map_setup(9F) can be used to obtain this address. For example, ddi_regs_map_setup(9F) can be called in the driver's attach(9E) routine. The resulting kernel virtual address is stored in the xxstate structure, which is accessible from the driver's mmap() entry point. See ddi_soft_state(9F). The corresponding ddi_regs_map_free(9F) call can be made in the driver's detach(9E) routine.
struct reg { uint8_t csr; uint8_t data; }; struct xxstate { . . . struct reg *regp . . . }; struct xxstate *xsp; . . . static int xxmmap(dev_t dev, off_t off, int prot) { int instance; struct xxstate *xsp; /* No write access */ if (prot & PROT_WRITE) return (-1); instance = getminor(dev); xsp = ddi_get_soft_state(statep, instance); if (xsp == NULL) return (-1); /* check for a valid offset */ if ( off is invalid ) return (-1); return (hat_getkpfnum (xsp->regp->csr + off)); }
See attributes(5) for a description of the following attributes:
mmap(2), attributes(5), attach(9E), detach(9E), devmap(9E), segmap(9E), ddi_btop(9F), ddi_get_soft_state(9F), ddi_mmap_get_model(9F), ddi_model_convert_from(9F), ddi_regs_map_free(9F), ddi_regs_map_setup(9F), ddi_soft_state(9F), devmap_setup(9F), getminor(9F), hat_getkpfnum(9F)
For some devices, mapping device memory in the driver's attach(9E) routine and unmapping device memory in the driver's detach(9E) routine is a sizeable drain on system resources. This is especially true for devices with a large amount of physical address space.
One alternative is to create a mapping for only the first page of device memory in attach(9E). If the device memory is contiguous, a kernel page frame number may be obtained by calling hat_getkpfnum(9F) with the kernel virtual address of the first page of device memory and adding the desired page offset to the result. The page offset may be obtained by converting the byte offset off to pages. See ddi_btop(9F).
Another alternative is to call ddi_regs_map_setup(9F) and ddi_regs_map_free(9F) in mmap(). These function calls would bracket the call to hat_getkpfnum(9F).
However, note that the above alternatives may not work in all cases. The existence of intermediate nexus devices with memory management unit translation resources that are not locked down may cause unexpected and undefined behavior.
NAME | SYNOPSIS | INTERFACE LEVEL | PARAMETERS | DESCRIPTION | RETURN VALUES | EXAMPLES | ATTRIBUTES | SEE ALSO | NOTES | http://docs.oracle.com/cd/E19683-01/817-3948/6mjgoq5q9/index.html | CC-MAIN-2016-44 | refinedweb | 974 | 56.35 |
The Writer monad represents computations which produce a stream of data in addition to the computed values. It is commonly used by code generators to emit code.
transformers provides both the strict and lazy versions of
WriterT monad transformer. The definition of bind operator
>>= reveals how the Writer monad works.
instance (Monoid w, Monad m) => Monad (WriterT w m) where return a = writer (a, mempty) m >>= k = WriterT $ do (a, w) <- runWriterT m (b, w') <- runWriterT (k a) return (b, w `mappend` w')
runWriterT returns a pair whose second element is the output to accumulate. Because the output value is a
Monoid instance, we can merge two outputs
w and
w' using
mappend and return the combined output.
Here is a simple example of the Writer monad. It accumulates
LogEntrys in a list. (CAUTION: Do not use
WriterT for plain logging in real world applications. It unnecessarily keeps the entire logs in memory. I recommend fast-logger for logging.)
import Control.Monad import Control.Monad.Trans.Writer.Strict data LogEntry = LogEntry { msg::String } deriving (Eq, Show) calc :: Writer [LogEntry] Integer calc = do output "start" let x = sum [1..10000000] output (show x) output "done" return x output :: String -> Writer [LogEntry] () output x = tell [LogEntry x] test = mapM_ print $ execWriter calc
The code looks innocuous, but its performance deteriorates when the accumulated log gets bigger because the
Monoid instance of
[] uses
(++) to append two lists and the concatenations are left-nested.
do { tell [1]; tell [2]; tell [3]; tell[4]; tell [5] } => (((([1] ++ [2]) ++ [3]) ++ [4]) ++ [5])
(++) is known to perform poorly when applications of
(++) are left-nested.
Difference List
One well-known solution is to use the difference list instead of an ordinary list.
DList provides O(1)
append and
snoc operations on lists. Demystifying DList explains how
DList works in details.
The code is almost the same except we replaced
[LogEntry] with
DList LogEntry, but it scales well as the accumulated log gets bigger.
import Data.DList calc :: Writer (DList LogEntry) Integer calc = ... output :: String -> Writer (DList LogEntry) () output x = tell (singleton (LogEntry x)) test = mapM_ print $ toList (execWriter calc)
Endo
Another option is to use
Endo wrapper from
Data.Monoid. It is an endomorphism from type
a to
a.
newtype Endo a = Endo { appEndo :: a -> a } deriving (Generic)
Surprisingly, it is an instance of
Monoid.
mempty is the identity function and
mappend is the composition of two functions.
instance Monoid (Endo a) where mempty = Endo id Endo f `mappend` Endo g = Endo (f . g)
But how can I output a log? We need a function of type
[LogEntry] -> [LogEntry] to make an
Endo value. The trick is to create a section
([LogEntry x]<>) which prepends a log entry to the list.
calc :: Writer (Endo [LogEntry]) Integer calc = ... output :: String -> Writer (Endo [LogEntry]) () output x = tell $ Endo ([LogEntry x]<>) test = mapM_ print $ appEndo (execWriter calc) []
But why does this use of
Endo perform well? To see why, we need to see how the following code is actually evaluated.
do { tell [1]; tell [2]; tell [3]; tell[4]; tell [5] }
is translated to
([1]++) . ([2]++) . ([3]++) . ([4]++) . ([5]++)
This is a composition of functions whose type is
[Int] -> [Int]. We can obtain the final result by applying
[].
([1]++) . ([2]++) . ([3]++) . ([4]++) . ([5]++) $ [] => [1] ++ ([2] ++ ([3] ++ ([4] ++ ([5] ++ []))))
We can see that
(++) operators are right-nested.
This also explains why
DList in the previous section performs well because
DList is just
Endo specialized to lists.
newtype DList a = DL { unDL :: [a] -> [a] } instance Monoid (DList a) where mempty = DL id mappend xs ys = DL (unDL xs . unDL ys)
State Monad
It is possible to implement the Writer monad in terms of the State monad. We can store the accumulated logs in the state and update it by appending a new log.
import Control.Monad.Trans.State import Data.Monoid ((<>)) calc :: State [LogEntry] Integer calc = ... output :: String -> State [LogEntry] () output x = modify (<> [LogEntry x]) test = mapM_ print $ execState calc []
Unfortunately, this version has the same performance issue with the initial version because applications of
(++) are left-nested.
But there is a magical trick that can change this situation.
Backward State Monad
The section “2.8 Variation six: Backwards state” of Philip Wadler’s The essence of functional programming briefly mentions the Backwards state monad (also known as reverse state monad). This is a strange variant of the State monad where the state is propagated backward.
newtype RState s a = RState { runRState :: s -> (a,s) } instance Monad (RState s) where return x = RState $ (,) x RState sf >>= f = RState $ \s -> let (a,s'') = sf s' (b,s') = runRState (f a) s in (b,s'') rget = RState $ \s -> (s,s) rmodify f = RState $ \s -> ((),f s) rput = rmodify . const execRState f s = snd (runRState f s)
In the definition of
>>=, the state
s is passed to the second expression and its result
s' is passed back to the first expression. This seems impossible because two expressions are mutually recursive, but Haskell’s lazy evaluation makes it possible. In the backward state monad,
rget reads the state from the future!
With this in mind, we can implement the Writer monad by prepending the log to the state. Because the state contains all the future logs, we can simply prepend our log to it.
calc :: RState [LogEntry] Integer calc = ... output :: String -> RState [LogEntry] () output x = rmodify ([LogEntry x]<>) test = mapM_ print $ execRState calc []
Applications of
(++) are right-nested because logs are accumulated backward from the end.
Readers who would like to know more about the backward state monads are referred to:
- Mindfuck: The Reverse State Monad shows how to compute the fibonacci number using the reverse state monad.
- tardis package - a combination of both a forwards and a backwards state transformer. | http://kseo.github.io/posts/2017-01-21-writer-monad.html | CC-MAIN-2017-17 | refinedweb | 954 | 63.19 |
Hey all,
I've tried not to ask questions for like a week or two but, I've got the Introduction to Java Programming (sixth edition) and I'm doing the Exercise 7.2 'The Fan Class' where I am to; Design a class named Fan to represent a fan. They have the UML Diagram of:
Fan +SLOW = 1
+MEDIUM = 2
+FAST = 3
-speed: int The speed of this fan (default 1).
-on: Boolean Indicates whether the fan is on (default false).
-radius: double The radius of this fan (default 5).
-color: String The color of this fan (default white).
+Fan() Constructs a fan with default values.
+getSpeed(): int Returns the speed of this fan.
+setSpeed(speed: int): void Sets a new speed for this fan.
+isOn(): Boolean Returns true if this fan is on.
+setOn(on: boolean): void Sets this fan on to true or false.
+getRadius(): double Returns the radius of this fan.
+setRadius(radius: double): void Sets a new radius for this fan.
+getColor(): String Returns the color of this fan.
+setColor(color: String): void Sets a new color for this fan.
+toString(): String Returns a string representation for this fan.
So far, I've got this... I know that I'm supposed to be utilizing these other variables and constructors etc. but I don't know how or why. HELP!?! (lol)
public class Fan { //main method public static void main(String[] args) //variables slow = 1; medium = 2; fast = 3; boolean on = false; int speed = 1; double radius = 5; String color = blue; //make my fan Fan() { } //fan's radius Fan(double radius = 5) { } //get the colors public boolean getColorValue() { if(getColor() == Color.blue) { //something goes here return true; } else { //something goes here return false; } //The to String String toString(); { String result; if(on == true) { String speedString; if(speed == slow) speedString = "slow" else if (speed == medium) speedString = "medium" else if (speed == fast) speedString = "fast" result = "Speed = " + speedString; + "Color = " + color + "Radius = " + radius; } else result = "Color = " + Color + "Radius = " + radius + "fan is off"' { } return result; System.out.println(fan1.toString()); System.out.println(fan2.toStrint()); | https://www.daniweb.com/programming/software-development/threads/94144/help-with-exercise-7-2 | CC-MAIN-2017-51 | refinedweb | 344 | 73.07 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.