The server, like the client, is a state machine. #include "Server.hpp" The server, like the client, is a state machine. It doesn't present its state explicitly, however, but only as two implicit states: map loaded ("normal, running") and map unloaded ("idle"). As with the client, having a true "idle" state (rather than expressing it with a NULL ServerT instance pointer) has several advantages: a) we can gracefully terminate pending network connections (e.g. resend reliable data in the client's zombie state), and b) the server can receive and process connectionless network packets, and is thus available for administration via rcon commands. The constructor. A console function that stores the given command string until the server "thinks" next; the RunMapCmdsFromConsole() method then runs the commands in the context of the current map/entity script.
https://api.cafu.de/c++/classServerT.html
Visual C++ Custom Debug Monitor
Posted by Daniel Chirca on May 4th, 2000
Environment: Visual C++ 6 (SP3)

I have checked the existing solutions/workarounds (like MsgTracer, TraceWindow, ...), but none seemed flexible enough and easy to use, so I decided to implement my own solution. I have created a small utility, named Debug.exe, which is mainly a message monitor window, where all the debugging messages sent from your application are displayed at runtime. (You can download the full source code here.)

In order to use this utility, you need to follow these easy steps:
- Add CDebug.cpp and CDebug.h to your Visual C++ project.
- Declare a global instance of the CDebug class:
  CDebug Debug;
- Wherever you need to send (debug) messages in your source code, use one of the following functions:
  Debug.printf0(/* message */)
  Debug.printf1(/* message */)
  Debug.printf2(/* message */)
  Debug.printf3(/* message */)
- Run Debug.exe and customize, through "Options", how you want to handle the arriving messages. (One nice idea might be to add the Debug.exe utility to the "Tools" IDE menu.)
- Run your Visual C++ application.

The last digit (0, 1, 2 or 3) is the level of the message (you can customize which levels you want to intercept in the Debug window). The syntax/arguments are the same as for the printf function; it will also accept a CString as the first argument, in addition to LPTSTR and LPCTSTR.

Example Usage

//YourApp.cpp implementation file
#ifndef CDEBUG
#include "CDebug.h"
#endif
extern CDebug Debug;
//....your code...
//...A function that uses the Debug facility...
//At runtime, the Debug.exe utility should be running in order
//to catch these messages...
BOOL CWOA_GridCtrl::IsRunMode()
{
    BOOL bIsRunMode;
    bIsRunMode = AmbientUserMode();
    Debug.printf1("IsRunMode returns %d", bIsRunMode);
    return bIsRunMode;
}

Re: one problem (Posted by Legacy on 10/08/2003, originally posted by: Mark)
You can easily wrap this call in a macro that will null out if CDEBUG is not defined, so it will not load down a release build.
Mark

One little bug (Posted by Legacy on 10/21/2001, originally posted by: Feng.Zh)
I have downloaded the program and run it on my computer, and found that it is terminated by an error. The problem is in some code in the function OnCopyData(...):
// char szHWND[16];
// sprintf(szHWND, "%x", (LONG) pWnd->m_hWnd);
// SetDlgItemText(IDC_EDIT_HWND, szHWND);
The size of szHWND is too small. I'm interested in VC; would you like to exchange some ideas with me? Yours.

One problem (Posted by Legacy on 04/25/2001, originally posted by: Yevgeniy Marchenko)
There is one big disadvantage in the suggested approach: the Debug object and all calls to its monitor function will be present in both release and debug versions.
Sincerely, Yevgeniy.
https://www.codeguru.com/cpp/v-s/debug/article.php/c1265/Visual-C-Custom-Debug-Monitor.htm
Hi, the UserDict, UserList, and UserString classes in the collections module ought to provide __repr__ implementations along the following lines:

def __repr__(self):
    return '%s(%r)' % (type(self).__name__, self.data)

so that they give useful immediate information in the REPL to the effect that they are not mere dicts/lists/strs but user-defined container types.

dpk (David P. Kendal) · Nassauische Str. 36, 10717 DE ·

The reason we had no idea how cats worked was because, since Newton, we had proceeded by the simple principle that essentially, to see how things work, we took them apart. If you try and take a cat apart to see how it works, the first thing you have on your hands is a non-working cat. -- Douglas Adams

_______________________________________________
Python-ideas mailing list
Python-ideas@python.org
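The effect of the proposed __repr__ is easy to demonstrate in a subclass today (the Stack name below is just an example):

```python
from collections import UserList

# A user-defined container with the __repr__ proposed above; without it,
# the repr shows only the underlying data and hides the subclass name.
class Stack(UserList):
    def __repr__(self):
        return '%s(%r)' % (type(self).__name__, self.data)

print(repr(Stack([1, 2, 3])))  # Stack([1, 2, 3])
```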
https://www.mail-archive.com/python-ideas@python.org/msg00904.html
I have a class which has a method that reads a text file line by line and then puts each line into an ArrayList:

public class ReadFile {
    public List<String> showListOfCourses() throws IOException {
        String filename = "countriesInEurope.txt";
        FileReader fr = new FileReader(filename);
        BufferedReader br = new BufferedReader(fr);
        List<String> courseList = new ArrayList<>();
        while (true) {
            String line = br.readLine();
            if (line == null) {
                break;
            }
            courseList.add(line);
        }
        br.close();
        return courseList;
    }
}

As the filename countriesInEurope.txt is hardcoded in your implementation, this is not testable. A good way to make it testable would be to refactor the method to take a Reader as a parameter:

public List<String> showListOfCourses(Reader reader) throws IOException {
    BufferedReader br = new BufferedReader(reader);
    List<String> courseList = new ArrayList<>();
    // ...
    return courseList;
}

Your main implementation could pass a FileReader to this. When testing, on the other hand, your test method could pass a StringReader instance, which is easy to create with a sample content as a simple string; no temporary files needed. For example:

@Test
public void showListOfCourses_should_read_apple_orange_banana() {
    Reader reader = new StringReader("apple\norange\nbanana");
    assertEquals(Arrays.asList("apple", "orange", "banana"),
                 showListOfCourses(reader));
}

By the way, the name of the method is not good, as it doesn't "show" anything; readListOfCourses would make more sense.
https://codedump.io/share/uLerTjoDHjZE/1/suggestions-on-how-to-test-a-bufferedreader-and-filereader-that-takes-in-strings-and-puts-them-into-an-arraylist
Learn more about style transfer.

import Algorithmia

input = {"images": ["_IMAGE_URL_"],
         "savePaths": ["_OUTPUT_URL_"],
         "filterName": "space_pizza"}
client = Algorithmia.client('_API_KEY_')
algo = client.algo('deeplearning/DeepFilter/0.5.3')
print algo.pipe(input)

Sample output:

{
  "savePaths": [
    "_OUTPUT_URL_"
  ]
}

Deep Filter is an implementation of Texture Networks: Feed-forward Synthesis of Textures and Stylized Images, used to create interesting and creative photo filters. Learn more about Deep Filter with our guide to getting started with style transfer.

We've open sourced our training AMI to expose the pipeline needed to train your own filters. This is the same pipeline we used with Deep Filter. Training your own model takes approximately 26 hours end-to-end and will cost about $25 per filter using an Amazon EC2 P2 instance. Once you're finished with the tutorial, you'll have a custom style transfer filter to use in your app.
https://demos.algorithmia.com/deep-style/
Details
- Type: Improvement
- Status: Resolved
- Priority: Minor
- Resolution: Won't Fix
- Fix Version/s: None
- Component/s: None
- Labels: None

Activity

"The only extra piece of code would be creating and tearing down symlinks when real sstables are created and deleted."

That doesn't really buy us anything for the "I want to merge in some sstables from an external source" problem though; it just changes the constraint from "distinct filenames" to "distinct symlink names."

Created CASSANDRA-6719 to supersede this.

Notes so far:
- sstable filenames are controlled by the io/sstable/Descriptor class, which encapsulates a few parameters including "generation", the increasing integer in question.
- dropping generation in favor of a uuid seems questionable, given that generation is used by a wide variety of clients in the codebase. So the most likely approach is uuid + generation side by side.
- using the host id as the uuid is easy conceptually, but will violate layering, because code in io will start to depend on db and/or service. Plus there is a potential bootstrapping problem where system sstables need to be initialized early on during boot, and it's not clear whether the unique host id is available early enough to feed into system sstable descriptors.
- random uuids are also tricky, because sstable names will no longer be discoverable without directory lookups. Some code (particularly in unit tests) leans on the ability to synthesize sstable names without touching the filesystem. It's possible to persist these uuids in one of the system tables, but it will have to be a local table, and, regardless, changing the system schema can make this a breaking change.

I haven't yet found a cost-effective fix that would involve actually modifying the existing naming scheme. The latest idea I have is to create a directory that will hold symlinks to real sstables (symlinks are available in Java 7).
Symlink names will contain the UUIDs. The only extra piece of code would be creating and tearing down symlinks when real sstables are created and deleted. End users could then access sstables through this symlink directory whenever doing related maintenance. The last piece would be making sure that appropriate clients, such as the compactor, can consume sstables with and without UUIDs. I'll work on this some more tomorrow, but it'll probably spill until next week (or later).

Is this still needed? Naming in 2.0+ is still incremental as far as I can tell. I'd like to work on this fix while I'm learning the codebase.

+1 here; global uniqueness for sstable names would make many copy-the-sstables style maintenance operations easier, as you wouldn't have to manually resolve the namespace conflict. Just now I saw someone in #cassandra who was setting up a cluster with a copy of data get confused by non-unique filenames being overwritten on his new cluster. The only downside seems to be longer sstable file names.

As long as the host is still willing to read filenames without its own uuid, sure. Alternatively, since we'll need a host->uuid mapping for counters, we can put that uuid in the filename along with a serial integer (make it a long and we should be ok, right?).

OK, but symlinks are much easier to make unique, because they won't affect all that code that expects to find sstables under well-known names (regular names still being available in regular sstable storage). The fact that they're symlinks allows decoupling the problem from internal naming requirements.
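The "uuid + generation side by side" idea from the notes above can be sketched as a naming scheme. This is an invented illustration, not Cassandra's actual Descriptor code; the layout and helper name are hypothetical:

```java
import java.util.UUID;

// Hypothetical sketch of "uuid + generation side by side": keep the familiar
// incrementing generation, but add a per-host UUID so that sstables copied in
// from another cluster can no longer collide by name.
public class SSTableName {
    static String name(UUID hostId, String keyspace, String table, long generation) {
        return keyspace + "-" + table + "-" + hostId + "-" + generation + "-Data.db";
    }

    public static void main(String[] args) {
        UUID host = UUID.fromString("00000000-0000-0000-0000-000000000001");
        System.out.println(name(host, "ks", "cf", 42));
        // ks-cf-00000000-0000-0000-0000-000000000001-42-Data.db
    }
}
```

The tradeoff discussed in the thread applies directly: names become longer, and they can no longer be synthesized without knowing the host id.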
https://issues.apache.org/jira/browse/CASSANDRA-1983
Is there an application that will silently update a local folder containing all the files in a web directory? Or even better, if there is a .zip file, it will extract the files into the specified folder?

I have a simple desktop web application (HTML/JavaScript using jQuery) in a local folder, and whenever it is updated, I would like the computers in our office to automatically install the update by downloading something like 'latest.zip' (through HTTP download, not FTP download) and let the updater copy the files into the application directory. I've been searching, but all I see are file synchronization tools, which are bidirectional, and programs that are too complicated to use or too heavy. My goal is to be able to silently update the application directory with very minimal user interaction. But since JavaScript can't write to hard drives, it needs to be done by another application (an exe file) which will run periodically. Maybe there is something like a generic updater application in which you can put a web address of the files and the local directory where the files should be copied?

Since the question mentions an .exe file, I assume you're on Windows. If so, you can do this with PowerShell.
# Set some variables to hold the source, name and destination of the update file
$UpdateUrl = ""
$UpdateFile = "update.zip"
$Destination = "C:\Path\to\application\"
$TempDir = $Env:Temp + "\"

# Download the update file to a temporary directory
$Client = New-Object System.Net.WebClient
$Client.DownloadFile($UpdateUrl + $UpdateFile, $TempDir + $UpdateFile)

# Calculate MD5 hash of the downloaded update file
$MD5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$Hash1 = [System.BitConverter]::ToString(
    $MD5.ComputeHash([System.IO.File]::ReadAllBytes($TempDir + $UpdateFile))
)

# If an old update file exists at the destination, calculate its MD5 hash as well
If (Test-Path ($Destination + $UpdateFile)) {
    $Hash2 = [System.BitConverter]::ToString(
        $MD5.ComputeHash([System.IO.File]::ReadAllBytes($Destination + $UpdateFile))
    )
} Else {
    $Hash2 = ""
}

# Compare the MD5 hashes
# If they're not equal, then copy the new update file to the destination and extract its contents
If ($Hash1 -ne $Hash2) {
    Copy-Item ($TempDir + $UpdateFile) $Destination
    $Shell = New-Object -ComObject Shell.Application
    $Shell.NameSpace($Destination).CopyHere(
        $Shell.NameSpace($Destination + $UpdateFile).Items(), 20
    )
}

# Delete the downloaded update file
Remove-Item ($TempDir + $UpdateFile)

The first block declares some variables that contain the name, source and destination of the update file, as well as a temporary directory to hold the downloaded update file. Don't forget the trailing slashes when you change them to your own paths. The next block uses the WebClient object to download the file to the temporary directory. Next, the script calculates the MD5 hash of the downloaded file. If you want to calculate a different hash, like SHA-1, check out the available classes in the System.Security.Cryptography namespace. Then, the script checks for the existence of an old update file at the destination folder, and calculates its MD5 hash. Next, the two hashes are compared.
If they're not equal, it means there has been an update. The script then uses the Windows Shell object to copy the update file to the destination folder and extract its contents. The number 20 is the sum of two options of the CopyHere() function: 4 (which suppresses the progress dialog) and 16 (which answers "Yes to All" to any dialogs, thus automatically overwriting existing files). Finally, the last line deletes the downloaded update file from the temp directory. For more information, see the documentation for the classes, methods and cmdlets used.

The trouble I see is that your client computers don't have access to your local folder, so there's no way of them knowing it's time to update. If it's small, you could make it automatically download a new copy every so often anyway, or perhaps make a shell/batch script which they use to launch it, which checks for and downloads a new version each time they open up.

In case it's of use, if you're on Windows have a look at the RealtimeSync application which comes with FreeFileSync. You set it to monitor a folder for changes and can then have it execute a command when changes are discovered, such as a batch script to upload them for the other machines to download.
http://superuser.com/questions/553343/generic-automated-updating-of-desktop-applications
4.4 Constructed Types

A generic type declaration, by itself, denotes an unbound generic type that is used as a "blueprint" to form many different types, by way of applying type arguments. The type arguments are written within angle brackets (< and >) immediately following the name of the generic type. A type that includes at least one type argument is called a constructed type. A constructed type can be used in most places in the language in which a type name can appear. An unbound generic type can be used only within a typeof-expression (§7.6.11).

Constructed types can also be used in expressions as simple names (§7.6.2) or when accessing a member (§7.6.4). For example, it is possible for generic and nongeneric classes with the same name to coexist in the same program:

namespace Widgets
{
    class Queue {...}
    class Queue<TElement> {...}
}

namespace MyApplication
{
    using Widgets;
    class X
    {
        Queue q1;        // Nongeneric Widgets.Queue
        Queue<int> q2;   // Generic Widgets.Queue
    }
}

Within a generic class declaration, a nested type can be referenced without type arguments; the type parameters of the containing type are implied:

class Outer<T>
{
    public class Inner {...}
    public Inner i;      // Type of i is Outer<T>.Inner
}

In unsafe code, a constructed type cannot be used as an unmanaged-type (§18.2).

4.4.1 Type Arguments

Each argument in a type argument list is simply a type.

type-argument-list:
    < type-arguments >
type-arguments:
    type-argument
    type-arguments , type-argument
type-argument:
    type

In unsafe code (§18), a type-argument may not be a pointer type. Each type argument must satisfy any constraints on the corresponding type parameter (§10.1.5).

4.4.2 Open and Closed Types

All types can be classified as either open types or closed types. An open type is a type that involves type parameters. More specifically:
- A type parameter defines an open type.
- An array type is an open type if and only if its element type is an open type.
- A constructed type is an open type if and only if one or more of its type arguments is an open type. A constructed nested type is an open type if and only if one or more of its type arguments or the type arguments of its containing type(s) is an open type.
A closed type is a type that is not an open type. At runtime, all of the code within a generic type declaration is executed in the context of a closed constructed type that was created by applying type arguments to the generic declaration. Each type parameter within the generic type is bound to a particular runtime type. The runtime processing of all statements and expressions always occurs with closed types, and open types occur only during compile-time processing. Each closed constructed type has its own set of static variables, which are not shared with any other closed constructed types. Since an open type does not exist at runtime, there are no static variables associated with an open type. Two closed constructed types are the same type if they are constructed from the same unbound generic type, and their corresponding type arguments are the same type. 4.4.3 Bound and Unbound Types The term unbound type refers to a nongeneric type or an unbound generic type. The term bound type refers to a nongeneric type or a constructed type. An unbound type refers to the entity declared by a type declaration. An unbound generic type is not itself a type, and it cannot be used as the type of a variable, argument, or return value, or as a base type. The only construct in which an unbound generic type can be referenced is the typeof expression (§7.6.11). 4.4.4 Satisfying Constraints Whenever a constructed type or generic method is referenced, the supplied type arguments are checked against the type parameter constraints declared on the generic type or method (§10.1.5). For each where clause, the type argument A that corresponds to the named type parameter is checked against each constraint as follows: - If the constraint is a class type, an interface type, or a type parameter, let C represent that constraint with the supplied type arguments substituted for any type parameters that appear in the constraint. 
To satisfy the constraint, it must be the case that type A is convertible to type C by one of the following: - An identity conversion (§6.1.1). - An implicit reference conversion (§6.1.6). - A boxing conversion (§6.1.7), provided that type A is a non-nullable value type. - An implicit reference, boxing, or type parameter conversion from a type parameter A to C. - If the constraint is the reference type constraint (class), the type A must satisfy one of the following: - A is an interface type, class type, delegate type, or array type. Both System.ValueType and System.Enum are reference types that satisfy this constraint. - A is a type parameter that is known to be a reference type (§10.1.5). - If the constraint is the value type constraint (struct), the type A must satisfy one of the following: - A is a struct type or enum type, but not a nullable type. Both System.ValueType and System.Enum are reference types that do not satisfy this constraint. - A is a type parameter having the value type constraint (§10.1.5). - If the constraint is the constructor constraint new(), the type A must not be abstract and must have a public parameterless constructor. This is satisfied if one of the following is true: - A is a value type, since all value types have a public default constructor (§4.1.2). - A is a type parameter having the constructor constraint (§10.1.5). - A is a type parameter having the value type constraint (§10.1.5). - A is a class that is not abstract and contains an explicitly declared public constructor with no parameters. - A is not abstract and has a default constructor (§10.11.4). A compile-time error occurs if one or more of a type parameter’s constraints are not satisfied by the given type arguments. Since type parameters are not inherited, constraints are never inherited either. In the example below, D needs to specify the constraint on its type parameter T so that T satisfies the constraint imposed by the base class B<T>. 
In contrast, class E need not specify a constraint, because List<T> implements IEnumerable for any T.

class B<T> where T: IEnumerable {...}
class D<T>: B<T> where T: IEnumerable {...}
class E<T>: B<List<T>> {...}
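For readers more familiar with other languages, the constraint mechanics in the example above have a rough analogue in TypeScript, where "where T : IEnumerable" corresponds to "T extends Iterable<unknown>". This is only an analogy sketch, not C# itself:

```typescript
// Rough analogue of the C# example: D must restate the constraint imposed
// by the base it instantiates with its own type parameter, while E needs
// no constraint because T[] is iterable for every T.
interface Base<T extends Iterable<unknown>> { value?: T; }

// Like class D<T> : B<T> where T : IEnumerable
class D<T extends Iterable<unknown>> implements Base<T> { value?: T; }

// Like class E<T> : B<List<T>>: no constraint needed on T itself
class E<T> implements Base<T[]> { value?: T[]; }

const d = new D<number[]>();
d.value = [1, 2, 3];

const e = new E<number>();
e.value = [4, 5];

console.log(d.value.length + e.value.length);  // 5
```

As in C#, dropping the `extends Iterable<unknown>` clause from D is a compile-time error, because D's T flows into a position that requires the constraint.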
http://www.informit.com/articles/article.aspx?p=1648574&seqNum=4
-- | A class for tree types and representations of selections on tree types,
--   as well as functions for converting between text and tree selections.
module Language.GroteTrap.Trees (

    -- * Paths and navigation
    Path, root,
    Nav, up, into, down, left, right, sibling,

    -- * Tree types
    Tree(..), followM, follow, depth, selectDepth, flatten,

    -- * Tree selections
    Selectable(..), TreeSelection, select, allSelections,
    selectionToRange, rangeToSelection, posToPath, isValidRange,

    -- * Suggesting and fixing
    suggest, repair

  ) where

import Language.GroteTrap.Range

import Data.List (sortBy, findIndex)
import Data.Maybe (isJust)
import Control.Monad.Error ()

------------------------------------
-- Paths and navigation
------------------------------------

-- | A path in a tree. Each integer denotes the selection of a child;
--   these indices are 0-relative.
type Path = [Int]

-- | @root@ is the empty path.
root :: Path
root = []

-- | Navigation transforms one path to another.
type Nav = Path -> Path

-- | Move up to parent node. Moving up from root has no effect.
up :: Nav
up []   = []
up path = init path

-- | Move down into the nth child node.
into :: Int -> Nav
into i = (++ [i])

-- | Move down into first child node.
down :: Nav
down = into 0

-- | Move left one sibling.
left :: Nav
left = sibling (-1)

-- | Move right one sibling.
right :: Nav
right = sibling 1

-- | Move @n@ siblings (@n@ can be negative).
sibling :: Int -> Nav
sibling 0 p = p  -- because sibling 0 [] == []
sibling d p = if newindex < 0 then p else into newindex parent
  where
    index    = last p
    newindex = index + d
    parent   = up p

------------------------------------
-- Parents and children
------------------------------------

-- | Tree types.
class Tree p where
  -- | Yields this tree's subtrees.
  children :: p -> [p]

-- | Depth-first, pre-order traversal.
flatten :: Tree t => t -> [t]
flatten t = t : concatMap flatten (children t)

-- | Follows a path in a tree, returning the result in a monad.
followM :: (Monad m, Tree t) => t -> Path -> m t
followM parent []     = return parent
followM parent (t:ts) = do c <- childM parent t
                           followM c ts

-- | Moves down into a child.
childM :: (Monad m, Tree p) => p -> Int -> m p
childM t i = if i >= 0 && i < length cs
               then return (cs !! i)
               else fail ("child " ++ show i ++ " does not exist")
  where cs = children t

-- | Follows a path in a tree.
follow :: Tree t => t -> Path -> t
follow t = fromError . followM t

fromError :: Either String a -> a
fromError = either error id

{-
indexIn :: (Eq p, Parent p) => p -> p -> Maybe Int
indexIn child = elemIndex child . children
-}

-- | Yields the depth of the tree.
depth :: Tree t => t -> Int
depth t | null depths = 1
        | otherwise   = 1 + maximum (map depth $ children t)
  where depths = map depth $ children t

-- | Yields all descendants at the specified depth.
selectDepth :: Tree t => Int -> t -> [t]
selectDepth 0 t = [t]
selectDepth d t = concatMap (selectDepth (d - 1)) (children t)

------------------------------------
-- Tree selections
------------------------------------

-- | Selection in a tree. The path indicates the left side of the selection;
--   the int tells how many siblings to the right are included in the selection.
type TreeSelection = (Path, Int)

-- | Selectable trees.
class Tree t => Selectable t where
  -- | Tells whether complete subranges of children may be selected in this
  --   tree. If not, valid TreeSelections in this tree always have a second
  --   element @0@.
  allowSubranges :: t -> Bool

-- | Enumerates all possible selections of a tree.
allSelections :: Selectable a => a -> [TreeSelection]
allSelections p = ([], 0) : subranges ++ recurse
  where
    subranges
      | allowSubranges p = [ ([from], to - from)
                           | from <- [0 .. length cs - 2]
                           , to   <- [from + 1 .. length cs - 1]
                           , from > 0 || to < length cs - 1
                           ]
      | otherwise        = []
    cs = children p
    recurse = concat $ zipWith label cs [0 ..]
    label c i = map (rt i) (allSelections c)
    rt i (path, offset) = (i : path, offset)

-- | Selects part of a tree.
select :: (Monad m, Tree t) => t -> TreeSelection -> m [t]
select t (path, offset) =
  (sequence . map (followM t) . take (offset + 1) . iterate right) path

-- | Computes the range of a valid selection.
selectionToRange :: (Tree a, KnowsPosition a) => a -> TreeSelection -> Range
selectionToRange parent (path, offset) = (from, to)
  where
    from = begin $ follow parent path
    to   = end   $ follow parent (sibling offset path)

-- | Converts a specified range to a corresponding selection and returns it
--   in a monad.
rangeToSelection :: (Tree a, KnowsPosition a, Monad m) => a -> Range -> m TreeSelection
rangeToSelection p ran@(b, e)
    -- If the range matches that of the root, we're done.
  | range p == ran = return ([], 0)
  | otherwise =
      -- Find the children whose ranges contain b and e.
      case ( findIndex (\c -> b `inRange` range c) cs
           , findIndex (\c -> e `inRange` range c) cs) of
        (Just l, Just r) ->
          if l == r
            -- b and e are contained by the same child!
            -- Recurse into child...
            then rangeToSelection (cs !! l) ran
                   -- ... and prepend child index, of course.
                   >>= (\(path, offset) -> return (l : path, offset))
            else if begin (cs !! l) == b && end (cs !! r) == e
              -- b is the beginning of l, and e is the end of r:
              -- a selection of a range of children.
              -- Note that r - l > 0; else it would've been caught by the
              -- previous test. This also means that there are many ways to
              -- select a single node: either select it directly, or select
              -- all its children.
              then return ([l], r - l)
              -- All other cases are bad.
              else fail "text selection does not have corresponding tree selection"
        -- Either position is not contained within any child. Can't be valid.
        _ -> fail "text selection does not have corresponding tree selection"
  where cs = children p

-- | Returns the path to the deepest descendant whose range contains the
--   specified position.
posToPath :: (Tree a, KnowsPosition a) => a -> Pos -> Path
posToPath p pos =
  case break (\c -> pos `inRange` range c) (children p) of
    (_,  [])  -> []
    (no, c:_) -> length no : posToPath c pos

-- | Tells whether the text selection corresponds to a tree selection.
isValidRange :: (KnowsPosition a, Selectable a) => a -> Range -> Bool
isValidRange p = isJust . rangeToSelection p

------------------------------------
-- Suggesting and fixing
------------------------------------

-- | Yields all possible selections, ordered by distance to the specified
--   range, closest first.
suggest :: (Selectable a, KnowsPosition a) => a -> Range -> [TreeSelection]
suggest p r = sortBy distance $ allSelections p
  where
    distance s1 s2 = (selectionToRange p s1 `distRange` r)
            `compare` (selectionToRange p s2 `distRange` r)

-- | Takes @suggest@'s first suggestion and yields its range.
repair :: (KnowsPosition a, Selectable a) => a -> Range -> Range
repair p = selectionToRange p . head . suggest p
http://hackage.haskell.org/package/GroteTrap-0.3/docs/src/Language-GroteTrap-Trees.html
Overview When we introduced Work Folders in Windows Server 2012 R2, we included support for PCs running Windows 8.1 and Windows RT 8.1. However, we knew that we needed to continue releasing support for other clients, and the number one request was to support the large number of enterprise deployments of Windows 7. We heard the feedback and we are excited to announce that we have just released the packages of Work Folders for Windows 7 on the Download Center! There are 2 packages: This blog post will focus specifically on the differences between Work Folders on Windows 7 and Windows 8.1 as well as deployment considerations. You can find more general information on Work Folders in the Work Folders Overview. What's the difference between the Windows 7 and Windows 8.1 releases? Windows 7 is still our most widely deployed operating system, especially in the enterprise, which is the group of customers who have been most interested in Work Folders support on Windows 7. So we created this release focusing on our enterprise customers. Supported Windows Editions Given the enterprise focus, the Work Folders for Windows 7 package can be installed only on PCs running the following editions of Windows 7: - Windows 7 Professional - Windows 7 Enterprise - Windows 7 Ultimate This package can be installed only on these editions of Windows 7; no other operating system is supported by this package. The package also requires Windows 7 Service Pack 1. For home users with Windows 7 PCs, we recommend upgrading to Windows 8.1. Setup To set up Work Folders on Windows 7, the client PC must be joined to your organization's domain. If not, Setup will fail with the following error: Policy enforcement Work Folders provides two device policies that administrators can control.
The policies are enforced on Windows 8.1 clients before data sync is allowed:
- Encrypt Work Folders: all the Work Folders data on a user's PC will be encrypted using the Windows 8.1 Selective Wipe technology.
- Automatically lock screen, and require a password: applies the following three policies to a user's PC:
  - Password minimum length is 6 characters
  - Device idle auto lock is 15 minutes or less
  - Logon retry is set to 10 or less

The policy settings are not configurable, and they are enforced on devices running Windows 8.1 through the EAS Engine. Work Folders on Windows 7 can't enforce the lock screen and password policy because the EAS Engine is missing from that operating system. This can easily be mitigated by using Group Policy to enforce password policies on your domain-joined PCs. Since Work Folders on Windows 7 is supported only on domain-joined PCs, you (as the admin) still have control over the password policies of all your Work Folders users. You should continue using Group Policy to manage password policies for all the domain-joined PCs. For PCs and devices that aren't joined to a domain (Windows 8.1 devices only), Work Folders will enforce its password policy as set on each sync share. To do so, you'll need to run the Set-SyncShare cmdlet to add the domain in which all of your Windows 7 PC computer accounts are located to a domain-exclusion list. We describe how to do that in the Server configurations section below. If you use the Work Folders password policy but do not configure the excluded domain list on the server, the user will see the following error during Work Folders setup: Encryption is different on Windows 7, as the Windows 8.1 Work Folders encryption mechanism (Selective Wipe) is not available. On Windows 7, the files in Work Folders are encrypted using EFS, which does not have remote wipe capability.
Status notification area of the taskbar

On Windows 8.1 clients, users can view the sync status in the File Explorer status bar, and are notified of sync issues through the Action Center. On Windows 7, Work Folders can't integrate into Windows Explorer and the Action Center, so we added a Work Folders icon to the notification area of the taskbar. The Work Folders taskbar icon shows sync status, and also offers a convenient menu option to open Work Folders in Windows Explorer. By default the icon only shows notifications and is not present on the taskbar. A user can choose to always show the icon by opening Control Panel, searching for "notification", and then using the Notification Area Icons Control Panel item, as shown below.

Server configuration

As mentioned above in the Policy enforcement section, if the administrator wants to enforce Work Folders password policies on Windows 7 PCs, the computer accounts must be in an excluded domain list. An administrator can configure the excluded domain list by using the following cmdlet:

    Set-SyncShare <share name> -PasswordAutolockExcludeDomain <domain list>

For example, you can use the following cmdlet to exempt all computer accounts (this doesn't apply to user accounts) of the contoso.com domain from the Work Folders password policy for the FinShare sync share:

    Set-SyncShare FinShare -PasswordAutolockExcludeDomain "Contoso.com"

In this example, PCs in the contoso.com domain (running Windows 7 or Windows 8.1) receive password policies from Group Policy, not from Work Folders, because the domain is excluded from the Work Folders PasswordAutolock policy. Windows 8.1 PCs that aren't joined to the domain receive Work Folders password policies, if set on the sync share, and not from Group Policy, because Group Policy applies only to domain-joined PCs.

Each user can be given permission to sync with a single sync share, though they can have a mix of Windows 8.1 and Windows 7 PCs that sync with this share.
Upgrade or migration

When it is time to upgrade or migrate a Windows 7 PC to a newer version, the expected behavior is as follows:

- Windows 7 -> Windows 8: Sync stops, and the Work Folders Control Panel item shows "Can't use Work Folders on this version of Windows", since there is no Work Folders support on Windows 8. Ideally, the user would install the Windows 8.1 update, and then set up Work Folders again.
- Windows 7 -> Windows 8 -> Windows 8.1: The user needs to set up Work Folders again. If data is migrated, see the Known issues section of this document.
- Windows 7 -> Windows 8.1: The user needs to set up Work Folders again. If data is migrated, see the Known issues section of this document.
- Windows 7 -> Windows 8.1 using the User State Migration Toolkit (USMT); the expected user experience is:
  - The Work Folders partnership configuration is migrated.
  - Work Folders data is not migrated (that is, no files that have yet to be synced are migrated to the new client).
  - Work Folders is shown in File Explorer under Favorites, but isn't listed under "This PC" as is the case when setting up the Work Folders partnership on Windows 8.1.
  - The Work Folders configuration is migrated, and files are synced from the sync server after the user signs on.
  - To ensure sync continues to work, make sure you get the latest QFE for Windows 7 Work Folders here:

Known issues

- If the user upgrades from Windows 7 to Windows 8.1 and the data is migrated without the partnership information, and the local folder storing Work Folders (by default, C:\Users\<username>\Work Folders) was encrypted on Windows 7, the same path can't be used again on Windows 8.1. This is because of the different encryption mechanisms used on Windows 7 and Windows 8.1. There are two workarounds:
  - The user can open File Explorer in Windows 8.1, right-click the folder storing Work Folders, and then click Properties.
    Click Advanced, and then clear the "Encrypt contents to secure data" checkbox. Click OK, and then click "Apply changes to this folder, subfolders, and files".
  - The user can choose a different path for Work Folders, and optionally delete the old folder. The user must make sure all the content has synced to the server before removing the old Work Folders path.
- If your environment requires Active Directory Federation Services (AD FS) and uses forms-based authentication, the client PCs must use Internet Explorer 9, 10 or 11. There is an issue with Internet Explorer 8, where the user can't authenticate against the server.
- If your environment uses IPsec, see Knowledge Base article 2665206. Without this hotfix, the Work Folders client might experience slow sync performance in some environments that use IPsec.
- If you are configuring Work Folders by using Group Policy, the Work Folders Group Policy template is included with Windows Server 2012 R2. Although the description text indicates that it only applies to Windows 8.1 PCs, the policy settings can also configure Windows 7 PCs that have Work Folders installed.
- On Windows 7, the Work Folders shortcut is added to the user's Favorites folder in Windows Explorer. If the Favorites folder is redirected to a network share, the shortcut for Work Folders will not be present. This is because the Work Folders path is local to a client machine, so the shortcut may not have any meaning on other client machines when presented through a network share.
- If the user migrates from Windows 7 to Windows 8.1 using USMT, and chooses to migrate the settings (which includes the user partnership), the Work Folders data will not be migrated. After the user logs on to the new machine, the partnership is established, and data is synced down to the machine. The shell namespace under "This PC" for Work Folders is not created.
  To get the shell namespace under "This PC" for Work Folders, you can simply click "Stop Work Folders" in the Work Folders Control Panel, and then set up Work Folders again. This allows the namespace to be created as part of the partnership creation.
- If the client has installed a localized (non-English) version of Work Folders, after migration the Work Folders shortcut under the Favorites folder will be shown in English.

So that's our Windows 7 app for Work Folders. Let us know what you think, and we'll keep working on clients for other popular platforms and update when they're ready.

Thanks,
Jian Yan and the Windows 7 Work Folders team

Join the conversation

@Fred, the client will always try to connect using either 443 or 80. You can ping us with your issue at wfdisc at microsoft.com to further discuss the details.

@Scott, from your description, it looks like John is connected using Windows 7, and Mary is connected using the netbook. John and Mary are different users; on the server, their data will be separated into per-user folders. Work Folders is designed for individual user data. If you are thinking of a team-share scenario, Work Folders doesn't support that in this release.

Thanks for that great information.

@Jared, what error are you getting?

Can Work Folders be used as users' home drives in Windows 7? Are there pros or cons to using it instead of previous sync options?

We set up a Windows 7 test machine (Enterprise) on our domain with the client. Installation was fine.
Trying to connect to our Work Folders machine, it shows the error: "The connection with the server was terminated abnormally (0x80072efe)", even though all firewalls are off, bindings show 443 not used on the server, and it is pingable and visible across the network. I figured out that we needed an SSL cert for 443, so I added the registry entry to allow unsecured connections: 0x80004005 Unspecified Error. However, upon realizing there was a binding on 80, I attempted to alter the port designation in the system32 XML file settings for the sync share; however, it will not allow domain or local administrator rights to do so?

I installed Work Folders on a server but I cannot connect from a Windows 7 client. Should I configure something special on the Windows 7 client?

I have set up a single Work Folder in E:\Workfolders, shared in Windows Server 2012; only one group has permissions. Users are, say, John and Mary. When I connect via my Windows 7 laptop (John) or Windows 8.1 netbook (Mary), I see a new folder in WorkFolders called john and one called mary. Why am I not seeing the one shared folder and data structure where users see the same thing?
@joemcginley, can you clarify how you are thinking about using Work Folders as user home drives?
Hey there, I am extremely new to Java and was wanting some help on this bug I am getting. My code is as follows:

import java.util.Scanner;

public class bavg {
    public static void main(String args[]) {
        Scanner keyboard = new Scanner(System.in);
        System.out.println("Enter the amount of at-bats.");
        int ab = keyboard.nextInt();
        System.out.println("Now, enter the amount of hits.");
        int hits = keyboard.nextInt();
        int avg = (hits / ab);
        System.out.println("Your batting average is " + avg + ".");
    }
}

But whenever I run the program, no matter what I do, I get "my average is 0.0". Anyone have any suggestions? Again, I am very new to Java so I apologize in advance for any silly mistakes.
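Since the thread ends without an answer, here is a hedged guess at the cause: dividing two int values performs integer division, so any batting average below 1.000 truncates to zero. A minimal sketch of the fix follows; the class and method names are invented for this sketch, and hardcoded values stand in for the Scanner input:

```java
public class BattingAverage {

    static double average(int hits, int atBats) {
        // Casting one operand to double promotes the division to
        // floating point, so 50 / 200 yields 0.25 instead of 0.
        return (double) hits / atBats;
    }

    public static void main(String[] args) {
        System.out.println("Your batting average is " + average(50, 200) + ".");
        // prints: Your batting average is 0.25.
    }
}
```

Declaring hits and ab as double from the start would work too; the key point is that at least one operand must be a floating-point type before the division happens.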
This class manages the controlling of one or more telescopes by one instance of the Stellarium program. More...

#include <TelescopeControl.hpp>

Detailed Description

This class manages the controlling of one or more telescopes by one instance of the Stellarium program. "Controlling a telescope" means receiving position information from the telescope and sending GOTO commands to the telescope. No esoteric features like motor focus, electric heating and such. The actual controlling of a telescope is left to the implementation of the abstract base class TelescopeClient.

Definition at line 70 of file TelescopeControl.hpp.

Member Function Documentation

Adds a telescope description containing the given properties. DOES NOT VALIDATE its parameters. If serverName is specified, portSerial should be specified too. Call saveTelescopes() to write the modified configuration to disc. Call startTelescopeAtSlot() to start this telescope.

Remove all currently registered telescopes.

Get the field of view circles color. Definition at line 238 of file TelescopeControl.hpp.

Returns a list of the currently connected clients.

Safe access to the loaded list of telescope models.

Get display flag for telescope field of view circles. Definition at line 196 of file TelescopeControl.hpp.

Get display flag for telescope name labels. Definition at line 181 of file TelescopeControl.hpp.

Get display flag for telescope reticles. Definition at line 166 of file TelescopeControl.hpp.

Get the telescope labels color. Definition at line 218 of file TelescopeControl.hpp.

Get the telescope reticle color. Definition at line 210 of file TelescopeControl.hpp.

Retrieves a telescope description. Returns false if the slot is empty. Returns empty serverName and portSerial if the description contains no server.

Initialize itself. If the initialization takes significant time, the progress should be displayed on the loading bar. Implements StelModule.

Checks if the TelescopeClient object at a given slot is connected to a server.
Checks if there's a TelescopeClient object at a given slot, i.e. if there's an active telescope at that slot.

List all StelObjects. Implements StelObjectModule. Definition at line 92 of file TelescopeControl.hpp.

Loads the module's configuration from the configuration file.

Loads from telescopes.json the parameters of telescope clients and initializes them. If there are already any initialized telescope clients, they are removed.

Removes info from the tree. Should it include stopTelescopeAtSlot()?

Saves the module's configuration to the configuration file.

Saves to telescopes.json a list of the parameters of the active telescope clients.

The function searches in a disk of diameter limitFov centered on v. Only visible objects (i.e. currently displayed on screen) should be returned. Implements StelObjectModule.

Return the matching StelObject if it exists, or the empty StelObject if not found. Implements StelObjectModule.

Find a StelObject by name. Implements StelObjectModule.

Set the field of view circles color. Definition at line 231 of file TelescopeControl.hpp.

Set display flag for telescope field of view circles. Definition at line 189 of file TelescopeControl.hpp.

Set display flag for telescope name labels. Definition at line 174 of file TelescopeControl.hpp.

Set display flag for telescope reticles. Definition at line 159 of file TelescopeControl.hpp.

Forces a call of loadDeviceModels(). Stops all active telescopes. Used in the GUI. Definition at line 268 of file TelescopeControl.hpp.

Define font size to use for telescope names display.

Set the telescope labels color. Definition at line 224 of file TelescopeControl.hpp.

Set the telescope reticle color. Definition at line 203 of file TelescopeControl.hpp.

Forces a call of loadDeviceModels(). Stops all active telescopes.

Slews a telescope to the selected object. For use from the GUI. The telescope number will be deduced from the name of the StelAction which triggered the slot.
Slews a telescope to the point of the celestial sphere currently in the center of the screen. For use from the GUI. The telescope number will be deduced from the name of the StelAction which triggered the slot.

Starts a telescope at the given slot, getting its description with getTelescopeAtSlot(). Creates a TelescopeClient object and starts a server process if necessary.

Stops all telescopes, but without removing them as deleteAllTelescopes() does.

Stops the telescope at the given slot. Destroys the TelescopeClient object and terminates the server process if necessary.

Send a J2000-goto-command to the specified telescope.

Update the module with respect to the time. Implements StelModule.
saving for retirement

The miracle of compounding interest: I am 27 and currently have ~100k in 401k savings, and currently contribute $15k/year (numbers approximate for the sake of argument). If I invest in the laziest way possible (say, index funds for US markets, world markets, and real estate) until retirement, how much longer do I need to save at this level so that I can stop saving completely and have a comfortable income at 65? We'll discount the possibility of the entire world's economy crashing, only because that is not a useful scenario for the sake of this exercise. I'm thinking that if I conservatively assume 6% returns, and save another 15k for the next 5 years or so, I should end up with 1.5 million for retirement, which is enough. Crazy... am I really that close to not having to worry about retirement? My parents are financial planners and basically told me to save for retirement first; looks like the miracle of compounding interest makes that good advice.

the great purple
April 27th, 2007 8:57am

I have no advice to offer, just wanted to say fair play. Everybody I know in their 20's doesn't save a penny (including myself).

wow what are you reading for?
April 27th, 2007 9:00am

You must account for inflation. Your effective interest is the difference between nominal interest and inflation. In the future there will be a period of heavy inflation (when China starts selling dollars). When nominal interest keeps up with inflation you are fine, but when there is a period when it goes out of control, your savings can melt away fast. It happened around 1930 in Germany, in the early 1990's in Russia, and in countless failed countries.

Erik Springelkamp
April 27th, 2007 9:12am

1.5 million may not be that big a deal in 2040.
$--
April 27th, 2007 9:26am

> I'm thinking that if I conservatively assume 6% returns

Do you think you'll be earning 6% every year, or that in n years you would have had a simple average return of 6%? If the latter is the case, the real compound average return may be much less. Naturally, when people advertise their funds they'll use the simple average return and not the compounded return. See the table "Impact of Dispersion of Returns" in the following link: I guess assuming a 6% compound return is not a conservative return; it's a good return. Of course, I assume this will take into account inflation too.

Senthilnathan N.S.
April 27th, 2007 9:36am

6% for retirement investments at age 27 is *very* conservative. She should be aiming for 8-10% average returns.

Philo
April 27th, 2007 9:41am

Her advantage is that she's starting at age 27. Something I've been looking for, but haven't been able to find hard info on, is expected expenses once you retire (stuff like medical costs). I heard that your medical costs in the last 3 years of your life can exceed what you paid up until that point.

xampl
April 27th, 2007 9:50am

...my point was that you need a target to save for. aka: How much will I be spending once I turn 72?

xampl
April 27th, 2007 9:52am

Inflation is the problem. If you make 6% returns ABOVE INFLATION, you'll do excellent. With inflation around 5%, that's 11% right now, which is not really doable. If you make 3% above inflation, you'll do OK. That's 8% these days, which is not unreasonable. If you make 5%, you're AT inflation, and treading water. And if inflation is higher than your returns, you're going backward, and you get the Germany 1930's syndrome. The point being, whatever analysis you look at, it's important to factor inflation in. I know it's hard to predict future inflation, but leaving it out entirely can be disastrous.

SaveTheHubble
April 27th, 2007 10:03am

Oh, and $100,000 at age 27 (or even 30) is excellent.
Now factor in housing costs, children's education costs, and what the inflated value of your retirement income needs to be.

SaveTheHubble
April 27th, 2007 10:05am

> She should be aiming for 8-10% average returns.

Yeah, I agree. I should have given this percentage rather than saying "Of course, I assume this will take into account inflation too.", which wasn't clear, I guess. Inflation varies, and a 6% real return falls within an 8-10% nominal return. But my main point was that it is difficult to do this consistently, year over year, for even 10 years, let alone longer periods. People, when they talk about compounding, see only the way it goes higher. It can go the other way too. The way it is indicated in the above link, on average, the simple return can stay at 10% or so but the real compound return can be much less. One single stroke on the negative side may easily wipe out a lot of the return earned over quite a few years.

> Her advantage is that she's starting at age 27.

One more assumption people make regarding compounded return. Naturally, earlier is better. But there are instances where the time you get in may be bad. Not all sons are better off than their fathers in their earning power or career, etc. It may be the case that someone gets in early and makes a bad return, and another gets in late and makes a good or even much better return. As being there when the going is good is important, not being there when the going is bad is just as important. In the book "Bull" by Maggie Mahar, there are examples given where people have burnt their fingers getting in early at different periods. I don't have the examples readily available, but it talks about investment, retirement, 401ks, etc.

Senthilnathan N.S.
April 27th, 2007 10:06am

"But my main point was that it is difficult to do this consistently, year over year, for even 10 years, let alone longer periods."

*Average* Some years it may be 3-6%.
Many years can be 12%+.

Philo
April 27th, 2007 10:09am

The size annuity you can buy with $100k would only pay about $600/month. The reason you purchase annuities is to mitigate the risk that you will outlive your money. If your money runs out when you're 85, there are very few jobs you could get. Real estate is a very risky investment, as many of the house flippers are learning now, and many more will learn in the coming years. Unless it is bringing in money, namely rental income, it isn't an investment. Another risk you might not have considered is divorce. Some community property states will split your IRA and give half to your ex. Defined contribution plans, namely 401ks, haven't been around for a long time. Less than two decades. I suspect that not all the legal issues, especially with bankruptcy, have been worked out. Defined benefit plans, traditionally called "pensions", had been around for more than half a century before the big Studebaker bankruptcy that wiped out large numbers of pensions and led to the PBGC. The PBGC doesn't cover 401ks, and as the Enron folks learned, it can all go up in smoke when the company stock goes tits up.

Peter
April 27th, 2007 10:17am

This is where Excel really can help, as you can model discounted cash flows quite precisely. A simple first cut would assume you will save a fixed proportion of your earnings and your earnings rise over time. Add in interest (NET of management fees!) as it accrues to your accumulated principal. Stop at your selected retirement point for the future value of your diligent savings. Looks huge, eh? But that's in future money, where a sandwich costs $200. You need to convert the future value to today's money to get some idea of what it represents in current value. You can then elaborate: factor in varying rates of inflation, varying income and expenses, etc.

trollop
April 27th, 2007 10:34am

Remember you have to pay taxes as you take it out... cool webpage:

arg!
April 27th, 2007 11:43am

It seems a fairly simple intersection of two bands in the Cartesian plane:

- Take $100K at year 27, add $15K/year going forward, and compound at 5-7% (which is a fair guess at what an after-inflation, moderate-risk interest would be).
- Start at death, add somewhere between $X1 and $X2 per year going backwards (discount 6% for interest gained on this amount too).

The two bands will overlap in a diamond-shaped figure with curvy sides. Hopefully the year 65 either intersects this diamond or comes after it (early retirement!). The problem is forecasting your own death. And calculating those Xs. I heard a program on NPR that said retirement expenditures used to be calculated at 70% of peak income (not age 27, but peak). But retirement advisors are now upping that to 90%. Don't ask me how someone making, say, $80K a year is supposed to plan a retirement where they spend $72K a year. Peter points to an interesting first estimate: let the statisticians/free market do it. Ask an annuity (a reverse life insurance plan) to tell you how much they'd pay you per month for your $100K. They have the actuarial tables; they can do the work fairly quickly (and their 10-20% profit margin is easy to take into account as well). Granted, this is not a safe number: an annuity can risk letting a few old geezers live longer than predicted, at a loss. You have less risk being such an old, long-lasting geezer when it's mostly you paying the bills (though Social Security and kids offer a safety net).

strawberry beeswax
April 27th, 2007 11:58am

Taxes on unearned interest could be regarded as a "management fee" ... but good point, well spotted.

trollop
April 27th, 2007 12:01pm

> Some years it may be 3-6%. Many years can be 12%+

True. But the problem is some years may be negative too, and the dispersion matters.

Year 1 = 5%, Year 2 = 25% and Year 3 = 0% gives a 9.49% compounded average.
Year 1 = 30%, Year 2 = -20% and Year 3 = 20% gives a 7.67% compounded average.
Year 1 = 40%, Year 2 = 30% and Year 3 = -40% gives a 2.98% compounded average.

All of the above will give a simple average return of 10%. By average, which average do you mean? It can either be the simple average or the compounded average. The simple average looks good, but the real return (what you'll finally end up with) is the compounded average.

Senthilnathan N.S.
April 27th, 2007 12:04pm

Wow... lots of numbers. To answer one person's question, you would expect your expenses to go down as you approach retirement, because most people pay off their homes. Therefore, no more rent/mortgage. Also, a portion of your income is no longer going towards retirement savings. However, health insurance costs go up and you might want to travel, so maybe that evens it out. Also, you expect SS to kick in for some portion of your income. So my guess would be that they expect that to cover the gap. Maybe they're expecting SS benefits to go down, and that's why they're recommending people save more? There are too many unknowns in this whole "how much do I need" discussion. I don't know what inflation is going to look like, nor what my returns on investments will be. To be on the safe side I have to assume high inflation and low returns. I have no idea what my pre-retirement income will be. I know what I make right now, but will that go up or down with time? No clue. Will I want to travel? I really don't know. I don't have dreams of travelling, per se, but maybe I'll develop some? What will my medical expenses be? A HUGE question to which I have no answer. The end result being, it's pretty much impossible to plan. This irritates me to no end.

the great purple
April 27th, 2007 12:59pm

It may be impossible to plan, but too many people use that as an excuse to do nothing. It's highly unlikely you will arrive at retirement age regretting having saved and invested to prepare for it. Just don't get conned into living like a pauper in order to accumulate huge amounts for your retirement years.
This is typically the advice given by people who stand to gain commissions and fees from you doing so. Planning your retirement so you can live on much less income than you have now is possible, but there is no money in it for them to tell you that.

LH
April 27th, 2007 2:46pm

401ks haven't been around that long, but they have been around long enough for people to retire with them. You have to watch out: there are lots of ways things can screw up, and you'll end up losing most of it. Banks will 'misplace' your accounts. Or you'll transfer from one bank to another, mistakes will be made, and you'll end up with full tax liability for the entire fund value in a single year, wiping out nearly half of it in one move. Be extremely careful.

Practical Economist
April 27th, 2007 3:34pm

This topic is archived. No further replies will be accepted.
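The simple-versus-compounded average distinction raised in the thread can be checked with a short script. This is an illustrative sketch (the helper names are invented); the return sequences are taken from the thread's examples:

```python
def arithmetic_avg(returns):
    """Simple average of yearly returns (what fund ads usually quote)."""
    return sum(returns) / len(returns)

def geometric_avg(returns):
    """Compounded average: the constant yearly return that would produce
    the same final balance as the actual sequence of returns."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth ** (1 / len(returns)) - 1

steady = [0.05, 0.25, 0.0]       # first example from the thread
print(round(arithmetic_avg(steady) * 100, 2))   # 10.0
print(round(geometric_avg(steady) * 100, 2))    # 9.49

volatile = [0.40, 0.30, -0.40]   # third example: same simple average,
print(round(arithmetic_avg(volatile) * 100, 2)) # 10.0
print(round(geometric_avg(volatile) * 100, 2))  # but only 2.98 compounded
```

The wider the spread of yearly returns, the further the compounded (geometric) average falls below the simple (arithmetic) one, which is exactly the point made about a single bad year wiping out several good ones.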
#include <stdio.h>
#include <string.h>

struct sale {
    int week;
    int units;
    int price;
    char name[30];
};

int main(int argc, char *argv[])
{
    int week = 0;
    int units = 0;
    int price = 0;
    char name[30];
    int count = 13;
    int i = 0;
    struct sale weeklySale[13];
    int total = 0;

    for (i = 0; i < 13; i++) {
        scanf("%i %i %i %s", &weeklySale[i].week, &weeklySale[i].units,
              &weeklySale[i].price, weeklySale[i].name);

        FILE *fp;
        fp = fopen("data.txt", "wb");
        if (fp == NULL) {
            printf("Sorry, there is no such file as data.txt");
        }
        fwrite(weeklySale[i].week, weeklySale[i].units, weeklySale[i].price,
               weeklySale[i].name, fp);
        fclose(fp) == 0 ? "succeeded" : "failed";
        if (weeklySale[i].week > 13) {
            break;
        }
    }
    return 0;
}

Starting from the first for loop, I want to allow users to enter sales information for a period of 13 weeks, and store the sales values in a struct. Once the values are in a struct, I want to store the struct values in a file called data.txt (because data.txt will store all of the struct values, whereas the struct will keep getting refreshed). Then I want to close the "write mode" for the file and open the file in "read mode". Once I have done that, I want to output the data in the form of week numbers and total sales in each week. The problem in my code is that it won't let me write to the file, and the error messages I am getting are the following:

"In function main"
"warning: passing argument 1 of fwrite makes pointer from integer without a cast"
"warning: passing argument 4 of fwrite from incompatible pointer type"
"too many arguments to function fwrite"

...does anyone have any idea of what's going wrong? Because I'm going to kill someone soon if I don't start making some sort of progress.
In addition to this, the OData protocol has recently been ratified as an OASIS standard, which will help bolster the adoption of the protocol by many companies and services all over the internet. If you want to know more about OData, you can check the official site at, where you can find the complete specification of the protocol and the features, the different formats supported, and information about existing OData clients you can use in your apps. If you want to take a sneak peek at the new features and changes in the v4.0 version, you can do it here.

During the past few months, the Web API team has been working on the initial support for the v4.0 version. Many of the existing changes in the current nightly build deal with protocol and format changes from the v3.0 to the v4.0 version, but we have managed to add some interesting features to our current OData support. The list of features includes:

1. OData attribute routing: allows you to define the routes in your controllers and actions using attributes.
2. Support for functions: allows you to define functions in your OData model and bind them to actions in your controller that implement them.
3. Model aliasing: allows the names of the types and properties in your OData model to be different from the ones in your CLR types.
4. Support for limiting allowed queries: allows the service to define limitations on the properties of the model that can be filtered, sorted, expanded or navigated across.
5. Support for ETags: allows an @odata.etag annotation to be generated, based on some properties of the entity, that can be used in If-Match and If-None-Match headers in subsequent requests.
6. Support for enums: we've improved our support for enums, and now we support them as OData enumerations.
7. Support for $format: we've also added support for $format, so clients are able to specify the desired format of the response in the URL.
Important changes in this version

One of the important goals of this new implementation has been to support the side-by-side scenario, where customers can have v3 and v4 services running in the same application. To that effect, we had to make some changes to the current naming of some classes and methods to allow for a reasonable user experience. The assembly name and the root namespace are now System.Web.OData instead of System.Web.Http.OData. For the sake of consistency between versions, we have made the same set of renames in the Microsoft.AspNet.WebApi.OData package to achieve a similar development experience; only the namespace remains System.Web.Http.OData in that version. The current method and class names can still be used with System.Web.Http.OData (OData v3.0), but we have marked them as obsolete, and they are not available in the new assembly.

Enough talking, let's write an OData v4.0 service!

We'll start our new OData v4.0 service by creating a simple web application that we'll call ODataV4Service. We choose the Web API template, which installs the default Web API packages required for our application. Once the basic application has been created, the first thing we need to do is update the existing Web API packages to the nightly versions hosted on MyGet. To do that, right-click on "References" in the project we have just created in the solution explorer, click "Manage NuGet Packages" and expand the Updates section on the left. Check that there is a source for WebStack Nightly; if not, add it by clicking the Settings button in the bottom-left corner of the window and adding the source in the dialog that appears, as shown in the following figure.

As you can see from the image, the URL for the nightly ASP.NET packages is the MyGet feed shown in the dialog, and all the published packages can be browsed on the corresponding gallery page. Now that we have set up our nightly package source, we can go and update the Web API packages. To do that, we need to select the Include Prerelease option in the dropdown menu at the top of the window.
Then we just need to click Update All. Before leaving the NuGet Package Manager, we need to install the Web API 2.2 for OData v4.0 package. To do that, we expand the Online tab, select the WebStack Nightly source and the Include Prerelease option, and then search for Microsoft.AspNet.OData. After installing this package, we can exit the NuGet Package Manager and try running our application by pressing F5. The default page should appear in the browser.

At this point we have our application running on the latest 5.2 assemblies and we are ready to create our OData service. The first step is to create a model. For that, we create a couple of C# classes representing entities, as follows:

public class Player
{
    public virtual int Id { get; set; }
    public virtual int TeamId { get; set; }
    public virtual string Name { get; set; }
}

public class Team
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual double Rate { get; set; }
    public virtual int Version { get; set; }
    public virtual ICollection<Player> Players { get; set; }
    public Category Category { get; set; }
}

We are going to need some data to use, so we are going to use Entity Framework for that. We install the Entity Framework package from NuGet in the same way we have done with the OData package, except this time we pick the nuget.org package source and a stable version of the package.
Then we create a context and include an initializer to seed the database with some data, as shown here:

public class LeagueContext : DbContext
{
    public DbSet<Team> Teams { get; set; }
    public DbSet<Player> Players { get; set; }

    static LeagueContext()
    {
        Database.SetInitializer<LeagueContext>(new LeagueContextInitializer());
    }

    private class LeagueContextInitializer : DropCreateDatabaseAlways<LeagueContext>
    {
        protected override void Seed(LeagueContext context)
        {
            context.Teams.AddRange(Enumerable.Range(1, 30).Select(i => new Team
            {
                Id = i,
                Name = "Team " + i,
                Rate = i * Math.PI / 10,
                Players = Enumerable.Range(1, 11).Select(j => new Player
                {
                    Id = 11 * (i - 1) + j,
                    TeamId = i,
                    Name = string.Format("Team {0} Player {1}", i, j)
                }).ToList()
            }));
        }
    }
}

The next step is creating our OData model. We are going to create it in the WebApiConfig.cs file, as the next figure shows:

public static IEdmModel GetModel()
{
    ODataModelBuilder builder = new ODataConventionModelBuilder();
    builder.EntitySet<Team>("Teams");
    builder.EntitySet<Player>("Players");
    return builder.GetEdmModel();
}

OData attribute routing

Now that we have created our model, we need to define the route for the OData service. We are going to use OData attribute routing to define the routes in our service. In order to do that, we open the WebApiConfig.cs file under the App_Start folder and add the System.Web.OData.Extensions and System.Web.OData.Routing namespaces to the list of our usings. Then, we modify our Register method to add the following lines:

ODataRoute route = config.Routes.MapODataServiceRoute("odata", "odata", GetModel());
route.MapODataRouteAttributes(config);

At this point we have successfully configured our OData service, but we haven't yet defined any controller to handle the incoming requests. Ideally we would use scaffolding for this, but we are still working on getting the OData v4.0 scaffolders ready for preview (the existing scaffolders only support OData v3.0 services).
So we have to create our controllers by hand, but we'll see that with attribute routing it's not difficult at all. In previous versions of our Web API OData support, we had very tight restrictions on the names of the controllers, actions and even the parameter names of our actions. With attribute routing, all those restrictions go away. We can define a controller or an action using whatever name we want, as the following fragment of code shows:

[ODataRoutePrefix("Teams")]
public class TeamsEntitySetController : ODataController
{
    private readonly LeagueContext _league = new LeagueContext();

    [EnableQuery]
    [ODataRoute]
    public IHttpActionResult GetFeed()
    {
        return Ok(_league.Teams);
    }

    [ODataRoute("({id})")]
    [EnableQuery]
    public IHttpActionResult GetEntity(int id)
    {
        return Ok(SingleResult.Create<Team>(_league.Teams.Where(t => t.Id == id)));
    }
}

As we can see in the figure above, we can use ODataRoutePrefixAttribute to specify a prefix for all the routes in the actions of the controller, and we can use ODataRouteAttribute to specify further segments that get combined with the ones in the prefix. That way, the GetFeed action represents the route /Teams and the GetEntity action represents routes like Teams(1), Teams(2), etc.

Support for functions

Now that we have a basic service up and running, we are going to introduce some business logic. For that, we are going to define a function that gives us the teams whose rating is around a certain threshold, with a given tolerance. Obviously, we could achieve the same result with a query, but in that case the clients of our service are the ones responsible for defining the query, and they might make mistakes. If we give them a function instead, they only need to care about sending the right parameters.
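Concretely, the range logic we want can be sketched in a few lines. This is a standalone Python illustration, not part of the Web API service; the rate formula (team i has a rate of i * pi / 10) comes from the Entity Framework seed shown earlier:

```python
import math

def teams_with_score(teams, threshold, tolerance):
    """teams is a list of (id, rate) pairs; keep the ids whose rate falls
    strictly inside (threshold - tolerance, threshold + tolerance)."""
    low, high = threshold - tolerance, threshold + tolerance
    return [team_id for team_id, rate in teams if low < rate < high]

# The seed data gives team i a rate of i * pi / 10, for i = 1..30.
teams = [(i, i * math.pi / 10) for i in range(1, 31)]
print(teams_with_score(teams, threshold=3, tolerance=2))
# → [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
```

Exposing this as an OData function rather than a raw $filter keeps the range logic on the server, so clients only have to supply threshold and tolerance.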
In order to define a function that represents the business logic we have specified, we can modify our GetModel function as follows:

public static IEdmModel GetModel()
{
    ODataModelBuilder builder = new ODataConventionModelBuilder();
    EntitySetConfiguration<Team> teams = builder.EntitySet<Team>("Teams");
    builder.EntitySet<Player>("Players");

    FunctionConfiguration withScore = teams.EntityType.Collection.Function("WithScore");
    withScore.Parameter<double>("threshold");
    withScore.Parameter<double>("tolerance");
    withScore.ReturnsCollectionFromEntitySet<Team>("Teams");

    return builder.GetEdmModel();
}

Functions can be defined at the service level (unbound), at the collection level (bound to the collection) or at the entity level (bound to the entity). In this case, we have defined a function bound to the collection, but similar methods exist on the ODataModelBuilder class (to define service-level functions) and on the EntityConfiguration class (to define entity-level functions).

The last step is to define an action that implements the function; for that, we are going to take advantage of attribute routing. The action in the figure below shows the implementation:

[ODataRoute("Default.WithScore(threshold={threshold},tolerance={tolerance})")]
[EnableQuery]
public IHttpActionResult GetTeamsWithScore(double threshold, double tolerance)
{
    return Ok(_league.Teams.Where(t => (t.Rate < (threshold + tolerance)) &&
                                       (t.Rate > (threshold - tolerance))));
}

As you can see, the way we call the function is by using its fully qualified name after the entity set on which we want to call it. We use attribute routing to define the parameters of the function and bind them to the parameters of the action in a very elegant way. In this case, a sample call to the function would use the following URL:

/odata/Teams/Default.WithScore(threshold=3, tolerance=2)

Important note: if you try this in IIS, you'll probably get a 404 response.
This is because IIS doesn't like the dot in the last URL segment (IIS thinks it's a file). One possible way to fix this is to add a piece of configuration to your web.config to ensure IIS runs the routing module on all requests:

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true"></modules>
</system.webServer>

Model aliasing

So far we've seen attribute routing and functions; now we are going to show another very interesting feature, model aliasing. Many times we want to expose some data from our domain, but we want to change things like the names of the domain entities or the names of some properties. For that, we can use model aliasing. There are two ways to configure model aliasing in our model: we can do it directly through the model builder by setting the Name property of the types and of the properties of the types, or we can annotate our types with the DataContract and DataMember attributes. For example, we can change our model using data contracts in the following way:

[DataContract(Name = "Member")]
public class Player
{
    [DataMember]
    public virtual int Id { get; set; }

    [DataMember(Name = "Team")]
    public virtual int TeamId { get; set; }

    [DataMember]
    public virtual string Name { get; set; }
}

Support for limiting the set of allowed queries

As we said above, query limitations allow a service to restrict the types of queries that users can issue by imposing limitations on the properties of the types in the model. A service can decide to limit the ability to sort, filter, expand or navigate any property of any type in the model. To do that, there are two options: we can use attributes like Unsortable, NonFilterable, NotExpandable or NotNavigable on the properties of the types in our model, or we can configure this explicitly in the model builder. In this case, we'll do it through attributes.
public class Team
{
    public virtual int Id { get; set; }

    [Unsortable]
    public virtual string Name { get; set; }

    [NonFilterable]
    public virtual double Rate { get; set; }

    [NotExpandable]
    [NotNavigable]
    public virtual ICollection<Player> Players { get; set; }
}

The meaning of Unsortable, NonFilterable and NotExpandable is self-explanatory; as for NotNavigable, it is a shortcut for specifying that a property is Unsortable and NonFilterable. When a client issues a query that involves a limited property, the server answers with a 400 status code and indicates the limited property that caused the request to fail.

Support for ETags

The next feature we are going to see is ETags. This feature allows a service to define which fields of an entity are part of the concurrency check for the entity. Those fields are used to generate an @odata.etag annotation that is sent to clients when returning the entity, either as part of a feed or as a single entity. The client can use this ETag value in the If-Match and If-None-Match headers to implement optimistic concurrency updates and efficient resource caching.

To mark a field as part of the concurrency check, we can use the ConcurrencyCheck attribute or the Timestamp attribute. It's important to note that we should use one or the other, but not both at the same time. The difference is that ConcurrencyCheck is applicable to multiple fields of the entity, while Timestamp is meant to be applied to a single field. Individual properties can also be marked as part of the concurrency check explicitly using the model builder. In this case, we'll do it through attributes. For example, we have modified the Team entity to add a Version property and mark it as part of the ETag for the entity.
The result is shown in the next figure:

public class Team
{
    public virtual int Id { get; set; }

    [Unsortable]
    public virtual string Name { get; set; }

    [NonFilterable]
    public virtual double Rate { get; set; }

    [ConcurrencyCheck]
    public int Version { get; set; }

    [NotExpandable]
    [NotNavigable]
    public virtual ICollection<Player> Players { get; set; }
}

Now the ETag of the entity is serialized when we retrieve it through a GET request, but we still need to take advantage of the ETag in the actions of our service. To do that, we are going to add a Put action and bind ODataQueryOptions<Team> in order to use the ETag:

[ODataRoute("({id})")]
public IHttpActionResult Put(int id, Team team, ODataQueryOptions<Team> options)
{
    if (!ModelState.IsValid)
    {
        return BadRequest(ModelState);
    }

    if (id != team.Id)
    {
        return BadRequest("The key on the team must match the key on the url");
    }

    if (options.IfMatch != null &&
        !(options.IfMatch.ApplyTo(_league.Teams.Where(t => t.Id == id)) as IQueryable<Team>).Any())
    {
        return StatusCode(HttpStatusCode.PreconditionFailed);
    }
    else
    {
        _league.Entry(team).State = EntityState.Modified;
        _league.SaveChanges();
        return Updated(team);
    }
}

As we can see, we can take advantage of the ETag by binding ODataQueryOptions as a parameter and using the IfMatch or IfNoneMatch properties of that object to apply the ETag value to a given query. In the example above, we check whether the request carries an If-Match ETag; if it does, we apply it to the query for the Team with the id from the URL, and if no entity matches we return a Precondition Failed status.

Support for enums

We already had support for enums in Web API OData v3.0 by serializing them as strings, but the new version of the protocol has added full support for them, so we have upgraded our enum support accordingly.
To use enums, you just need to define a property with an enum type; we represent it as an enum in the $metadata, and clients are able to use enum query operators in $filter clauses. There are also specific overloads on the model builder in case we want to configure the enumeration explicitly. Defining an OData enum property in your type is as simple as this:

public enum Category
{
    Amateur,
    Professional
}

public class Team
{
    public virtual int Id { get; set; }

    [Unsortable]
    public virtual string Name { get; set; }

    [NonFilterable]
    public virtual double Rate { get; set; }

    [ConcurrencyCheck]
    public virtual int Version { get; set; }

    [NotExpandable]
    [NotNavigable]
    public virtual ICollection<Player> Players { get; set; }

    public Category Category { get; set; }
}

Support for $format

This feature allows a client to specify the desired format in the query string of the URL, bypassing any value set in the Accept header. For example, the user can issue a query whose $format value is a full MIME media type with parameters to get all the metadata in the response instead of just the minimal amount (which is the default). A client can also use an alias to refer to a specific MIME media type: application/json, which in the case of OData is equivalent to application/json;odata.metadata=minimal.

Using the .NET OData client to query the v4.0 service

The OData client for .NET has been released this week; the following blog post contains the instructions on how to use it to generate a client that can query Web API OData v4.0 services.

Note: if you plan to use $batch, it won't work properly with the client that gets generated by default. This is because we are still using the beta version of ODataLib (we plan to update to the RTM version in the near future), while the client uses the RTM version of ODataLib.
In order to work around this issue, you can do the following. Open the NuGet Package Console and downgrade the OData client package to beta1:

1. Uninstall-package Microsoft.OData.Client -RemoveDependencies -project <ProjectName>
2. Install-package Microsoft.OData.Client -version 6.0.0-beta1 -pre -project <ProjectName>

Then perform the following changes in the T4 template mentioned in the blog:

1. Replace Microsoft.OData.Client.Key with Microsoft.OData.Service.Common.DataServiceKeyAttribute
2. Replace Microsoft.OData.Client.ODataProtocolVersion with Microsoft.OData.Service.Common.DataServiceProtocolVersion

Samples

Along with the nightly build, we have published samples of the new features, as well as new samples showing off common scenarios that target OData v4, on the ASP.NET CodePlex site.

Conclusion

We have started publishing nightly builds of our OData v4.0 support, built on top of the ODataLib support that has already shipped. The package ID to find the nightly builds is Microsoft.AspNet.OData (remember to select Include Prerelease in the NuGet Package Manager). We've also seen a brief introduction to all the new features in this version: OData attribute routing, functions, model aliasing, query limitations, ETags, enums and $format. You can find samples for all the new features there. Enjoy!

Comments

Hi, thank you so much… I have a question: how can I specify which properties are returned when I query the route /Teams? I just want it to return the Name and Rate properties, not all of them. Thanks…

@Kourosh: You can do Team?$select=Name,Rate. The action for the route needs to have the EnableQuery attribute.

It's great to see model aliasing. But another important part of this feature is the ability to be case-insensitive for properties and functions. It would be useful if the dev team provided a separate adapter for property binding while the OData request is being parsed.
Will this feature be delivered with the next release, or at least some explanation of how it can be done?

@Dmitriy, we are still exploring the best way to do the case-insensitive feature. OData is clearly a case-sensitive protocol; e.g. you could have both 'Name' and 'name' as different properties, or one property and one function, in your model.

Just one question: I do not like closing the context when releasing the controller. How can we close the context per action?

Can we implement OData 4.0 using WCF Data Services? If yes, how? If not, is Web API 2.2 the only way to go?

I would like to use Web API to expose tabular data as well as OLAP data as OData. Is there a simple way of generating the EDM for these models? The reason for this is that we create these cubes on the fly, and the data model changes as new attributes and/or dimensions could be added. Thank you for any feedback.

I'm using Microsoft.AspNet.WebApi.OData v5.2.0-alpha1-14401 in my host and it references ODataLib, Edm and Spatial 5.6.0 (both in the NuGet dialog dependencies list and what it automatically downloads). See the screenshot at

When I remove the 5.6.0 NuGet packages and instead use only the latest 6.2.0 packages, the Web API host doesn't recognize the location of IEdmModel in these newer packages and my controller scaffolding doesn't work. This post states "The assembly name and the root namespace are now System.Web.OData instead of System.Web.Http.OData." Web API 2.2 shouldn't support ODataLib, Edm and Spatial 5.6.0, correct? Does this post need to be updated to reflect the latest nightly builds on MyGet?

Works pretty well! Don't know if it is EF 6.1 or the new OData libs, but results with $top & $skip seem much faster. A few things I noticed using today's build (did not test prior builds):

1) EF DateTime properties don't get persisted correctly in the JSON – this is a serious bug that pretty much makes it unusable for me until it is fixed.
2) Some of the new v4 syntax is implemented (like $count=true instead of $inlinecount=allpages) but some is not (like $filter=contains(PropName,'value')) – contains doesn't show up in AllowedFunctions yet. Keep up the good work.

Getting an exception "Object reference not set to an instance of an object"? In your Global.asax.cs file, configure Web API like this:

protected void Application_Start()
{
    GlobalConfiguration.Configure(WebApiConfig.Register);
}

Previously, the project templates used this:

protected void Application_Start()
{
    // Caution: Won't work with attribute routing
    WebApiConfig.Register(GlobalConfiguration.Configuration);
}

forums.asp.net/…/5545668.aspx

@Bjorn, please ensure you installed the latest NuGet packages, e.g. stackoverflow.com/…/globalconfiguration-configure-not-present-after-web-api-2-and-net-4-5-1-migra?

Regarding "'System.Web.Http.HttpRouteCollection' does not contain a definition for 'MapODataServiceRoute'" – make sure you have the necessary using statements:

using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using System.Web.Http;
using System.Web.OData.Builder;
using System.Web.OData.Extensions;
using System.Web.OData.Routing;
using System.Web.OData.Routing.Conventions;
using Microsoft.OData.Edm;

I tried to use the T4 template to generate the proxy, but it created a new type called ".Nullable_1OfDateTime" when it should have been a nullable DateTime. I get the following error when I try to query the proxy: "The complex type 'System.Nullable_1OfDateTime' has no settable properties." If I manually query the Web API, it works fine when generating the JSON. The same Web API service worked fine with an OData v3 client before I upgraded it to an OData v4 service. The template seems to have problems with nullable DateTime. Also noted in the comments here: aspnetwebstack.codeplex.com/…/1753

The old DataService client was generating nullable DateTime like this:
<Property Name="ShipmentDate" Type="Edm.DateTime" />

but the new T4 template is generating it like this:

<Property Name="ShipmentDate" Type="System.Nullable_1OfDateTime" />

If you try to replace all the Nullable_1OfDateTime occurrences with Edm.DateTime, you will get this error: "The complex type 'System.Nullable`1[System.DateTime]' has no settable properties."

This work item aspnetwebstack.codeplex.com/…/1753 talks about removing DateTime from OData v4. Is this true? If so, how should I change my model to fit the new alternative to DateTime?

Well, the answer was in front of me… docs.oasis-open.org/…/odata-v4.0-os-part3-csdl.html Looks like all I had to do was change my model type from DateTime to DateTimeOffset. It's a shame that the generated code from the template couldn't have converted the DateTime at the client level, so that the model on the server could stay DateTime but the client code could just drop the offset before sending it to or grabbing it from the OData service.

$expand is not working the same as it was for OData Web API v3. For example, this works fine in OData v3 but throws the following error in v4: "The query specified in the URI is not valid. Found a path traversing multiple navigation properties. Please rephrase the query such that each expand path contains only type segments and navigation properties."

OK, so expand has changed. Here is how I got it to work in v4; at the client I was able to do something like this:

context.CustomerOrders.Expand("Customer($expand=Phones)")

Hi, the link …/aspnetwebstacknightly doesn't work. Can anyone help? Thanks.

@Jim Deng, the links seem fine to me: …/aspnetwebstacknightly and …/aspnetwebstacknightly. What exactly is not working? Thanks.

I'm very happy to see the $format parameter because I need to create endpoints that can be consumed by Excel PowerPivot. I am able to get Excel to recognize all of the data feed tables, but it fails to discover any columns when it looks at the metadata.
I get an error that reads, "The data source does not contain any columns." If I use Fiddler to call the service (e.g.), I can see the metadata/context link () and if I browse to that, I see the metadata. I'm not sure what URI Excel is using to request the metadata, though (it isn't traceable by Fiddler). Is there anything different about the Web API 2.2 $metadata query from other services or versions that would cause this to break? Any thoughts on why Excel is having trouble identifying the columns (properties) of my tables (Entity objects)?

I'm having some problems getting the OData service to return XML. I can request the various levels of metadata, both in the Accept header and with the $format query string, as long as I'm using "application/json". When I attempt it with "application/xml", "application/atom+xml" or some variation thereof, the service falls back to returning JSON with minimal metadata.

@Mikal Jensen – I don't know if this will help, but I had to make sure I URL-encoded the requests to my Web API services when using something like Fiddler; URL-encoding the $format value was the only way that worked for me.

Thanks for the suggestion Robert, but I had already tried that, and I've found that using the composer in Fiddler usually doesn't require me to URL-encode query string values. To clarify: in my test project, one URL returns a single user in JSON with minimal metadata (294-byte payload); using ;odata.metadata=full or ;odata.metadata=none returns the same user with full or no metadata (851 bytes and 223 bytes respectively). So my service is capable of reading and interpreting the $format parameter. However, when I request XML it falls back to returning JSON with minimal metadata, which is the default format for unrecognised media types. So it seems to me that my service is unaware that it can, and should, return XML when requested. The question is, how do I configure it to do this?
I'm getting this error while trying to install the NuGet package: "Unable to find a version of 'Microsoft.AspNet.WebApi.Core' that is compatible with 'Microsoft.AspNet.WebApi.Client 5.2.0-rc-140514'." Any idea why?

How do I add support for the $search query option? It's supported in Microsoft.OData.Core.UriParser.ODataQueryOptionParser but I can't figure out how to set up the routing rules for it to work. Can you please post an example project that provides basic $search support?

@Shravan peddapally, I am having the same issue. Were you able to resolve it?

@Shravan, I fixed the issue by checking out the references folder before applying the NuGet update, then restarting VS 2013. stackoverflow.com/…/one-or-more-packages-could-not-be-completely-uninstalled

Questions: I was able to successfully set up my Web API 2.2 OData v4 service; however, I am still unable to do the following:

1. In my Web API metadata, I am expecting the Position column to be Edm.Geography, NOT what you see above, i.e. System.Data.Entity.Spatial.DbGeography.
2. Do you know why I cannot use Web API OData v4 out-of-the-box functions like e.g. …(Position, geography'POINT(-122.03547668457 47.6316604614258)') lt 900?

Database: I have a database table called Restaurants with these columns:

- RestaurantID (long)
- Name (string)
- Phone (int)
- Position (geography)

I have data for restaurants with their latitude and longitude stored in the Position column, which is a geography column.

EF6 Code First: Then I have an Entity Framework 6 model for that table (code first). I used reverse engineering to generate the code, and I get a class like the one below:

public partial class Restaurants
{
    public long RestaurantID { get; set; }
    public string Name { get; set; }
    public int Phone { get; set; }
    public System.Data.Entity.Spatial.DbGeography Position { get; set; }
}

Web API 2.2 with OData v4 (latest prerelease from NuGet): I have successfully created the OData v4 service using the steps in this post.
Now when I run the Web API service, I get metadata along these lines (abbreviated):

<?xml version="1.0" encoding="UTF-8"?>
<edmx:Edmx xmlns:
  <Schema Namespace="Restaurants.EF.Tables">
    <EntityType ...>
      <Key>
        <PropertyRef Name="RestaurantID" />
      </Key>
      <Property Name="RestaurantID" Type="Edm.Int64" />
      <Property Name="Name" Type="Edm.String" />
      <Property Name="Phone" Type="Edm.Int32" />
      <Property Name="Position" Type="System.Data.Entity.Spatial.DbGeography" />
      ...

Why OData v4: The reason I had to go to v4 is because OData v3 does not support spatial functions like "get me all restaurants for a given lat & long" out of the box. I do not wish to write my own OData v3 functions to translate LINQ queries to SQL for geospatial calculations. Thanks in advance.

Guys, sorry for the long post last time. Just letting everybody know that this is a bug. See below for comments from an MS contact:

The type of the Position column is System.Data.Entity.Spatial.DbGeography. Then I searched about DbGeography and Edm.Geography and found this article: "Do Entity Framework Provide Utility that convert DB Type To C# Type". In this article, we can find that System.Data.Entity.Spatial.DbGeography is a ClrEquivalentType of Edm.Geography. It also mentions that EF.Utility.CS.ttinclude internally uses classes from the System.Data.Metadata.Edm namespace. So based on the article and the test results, I think this is a bug: EF doesn't convert System.Data.Entity.Spatial.DbGeography to Edm.Geography internally.

@Omer Zubair, could you please log the EF issue to entityframework.codeplex.com, then post the link here? That way, you can communicate with the EF team directly and get votes from other supporters to get the bug fixed sooner. Thanks!

I think your workgroup is doing very bad work implementing OData 4.0. Really, do you think I will change a column type on the database? Please leave your work to someone smarter.
@Shravan, I don't think that line is needed anymore, as I believe the function was deprecated: aspnetwebstack.codeplex.com/…/1946

Dear OData team, I'm also, like others, having an issue with the inability to force the OData service to return an XML-formatted result via either the Accept header or the $format URI parameter. Could you please tell us how to fix it? Web API version: 5.2.2. Ready-to-use example:

Web API content negotiation works as expected – we're able to specify xml or json in the Accept header and receive a suitably formatted result. But as for ODataControllers – it doesn't work.
Hi, is there any way I can export the data in Excel format? I'm using OData v4.0 to query and filter the results. I want to send back Excel in the response instead of JSON.

It doesn't seem that the method 'route.MapODataRouteAttributes(config);' is defined anymore?

BEWARE – this document is so far out of date against modern OData V4, it's not funny.
https://blogs.msdn.microsoft.com/webdev/2014/03/13/getting-started-with-asp-net-web-api-2-2-for-odata-v4-0/
Python String Encoding

The Python developer community has published a great article that covers the details of unicode character processing, with versions for both Python 3 and Python 2.

The following notes are intended to help answer some common questions and issues that developers frequently encounter while learning to properly work with different character encodings in Python.

Does ChatterBot handle non-ascii characters?

ChatterBot is able to handle unicode values correctly. You can pass non-encoded data to it and it should be able to process it properly (you will need to make sure that you decode the output that is returned).

Below is one of ChatterBot's tests from tests/test_chatbot.py; this is just a simple check that a unicode response can be processed.

    def test_get_response_unicode(self):
        """
        Test the case that a unicode string is passed in.
        """
        response = self.chatbot.get_response(u'سلام')
        self.assertGreater(len(response.text), 0)

This test passes in Python 3. It also verifies that ChatterBot can take unicode input without issue.

How do I fix Python encoding errors?

When working with string type data in Python, it is possible to encounter errors such as the following.

    UnicodeDecodeError: 'utf8' codec can't decode byte 0x92 in position 48: invalid start byte

Depending on what your code looks like, there are a few things that you can do to prevent errors like this.

Unicode header

    # -*- coding: utf-8 -*-

Unicode escape characters

    >>> print u'\u0420\u043e\u0441\u0441\u0438\u044f'
    Россия

Import unicode literals from future

    from __future__ import unicode_literals

When to import unicode literals

Use this when you need to make sure that Python 3 code also works in Python 2. A good article on this can be found here:
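As an illustrative aside (not part of the ChatterBot docs): the exact UnicodeDecodeError shown above usually means the bytes are not UTF-8 at all. Byte 0x92 is the Windows-1252 "curly apostrophe", so decoding with the right codec (or falling back to it) resolves the error. The `data` value below is a hypothetical input constructed to reproduce the error.

```python
# Byte 0x92 is not a valid UTF-8 start byte, but in Windows-1252 it is
# the right single quotation mark (U+2019).
data = b"it\x92s"   # hypothetical bytes containing the offending byte

try:
    text = data.decode("utf-8")
except UnicodeDecodeError:
    # fall back to the encoding the bytes were likely written in
    text = data.decode("windows-1252")

print(text == "it\u2019s")  # -> True
```

In real code, prefer finding out the true source encoding over guessing a fallback.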
https://chatterbot.readthedocs.io/en/stable/encoding.html
MySQL PHP Query delete

MySQL PHP Query delete is used to delete records from a database table. The tutorial on 'MySQL PHP Query delete' builds its example around a table 'MyTable':

    mysql_select_db($database, $link)
        or die("Error in selecting the database:" . mysql_error());
    $sql = "delete from mytable where empid ...
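The tutorial's PHP snippet is truncated above; as a hedged reconstruction of the same connect/select/delete-by-id pattern, here is a minimal sketch using Python's sqlite3 module (chosen so it runs without a MySQL server). The table and column names `mytable` / `empid` follow the tutorial; the sample rows are my own.

```python
import sqlite3

# Stand-in for the tutorial's MySQL connection: an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (empid INTEGER, name TEXT)")
conn.executemany("INSERT INTO mytable VALUES (?, ?)",
                 [(1, "Alice"), (2, "Bob")])

# Parameterized DELETE -- safer than splicing values into the SQL string
# the way the old mysql_* PHP API encouraged.
conn.execute("DELETE FROM mytable WHERE empid = ?", (1,))

remaining = conn.execute("SELECT empid, name FROM mytable").fetchall()
print(remaining)  # -> [(2, 'Bob')]
```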
http://www.roseindia.net/tutorialhelp/comment/34380
Working with Inline Web Workers

In the past I wrote a post about what Web Workers are. In short, Web Workers enable web developers to run JavaScript code in the background, which might help to increase web page performance. This post is going to explain what inline Web Workers are and how to create them.

Inline Web Workers

When dealing with Web Workers, most of the time you will create a separate JavaScript file for the worker to execute. Inline Web Workers are Web Workers which are created in the same web page context, or on the fly. The reason for doing such a thing is obvious: sometimes we want to reduce the number of requests that the page performs, and sometimes we need to create some functionality on the fly. Executing external JavaScript files can't help us with that. There are two kinds of inline Web Workers:

- Page inline worker – The worker's JavaScript code is created inline inside the web page. In this case you will use a script tag with an id and a javascript/worker type. The type indicates to the browser not to parse the inline JavaScript, so it can be treated as a string. Here is an example of such a script tag:

      <script id="worker" type="javascript/worker">
        postMessage('worker sent message');
      </script>

  Later you will be able to retrieve the script by its id and use its textContent property to extract the worker body.

- On the fly worker – The worker's JavaScript code is provided by an external source as a string.

In both cases, in order to run the worker you will have to create a blob object and a blob URL.

Creating the Web Worker

The main way to create an inline Web Worker is using the BlobBuilder object, which was added by the HTML5 File API. The BlobBuilder enables us to create a blob object from a given string. It includes two main functions – the append function and the getBlob function. The append function adds data into the underlying blob, and getBlob returns the created blob object.

After you create a blob object from the inline worker implementation you will have to create a URL from it. The reason is that a Web Worker gets a URL as its parameter. To our rescue, HTML5 defines another two functions in the File API – createObjectURL and revokeObjectURL. Both functions exist on the window.URL object. Blob URLs are unique URLs which are created and stored by the browser up until the document is unloaded. The createObjectURL function gets a blob object and returns a blob URI which can be used. The revokeObjectURL function is used to release a created blob URL. If you are creating a lot of blob URLs you should use revokeObjectURL in order to release references to blob URLs which aren't in use.

Let's take a look at an example of creating an inline Web Worker:

    var bb = new BlobBuilder();
    bb.append(workerBody);
    var workerURL = window.URL.createObjectURL(bb.getBlob());
    var worker = new Worker(workerURL);

In the example a BlobBuilder is created and a workerBody is appended to it. The workerBody can be any piece of code that you want to run inside a Web Worker. After you create the in-memory blob you use the createObjectURL function to create the blob URL and pass it as a parameter to the Web Worker.

If you want to use the script tag from the first code example you can write the following code:

    var bb = new BlobBuilder();
    bb.append(document.querySelector('#worker').textContent);
    var workerURL = window.URL.createObjectURL(bb.getBlob());
    var worker = new Worker(workerURL);

The Full Example

I wanted to create an experimental code example to show how to encapsulate the previous inline Web Worker implementation inside a JavaScript object and use it, so here it goes:

    <!DOCTYPE html>
    <html lang="en">
    <head>
    <title>Inline WebWorker</title>
    <meta charset="utf-8" />
    <script>
    // create a namespace for the object
    var workerHelpers = workerHelpers || {};

    // set the blob builder and window.URL according to the browser prefix if needed
    var BlobBuilder = window.BlobBuilder || window.WebKitBlobBuilder || window.MozBlobBuilder;
    window.URL = window.URL || window.webkitURL;

    workerHelpers.InlineWorkerCreator = function () { };

    workerHelpers.InlineWorkerCreator.prototype = function () {
        var createInlineWorker = function (workerBody, onmessage) {
            if (BlobBuilder) {
                var bb = new BlobBuilder();
                bb.append(workerBody);
                var workerURL = window.URL.createObjectURL(bb.getBlob());
                var worker = new Worker(workerURL);
                worker.onmessage = onmessage;
                return workerURL;
            } else {
                console.log('BlobBuilder is not supported in the browser');
                return;
            }
        },
        releaseInlineWorker = function (workerURL) {
            window.URL.revokeObjectURL(workerURL);
        };

        return {
            createInlineWorker: createInlineWorker,
            releaseInlineWorker: releaseInlineWorker
        };
    } ();

    window.addEventListener('DOMContentLoaded', function () {
        var creator = new workerHelpers.InlineWorkerCreator();
        var workerURL = creator.createInlineWorker('postMessage(\'worker sent message\');', function (e) {
            console.log("Received: " + e.data);
        });
        console.log(workerURL);

        // release the URL after a second
        setTimeout(function () {
            creator.releaseInlineWorker(workerURL);
        }, 1000);
    }, false);
    </script>
    </head>
    <body>
    </body>
    </html>

Summary

In the post I explained the reason to create inline Web Workers. I also showed how to create an inline Web Worker and provided an implementation for a JavaScript object that can be used to do that. I'll appreciate any comments about the provided code.

Have you tried to access indexedDB from the inline worker? I posted a question on Stack Overflow; there is some problem in Chrome accessing indexedDB from blob code (the title is 'Accessing indexedDB from code inside a Blob').
CC-MAIN-2018-30
refinedweb
890
55.13
jGuru Forums Posted By: Alan_Parker Posted On: Monday, March 25, 2002 04:32 AM HI I have the following problem: I have a controller stateful session bean A that calls a BMP steteful entity bean B. I would like to raise an exception in B ejbCreate() method, allowing A to know that some parameter passed to A is incorrect. That exception should be passed to A and finally to the client. So I code my exception like this: public class MyException extends Exception implements java.io.Serializable {....} This works fine if I throw it from A to the client. But when I throw it in B I got a RemoteException in A instead of MyException. Why ? Whatt am I missing ? (I'm using JBoss) Many thanks Alan Re: MyException becomes RemoteException Posted By: Bozidar_Dangubic Posted On: Monday, March 25, 2002 05:48 AM
http://www.jguru.com/forums/view.jsp?EID=810811
CC-MAIN-2015-11
refinedweb
143
62.38
Created on 2019-10-28 01:50 by legnaleurc, last changed 2019-11-07 22:11 by asvetlov. In bpo-32972 we enabled debug mode in IsolatedAsyncioTestCase._setupAsyncioLoop, which may print some warnings that are not that important to tests. (e.g. Executing <Future ...> took 0.110 seconds) I personally don't really like it being turn on by default, but if it needs to be so, maybe include it in document would be a good idea. Add Andrew to nosy list because he was the author. This print comes from asyncio debug mode when async function is blocked by time longer than 0.1 sec (see loop.slow_callback_duration at loop.slow_callback_duration). Usually, it is a sign of a problem in user code, e.g. something should be pushed into executor. A test case is executed in debug mode. I think it is reasonable for the test run, isn't it? The mode can be disabled by `asyncio.get_running_loop().set_debug(False)` in `asyncSetUp()` method. > Usually, it is a sign of a problem in user code, e.g. something should be pushed into executor. Sometimes also happens on low-end CI machines. And the message is somewhat unclear to me. I have to grep cpython sources to understand that it is coming from debug mode and it means there is a slow awaitable ... in somewhere, because the displayed file:line is not always the right position. > A test case is executed in debug mode. I think it is reasonable for the test run, isn't it? Probably. > The mode can be disabled by `asyncio.get_running_loop().set_debug(False)` in `asyncSetUp()` method. Glad to know that. Thanks. Well, if the reported line is invalid it should be fixed Hi, is there anything that needs to be fixed or done here? At first, would be nice to figure out what "invalid line reported" does mean. What text is reported and what is expected? 
Executing <Handle <TaskWakeupMethWrapper object at 0x10b31ec10>(<Future finis...events.py:418>) created at /.../lib/python3.8/asyncio/queues.py:70> took 0.104 seconds Executing <Handle <TaskWakeupMethWrapper object at 0x10b31e9d0>(<Future finis...events.py:418>) created at /.../lib/python3.8/asyncio/queues.py:70> took 0.121 seconds I was expecting it can display the stack of the awaitable. Thanks for the clarification. I forgot about this thing; the output can be improved sure. As a workaround you can use the following hack:: import asyncio.task asyncio.task.Task = asyncio.task._PyTask IIRC the pure python version prints a coroutine name at least. I've assigned myself to never forget about the issue; if somebody wants to fix _CTask and TaskWakeupMethWrapper representation -- you are welcome I cannot import asyncio.task, so I did this instead: import asyncio.tasks asyncio.tasks.Task = asyncio.tasks._PyTask Then it changed to this: Executing <Task pending name='Task-1' coro=<IsolatedAsyncioTestCase._asyncioLoopRunner() running at /.../lib/python3.8/unittest/async_case.py:96> wait_for=<Future pending cb=[Task.__wakeup()] created at /.../lib/python3.8/asyncio/base_events.py:418> created at /.../lib/python3.8/unittest/async_case.py:118> took 0.187 seconds I suppose this means the entire test case is slow? Slower by percents, not in the factor of times. I guess for tests it is satisfactory.
https://bugs.python.org/issue38608
CC-MAIN-2019-47
refinedweb
540
62.24
Created on 2010-09-20 21:36 by ned.deily, last changed 2010-12-18 03:52 by r.david.murray. This issue is now closed. When building Python on OS X, there is now support for linking Python with the readline compatibility interface of the OS X supplied BSD editline library rather than using the GNU readline library. Because of deficiencies in the version in earlier OS X releases, this support is only enabled for builds with a deployment target of 10.5 or higher. With the python 2.7 release, for the first time a python.org installer for OS X is available that uses this capability: the 10.5 and higher 32-bit/64-bit version. The 10.3 and higher 32-bit-only installer uses GNU readline as do previous installers. There is a behavior regression in the editline-linked versions: when started in interactive mode, the TAB key does not insert, rather it inserts a "./" file spec in the command buffer and a second TAB causes a completion search of files in the current directory. With readline and typing <TAB> <CR>: $ unset PYTHONSTARTUP $ python2.7 Python 2.7 (r27:82508, Jul 3 2010, 20:17:05) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> ... $ With editline and <TAB> <CR>: $ unset PYTHONSTARTUP $ python2.7 Python 2.7 (r27:82508, Jul 3 2010, 21:12:11) [GCC 4.0.1 (Apple Inc. build 5493)] on darwin Type "help", "copyright", "credits" or "license" for more information. 
>>> ./ File "<stdin>", line 1 ./ ^ SyntaxError: invalid syntax >>> ^D Two workarounds for python2.7 until the problem is addressed in a future installer: (1) either install the 10.3 and up python 2.7 or (2) add or edit a python startup file for python2.7: $ cat > $HOME/.pystartup import readline if 'libedit' in readline.__doc__: readline.parse_and_bind("bind ^I ed-insert") ^D $ export PYTHONSTARTUP=$HOME/.pystartup Since whitespace is significant in Python, Modules/readline.c initialization attempts to override TAB behavior by forcing TAB to "insert" by default (unless overridden by later readline module calls). Somehow that is failing when going through editline's readline compatibility layer. [Thanks to Nik Krumm for reporting the problem on python-list/comp.lang.python] The problem is due to a difference in the behavior of the rl_initialize function between the editline readline emulation and the real GNU libreadline. Modules/readline.c setup_readline calls several rl functions to create various default bindings, including overriding TAB to "insert" rather than to "trigger completion", then calls rl_initialize allowing the users defaults from .inputrc to override. It seems the emulated rl_initialize causes all the modified bindings to be discarded, causing TAB to revert to its default "trigger file completion". The solution in the attached patches is to conditionally call rl_initialize at the beginning of setup_readline, rather than at the end, if the editline emulation is in use. Patches supplied for py3k and 27 (but not 31 since the feature was never backported there even though it was to 26). I did not supply any additional tests since I can't think of a straightforward way to simulate the condition in the test framework; suggestions welcome. 
Patch looks fine and should IMO be applied On second thought, the patch isn't quite as harmless as I first thought: the default key-bindings that are created after the call to rl_initialize will replace custom bindings in the users .editrc file. I've attached a new version of the py3k patch that works around this problem by calling rl_read_init_file(NULL) after setting the default bindings. This allows me to override the bindings for TAB in ~/.editrc as before. The modified patch looks OK to me and tests OK. The rl_read_init_file call seems like a reasonable thing for users who are used to using libedit's .editrc. As a practical matter, though, I think the only thing that would be affected is an .editrc TAB binding. Some of the initializations done in Modules/readline.c, like rl_bind_key_in_map (for sure) and rl_completer_word_break_characters are silently ignored by the libedit readline-compatibility layer; it does not implement features like the emacs_meta_keymap. I believe this fix should go into 3.2 (and 2.7) as it has been reported by a number of people in various places and the fix risk is low. Committed to py3k in 87356 and 2.7 in r87358.
http://bugs.python.org/issue9907
CC-MAIN-2015-14
refinedweb
737
67.96
Python 101: An Intro to loggingPosted by Mike on August 2nd, 2012 filed in Cross-Platform, Education, Python Python provides a very powerful logging library in its standard library. A lot of programmers use print statements for debugging (myself included), but you can also use logging to do this. It’s actually cleaner to use logging as you won’t have to go through all your code to remove the print statements. In this tutorial we’ll cover the following topics: - Creating a simple logger - How to log from multiple modules - Log formatting - Log configuration By the end of this tutorial, you should be able to confidently create your own logs for your applications. Let’s get started! Creating a Simple Logger Creating a log with the logging module is easy and straight-forward. It’s easiest to just look at a piece of code and then explain it, so here’s some code for you to read: import logging # add filemode="w" to overwrite logging.basicConfig(filename="sample.log", level=logging.INFO) logging.debug("This is a debug message") logging.info("Informational message") logging.error("An error has happened!") As you might expect, to access the logging module you have to first import it. The easiest way to create a log is to use the logging module’s basicConfig function and pass it some keyword arguments. It accepts the following: filename, filemode, format, datefmt, level and stream. In our example, we pass it a file name and the logging level, which we set to INFO. There are five levels of logging (in ascending order): DEBUG, INFO, WARNING, ERROR and CRITICAL. By default, if you run this code multiple times, it will append to the log if it already exists. If you would rather have your logger overwrite the log, then pass in a filemode=”w” as mentioned in the comment in the code. Speaking of running the code, this is what you should get if you ran it once: INFO:root:Informational message ERROR:root:An error has happened! Note that the debugging message isn’t in the output. 
That is because we set the level at INFO, so our logger will only log if it's an INFO, WARNING, ERROR or CRITICAL message. The root part just means that this logging message is coming from the root logger or the main logger. We'll look at how to change that so it's more descriptive in the next section. If you don't use basicConfig, then the logging module will output to the console / stdout. The logging module can also log some exceptions to file or wherever you have it configured to log to. Here's an example: import logging logging.basicConfig(filename="sample.log", level=logging.INFO) log = logging.getLogger("ex") try: raise RuntimeError except Exception as err: log.exception("Error!") This will log the entire traceback to file, which can be very handy when debugging. How to log From Multiple Modules (and Formatting too!) The more you code, the more often you end up creating a set of custom modules that work together. If you want them all to log to the same place, then you've come to the right place. We'll look at the simple way and then show a more complex method that's also more customizable. Here's one easy way to do it: import logging import otherMod #---------------------------------------------------------------------- def main(): """ The main entry point of the application """ logging.basicConfig(filename="mySnake.log", level=logging.INFO) logging.info("Program started") result = otherMod.add(7, 8) logging.info("Done!") if __name__ == "__main__": main() Here we import logging and a module of our own creation ("otherMod"). Then we create our log file as before. The other module looks like this: # otherMod.py import logging #---------------------------------------------------------------------- def add(x, y): """""" logging.info("added %s and %s to get %s" % (x, y, x+y)) return x+y If you run the main code, you should end up with a log that has the following contents: INFO:root:Program started INFO:root:added 7 and 8 to get 15 INFO:root:Done!
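An aside worth verifying for yourself: every bare logging.info(...) call is routed through the single root logger, which is exactly why each line above is tagged root. Here is a small self-contained sketch that makes this visible; it attaches a handler writing to an in-memory stream instead of a file so the output is easy to inspect:

```python
import io
import logging

# Handler that collects log output in memory instead of a file.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

root = logging.getLogger()          # no name gives you the root logger
root.setLevel(logging.INFO)
root.addHandler(handler)

logging.info("from the top level")  # the module-level call...
root.info("directly on root")       # ...and a direct call hit the same logger

print(stream.getvalue())
```

If you swap the StreamHandler for the FileHandler used earlier, the same two root-tagged lines end up in the file instead.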
Do you see the problem with doing it this way? You can’t really tell very easily where the log messages are coming from. This will only get more confusing the more modules there are that write to this log. So we need to fix that. That brings us to the complex way of creating a logger. Let’s take a look at a different implementation: import logging import otherMod2 #---------------------------------------------------------------------- def main(): """ The main entry point of the application """ logger = logging.getLogger("exampleApp") logger.setLevel(logging.INFO) # create the logging file handler fh = logging.FileHandler("new_snake.log") formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s') fh.setFormatter(formatter) # add handler to logger object logger.addHandler(fh) logger.info("Program started") result = otherMod2.add(7, 8) logger.info("Done!") if __name__ == "__main__": main() Here we create a logger instance named “exampleApp”. We set its logging level, create a logging file handler object and a logging Formatter object. The file handler has to set the formatter object as its formatter and then the file handler has to be added to the logger instance. The rest of the code in main is mostly the same. Just note that instead of “logging.info”, it’s “logger.info” or whatever you call your logger variable. Here’s the updated otherMod2 code: # otherMod2.py import logging module_logger = logging.getLogger("exampleApp.otherMod2") #---------------------------------------------------------------------- def add(x, y): """""" logger = logging.getLogger("exampleApp.otherMod2.add") logger.info("added %s and %s to get %s" % (x, y, x+y)) return x+y Note that here we have two loggers defined. We don’t do anything with the module_logger in this case, but we do use the other one. 
If you run the main script, you should see the following output in your file: 2012-08-02 15:37:40,592 - exampleApp - INFO - Program started 2012-08-02 15:37:40,592 - exampleApp.otherMod2.add - INFO - added 7 and 8 to get 15 2012-08-02 15:37:40,592 - exampleApp - INFO - Done! You will notice that the reference to root has been removed. Instead it uses our Formatter object, which says that we should get a human readable time, the logger name, the logging level and then the message. These are actually known as LogRecord attributes. For a full list of LogRecord attributes, see the documentation as there are too many to list here. Configuring Logs for Work and Pleasure The logging module can be configured in 3 different ways. You can configure it using methods (loggers, formatters, handlers) like we did earlier in this article; you can use a configuration file and pass it to fileConfig(); or you can create a dictionary of configuration information and pass it to the dictConfig() function. Let's create a configuration file first and then we'll look at how to execute it with Python. Here's an example config file: [loggers] keys=root,exampleApp [handlers] keys=fileHandler, consoleHandler [formatters] keys=myFormatter [logger_root] level=CRITICAL handlers=consoleHandler [logger_exampleApp] level=INFO handlers=fileHandler qualname=exampleApp [handler_consoleHandler] class=StreamHandler level=DEBUG formatter=myFormatter args=(sys.stdout,) [handler_fileHandler] class=FileHandler formatter=myFormatter args=("config.log",) [formatter_myFormatter] format=%(asctime)s - %(name)s - %(levelname)s - %(message)s datefmt= You'll notice that we have two loggers specified: root and exampleApp. For whatever reason, "root" is required. If you don't include it, Python will raise a ValueError from config.py's _install_loggers function, which is a part of the logging module.
If you set the root’s handler to fileHandler, then you’ll end up doubling the log output, so to keep that from happening, we send it to the console instead. Study this example closely. You’ll need a section for every key in the first three sections. Now let’s see how we load it in the code: # log_with_config.py import logging import logging.config import otherMod2 #---------------------------------------------------------------------- def main(): """ Based on """ logging.config.fileConfig('logging.conf') logger = logging.getLogger("exampleApp") logger.info("Program started") result = otherMod2.add(7, 8) logger.info("Done!") if __name__ == "__main__": main() As you can see, all you need to do is pass the config file path to logging.config.fileConfig. You’ll also notice that we don’t need all that setup code any more as that’s all in the config file. Also we can just import the otherMod2 module with no changes. Anyway, if you run the above, you should end up with the following in your log file: 2012-08-02 18:23:33,338 - exampleApp - INFO - Program started 2012-08-02 18:23:33,338 - exampleApp.otherMod2.add - INFO - added 7 and 8 to get 15 2012-08-02 18:23:33,338 - exampleApp - INFO - Done! As you might have guessed, it’s very similar to the other example. Now we’ll move on to the other config method. The dictionary configuration method (dictConfig) wasn’t added until Python 2.7, so make sure you have that or better or you won’t be able to follow along. It’s not well documented how this works. In fact, the examples in the documentation show YAML for some reason. 
Anyway, here's some working code for you to look over: # log_with_config.py import logging import logging.config import otherMod2 #---------------------------------------------------------------------- def main(): """ Based on """ dictLogConfig = { "version":1, "handlers":{ "fileHandler":{ "class":"logging.FileHandler", "formatter":"myFormatter", "filename":"config2.log" } }, "loggers":{ "exampleApp":{ "handlers":["fileHandler"], "level":"INFO", } }, "formatters":{ "myFormatter":{ "format":"%(asctime)s - %(name)s - %(levelname)s - %(message)s" } } } logging.config.dictConfig(dictLogConfig) logger = logging.getLogger("exampleApp") logger.info("Program started") result = otherMod2.add(7, 8) logger.info("Done!") if __name__ == "__main__": main() If you run this code, you'll end up with the same output as the previous method. Note that you don't need the "root" logger when you use a dictionary configuration. Wrapping Up At this point you should know how to get started using loggers and how to configure them in several different ways. You should also have gained the knowledge of how to modify the output using the Formatter object. If you want to get really fancy with the output, I recommend that you check out some of the links below. Additional reading - Logging module documentation - Logging HOWTO - Logging Cookbook - logging_tree package - Plumber Jack's Python Logging 101 - Stop Using "print" for Debugging: A 5 Minute Quickstart Guide to Python's logging Module - Hellman's PyMOTW logging page Source Code
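One last sketch to tie together the dotted logger names used throughout this article: child loggers hand their records up to ancestor loggers by default, so a single handler attached to "exampleApp" also sees everything logged through "exampleApp.otherMod2". The names here mirror the earlier examples, and the in-memory stream is just for easy inspection:

```python
import io
import logging

# One handler on the parent logger "exampleApp" also receives records
# from the child "exampleApp.otherMod2", because child loggers propagate
# to their ancestors by default.
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(name)s - %(levelname)s - %(message)s"))

parent = logging.getLogger("exampleApp")
parent.setLevel(logging.INFO)
parent.addHandler(handler)

child = logging.getLogger("exampleApp.otherMod2")
child.info("added 7 and 8 to get 15")   # no handler of its own needed

print(stream.getvalue())
```

The child never had a handler or level set; it inherits the effective level from its parent and the record bubbles up to the parent's handler.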
http://www.blog.pythonlibrary.org/2012/08/02/python-101-an-intro-to-logging/
CC-MAIN-2015-32
refinedweb
1,817
57.47
Published by Jimena Douthat, modified over 2 years ago 1 CHAPTER 4 QUEUE CSEB324 DATA STRUCTURES & ALGORITHM 2 What is a queue? Definition: A queue is a set of elements of the same type in which the elements are added at one end, called the back or rear, and deleted from the other end, called the front or first. The general rule to process elements in a queue is that the customer at the front of the queue is served next and that when a new customer arrives, he or she stands at the end of the queue. That is, a queue is a First In First Out, or simply FIFO, data structure. 3 Queues Implementation Physical Model We must keep track of both the front and the rear of the queue. One method is to keep the front of the queue in the first location of the array. Then, we can simply increase a counter to show the rear. Nevertheless, deleting an entry from this queue is very expensive, since after the first entry is served, all the remaining entries need to be moved back one position to fill in the vacancy. With a long queue this process can lead to poor performance. 4 Queues Implementation Physical Model 'a''c''d''g''v''e' 012345 Before: 'a' at index 0 is deleted 'c''d''g''v''e' 012345 After: front rear 5 Queues Implementation Linear Implementation Indicate the front and rear of the queue. We can keep track of the entries of the queue without moving any entries. Append an entry: increase the rear by one. To get an entry: increase the front by one. Problem: the queue will only grow and never shrink. This leads to the end of the storage capacity. 6 Queues Implementation Linear Implementation 'a''c''d''g''k' 012345 Add 'k' to the queue: - rear + 1 (increase) front rear Delete 'a' in queue: - front + 1 (increase) 'a''c''d''g''k' 012345 front rear 'c''d''g''k''z' 012345 Add 'z' to the queue: - rear + 1 (increase) Reach end of storage!!
rear 7 Queues Implementation Circular Arrays We can overcome the inefficient use of the space by using a circular array. In this way, as entries are added and removed from the queue, the head will continually chase the tail around the array, so you don't have to worry about running out of space unless the queue is fully occupied. 8 Queues Implementation Circular Arrays 9 Queues Implementation Circular Arrays: Boundary Condition We need a way to indicate whether a queue is empty or full. If there is exactly one entry in the queue, then the front index will equal the rear index. When this one entry is removed, the front will increase by 1, so that an empty queue is indicated when the rear is one position before the front. 10 Queues Implementation Circular Arrays: Boundary Condition Now, suppose that the queue is nearly full, with only one empty position left. Then the rear will be only one position behind the front, the same condition as an empty queue. 11 Queues Implementation Circular Arrays: Possible Solutions Leave one position empty in the array; a full queue is then when the rear is two positions behind the front. Introduce a new variable to indicate whether the queue is full or not. The variable could be either: a Boolean variable that is set when the rear comes just before the front, to indicate whether the queue is full or not, or an integer variable that counts the number of entries in the queue. Use a special value for the rear and/or front indices. For example, if the array entries are indexed from 0 to MAX-1, then an empty queue could be indicated by setting the rear index to -1.
12 Circular Queues in C -Array Concept- PART 1
13 Sample Program 1
#define MAXQUEUE 10   /* maximum items in the queue, to avoid overflow */
typedef int QueueIndex;
typedef char QueueEntry;
typedef struct queue {
    int count;          /* current number of items in the queue */
    QueueIndex front;   /* index of the front of the queue */
    QueueIndex rear;    /* index of the rear of the queue */
    QueueEntry entry[MAXQUEUE];   /* storage array of size MAXQUEUE */
} Queue;
14 Continue… Several operations that can be performed on a Queue: 1. Create a queue 2. Test for an empty queue 3. Test for a full queue 4. Append (add) an item into the queue 5. Serve (delete) an item from the queue 6. Return the number of entries in the queue
void CreateQueue(Queue *q) { q->count = 0; q->front = 0; q->rear = -1; }
bool QueueEmpty(Queue *q) { return (q->count <= 0); }
bool QueueFull(Queue *q) { return (q->count >= MAXQUEUE); }
void Append(QueueEntry x, Queue *q) {
    if (QueueFull(q))
        printf("full queue");
    else {
        q->count++;
        q->rear = (q->rear + 1) % MAXQUEUE;
        q->entry[q->rear] = x;
    }
}
void Serve(QueueEntry *x, Queue *q) {
    if (QueueEmpty(q))
        printf("empty queue");
    else {
        q->count--;
        *x = q->entry[q->front];
        q->front = (q->front + 1) % MAXQUEUE;
    }
}
int QueueSize(Queue *q) { return q->count; }
15
Sample Program 1
void main() {
    int a = 3, b = 8, c = 12;
    Queue q;
    CreateQueue(&q);    /* create a queue */
    Append(a, &q);      /* insert value of a in queue */
    Append(c, &q);      /* insert value of c in queue */
    Serve(&b, &q);      /* delete entry in queue; store it in b */
    printf("B is now : %d\n", b);   /* display value b */
}
output???
16 Circular Queues in C -Linked List Concept- PART 2
17 Queues Implementation Linked List Overview
18 Data Structure
typedef char QueueEntry;
typedef struct queuenode {
    QueueEntry info;
    struct queuenode *next;
} QueueNode;
typedef struct queue {
    QueueNode *front;
    QueueNode *rear;
} Queue;
Queue sampleQueue;
Queue *z = &sampleQueue;   /* z points at an actual Queue object */
19 Create Queue
void CreateQueue(Queue *q) { q->front = q->rear = NULL; }
Test Empty
bool QueueEmpty(Queue *q) { return (q->front == NULL); }
20 Create Node
QueueNode *CreateNode(QueueEntry x) {
    QueueNode *p;
    p = (QueueNode *) malloc(sizeof(QueueNode));
    if (!p)
        printf("Unable to allocate memory ");
    else {
        p->info = x;
        p->next = NULL;
    }
    return p;
}
21 Append
void Append(QueueEntry x, Queue *q) {
    QueueNode *np;
    np = CreateNode(x);
    if (np == NULL)
        printf("Can't append - queue full");
    else if (QueueEmpty(q))
        q->front = q->rear = np;
    else {
        q->rear->next = np;
        q->rear = np;
    }
}
22 Serve
void Serve(QueueEntry *x, Queue *q) {
    QueueNode *p;
    if (QueueEmpty(q))
        printf("Fail...Queue is empty");
    else {
        p = q->front;
        q->front = q->front->next;
        if (QueueEmpty(q))   /* i.e. q->front == NULL */
            q->rear = NULL;
        *x = p->info;
        free(p);
    }
}
23 Main ()
void main() {
    char alp;
    CreateQueue(z);
    Append('i', z);
    Append('f', z);
    Append('a', z);
    Serve(&alp, z);
    printf("Alp is %c \n", alp);
}
output???
24 Queue Application Queues are used to synchronize between two different processes that run at different speeds, for instance the CPU and a printer. A buffer or spooler that uses the queue techniques introduced here stores all the items the CPU sends for printing. This is because the CPU operates faster. By using a spooler, CPU time can be better utilized.
Queues can also be implemented as input/output buffers in an operating system that has to handle many devices with different speeds. 25 Circular Queues in C -Linked List Concept- EXERCISE 27 Question a) Give the data structure of the queue. b) Assume a queue Q of the type above has been created and initialized. Write the function that receives Q as its parameter and computes the total amount of all orders. 28 That's all for today TQ.. © 2017 SlidePlayer.com Inc.
http://slideplayer.com/slide/4151119/
CC-MAIN-2017-22
refinedweb
1,397
61.29
Symptoms When you stop or pause a managed Microsoft Windows service, and the process of stopping or pausing the service takes more time than the default configured time, you receive the following error message: Could not stop the Windows service name service on Local Computer. Error 1053: The service did not respond to the start or control request in a timely fashion. Note Windows service name is a placeholder for the name of the Windows service that you have created. Status Microsoft has confirmed that this is a problem in the Microsoft products that are listed in the "Applies to" section. This problem was first corrected in Microsoft .NET Framework 1.1 Service Pack 1. More Information Steps to reproduce the behavior
- Build the SampleWS solution. - Locate the Setup1.msi file. This file is located in the Setup1 project folder that was created in step 2. - Double-click the Setup1.msi file to install the SampleWS Windows service. - Click Start, click Run, type services.msc in the Open box, and then click OK. The Services Microsoft Management Console (MMC) snap-in opens. - In the right pane, locate the SampleWS service, and then start the service. - Stop or pause the SampleWS service. For additional information, click the following article number to view the article in the Microsoft Knowledge Base: References For more information about how to create a Windows service, visit the following Microsoft Developer Network (MSDN) Web site: Properties Article ID: 839174 - Last Review: Oct 9, 2011 - Revision: 1
https://support.microsoft.com/en-us/help/839174/fix-you-receive-an-error-1053-the-service-did-not-respond-to-the-start-or-control-request-in-a-timely-fashion-error-message-when-you-stop-or-pause-a-managed-windows-service?SmcNavTabIndex=0
CC-MAIN-2017-22
refinedweb
439
59.09
Graphics Programming with the Java 2D API - The Basic Java 2D Recipe - Set the Graphics2D Context... - ...and Render Something - Rendering on Components - Shape Primitives - Graphics Stroking - Fill Attributes and Painting - Transparency and Compositing - Text - Clipping - Coordinate Space Transformations - Techniques for Graphical User Input - Double Buffering - Comprehensive Example: Kspace Visualization - Summary The Java 2D API extends the Java Advanced Windowing Toolkit (AWT) to provide classes for professional 2D graphics, text, and imaging. The subject of this chapter is the use of Java 2D for graphics and text. Java 2D imaging is the subject of Chapter 4, "The Immediate Mode Imaging Model." Keep in mind that, for the most part, all discussion referring to shapes will apply equally to text because for all intents and purposes, text is represented as shapes. Operations such as texture mapping, stroking, and alpha composting can be applied equally to shapes and text. The key to using Java 2D for graphics is to understand a simple basic programming paradigm that we will refer to as the Basic Java 2D Recipe. The Basic Java 2D Recipe As stated previously, there is a basic three-step recipe for writing a graphics program in Java: Get a graphics context. Set the context. Render something. Getting the graphics context is pretty straightforward. Cast the Graphics object as a Graphics2D object as follows: public void paint(Graphics g) { Graphics2D g2d = (Graphics2D) g; } The result of making this cast is that the programmer has access to the increased functionality of the methods, classes, and interfaces of the Graphics2D object. These extended capabilities enable the advanced graphics operations described in the next several chapters. The Graphics2D object is covered in detail in the section "Set the Graphics2D Context...." Step 2 of the recipe, setting the graphics context, is also pretty straightforward once you understand what a graphics context is. 
For now, let's say that the graphics context is a collection of properties (also known as state attributes) that affect the appearance of the graphics output. The most common example of changing the graphics context is to set the color used for drawing. Most of this chapter deals with changing the myriad state attributes to achieve the desired effect. The final step in this paradigm is to render something. This refers to the action of outputting graphics to a device. The most obvious graphics output device is a monitor; however, printers, files, and other devices are equally valid output targets for graphics. Let's examine the recipe in the simplest possible example (see Listing 3.1). In this case, our goal is to draw a square on the screen, as shown in Figure 3.1. Keep in mind, however, that this same recipe can be applied in more complex applications. Listing 3.1 BasicRecipeJ2D.java // BasicRecipeJ2D.java //Part 1 of the recipe, general program setup. import java.applet.Applet; import java.awt.*; import java.awt.event.*; import java.awt.geom.*; public class BasicRecipeJ2D extends Frame { public BasicRecipeJ2D() { //constructor super("Java 2D basic recipe"); this.add(new myCustomCanvas()); this.setSize(500,500); this.show(); addWindowListener(new WindowEventHandler()); } class WindowEventHandler extends WindowAdapter { public void windowClosing(WindowEvent e) { System.exit(0); } } public static void main(String[] args) { new BasicRecipeJ2D(); } } //Part 2; Java 2D specific-extend the drawing Component -Canvas- // and override it's paint method. class myCustomCanvas extends Canvas { public void paint(Graphics g) { System.out.println("in paint"); // step one of the recipe; cast Graphics object as Graphics2D Graphics2D g2d = (Graphics2D) g; // step two-set the graphics context g2d.setColor(Color.red); //setting context //step three-render something g2d.fill(new Rectangle2D.Float(200.0f,200.0f,75.0f,75.0f)); } } Figure 3.1 Output from BasicRecipeJ2D. 
By modifying this recipe, it is possible to realize most of the projects you would want to do with Java 2D. Many of the examples that follow will simply modify the paint() method to add whatever functionality is needed. Because the basic recipe is central to our discussion of Java 2D, let's examine the pieces in more detail. Part 1 of Listing 3.1 is a basic skeleton for any Java program. The appropriate classes are imported; JFrame is extended and an eventListener is added for exiting the frame. Note that we imported java.awt.geom. This will be necessary to have access to shapes for drawing. The other important thing to notice in part 1 is the following line: this.add(new myCustomCanvas()); In this case, we add myCustomCanvas, a class extending Canvas to the main application frame. Note that Canvas extends Component and is the most common graphics component for display of graphics. It should be emphasized that any of the many objects extending Component (such as JButton and JPanel) can be used in the same fashion (see the section "Drawing on Components"). Part 2 of Listing 3.1 is the part of the program that most relates to Java 2D. The Component class Canvas is extended (subclassed), and its paint() method is overridden. This is the fundamental use of Canvas, and you will see this time and time again. Within the overridden paint() method, the three necessary parts of the Java 2D recipe are realizedwe get a graphics context by casting the Graphics object as Graphics2D. Steps 2 and 3 of the recipe are then achieved by calling two methods of the Graphics2D object. First, there is a change to the rendering attributes of the Graphics2D object by calling setColor(). Second, a Shape object (in this case, a Rectange2D) is created and drawn using the Graphics2D object's draw() method. You are encouraged to run the BasicRecipeJ2D now. 
Differences Between paint(), repaint(), and update() After taking a look at the basic recipe, you might have noticed that even though our Java 2D code is contained within the paint() method, we never actually call this method. This underscores an important point that often becomes a source of frustration to the uninitiated. The paint() method is called automatically whenever the window needs to be refreshed. The programmer never calls paint() directly, but instead calls repaint() in order to obtain a rendering. It is repaint() that calls paint(). The rendering is then made at the next convenient time. It becomes even more confusing when you consider that in actuality, paint() doesn't do all the drawing, another method called update() also participates. The drawing in update() includes an additional step in which the screen is first filled with the Component's foreground color, effectively clearing the screen. The update() method then finally calls the Component's paint() method to output the graphics. There are often cases in which the programmer doesn't want to clear the screen before drawing (see the section "Comprehensive Example: Kspace Visualization" at the end of this chapter). In this case, the programmer will need to override the update() method to eliminate the filling of the background. As an aside, we note that the statement "The programmer never calls paint() directly" is perhaps a little too strong. Many animation applets do indeed call paint() directly in order to avoid the automatic queing process that results from calling repaint(). These cases tend to be rare and are only recommended in special circumstances. All Rendering Should Occur in paint() A general rule to follow is that unless there is a compelling reason not to, all drawing for a Component should be done in that Component's paint() method. 
In our basic recipe example from Listing 3.1, the Component object that we want to draw on is an instance of the class myCustomCanvas (which extends Canvas). What might constitute a compelling reason not to place the drawing of objects in the paint method? For most complex applications, the paint() method can become unwieldy and should be broken down into smaller methods. Grouping the steps into methods is functionally equivalent to having the code directly in the paint() method, so this really isn't a major departure from the rule of doing all drawing in the paint() method. Another case in which you would render outside of paint() is when a BufferedImage is used. Still, the final rendering occurs in the paint() method. This is shown later in PDExamples.java and TexturePaint.java. Other Methods Similar to paint() Two additional methods are commonly encountered. The paintAll() method is often useful and is used in a similar fashion to the paint() method except that paintAll() will request a paint() of the Component and all of its subcomponents. For Swing components, paint() is often replaced by paintComponent() in order to not invoke the paintChildren() and paintBorder() methods. This is frequently necessary when developing an interface with a custom look and feel.
http://www.informit.com/articles/article.aspx?p=30085&amp;seqNum=12
CC-MAIN-2017-22
refinedweb
1,442
54.63
Random Clone the code or follow along in the online editor. So far we have only seen commands to make HTTP requests, but we can command other things as well, like generating random values! So we are going to make an app that rolls dice, producing a random number between 1 and 6. We need the elm/random package for this. The Random module in particular. Let’s start by just looking at all the code: import Browser import Html exposing (..) import Html.Events exposing (..) import Random -- MAIN main = Browser.element { init = init , update = update , subscriptions = subscriptions , view = view } -- MODEL type alias Model = { dieFace : Int } init : () -> (Model, Cmd Msg) init _ = ( Model 1 , Cmd.none ) -- UPDATE type Msg = Roll | NewFace Int update : Msg -> Model -> (Model, Cmd Msg) update msg model = case msg of Roll -> ( model , Random.generate NewFace (Random.int 1 6) ) NewFace newFace -> ( Model newFace , Cmd.none ) -- SUBSCRIPTIONS subscriptions : Model -> Sub Msg subscriptions model = Sub.none -- VIEW view : Model -> Html Msg view model = div [] [ h1 [] [ text (String.fromInt model.dieFace) ] , button [ onClick Roll ] [ text "Roll" ] ] The new thing here is command issued in the update function: Random.generate NewFace (Random.int 1 6) Generating random values works a bit different than in languages like JavaScript, Python, Java, etc. So let’s see how it works in Elm! Random Generators The core idea is that we have random Generator that describes how to generate a random value. For example: import Random probability : Random.Generator Float probability = Random.float 0 1 roll : Random.Generator Int roll = Random.int 1 6 usuallyTrue : Random.Generator Bool usuallyTrue = Random.weighted (80, True) [ (20, False) ] So here we have three random generators. The roll generator is saying it will produce an Int, and more specifically, it will produce an integer between 1 and 6 inclusive. 
Likewise, the usuallyTrue generator is saying it will produce a Bool, and more specifically, it will be true 80% of the time. The point is that we are not actually generating the values yet. We are just describing how to generate them. From there you use the Random.generate to turn it into a command: generate : (a -> msg) -> Generator a -> Cmd msg When the command is performed, the Generator produces some value, and then that gets turned into a message for your update function. So in our example, the Generator produces a value between 1 and 6, and then it gets turned into a message like NewFace 1 or NewFace 4. That is all we need to know to get our random dice rolls, but generators can do quite a bit more! Combining Generators Once we have some simple generators like probability and usuallyTrue, we can start snapping them together with functions like map3. Imagine we want to make a simple slot machine. We could create a generator like this: import Random type Symbol = Cherry | Seven | Bar | Grapes symbol : Random.Generator Symbol symbol = Random.uniform Cherry [ Seven, Bar, Grapes ] type alias Spin = { one : Symbol , two : Symbol , three : Symbol } spin : Random.Generator Spin spin = Random.map3 Spin symbol symbol symbol We first create Symbol to describe the pictures that can appear on the slot machine. We then create a random generator that generates each symbol with equal probability. From there we use map3 to combine them into a new spin generator. It says to generate three symbols and then put them together into a Spin. The point here is that from small building blocks, we can create a Generator that describes pretty complex behavior. And then from our application, we just have to say something like Random.generate NewSpin spin to get the next random value. Exercises: Here are a few ideas to make the example code on this page a bit more interesting! - Instead of showing a number, show the die face as an image. 
- Instead of showing an image of a die face, use elm/svg to draw it yourself.
- Create a weighted die with Random.weighted.
- Add a second die and have them both roll at the same time.
- Have the dice flip around randomly before they settle on a final value.
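For readers more at home in an imperative language, the generators above have a rough Python analogue. The key difference is that Python's random samples immediately, while Elm only builds a description until Random.generate runs it; the names below mirror the Elm code, but the sketch itself is mine, not part of the guide:

```python
import random

# The four slot-machine pictures, like the Symbol type in the Elm code.
SYMBOLS = ["Cherry", "Seven", "Bar", "Grapes"]

def symbol(rng):
    # uniform choice over the four symbols, like Random.uniform
    return rng.choice(SYMBOLS)

def spin(rng):
    # combine three symbol draws into one record, like Random.map3 Spin
    return {"one": symbol(rng), "two": symbol(rng), "three": symbol(rng)}

def usually_true(rng):
    # True 80% of the time, like Random.weighted (80, True) [ (20, False) ]
    return rng.random() < 0.8

rng = random.Random(0)  # seeded so repeated runs are reproducible
result = spin(rng)
```

Passing the rng in explicitly is the closest Python gets to Elm's separation of describing randomness from performing it.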
https://guide.elm-lang.org/effects/random.html
CC-MAIN-2019-30
refinedweb
683
58.69
My model is in ONNX format, generated by PyTorch, and when I try to convert it to bin and xml, the Model Optimizer shows the error "output array is read-only". I have seen that other people on the Internet with this problem traced it to numpy's version; however, that does not seem to work for me. I downgraded numpy from 1.16.2 to 1.15.0, and it still doesn't work. Any suggestion?

```text
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\main.py", line 325, in main
    return driver(argv)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\main.py", line 302, in driver
    mean_scale_values=mean_scale)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\pipeline\onnx.py", line 165, in driver
    fuse_linear_ops(graph)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 258, in fuse_linear_ops
    is_fused = _fuse_add(graph, node, fuse_nodes)
File "C:\Intel\computer_vision_sdk_2018.5.456\deployment_tools\model_optimizer\mo\middle\passes\fusing\fuse_linear_ops.py", line 212, in _fuse_add
    fuse_node.in_node(2).value += value
ValueError: output array is read-only
```

Dear Anthony,

This is indeed strange. I have messaged you so that you can send me your onnx model privately. Thanks for using OpenVino!

Shubha

Hi, Shubha R.:

Thank you for your help. I have already emailed you at "idz.admin@intel.com", or was that just a Forums notification? I haven't received any message in my Intel account. Did I miss something? Sorry for my late reply!

Dear Anthony,

I did not receive anything from you. I have once again sent you a PM message. Just kindly reply to it and attach your model as a zip file. Thanks for using OpenVino!

Shubha

Dearest Anthony,

Thank you for sending me your zipped up model over PM. I've got good news and bad news.
The bad news is that I reproduced your problem on OpenVino version computer_vision_sdk_2018.5.456 (commonly known as R5.1), so you really did see a bug! The good news is that it's been fixed in the latest OpenVino release, which dropped today (2019 R1). Thanks for using OpenVino!

Shubha

Hi Shubha R.,

Could you explain this bug in more detail? I am just curious, because I commented out lines 224 to 260 in fuse_linear_ops.py and it worked. Thank you!

Dear Anthony,

So I performed a "diff" between the 5.1 version of fuse_linear_ops.py and the latest 2019 R1 version. What I noticed is that there is a slight redesign of the _fuse_mul, _fuse_add and fuse_linear_ops methods.

Version 2019 R1 method signature:

```python
def _fuse_mul(graph: Graph, node: Node, fuse_nodes: list, backward: bool = True):
```

Version 5.1 method signature:

```python
def _fuse_mul(graph: nx.MultiDiGraph, node: Node, fuse_nodes: list, backward: bool = True):
```

The main difference is the first argument. So in all three methods 2019 R1 uses Graph rather than networkx.MultiDiGraph. Looking through this file there are other minor changes also. I encourage you to do a "diff" yourself and see what the changes in this file are; after all, OpenVino is open source! Thanks for using OpenVino!

Shubha
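Shubha's suggestion to "do a diff yourself" can be done with Python's standard difflib. The two one-line snippets below stand in for the real R5.1 and 2019 R1 files; on a real install you would read the two file versions from disk, and the paths shown are only labels:

```python
import difflib

# Hypothetical stand-ins for the two versions of fuse_linear_ops.py.
old = "def _fuse_mul(graph: nx.MultiDiGraph, node: Node, fuse_nodes: list, backward: bool = True):\n"
new = "def _fuse_mul(graph: Graph, node: Node, fuse_nodes: list, backward: bool = True):\n"

# unified_diff wants lists of lines; keepends preserves the newlines.
diff = list(difflib.unified_diff(
    old.splitlines(keepends=True),
    new.splitlines(keepends=True),
    fromfile="R5.1/fuse_linear_ops.py",
    tofile="2019R1/fuse_linear_ops.py",
))
print("".join(diff))
```

The output marks the removed R5.1 signature with a leading "-" and the new 2019 R1 signature with a leading "+", which is exactly the change Shubha describes.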
https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/output-array-is-read-only/td-p/1175728
Introduction: Light-Controlled Box

I came up with the idea of a smart device that would sense the natural light outside of a room and adjust the brightness of the lighting inside to maintain a constant brightness. Also, if it exceeded a certain brightness outside, it would lower a shade over the window. For this project, I made a model of this, using a small box to represent a room.

In order to make this device, you will need the following materials:

- Arduino Uno microcontroller
- breadboard
- jumper cables
- servo cables
- 2 LEDs
- photoresistor
- servo motor
- 2 470 Ω resistors
- 1 10 kΩ resistor
- small box
- paper
- small piece of cloth

Step 1: Connect the LEDs

Set up your two LEDs on the breadboard following the circuit diagram in the second photo. First, run jumper cables from the positive rail of your breadboard to the 5V pin and from the negative rail to ground. Connect the positive ends to 470 Ω resistors and into digital pins 10 and 11. Then connect the negative ends to the negative rail on the breadboard. When you are finished, your breadboard should look something like the first photo.

Step 2: Connect the Photoresistor

Now, add your photoresistor to the circuit. Connect one end to the positive rail of your breadboard, and the other end to analog pin 0 and a 10 kΩ resistor. Then connect the other end of the resistor into the negative rail, as shown in the second photo.

Step 3: Program the Arduino to Fade the LEDs

First, we will program the Arduino to take in an input from the photoresistor and dim or brighten the LEDs accordingly.
Enter the following code:

```c
int sensorPin = 0;    //connects the photoresistor to pin A0
int ledPin = 10;      //connects the LEDs to pins D10 and D11
int ledPin2 = 11;
int sensorValue = 0;
int fadeAmount = 5;

void setup() {
  pinMode(ledPin, OUTPUT);    //declares pins D10 and D11 as outputs
  pinMode(ledPin2, OUTPUT);
  //Serial.begin(9600);
}

void loop() {
  sensorValue = analogRead(sensorPin);    //reads the input of the photoresistor
  //Serial.println(sensorValue);
  fadeAmount = map(sensorValue, 0, 1000, 500, 0);    //maps the input of the photoresistor to a corresponding brightness
  analogWrite(ledPin, fadeAmount);    //brightens or dims the LEDs according to the photoresistor input
  analogWrite(ledPin2, fadeAmount);
}
```

You can map the input of your photoresistor by uncommenting the following two lines in your code:

```c
Serial.begin(9600);
Serial.println(sensorValue);
```

Then, open the serial monitor and expose your photoresistor to various amounts of light. Decide on the range that you would like to use, and enter the values in the following line:

```c
fadeAmount = map(sensorValue, [lower value], [upper value], 500, 0);
```

Step 4: Connect the Servo

Now that we have the LEDs working, add the servo to your circuit. Connect the white wire to digital pin 9, the red wire to the positive rail (5V), and the black wire to the negative rail (GND). Your completed circuit should look something like the third photo.
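The map() call in Step 3 rescales the 0-1000 photoresistor reading into a brightness value. Arduino's map() is plain integer linear interpolation, which can be sketched in Python; note that Python's floor division differs from C's truncation for negative intermediate results, which does not matter for the ranges used here:

```python
def arduino_map(x, in_min, in_max, out_min, out_max):
    # Integer linear rescale, the same formula Arduino's map() uses:
    # shift into the input range, scale to the output range, shift back.
    return (x - in_min) * (out_max - out_min) // (in_max - in_min) + out_min

# The call from the sketch: map(sensorValue, 0, 1000, 500, 0).
# A dark reading (0) gives full brightness (500); a bright reading (1000)
# gives 0, so the LEDs dim as the outside light increases.
dark = arduino_map(0, 0, 1000, 500, 0)
bright = arduino_map(1000, 0, 1000, 500, 0)
half = arduino_map(500, 0, 1000, 500, 0)
```

Swapping the output bounds (500 down to 0) is what inverts the relationship between light and brightness.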
Step 5: Program the Servo to Turn at a Certain Brightness

Add the following bolded lines to your code:

```c
#include <Servo.h>

Servo servo;
int pos = 0;
int sensorPin = 0;
int ledPin = 10;
int ledPin2 = 11;
int sensorValue = 0;
int fadeAmount = 5;

void setup() {
  servo.attach(9);    //connects servo to pin 9
  pinMode(ledPin, OUTPUT);
  pinMode(ledPin2, OUTPUT);
  servo.write(pos);
  //Serial.begin(9600);
}

void loop() {
  sensorValue = analogRead(sensorPin);
  //Serial.println(pos);
  fadeAmount = map(sensorValue, 0, 1000, 500, 0);    //maps input
  analogWrite(ledPin, fadeAmount);    //adjusts brightness of LEDs based on input
  analogWrite(ledPin2, fadeAmount);
  if ((sensorValue > 500) && (pos == 0)) {    //turns servo 180° in increments of 1° if input is greater than 500 and the position of the servo is at 0
    for (; pos < 180; pos += 1) {
      servo.write(pos);
      delay(10);
    }
  }
  if ((sensorValue < 400) && (pos == 180)) {    //turns servo back 180° in increments of 1° if input is less than 400 and the position of the servo is at 180
    for (; pos > 0; pos -= 1) {
      servo.write(pos);
      delay(10);
    }
  }
}
```

This will be used to lower a shade if the brightness sensed by the photoresistor is greater than 500 and raise the shade if it is less than 400.

Step 6: Set Up the LEDs and Photoresistor in the Box

Find a small box and cut a hole in one side to represent a window. Then poke two pairs of small holes in the top of the box. Put the LEDs through the holes and put the photoresistor in a bottom corner of the "window." Connect them back to the breadboard using servo cables.

Step 7: Set Up the Servo and Shade in the Box

For the last part of this project, we need to set up a window shade. First, cut a small piece of cloth in the shape of your window. Then roll a small piece of paper into a tube that will fit around your servo. This will function as a "curtain rod." Attach the servo to the inside of the box, next to the window. With the servo rotated 180° in the "lowered" position, tape the cloth to the rod.
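The two thresholds in Step 5 do not meet: the shade is lowered above 500 but only raised again below 400. That 400-500 dead band is a hysteresis band, and it keeps a reading that hovers around a single cutoff from making the servo chatter. A sketch of just that state logic in Python (function and variable names are mine, not from the sketch):

```python
def update_shade(sensor_value, lowered):
    """Return the new shade state for one light reading.

    Mirrors the Arduino sketch: lower the shade above 500, raise it
    below 400, and keep the current state in the 400-500 dead band.
    """
    if sensor_value > 500 and not lowered:
        return True    # bright outside: lower the shade
    if sensor_value < 400 and lowered:
        return False   # dark again: raise the shade
    return lowered     # in the dead band: no change

# Feed in a sequence of readings and record the shade state after each.
state = False
history = []
for reading in [450, 520, 480, 410, 390, 450]:
    state = update_shade(reading, state)
    history.append(state)
```

Note how 480 and 410 leave the shade down even though they are below 500; only dropping under 400 raises it again.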
http://www.instructables.com/id/Light-Controlled-Box/
Hi, I am very new to programming and I was just assigned to do this programming job for work, and I am having a lot of trouble with it. The code here has many different .cpp files all calling functions within each other. One of the functions, FileWriter, should ask the user for a filename to enter, open that file, and write to it. Since this function will be called many times throughout the application, the next time it writes to the file it should append to it (not overwrite), and in the end save and close the file. So far this is what I got, but it has errors (the one that comes up now is '........caltest\filewriter.cpp(72) : error C2440: 'initializing' : cannot convert from 'errno_t' to 'FILE *') and there is probably a neater way of doing it. Please post your advice and help on the error.

```cpp
#include "stdafx.h"
#include <stdio.h>
#include <process.h>
#include "stdlib.h"
#include <iostream>
#include <fstream>
#include "stdafx.h"
#include "FileWriter.h"
#include "CalDevice.h"
#include "LogicInterface.h"

using namespace std;

FILE* stream;
```

Here is the code for FileWriter:

```cpp
void WriteFile(const char* Log_BPCalTest)
{
    char s[] = "This file contains the test results.\n";
    char question[] = "Please enter a name for the Log file containing test results: ";
    char filename[80];

    cout << question;
    cin >> filename;

    FILE *stream = fopen_s( &stream, filename, "a+" );
    if( stream == NULL )
    {
        printf("File could not be opened, enter different name.\n");
    }
    else
    {
        fprintf_s( stream, "%s", s );
        fprintf(stream, "CalPrimaryTransducer() returned %s", CalPrimaryTransducer());
        fprintf(stream, "CalBattery() returned %s", CalBattery());
        fclose( stream );
    }
}
```

Thank you in advance!!

You need to look at the declaration of fopen_s and see what its return type is. Hint: it's not FILE*. A better idea: get rid of fopen_s() and use the standard fopen() function instead.

Danny Kalev

I replaced fopen with fopen_s since Visual Studio 2005 has fopen "deprecated".
Anyway, I was looking at some examples, and it seemed like I didn't really need the "FILE* stream =" part. So I replaced "FILE *stream = fopen_s( &stream, filename, "a+" );" with just "fopen(filename, "a+");" and I have a whole bunch of linking errors now:

```text
Compiling...
FileWriter.cpp
...caltest\filewriter.cpp(74) : warning C4996: 'fopen' was declared deprecated
c:\program files\microsoft visual studio 8\vc\include\stdio.h(234) : see declaration of 'fopen'
Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.'
Linking...
InflationConCheck.obj : error LNK2005: "struct _iobuf * stream" (?stream@@3PAU_iobuf@@A) already defined in FileWriter.obj
UI.obj : error LNK2005: "char * resultString" (?resultString@@3PADA) already defined in LogicInterface.obj
FileWriter.obj : error LNK2019: unresolved external symbol "char * __cdecl CheckSafetyTransducerCal(void)" (?CheckSafetyTransducerCal@@YAPADXZ) referenced in function "void __cdecl WriteFile(char const *)" (?WriteFile@@YAXPBD@Z)
LogicInterface.obj : error LNK2001: unresolved external symbol "char * __cdecl CheckSafetyTransducerCal(void)" (?CheckSafetyTransducerCal@@YAPADXZ)
FileWriter.obj : error LNK2019: unresolved external symbol "char * __cdecl CheckPrimaryTransducerCal(void)" (?CheckPrimaryTransducerCal@@YAPADXZ) referenced in function "void __cdecl WriteFile(char const *)" (?WriteFile@@YAXPBD@Z)
LogicInterface.obj : error LNK2001: unresolved external symbol "char * __cdecl CheckPrimaryTransducerCal(void)" (?CheckPrimaryTransducerCal@@YAPADXZ)
C:\Documents and Settings\GauthamJ\Desktop\BPCalTest\BPCalTest_prog\BPCalTest\Debug\BPCalTest.exe : fatal error LNK1120: 2 unresolved externals
Build log was saved at ":\Documents and Settings\GauthamJ\Desktop\BPCalTest\BPCalTest_prog\BPCalTest\BPCalTest\Debug\BuildLog.htm"
BPCalTest - 7 error(s), 1 warning(s)
```

Sorry, I read about the return type as you suggested and I think I realize what the mistake was, so I replaced the fopen section with this:

```cpp
if( (err = fopen_s( &stream, filename, "a+" )) != 0 )
    printf( "The file 'data2' was not opened\n" );
else
    printf( "The file 'data2' was opened\n" );
fprintf_s( stream, "%s", s );
```

Yet, I am getting all those linking errors I posted above. Please help!

You need to:
1) get rid of the stdafx.h header (which is #included twice, mind you), and disable precompiled headers
2) make sure that FileWriter.cpp, CalDevice.cpp and LogicInterface.cpp have been compiled successfully, that they contain definitions of their classes' functions, and that the linker knows where to find their respective .obj files.
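For comparison, the task in this thread (ask for a file name, append test results on every call, then close the file) collapses to a few lines in a higher-level language. A Python sketch of the same append-and-close behavior, with the file name hard-coded instead of prompted:

```python
import os
import tempfile

def write_log(path, lines):
    # "a" opens for appending and creates the file if it is missing,
    # the same behavior the C code wants from fopen(filename, "a+").
    with open(path, "a") as stream:
        for line in lines:
            stream.write(line + "\n")
    # the "with" block closes the stream, like fclose(stream)

path = os.path.join(tempfile.mkdtemp(), "log.txt")
write_log(path, ["This file contains the test results."])
write_log(path, ["CalPrimaryTransducer() returned OK"])  # second call appends

with open(path) as f:
    contents = f.read().splitlines()
```

Each call reopens the file in append mode, so earlier results are never overwritten; the "OK" string above is a made-up stand-in for whatever the real CalPrimaryTransducer() returns.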
http://forums.devx.com/showthread.php?169344-File-Open-and-write-to-and-error-error-C2440
A Short Course in Computer Graphics: How to Write a Simple OpenGL-style Renderer. Article 1 of 6.

We will be describing a renderer no longer than 500 lines. My students need 10 to 20 programming hours to begin making such renderers. At the input, we get a test file with a polygonal wire + pictures with textures. At the output, we'll get a rendered model. No graphical interface, the program simply generates an image.

Since the goal is to minimize external dependencies, I give my students just one class that allows working with TGA files. It's one of the simplest formats that supports images in RGB/RGBA/black and white formats. So, as a starting point, we'll obtain a simple way to work with pictures. You should note that the only functionality available at the very beginning (in addition to loading and saving images) is the capability to set the color of one pixel. There are no functions for drawing line segments and triangles. We'll have to do all of this by hand. I provide my source code that I write in parallel with students. But I would not recommend using it, as this doesn't make sense. The entire code is available on github, and here you will find the source code I give to my students.

```cpp
#include "tgaimage.h"

const TGAColor white = TGAColor(255, 255, 255, 255);
const TGAColor red   = TGAColor(255, 0,   0,   255);

int main(int argc, char** argv) {
    TGAImage image(100, 100, TGAImage::RGB);
    image.set(52, 41, red);
    image.flip_vertically(); // i want to have the origin at the left bottom corner of the image
    image.write_tga_file("output.tga");
    return 0;
}
```

output.tga should look something like this:

Bresenham's Line Algorithm

The goal of the first lecture is to render the wire mesh. To do this, we should learn how to draw line segments. We can simply read what Bresenham's line algorithm is, but let's write code ourselves. How does the simplest code that draws a line segment between (x0, y0) and (x1, y1) points look like?
Apparently, something like this:

```cpp
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    for (float t=0.; t<1.; t+=.01) {
        int x = x0*(1.-t) + x1*t;
        int y = y0*(1.-t) + y1*t;
        image.set(x, y, color);
    }
}
```

The snapshot of the code is available on github.

The problem with this code (in addition to efficiency) is the choice of the constant, which I took equal to .01. If we take it equal to .1, our line segment will look like this:

We can easily find the necessary step: it's just the number of pixels to be drawn. The simplest (with errors!) code looks something like the following:

```cpp
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    for (int x=x0; x<=x1; x++) {
        float t = (x-x0)/(float)(x1-x0);
        int y = y0*(1.-t) + y1*t;
        image.set(x, y, color);
    }
}
```

Caution! The first source of errors in such code of my students is the integer division, like (x-x0)/(x1-x0). Then, if we try to draw the following lines with this code:

```cpp
line(13, 20, 80, 40, image, white);
line(20, 13, 40, 80, image, red);
line(80, 40, 13, 20, image, red);
```

It turns out that one line is good, the second one is with holes, and there's no third line at all. Note that the first and the third lines (in the code) give the same line of different colors. We have already seen the white one, it is drawn well. I was hoping to change the color of the white line to red, but could not do it. It's a test for symmetry: the result of drawing a line segment should not depend on the order of points: the (a,b) line segment should be exactly the same as the (b,a) line segment.

There are holes in one of the line segments due to the fact that its height is greater than the width. My students often suggest the following fix: if (dx>dy) {for (int x)} else {for (int y)}. Holy cow!
```cpp
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    bool steep = false;
    if (std::abs(x0-x1)<std::abs(y0-y1)) { // if the line is steep, we transpose the image
        std::swap(x0, y0);
        std::swap(x1, y1);
        steep = true;
    }
    if (x0>x1) { // make it left-to-right
        std::swap(x0, x1);
        std::swap(y0, y1);
    }
    for (int x=x0; x<=x1; x++) {
        float t = (x-x0)/(float)(x1-x0);
        int y = y0*(1.-t) + y1*t;
        if (steep) {
            image.set(y, x, color); // if transposed, de-transpose
        } else {
            image.set(x, y, color);
        }
    }
}
```

This code works great. That's exactly the kind of complexity I want to see in the final version of our renderer. It is definitely inefficient (multiple divisions, and the like), but it is short and readable. Note that it has no asserts and no checks on going beyond the borders, which is bad. But I try not to overload this particular code, as it is read a lot. At the same time, I systematically remind of the necessity to perform checks.

So, the previous code works fine, but we can optimize it. Optimization is a dangerous thing. We should be clear about the platform the code will run on. Optimizing the code for a graphics card and optimizing it just for a CPU are completely different things. Before and during any optimization, the code should be profiled. Try to guess, which operation is the most resource-intensive operation here?

For tests, 1,000,000 times I draw 3 line segments we have drawn before. My CPU is Intel® Core™ i5-3450 CPU @ 3.10GHz. For each pixel, this code calls the TGAColor copy constructor. Which is 1000000 * 3 line segments * approximately 50 pixels per line segment. Quite a lot of calls, isn't it? Where to start with optimization? The profiler will tell us.
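As an aside, this version ports to a few lines of Python, which makes the symmetry test from above easy to automate. This is a throwaway sketch, not part of the course code; it collects the pixels instead of setting them on an image:

```python
def line(x0, y0, x1, y1):
    """Python port of the C++ line() above; returns the pixels it would set."""
    pixels = []
    steep = False
    if abs(x0 - x1) < abs(y0 - y1):   # if the line is steep, transpose
        x0, y0 = y0, x0
        x1, y1 = y1, x1
        steep = True
    if x0 > x1:                        # make it left-to-right
        x0, x1 = x1, x0
        y0, y1 = y1, y0
    for x in range(x0, x1 + 1):
        t = (x - x0) / (x1 - x0)
        y = int(y0 * (1 - t) + y1 * t)  # truncation, like the C++ int cast
        pixels.append((y, x) if steep else (x, y))
    return pixels
```

Drawing (13,20)-(80,40) and (80,40)-(13,20) now produces the same pixel set, and the steep (20,13)-(40,80) segment gets one pixel per row with no holes.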
I compiled the code with the g++ -ggdb -g3 -pg -O0 keys, and then ran gprof:

```text
%    cumulative  self              self     total
time    seconds  seconds    calls  ms/call  ms/call  name
69.16      2.95     2.95  3000000     0.00     0.00  line(int, int, int, int, TGAImage&, TGAColor)
19.46      3.78     0.83 204000000    0.00     0.00  TGAImage::set(int, int, TGAColor)
 8.91      4.16     0.38 207000000    0.00     0.00  TGAColor::TGAColor(TGAColor const&)
 1.64      4.23     0.07        2    35.04    35.04  TGAColor::TGAColor(unsigned char, unsigned char, unsigned char, unsigned char)
 0.94      4.27     0.04                             TGAImage::get(int, int)
```

10% of the time are spent on copying the color. But 70% are performed in calling line()! That's where we will optimize.

We should note that each division has the same divisor. Let's take it out of the loop. The error variable gives us the distance to the best straight line from our current (x, y) pixel. Each time error is greater than one pixel, we increase (or decrease) y by one, and decrease the error by one as well. The code is available here.

```cpp
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    bool steep = false;
    if (std::abs(x0-x1)<std::abs(y0-y1)) {
        std::swap(x0, y0);
        std::swap(x1, y1);
        steep = true;
    }
    if (x0>x1) {
        std::swap(x0, x1);
        std::swap(y0, y1);
    }
    int dx = x1-x0;
    int dy = y1-y0;
    float derror = std::abs(dy/float(dx));
    float error = 0;
    int y = y0;
    for (int x=x0; x<=x1; x++) {
        if (steep) {
            image.set(y, x, color);
        } else {
            image.set(x, y, color);
        }
        error += derror;
        if (error>.5) {
            y += (y1>y0?1:-1);
            error -= 1.;
        }
    }
}
```

```text
%    cumulative  self              self     total
time    seconds  seconds    calls  ms/call  ms/call  name
38.79      0.93     0.93  3000000     0.00     0.00  line(int, int, int, int, TGAImage&, TGAColor)
37.54      1.83     0.90 204000000    0.00     0.00  TGAImage::set(int, int, TGAColor)
19.60      2.30     0.47 204000000    0.00     0.00  TGAColor::TGAColor(int, int)
 2.09      2.35     0.05        2    25.03    25.03  TGAColor::TGAColor(unsigned char, unsigned char, unsigned char, unsigned char)
 1.25      2.38     0.03                             TGAImage::get(int, int)
```

Why do we need floating points? The only reason is one division by dx and the comparison with .5 in the loop body. We can get rid of the floating point by replacing the error variable with another one. Let's call it error2, and assume it is equal to error*dx*2.
Here's the equivalent code:

```cpp
void line(int x0, int y0, int x1, int y1, TGAImage &image, TGAColor color) {
    bool steep = false;
    if (std::abs(x0-x1)<std::abs(y0-y1)) {
        std::swap(x0, y0);
        std::swap(x1, y1);
        steep = true;
    }
    if (x0>x1) {
        std::swap(x0, x1);
        std::swap(y0, y1);
    }
    int dx = x1-x0;
    int dy = y1-y0;
    int derror2 = std::abs(dy)*2;
    int error2 = 0;
    int y = y0;
    for (int x=x0; x<=x1; x++) {
        if (steep) {
            image.set(y, x, color);
        } else {
            image.set(x, y, color);
        }
        error2 += derror2;
        if (error2 > dx) {
            y += (y1>y0?1:-1);
            error2 -= dx*2;
        }
    }
}
```

```text
%    cumulative  self              self     total
time    seconds  seconds    calls  ms/call  ms/call  name
42.77      0.91     0.91 204000000    0.00     0.00  TGAImage::set(int, int, TGAColor)
30.08      1.55     0.64  3000000     0.00     0.00  line(int, int, int, int, TGAImage&, TGAColor)
21.62      2.01     0.46 204000000    0.00     0.00  TGAColor::TGAColor(int, int)
 1.88      2.05     0.04        2    20.02    20.02  TGAColor::TGAColor(unsigned char, unsigned char, unsigned char, unsigned char)
```

Now, it's enough to remove unnecessary copies during the function call by passing the color by reference (or just enable the compilation flag -O3), and it's done. Not a single multiplication or division in code. The execution time has decreased from 2.95 to 0.64 seconds.

Wire Render

So now we are ready to create a wire render. You can find the snapshot of the code and the test model here. I used the wavefront obj format of the file to store the model. All we need for the render is to read from the file the array of vertices of the following type:

v 0.608654 -0.568839 -0.416318

[…] are x,y,z coordinates, one vertex per file line, and faces

f 1193/1240/1193 1180/1227/1180 1179/1226/1179

[…] We are interested in the first number after each space. It is the number of the vertex in the array that we have read before. Thus, this line says that the 1193, 1180 and 1179 vertices form a triangle.

The model.cpp file contains a simple parser. Write the following loop to our main.cpp and voila, our wire renderer is ready.
```cpp
for (int i=0; i<model->nfaces(); i++) {
    std::vector<int> face = model->face(i);
    for (int j=0; j<3; j++) {
        Vec3f v0 = model->vert(face[j]);
        Vec3f v1 = model->vert(face[(j+1)%3]);
        int x0 = (v0.x+1.)*width/2.;
        int y0 = (v0.y+1.)*height/2.;
        int x1 = (v1.x+1.)*width/2.;
        int y1 = (v1.y+1.)*height/2.;
        line(x0, y0, x1, y1, image, white);
    }
}
```

Next time we will draw 2D triangles and improve our renderer.
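The two line formats described above ('v' followed by three floats, 'f' followed by 1-based index groups separated by slashes) are all a minimal parser has to handle. model.cpp in the repository is the real reference; this Python sketch only illustrates the parsing idea:

```python
def parse_obj(text):
    """Parse only the 'v' and 'f' lines of a Wavefront .obj file."""
    verts, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            # three floats: x, y, z
            verts.append(tuple(float(c) for c in parts[1:4]))
        elif parts[0] == "f":
            # keep the first number of each a/b/c group,
            # converting the 1-based index to 0-based
            faces.append([int(p.split("/")[0]) - 1 for p in parts[1:4]])
    return verts, faces

sample = """v 0.608654 -0.568839 -0.416318
v 0.1 0.2 0.3
v 0.4 0.5 0.6
f 1/1/1 2/2/2 3/3/3
"""
verts, faces = parse_obj(sample)
```

The second and third numbers in each slash group (texture and normal indices) are ignored here, just as the article says only the first number matters for the wire render.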
https://kukuruku.co/post/a-short-course-in-computer-graphics-how-to-write-a-simple-opengl-article-1-of-6/
Basic Completion (Ctrl+Space)

If you want to change the default behavior, use the corresponding controls on the page of JetBrains Rider settings (Ctrl+Alt+S). The list of suggestions is similar to that of Automatic Completion.

When you select items in completion lists using the keyboard, the selection will jump to the first item after the last item and vice versa. You can disable this behavior by clearing Cyclic scrolling in list on the page of JetBrains Rider settings (Ctrl+Alt+S).

Exclude items from completion suggestions

You may want some types or namespaces not to be suggested, for example, if you have something similar to a system type in your solution, say MyFramework.MyCollections.List, but you are not actually using it. To exclude such symbols from the suggestions, add them to the Exclude from import and completion list on the page of JetBrains Rider settings (Ctrl+Alt+S). The format of the entries is Fully.Qualified.Name, Fully.Qualified.Name.Prefix*, or *Fully.Qualified.Name.Suffix. Generic types are specified as List`1.

Examples of basic completion

You can use the following examples to get an idea of how basic completion works with various code items:

Suggesting type-based variable names

Commonly used names for fields and variables are suggested depending on their type.
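The three entry forms (an exact Fully.Qualified.Name, a Prefix* form, and a *Suffix form) map onto simple string checks. A hypothetical sketch of how such an exclusion filter could be evaluated; this is not Rider's actual implementation:

```python
def excluded(name, patterns):
    """Return True if a fully qualified name matches any exclusion pattern.

    Supports the three documented forms: exact names, 'Prefix*' and
    '*Suffix'.
    """
    for pat in patterns:
        if pat.startswith("*") and name.endswith(pat[1:]):
            return True
        if pat.endswith("*") and name.startswith(pat[:-1]):
            return True
        if name == pat:
            return True
    return False

# Made-up exclusion list in the documented entry format.
patterns = ["MyFramework.MyCollections.List", "System.Web.*", "*.Internal"]
```

With this list, the solution's shadow List type and everything under System.Web is hidden from completion, while the system List`1 would still be offered.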
https://www.jetbrains.com/help/rider/2021.1/Coding_Assistance__Code_Completion__Symbol.html
Using a codec in a Grails unit test

February 24, 2010

This is a small issue, but I encountered it and found a solution on the mailing lists, so I thought I'd document it here. I was demonstrating a trivial Grails application in class today and decided to unit test it. The app has a single controller, called WelcomeController:

```groovy
class WelcomeController {
    def index = {
        def name = params.name ?: "Grails"
        render "Hello, $name!"
    }
}
```

When I deploy the application and access the Welcome controller (via), it displays "Hello, Grails!". If I append "?name=Dolly" to the URL, the result is "Hello, Dolly!". All nice and simple.

I decided I wanted to write a test case for this, and lately I've been learning how to favor unit tests over integration tests as much as possible, mostly for speed. I therefore wrote the following tests:

```groovy
import grails.test.*

class WelcomeControllerTests extends ControllerUnitTestCase {
    void testWelcomeWithoutParameter() {
        def wc = new WelcomeController()
        wc.index()
        assertEquals "Hello, Grails!", wc.response.contentAsString
    }

    void testWelcomeWithParameter() {
        def wc = new WelcomeController()
        wc.params.name = "Dolly"
        wc.index()
        assertEquals "Hello, Dolly!", wc.response.contentAsString
    }
}
```

When I run the unit tests (i.e., grails test-app unit:), everything runs correctly.

One of the students pointed out that though this is a trivial example, it's open to XSS (cross-site scripting) attacks. In the URL, replace "name=Dolly" with "name=<script>alert('dude, you've been hacked')</script>" and the embedded JavaScript code executes and pops up an alert box. I knew that an easy solution to this would be to modify the index action in the controller to look like:

```groovy
class WelcomeController {
    def index = {
        def name = params.name ?: "Grails"
        render "Hello, $name!".encodeAsHTML()
    }
}
```

The "encodeAsHTML" method escapes all the HTML, so the output of the hack is just the text "Hello, <script>alert(…" (i.e., the script is shown as a string, rather than executed) and the problem goes away.
The issue I encountered, though, is that my unit tests started failing, with a missing method exception that claimed that the String class doesn't have a method called encodeAsHTML. That's correct, of course, because that method is dynamically injected by Grails based on the org.codehaus.groovy.grails.plugins.codecs.HTMLCodec class. In a unit test, though, the injection doesn't happen, and I get the exception.

One solution to this, as pointed out on the very valuable grails-users email list, is to add the method to the String class via its metaclass. In other words, in my test, I can add

```groovy
void setUp() {
    super.setUp()
    String.metaClass.encodeAsHTML = {
        org.codehaus.groovy.grails.plugins.codecs.HTMLCodec.encode(delegate)
    }
}
```

Now the String class has the encodeAsHTML method, and everything works again.

Then I started browsing the Grails API, and found that in ControllerUnitTestCase there's a method called loadCodec. The GroovyDocs weren't very informative, but I found in the jira for Grails that issue GRAILS-3816 recommends the addition of the loadCodec method for just this sort of purpose. That means that I can actually write

```groovy
void setUp() {
    super.setUp()
    loadCodec(org.codehaus.groovy.grails.plugins.codecs.HTMLCodec)
}
```

and everything works as it should. Since this isn't terribly well documented, I thought I'd say something here. Hopefully this will save somebody some looking around.
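For comparison, the same defense in Python is html.escape, which does what encodeAsHTML amounts to: the injected markup comes back as inert text instead of executing.

```python
import html

def greet(name="Grails"):
    # escape user input before embedding it in markup,
    # analogous to encodeAsHTML() in the Grails controller
    return "Hello, {}!".format(html.escape(name))

safe = greet("<script>alert('dude')</script>")
```

After escaping, the angle brackets arrive as &lt; and &gt; entities, so a browser renders the script tag as literal text rather than running it.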
https://kousenit.org/2010/02/
On 06/20/2011 11:00 AM, Serge Hallyn wrote:
> Quoting Eric Paris (eparis@redhat.com):
>> Ahhhh, I feel so unhappy with capability code these days. Serge can
>> you come to the rescue? I'm really really starting to dislike the
>> fact that we have lots of code flows that goes
>> kernel->kernel/capablities->LSM->security/capabilities. Which is a
>> very strange calling convention. I'd like to stop adding any calls
>> to kernel/capability.c and everything from now on needs to be done
>> with an LSM function named security_*. I'd really like to see
>> kernel/capabilities stripped back to nothing but syscall handling
>> and move all of has_capability, has_ns_capability, ns_capable,
>> task_ns_capable, and all that crap moved to normal LSM calls.
>
> I can see why you'd feel that way, but I'd like to hold off on that
> until we get targeted capabilities and VFS user namespace support ironed
> out. I'm working on it right now (at
> ;a=summary)
>
> I certainly do not want the targeted stuff duplicated in every LSM.
> Maybe we can move that stuff into security/security.c though.
>
> Anyway I'm just coming back after leave, and only ever took a
> quick glance at this patch. I'll look again.

I'm certainly not asking for you to throw down everything you have to do
and rewrite all of this code! But I'd like to see a slow move towards
the elimination of kernel/capability.c and wondered if you agreed that
was a good idea. If so, this can be the first place we start to think
about how to move intelligently to use LSM functions rather than direct
capability calls. I like the idea that we use the has_* functions
instead of creating new ones in there.

Hopefully you can help guide us on the right path Serge!

-Eric
https://lkml.org/lkml/2011/6/20/261
Viewing as Array or DataFrame

Limitations

This command is available for:

- The variables that represent NumPy arrays.
- The variables that represent pandas dataframes.

Hence, NumPy and/or pandas must be downloaded and installed in your Python interpreter.

Viewing as array or DataFrame

To use the command View as Array/View as DataFrame, follow these steps:

- Launch the debugger session.
- In the Variables tab of the Debug tool window, select an array or a DataFrame.
- Click the link View as Array/View as DataFrame to the right. Alternatively, you can choose View as Array or View as DataFrame on the context menu. The Data View tool window appears.

One can also use the command View as Array from the Python Console. To view as array from the Python Console, follow these steps:

- Launch the Python Console.
- Execute some Python code, for example:

```python
import pandas as pd
import numpy as np

array = np.random.random((36, 36))
array1 = np.random.random((36, 10))
df = pd.DataFrame(array)
df2 = pd.DataFrame(array1)
print("Put breakpoint here")
df[0][0] = 1
print("The End")
```

- In the toolbar of the console, click the variables icon. The variables declared in the console appear to the right.
- Do one of the following:
  - Click the link View as Array/View as DataFrame.
  - On the context menu of a variable, choose View as Array/View as DataFrame.
If this command is cleared, and a new DataFrame/array is opened, then the new presentation will be colorless. - It's possible to change from the colored to colorless modes for the current tab by right-clicking a tab and selecting the check-command Colored: - Resize columns using the double-headed arrow : Data View is a tool window, and as such, it inherits all the behaviors that are common to all the tool windows. Refer to the section Working with Tool Windows to learn more. Last modified: 23 July 2018
https://www.jetbrains.com/help/pycharm/2018.1/viewing-as-array.html
CC-MAIN-2020-40
refinedweb
416
73.78
Short source code examples

I generally try to avoid this coding style these days, but if you want to see how to use a Java BufferedReader and its readLine method in a Scala while loop, here you go:

Note: The code shown below is a bit old. If you want to perform a “search and replace” operation on all instances of a given pattern, all you have to do these days is use the replaceAll method on a Java String, like this:

String s = "123 Main Street";
String result = s.replaceAll("[0-9]", "-");

That second line of code returns the string “--- Main Street”. I kept the information below here for background information.

As a quick note, if you ever want to create a dotted border that has some RGB opacity to it, I just used the following CSS code to style some hyperlinks, and I can confirm that it works:

Here’s a short Java/JDBC example program where I show how to perform SQL INSERT, UPDATE, and DELETE statements with JDBC:

As a quick note, if you’re looking at a Drupal form and it says you can use the "Rewrite the output of this field" replacement patterns shown (somewhere) on this page — and you can’t find those replacement patterns on that page — you can find a complete list of them at this drupal.org URL. As an example, if you’re working with a Drupal Node, you can use replacement patterns like these:

[node:author:name]
[node:content-type]
[node:content-type:name]

As a quick CSS note, if you want to achieve a “zebra striping” style with even and odd CSS row selectors, CSS styles like this will get the job done:

.path-frontpage .content-inner-right .content-type-Text:nth-child(even) {
    /* yellow */
    background-color: #fdfdf6;
}
.path-frontpage .content-inner-right .content-type-Text:nth-child(odd) {
    /* blue */
    background-color: #f3fbff;
}

I use that CSS for the front page of this website, but if you want a simpler example, here you go:

SQL FAQ: How can I select every row from a database table where a column value is not unique?
I’m working on a problem today where a Drupal article can have many revisions, and the way Drupal works is that a table named node_revisions has a nid field for “node id,” and a vid field for “revision id.” If you have five revisions of an article (i.e., a blog post), there will be five records in this table, and each record will have the same nid value and a unique vid. If an article has no revisions, this table will have one entry with a unique nid and unique vid.

As a quick note, if you need a Drupal 8 Twig template if/else/then structure where you test to see if a string value is in an array, code like this will work:

{% if node.getType not in ['photo', 'text'] %}
  <div class="similar">
    {{ similar_by_terms }}
  </div>
{% endif %}

That code can be read as, “If the node type is NOT ‘photo’ or ‘text,’ emit the HTML/Twig code shown.”

FWIW, it appears that you can drop all of the “migrate” database tables that end up in your Drupal 8 database after a migration, such as migrating from Drupal 6 to Drupal 8. I ended up with 178 of these migration tables in my Drupal 8 database, and deleted them like this:

As a quick note to self, here is some source code for a simple example PHP script that runs a SQL SELECT query against a MySQL database:

Note: This code is currently a work in progress. I know of possible approaches, but I don’t know of a perfect working solution yet. I’m currently trying to find the right way to find the current monitor size, when you’re writing a Java Swing application to work in a multiple-monitor configuration. I always use three monitors, so I can test this pretty easily.

If you need some source code for a Java FileFilter for image files, this code can get you started:

As a quick note, if you ever need to use a Java TimerTask, you can define one like this:

class BrightnessTimerTask extends TimerTask {
    @Override
    public void run() {
        // your custom code here ...
    }
}

and you can then instantiate it, create a Timer, and schedule the task like this:
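The “not unique” question in the SQL FAQ above is usually answered with a GROUP BY ... HAVING COUNT(*) > 1 query. A small self-contained sketch using SQLite (the table and column names mirror the Drupal node_revisions example, but the sample rows are made up here):

```python
import sqlite3

# In-memory table mimicking Drupal's node_revisions: one row per revision,
# with nid repeated for every revision of the same node.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node_revisions (nid INTEGER, vid INTEGER)")
conn.executemany(
    "INSERT INTO node_revisions VALUES (?, ?)",
    [(1, 1), (2, 2), (2, 3), (2, 4), (3, 5)],  # node 2 has three revisions
)

# Select every nid whose value is not unique, i.e. nodes with revisions.
rows = conn.execute(
    """
    SELECT nid, COUNT(*) AS revisions
    FROM node_revisions
    GROUP BY nid
    HAVING COUNT(*) > 1
    """
).fetchall()

print(rows)  # [(2, 3)]
```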
http://alvinalexander.com/source-code-snippets?page=1
CC-MAIN-2017-09
refinedweb
729
55.31
Spire.XLS supports deleting a specific shape as well as all shapes in an Excel worksheet. This article demonstrates how to use Spire.XLS to implement this function.

The example file we used for demonstration:

Detail steps:

Step 1: Initialize an object of the Workbook class and load the Excel file.

Workbook workbook = new Workbook();
workbook.LoadFromFile("Input.xlsx");

Step 2: Get the first worksheet.

Worksheet sheet = workbook.Worksheets[0];

Step 3: Delete the first shape in the worksheet.

sheet.PrstGeomShapes[0].Remove();

To delete all shapes from the worksheet:

for (int i = sheet.PrstGeomShapes.Count - 1; i >= 0; i--)
{
    sheet.PrstGeomShapes[i].Remove();
}

Step 4: Save the file.

workbook.SaveToFile("DeleteShape.xlsx", ExcelVersion.Version2013);

Screenshot:

Full code:

using Spire.Xls;

namespace DeleteShape
{
    class Program
    {
        static void Main(string[] args)
        {
            //Initialize an object of Workbook class
            Workbook workbook = new Workbook();
            //Load the Excel file
            workbook.LoadFromFile("Input.xlsx");
            //Get the first worksheet
            Worksheet sheet = workbook.Worksheets[0];
            //Delete the first shape in the worksheet
            sheet.PrstGeomShapes[0].Remove();
            //Delete all shapes in the worksheet
            //for (int i = sheet.PrstGeomShapes.Count - 1; i >= 0; i--)
            //{
            //    sheet.PrstGeomShapes[i].Remove();
            //}
            workbook.SaveToFile("DeleteShape.xlsx", ExcelVersion.Version2013);
        }
    }
}
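The delete-all loop above runs from the highest index down to 0. That reverse order matters whenever removing an item re-indexes the remaining ones; the same general pattern in plain Python (an illustrative sketch, not Spire.XLS-specific):

```python
# Deleting every item from an indexed collection while iterating forward
# skips elements, because each removal shifts later indices down.
shapes = ["circle", "square", "triangle", "arrow"]

# Iterating from the last index down to 0 stays safe, mirroring the
# C# loop `for (int i = Count - 1; i >= 0; i--)` in the article.
for i in range(len(shapes) - 1, -1, -1):
    del shapes[i]

print(shapes)  # []
```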
https://www.e-iceblue.com/Tutorials/Spire.XLS/Spire.XLS-Program-Guide/Objects/Delete-shapes-in-an-Excel-Worksheet-in-C.html
CC-MAIN-2021-43
refinedweb
194
53.78
B.1.1 Overloaded Functions as Pragma Arguments
B.2.2 #pragma does_not_read_global_data
B.2.3 #pragma does_not_return
B.2.4 #pragma does_not_write_global_data
B.2.6 #pragma end_dumpmacros
B.2.7 #pragma error_messages
B.2.13 #pragma must_have_frame
B.2.14 #pragma no_side_effect
B.2.17 #pragma rarely_called
B.2.18 #pragma returns_new_memory
B.2.19 #pragma unknown_control_flow
B.2.20.1 #pragma weak name

This section describes the pragma keywords that are recognized by the C++ compiler.

#pragma align integer(variable[,variable...])

Use align to make the listed variables memory-aligned to integer bytes, overriding the default. The following limitations apply:

- integer must be a power of 2 between 1 and 128. Valid values are 1, 2, 4, 8, 16, 32, 64, and 128.
- variable is a global or static variable. It cannot be a local variable or a class member variable.
- If the specified alignment is smaller than the default, the default is used.

The pragma line must appear before the declaration of the variables that it mentions. Otherwise, it is ignored. Any variable mentioned on the pragma line but not declared in the code following the pragma line is ignored. Variables in the following example are properly declared.

#pragma align 64 (aninteger, astring, astruct)
int aninteger;
static char astring[256];
struct S {int a; char *b;} astruct;

When #pragma align is used inside a namespace, mangled names must be used. For example, in the following code, the #pragma align statement will have no effect. To correct the problem, replace a, b, and c in the #pragma align statement with their mangled names.
namespace foo {
    #pragma align 8 (a, b, c)
    static char a;
    static char b;
    static char c;
}

#pragma does_not_read_global_data(funcname[, funcname])

This pragma asserts that the specified routines do not read global data directly or indirectly. If the assertion is false, the behavior of the program is undefined.

#pragma does_not_return(funcname[, funcname])

This pragma is an assertion to the compiler that calls to the specified routines will not return, enabling the compiler to perform optimizations consistent with that assumption. For example, register lifetimes terminate at the call sites, which in turn enables more optimizations. If the specified function does return, then the behavior of the program is undefined.

This pragma is permitted only after the prototypes for the specified functions are declared, as the following example shows:

extern void exit(int);
#pragma does_not_return(exit)

extern void __assert(int);
#pragma does_not_return(__assert)

For a more detailed explanation of how the pragma treats overloaded function names as arguments, see B.1.1 Overloaded Functions as Pragma Arguments.

#pragma error_messages (off, tag… tag)

The error_messages pragma provides control within the source program over the messages issued by the compiler. The pragma has an effect on warning messages only, and the -w command-line option overrides it by suppressing all warnings. The off option prevents the compiler from issuing the given messages, beginning with the token specified in the pragma. The scope of the pragma for any specified error message remains in effect until overridden by another #pragma error_messages, or the end of compilation.

#pragma error_messages (default, tag… tag)

The default option ends the scope of any preceding #pragma error_messages directive for the specified tags.

#pragma fini (identifier[,identifier...])

Use fini to mark identifier as a finalization function. Such functions are expected to be of type void, to accept no arguments, and to be called either when a program terminates under program control or when the containing shared object is removed from memory.
As with initialization functions, finalization functions are executed in the order processed by the link editor. In a source file, the functions specified in #pragma fini are executed after the static destructors in that file. You must declare the identifiers before using them in the pragma. Such functions are called once for every time they appear in a #pragma fini directive.

#pragma hdrstop

Embed the hdrstop pragma in your source-file headers to identify the end of the viable source prefix. For example, consider the following files:

example% cat a.cc
#include "a.h"
#include "b.h"
#include "c.h"
#include <stdio.h>
#include "d.h"
. . .

example% cat b.cc
#include "a.h"
#include "b.h"
#include "c.h"

See A.2.156 -xpch=v and A.2.157 -xpchstop=file.

#pragma ident string

Use ident to place string in the .comment section of the executable.

#pragma init(identifier[,identifier...])

Use init to mark identifier as an initialization function. Such functions are expected to be of type void, to accept no arguments, and to be called while constructing the memory image of the program at the start of execution. Initializers in a shared object are executed during the operation that brings the shared object into memory, either at program start up or during some dynamic loading operation, such as dlopen(). The only ordering of calls to initialization functions is the order in which they are processed by the link editors, both static and dynamic. Within a source file, the functions specified in #pragma init are executed after the static constructors in that file. You must declare the identifiers before using them in the pragma. Such functions are called once for every time they appear in a #pragma init directive.

See B.1.1 Overloaded Functions as Pragma Arguments.

#pragma no_side_effect(name[,name...])

Use no_side_effect to indicate that a function does not change any persistent state. The pragma declares that the named functions have no side effects of any kind.
That is, the functions return result values that depend on the passed arguments only. In addition, the functions and their called descendants do not access for reading or writing any part of the program state visible at the point of the call, and do not perform I/O. If the function does have side effects, the results of executing a program that calls this function are undefined.

For a more detailed explanation of how the pragma treats overloaded function names as arguments, see B.1.1 Overloaded Functions as Pragma Arguments.

#pragma pack([n])

Use pack to affect the packing of structure members. If present, n must be 0 or a power of 2. A value other than 0 instructs the compiler to use the smaller of n-byte alignment and the platform's natural alignment for the data type. For example, the following directive causes the members of all structures defined after the directive (and before subsequent pack directives) to be aligned no more strictly than on 2-byte boundaries, even if the normal alignment would be on 4-byte or 8-byte boundaries.

#pragma pack(2)

When n is 0 or omitted, the member alignment reverts to the natural alignment values. If the value of n is the same as or greater than the strictest alignment on the platform, the directive has the effect of natural alignment. The following table shows the strictest alignment for each platform.

Table B-1 Strictest Alignment by Platform

A pack directive applies to all structure definitions which follow it until the next pack directive. If the same structure is defined in different translation units with different packing, your program might fail in unpredictable ways. In particular, you should not use a pack directive prior to including a header defining the interface of a precompiled library. The recommended usage is to place the pack directive in your program code, immediately before the structure to be packed, and to place #pragma pack() immediately after the structure.

When using #pragma pack on a SPARC platform to pack denser than the type's default alignment, the -misalign option must be specified for both the compilation and the linking of the application.
The following table shows the storage sizes and default alignments of the integral data types.

Table B-2 Storage Sizes and Default Alignments in Bytes

#pragma rarely_called(funcname[, funcname])

This pragma provides a hint to the compiler that the specified functions are called infrequently, enabling the compiler to optimize the call sites of such routines accordingly. For example:

extern void error (char *message);
#pragma rarely_called(error)

For a more detailed explanation of how the pragma treats overloaded function names as arguments, see B.1.1 Overloaded Functions as Pragma Arguments.

#pragma returns_new_memory(name[,name...])

This pragma asserts that each named function returns the address of newly allocated memory and that the pointer does not alias with any other pointer. This information allows the optimizer to better track pointer values and to clarify memory location, resulting in improved scheduling and pipelining. If the assertion is false, the behavior of the program is undefined.

#pragma unknown_control_flow(name[,name...])

Use unknown_control_flow to specify a list of routines that violate the usual control flow properties of procedure calls. For example, the statement following a call to setjmp() can be reached from an arbitrary call to any other routine. The statement is reached by a call to longjmp(). Because such routines render standard flowgraph analysis invalid, routines that call them cannot be safely optimized; hence, they are compiled with the optimizer disabled. If the function name is overloaded, the most recently declared function is chosen.

#pragma weak name1 [= name2]

Use weak to define a weak global symbol. This pragma is used mainly in source files for building libraries. The linker does not warn you if it cannot resolve a weak symbol.

The weak pragma can specify symbols in one of two forms:

- String form. The string must be the mangled name for a C++ variable or function. The behavior for an invalid mangled name reference is unpredictable. The compiler might not produce an error for invalid mangled name references.
Regardless of whether it produces an error, the behavior of the compiler when invalid mangled names are used is unpredictable.

- Identifier form. The identifier must be an unambiguous identifier for a C++ function that was previously declared in the compilation unit. The identifier form cannot be used for variables. The front end (ccfe) will produce an error message if it encounters an invalid identifier reference.

In the form #pragma weak name, the directive makes name a weak symbol. The linker will not indicate if it does not find a symbol definition for name. It also does not warn about multiple weak definitions of the symbol. The linker simply takes the first one it encounters.

If another compilation unit has a strong definition for the function or variable, name will be linked to that. If there is no strong definition for name, the linker symbol will have a value of 0.

The following directive defines ping to be a weak symbol. No error messages are generated if the linker cannot find a definition for a symbol named ping.

#pragma weak ping

In the form #pragma weak name1 = name2, the symbol name1 becomes a weak reference to name2. If name1 is not defined elsewhere, name1 will have the value name2. If name1 is defined elsewhere, the linker uses that definition and ignores the weak reference to name2. The following directive instructs the linker to resolve any references to bar if it is defined anywhere in the program, and to foo otherwise.

#pragma weak bar = foo

In the identifier form, name2 must be declared and defined within the current compilation unit. For example:

extern void bar(int) {...}
extern void _bar(int);
#pragma weak _bar=bar

When you use the string form, the symbol does not need to be previously declared. If both _bar and bar in the following example are extern "C", the functions do not need to be declared. However, bar must be defined in the same object.
extern "C" void bar(int) {...}
#pragma weak "_bar" = "bar"

When you use the identifier form, exactly one function with the specified name must be in scope at the pragma location. Attempting to use the identifier form of #pragma weak with an overloaded function is an error. For example:

int bar(int);
float bar(float);
#pragma weak bar // error, ambiguous function name

To avoid the error, use the string form, as shown in the following example.

int bar(int);
float bar(float);
#pragma weak "__1cDbar6Fi_i_" // make int bar(int) weak

See the Oracle Solaris Linker and Libraries Guide for more information.
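The size/alignment trade-off that #pragma pack controls can also be observed from Python's struct module, where the '@' prefix uses the platform's native alignment and '=' packs members with no padding. A hedged sketch (exact native sizes vary by platform and compiler):

```python
import struct

# A struct holding a char followed by an int, analogous to:
#   struct S { char a; int b; };
# '@' = native byte order and native alignment (padding may be inserted
#       after the char so the int lands on its natural boundary).
# '=' = standard sizes with no padding, similar in spirit to #pragma pack(1).
native = struct.calcsize("@ci")
packed = struct.calcsize("=ci")

print(native, packed)
# On common platforms native is 8 (three padding bytes) while packed is 5,
# but only packed <= native is guaranteed in general.
assert packed <= native
```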
http://docs.oracle.com/cd/E24457_01/html/E21991/bkbjx.html
CC-MAIN-2016-30
refinedweb
1,975
57.37
Comments on Javarevisited: Java program to reverse a number - Example tutorial (feed by Javin Paul, last updated 2017-12-10)

MD sohail Ahmed: Write an object oriented java program to display the arithmetic operation reverse of a number using switch case. Plzzzzzz give me this program fast.

Jitu: How can I reverse a number without using any loop?

Naveen KTR: How to solve if the integer starts with zero? (ex. 0321, 042)

samuel kumar: How to reverse a number like 10, 100, 500, 1000, ... etc.?

Nirvana: Nice post, friend! But there's an ERROR in the above code, which makes it UNABLE to REVERSE a huge list of NUMBERS. ERROR: when I entered 120 its output came out as 21 only, so it's not giving the right output for numbers ending with ZEROS (0) like 100, 1500, etc. I corrected the code, and here it is:

Bhavesh Kukreja: How do we reverse a 3-digit number (not more than 3 digits, and not less than 2 digits) in Java?

Javin: @Anonymous, this should work on any number of digits. Are you facing any issue?

Anonymous: How to apply this logic for a 3-digit number?

Moksh:
package reversedigit;

import java.util.Scanner;

class ReverseDigit {
    public static void main(String[] args) {
        int c, a = 0, b = 0;
        System.out.println("Enter Digit to Reverse It");
        Scanner sc = new Scanner(System.in);
        c = sc.nextInt();
        System.out.println("Reverse Order of

Anonymous: Is there a way to do this without the use of loops, if statements and strings?

Madhan: Hi Rob, I understand the question as 'literal' reversal of the number, in which case we would be missing the zeros at the end. If we just consider the actual integer value after reversal, what you said is right.

Rob Foster: Define wrong output. Wouldn't 12,300 reversed be 00,321 or simply 321?

Madhan: Your blog is very good! But, for this program, we would get wrong output if the input integer ends with "0". So instead of saving the remainder into an int, we could use a string buffer and append it. Finally, showing the string buffer as the final output would be one option.

Rob Foster: Great question! Here's a solution that also handles negative numbers and overflows.

public class Reverse {
    public static long reverse(int number) {
        long reverse = 0;
        while (number != 0) {
            reverse = (reverse * 10) + number % 10;
            number /= 10;
        }
        return reverse;
    }
}

Javin: Yes Jan, it's converting String input to int, but that's the easiest way to get input from the user. You can also use JOptionPane to display an input dialog to enter numbers, but I stick to the simpler one.

Jan Ettles: Yeah, it's going to be tough to write something _without_ touching the API at all. In your example, just reading in the String to be converted to an int and then reversed requires the Scanner to read and nextInt() to translate the next token to an int.

Javin: @Jan, you are right on your part: you can reverse a number by treating it as a String in Java, but this programming exercise is meant to do it without any API method and with just arithmetic operators, in order to apply some kind of logic. No doubt it can be done by converting String to integer and using your solution.

Jan Ettles: As this code is only dealing with integers, would it not be simpler to leave it as a String and reverse that?

public String reverse(String s) {
    if (s.length() <= 1) {
        return s;
    }
    return reverse(s.substring(1, s.length())) + s.charAt(0);
}

Then parseInt if a numeric is actually needed.
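The thread's recurring questions (trailing zeros, negative input) come down to whether you reverse the integer's value or its textual form. A hedged Python sketch of the arithmetic approach discussed above, not taken from the original post:

```python
def reverse_number(number):
    """Reverse an integer's digits arithmetically, preserving the sign.

    Like the Java solutions in the thread, trailing zeros vanish
    (120 -> 21) because the result is an integer, not a string.
    """
    sign = -1 if number < 0 else 1
    number = abs(number)
    reverse = 0
    while number != 0:
        reverse = reverse * 10 + number % 10
        number //= 10
    return sign * reverse

print(reverse_number(1234))  # 4321
print(reverse_number(120))   # 21  (the "wrong output" case debated above)
print(reverse_number(-75))   # -57
```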
http://javarevisited.blogspot.com/feeds/86137105838178229/comments/default
CC-MAIN-2017-51
refinedweb
914
59.19
#include <chain.h>

Go to the source code of this file.

Definition at line 122 of file chain.cpp.

Return the time it would take to redo the work difference between from and to, assuming the current hashrate corresponds to the difficulty at tip, in seconds. Definition at line 137 of file chain.cpp.

Compute what height to jump back to with the CBlockIndex::pskip pointer. Definition at line 74 of file chain.cpp.

Turn the lowest '1' bit in the binary representation of a number into a '0'. Definition at line 71 of file chain.cpp.

Find the last common ancestor two blocks have. Find the forking point between two chain tips. Both pa and pb must be non-nullptr. Definition at line 156 of file chain.cpp.
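The helper documented above as "turn the lowest '1' bit into a '0'" is the classic n & (n - 1) bit trick. A quick Python illustration (a sketch of the technique, not Bitcoin Core code):

```python
def invert_lowest_one(n):
    # Clear the lowest set '1' bit: n & (n - 1).
    # Subtracting 1 flips the lowest set bit and every bit below it,
    # so ANDing the two removes exactly that bit.
    return n & (n - 1)

print(bin(0b1011000), "->", bin(invert_lowest_one(0b1011000)))
# 0b1011000 -> 0b1010000
```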
https://doxygen.bitcoincore.org/chain_8cpp.html
CC-MAIN-2021-39
refinedweb
144
77.74
On Saturday 07 June 2008 18:13:28, Thomas Viehmann wrote:
> Romain Beauxis wrote:
> > So, Sam wrote the code, owns the copyright, but the Savonet Team, which
> > Sam is part of, is the current upstream author. What's the issue there?
>
> Well, the code says
> (c) by Savonet Team
> Author: Samuel
> the debian/copyright says
> (c) Samuel
> Author: Savonet Team
>
> That is not the same and something to get right.
> I'm not in a place to give advice to the upstream you, but I'm not sure
> that some more standardized copyright and licensing statement would be
> undesirable there...

Well, I don't think it's the same namespace. Author in a source code file, especially when written like:

* @author Samuel Mimram

actually means that Samuel wrote the file, and hence is the copyright owner. But the overall global author, referred to as the upstream author, truly is "The Savonet Team".

Besides, I can't get the code you are referring to. I only have:

./src/Makefile.in:# Copyright (C) 2005-2006 Savonet team
./src/Makefile.in:# by Samuel Mimram
./src/faad.mli: * @author Samuel Mimram

Yes, the Makefile.in could be confusing, but that's the one we use for every project. Furthermore, not all autoconf-related files are usually listed in debian/copyright. And yes, there is no other license notice except the COPYING file, which is sufficient but not recommended.

Anyway, I thank you for the time you take reviewing my packages.

Romain
https://lists.debian.org/debian-ocaml-maint/2008/06/msg00060.html
CC-MAIN-2017-47
refinedweb
245
74.08
$ cnpm install @quasar/quasar-app-extension-qmarkdown

The QMarkdown app extension can do the following:
- Install the @quasar/quasar-ui-qmarkdown component using the Quasar CLI.

Remember, app extensions can only be used with the Quasar CLI.

quasar ext add @quasar/qmarkdown

Quasar CLI will retrieve it from NPM and install the extension. When installing the QMarkdown app extension, you will be prompted with two questions:

The default is true for the above question. It allows you to do this in your Quasar apps:

import markdown from '../examples/myMarkdownFile.md'

You can now use the QMarkdown component to process the markdown file to be displayed on your page.

The default is true for the above question. It allows you to do this in your Quasar apps:

import vmd from '../examples/myVuePlusMarkdownFile.vmd'

components: {
  myComponent: vmd
}

vmd files also allow you to provide front-matter as part of the processing. Be sure to read the documentation to understand how this works.

quasar ext remove @quasar/qmarkdown

You can use quasar describe QMarkdown for the QMarkdown component.

If you appreciate the work that went into this project, please consider donating to Quasar or Jeff.

MIT (c) Jeff Galbraith jeff@quasar.dev
https://developer.aliyun.com/mirror/npm/package/@quasar/quasar-app-extension-qmarkdown/v/1.0.0-beta.19
CC-MAIN-2020-40
refinedweb
196
64.71
This post describes how a C# developer can set up and run a Silverlight project in Visual Studio Orcas.

The best place to begin is by making sure you have Visual Studio Orcas installed. I prefer using the VPC versions of the Orcas betas, but you may have reason for making other decisions. If you do install the VPC version, however, then starting with Beta 1 is essential, as you will want to download and install the other pieces inside the Beta 1 VPC. Here are all the pieces you need:

You can create Silverlight projects directly in Visual Studio or you can create the project in Expression Blend. Visual Studio and Expression Blend are designed to work together, and you can easily move back and forth between them. In this post, I will show you how to start by creating the application in Visual Studio.

Choose File | New Project in Visual Studio. Select Silverlight under Project types, and select Silverlight project on the right under Templates. A new project will be created when you click the OK button.

Figure 1: Creating a Silverlight application in Visual Studio.

You should now be able to see your project in Visual Studio, as shown in Figure 2.

Figure 2: A default Silverlight application in Visual Studio. (Click to enlarge.)

The code you see in the editor is written in an XML-based description language called XAML that defines the interface of your application. XAML rhymes with Camel, and is pronounced like this: "zammel." If you wanted, you could now write XML to define your interface. However, that is a difficult and painful process. Instead, right click on the Page.xaml node in the Solution Explorer, and choose "Open In Expression Blend" from the popup menu. Your project will open in Expression Blend, as shown in Figure 3.

Figure 3: A default Silverlight application in Expression Blend. (Click to see a larger image.)
This part of Expression Blend plays the same role as the Solution Explorer does in Visual Studio. The two applications share the same format for their projects, and hence these projects will open in either Visual Studio or in Expression Blend. In other words, a Silverlight application created in Expression Blend will open in Visual Studio, and vice versa. (For now, however, it is simplest to first create your application in Visual Studio, as Expression Blend leaves out code that Visual Studio will generate automatically.)

The white area shown in Figure 3 is called the Canvas, and on this portion of the screen you can create your interface. To get started, you might want to work with the tools found on the far left of the Expression Blend screen. Notice in particular the shape tool, shown second from the top in Figure 4, and the text tool, which is shown at the bottom of Figure 4.

Figure 4: The Shape tool and the Text tool in Expression Blend can be used to create a simple interface for your application.

Right click on the shape tool and select Ellipse from the pop up menu. Drag the ellipse on to the Canvas and center and resize it. On the right of the Expression Blend interface choose Properties and use the tools found there to set the color of your shape. Drop a Text block in the middle of your shape. Enter some text, and then use the Properties area in Expression Blend to set your font size and color.

Notice that in the Objects and Timeline section on the left of Expression Blend you can select either the Canvas, the Ellipse, or the Text block. You can change the traits of the selected item in the Properties window on the right. When you are done editing, you might come up with something like the image shown in Figure 4, only hopefully a bit prettier.

Figure 4: An Ellipse and Text Tool on a white Canvas. On the left, the Canvas is selected; on the right, the properties for the canvas are visible. (Click to enlarge the image.)
You can run the project from inside Expression Blend by pressing F5. However, you probably would like to see the project inside Visual Studio. To do this, simply Alt-Tab back to Visual Studio. You will be informed that your project has been updated. Click OK on the dialog, and the code you created in Expression Blend will be seen in Visual Studio, as shown in Listing 1.

Listing 1: The code generated in Expression Blend is visible near the bottom of this listing.

<Ellipse Fill="#FF6071B9" Stroke="#FF000000" Width="374" Height="280" Canvas.
<TextBlock Width="303" Height="104.97" Canvas.
</Canvas>

You can now press F5 to run the application. A browser will be launched, and you will see the application inside the browser, as shown in Figure 5.

Figure 5: The Silverlight application displayed in a browser. (Click to enlarge.)

Let's write just a little code so that we have an event handler which will be called when the user clicks on the text in the application. In the Text block section add the following attribute to the XML:

x:Name="MyText"

When you are done, the code should look like this:

<TextBlock x:

Now click on the plus sign to the left of Page.xaml in the Solution Explorer. This will give you access to Page.xaml.cs. In this file you can write C# code. Modify the Page_Loaded method in Page.xaml.cs so that it looks like this:

 1: namespace SilverlightProject3
 2: {
 3:     public partial class Page : Canvas
 4:     {
 5:         public void Page_Loaded(object o, EventArgs e)
 6:         {
 7:             // Required to initialize variables
 8:             InitializeComponent();
 9:
10:             MyText.MouseLeftButtonDown +=
11:         }
12:     }
13: }

Immediately after you type in the += operator, press space once and then press Tab two times. Code should be automatically inserted into your source file. If it is not, delete the +=, type it in again, and then press Tab two times.
When you are done, your code should look something like this:

10:             MyText.MouseLeftButtonDown += new MouseEventHandler(MyText_MouseLeftButtonDown);
11:         }
12:
13:         void MyText_MouseLeftButtonDown(object sender, MouseEventArgs e)
14:         {
15:             throw new Exception("The method or operation is not implemented.");
16:         }
17:     }
18: }

Modify line 15, and replace it so that the event handler looks like this:

1: void MyText_MouseLeftButtonDown(object sender, MouseEventArgs e)
2: {
3:     MyText.Text = "What you think you become.";
4: }

Run the application. Click once on the text in the middle of your Ellipse. The words written there will change, as shown in Figure 6.

Figure 6: The event handler changes the text shown in the ellipse. Compare this image with the one shown in Figure 5.

In this post you have learned the basic facts you need to know to write a Silverlight application. You saw how to create the application in Visual Studio, and how to load it in Expression Blend. You then used the tools in Blend to create an interface for your application. Back in Visual Studio, you saw how to add a simple event handler to your application.

Silverlight and C# in Orcas Beta 1 (reposted)

One thing - as you may see in the pictures, if you F5 from VS the page runs off as a flat file from the OS, but if you run it in Blend, it runs as a page delivered by the dev server (as it should be). This means the projects cannot be debugged from VS, and the BrowserHttpWebRequest fails because of the cross-domain restrictions. Does someone know how to fix this inside VS? Thanks and good post!!
Link to the original English version. I describe how, as a C# programmer, to work in Visual

Well, this week was a nice rest, most of it spent relaxing with my wife. So it was a non-coding week, but you may find this video by Scott Guthrie also useful. HTH - Dipesh

I need an example of creating a listbox which is created dynamically using Orcas.

How do I add Silverlight to an ASP.NET page?

I notice that in your example you attached the MouseLeftDown event in Page_Loaded(). Is this strictly necessary, or could you have put it in the constructor? I am trying to find out when you absolutely need to use Page_Loaded. I asked this question in the MS forum and got the answer: "Mostly you will need it [Page_Loaded] when you want to access object properties like position or size of the control." Can you think of a case where putting the code in the constructor will cause a blow-up? Thanks for your help. BTW, we met at the SoCal User's about 6 months ago. Richard
http://blogs.msdn.com/charlie/archive/2007/05/21/silverlight-and-c-in-orcas-beta-1.aspx
- 15 Nov 2013 7:37 AM - Replies: 0 - Views: 4,825
com.extjs.gxt.ui.client.widget.Status renders it's html as text (using El.toSafeHTML) if setHtml executing on not yet rendered Status. For example: final Status status = new Status();...

- 28 Oct 2011 4:44 AM - Replies: 6 - Views: 5,199
What is TwinTriggerComboBox? Where it's code? TwinTriggerField does not have all ComboBox features like setStore etc.

- 20 Jun 2011 12:32 AM
chalu, thanks you again for right direction. I used this: loader.useLoadConfig(config); combo2.setUseQueryCache(false); combo2.doQuery(null, true);...

- 17 Jun 2011 9:23 AM
Thanks! I will try it. But not all my comboboxes works without loadConfig. I have 4 chained comboboxes where data in 2nd depends on what is selected in 1st. I listen to combo1.onDataChanged and...

- 17 Jun 2011 6:49 AM
My question is simple: How tell to ComboBox that data in it already loaded and there is no necessity to load it again when user click on TriggerAction button? sven, where is you? Help me...

- 17 Jun 2011 6:18 AM
Err... Not solved.. Now I have the exception: java.lang.ClassCastException: com.extjs.gxt.ui.client.data.BaseListLoadConfig cannot be cast to com.extjs.gxt.ui.client.data.PagingLoadConfig ...

- 17 Jun 2011 2:23 AM
Problem solved by calling loader.setReuseLoadConfig(true);

- 14 Jun 2011 2:38 AM
This question about ComboBox with store and loader with RpcProxy. When this type of ComboBox displayed and I click on trigger button, dropdown list expands and loads data to store (by RpcProxy)....

- 10 Jun 2011 7:52 AM
Thread: RPC ComboBox set default value by djxak - Replies: 0 - Views: 1,546
I use ComboBox with RPC proxy, reader and loader. I want to set initial value for this combobox to one from store. For that I need to call loader.load() before setting the value, or else store is...

- 10 May 2011 1:04 AM - Replies: 2 - Views: 2,100
Thanks you very much!
I am very new to java and didn't know about needs for equals() implementation for each class. I thinked java has some default comparer and can compare objects by fields each...

- 9 May 2011 10:50 PM - Replies: 2 - Views: 2,100
Detailed description of the problem: I have one problem when some grid cell binded to ComboBox. When I first time open ComboBox (to view full list of values) and then just close it py pressing...

- 6 May 2011 2:36 AM - Replies: 1 - Views: 2,034
If anybody have something similar to my problem, I wrote some code for this. Usage example: ModelData nullModel = new NullModel("N/A"); // proxy, loader, reader and store...

- 29 Apr 2011 5:59 AM - Replies: 1 - Views: 2,034
I have a Grid binded to Store. Grid store loads list of Entity1 (JPA backend) by RPC. @Entity public class Entity1 extends LightEntity implements Serializable, BeanModelTag { @Id...

- 5 Aug 2010 3:07 AM
Thanks.

- 5 Aug 2010 2:54 AM
Here is the page in the book where i seen the link to.

- 5 Aug 2010 1:29 AM
The same question! Where can I see examples from this book?

Results 1 to 16 of 16
https://www.sencha.com/forum/search.php?s=9089177b99f70708466a7b1969c29fd7&searchid=13305320
Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
James22s22 Dec 22, 2013 11:45 PM

1. We need getLineMetrics to return correct values immediately after changing a TextField's width/height or any property that would affect the layout metrics, without having to alter other properties like setting the text to itself (p1.text = p1.text). Currently, if you change the width of a text field to match the stage width, for example, getLineMetrics will not return correct values until the next frame... UNLESS you set the text property.

2. We also need some kind of "stage scaled" event in addition to the "stage resize" event (which only fires when the stage scale mode is no_scale), because stage scaling affects the rendered size of device fonts so dramatically that we must call getLineMetrics again. This is not the case for fonts antialiased for readability, since their size is relatively stable with scaling, as demonstrated by drawing a box around the first line once and then scaling the stage.

So those are the problems. The asterisk in the title of this post is there because it seems that TextField.getLineMetrics is accurate with device fonts, but I cannot take advantage of that accuracy without a way to detect when the player is scaled. I can only confirm its accuracy at a 1:1 scale, since there is no way to recalculate the size of the line rectangle once the player is scaled, aside from setting a timer of some sort, which is a real hack, not to mention horribly inefficient, with no way to detect when the stage has actually been scaled.

I use device fonts because embedded fonts look terrible and blurred compared to device font rendering. The "use device font" setting matches the appearance of text in web browsers exactly.
The only way to get embedded/advanced antialiased text in Flash to approximate the device font look is primarily to set gridFitType to PIXEL instead of SUBPIXEL, and secondly to set autokerning to true to fix problems caused by the PIXEL grid fit type. That ensures strokes are fitted solidly to the nearest pixel; however, it still lacks the "ClearType" rendering that device fonts use, which has a notable color offset to improve appearance on LCD monitors, rather than the purely grayscale text that Flash uses in its subpixel rendering.

Frankly, failure to use device fonts because of API issues is the only reason why Flash sometimes doesn't look as good as HTML text and why people say text in Flash "looks blurry". I'm tired of hearing it. If the player simply dispatched an event when scaled and updated the metrics immediately when any property of the text field that would affect the metrics is changed, then we could all happily use device fonts and Flash text would look great. As it stands, because of the two problems I mentioned in the opening paragraph, we're stuck dealing with these problems.

If you create two text fields named "p1" and "p2", populate them with an identical line of text and set one to "use device fonts" and the other to "antialias for readability", then use this code to draw boxes around the first line of text in each of them:

import flash.text.TextField;
import flash.text.TextLineMetrics;

graphics.clear();
drawBoxAroundLine( p1, 0 );
drawBoxAroundLine( p2, 0 );

function drawBoxAroundLine( tf:TextField, line_index:int ):void
{
    var gutter:Number = 2;
    var tlm:TextLineMetrics = tf.getLineMetrics( line_index );
    graphics.lineStyle( 0, 0x0000ff );
    graphics.drawRect( tf.x + gutter, tf.y + gutter, tlm.width, tlm.height );
}

The box surrounding the line of text in the "use device fonts" field is way off at first.
Scaling the player demonstrates that the text width of the device font field fluctuates wildly, while the "antialias for readability" field scales with the originally drawn rectangle perfectly. That much is fine, but again, to clarify the problems I mentioned at the top of this post:

Since the text width fluctuates wildly upon player resize, assuming that getLineMetrics actually works on device fonts (and that's an assumption at this point), you'd have to detect the player resize and redraw the text. Unfortunately, Flash does not fire the player resize event unless the stage scale mode is set to NO_SCALE. That's problem #1. And if that's by design, then they should definitely add a SCALE event, because changes in player scale dramatically affect device font layout, which requires recalculation of text metrics. It's a real issue for fluid layouts.

The second problem is that even when handling the resize event, and for example setting the text field widths to match the Stage.stageWidth property, when the text line wraps, it's not updated until the next frame. In other words, at the exact resize event that causes a word to wrap, calling getLineMetrics in this handler reports the previous line length from before the last word on the line wrapped. So it's delayed a frame. The only way to get the correct metrics immediately is basically to set the text property to itself, like "p1.text = p1.text". That seems to force an update of the metrics. Otherwise, it's delayed, and useless. I wrote about this in an answer over a year ago, showing how sensitive the text field property order is:

1. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
sinious Dec 24, 2013 7:42 AM (in response to James22s22)

As you've noted several times, setting the .text property triggers the metrics engine to update. Similar in the component world to manually invalidating, forcing a redraw, a very common thing.
While it's a workaround, is this actually causing your app any strain? A frame is typically a very short duration (even my mobile apps are 60fps). In the worst-case scenario, on a RESIZE you can simply wait a frame (1/60th of a second) to get the correct metrics on the TextField. Flash may actually trigger the metrics engine internally by re-assigning the text anyhow; who knows how it actually does it.

Is this an application? You may use other scale modes and simply detect resizing using the container. For instance, if it's a website, detect the browser window resizing and simply call a method in the SWF to alert it of the resizing. It can give you all the information on the current size of the SWF. If it's an AIR application or something without a container then you won't get that luxury (although check NativeWindowBoundsEvent). What is your target?

2. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
James22s22 Dec 27, 2013 12:38 PM (in response to sinious)

As I've noted several times, setting the text property to its own current value should not be necessary to update the metrics, and in some subclasses of TextField, setting a property to its own value is ignored, as the property is not actually changing and processing such a change would cause unnecessary work which could impact application performance. Metrics should be current upon calling getLineMetrics. They are not. That's the problem.

From a programming perspective, having to set the text property (really "htmlText", to preserve formatting) to itself to update metrics is almost unmanageable, and doesn't even make sense considering "htmlText" is just one of a dozen properties and methods on a TextField that could invalidate the layout metrics (alignment, setTextFormat, width, height, antiAliasMode, type, etc.), and I would have to override every one of those properties so that I could set htmlText = htmlText.
Using such a subclass isn't even possible if I want to use the Flash IDE to add text fields to the stage. I would have to iterate over the display list and replace all existing fields with my subclass, which also isn't a good workaround because there's no way to update any and all variable references that may have been made to those instances.

From what I've read, the invalidate+render event system is unreliable. My layout framework is similar to that of Windows Forms: it performs layout immediately, with dozens of docking modes, and uses suspend and resume layout calls for efficiently resizing multiple child objects in a component. Certain calculations cannot be aggregated for a render event, because some containers are semi-reflexive, meaning they can expand to fit the child contents while also constraining the child size, depending on whether the container was resized or the child component was resized. So, as a matter of correctness, the resizing calculation must occur immediately when the child resizes; otherwise a top-down pass on the display hierarchy for resizing will not be sufficient.

As for waiting until the next frame, no, that is not possible, as it will cause one frame to be completely wrong. If I were dragging the browser window to resize it, it would look terrible, as virtually every single frame during the resizing operation would be incorrect. Also, in the case where a user clicks the maximize or restore button of the web browser, the resizing event will occur exactly once, so if the metrics are not correct when that occurs, there is no recalculation occurring on the next frame; it will just be wrong and sit there looking wrong indefinitely.

In case it's not obvious by now, this is a web application. It uses the NO_SCALE stage scaling option, so notification of the event is not actually an issue for me personally. I was just pointing out that for anyone not using the NO_SCALE option, there is no event in Flash to detect player scale.
What you're suggesting is using a JavaScript event and the ExternalInterface bridge to send a message, with no guarantee that it will be processed in a timely manner by the player, and it introduces possible platform inconsistencies, depending on whether the browser has actually resized the Flash interface at that point or what state Flash is in when it tries to recalculate the size of the text. The browser may send that event to Flash before the player is actually resized, so it will be processing incorrect sizes and then be resized after the fact. That's not a good solution.

Flash needs a scale event in addition to a resize event. I'm really surprised it doesn't have one. At the very least, the existing resize event should be dispatched regardless of the stage scale mode, rather than occurring exclusively in the NO_SCALE mode. The bottom line is that getLineMetrics needs to return correct values every time it is called, without having to set the "text" property immediately before calling it. If such a requirement exists, which seems to be the case, then it needs to be documented in the getLineMetrics method.

3. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
sinious Dec 30, 2013 10:19 AM (in response to James22s22)

I'm not disagreeing with you. Every visual method, property and event should update the metrics. As you've identified, it doesn't. On the "next frame" scenario, being a single frame behind and the user's perception of the application accurately resizing the component is deeply in the land of personal preference. My typical user is not analyzing medical information that requires up-to-the-nanosecond rendering accuracy, such as a scaling component that displays an MRI. Flash is not Win32. The TextField is clearly a small bit of code to graphically represent a traditional Input using the limits of its own engine.
If that representation is rendered one frame behind (1/30th or, more typically today, 1/60th of a second), that is completely acceptable. If you really want something more like Windows components, you might want to consider Flex. I find going between Visual C# with the VS IDE and components comparable at an IDE level to using the Flex IDE (if you use 4.6 and below); if you switch to Apache (4.11) you'll just have to do it in code. In either case it's much more suited to complex, container-based application programming. For example, here's the interface for a TextFlow on an Input component for Apache 4.11:

Short of rewriting for Flex, you should document the bug you found. I'll be happy to vote it up:

4. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
James22s22 Jan 2, 2014 3:38 PM (in response to sinious)

You're mischaracterizing the problem. This isn't a frame cycle issue, and it has nothing to do with differences between Flash and Win32. And please don't suggest Flex; it's terrible. That's why I built my own framework. As for components using an invalidation/redraw cycle... those don't spill over into the next frame... the invalidations occur in one part of the cycle (e.g. during the scripting phase), and are resolved before the next invalidation cycle. Also, Flash is written in Win32 on Windows, and like other game engines, there's nothing stopping any Win32 application from utilizing such a frame cycle for rendering.

This has nothing to do with needing up-to-the-nanosecond perfection, and it's not something that can be fixed by allowing for a 1/30 second delay... that's a serious mischaracterization of the problem. The resize event occurs precisely once... once when the user resizes the window, once every time the player dispatches a resize event, once every time the size of a component's parent container changes for whatever reason. The metrics are either correct or they are not.
If the values are not correct when getLineMetrics is called, then the screen will sit there looking incorrect, not for a 30th of a second, but indefinitely... until the next resize event occurs, which will again be incorrect! Also, what you're suggesting is that when the width of a TextField changes, I use some relatively elaborate code to delay the getLineMetrics-dependent calculations until the next frame. First, you're assuming that a frame cycle will cause getLineMetrics to be updated. That's not necessarily true; and secondly, why would I even resort to using expensive timers when setting "htmlText = htmlText" will force an update immediately?

The problem, again, is that because getLineMetrics doesn't resolve its own internals as it should, and because the only way to force that resolution is to set the text to itself, I have to remember to do it myself, every single time the width of any text field changes. And as I said, even if I subclass TextField to correct such an error (and I already have, along with numerous others, via my TextFieldEx class), I cannot use the Flash IDE to lay out text fields, because it only works with TextField instances, and not subclasses thereof.

5. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
.:}x-=V!P=-x{:. Jan 2, 2014 4:34 PM (in response to James22s22)

Hey, not sure if this will help, but will updateAfterEvent work? event.updateAfterEvent();

6. Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
James22s22 Jan 3, 2014 7:44 AM (in response to .:}x-=V!P=-x{:.)

Nah. That triggers a render after a timer event; it's unrelated. Besides, I could just set htmlText to itself, as I mentioned. The problem is having to perform such an extra step before calling getLineMetrics, and that the extra step to ensure the metrics are updated isn't documented or obvious.

7.
Re: Why is getLineMetrics inaccurate when using device fonts* or immediately after resizing a TextField?
.:}x-=V!P=-x{:. Jan 3, 2014 9:11 AM (in response to James22s22)

Cool, ya, one extra line of code don't hurt, lol. Can't beat 'em, join 'em.
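For reference, the workaround discussed in this thread can be wrapped in a small helper (an AS3 sketch; the function name is my own, and re-assigning htmlText is the thread's workaround, not an official API):

```actionscript
import flash.text.TextField;
import flash.text.TextLineMetrics;

// Force the TextField to recompute its layout before reading metrics.
// Re-assigning htmlText nudges the metrics engine, so getLineMetrics
// is not one change behind after a resize.
function getFreshLineMetrics( tf:TextField, line_index:int ):TextLineMetrics
{
    tf.htmlText = tf.htmlText;   // workaround: force a metrics update
    return tf.getLineMetrics( line_index );
}
```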
https://forums.adobe.com/thread/1363939
Following is the sample Android login application which I have developed and would like to share with everyone. This application is at a very basic stage, irrespective of the database used. It works as follows: if the user name and password entered are the same, it gives a Toast pop-up saying "Login Successful"; else it gives a Toast pop-up saying "Invalid Login". I feel it will help beginners learning Android.

This application is developed on the following platforms: Ubuntu 10.10, Eclipse 3.5 Galileo, Android 2.1 SDK.

Following is a screenshot of the application:

I have used a TableLayout. Following is the layout code for the same:

<TableLayout xmlns:android="" android:layout_width="fill_parent" android:layout_height="fill_parent" android:
    <TableRow>
        <TextView android:text="User Name: " android:id="@+id/TextView01" android:layout_width="wrap_content" android:
        </TextView>
        <EditText android:text="" android:id="@+id/txtUname" android:layout_width="fill_parent" android:
        </EditText>
    </TableRow>
    <TableRow>
        <TextView android:text="Password: " android:id="@+id/TextView02" android:layout_width="wrap_content" android:
        </TextView>
        <EditText android:text="" android:id="@+id/txtPwd" android:layout_width="fill_parent" android:layout_height="wrap_content" android:
        </EditText>
    </TableRow>
    <TableRow>
        <Button android:text="Cancel" android:id="@+id/btnCancel" android:layout_width="fill_parent" android:
        </Button>
        <Button android:text="Login" android:id="@+id/btnLogin" android:layout_width="fill_parent" android:
        </Button>
    </TableRow>
</TableLayout>

Following is the Activity code for the application:

package com.mayuri.login;

import android.app.Activity;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;

public class SampleLogin extends Activity {

    EditText txtUserName;
    EditText txtPassword;
    Button btnLogin;
    Button btnCancel;

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        txtUserName = (EditText) this.findViewById(R.id.txtUname);
        txtPassword = (EditText) this.findViewById(R.id.txtPwd);
        btnLogin = (Button) this.findViewById(R.id.btnLogin);

        btnLogin.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                if ((txtUserName.getText().toString()).equals(txtPassword.getText().toString())) {
                    Toast.makeText(SampleLogin.this, "Login Successful", Toast.LENGTH_LONG).show();
                } else {
                    Toast.makeText(SampleLogin.this, "Invalid Login", Toast.LENGTH_LONG).show();
                }
            }
        });
    }
}

Following are the screenshots of the application, depending on its behaviour, i.e. "Login Successful" and "Invalid Login". I hope this example proves to be useful to people learning Android. I will keep updating with more and more examples.

45 comments:

Good work... thank you. Can you please show how to start a sub-activity from the same activity by using intents? Thanks, Akhtar!

Try this:
Intent myIntent = new Intent(view.getContext(), ActivityClassName.class);
startActivityForResult(myIntent, 0);

It's mind-blowing work for Android developers. Mayuri ma'am is my guru. Thanks a lot, ma'am.

Hi, thanks. I was looking for this type of login form. With use of a database, how do I fetch the data? Thanks, I am looking for this type of login form.

Hello, can you please show how to fetch data from a database using web services?
But actually iam searching for login application i.e)username and password are stored in database or server.. when i enterd username&password values server or database will check these two values.. if it matches it will return success otherwise failure.... pls help me :P would like to thank for creating this interesting blog, because its having the good knowledge about android. so its useful to me. Android developer please how can i connect to my mysql database from this form ??? pls tell me, how to login my mail address from my android application??? superb... Thanks a lot, its really helpfull to everyone who have no idea of xml (like me ) in creating android basic app.. once again thank you so much.. :) do you have any ideas of creating database and viewing database using timepicker...it will be very useful if you help i am a student doing finalyear project......... Can you help me with creating database using timepicker............I am doing it for final year college project thanks in advance Satheesh Hi this app is very useful to me. YQ gatchu mayuri gatchu mayuri am unable to enter the values into EditText.... Can u help me.... mam plz tell me how i can crete my sqldatabase can u create web app for login using servlets & jsp & import it into ur android project ??? Hi Mayuri, thanks for providng data for login form...i need some more information from that data like how to fetch the same code through database through different un and pwd's???can u please provide if you have that type of data... Thanks, Pavan Kumar Thanks for your post mayuri... This example was quite helpful. I have also referred this site, which looks quite useful. Have a look!! Thanks. This Code really helped me out. I want some modification in this code, I want to switch another Screen I have used the Intent but it is showing me error.. Will u plz. help me.? hi Mayuri i want signup page program with database.can u help me Good One yaar , Thank'z for Your Code. 
Could you please provide me a login and registration sourse code using sqlite and eclipse: There should be a reset button on registration page. And after login directed to a new welcome page. Please provide me this code as soon as possible. Thankyou cse.amver@gmail.com Can any body Explain how to authenticate login from remote server through PHP and MysQl.. I have tried several Tutorial nut unable to resolve the issue . My emulator gets stuck when ever i make any request to the server (Xaampp). Awesome mayuri @Nauman Shaw use phonegap man I tried this app and most of it works. I copied the code snippets from here to a fullscreen app. My username field overlaps the app name field on the top left? still trying to find the reason and rectify this Thanks Mayuri, SanjayB. I found one solution for this - by setting marginTop as a value Hi, may I know the connectivity coding for the same example shown above? Thank you please tell me how to handle the error "unfortunately has stopped " contact me by: tsegay.gm@gmail.com Thanks Akhtar! I used to login in a sqlserver, impressive code!! Thanks for sharing this post Dealersocket Login Nice idea,keep sharing your ideas with us.i hope this information's will be helpful for the new learners. keep up the good work. this is an Assam post. this to helpful, i have reading here all post. i am impressed. thank you. this is our digital marketing training center. This is an online certificate course digital marketing training in bangalore / Thanks for taking time to share this post. It is really useful. Continue sharing more like this. Android Training Institute in Chennai | Android Training Institute in anna nagar | Android Training Institute in omr | Android Training Institute in porur | Android Training Institute in tambaram | Android Training Institute
http://catchmayuri.blogspot.com/2010/12/sample-android-login-application.html
NAME
vga_getmousetype - returns the mouse type configured

SYNOPSIS
#include <vga.h>

int vga_getmousetype(void);

DESCRIPTION
This returns the mouse type configured in /etc/vga/libvga.config. The return value, logically ANDed with MOUSE_TYPE_MASK, is one of the following (defined in <vgamouse.h>):

MOUSE_NONE
There is no mouse installed. It is good style to check first whether a mouse is available, and only then enable mouse support, to avoid an svgalib error message if you try to initialize a non-existing mouse.

MOUSE_MICROSOFT
A Microsoft compatible mouse (2 buttons) (default).

MOUSE_MOUSESYSTEMS
A MouseSystems compatible mouse (3 buttons).

MOUSE_MMSERIES
A MMSeries compatible mouse.

MOUSE_LOGITECH
An ordinary LogiTech compatible mouse.

MOUSE_BUSMOUSE
A busmouse.

MOUSE_PS2
A PS/2 busmouse.

MOUSE_LOGIMAN
An ordinary LogiTech LogiMan compatible mouse.

MOUSE_GPM
The GPM daemon is used.

MOUSE_SPACEBALL
A 3d SpaceTec Spaceball pointer device.

MOUSE_INTELLIMOUSE
A Microsoft IntelliMouse or Logitech MouseMan+ on a serial port.

MOUSE_IMPS2
A Microsoft IntelliMouse or Logitech MouseMan+ on a PS/2 port.

The return value may be ORed with one or more of the following flags:

MOUSE_CHG_DTR
Change the setting of DTR to force the mouse to a given mode.

MOUSE_DTR_HIGH
Set DTR to high instead of setting it to low (default).

MOUSE_CHG_RTS
Change the setting of RTS to force the mouse to a given mode.

MOUSE_RTS_HIGH
Set RTS to high instead of setting it to low (default).

Your application may use this info to perform specific actions (for example, to go into a 3d pointer device mode).

SEE ALSO
svgalib(7), vgagl(7), libvga.config(5), mousetest(6), spin(6), mouse_close(3), mouse_getposition_6d(3), mouse_getx(3), mouse_init(3), mouse_setposition(3), mouse_setscale(3), mouse_setwrap(3), mouse_setxrange(3), mouse_update(3), mouse_waitforupdate(3), vga_init(3).
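EXAMPLE
A sketch of how the return value can be decoded. The constant values below are illustrative stand-ins; a real program should include <vgamouse.h> and use the definitions there together with the result of vga_getmousetype().

```c
#include <assert.h>

/* Stand-in values for illustration only -- real code must use <vgamouse.h>. */
#define MOUSE_TYPE_MASK 0xffff
#define MOUSE_NONE      0
#define MOUSE_MICROSOFT 1
#define MOUSE_CHG_DTR   0x80000000u
#define MOUSE_DTR_HIGH  0x40000000u

/* Strip the DTR/RTS option flags, leaving the base mouse type. */
static unsigned mouse_base_type(unsigned type)
{
    return type & MOUSE_TYPE_MASK;
}

/* Nonzero if the given option flag is set in the return value. */
static int mouse_has_flag(unsigned type, unsigned flag)
{
    return (type & flag) != 0;
}
```

Typical use would be to check that mouse_base_type(vga_getmousetype()) is not MOUSE_NONE before initializing the mouse.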
http://manpages.ubuntu.com/manpages/oneiric/man3/vga_getmousetype.3.html
When will GXT 3.0 support Desktop Example

So - I tried creating a new project with the old Desktop package com.extjs.gxt.samples.desktop.client. It seems like GXT 3.0 does not support Desktop from 2.2.5. Questions:
1. Do you plan to support this in 3.0?
2. If so, when?

Should I start a new project on 2.2.5 if my intention is to use the Desktop?

The subject has the question.

Still no Desktop in Beta 1?

Hello. Just downloaded 3.0 Beta 1 and was still unable to find the Desktop stuff. I am looking for:

import com.extjs.gxt.desktop.client.Desktop;
import com.extjs.gxt.desktop.client.Shortcut;
import com.extjs.gxt.desktop.client.StartMenu;
import com.extjs.gxt.desktop.client.TaskBar;

These were in the SRC folder, and in the GXT Javadoc in 2.2.5. So it appears as part of the product, not as a sample. As we used this feature as the foundation of a major project, we would feel very uncomfortable if it were thrown away. So, GXT team, what's your plan about this?

Hi, and now? Are there any plans about WebDesktop?

Yes, and it's better than 2.5's one.
http://www.sencha.com/forum/showthread.php?155216-When-will-GXT-3.0-support-Desktop-Example&p=694256&viewfull=1
Member 19 Points Jul 06, 2011 06:51 AM|nccsbim071|LINK

I have created a custom server control. So far this control renders some HTML in the web page. On submit of the page I need to take the values entered in the textbox of the server control and call a web service to validate the user's input. I don't want to write this code in the code-behind of the page that this control is used in. I want all the validation to be written in the server control itself, and if validation fails, Page.IsValid should be set to false. If the value the user entered in the server control is valid, Page.IsValid will be true.

What I am trying to achieve is the same functionality as Google reCAPTCHA. All the user needs to do to use this control is to place the control in the page. Whether the entered value is correct or incorrect is handled in the control itself, and in the code-behind of the page there is only Page.IsValid. There is a page on Google that explains this, and I have also used Google reCAPTCHA and it works as expected. I also want to build the same kind of functionality for my server control. Please help, if it is possible.

Contributor 4012 Points Jul 06, 2011 07:12 AM|Kulrom|LINK

You can just use the IsValid method in server controls normally. Add some validators there and then use the OnLoad event to check whether the page is valid, just like you do with a standard web page.

Jul 06, 2011 11:18 PM|jkirkerx|LINK

Let me rephrase your request. You want to make a server control that outputs the HTML needed for 100% complete client-side processing? Most of the Google and YouTube stuff is almost pure HTML, JavaScript, and jQuery, in which they make JavaScript AJAX calls to a web service that handles the data in the background, without having to post any of the page back to the server. It's pretty much 100% done in the browser's memory. In fact, this site is built using the same technology that I just described above. The use of Page.IsValid is for postbacks to the server.
Try the webserver forum.

Member 19 Points Jul 07, 2011 01:17 AM|nccsbim071|LINK

Hi guys, thanks for answering the questions. I found the solution. Here is the entire code of the server control. The trick was to implement IValidator. It gives us two properties and one method: the ErrorMessage and IsValid properties, and the Validate method. I wrote all the validation code in the Validate method and set this.IsValid there. This solved the problem.

[ToolboxData("<{0}:MyControl runat=server></{0}:MyControl>")]
public class MyControl : WebControl, IValidator
{
    protected override void RenderContents(HtmlTextWriter output)
    {
        // Render the required html
    }

    protected override void Render(HtmlTextWriter writer)
    {
        this.RenderContents(writer);
    }

    protected override void OnInit(EventArgs e)
    {
        Page.Validators.Add(this);
        base.OnInit(e);
    }

    public string ErrorMessage { get; set; }

    public bool IsValid { get; set; }

    public void Validate()
    {
        string code = Context.Request["txtCode"];
        // This method calls the webservice and returns true or false.
        this.IsValid = Validate(code);
        if (!this.IsValid)
        {
            ErrorMessage = "Invalid Code";
        }
    }
}

Jul 07, 2011 04:31 PM|jkirkerx|LINK

That's interesting. I too have put a lot of time into form validations in the last year, in a pre- and post-emptive manner, to help customers fill out information in what I perceive to be the correct format. I would like to see a live example of this, to measure whether or not it is more effective than what I have currently implemented.

I decided to go back to having the client side do all the validation via JavaScript/jQuery, and then a routine on the server side in case they're not running JavaScript. Sort of a two-part system. On the server side, I strip out HTML tags before processing the data. I dumped the Page.IsValid stuff, because I found it to be unreliable in the production or live environment.
The validation group and the required field validators worked fine in development, but for some reason, in the production environment, they were not 100% effective. PM me with a link for me to check out when you're ready. 4 replies. Last post Jul 07, 2011 04:31 PM by jkirkerx.
https://forums.asp.net/t/1697149.aspx?integrate+custom+control+controls+validation+with+Page+IsValid
import os

os.system('cls')    # Windows
os.system('clear')  # Linux / OS X

mlempjr wrote: Well, being new to the terminology, I was not sure what to call the window. Since neither worked for me, I must have meant the GUI window. Thanks, Mike

mlempjr wrote: Well, as I said, I'm new. What do I do to get it to run in the console window? Thanks again, happy Thanksgiving!

Pretty sure I am running IDLE. I got into Python because of the Raspberry Pi, and I'm using idle3 for that, so I try to keep them the same. Much less confusion. And as you see, I'm usually confused!

mlempjr wrote: You know, like real programs do!

for i in range(10):
    print ""  # Print 10 blank lines

Python 3.3.2 (v3.3.2:d047928ae3f6, May 16 2013, 00:06:53) [MSC v.1600 64 bit (AMD64)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>> os.system('cls')
os.system('clear')
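Tying the thread together: a small helper (my own sketch, not from the thread) that picks the right command per platform and falls back to printing blank lines when the clear command fails - for example when running under IDLE rather than a real console:

```python
import os

def clear_command():
    """Return the shell command that clears the screen on this platform."""
    # os.name is 'nt' on Windows and 'posix' on Linux / OS X.
    return 'cls' if os.name == 'nt' else 'clear'

def clear_screen(fallback_lines=10):
    """Clear the console, falling back to blank lines if that fails."""
    exit_code = os.system(clear_command())
    if exit_code != 0:
        # No usable terminal (e.g. inside IDLE): scroll instead.
        print('\n' * fallback_lines)
```

Inside IDLE the escape-based clear does nothing useful either way, which is why the blank-line fallback from the thread remains the practical answer there.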
http://www.python-forum.org/viewtopic.php?p=11740
Recent discussions about whether to print a message about unimplemented ia32 syscalls on x86_64 have missed the real bug: the number of ia32 syscalls is wrong in 2.6.16. Fixing that kills the message.

Signed-off-by: Chuck Ebbert <76306.1226@compuserve.com>

--- 2.6.16.17-64.orig/include/asm-x86_64/ia32_unistd.h
+++ 2.6.16.17-64/include/asm-x86_64/ia32_unistd.h
@@ -317,6 +317,6 @@
 #define __NR_ia32_ppoll		309
 #define __NR_ia32_unshare	310
 
-#define IA32_NR_syscalls 315 /* must be > than biggest syscall! */
+#define IA32_NR_syscalls 311 /* must be > than biggest syscall! */
 
 #endif /* _ASM_X86_64_IA32_UNISTD_H_ */

--
Chuck
"The x86 isn't all that complex -- it just doesn't make a lot of sense."
  -- Mike Johnson
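Why the constant matters can be sketched outside the kernel: syscall numbers are valid only when strictly below IA32_NR_syscalls, so the constant must be the largest defined syscall number plus one - 311 here, since __NR_ia32_unshare is 310. With the old value of 315, the numbers 311-314 passed the bounds check despite having no implementation, which is what triggered the "unimplemented syscall" messages. The check below is a simplified illustration of that idea, not the kernel's actual entry code:

```c
#include <assert.h>

/* Values from the patch above. */
#define __NR_ia32_unshare 310
#define IA32_NR_syscalls  311   /* largest syscall number + 1 */

/* Simplified stand-in for the kernel's bounds check (not actual kernel
 * code): an ia32 syscall number is dispatchable only when it is
 * strictly below IA32_NR_syscalls. */
static int ia32_syscall_in_range(unsigned int nr)
{
    return nr < IA32_NR_syscalls;
}
```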
https://lkml.org/lkml/2006/5/22/253
For-each or Enhanced for Loop

Beginning with JDK 5, a second form of the for loop was defined, which implements a "for-each" style loop. The for-each style of for loop is designed to cycle through a collection of objects, such as an array, in strictly sequential fashion, from start to end. Unlike some other languages, such as C#, which implement a for-each loop using a foreach keyword, Java adds for-each capability by enhancing the for loop itself. The advantage of this approach is that no new keyword is required, and no pre-existing code is broken. The for-each style of the for loop is also referred to as the enhanced for loop.

Java for-each for Loop Syntax

The general form of the for-each version of the for loop is shown below:

for(type itr-var : collection)
    statement-block

Here, type specifies the data type and itr-var specifies the name of an iteration variable that will receive the elements from the collection, one at a time, from beginning to end. The collection being cycled through is specified by collection. There are several types of collections that can be used with this form of for, but the only type used in this tutorial is the array. With each iteration of the loop, the next element in the collection is retrieved and stored in itr-var. The loop repeats until all the elements in the collection have been obtained. Because the iteration variable receives values from the collection, type must be the same as (or compatible with) the elements stored in the collection. Therefore, when iterating over arrays, type must be compatible with the element type of the array. To understand the motivation behind the for-each style loop, look at the kind of for loop that it is designed to replace.
The code fragment below uses a traditional for loop to compute the sum of the values in an array:

int nums[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int sum = 0;

for(int i=0; i<10; i++)
{
    sum = sum + nums[i];
}

To calculate the sum, each element present in nums is read from start to finish. Therefore, the entire array is read in purely sequential order. This is achieved by manually indexing the nums array with the loop control variable i.

The for-each style for automates the previous loop. Specifically, it eliminates the need to establish a loop counter, specify the starting and ending values, and manually index the array. Instead, it automatically cycles through the entire array, getting one element at a time, in sequence, from beginning to end. For example, here is the preceding code fragment rewritten using the for-each version of the for loop:

int nums[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
int sum = 0;

for(int x : nums)
{
    sum = sum + x;
}

With each pass through the loop, x is automatically given a value equal to the next element in nums. Therefore, in the first iteration x contains 1, in the second iteration x holds 2, and so on. Not only is the syntax more compact, it also helps prevent boundary errors.
Java for-each for Loop Example

Following is an entire program that illustrates the for-each version of the for loop just described:

/* Java Program Example - Java for-each Loop
 * This program uses a for-each style for loop
 */
public class JavaProgram
{
    public static void main(String args[])
    {
        int nums[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
        int sum = 0;

        // use the for-each style for loop to display and sum the values
        for(int x : nums)
        {
            System.out.println("Value is " + x);
            sum = sum + x;
        }
        System.out.println("\nSummation is " + sum);
    }
}

When the above Java program is compiled and executed, it will produce the following output:

Value is 1
Value is 2
Value is 3
Value is 4
Value is 5
Value is 6
Value is 7
Value is 8
Value is 9
Value is 10

Summation is 55

As the above output shows, the for-each style of the for loop automatically cycles through an array in sequence from the lowest index to the highest.

Even though the for-each for loop iterates until all the elements in an array have been examined, it is also possible to terminate the loop early by using a break statement. For example, the following program sums only the first five elements of nums:

/* Java Program Example - Java for-each Loop
 * Use a break with a for-each style for
 */
public class JavaProgram
{
    public static void main(String args[])
    {
        int sum = 0;
        int nums[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };

        // use for-each for loop to display and sum the values
        for(int x : nums)
        {
            System.out.println("Value is " + x);
            sum = sum + x;
            if(x == 5)   // stop the loop
                break;   // when 5 is obtained
        }
        System.out.println("Summation of first 5 elements is " + sum);
    }
}

When the above Java program is compiled and executed, it will produce the following output:

Value is 1
Value is 2
Value is 3
Value is 4
Value is 5
Summation of first 5 elements is 15

Here, the for loop stops after the fifth element has been obtained. The break statement can also be used with other loops. You will learn about the break statement in a separate chapter.
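One caveat worth knowing (my own example, not from the tutorial above): the iteration variable receives a copy of each element, so assigning to it does not change the underlying array. Use a traditional indexed for loop when you need to modify elements.

```java
// Demonstrates that the for-each iteration variable is only a copy.
class ForEachCopyDemo
{
    // Attempts to double every element via the for-each variable.
    static int[] doubleAttempt(int nums[])
    {
        for(int x : nums)
        {
            x = x * 2;   // changes only the local copy, not the array
        }
        return nums;
    }

    static int sum(int nums[])
    {
        int total = 0;
        for(int x : nums)
        {
            total = total + x;
        }
        return total;
    }

    public static void main(String args[])
    {
        int nums[] = { 1, 2, 3 };
        doubleAttempt(nums);
        System.out.println("Summation is " + sum(nums)); // still 6
    }
}
```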
https://codescracker.com/java/java-for-each-loop.htm
This tutorial will explain the Java array length attribute, along with its various uses and the different situations in which the array length attribute can be used.

In our previous tutorial, we explored the concept of printing the elements of a Java array using various methods. As we know, in order to loop through an array we should know beforehand how many elements are in the array, so that we can stop when the last element is reached. Thus we need to know the size, or the number of elements present in the array, in order to loop through it.

Java doesn’t provide any method to calculate the length of an array, but it provides an attribute ‘length’ that gives the length or size of the array.

Java ‘length’ Attribute

The number of elements in the array at declaration is called the size or length of the array. Given an array named ‘myArray’, the length of the array is given by the following expression.

int len = myArray.length;

The program below illustrates the length attribute of a Java array.

import java.util.*;

class Main
{
    public static void main(String[] args)
    {
        Integer[] intArray = {1,3,5,7,9};  //integer array
        String[] strArray = { "one", "two", "three" };  //string array

        //print each array and their corresponding length
        System.out.println("Integer Array contents: " + Arrays.toString(intArray));
        System.out.println("The length of the Integer array : " + intArray.length);
        System.out.println("String Array contents: " + Arrays.toString(strArray));
        System.out.println("The length of the String array : " + strArray.length);
    }
}

Output:

Integer Array contents: [1, 3, 5, 7, 9]
The length of the Integer array : 5
String Array contents: [one, two, three]
The length of the String array : 3

The program above simply makes use of the length attribute and displays the contents and length of two different arrays. Now that we have seen the length attribute, let us see how we can use it in different situations.

Array length is useful in several situations. Some of them are listed below:

- Searching for a specific value in the array.
- Searching for minimum/maximum values in the array.
Let’s discuss these in detail.

Searching For A Value Using Length Attribute

As already mentioned, you can iterate through an array using the length attribute. The loop iterates through all the elements one by one until the (length-1)th element is reached (since array indices start from 0). Using such a loop you can check whether a specific value is present in the array or not. For this, you traverse the entire array until the last element is reached. While traversing, each element is compared with the value to be searched, and if a match is found the traversal stops.

The program below demonstrates searching for a value in an array.

import java.util.*;

class Main
{
    public static void main(String[] args)
    {
        String[] strArray = { "Java", "Python", "C", "Scala", "Perl" };  //array of strings

        //search for a string using searchValue function
        System.out.println(searchValue(strArray, "C++") ? "value C++ found" : "value C++ not found");
        System.out.println(searchValue(strArray, "Python") ? "value Python found" : "value Python not found");
    }

    private static boolean searchValue(String[] searchArray, String lookup)
    {
        if (searchArray != null)
        {
            int arrayLength = searchArray.length;  //compute array length
            for (int i = 0; i <= arrayLength - 1; i++)
            {
                String value = searchArray[i];  //search for value using for loop
                if (value.equals(lookup))
                {
                    return true;
                }
            }
        }
        return false;
    }
}

Output:

value C++ not found
value Python found

In the above program, we have an array of programming language names. We also have a function ‘searchValue’ which searches for a particular programming language name. We have used a for loop in the function searchValue to iterate through the array and search for the specified name. Once the name is found, the function returns true. If the name is not present, or the entire array is exhausted, the function returns false.

Find The Minimum And Maximum Values In An Array

You can also traverse the array using the length attribute and find the minimum and maximum elements in the array.
The array may or may not be sorted. Hence, in order to find the minimum or maximum element, you will have to compare each of the elements until all the elements in the array are exhausted, and then determine the minimum or maximum element in the array.

We have presented two programs below. This program finds the minimum element in the array.

import java.util.*;

class Main
{
    public static void main(String[] args)
    {
        int[] intArray = { 72,42,21,10,53,64 };  //int array
        System.out.println("The given array:" + Arrays.toString(intArray));

        int min_Val = intArray[0];  //assign first element to min value
        int length = intArray.length;
        for (int i = 1; i <= length - 1; i++)  //till end of array, compare and find min value
        {
            int value = intArray[i];
            if (value < min_Val)
            {
                min_Val = value;
            }
        }
        System.out.println("The min value in the array: " + min_Val);
    }
}

Output:

The given array:[72, 42, 21, 10, 53, 64]
The min value in the array: 10

In the above program, we take the first element in the array as a reference element. Then we compare all the elements one by one with this reference element and pick the smallest one by the time we reach the end of the array. Note the way we use the length attribute to iterate through the array.

The next program finds the largest element in the array. The logic is along similar lines to that of finding the smallest element, but instead of finding an element less than the reference element, we find an element greater than the reference. This way, in the end, we get the maximum element in the array. The program is as follows.
import java.util.*;

class Main
{
    public static void main(String[] args)
    {
        int[] intArray = { 72,42,21,10,53,64 };  //int array
        System.out.println("The given array:" + Arrays.toString(intArray));

        int max_Val = intArray[0];  //reference element
        int length = intArray.length;
        for (int i = 1; i <= length - 1; i++)  // find max element by comparing others to reference
        {
            int value = intArray[i];
            if (value > max_Val)
            {
                max_Val = value;
            }
        }
        System.out.println("The highest value in the array: " + max_Val);
    }
}

Output:

The given array:[72, 42, 21, 10, 53, 64]
The highest value in the array: 72

Frequently Asked Questions

Q #1) What is the difference between the length of an array and the size of an ArrayList?

Answer: The length property of an array gives the size of the array, or the total number of elements present in the array. There is no length property in ArrayList; instead, the number of objects or elements in an ArrayList is given by the size() method.

Q #2) What is the difference between length and length() in Java?

Answer: The ‘length’ property is a part of the array and returns the size of the array. The method length() is a method of string objects that returns the number of characters in the string.

Q #3) What is the length function in Java?

Answer: The length function in Java returns the number of characters present in a string object.

Q #4) How do you get the length in Java?

Answer: It depends on whether you want the length of a string or of an array. For a string, the length() method gives you the number of characters in the string. For an array, you can use the ‘length’ property to find the number of elements.

Q #5) What is the maximum length of an array in Java?

Answer: In Java, arrays store their indices as integers (int) internally. So the maximum length of an array in Java is Integer.MAX_VALUE, which is 2^31 - 1.

Conclusion

This tutorial discussed the length property of arrays in Java. We have also seen the various situations in which length can be used.
The first and foremost use of the length attribute of an array is to traverse the array. As traversing an array endlessly can cause unexpected results, using a for loop with a definite number of iterations ensures that the results aren’t unexpected.

Happy Reading!!
https://www.softwaretestinghelp.com/java/java-array-length/
hello experts,

I have one question regarding application variables in JSP. How do you use application variables? What is their scope? What are their advantages?

Thanks in advance,
vinod

JSP (3 messages)

Threaded Messages (3)
- JSP by Uri Cohen on January 28 2001 18:10 EST
- JSP by vinod c on January 29 2001 23:52 EST
- Hi i am dinesh by dinesh sood on June 16 2008 11:16 EDT

JSP

Application variables are simply Java objects, which are identified by their unique name (just like session variables). Their scope is the entire web application, meaning once you have set them in the application, they will stay there until you remove them (or until the application terminates). An application, as defined by Sun, is "a collection of servlets and content installed under a specific subset of the server's URL namespace such as /catalog and possibly installed via a .war file".

They are useful for global variables that all your JSPs and servlets have to access, and are typically initialized when the application starts. An example of this might be a class which contains some application constants, which are read from a data source upon startup and have to be accessed by servlets and JSPs.

You use them like this:

in JSPs:      application.setAttribute("variableName", someObject)
in servlets:  getServletContext().setAttribute("variableName", someObject)

- Posted by: Uri Cohen - Posted on: January 28 2001 18:10 EST - in response to vinod c

JSP

hello,
can you send me some more information on this?
vinod

- Posted by: vinod c - Posted on: January 29 2001 23:52 EST - in response to Uri Cohen

Hi i am dinesh

Well, my question is: how can we get the value in a servlet which is set in a JSP by using "application.setAttribute("anyvalue", anyvalue);"? I don't want any other alternative such as request.setAttribute, or using session.setAttribute in the JSP instead of application.setAttribute. I just need a specific answer related to my question only.
thx - Posted by: dinesh sood - Posted on: June 16 2008 11:16 EDT - in response to vinod c
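The last question in the thread was left unanswered. The usual pairing (my summary, not from the thread) is: a value stored with application.setAttribute(...) in a JSP is read in a servlet with getServletContext().getAttribute(...), because the JSP's implicit application object and the servlet's ServletContext are the same shared instance. Since the servlet API is not available here, the sketch below uses a plain map-backed stand-in to illustrate the shared-scope behavior:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for ServletContext attribute storage. In a real
// container, the JSP's implicit `application` object and the value
// returned by the servlet's getServletContext() are the same object.
class FakeServletContext
{
    private final Map<String, Object> attributes = new HashMap<>();

    public void setAttribute(String name, Object value)
    {
        attributes.put(name, value);
    }

    public Object getAttribute(String name)
    {
        return attributes.get(name);
    }
}

class ApplicationScopeDemo
{
    public static void main(String[] args)
    {
        FakeServletContext application = new FakeServletContext();

        // In the JSP:     application.setAttribute("anyvalue", anyvalue);
        application.setAttribute("anyvalue", "shared data");

        // In the servlet: getServletContext().getAttribute("anyvalue");
        Object value = application.getAttribute("anyvalue");
        System.out.println(value); // shared data
    }
}
```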
http://www.theserverside.com/discussions/thread.tss?thread_id=3791
Update: If you check my FeedBurner wiki node, you’ll notice that I’ve been investigating why feed item GUIDs were stripped away by it. My attempts at fixing this (and the migration to Google, which re-wrote the links as well) have resulted in some people (depending on which feed reader they use) noticing duplicate or “renewed” posts. This should be the last of them, hopefully. Oh, and mind the last paragraph on this post – feedback is welcome.

A heads up for all of you subscribing to this site’s feeds – the URLs have changed from feeds.feedburner.com to feedproxy.google.com (something that was meant to happen for a good while now due to the FeedBurner acquisition by Google, but which I only had the opportunity to trigger last weekend).

Google places a redirect for those of you who already used the FeedBurner URL, but I still get a whopping amount of requests for the ancient PhpWiki RSS URL (yes, even after a couple of years), and although I still have a redirect there, it’s going to go away soon. Getting to the point, please re-subscribe using the new feed URL ASAP, even if you don’t think it’s necessary. Here are the links:

- Full site feed
- Blog-only feed
- Photo feed (I’ll be tweaking this one to improve upon the stock MobileMe feed).

Since I post relatively few links (an average of one a day or less), I am also considering doing a blog + linkblog feed without the rest of the Wiki (most likely simply adding the links namespace to the blog feed). If you’re interested, drop me a line in the comments.
http://the.taoofmac.com/space/blog/2008/09/25/0725
ES6 (or ES2015) was arguably the biggest change to JavaScript in a long time. As a result, we received a wide variety of new functionality. The purpose of this appendix is to illustrate the features used in the book in isolation, to make it clearer how they work. Rather than going through the entire specification, I will just focus on the subset of features used in the book.

ES6 introduced proper module declarations. Earlier, this was somewhat ad hoc and we used formats such as AMD or CommonJS. ES6 module declarations are statically analyzable. This is highly useful for tool authors. Effectively, this means we can gain features like tree shaking. This allows the tooling to skip unused code easily, simply by analyzing the import structure.

import and export for Single

To give you an example of exporting directly through a module, consider below:

persist.js

import makeFinalStore from 'alt-utils/lib/makeFinalStore';

export default function(alt, storage, storeName) {
  ...
}

index.js

import persist from './persist';

...

import and export for Multiple

Sometimes it can be useful to use modules as a namespace for multiple functions:

math.js

export function add(a, b) {
  return a + b;
}

export function multiply(a, b) {
  return a * b;
}

export function square(a) {
  return a * a;
}

Alternatively we could write the module in a form like this:

math.js

const add = (a, b) => a + b;
const multiple = (a, b) => a * b;

// You can omit ()'s with a single parameter if you want.
const square = a => a * a;

export {
  add,
  multiple,
  // Aliasing works too
  multiple as mul
};

The example leverages the fat arrow syntax. This definition can be consumed through an import like this:

index.js

import {add} from './math';

// Alternatively we could bind the math methods to a key
// import * as math from './math';
// math.add, math.multiply, ...

...

export default is especially useful if you prefer to keep your modules focused. The persist function is an example of such.
Regular export is useful for collecting multiple functions below the same umbrella. Given the ES6 module syntax is statically analyzable, it enables tooling such as analyze-es6-modules.

Sometimes it can be handy to alias imports. Example:

import {actions as TodoActions} from '../actions/todo'

...

as allows you to avoid naming conflicts.

Unlike many other languages out there, JavaScript uses prototype based inheritance instead of a class based one. Both approaches have their merits. In fact, you can mimic a class based model through a prototype based one. ES6 classes are about providing syntactical sugar above the basic mechanisms of JavaScript. Internally it still uses the same old system. It just looks a little different to the programmer.

These days React supports class based component definitions. Not all agree that it's a good thing. That said, the definition can be quite neat as long as you don't abuse it. To give you a simple example, consider the code below:

import React from 'react';

export default class App extends React.Component {
  constructor(props) {
    super(props);

    // This is a regular property outside of React's machinery.
    // If you don't need to trigger render() when it's changed,
    // this can work.
    this.privateProperty = 'private';

    // React specific state. Alter this through `this.setState`. That
    // will call `render()` eventually.
    this.state = {
      name: 'Class demo'
    };
  }
  render() {
    // Use the properties somehow.
    const privateProperty = this.privateProperty;
    const name = this.state.name;
    const notes = this.props.notes;

    ...
  }
}

Perhaps the biggest advantage of the class based approach is the fact that it cuts down some complexity, especially when it comes to React lifecycle methods. It is important to note that class methods won't get bound by default, though! This is why the book relies on an experimental feature known as property initializers.

As stated above, ES6 modules allow you to export and import single and multiple objects, functions, or even classes.
In the latter case, you can use export default class to export an anonymous class, or export multiple classes from the same module using export class className.

To export and import a single class you can use export default class to export an anonymous class and call it whatever you want at import time:

Note.jsx

export default class extends React.Component {
  ...
};

Notes.jsx

import Note from './Note.jsx';
...

Or use export class className to export several named classes from a single module:

Components.jsx

export class Note extends React.Component {
  ...
};

export class Notes extends React.Component {
  ...
};

App.jsx

import {Note, Notes} from './Components.jsx';
...

It is recommended to keep your classes separated in different modules.

ES6 classes won't bind their methods by default. This can be problematic sometimes, as you still may want to be able to access the instance properties. Experimental features known as class properties and property initializers solve this problem. Without them, we might write something like this:

import React from 'react';

class App extends React.Component {
  constructor(props) {
    super(props);

    this.renderNote = this.renderNote.bind(this);
  }
  render() {
    // Use `renderNote` here somehow.
    ...

    return this.renderNote();
  }
  renderNote() {
    // Given renderNote was bound, we can access `this` as expected
    return <div>{this.props.value}</div>;
  }
}
App.propTypes = {
  value: React.PropTypes.string
};
App.defaultProps = {
  value: ''
};

export default App;

Using class properties and property initializers we could write something tidier instead:

import React from 'react';

export default class App extends React.Component {
  // propType definition through static class properties
  static propTypes = {
    value: React.PropTypes.string
  }
  static defaultProps = {
    value: ''
  }
  render() {
    // Use `renderNote` here somehow.
    ...

    return this.renderNote();
  }
  // Property initializer gets rid of the `bind`
  renderNote = () => {
    // Given renderNote is an arrow function, we can access `this` as expected
    return <div>{this.props.value}</div>;
  }
}

Now that we've pushed the declaration to the method level, the code reads better. I decided to use the feature in this book primarily for this reason. There is simply less to worry about.

Traditionally, JavaScript has been very flexible with its functions. To give you a better idea, see the implementation of map below:

function map(cb, values) {
  var ret = [];
  var i, len;

  for(i = 0, len = values.length; i < len; i++) {
    ret.push(cb(values[i]));
  }

  return ret;
}

map(function(v) {
  return v * 2;
}, [34, 2, 5]); // yields [68, 4, 10]

In ES6 we could write it as follows:

function map(cb, values) {
  let ret = [];
  let i, len;

  for(i = 0, len = values.length; i < len; i++) {
    ret.push(cb(values[i]));
  }

  return ret;
}

map((v) => v * 2, [34, 2, 5]); // yields [68, 4, 10]

The implementation of map is more or less the same still. The interesting bit is the way we call it. Especially that (v) => v * 2 part is intriguing. Rather than having to write function everywhere, the fat arrow syntax provides us a handy little shorthand. To give you further examples of usage, consider below:

// These are the same
v => v * 2;
(v) => v * 2; // I prefer this variant for short functions
(v) => { // Use this if you need multiple statements
  return v * 2;
}

// We can bind these to a variable
const double = (v) => v * 2;

console.log(double(2));

// If you want to use a shorthand and return an object,
// you need to wrap the object.
v => ({
  foo: 'bar'
});

Arrow functions are special in that they don't have a this of their own. Rather, this is resolved lexically, from the scope enclosing the arrow function.
Consider the example below:

var obj = {
  context: function() {
    return this;
  },
  name: 'demo object 1'
};

var obj2 = {
  context: () => this,
  name: 'demo object 2'
};

console.log(obj.context()); // { context: [Function], name: 'demo object 1' }
console.log(obj2.context()); // {} in Node.js, Window in browser

As you can notice in the snippet above, the anonymous function has its this pointing to the object the method was called on, obj. In other words, the call binds the caller object obj to the context function. This happens because a regular function's this doesn't point to the scope that contains it, but to the caller object, as you can see in the next snippet of code:

console.log(obj.context.call(obj2)); // { context: [Function], name: 'demo object 2' }

The arrow function in the object obj2 doesn't bind any object to its context; following the normal lexical scoping rules, it resolves the reference to the nearest outer scope. In this case that happens to be the Node.js global object. Even though the behavior might seem a little weird, it is actually useful. In the past, if you wanted to access the parent context, you either needed to bind it or to attach the parent context to a variable: var that = this;. The introduction of the arrow function syntax has mitigated this problem.

Historically, dealing with function parameters has been somewhat limited. There are various hacks, such as values = values || [];, but they aren't particularly nice and they are prone to errors. For example, using || can cause problems with zeros. ES6 solves this problem by introducing default parameters. We can simply write function map(cb, values=[]) now.

There is more to that, and the default values can even depend on each other. You can also pass an arbitrary amount of parameters through function map(cb, ...values). In this case, you would call the function through map(a => a * 2, 1, 2, 3, 4). The API might not be perfect for map, but it might make more sense in some other scenario.
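The note above that default values can depend on each other deserves a quick example (my own sketch, not from the book): later parameters may reference earlier ones in their defaults.

```javascript
// Later defaults may reference parameters declared before them.
function makeRange(start = 0, end = start + 10, step = (end - start) / 10) {
  return {start, end, step};
}

console.log(makeRange(5)); // { start: 5, end: 15, step: 1 }
```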
There are also convenient means to extract values out of passed objects. This is highly useful with React components defined using the function syntax:

    export default ({name}) => {
      // ES6 string interpolation. Note the back-ticks!
      return <div>{`Hello ${name}!`}</div>;
    };

Earlier, dealing with strings was somewhat painful in JavaScript. Usually you just ended up using a syntax like 'Hello ' + name + '!'. Overloading + for this purpose wasn't perhaps the smartest move, as it can lead to strange behavior due to type coercion. For example, 0 + ' world' would yield the string '0 world' as a result.

Besides being clearer, ES6 style string interpolation provides us multi-line strings. This is something the old syntax didn't support. Consider the examples below:

    const hello = `Hello ${name}!`;
    const multiline = `
    multiple
    lines of
    awesomeness
    `;

The back-tick syntax may take a while to get used to, but it's powerful and less prone to mistakes.

That ... is related to the idea of destructuring. For example, const {lane, ...props} = this.props; would extract lane out of this.props while the rest of the object would go to props. This object-based syntax is still experimental. ES6 specifies an official way to perform the same for arrays like this:

    const [lane, ...rest] = ['foo', 'bar', 'baz'];

    console.log(lane, rest); // 'foo', ['bar', 'baz']

The spread operator (...) is useful for concatenating. You see syntax like this in Redux examples often. They rely on the experimental Object rest/spread syntax:

    [...state, action.lane];

    // This is equal to
    state.concat([action.lane])

The same idea applies to React components:

    ...
    render() {
      const {value, onEdit, ...props} = this.props;

      return <div {...props}>Spread demo</div>;
    }
    ...

There are several gotchas related to the spread operator. Given it is shallow by default, it can lead to behavior that might be unexpected. This is particularly true if you are trying to use it to clone an object.
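The shallow-copy gotcha can be demonstrated concretely. This is a minimal sketch of ours, not an example from the book:

```javascript
const original = { name: 'demo', nested: { count: 1 } };

// Spread copies only the top level of the object.
const copy = { ...original };

copy.name = 'changed';   // safe: top-level fields are independent copies
copy.nested.count = 99;  // gotcha: `nested` is still shared with `original`

console.log(original.name);         // 'demo'
console.log(original.nested.count); // 99
```

In other words, spreading gives you a new outer object, but any nested objects are still shared references.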
Josh Black discusses this problem in detail in his Medium post titled Gotchas in ES2015+ Spread.

In order to make it easier to work with objects, ES6 provides a variety of features just for this. To quote MDN, consider the examples below:

    const a = 'demo';
    const shorthand = {a}; // Same as {a: a}

    // Shorthand methods
    const o = {
      get property() {},
      set property(value) {},
      demo() {}
    };

    // Computed property names
    const computed = {
      [a]: 'testing' // demo -> testing
    };

const, let, var

In JavaScript, variables are global by default. var binds them on function level. This is in contrast to many other languages that implement block-level binding. ES6 introduces block-level binding through let.

There's also support for const, which guarantees the reference to the variable itself cannot change. This doesn't mean, however, that you cannot modify the contents of the variable. So if you are pointing at an object, you are still allowed to tweak it!

I tend to default to const whenever possible. If I need something mutable, let will do fine. It is hard to find any good use for var anymore as const and let cover the need in a more understandable manner. In fact, all of the book's code, apart from this appendix, relies on const. That just shows you how far you can get with it.

Given decorators are still an experimental feature and there's a lot to cover about them, there's an entire appendix dedicated to the topic. Read Understanding Decorators for more information.

There's a lot more to ES6 and the upcoming specifications than this. If you want to understand the specification better, ES6 Katas is a good starting point for learning more. Just having a good idea of the basics will take you far.

This book is available through Leanpub. By purchasing the book you support the development of further content.
https://survivejs.com/react/appendices/language-features/index.html
I am trying to create a template in Django. In that template I have dato, which contains a dictionary like {"Name": "Someone", "Age": "23"}, and propiedad, which contains a dictionary like {"parameterName": "Name"}. I can write {{ propiedad.parameterName }} and get the value Name. After that I would like to store that value in a buffer (which doesn't exist yet) in order to later write {{ dato.buffer }}, obtaining Someone, which I would show in the HTML. I don't know how to do this; can someone help me?

A fragment of the code:

    <tbody>
    {% for dato in atributo.datos %}
      <tr>
      {% for propiedad in atributo.propiedades %}
        <td>
          {# I would make the stuff here #}
          {{ dato.buffer }}
        </td>
      {% endfor %}
      </tr>
    {% endfor %}
    </tbody>

This can be achieved using Django filters, but we will have to code our own custom filter. Include this in the views file:

    from django.template.defaulttags import register

    @register.filter
    def get_item(dictionary, key):
        return dictionary.get(key)

In the HTML file you can add the following line. We are passing the dictionary and the key to the get_item filter, which returns the value stored under that key:

    <td>{{ dato|get_item:propiedad.parameterName }}</td>
https://databasefaq.com/index.php/answer/8308/python-django-templates-django-creating-variables-in-a-template-and-assigning-a-value
Details
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: Scala 2.8.1, Scala 2.9.0
- Fix Version/s: Scala 2.10.0
- Component/s: Misc Compiler
- Labels: None

Description

I have an entry point wrapper that looks something like the trait listed below. It is intended to expose a main for a Java entry point if type T = Unit, or an arbitrary return type T if the module is embedded:

    trait RunWrapper[T] {
      def run(args : Array[String]) : T;

      def main(args : Array[String]) : T = {
        try {
          run(args)
        } finally {
        }
      }
    }

    // Entry point for java
    object A extends RunWrapper[Unit] {
      def run(args : Array[String]) {
        println("the end")
      }
    }

Unfortunately, I get this bytecode from the compiler (the same for both 2.8.1 and 2.9.0.1):

    $ javap A
    Compiled from "test.scala"
    public final class A extends java.lang.Object

Why do I get a return type of java.lang.Object, in this case, for main()? How do I get void in this scenario so that Java will allow me to use A as an entry point?

However, if I define the following:

    object B extends RunWrapper[Unit] {
      def run(args : Array[String])

      override def main(args : Array[String]) {
        try {
          run(args)
        } finally {
        }
      }
    }

I get this:

    $ javap B
    Compiled from "test.scala"
    public final class B extends java.lang.Object{
        public static final void main(java.lang.String[]);
        public static final void run(java.lang.String[]);
        public static final java.lang.Object main(java.lang.String[]);
    }

So it seems override didn't override anything, and the compiler doesn't complain or warn about this. Is this a boxing issue?

Activity

I'm guessing that object B is the bug. Shouldn't the override yield a warning or an error instead of quietly creating a main method with a void return type in addition to the base class's main?

No, that is also how generics work. Better error messages in 2064372659.

OK, let's see here: the return type of Object in RunWrapper is correct. This is how generics work.
Unless we special-case main, you will have to write a method which is specifically Unit; you can't use a generic implementation because the JVM requires the exact signature. When I first saw the example I thought there was a bug, but now if there is one I don't know what it is, other than that we should spot a method which looks like it's trying to be an entry point and warn that it is not in fact an entry point.
https://issues.scala-lang.org/browse/SI-4749
2.5 Other Languages Under UNIX

You now know the basics of how to handle and manipulate the CGI input in Perl. If you haven't guessed by now, this book concentrates primarily on examples in Perl, since Perl is relatively easy to follow, runs on all three major platforms, and also happens to be the most popular language for CGI. However, CGI programs can be written in many other languages, so before we continue, let's see how we can accomplish similar things in some other languages, such as C/C++, the C Shell, and Tcl.

C/C++

Here is a CGI program written in C (but that will also compile under C++) that parses the HTTP_USER_AGENT environment variable and outputs a message, depending on the type of browser:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    void main (void)
    {
        char *http_user_agent;

        printf ("Content-type: text/plain\n\n");

        http_user_agent = getenv ("HTTP_USER_AGENT");

        if (http_user_agent == NULL) {
            printf ("Oops! Your browser failed to set the HTTP_USER_AGENT ");
            printf ("environment variable!\n");
        } else if (!strncmp (http_user_agent, "Mosaic", 6)) {
            printf ("I guess you are sticking with the original, huh?\n");
        } else if (!strncmp (http_user_agent, "Mozilla", 7)) {
            printf ("Well, you are not alone. A majority of the people are ");
            printf ("using Netscape Navigator!\n");
        } else if (!strncmp (http_user_agent, "Lynx", 4)) {
            printf ("Lynx is great, but go get yourself a graphic browser!\n");
        } else {
            printf ("I see you are using the %s browser.\n", http_user_agent);
            printf ("I don't think it's as famous as Netscape, Mosaic or Lynx!\n");
        }
        exit (0);
    }

The getenv function returns the value of the environment variable, which we store in the http_user_agent variable (it's actually a pointer to a string, but don't worry about this terminology). Then, we compare the value in this variable to some of the common browser names with the strncmp function.
This function searches the http_user_agent variable for the specified substring up to a certain position within the entire string. You might wonder why we're performing a partial search. The reason is that generally, the value returned by the HTTP_USER_AGENT environment variable looks something like this:

    Lynx/2.4 libwww/2.14

In this case, we need to search only the first four characters for the string "Lynx" in order to determine that the browser being used is Lynx. If there is a match, the strncmp function returns a value of zero, and we display the appropriate message.

C Shell

The C Shell has some serious limitations and therefore is not recommended for any type of CGI application. In fact, UNIX guru Tom Christiansen has written a FAQ titled "Csh Programming Considered Harmful" detailing the C Shell's problems. Here is a small excerpt from the. However, for completeness' sake, here is a simple shell script that is identical to the first unix.pl Perl program discussed earlier:

    #!/bin/csh

    echo "Content-type: text/plain"
    echo ""

    if ($?QUERY_STRING) then
        set command = `echo $QUERY_STRING | awk 'BEGIN {FS = "="} { print $2 }'`

        if ($command == "fortune") then
            /usr/local/bin/fortune
        else if ($command == "finger") then
            /usr/ucb/finger
        else
            /usr/local/bin/date
        endif
    else
        /usr/local/bin/date
    endif

The C Shell does not have any inherent functions or operators to manipulate string information. So we have no choice but to use another UNIX utility, such as awk, to split the query string and return the data on the right side of the equal sign. Depending on the input from the user, one of several UNIX utilities is called to output some information.

You may notice that the variable QUERY_STRING is exposed to the shell. Generally, this is very dangerous because users can embed shell metacharacters. However, in this case, the variable substitution is done after the `` command is parsed into separate commands.
If things happened in the reverse order, we could potentially have a major headache!

Tcl

The following Tcl program uses an environment variable that we haven't discussed up to this point. The HTTP_ACCEPT variable contains a list of all of the MIME content types that a browser can accept and handle. A typical value returned by this variable might look like this:

    application/postscript, image/gif, image/jpeg, text/plain, text/html

You can use this information to return different types of data from your CGI document to the client. The program below parses this accept list and outputs each MIME type on a different line:

    #!/usr/local/bin/tclsh

    puts "Content-type: text/plain\n"

    set http_accept $env(HTTP_ACCEPT)
    set browser $env(HTTP_USER_AGENT)

    puts "Here is a list of the MIME types that the client, which"
    puts "happens to be $browser, can accept:\n"

    set mime_types [split $http_accept ,]

    foreach type $mime_types {
        puts "- $type"
    }

    exit 0

As in Perl, the split command splits a string on a specified delimiter, placing all of the resulting substrings in an array. In this case, the mime_types array contains each MIME type from the accept list. Once that's done, the foreach loop iterates through the array, displaying each element.

Back to: CGI Programming on the World Wide Web

© 2001, O'Reilly & Associates, Inc.
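For comparison, the same HTTP_ACCEPT parsing could be sketched in Node.js. This is our own illustration, not one of the book's languages, and the helper name parseAccept is an invention:

```javascript
// Split an accept list such as "image/gif, text/html" into its MIME types.
function parseAccept(accept) {
  return accept.split(',').map(function (type) {
    return type.trim();
  });
}

// Fall back to a sample value when HTTP_ACCEPT is not set, as happens
// when running outside a CGI environment.
var httpAccept = process.env.HTTP_ACCEPT ||
  'application/postscript, image/gif, image/jpeg, text/plain, text/html';

console.log('Content-type: text/plain\n');
parseAccept(httpAccept).forEach(function (type) {
  console.log('- ' + type);
});
```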
http://oreilly.com/openbook/cgi/ch02_05.html
The summary tag can be used in a constructor to describe its class, interface or enumeration, or inside a method, property getter or event "adder" to describe respectively the method, property or event. The returns tag is used to describe the return value of a method. It should be used only if the function returns a value (in other words, if it contains a return statement that is not just "return;"). The value tag describes a property (which shouldn't have a summary tag). The contents of the description itself, whether the tag is summary, param, returns, field or value, are described by the exact same convention that works for C# and VB.NET doc comments. It can be plain text or text containing other XML tags.

Awesome, thanks Bertrand.

Fabio: for several reasons. First, JSDoc comments are not accessible at runtime. Second, they're missing many features that we need. And most of all, XML doc comments have existed in .NET since its inception. .NET developers are very much used to them and how they work. There are also many, many tools that already work with XML doc comments (more about this in future posts) and that will work unmodified with this convention.
Morten: you can get IntelliSense on those files by including an explicit script reference to the page.

Is there a limit to the amount of JS that can be commented? I'm getting a strange error when the .js file with comments is over 256K. I've tried splitting into two files with the same result: "Error updating JScript IntelliSense: JScript IntelliSense has been disabled for referenced scripts. A referenced script may have caused a stack overflow. Reopen this file to reenable IntelliSense."

Ev: not that I know of, but send me mail through the contact form of this blog and I'll get you the right contacts to debug that issue.

    WebSite.WebCameraInfo.prototype = {
        ZoomLevel: 11,
        ....

Hi guys. Does anyone have any ideas on how to comment a prototype getter like this?

Jim: that's not technically a getter, but you can document it from the constructor using a field tag.

It should be. In VS2008, if I use:

    function Foo() {
        /// <param name='x' domElement='true'>x</param>
    }

Should I get DOMElement IntelliSense for x? If so, it doesn't seem to be working. Same for:

    /// <returns domElement='true'>
Thanks I think I'm missing something, but it might be related to Brock's question to which you replied, "@Brock: I don't think that's handled in 2008, no." A simplified example: /// <reference path="Stuff.js" /> var outside = new MyNamespace.Stuff(); outside.+++ Other.Thing.prototype = { foo: function (inside) { /// <param name="inside" type="MyNamespace.Stuff">arg</param> inside.--- At the '+++', I get working Intellisense completion. At the '---', I don't. The point being that it obviously found the proper comments for the 'Stuff' class, since it works at one point in the file. Am I doing something wrong (specifying the class name improperly?)? or does VS2008 simply not do Intellisense for <param>'s? Best, Ben Again, Ben, an ability to statically parse or cast to type would solve that for you. This ability can be added by us, retrospectively, via an extensions library applied after MS Ajax is downloaded. Sys.ApplicationLoadEventArgs.cast = function(obj) {///<returns type="Sys.ApplicationLoadEventArgs"/> By also doing this in your own classes, your constructor could have code like this: inside = MyNamespace.Stuff.cast(inside); inside.---; // Intellisense is now fully enabled. Also, the lack of "in file" intellisense can be largely overcome by structuring your code differently to the way advertised/recommended. Firstly you need to ensure that your namespace is defined in an external file (so the namespacing code can run and create the namespace for you ready to go). I have found adding a "_namespace.js" file to contain js intellisense references as well as namespace definition works well. This file is not required to be downloaded to the client. These can also be chained from child to parent folders, for each namespace required. Secondly, after declaring your class (function) you can get intellisense by setting your function's prototype to that of the class you intend to inherit from. 
This duplexing (actual inheritance/temp inheritance) can be offset by setting these lines of code together. Thirdly, when setting out your prototype, rather than set the whole thing (and overwriting the prototype you have just faked), writing it out longhand to add each item to the prototype adds a few more keystrokes but, if done in the correct order, I've found can yield 99% working intellisense at all times. if ($methodThatAlwaysReturnsFalse()) MyNamespace.Stuff.prototype = Sys.xxxx.yyyy.prototype // Fake inheritance ... MyNamespace.Stuff.prototype._myVar = null; MyNamespace.Stuff.prototype.foo = function() this._myVar; // intellisense now enabled this.initialize(); // inherited methods are also available. Stumbled into this blog when I was searching for JavaScript doc comments. Great post, more so, great blog, more so, great fun cachy hippy evil banner. I did not see your trademark avatar here, but that is great too. @Jimmy: sorry, I know nothing about XPIDL. Hi, Could we have a static "parse" method added to all classes, in order for clients to be able to cast to type? return obj; sender = Sys.SomeClass.parse(sender); args = Sys.SomeClassEventArgs.parse(args); // Full JS intellisense now enabled. @89: that's been suggested before but it requires to write code in order to support IntelliSense. Instead, we prefer to make IntelliSense work in that case with no extra code. The "89" comment was left by me but I did not use that name nor that URL. I can see why you might like to have code comments solve this problem of "casting" but #1 Is there even a doc comment syntax for doing what is being done in the above example yet? #2 To implement and call to a cast method is not alot of (additional) code. #3 Even in C# we need to make use of casting, not always do we know the exact type being passed in. Even then we may wish to cast down or up to various types in the inheritance chain. #4 Sometimes one might want to cast one object into multiple seperate interfaces. 
So while I appreciate your keen-ness on not adding any additional code lines, I'm not sure you could cater for these scenarios with doc comments alone? @VR2: we're looking at ways to enable type hint comments in the next version of Visual Studio. Is there a way to add doc comments to global variables that are not part of a class? For example: /// <field name="MyVar" type="String" mayBeNull="false"> /// This is my global variable. /// </field> var MyVar = "my global value"; Also, is there a way to emulate an enumeration using doc comments? For example: ConnectionState = function() /// <summary>An enumeration of connection states</summary> /// <field name="Open" type="Number" integer="true" mayBeNull="false"> /// The connection is open. /// </field> /// <field name="Closed" type="Number" integer="true" mayBeNull="false"> /// The connection is closed. The code above doesn't work (e.g. no intellisense), but is there another variation that could make it work? I would like to be able to type "ConnectionState." and get a list of all the enumeration values, with intellisense for each value. Hi, Great blog, i just wanted to know if there is a way to add comments in javascript like we can in C#, simply by typing /// and then the ide would insert the code comment like C# does. Is something like this possible? return should have attribute like numParam and allow multi return comment. example: function func(a, b) { /// <returns type="String" numberParam="1"></returns> /// <returns type="Array"></returns> if (b === undefined) { return a; } else { return [a, b]; } Looking forward to Code Snippets Manager supporting JavaScript files/language, or better yet being able to do the /// inside a function and having it create template comments for me ... @Angus: If you haven't already, check out this presentation by Jeff: channel9.msdn.com/.../TL48 Finally figured out a way to document member variables for Intellisense: a param tag is needed along the field tag. 
myClass = function() { ///<summary>MyClass comment</summary> ///<param name="field" type="string">Dummy not visible.</param> ///<field name="field" type="String">member variable comment</field> }; It would be very nice if Visual Studio recognized jsdoc toolkit style comments in it's intellisense. The old xml style comments are very time consuming to type by hand. The only reason they were usable in C# was because when you type three slashes it autocompletes a comment structure for you. Having to type that by hand just isn't going to happen. The Jsdoc toolkit comment style has become pretty much the defacto standard. They are significantly easier to type by hand. Great post B. One question is still bothering me though and I haven't been able to find any resources which explain it: what the heck does the 'locid' attribute do on summary tags? Any chance you've stumbled across an answer to that? @Andrew: the locid is a hook that can be used to handle translations. It doesn't "do" anything out of the box. Any idea why vs2010 shows the type from a properties value tag, but not the comment inside the tag? For that you have to add a summary to both getter and setter... @Luke: you would have to ask the Visual Studio team about their implementation.
http://weblogs.asp.net/bleroy/archive/2007/04/23/the-format-for-javascript-doc-comments.aspx
John E. Howland
Department of Computer Science
Trinity University
715 Stadium Drive
San Antonio, Texas 78212-7200
Voice: (210) 999-7364
Web:

Subject Areas: Functional Programming, J Programming Language.
Keywords: Functional Programming, J Programming Language.

A computer is a mechanism for interpreting a language. Computers interpret (perform). Languages are also classified as being imperative or applicative depending on the underlying model of computation used by the system.

Abstraction is an important concept in computing. Generally, higher-level languages are more abstract. A key tool of abstraction is the use of names. An item of some complexity is given a name. This name is then used as a building block of another item which in turn is named, and so on. Abstraction is an important tool for managing the complexity of computer programs.

Functional programming is more than just using a functional programming language. The methodology of functional programming is different from that of imperative programming in substantive ways. The functional programming paradigm involves means for reliably deriving programs, analysis of programs, and proofs of program correctness. Because functional programming languages are based on ideas from mathematics, the tools of mathematics may be applied for program derivation, analysis, and proof.

Functional programming languages (applicative languages) differ from conventional programming languages (imperative languages) in at least the following ways: In applicative languages, names are associated with items which are stored in memory. Once created in memory, an item is never changed. Names are assigned to items which are stored in memory only if an item needs to be referenced at a later point in time. Items stored in memory are used as arguments for subsequent function applications during the course of a functional computation.

For example, in C (an imperative language) we might write:

    int foo;
    ...
    foo = 4;

In this example we associate the name foo with a particular memory cell of size sufficient to hold an integer. Its state at that moment is unspecified. Later foo is assigned the value 4, i.e., its state is changed to 4. In J (an applicative language) we might write:

    foo =: 4

An item 4 is created in memory and the name foo is assigned to that item. Note that in C we say that the value 4 is assigned to foo, but in J we say that the name foo is assigned to the value 4. The difference is subtle. With imperative languages the focus is on the memory cells and how their state changes during the execution of a program. With functional languages the focus is on the items in memory. Once an item is created, it is never changed. Names are more abstract in the sense that they provide a reference to something which is stored in memory, but is not necessarily an ordinary data value. Functions are applied to items producing result items, and the process is repeated until the computation is complete.

For example, consider the following C (an imperative language) program:

    #include <stdio.h>
    #include <stdlib.h>

    int main (int argc, char *argv[])
    {
        int sum, count, n;

        count = 0;
        sum = 0;
        while (1 == scanf("%d\n", &n)) {
            sum = sum + n;
            count++;
        }
        printf("%f\n", (float)sum / (float)count);
        exit(0);
    }

This program reads a standard input stream of integer values, computes the average of these values, and writes that average value on the standard output stream. At the beginning of the average computation, memory cells count and sum are initialized to have a state of 0. The memory cell n is changed to each integer read. Also, the state of sum accumulates the sum of each of the integers read and the state of count is incremented to count the number of integers read. Once all of the integers have been read, their sum is accumulated and the count is known, so that the average may be computed. To use the C average program one must compile the program.
    [jhowland@Ariel jhowland]$ make ave
    cc ave.c -o ave
    [jhowland@Ariel jhowland]$ echo "1 2 3" | ave
    2.000000
    [jhowland@Ariel jhowland]$

In functional languages, computations involve function application. Complex computations require the results of one application be used as the argument for another application. This process is known as functional composition. Functional languages have special composition rules which may be used in programs. Functional languages, being based on the mathematical idea of a function, benefit from their mathematical heritage. The techniques and tools of mathematics may be used to reason (derive, simplify, transform and prove correctness) about programs. For example, consider the following J (an applicative language) program:

    +/ % #

This program computes the average of a list of numbers in the standard input stream. The result (because no name is assigned to the resulting value) is displayed on the standard output stream. The J average program has several interesting features. To use the J average program you put the program and list of numbers in the standard input stream of a J machine (interpreter).
For example, +/ produces a function which sums the items in its argument while */ derives a function which computes the product of the items in its argument. Most functional languages allow functions to be applied to functions producing functions as results. Most imperative languages do not have this capability. For example, the derived function +/ sums the items in its argument while the derived function */ computes the product of the items to which it is applied. Imperative languages use language constructs (such as assignment) which describe the state changes of named memory cells during a computation, for example, a while loop in C. Such languages have many sentences which produce no values or which produce changes of other items as a side-effect. count++;which references the value of count(the C average program did not use this reference) and then increments its value by 1 after the reference. The C average program relied on the side-effect. Pure functional languages have no side-effects. Functional programming is important for the following reasons. Functional languages allow programming without assignments. Structured imperative languages (no goto statements) provide programs which are easier derive, understand, and reason about. Similarly, assignment-free functional languages are easier to derive, understand, and reason about. Functional languages encourage thinking at higher levels of abstraction. For example, Functions may be applied to functions producing functions as results. Functions are manipulated with the same ease as data. Existing functions may be modified and combined to form new functions. Functional programming involves working in units which are larger than individual statements. Algorithms may be represented without reference to data. Functional programming languages allow the application of functions to data in agregate rather than being forced to deal with data on an item by item basis. 
Such applications are free of assignments and independent of evaluation order and provide a mechanism to operate on entire data structures which is an ideal paradigm for parallel computing. Functional languages have been applied extensively in the field of artificial intelligence. AI researchers have provided much of the early development work on the LISP programming language, which though not a pure functional language, none the less has influenced the design of most functional languages. Functions languages are often used to develop protoptype implementations and executable specifications for complex system designs. The simple semantics and rigorous mathematical foundations of functional languages make them ideal vehicles for specification of the behavior of complex programs. Functional programming, because of its mathematical basis, provides a connection to computer science theory. The questions of decidability may be represented in a simpler framework using functional approaches. For example, the essence of denotational semantics involves the translation of imperative programs into equivalent functional programs. The J programming language [Burk 2001,Bur 2001,Hui 2001] is, a functional language.. In functional programming the underlying model of computation is functional composition. A program consists of a sequence of function applications which compute the final result of the program. The J programming language contains a rich set of primitive functions together with higher level functions and composition rules which may be used in programs. To better understand the composition rules and higher level functions we can construct a set of definitions which show some of the characteristics of the language in symbolic form using standard mathematical notation. We start with argument name assignments using character data. x =: 'x' y =: 'y' We wish to have several functions named f. 
g, h, and i, each of the form: f =: 3 : 0 'f(',y.,')' : 'f(',x.,',',y.,')' ) Rather than enter each of these definitions (and their inverses) we use a function generating definition which uses a pattern. math_pat =: 3 : 0 '''',y.,'('',y.,'')''',LF,':',LF,'''',y.,'('',x.,'','',y.,'')''' ) Applying math_pat produces the definition: math_pat 'f' 'f(',y.,')' : 'f(',x.,',',y.,')' Using explicit definition ( :) and obverse ( :.) we have: f =: (3 : (math_pat 'f')) :. (3 : (math_pat 'f_inv')) g =: (3 : (math_pat 'g')) :. (3 : (math_pat 'g_inv')) h =: (3 : (math_pat 'h')) :. (3 : (math_pat 'h_inv')) i =: (3 : (math_pat 'i')) :. (3 : (math_pat 'i_inv')) which produces definitions for each of the functions f. g, h, and i and a symbolic definition for each inverse function. Next, we use these definitions to explore some of J's composition rules and higher level functions. f g y f(g(y)) (f g) y f(y,g(y)) x f g y f(x,g(y)) x (f g) y f(x,g(y)) f g h y f(g(h(y))) (f g h) y g(f(y),h(y)) x f g h y f(x,g(h(y))) x (f g h) y g(f(x,y),h(x,y)) f g h i y f(g(h(i(y)))) (f g h i) y f(y,h(g(y),i(y))) x f g h i y f(x,g(h(i(y)))) x (f g h i) y f(x,h(g(y),i(y))) f@g y f(g(y)) x f@g y f(g(x,y)) f&g y f(g(y)) x f&g y f(g(x),g(y)) f&.g y g_inv(f(g(y))) (h &. (f&g))y g_inv(f_inv(h(f(g(y))))) x f&.g y g_inv(f(g(x),g(y))) f&:g y f(g(y)) x f&:g y f(g(x),g(y)) (f&g) 'ab' f(g(ab)) (f&(g"0)) 'ab' f(g(a)) f(g(b)) (f&:(g"0)) 'ab' f( g(a) g(b) )))) f^:3 y f(f(f(y))) f^:_2 y f_inv(f_inv(y)) f^:0 y y f 'abcd' f(abcd) f/ 2 3$'abcdef' f(abc,def) (f/"0) 2 3$'abcdef' abc def (f/"1) 2 3$'abcdef' f(a,f(b,c)) f(d,f(e,f)) (f/"2) 2 3$'abcdef' f(abc,def) 'abc' f/ 'de' f(abc,de) 'abc' (f"0)/ 'de' f(a,d) f(a,e) f(b,d) f(b,e) f(c,d) f(c,e) 'abc' (f"1)/ 'de' f(abc,de) Inexact (floating point) numbers are written as 3e10 and can be converted to exact representations by the verb x:. x: 3e10 30000000000 Exact rational representations are given by using r to separate the numerator and denominator. 
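As an aside, exact rational arithmetic of this kind is also available in Python's fractions module; a small sketch mirroring the computation (%a)+a^2 with a = 1r2 that the J session performs:

```python
from fractions import Fraction

a = Fraction(1, 2)       # the exact value J writes as 1r2
result = 1 / a + a ** 2  # J: (%a) + a^2
print(result)            # 9/4, the value J writes as 9r4

print(2 ** -1)           # 0.5, an inexact (floating point) reciprocal
print(Fraction(1, 2))    # 1/2, the exact reciprocal of 2
```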
]a =: 1r2
1r2
(%a)+a^2
9r4
%2
0.5
% x: 2
1r2

Using the reshape verb ($) we create a table using exact values.

] matrix =: x: 3 3 $ _1 2 5 _8 0 1 _4 3 3
_1 2 5
_8 0 1
_4 3 3
]inv_matrix =: %. matrix
3r77 _9r77 _2r77
_20r77 _17r77 39r77
24r77 5r77 _16r77
matrix +/ . * inv_matrix
1 0 0
0 1 0
0 0 1

Exact computation of the factorial function (!) produces large numbers.

! x: i. 20
1 1 2 6 24 120 720 5040 40320 362880 3628800 39916800 479001600 6227020800 87178291200 1307674368000 20922789888000 355687428096000 6402373705728000 121645100408832000
! 100x
93326215443944152681699238856266700490715968264381621468592963895217
59999322991560894146397615651828625369792082722375825118521091686400
0000000000000000000000

To answer the question of how many zeros there are at the end of !n, we use the function q:, which computes the prime factors of its integer argument. Each zero at the end of !n has a factor of 2 and a factor of 5, and it is easy to reason that there are more factors of 2 in !n than factors of 5. Hence the number of zeros at the end of !n is the number of factors of 5. We can count the zeros at the end of !n with the following program.

+/ , 5 = q: >: i. 4
0
!6
720
+/ , 5 = q: >: i. 6
1
!20x
2432902008176640000
+/ , 5 = q: >: i. 20
4

J supports complex numbers, using a j to separate real and imaginary parts.

0j1 * 0j1
_1
+. 0j1 * 0j1   NB. real and imaginary parts
_1 0
+ 3j4   NB. conjugate
3j_4

Other numeric representations include:

1p1   NB. pi
3.14159
2p3   NB. 2*pi^3
62.0126
1x1   NB. e
2.71828
x: 1x1   NB. e as a rational (inexact)
6157974361033r2265392166685
x: 1x_1   NB. reciprocal of e as a rational (inexact)
659546860739r1792834246565
2b101   NB. base 2 representation
5
1ad90   NB. polar representation
0j1
*. 0j1   NB. magnitude and angle
1 1.5708
180p_1 * 1{ *. 3ad30 * 4ad15   NB. angle (in degrees) of product
45

We could define functions rotate and rad2deg as:

rotate =: 1ad1 & *
rad2deg =: (180p_1 & *) @ (1 & {) @ *.

rotate rotates 1 degree counter-clockwise on the unit circle, while rad2deg gives the angle (in degrees) of the polar representation of a complex number.

rad2deg (rotate^:3) 0j1   NB. angle of 0j1 after 3 degrees rotation
93
+. (rotate^:3) 0j1   NB. (x,y) coordinates on the unit circle
_0.052336 0.99863
+/ *: +. (rotate^:3) 0j1   NB. distance from origin
1
+. rotate ^: (i. 10) 1j0   NB. points on the unit circle
1 0
0.999848 0.0174524
0.999391 0.0348995
0.99863 0.052336
0.997564 0.0697565
0.996195 0.0871557
0.994522 0.104528
0.992546 0.121869
0.990268 0.139173
0.987688 0.156434

A plot of the unit circle is shown in Figure 3.

'x y' =: |: +. rotate ^: (i. 360) 1j0
plot x;y

Howland [How 1998] used the often studied recursive Fibonacci function to describe recursive and iterative processes. In J, the recursive Fibonacci function is defined as:

fibonacci =. monad define
if. y. < 2 do. y. else. (fibonacci y. - 1) + fibonacci y. - 2 end.
)

Applying fibonacci to the integers 0 through 10 gives:

fibonacci "0 i.11
0 1 1 2 3 5 8 13 21 34 55

Howland [How 1998] also introduced the idea of a continuation: a monad representing the computation remaining in an expression after evaluating a sub-expression. Given a compound expression e and a sub-expression f of e, the continuation of f in e is the computation in e, written as a monad, which remains to be done after first evaluating f. When the continuation of f in e is applied to the result of evaluating f, the result is the same as evaluating the expression e. Let c be the continuation of f in e. The expression e may then be written as c f. Continuations provide a "factorization" of expressions into two parts: f, which is evaluated first, and c, which is later applied to the result of f. Continuations are helpful in the analysis of algorithms. Analysis of the recursive fibonacci definition reveals that each continuation of fibonacci in fibonacci contains an application of fibonacci. Hence, since at least one continuation of a recursive application of fibonacci is not the identity monad, the execution of fibonacci results in a recursive process. Define a monad, fib_work, to be the number of times fibonacci is applied to evaluate fibonacci.
fib_work is, itself, a fibonacci sequence generated by the J definition:

fib_work =. monad define
if. y. < 2 do. 1 else. 1 + (fib_work y. - 1) + fib_work y. - 2 end.
)

Applying fib_work to the integers 0 through 10 gives:

fib_work "0 i.11
1 1 3 5 9 15 25 41 67 109 177

Consider the experiment of estimating how long it would take to evaluate fibonacci 100 on a workstation. First evaluate fib_work 100. Since the definition given above results in a recursive process, it is necessary to create a definition which results in an iterative process when evaluated. Consider the following definitions:

fib_work_iter =: monad def 'fib_iter 1 1 , y.'
fib_iter =: monad define
('a' ; 'b' ; 'count') =. y.
if. count = 0 do. b else. fib_iter (1 + a + b) , a , count - 1 end.
)

Applying fib_work_iter to the integers 0 through 10 gives the same result as applying fib_work:

fib_work_iter "0 i. 11
1 1 3 5 9 15 25 41 67 109 177

Next, use fib_work_iter to compute fib_work 100 (exactly).

fib_iter 100x
57887932245395525494200

Finally, time (time =: 6!:2) the recursive fibonacci definition on arguments not much larger than 20 to get an estimate of the number of applications/sec the workstation can perform.

(fib_work_iter ("0) 20 21 22 23) % time 'fibonacci ("0) 20 21 22 23'
845.138 1367.49 2212.66 3580.19

Using 3500 applications/sec as an estimate we have:

0 3500 #: 57887932245395525494200x
16539409212970150141 700
0 100 365 24 60 60 #: 16539409212970150141x
5244612256 77 234 16 49 1

which is (approximately) 5244612256 centuries! An alternate experimental approach to solve this problem is to time the recursive fibonacci definition and look for patterns in the ratios of successive times.

[ experiment =: (4 10 $'fibonacci ') ,. ": 4 1 $ 20 21 22 23
fibonacci 20
fibonacci 21
fibonacci 22
fibonacci 23
t =: time "1 experiment
t
2.75291 4.42869 7.15818 11.5908
(1 }. t) % _1 }. t
1.60873 1.61632 1.61924

Note that the ratios are about the same, implying that the time to evaluate fibonacci is exponential.
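The recursive and iterative processes above can be mirrored in Python (a sketch for checking the small values; the names follow the J definitions):

```python
def fibonacci(n):
    """Recursive process, as in the J definition."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

def fib_work(n):
    """Number of applications of fibonacci needed to evaluate fibonacci(n)."""
    if n < 2:
        return 1
    return 1 + fib_work(n - 1) + fib_work(n - 2)

def fib_work_iter(n):
    """Iterative process computing the same counts, after J's fib_iter."""
    a, b = 1, 1
    for _ in range(n):
        a, b = 1 + a + b, a
    return b

print([fibonacci(i) for i in range(11)])
# [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
print([fib_work(i) for i in range(11)])
# [1, 1, 3, 5, 9, 15, 25, 41, 67, 109, 177]
```

The iterative version carries the last two counts along explicitly, so it runs in linear time, while the two recursive definitions take exponentially many calls.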
As an estimate of the time, perform the computation:

[ ratio =: (+/ % #) (1 }. t) % _1 }. t
1.61476
0 100 365 24 60 60 rep x: ratio^100
205174677357 86 306 9 14 40

This experimental approach produces a somewhat larger estimate of more than 205174677357 centuries. Students should be cautioned about certain flaws in either experimental design.

Suppose we have the following test scores.

[ scores =: 85 79 63 91 85 69 77 64 78 93 72 66 48 76 81 79
85 79 63 91 85 69 77 64 78 93 72 66 48 76 81 79
/:~scores   NB. sort the scores
48 63 64 66 69 72 76 77 78 79 79 81 85 85 91 93

A stem-and-leaf diagram has the unit digits (leaves) of observations on one axis and more significant digits (stems) on the other axis. These may be computed from the scores as:

stem =: 10&* @ <. @ %&10
leaf =: 10&|
sl_diagram =: ~.@stem ;"0 stem </. leaf

sl_diagram /:~scores
+--+-----------+
|40|8          |
+--+-----------+
|60|3 4 6 9    |
+--+-----------+
|70|2 6 7 8 9 9|
+--+-----------+
|80|1 5 5      |
+--+-----------+
|90|1 3        |
+--+-----------+

A more conventional frequency tabulation is given by the definition fr =: +/"1 @ (=/). The left argument is a range of frequencies and the right argument is a list of observations.

4 5 6 7 8 9 fr <. scores%10
1 0 4 6 3 2

This frequency tabulation may be shown as a bar chart (Figure 4) using the built-in plotting library.

pd 'new'
pd 'type bar'
pd 'xlabel "40" "50" "60" "70" "80" "90"'
pd 4 5 6 7 8 9 fr <. scores%10
pd 'show'

When tossing a coin a large number of times, the ratio of the number of heads to the total number of throws should approach a limit of 0.5. However, the absolute value of the difference between heads and tails may become very large. This can be illustrated with the following experiment, the results of which are shown in Figures 5 and 6.

toss =: >: i. n =: 500   NB. 500 coin tosses
heads =: +/\?n$2
ratio =: heads % toss
diff =: |toss - 2*heads
toss =: >: i. n =: 10   NB. a small trial
toss;ratio
+--------------------+------------------------------------------------------------+
|1 2 3 4 5 6 7 8 9 10|1 0.5 0.666667 0.75 0.6 0.666667 0.714286 0.625 0.555556 0.5|
+--------------------+------------------------------------------------------------+
toss;diff
+--------------------+-------------------+
|1 2 3 4 5 6 7 8 9 10|1 0 1 2 1 2 3 2 1 0|
+--------------------+-------------------+

We examine some elementary ideas from number theory to demonstrate the expressive power of J.

12 +. 5   NB. greatest common divisor
1
27 +. 3
3
1 2 3 4 5 6 7 8 9 10 11 12 +. 12
1 2 3 4 1 6 1 4 3 2 1 12
NB. The numbers <: 12 which are coprime with 12
(1 = 1 2 3 4 5 6 7 8 9 10 11 12 +. 12) # 1 2 3 4 5 6 7 8 9 10 11 12
1 5 7 11
NB. The numbers <: 12 which have common factors with 12
(-. 1 = 1 2 3 4 5 6 7 8 9 10 11 12 +. 12) # 1 2 3 4 5 6 7 8 9 10 11 12
2 3 4 6 8 9 10 12
NB. 8 9 10 have common factors but do not divide 12
((-. 1 = 1 2 3 4 5 6 7 8 9 10 11 12 +. 12) # 1 2 3 4 5 6 7 8 9 10 11 12) | 12
0 0 0 0 4 3 2 0

Next we generalize these expressions as the functions totatives and non_totatives.

totatives =: 3 : 0
p =. >: i. y.
(1 = p +. y.) # p
)
non_totatives =: 3 : 0
p =. >: i. y.
(-. 1 = p +. y.) # p
)

totatives 12
1 5 7 11
totatives 28
1 3 5 9 11 13 15 17 19 23 25 27
non_totatives 12
2 3 4 6 8 9 10 12
non_totatives 15
3 5 6 9 10 12 15

divisors =: 3 : 0
p =. non_totatives y.
(0 = p | y.) # p
)

divisors "0 (12 27 100)
2 3 4 6 12 0 0 0
3 9 27 0 0 0 0 0
2 4 5 10 20 25 50 100

The number of totatives of n is called the totient of n. We can define totient =: # @ totatives. An alternate (tacit) definition is phi =: * -.@%@~.&.q:.

(totient "0) 100 12
40 4
phi 100 12
40 4

Euler's theorem states that given an integer a coprime with n, then a^totient(n) mod n is 1. This leads to the definition:

euler =: 4 : 'x. (y.&| @ ^) totient y.'

2 euler 19
1
2 euler 35
1
3 euler 28
1
3 euler 205
1
3 euler 200005
1

The product of two totatives of n is a totative of n.
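These definitions, and Euler's theorem, can be double-checked with a quick Python sketch (an illustration only; math.gcd plays the role of J's +. here):

```python
from math import gcd

def totatives(n):
    """Positive integers up to n that are coprime with n."""
    return [p for p in range(1, n + 1) if gcd(p, n) == 1]

def totient(n):
    return len(totatives(n))

def euler(a, n):
    """a ** totient(n) mod n, which is 1 whenever gcd(a, n) == 1."""
    return pow(a, totient(n), n)

print(totatives(12))               # [1, 5, 7, 11]
print(totient(100), totient(12))   # 40 4
print(euler(2, 19), euler(3, 28))  # 1 1

# The product (mod n) of two totatives of n is again a totative:
ts = totatives(12)
print(all((x * y) % 12 in ts for x in ts for y in ts))  # True
```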
We can see this by using J's table (/) adverb.

totatives 12
1 5 7 11
12 | 1 5 7 11 */ 1 5 7 11
1 5 7 11
5 1 11 7
7 11 1 5
11 7 5 1

We notice that we have a group (closure, identity element, inverses, and associativity). There is a table adverb which may be used to present the above results.

table =: 1 : 0
u.table~ y.
:
(' ';,.x.),.({.;}.)":y.,x.u./y.
)

12&|@* table totatives 12
+--+-----------+
|  | 1 5 7 11|
+--+-----------+
| 1| 1 5 7 11|
| 5| 5 1 11 7|
| 7| 7 11 1 5|
|11|11 7 5 1|
+--+-----------+

Notice that addition residue 12 of the totatives of 12 does not form a group.

12&|@+ table 0 , totatives 12
+--+------------+
|  | 0 1 5 7 11|
+--+------------+
| 0| 0 1 5 7 11|
| 1| 1 2 6 8 0|
| 5| 5 6 10 0 4|
| 7| 7 8 0 2 6|
|11|11 0 4 6 10|
+--+------------+

Consider totatives of a prime value.

p: 6
17
totatives 17
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

17&|@* table totatives 17
+--+-----------------------------------------------+
|  | 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16|
+--+-----------------------------------------------+
| 1| 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16|
| 2| 2 4 6 8 10 12 14 16 1 3 5 7 9 11 13 15|
| 3| 3 6 9 12 15 1 4 7 10 13 16 2 5 8 11 14|
| 4| 4 8 12 16 3 7 11 15 2 6 10 14 1 5 9 13|
| 5| 5 10 15 3 8 13 1 6 11 16 4 9 14 2 7 12|
| 6| 6 12 1 7 13 2 8 14 3 9 15 4 10 16 5 11|
| 7| 7 14 4 11 1 8 15 5 12 2 9 16 6 13 3 10|
| 8| 8 16 7 15 6 14 5 13 4 12 3 11 2 10 1 9|
| 9| 9 1 10 2 11 3 12 4 13 5 14 6 15 7 16 8|
|10|10 3 13 6 16 9 2 12 5 15 8 1 11 4 14 7|
|11|11 5 16 10 4 15 9 3 14 8 2 13 7 1 12 6|
|12|12 7 2 14 9 4 16 11 6 1 13 8 3 15 10 5|
|13|13 9 5 1 14 10 6 2 15 11 7 3 16 12 8 4|
|14|14 11 8 5 2 16 13 10 7 4 1 15 12 9 6 3|
|15|15 13 11 9 7 5 3 1 16 14 12 10 8 6 4 2|
|16|16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1|
+--+-----------------------------------------------+

and

17&|@+ table 0 , totatives 17
+--+--------------------------------------------------+
|  | 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16|
+--+--------------------------------------------------+
| 0| 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16|
| 1| 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 0|
| 2| 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 0 1|
| 3| 3 4 5 6 7 8 9 10 11 12 13 14 15 16 0 1 2|
| 4| 4 5 6 7 8 9 10 11 12 13 14 15 16 0 1 2 3|
| 5| 5 6 7 8 9 10 11 12 13 14 15 16 0 1 2 3 4|
| 6| 6 7 8 9 10 11 12 13 14 15 16 0 1 2 3 4 5|
| 7| 7 8 9 10 11 12 13 14 15 16 0 1 2 3 4 5 6|
| 8| 8 9 10 11 12 13 14 15 16 0 1 2 3 4 5 6 7|
| 9| 9 10 11 12 13 14 15 16 0 1 2 3 4 5 6 7 8|
|10|10 11 12 13 14 15 16 0 1 2 3 4 5 6 7 8 9|
|11|11 12 13 14 15 16 0 1 2 3 4 5 6 7 8 9 10|
|12|12 13 14 15 16 0 1 2 3 4 5 6 7 8 9 10 11|
|13|13 14 15 16 0 1 2 3 4 5 6 7 8 9 10 11 12|
|14|14 15 16 0 1 2 3 4 5 6 7 8 9 10 11 12 13|
|15|15 16 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14|
|16|16 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15|
+--+--------------------------------------------------+

Finally, consider the definition powers, which raises the totatives to the powers 0 through the totient.

powers =: 3 : '(totatives y.) (y.&| @ ^) / i. 1 + totient y.'

powers 12
1 1 1 1 1
1 5 1 5 1
1 7 1 7 1
1 11 1 11 1
powers 17
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 2 4 8 16 15 13 9 1 2 4 8 16 15 13 9 1
1 3 9 10 13 5 15 11 16 14 8 7 4 12 2 6 1
1 4 16 13 1 4 16 13 1 4 16 13 1 4 16 13 1
1 5 8 6 13 14 2 10 16 12 9 11 4 3 15 7 1
1 6 2 12 4 7 8 14 16 11 15 5 13 10 9 3 1
1 7 15 3 4 11 9 12 16 10 2 14 13 6 8 5 1
1 8 13 2 16 9 4 15 1 8 13 2 16 9 4 15 1
1 9 13 15 16 8 4 2 1 9 13 15 16 8 4 2 1
1 10 15 14 4 6 9 5 16 7 2 3 13 11 8 12 1
1 11 2 5 4 10 8 3 16 6 15 12 13 7 9 14 1
1 12 8 11 13 3 2 7 16 5 9 6 4 14 15 10 1
1 13 16 4 1 13 16 4 1 13 16 4 1 13 16 4 1
1 14 9 7 13 12 15 6 16 3 8 10 4 5 2 11 1
1 15 4 9 16 2 13 8 1 15 4 9 16 2 13 8 1
1 16 1 16 1 16 1 16 1 16 1 16 1 16 1 16 1

In this section we discuss the representation of polynomials and operations defined on polynomials. A polynomial is determined by its coefficients, so we represent the polynomial as a list of coefficients written in ascending order rather than the usual descending order.
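The ascending-coefficient representation is easy to experiment with in Python as well; the following sketch parallels the J verbs the text goes on to define (names chosen to match):

```python
def peval(coeffs, x):
    """Evaluate a polynomial given its ascending coefficient list."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def psum(p, q):
    """Add coefficients of like terms, padding the shorter list."""
    n = max(len(p), len(q))
    return [(p + [0] * n)[i] + (q + [0] * n)[i] for i in range(n)]

def pprod(p, q):
    """Multiply polynomials by convolving their coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def pderiv(p):
    """Differentiate: scale each coefficient by its exponent, drop the constant."""
    return [i * c for i, c in enumerate(p)][1:]

print(peval([5, 2, 0, 1], 3))    # 38
print(psum([1, 2], [1, 3, 1]))   # [2, 5, 1]
print(pprod([1, 2], [1, 3, 1]))  # [1, 5, 7, 2]
print(pderiv([1, 3, 3, 1]))      # [3, 6, 3]
```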
For example, the polynomial 5 + 2x + x^3 is written as 5 2 0 1. To evaluate a polynomial we write:

peval =: (#. |.) ~
5 2 0 1 peval 3
38

A primitive for polynomial evaluation, p., is provided.

5 2 0 1 p. 3
38

To add or subtract two polynomials we add or subtract the coefficients of like terms.

psum =: , @ (+/ @ ,: & ,:)
pdif =: , @ (-/ @ ,: & ,:)

1 2 psum 1 3 1
2 5 1
3 psum 1 3 1
4 3 1
1 2 pdif 1 3 1
0 _1 _1

Next we consider the product and derivative of polynomials. If we make a product table, the coefficients of like terms lie along the oblique diagonals of that table. The oblique adverb /. allows access to these diagonals.

pprod =: +/ /. @ (*/)
1 2 pprod 1 3 1
1 5 7 2
pderiv =: 1: }. ] * i. @ #
pderiv 1 3 3 1
3 6 3
p.. 1 3 3 1   NB. There is a primitive for the derivative
3 6 3

To illustrate the ease with which higher level functional abstractions may be expressed, consider the problem of working with matrices whose elements are polynomials. We represent these as a boxed table. For example,

[ m =: 2 2 $ 1 2 ; 1 2 1 ; 1 3 3 1 ; 1 4 6 4 1
+-------+---------+
|1 2    |1 2 1    |
+-------+---------+
|1 3 3 1|1 4 6 4 1|
+-------+---------+
[ n =: 2 3 $ 1 2 3 ; 3 2 1 ; 1 0 1 ; 3 3 3 3 ; _1 _2 3 ; 3 4 5
+-------+-------+-----+
|1 2 3  |3 2 1  |1 0 1|
+-------+-------+-----+
|3 3 3 3|_1 _2 3|3 4 5|
+-------+-------+-----+

Next, we define new versions of psum, pdif, and pprod which assume their arguments are boxed polynomials.

psumb =: psum &. >
pdifb =: pdif &. >
pprodb =: pprod &. >

Then we can define a matrix product for these matrices whose elements are polynomials as:

pmp =: psumb / . pprodb

m pmp n
+---------------------+---------------+------------------+
|4 13 19 18 9 3       |2 4 3 6 3      |4 12 17 16 5      |
+---------------------+---------------+------------------+
|4 20 45 61 56 36 15 3|2 5 5 8 14 11 3|4 19 43 60 52 25 5|
+---------------------+---------------+------------------+
m pmp m
+--------------------+-----------------------+
|2 9 14 10 5 1       |2 10 20 22 15 6 1      |
+--------------------+-----------------------+
|2 12 30 42 37 21 7 1|2 13 38 66 75 57 28 8 1|
+--------------------+-----------------------+
m pmp^:0 m
+-------+---------+
|1 2    |1 2 1    |
+-------+---------+
|1 3 3 1|1 4 6 4 1|
+-------+---------+
m pmp^:1 m
+--------------------+-----------------------+
|2 9 14 10 5 1       |2 10 20 22 15 6 1      |
+--------------------+-----------------------+
|2 12 30 42 37 21 7 1|2 13 38 66 75 57 28 8 1|
+--------------------+-----------------------+
m pmp^:2 m
+----------------------------------------+----------------------------------------------+
|4 29 88 152 176 148 88 36 9 1           |4 31 106 217 304 309 230 123 45 10 1          |
+----------------------------------------+----------------------------------------------+
|4 35 137 323 521 613 539 353 168 55 11 1|4 37 158 418 772 1055 1094 864 513 222 66 12 1|
+----------------------------------------+----------------------------------------------+
m pmp^:10 x: &. > m
+-------------------------------------------------...
|1024 29952 424704 3899184 26124316 136500501 5803...
+-------------------------------------------------...
|1024 31488 471040 4577232 32551980 180983051 8205...
+-------------------------------------------------...

Iverson and others have written several books which use J to describe a number of computing related topics. One of these [Ive 1995] uses J in a rather formal way to express algorithms and proofs of topics covered in [Gra 1989]. Following is an example from the introduction of [Ive 1995]. A theorem is an assertion that one expression l is equivalent to another expression r.
We can express this relationship in J as:

t =: l -: r

This is the same as saying that l must match r; that is, t must be the constant function 1 for all inputs. t is sometimes called a tautology. For example, suppose

l =: +/ @ i.   NB. Sum of integers
r =: (] * ] - 1:) % 2:

If we define n =: ], the right identity function, then we can rewrite the last equation as:

r =: (n * n - 1:) % 2:

Next,

t =: l -: r

Notice that, by experimentation, t seems to always be 1 no matter what input argument is used.

t 1 2 3 4 5 6 7 8 9
1 1 1 1 1 1 1 1 1

A proof of this theorem is a sequence of equivalent expressions which leads from l to r.

l
+/ @ i.                              Definition of l
+/ @ |. @ i.                         Sum is associative and commutative (|. is reverse)
((+/ @ i.) + (+/ @ |. @ i.)) % 2:    Half sum of equal values
+/ @ (i. + |. @ i.) % 2:             Summation distributes over addition
+/ @ (n # n - 1:) % 2:               Each term is n - 1; there are n terms
(n * n - 1:) % 2:                    Definition of multiplication
r                                    Definition of r

Of course, each expression in the above proof is a simple program, and the proof is a sequence of justifications which allow transformation of one expression to the next.

Iverson discusses the role of computers in mathematical notation in [Ive 2000]. In this paper he quotes A. N. Whitehead:

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race.

F. Cajori:

Some symbols, like a^n, that were used originally for only positive integral values of n, stimulated intellectual experimentation when n is fractional, negative, or complex, which led to vital extensions of ideas.

and A. de Morgan:

Mathematical notation, like language, has grown up without much looking to, at the dictates of convenience and with the sanction of the majority.
Other noteworthy quotes with relevance for J include:

Friedrich Engels:

In science, each new point of view calls forth a revolution in nomenclature.

Bertrand Russell:

A good notation has a subtlety and suggestiveness which at times make it almost seem like a live teacher.

A. N. Whitehead, in Introduction to Mathematics.

Certainly, the J notation, being executable, relieves the brain of the task of doing routine calculations, letting it concentrate on the ideas behind the calculation. The notation also removes certain ambiguities of mathematical notation, not confusing _3 (minus three) with the application -3 (negate three). An example of the kind of extensions provided by a good notation to which Cajori refers can be found in the notation for outer product (spelled .). +/ . * (matrix product), expressed as an outer product, led to other useful outer products such as +./ . *. . Sadly, J is not widely accepted within computer science, and its lack of acceptance within mathematics is even stranger.
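Returning to the sum-of-integers theorem used in the proof example above: the two endpoints of that proof are themselves checkable programs in any language. A small Python sketch of the tautology t:

```python
def l(n):
    """Sum of the integers 0 .. n-1 (J: +/ @ i.)."""
    return sum(range(n))

def r(n):
    """Closed form (J: (] * ] - 1:) % 2:)."""
    return n * (n - 1) // 2

def t(n):
    """The tautology l -: r, expected to be 1 for every input."""
    return int(l(n) == r(n))

print([t(n) for n in range(1, 10)])  # [1, 1, 1, 1, 1, 1, 1, 1, 1]
```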
http://www.cs.trinity.edu/~jhowland/math-talk/functional1/
Be sure to not put a space between the "---" and the "vanilla". -- Sent from my phone. Please excuse my brevity. Advertising On February 2, 2018 1:35:07 PM PST, Patrick Connolly <p_conno...@slingshot.co.nz> wrote: >On Fri, 02-Feb-2018 at 10:25AM +0100, peter dalgaard wrote: > >|> Or, to avoid accusing you of lying. what you think is "vanilla" >|> probably isn't. What exactly did you do? On Unix-likes, I would do >|> something like this > >|> echo >'options(repos=list(CRAN="cran.r-project.org"));install.packages("Rcpp")' >| R --vanilla >|> >|> (or maybe is better...) > >Thanks for the suggestion. > >I simply did >R -- vanilla > >I tried it again this morning so that I could compare the output. >However, it *worked* fine -- just as I thought it would done >yesterday. Why it didn't work yesterday is a mystery. > >I've had a few other things behaving strangely on this machine so >there might be an OS issue, not an R issue. > >Thanks for taking the time. > >Patrick > >|> >|> -pd >|> >|> >|> >|> > On 2 Feb 2018, at 08:15 , Jeff Newmiller ><jdnew...@dcn.davis.ca.us> wrote: >|> > >|> > Your last statement is extremely unlikely to be true. The dplyr >package should not be present in a vanilla environment, so there should >be no such conflict. >|> > -- >|> > Sent from my phone. Please excuse my brevity. >|> > >|> > On February 1, 2018 11:00:01 PM PST, Patrick Connolly ><p_conno...@slingshot.co.nz> wrote: >|> >> When i tried to install the hunspell package, I got this error >|> >> message: >|> >> >|> >> Error: package ‘Rcpp’ 0.12.3 was found, but >= 0.12.12 is >required by >|> >> ‘hunspell’ >|> >> >|> >> So I set about installing a new version of Rcpp but I get this >message: >|> >> >|> >> Error in unloadNamespace(pkg_name) : >|> >> namespace ‘Rcpp’ is imported by ‘dplyr’ so cannot be unloaded >|> >> >|> >> How does one get around that? I tried installing Rcpp in a >vanilla >|> >> session but the result was the same. 
>|> >> >|> >> TIA >|> >> Patrick >|> >> >|> >> >|> >>> sessionInfo() >|> >> R version 3.4.3 (2017-11-30) >|> >> Platform: x86_64-pc-linux-gnu (64-bit) >|> >> Running under: Ubuntu 14.04.5 LTS >|> >> >|> >> Matrix products: default >|> >> BLAS: /home/pat/local/R-3.4.3/lib/libRblas.so >|> >> LAPACK: /home/pat/local/R-3.4] compiler_3.4.3 magrittr_1.5 R6_2.1.2 assertthat_0.1 >|> >> parallel_3.4.3 >|> >> [6] tools_3.4.3 DBI_0.3.1 dplyr_0.4.3 Rcpp_0.12.3 >|> >> grid_3.4.3 >|> >> >|> >> >|> >> -- >|> >> >~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~. >|> >> >|> >> ___ Patrick Connolly >|> >> {~._.~} Great minds discuss ideas >|> >> _( Y )_ Average minds discuss events >|> >> (:_~*~_:) Small minds discuss people >|> >> (_)-(_) ..... Eleanor Roosevelt >|> >> >|> >> >~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~. >|> >> >|> >> ______________________________________________ >|> >> R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see >|> >> >|> >> PLEASE do read the posting guide >|> >> >|> >> and provide commented, minimal, self-contained, reproducible >code. >|> > >|> > ______________________________________________ >|> > R-help....@cbs.dk Priv: pda...@gmail.com >|> >|> >|> >|> >|> >|> >|> >|> >|> ______________________________________________ R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see PLEASE do read the posting guide and provide commented, minimal, self-contained, reproducible code.
https://www.mail-archive.com/r-help@r-project.org/msg247726.html
QLabel does not name a type Greetings all, I'm a total newbie in Qt. I'm trying to add a public attribute to my OMR Class (Main class). The idea is to use this publicly available attribute to access a programatically added QLabel. To do this I added the property in my H file, like this: @ #ifndef OMR_H #define OMR_H #include <QMainWindow> namespace Ui { class OMR; } class OMR : public QMainWindow { Q_OBJECT public: explicit OMR(QWidget *parent = 0); QLabel *label_img; // <<< This one ~OMR(); private slots: void on_loadimg_clicked(); void on_pushButton_clicked(); private: Ui::OMR *ui; }; #endif // OMR_H @ But when I try to compile I get the error: "QLabel does not name a type" Why does that happen? PS: I'm pretty sure there is a better way to do what I'm trying to do, and if you can, please tell me how. But I also want to know exactly why is QT not allowing me to do it this way. Thanks in Advance. - Chris Kawa Moderators You are missing #include <QLabel> It greatly depends on what you are trying to do but it's usually not a good idea to expose "naked pointers" to the ui element. Instead you can make this label private and add some public access methods that have more semantic meaning, eg. displayMessage(Qstring msg) or showCountOfSomething(int value) and then manipulate label directly inside of them. This is sorta general rule of encapsulation. Outside of main window you shouldn't be concerned with gui types or pointers, but call some well named methods that take care of internals. Geez! that was really stupid! LOL! that fixed the problem. I did try to include QLabel but I did it only on the CPP's. Anyways thanks for your (fast) reply. I'll keep in mind your suggestion. - Johnforace This post is deleted!
https://forum.qt.io/topic/25534/qlabel-does-not-name-a-type
Download source code for Ajax style File Upload

Step 1: Create a new website, select C# as the programming language, and name the project FileuploadAjaxStyle.

Step 2: Drag and drop the HTML file upload control and a button. Double-click the button and it will generate the script automatically. Add a span and give it the id "message"; we are going to use this span to display the message. The page looks like this.

Step 3: Now let's go to the code-behind and add the ICallbackEventHandler interface to the page. Now the page looks like this.

Step 4: Register the page by adding GetCallbackEventReference. In GetCallbackEventReference we pass the argument as "arguments" and name the callback "results"; results is where we are going to store our result. If an error occurs then the function called onerror is invoked. true means the call completes asynchronously, without a postback.

Step 5: We are going to use WebClient, which exists in the namespace System.Net, in RaiseCallbackEvent. Create another page named FileuploadTarget.aspx, run FileuploadTarget.aspx, and copy the address from the address bar. The UploadFile method takes 3 parameters: the address, the web method, and the filename. When you upload the file, it is stored in eventArgument.

Step 6: Now let's add the JavaScript to the Default.aspx page.

Step 7: Now let's go to the code-behind of Default.aspx. We need to return cbref in the GetCallbackResult method.

Step 8: Now let's go to the FileuploadTarget.aspx code-behind. We use HttpPostedFile to retrieve the file sent from Default.aspx. The next steps are all basic: point to a folder called anup, check whether that directory exists, create it if not, and save the file to the directory just created.

Step 9: Now it's time to run the application.

I hope you people like this application. Thanks
http://www.dotnetfunda.com/articles/show/484/ajax-style-file-upload
Carrying privacy – No Graphene or Lineage support… am I stuck carrying a google-tracker now? (LG "Journey" on Tracfone)

I prefer flip phones, but now I need web access while on job-site. I don't know why I thought I could just put Lineage onto any handset (I swore you could just root a phone and put a new OS on; I didn't recall it being model-specific, but I haven't used smartphones for nearly a decade).

What can I do to eliminate, or reduce if that's all I can do, my "google footprint" here? I've used ADB to remove every package with "google" in the title, but that doesn't remove all the google stuff (and some of it simply cannot be removed; one of them, if removed, makes the phone go into an endless loop upon reboot, requiring a full hard-reset).

Thanks a ton for any advice. I know about F-Droid and am using the duckduckgo browser and FOSS alternatives to, say, Calendar and Youtube-viewer, but the fact that it's still on this device/hardware that seems to have "Google" tattoo'd to its bones is bothersome. Any & all things I can do to ameliorate this would be greatly appreciated!!
Declaring at Customs section: Can I declare the items of such nature at Mumbai Customs, Do i need to pay import duty as checked on cybex.in, which is as follows: Assessable Value – (A) (CIF Value + 1% Landing Charge of CIF) (A) 101000.00 Basic Duty – (B) (A) x Basic Duty R ate 10 (B) 10100.00 Preferential Duty – (B) (A) x Pref. Duty Rate 0 (B) 0.00 CVD: Additional Duty – (C) (A+B) x CVD Rate 12 (C) 13332.00 Central Excise Edu Cess – (D) (C) x Central Excise Edu Cess ra te 3 (D) 399.96 Customs Education Cess – (E) (B+C+D) x Customs Edu. Cess ra te 3 (E) 714.96 Special CVD – Special Duty – (F) (A+B+C+D+E) x Spl. CVD rate 4 (F) 5021.88 Total Custom Duty (A+B+C+D+E+F) INR 29568.80 Do i need to pay similar Duty during Import into Dubai? Again the question arises, Can i clear this at the Customs section myself as the exporter of goods Carrying likes from an already posted picture and setting as my profile picture Hi guys struggling to post a already posted picture of myself to repost as my profile picture. It was just normal post not a profile picture. I want to set it with the likes i already had from when i originally posted a few weeks back. Thanks sharepoint online – Custom CSS hide O365 header carrying to new site when opened in same tab I have added a custom css spfx extension which simply hides the O365 header below HubNav modern. Now issue arises when user clicks on a link which opens in same tab, the site they navigate to is also having the css applied even though my spfx extension is applied to just the hub site home. The new site home page has header hidden, but clears off when refreshed. Is there any way to clear the css by checking if the page is navigating away? Any other ideas? dnd 5e – How much does a sprite familiar’s equipment count against its carrying capacity? Your. dnd 5e – Does a sprite familiar’s equipment count against its carrying capacity? 
5e offers no guidance on the subject of equipment weight for differently sized creatures.

Unlike previous editions, nowhere in 5e's published rules is the question of equipment weight for larger or smaller creatures addressed. The DMG's section on designing new monsters briefly addresses weapons sized for larger creatures:

Big monsters typically wield oversized weapons that deal extra dice of damage on a hit. Double the weapon dice if the creature is Large, triple the weapon dice if it's Huge, and quadruple the weapon dice if it's Gargantuan.

But this is only about the damage the weapon deals; weight is not addressed. As written, all equipment weighs the same no matter what size it is. This is obviously nonsense, but 5e's design has made a deliberate decision to elide this kind of concern (in contrast to previous editions, which cover the subject in much more detail). If the DM wants to get into this level of detail, they must decide for themselves what rulings to apply.

Personally, when I adjudicate item weight by creature size, I find the easiest way to do it within 5e's existing rules is simply to apply the same scaling to equipment weight as applies to carrying capacity: equipment for Tiny creatures weighs half what it normally does, and equipment for creatures larger than Medium weighs double per step in size. It's not the most realistic ruling you could make, but it is at least consistent with the game's existing abstractions about weight and size.

dnd 3.5e – Is there any way to gain the endless special quality without carrying around a small necromantic magic item?

Dragon Magazine #354 has a fairly well-known special quality called endless, which prevents aging and all its normal effects, but which is unfortunately not actually granted by its associated feat, Wedded to History.
Within the pages of this magazine, the only way to gain the quality (DM fiat aside) is to have someone cast kissed by the ages on you, and then to forevermore give up a magic item body slot and risk taking a penalty if you ever lose the item, which also radiates enough necromancy to make many NPCs very uneasy. But what about methods outside the pages of the magazine? Is there any sort of feat, feature, or other special means by which someone can gain this extraordinary special quality without needing to go around holding a pseudo-phylactery?

Interview questions – carrying out traversing from the left or from the right

I saw Professor Sheep's Leetcode 140: Word Break II Youtube video here: CPP code here: My own Python translation code is below. I think in this case the code can be optimized by traversing from right to left, since the code always starts from the last word. I submitted both implementations to Leetcode: the from-the-left version ran in 40 ms and the from-the-right version in 36 ms. I don't think left and right are the same here. It is like functional programming, where foldLeft has advantages over foldRight in general (absent language optimizations or special cases). In this case, I think the right-to-left approach has the advantage. Please help me check my thinking, or convince me otherwise.
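Before the two listings, here is a self-contained version of the shared memoized recursion, with a quick check on the classic Leetcode 140 example (the test input is mine, not from the post):

```python
from typing import List


def word_break(s: str, word_dict: List[str]) -> List[str]:
    """Return every way to split s into words from word_dict (Leetcode 140)."""
    dic = set(word_dict)
    memo = {}

    def internal(sc: str) -> List[str]:
        if sc in memo:
            return memo[sc]
        res = []
        if sc in dic:
            res.append(sc)  # the whole suffix is itself a word
        for pos in range(1, len(sc)):
            right = sc[pos:]
            if right in dic:  # fix the last word, recurse on the prefix
                for left_split in internal(sc[:pos]):
                    res.append(left_split + " " + right)
        memo[sc] = res
        return res

    return internal(s)


print(sorted(word_break("catsanddog", ["cat", "cats", "and", "sand", "dog"])))
# -> ['cat sand dog', 'cats and dog']
```

Because every suffix is expanded at most once and then memoized, the traversal direction only changes the order in which subproblems are first touched, which is consistent with the two timings above being so close.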
From the left (like Professor Sheep):

```python
class Solution:
    def wordBreak(self, s: str, wordDict: List[str]) -> List[str]:
        dic = set(wordDict)
        memo = {}

        def internal(sc):
            if not sc:
                return []
            if sc in memo:
                return memo[sc]
            res = []
            if sc in dic:
                res.append(sc)
            for pos in range(1, len(sc)):
                right = sc[pos:]
                if right not in dic:
                    continue
                left = sc[:pos]
                lfls = internal(left)
                for lf in lfls:
                    res.append(lf + " " + right)
            memo[sc] = res
            return res

        return internal(s)
```

From the right:

```python
    def wordBreak(self, s: str, wordDict):
        dic = set(wordDict)
        memo = {}

        def internal(sc):
            if not sc:
                return []
            if sc in memo:
                return memo[sc]
            res = []
            if sc in dic:
                res.append(sc)
            for pos in range(len(sc), -1, -1):
                right = sc[pos:]
                if right not in dic:
                    continue
                left = sc[:pos]
                lfls = internal(left)
                for lf in lfls:
                    res.append(lf + " " + right)
            memo[sc] = res
            return res

        return internal(s)
```

legal – Obtain a Weapon Carrying Permit in a country where you work

I'm currently going to school to become an archaeologist, and I'm wondering whether it's possible to get a gun license in another country if I'm working in a potentially dangerous situation: where the valuables or excavations I work with may be attacked, or for a reason that I can not control, such as a war. I have heard that it is possible with the written permission of the country you are entering, and I wonder what the probable cause would need to be and how to go about it. I also know that some archaeologists of the past, of both British and American nationality, carried pistols and rifles with them, for example in the Amazon jungle, as in the case of the Fawcett expedition, but those examples are not necessarily modern ones. Remember, I only want to take a weapon to the most dangerous places my work might bring me: for example, a war zone, or a wilderness area with little effective legal protection.
And no, for all Englishmen, I do not intend to bring it to Britain or any of the larger European countries like Germany, Russia or France, all of which have adequate protection against prosecution for my concerns.
https://proxies-free.com/tag/carrying/
I'm trying to instantiate a prefab which needs a link to my GUIText object. I really don't know what to do... I had my player as an object in the hierarchy, but now I need to make a prefab of it, so this is where I left off before creating the prefab:

    class .....
        public GUIText speedDisplay;

        void Update() {
            updateSpeed();
        }

        public void updateSpeed() {
            speedDisplay.text = "Speed: " + (int)speed + ", " + (int)((speed/maxSpeed)*100) + "% of max";
        }

Is the GUIText object you are referencing into the prefab inside the prefab's hierarchy?

I don't really know what you mean, but no, it's not, and I can't drag it in.

Obviously. Isn't there a way to connect them with code?

Answer by kleber-swf · Oct 28, 2016 at 06:33 PM

You can't create a prefab with a reference to something outside its own hierarchy. For example, if you have a scene like this:

    [screenshot: hierarchy with the GUI Text outside the Prefab]

and reference the GUI Text inside the Prefab, the reference isn't saved, because Unity cannot get the instance of that GUI Text in other scenes, for example. To make this work, the GUI Text needs to be inside the Prefab's hierarchy, like this:

    [screenshot: hierarchy with the GUI Text inside the Prefab]

But! If your reference is intended to be dynamic, you could add a tag (let's say named "tag") to the GUI Text and find it with:

    GameObject.FindGameObjectWithTag("tag")

Remember to cache it in the Start() method if you use the object often, because FindGameObjectWithTag is kinda heavy.

Answer by $$anonymous$$ · Oct 28, 2016 at 04:17 PM

First off, I want to point out that GUIText is outdated. You should use Text instead. If the Text is on an object that has a specific tag, you can do this:

    using UnityEngine.UI;

    public Text exampletext;

    void Start() {
        exampletext = GameObject.FindGameObjectWithTag("taggoeshere").GetComponent<Text>();
    }

GameObject.FindGameObjectWithTag gets the gameobject in the scene that has the tag you want, and then GetComponent<Text>() gets the component of the Text type from the GameObject.
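Putting the two answers together, here is a minimal sketch of a prefab script that resolves a scene Text object by tag and caches it in Start(). The tag name "SpeedDisplay" and the class/field names are hypothetical, made up for illustration; they are not from the question:

```csharp
using UnityEngine;
using UnityEngine.UI;

// Attached to the player prefab.
public class PlayerSpeedHUD : MonoBehaviour
{
    public float speed;
    public float maxSpeed = 100f;

    private Text speedDisplay;  // scene reference, resolved at runtime

    void Start()
    {
        // A prefab cannot serialize a link to an object outside its own
        // hierarchy, so look the Text object up by tag once, at startup.
        GameObject go = GameObject.FindGameObjectWithTag("SpeedDisplay");
        if (go != null)
            speedDisplay = go.GetComponent<Text>();
    }

    void Update()
    {
        if (speedDisplay != null)
            speedDisplay.text = "Speed: " + (int)speed + ", "
                + (int)((speed / maxSpeed) * 100) + "% of max";
    }
}
```

Any instantiated copy of the prefab will then pick up whatever scene object carries the SpeedDisplay tag, so the prefab itself stays free of broken references.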
https://answers.unity.com/questions/1263734/guitext-to-a-prefab.html
Personally, given the amount of binary releases that are distributed off of our very own infrastructure (and I'm not even counting our namespace on things like Docker hub -- I'm just talking about the INFRA we run) I don't think that the argument "binary releases are NOT endorsed by ASF" will fly very far. I think the best defense for us is to, perhaps, position them as UGC, but given the practices around existing PMC I don't think that would be easy to do. So the question really boils down to -- how much of a liability this could potentially be for us? Thanks, Roman. On Tue, Nov 6, 2018 at 4:55 PM Daniel Shahaf <d.s@daniel.shahaf.name> wrote: > > CC += legal-discuss@ since this really isn't an incubator-specific topic any > more. The context is precompiled binary artifacts on >. > > David Nalley wrote on Tue, Nov 06, 2018 at 17:06:50 -0500: > > So let's assume a PMC (or PPMC) goes through the same process with > > binaries in terms of reviewing, voting on, promoting, and publishing > > to the world a binary release on behalf of the PMC and Foundation. > > Binaries are published to the same location that source tar balls are > > - are featured on download pages provided by the ASF. Perhaps even > > with the situation being that people download the binary artifacts > > from ASF resources tens of thousands, or maybe even millions of times > > more frequently than the source tarballs. > > > > From that scenario I have some questions: > > > > 1. Would a reasonable person (or jury) suspend disbelief long enough > > to consider our protestations that our 'releases' are source only, and > > that as a Foundation we didn't release, propagate, promote, or > > distribute the binaries in question? A rose by any other name..... > > 2. Should the Board be taking an active interest in projects (release > > managers?) who promote and publish their binaries in this manner on > > our hardware? > > 3. Is lack of Board action tantamount to tacit approval of this > > behavior? 
Can we really claim ignorance? > > 4. Should Infrastructure be actively monitoring and removing binaries > > which find their way to dist.a.o/archive.a.o - especially since our > > header for dist.a.o says that the directories contain releases of > > Apache software? > > 5. Should we be alerting individual release managers that publishing > > convenience binaries exposes them individually to liability? > > 6. What alternative can we offer to projects that want to distribute binaries? > Can the RM upload precompiled binaries to his space? > Can the project's download page link to them as the > primary/canonical/recommended binaries? Can the project's download page link > to the RM's binaries as one alternative among many
http://mail-archives.eu.apache.org/mod_mbox/incubator-general/201811.mbox/%3CCA+ULb+u1CHs93A1QV_--ERbDQMuDEVbnusbKrR4G0KH-FnjkEw@mail.gmail.com%3E
A couple of people have posted pictures from FlashForward: If you post some pictures, let me know, and I will list them here.

Here are all of the FlashForward slides that I have found posted online: If you know of some more slides online, post them in the comments.

Christ. I came in a little late. orangedesign created the menus for LucasArts' Starfighter games (2) for the PS2. All of the menus were created within Macromedia Flash and played back within a Flash player included with the game. They have to consider localization; they do the German version first, since German words tend to be the longest. Memory considerations: only 32 megs of RAM on the PS2. The compressed sizes of images don't matter; it is the uncompressed size that counts. Reducing the number of colors helps. They butterflied the images (symmetrical, so they only have to load half of each image and then flip it).

design process: fred is showing some of the images they presented to LucasArts to get an idea of the type of imagery they were looking for in the Jedi Starfighter menus (just regular images they found on the web). Originally they had a more dirty, mechanical interface, but in the end it became a more modern, clean look. Showed a series of drawings of early menu prototypes. Really cool.

tips and tricks: memory issues, framerate issues, localization, Sony requirements, card issues. These all have to be asked and considered within the flash movie.

Middleware layer: the layer of scripting between the hardware / game and Flash. An example of setting something in the game's middleware; this tells the game to play in stereo mode:

    getURL("callback://SetStereoStatus", 2);
    getURL("callback://GetStereoStatus", "variableName");

The second call tells the middleware what variable name to use when it passes the data back to Flash. You have to wait one frame in Flash before you can reference the data.

Why should Flash be on the PlayStation 2? It is currently not available there, and the PlayStation 2 is a very popular platform.
There are three versions of the Flash player for the PS2. Showing pictures of the PS2 Linux kit; pretty cool. It comes with a hard drive and an ethernet port. website: Fred wrote a chapter on Flash for the PS2 in Flash Enabled: Flash Design and Development for Devices. orangedesign.com secretlevel.com lucasarts.com

future: they want to be able to use the Flash player within the games, overlaying the action and even showing video.

testing: worked on a PC. They had a firewire connection into the test unit that they used to upload the entire game into the PS2, and then tested it on the console. orange did not do the sound. end of session.

Phillip started with an overview of the Pocket PC platform, and showed some accessories for the Pocket PC (keyboards, cameras, storage, etc)... Can you make money with Flash on the Pocket PC? Yes. Phillip brought out a Pocket PC phone edition (very nice and sleek). phillip talked about his Animated Today program. This is a program that allows you to run flash animations as your Pocket PC today-screen background. He showed some fish, a NYC subway map, and a flash movie that displays current stock data. phillip showed his store for Animated Today screens. He then logged onto handango.net to show how he is making money selling them. Showed some new screens that he is going to release (some really cool animations, some of Garfield). Showed an Ericsson T68 phone and some of its accessories. Showed how to dial the phone from Flash, using fscommand and tellurl (supported by multiple phones). phillip then showed some clothes made specifically to hold and work with devices. He showed his wireless Operation app: a flash interface for the classic Operation game that controls a vest phillip wears. If you touch the sides, it shocks whoever is wearing the vest (over the web). phillip keeps track of where people try to shock him (the crotch). Showed the Buddy Lee challenge / staring contest, cobranded with a television campaign.
phillip said that he has seen a watch with a flash interface (but he can't talk about it). Showed WearFi, GPS bracelets for kids (lets you know where your kids are). He then showed a flash interface for a GPS movie trailer tracker: as you walk near a movie theater, it tells you what movies are available. Then he showed his GPS walking stick, basically a Pocket PC on top of a walking stick with a flash interface. phillip is a co-author of Flash Enabled. His web site is flashenabled.com. end of session.

This was my session on Flash Remoting (sorry, I couldn't blog it in real time). It was an advanced session, and I spent a lot of time talking about architecting Flash Remoting applications. I discussed object-oriented client / server interfaces, a design pattern where you encapsulate all of your client / server code within ActionScript objects. This makes the code more reusable, and also creates a simplified ActionScript API for the service. It also abstracts all of the complexity of the code and the client / server communications away from the developer / user. I showed some simple ActionScript examples that demonstrated this (I will upload them later). I then went through and showed some applications that used some client / server service libraries (Email and Stock). I showed a stock charting app, as well as a Flash email client I created (which has a sneak peek of some new components). I then showed a simple flash application that called the google web service via Flash Remoting and allowed you to search google from Flash. Finally, I pulled up the stock application again, changed one line of code (pointing to the server), and switched the back end from ColdFusion to a .NET DLL written in C#. The Flash Remoting code was the same connecting to both. The server side code has no flash-specific code in it, and I pulled up a windows app that used the DLL to demonstrate this. I will try to post some more details and files later.

Christian Cantrell.
Christian started by covering some of his slides that he was not able to get to yesterday. He also showed some of his code for the Flash mp3 player that hooks up to his iPod, and finally he showed a simple app that uses Flash Remoting to create a Flash console for OSX (it allows him to run a pseudo console from a Flash app).

I just realized that my session is at 10:15 and not 11:00, so I am going to have to cut these notes short.

I noticed a couple of people taping the Macromedia Keynote on Thursday. Does anyone plan on posting video online? If so, let me know.

Josh Dura has posted the winners of the FlashForward Film Festival. You can view them here. [via jdb]

Last day of Flash Forward, and it looks like it will be as busy as the previous couple of days. I am going to go to the Flash Remoting QA for Christian Cantrell's session, and then I have my session titled Rich Application Development with Flash Remoting. After that I have my QA session (I am going to miss Erik Natzke's session ;( ). In the afternoon, I am going to phillip torrone's session and Fred Sharples' (Star Wars in Flash: Developing for the Playstation 2). (I am not sure what session I will go to at the end.) Anyways, I am going to try to post my notes and comments. You can view them and all of my other FlashForward notes in my FlashForward section.

Everyone is waiting for the Flash Film Festival to start (drinking free beers). I was planning to blog the film festival winners during the session, but my battery is super low, so I might not be able to. Check back later for updates.

Christian Cantrell : qantrell@yahoo.com : note all code examples and slides (very nice) will be included online. Flash Remoting is a server-side technology which allows for the integration of Flash applications with existing application server logic. It exposes remote services through a simple ActionScript API and uses AMF (Action Messaging Format) to communicate between player and server.
AMF is a very efficient binary format made specifically for ActionScript.

Getting started: you need the Flash Remoting ActionScript objects.

Creating packages for Flash Remoting services: A package is implemented as a logical directory structure where the files that contain Flash Remoting services are kept (starting from the document root). It uses the domain name in reverse (com/domainname) to prevent namespace collisions (i.e. two services with the same name). By placing them in a directory structure like this, you don't have to worry about other people's files overwriting yours (just like java packages). In ActionScript, you use dots instead of slashes to refer to the directory structure (com.domainname). (note: i think you can do it either way -mc)

Creating Flash Remoting services with ColdFusion pages (.cfm): ColdFusion pages that are called through Flash Remoting have access to the "flash" variable scope, which contains all of the data sent from Flash. The directory containing the page is considered the service, and the page name is treated like the function name. A simple flash remoting service in CFML:

    <!---package com.macromedia.flashforward--->
    <cfset str = #flash.params[1]# />
    <cfset flash.result = Reverse(#str#) />

This is a ColdFusion service that takes a string and returns it in reverse. The params array contains all of the arguments and data passed in from the Flash application (via Flash Remoting).

ActionScript:

    NetServices.setGatewayURL(url);
    // this tells Flash where the server is located

    var con = NetServices.createGatewayConnection();
    // this returns a reference to the server

    var pageService = con.getService("com.macromedia.flashforward", this);
    // this gets a reference to the remote service

The second parameter specifies where the functions that will handle the data sent back from the server live (in this case on the same timeline as the ActionScript code). (note: christian is running his presentation from a powerbook with OSX.
He is running ColdFusion MX on OSX.)

Creating services for Flash using ColdFusion Components: A single ColdFusion Component can provide multiple services (implemented as functions). These can also be called directly from ColdFusion pages, or as web services. note: the code will be available online.

ActionScript: Instead of specifying callback functions that receive the data from the server on the main timeline, you should attach them to an object, so you keep the functions within their own scope / namespace:

    var result = new Object();
    result.onResult = function(data) { /* stuff here */ };

christian named his buttons the same as his remote service functions; that way he can use the button labels to decide which function to call:

    function clickHandler(button) {
        urlService[button.getLabel()](input.text);
    }

Performing a database query using a ColdFusion Component (CFC): The entire ColdFusion Query object can be returned directly to Flash; it will be converted to an ActionScript RecordSet object. You can set a pagesize, which determines the number of rows the server will return to the client until the client asks for more (i.e. only initially send the first 10 rows of 1000). The server will then send the rest of the rows on demand (without going back to the database, only to the Flash Remoting adaptor). Just use a regular CFQUERY tag to get a Query object. You can then return it directly to Flash using CFRETURN. A function that receives data from the server:

    function onResult(result) {
        // dataConsumer, dataProvider, labelString, dataString
        DataGlue.bindFormatStrings(name, result, "#LastName#, #FirstName#", " #EmployeeID#")
    }

note, my battery is running low.

Server-Side ActionScript: SSAS allows Flash developers and designers access to server-side programming with almost no learning curve. A simple SSAS service:

    function sayHello() {
        return "Hello World";
    }

It is called the same way from Flash as you would call ColdFusion services. my battery died.
Christian showed some server side actionscript examples (connecting to a DB and loading files). He then showed a Flash app that used Flash Remoting to play mp3s off of his iPod. pretty sweet. The slides for this presentation are online.

Robert started off by showing off some of the Flash drawing experiments that have been done by the flash community (all to the tune of "Christmas on acid"). (btw, the room is packed.)

Shape Drawing API: Every shape that you draw will be within a movie clip.

MovieClip.lineTo(): draws a line. Initially it is invisible, so you need to use lineStyle to give it a color. It starts at the 0,0 coordinate of the movie clip that it is contained within.

MovieClip.moveTo(): moves the drawing point to a new position. It does not do anything visible.

MovieClip.curveTo(): draws curves. It takes 4 numbers, two for each point: control and anchor. Anchors are where the curve ends up, and the control influences the curve. The line curves toward the control point, but doesn't touch it.

MovieClip.beginFill(): for creating solid shapes.

MovieClip.endFill(): If you are not at the starting point, then the final side will automatically be drawn to the starting point. You can then remove the line style (leave it empty), and it will remove the last line.

MovieClip.beginGradientFill(): for dynamically creating gradient fills.

MovieClip.clear(): will clear all drawings within the movieclip. It resets drawing properties (such as lineStyle).

For frame-based animation, use the onEnterFrame event. The first step is to clear the graphics from the previous frame, and then draw. Repeat. (Try putting in a lot of values set by random().)

Showing more cool drawing api examples (a bunch of 3d wireframe worlds by glenn thomas, a cad-like drawing program, sam wan's mysterons, ...).

Extending the drawing API: MovieClip.drawLine() (see robert's site for notes / example code) takes a starting and an ending coordinate and draws a line. Showing a bunch of other custom drawing functions (triangles). note, i had to leave early (halfway through).
I was planning on putting up notes on this session, but Pete is showing so much cool stuff that I forgot to take notes. Basically, he is building video (quicktime, tv, etc) using Flash. Really cool stuff.

The keynote was awesome. To quote Lynda Weinman, "This was the best keynote ever". It started off with Kevin Lynch talking about the Macromedia MX launch, the most significant and aggressive launch in the history of Macromedia. Kevin then spent some time discussing usability in Flash, pointing out some of the books that have been written on the topic (skip intro, flash 99% good) and some of the people who have been pushing usability in Flash, such as Chris MacGregor of Flazoom.com and, more recently, Jakob Nielsen. Kevin then talked about the importance of engaging in a discussion about Flash usability, especially with those who are critical, in order to learn from some of their arguments. Kevin then discussed Macromedia Flash and advertising, and pointed out the incredible growth of the use of Flash in online advertising, and its successes. In August of last year, there were 100 million Flash advertising impressions a week. Today there are 1 billion impressions a week. We then showed a Mike's hard lemonade Flash movie that was created from a television commercial. Kevin then invited me up on the stage, and we created a simple application using Flash MX, ColdFusion MX and Flash Remoting. Of course, I had some typos in my code, but after those were fixed, we created an app that used a new DataGrid component that we have been working on to display the results of a database query showing all of the FlashForward session information. (You will have to use the link above to view them.) After another long round of applause, Kevin asked Chris Hock from the Flash Communication Server team to come onstage and build a Flash-based communication application.
Using the Flash Communication Server components (kevin noted they would be available soon), he built an application that had video windows, a login element, a list of everyone connected, and a status light. He wrote no ActionScript at all. He published, and then some Macromedia people in the audience connected via their webcams. After that, the audience was invited to go to the computers and ask questions via the app that was just built. I spoke with a lot of people after the keynote, and everyone was really excited about the presentation and the communication server. This was definitely the best keynote that I have ever seen us present.

EricD has posted some pictures from Flash Forward.

I just got back from the Macromedia Flash Communication Server party. Lots of people showed up to see information and demos of the Flash Communication Server. The sound was messed up, but people seemed to be excited about the demos. We showed the Jeremy Allaire Macromedia MX presentation, authoring Flash Communication apps with the Flash Communication components (not yet available) (imagine building a full-fledged text and video chat room in about 2 minutes), and a collaborative whiteboard application. Afterwards, there was a chance to meet and chat with everyone. I talked to Phillip Torrone, Christian Cantrell, Glenn Thomas, Branden Hall, Nigel Pegg, Eric Dolecki (met him for the first time), Natalie Zee, Steve Leone, Erik Natzke, Dave Yang, and Veronique Brossier (first time). Everyone was excited about the possibilities of the new technologies, and I could see them already starting to think up cool things to do with them.

Branden Hall and Christian Cantrell are on the same DC - NYC train that I am on. Branden showed me some useful Flash Remoting stuff he figured out. You can pass a responder object for a remote method as the first argument to the method call. This allows you to pass a separate instance of an object when you are calling the same remote method multiple times.
The only caveat is that the responder object must have an onResult method. It will not work if you use functionName_Result. Here is an example:

    var result = new Object();
    result.onResult = function(data) {
        trace("data received");
    }

    //netservices code snipped
    service.functionName(result, "foo");

    //netservices code snipped
    service.functionName(result, "foo");

Pretty cool stuff.

I am off for NYC and FlashForward. I will be posting FlashForward-related information in the FlashForward section. I will be away from email for a couple of hours this afternoon.

Flash Forward is this week in New York City. I am heading up there tomorrow, and thus may not post much for a day or so. I am planning on blogging the conference and reporting on sessions, premieres, announcements (except some big ones), and anything else going on at the conference. I have created a new section, FlashForward, where I will be posting everything from the conference. If there is anything in particular that you want to hear about or from, post your requests in the comments section, and I will see what I can do.
http://radio.weblogs.com/0106797/categories/flashForward/
Archives

Business Week's latest on Overseas Outsourcing - Dot Bomb Part Deaux?

Infoworld Development Survey

Commerce One goes bankrupt

Everyone else is making money but me.........
Well, not really. I find that people seem to always think that others are making huge money from whatever they are doing.

Linux -> Windows migration

HP drops Itanium 2 workstations

Simple Reflection in .NET Whidbey
Here is some simple Reflection code that I wrote to see what is in some .NET assemblies. Suggestions are welcome. It's not very "Whidbey-esque," but it gets the job done.

Non-technical - Spyware/Adware appear to be gone
Personal.

Sql Dependency rules in .NET 2.0 Whidbey
There are a couple of rules that I have posted in the newsgroups that I wanted to post here. "Select * from Table" will not work with the SqlDependency object.

MSMQ Processing of multiple pieces of data
You want to process a bunch of data associated with a single MSMQ message. First, create a class with the Serializable attribute.

MSMQ processing of a string
Here is some sample code that processes messages that are in an MSMQ queue (isn't that redundant?). It pulls a string from a defined queue and does something with it. Behind the scenes, it uses the threadpool to process messages in the queue.

Non-Technical Observation about Technology: Cell phone users are the new smokers
Ever notice how smokers congregate around a doorway on the outside of a building? Well, I see that cell phone users are now doing the same.

Non-Technical - Adware/Spyware developers and marketing folks are not my favorite people
At one of my customers', when I am not there, someone else will use the computer that I use, due to resource limitations. That is fine with me, I'm not there all the time. Unfortunately, about a month ago, one of these folks somehow got hoodwinked into installing some adware/spyware.
Now, whenever I show up in the morning, I get one popup screen after another (AOL 9.0, MSN, refurbished Dell computers (like I would ever buy a Dell again), and such) on this development computer. I have tried Spybot and Ad-Aware SE. Spybot sees almost nothing (yes, it is fully updated). Ad-Aware SE sees some of these files, but whenever I instruct Ad-Aware to remove the files, these same files keep coming back. I'm not criticizing Spybot and Ad-Aware for being bad products, merely that they can't handle the current situation. I have googled for all of these adware, spyware, and malware products, but I have never been able to get these scum bags removed. These products are wupdt.exe, polmx3.exe, and qwnpln.exe. I am fearful that the only way to resolve this problem is going to be to repave and start over with this machine. About 12 months ago, someone installed some adware product at my office. It messed up everything with the computer and the only solution was to repave. About 3 months ago, I got caught by the Download.Ject virus and I just repaved and started over. These spyware people wonder why no one likes them.......Geez.

Optimizing Indexes with Sql Server 2005 (Yukon) Beta 2
If.

Registration free COM?
Have you seen this in Yukon Beta 2?

NewSequentialId() joins the fray
There is a new command in Yukon Beta 2 for generating GUIDs / uniqueidentifiers. It is the NewSequentialId() method. I don't know if it will eliminate the potential performance problems I have seen when using uniqueidentifiers, but it is there. I don't know if it will make it to the final released version.

Trying to Debug with Win64 and Whidbey .NET Beta 1?
If you are trying to debug using Win64 and Whidbey .NET Beta 1, and you get an error about the remote debugging components not being installed, check out this tip I got from Jeff Schwartz in one of the newsgroups.

Hooray, my Dell Laptop works again
After two new keyboards, my Inspiron 8200's pointing stick works again. Yippeee. Finally.
It only took two new keyboards plus the replacement that they did for me at the depot to fix the problem. I am not happy with the way Dell handled the situation. One would think that they would send it back from the depot working properly the first time. Second, they sent two faulty keyboards to me. It was the third keyboard that resolved the problem...........Ok, enough whining about that. I am back and I am working.

The US Economy is definitely on the upswing
The US Economy is definitely on the upswing. Want to know why I know that? Because in the last couple of months, the body shop headhunters have been coming out of the woodwork wanting me to come work for them. I have now been contacted 3 times in the last week and 5 times in the last month. All I have to do is take a pay cut and sign away every bit of intellectual property I own and I can belong to them. Oh, oh, what should I do........

What I know about .NET
(amount of knowledge I have of .NET)
---------------------------------------- = 0
(total amount of stuff to learn about .NET)

SqlTypes vs. SqlDbType Confusion
Something to watch out for. I just caught myself on this. I am getting easily confused by the SqlTypes class and the SqlDbType enumeration. Thanks to Erik Porter for reminding me of this. SqlTypes are used when getting data out of the database. SqlDbType is used with SqlParameters for calling commands. While not 100% accurate, I am going to think of them in general like this: SqlTypes are for getting data out while SqlDbType is used for putting data in.

New Visual Studio .NET SKUs announced

ASP.NET Wizard control in Whidbey aka ASP.NET 2.0
Have you seen the Wizard Control in Whidbey Beta 1? Very cool idea. Basically, it allows you to complete a series of steps in a fairly easy manner. I did some basics with it. Here is the simple code.
Wow, what a comeback: Georgia Tech 28 - Clemson 24 - Warning: non-technical
I don't get into college football too much anymore (being that I am 37 and well out of school), except for when my team (Georgia Tech) plays. I had watched and hoped that something good would happen. When the score was Clemson 17 - GT 7 on a long Clemson run for a touchdown, I knew that all hope was lost. Then, GT scored to cut it to Clemson 17 - GT 14. Clemson came back and scored on another long run to make it Clemson 24 - GT 14. Then, GT scored to make the score Clemson 24 - GT 21. I figured that after GT messed up on the onside kick, the game was effectively over. Then a series of miracles started. On second down and 1, Clemson was stopped for no gain. On third down, Clemson fumbled, but recovered, with the result being that it was now fourth down and one. Clemson ran the play clock down and took a five yard penalty for a delay of game. On fourth down, the Clemson snapper rolled the football by the kicker, who ran back, recovered the ball, and was buried by a couple of GT players. With about 15 seconds to go, GT had the ball on the Clemson 11 yard line. GT quarterback Reggie Ball threw an 11 yard touchdown pass to Calvin Johnson for the go-ahead score. That still wasn't the end of the game. Due to a penalty for excessive celebration, and a 49-yard kickoff return, Clemson had the ball on the GT 31 yard line with 3 seconds to go in the game. A Charlie Whitehurst pass fell incomplete in the end zone as time expired. And the miracle was complete..............

Birmingham talk on ASP.NET Whidbey
Tonight's talk in Birmingham went very well.

ASP.NET Whidbey talk in Birmingham, AL
I am doing a talk on ASP.NET Whidbey in Birmingham, AL tonight at the Birmingham Software Developer's Association. Come by if you are in the Birmingham area.
New Oracle ODP.NET Data Provider available

GetProviderSpecificFieldType() in .NET 2.0 Whidbey
Back with Classic ADO, you could always get the datatype of a field in a recordset (adVarChar and such). I wanted to do that again and get the data in the form of the System.Data.SqlTypes. I never did find a way to do that under .NET 1.x. With .NET 2.0 Whidbey, I can use the GetProviderSpecificFieldType() method to get the type of the field as defined by the System.Data.SqlTypes. Here is some quicky code that I wrote to do this:

Connection String magic with Whidbey
Within the System.Data.Common namespace is the DbConnectionStringBuilder class. This class allows you to manage and easily parse the connection string to make changes as necessary within your application. Why would you want to do this? Well, you might have a database management application and want to change values within the connection string so that you can easily move between one database session and another. This class is very cool. Check it out.

MySql Connector/.NET
A new build of a .NET data provider for MySql is available.
Knowing the whereabouts of a particular object or person has always been comforting. Today, GPS is being extensively used in asset management applications like Vehicle Tracking, Fleet Tracking, Asset Monitoring, Person Tracking, Pet Trackers, etc. For any tracking device the primary design considerations will be its battery life expectancy and monitoring range. Considering both, LoRa seems to be a perfect choice since it has very low power consumption and can operate over long distances. So, in this tutorial we will build a GPS tracking system using LoRa; the system will consist of a Transmitter which will read the location information from the NEO-6M GPS module and transmit it wirelessly over LoRa. The receiver part will receive the information and display it on a 16x2 LCD display. If you are new to LoRa then learn about LoRa and LoRaWAN Technology and how it can be interfaced with Arduino before proceeding further.

To keep things simple and cost-effective for this project we will not be using a LoRa gateway. Instead we will perform peer-to-peer communication between the transmitter and receiver. However, if you want global range you can replace the receiver with a LoRa gateway. Also, since I am from India we will be using the 433MHz LoRa module, which is in the legal ISM band here; you might have to select a module based on your country. That being said, let's get started…

Materials Required
- Arduino LoRa Shield – 2Nos (PCB design available for download)
- Arduino Uno – 2Nos
- SX1278 433MHz LoRa Module – 2Nos
- 433MHz LoRa Antenna
- NEO-6M GPS Module
- LCD Display Module
- Connecting wires

Arduino LoRa Shield
To make building things with LoRa easier, we have designed a LoRa Arduino Shield for this project. This shield consists of the SX1278 433MHz module with a 3.3V regulator designed using the LM317 variable regulator. The shield will sit directly on top of the Arduino, providing it LoRa capabilities.
This LoRa Shield will come in handy when you have to deploy LoRa sensing nodes or to create a LoRa mesh network. The complete circuit diagram for the LoRa Arduino Shield is given below.

The shield consists of a 12V jack; when powered, this input is regulated to 3.3V for the LoRa module using the LM317 regulator. It is also used to power the Arduino UNO through the Vin pin, and the regulated 5V from the Arduino is used to power the LCD on the shield. The output voltage of the LM317 is fixed at 3.3V using the resistors R1 and R2 respectively; the value of these resistors can be calculated using the LM317 Calculator. Since the LoRa module consumes very low power, it can also be powered directly from the 3.3V pin of the Arduino, but we have used an external regulator design since the LM317 is more reliable than the on-board voltage regulator. The shield also has a potentiometer which can be used to adjust the brightness of the LCD. The connection of the LoRa module with Arduino is similar to what we did in our previous tutorial on Interfacing Arduino with LoRa.

Fabricating PCB for LoRa Shield
Now that our circuit is ready, we can proceed with designing our PCB. I opened my PCB design software and began forming my tracks. Once the PCB design was complete, my board looked something like the one shown below.

You can also download the design files in GERBER format and fabricate them to get your boards. The Gerber file link is given below:

Download Gerber File for Arduino LoRa Shield

Assuming the PCB is 80cm×80cm, you can set the dimensions as shown below.

I turned on my soldering rod and started assembling the board. Since the footprints, pads, vias and silkscreen are perfectly of the right shape and size, I had no problem assembling the board. Once the soldering was complete, the board looked like the one below; as you can see, it fits snugly on my Arduino Uno board.
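Before powering the assembled shield, you can sanity-check the regulator section with the standard adjustable-regulator formula Vout ≈ 1.25 V × (1 + R2/R1), ignoring the small adjust-pin current. The sketch below is a quick calculation aid in Python; the resistor pair used in the example is an illustrative assumption, not the values from the shield's schematic:

```python
def lm317_vout(r1_ohms, r2_ohms, v_ref=1.25):
    """Approximate LM317 output voltage: Vout = Vref * (1 + R2/R1).

    The small Iadj * R2 term (~50 uA) is ignored here.
    """
    return v_ref * (1 + r2_ohms / r1_ohms)

# Example pair (assumed for illustration, not taken from the shield):
# R1 = 240 ohm, R2 = 390 ohm
vout = lm317_vout(240, 390)
print(round(vout, 2))  # about 3.28 V, close to the 3.3 V the SX1278 needs
```

Plugging in candidate resistor values this way gives the same answer an online LM317 calculator would, which is handy when substituting whatever E24 values you have on hand.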
Since our project has an Arduino LoRa transmitter and an Arduino LoRa receiver, we will need two shields: one for the receiver and the other for the transmitter. So I proceeded with soldering another PCB; both PCBs, with LoRa module and LCD, are shown below. As you can see, only the receiver LoRa shield (left one) has an LCD connected to it; the transmitter side only consists of the LoRa module. We will further connect a GPS module to the transmitter side as discussed below.

Connecting GPS module to LoRa Transmitter
The GPS module used here is the NEO-6M GPS module; it can operate on very low power with a small form factor, making it suitable for tracking applications. However, there are many other GPS modules available, which we have used previously in different kinds of vehicle tracking and location detection applications. The module operates at 5V and communicates using serial communication at 9600 baud. Hence we power the module from the +5V pin of the Arduino and connect the Rx and Tx pins to digital pins D4 and D3 respectively, as shown below.

The pins D4 and D3 will be configured as software serial pins. Once powered, the NEO-6M GPS module will look for a satellite connection and will automatically output all the information serially. This output data will be in the NMEA sentence format, which stands for National Marine Electronics Association and is the standard format for all GPS devices. To learn more about using GPS with Arduino, follow the link.

This data will be large, and most of the time we would have to parse it manually to obtain the desired result. Lucky for us, there is a library called TinyGPS++ which does all the heavy lifting for us. You also have to add the LoRa library if you have not done so yet. So let's download both libraries from the links below:

Download TinyGPS++ Arduino Library
Download Arduino LoRa Library

The link will download a ZIP file which can then be added to the Arduino IDE via Sketch -> Include Library -> Add .ZIP Library.
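To get a feel for what TinyGPS++ does under the hood, here is a rough Python sketch that extracts latitude and longitude from a single NMEA $GPGGA sentence. The sentence below is a made-up example, and a real parser also has to handle checksums, other sentence types, and fix validity — this only shows the coordinate conversion:

```python
def nmea_to_decimal(value, hemisphere):
    """Convert an NMEA ddmm.mmmm / dddmm.mmmm field plus N/S/E/W to decimal degrees."""
    dot = value.index('.')
    degrees = int(value[:dot - 2])      # everything before the two minute digits
    minutes = float(value[dot - 2:])    # mm.mmmm
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ('S', 'W') else decimal

def parse_gpgga(sentence):
    """Pull (lat, lon) in decimal degrees out of one $GPGGA sentence."""
    fields = sentence.split(',')
    lat = nmea_to_decimal(fields[2], fields[3])
    lon = nmea_to_decimal(fields[4], fields[5])
    return lat, lon

# Made-up fix (checksum omitted for brevity)
lat, lon = parse_gpgga("$GPGGA,123519,1304.9530,N,08016.2540,E,1,08,0.9,545.4,M,,,,")
print(round(lat, 6), round(lon, 6))
```

TinyGPS++'s gps.location.lat() and gps.location.lng() return exactly this kind of decimal-degree value, which is why the sketches later in the tutorial can print them straight to the LCD.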
Once you are ready with the hardware and libraries, we can proceed with programming our Arduino boards.

Programming Arduino LoRa as GPS Transmitter
As we know, LoRa is a transceiver device, meaning it can both send and receive information. However, in this GPS tracker project we will use one module as a transmitter to read the co-ordinate information from GPS and send it, while the other module acts as a receiver which will receive the GPS co-ordinate values and print them on the LCD. The program for both the Transmitter and Receiver module can be found at the bottom of this page. Make sure you have installed the libraries for the GPS module and LoRa module before proceeding with the code. In this section we will look at the transmitter code.

Like always, we begin the program by adding the required libraries and pins. Here the SPI and LoRa libraries are used for LoRa communication, and the TinyGPS++ and SoftwareSerial libraries are used for GPS communication. The GPS module in my hardware is connected to pins 3 and 4, and hence we also define that as follows:

#include <SPI.h>
#include <LoRa.h>
#include <TinyGPS++.h>
#include <SoftwareSerial.h>

// Choose two Arduino pins to use for software serial
int RXPin = 3;
int TXPin = 4;

Inside the setup function we begin the serial monitor and also initialize the software serial as "gpsSerial" for communication with our NEO-6M GPS module. Also note that I have used 433E6 (433 MHz) as my LoRa operating frequency; you might have to change it based on the type of module you are using.

void setup() {
  Serial.begin(9600);
  gpsSerial.begin(9600);
  while (!Serial);
  Serial.println("LoRa Sender");
  if (!LoRa.begin(433E6)) {
    Serial.println("Starting LoRa failed!");
    while (1);
  }
  LoRa.setTxPower(20);
}

Inside the loop function we check if the GPS module is putting out some data; if yes, then we read all the data and parse it using the gps.encode function. Then we check if we have received valid location data using the gps.location.isValid() function.
while (gpsSerial.available() > 0)
  if (gps.encode(gpsSerial.read()))
    if (gps.location.isValid()) {

If we have received a valid location, we can then begin transmitting the latitude and longitude values. The function gps.location.lat() gives the latitude co-ordinate and the function gps.location.lng() gives the longitude co-ordinate. Since we will be printing them on the 16x2 LCD, we have to mention when to shift to the second line; hence we use the keyword "c" to signal the receiver to print the following information on line 2.

LoRa.beginPacket();
LoRa.print("Lat: ");
LoRa.print(gps.location.lat(), 6);
LoRa.print("c");
LoRa.print("Long: ");
LoRa.print(gps.location.lng(), 6);
Serial.println("Sent via LoRa");
LoRa.endPacket();

Programming Arduino LoRa as GPS receiver
The transmitter code is already sending the latitude and longitude co-ordinate values; now the receiver has to read these values and print them on the LCD. Similarly, here we add the libraries for the LoRa module and LCD display, define which pins the LCD is connected to, and initialize the LoRa module like before.

Inside the loop function we listen for data packets from the transmitter LoRa module, get the size of each using the LoRa.parsePacket() function, and store it in the "packetSize" variable. If packets are received, then we proceed with reading them as characters and printing them on the LCD. The program also checks if the LoRa module is sending the keyword "c"; if yes, then it prints the remaining information on the second line.

if (packetSize) {
  // If packet received
  Serial.print("Received packet '");
  lcd.clear();
  while (LoRa.available()) {
    char incoming = (char)LoRa.read();
    if (incoming == 'c') {
      lcd.setCursor(0, 1);
    } else {
      lcd.print(incoming);
    }
  }

Arduino LoRa GPS Tracker Working
Once the hardware and programs are ready, we can upload both codes to the respective Arduino modules and power them using a 12V adapter or USB cable.
When the Transmitter is powered you can notice the blue LED on the GPS module blinking; this indicates that the module is looking for a satellite connection to get co-ordinates. Meanwhile the Receiver module will power on and display a welcome message on the LCD screen. Once the transmitter sends the information, the receiver module will display it on its LCD as shown below.

Now you can move around with the transmitter GPS module and you will notice the receiver updating its location. To know where exactly the transmitter module is, you can read the latitude and longitude values displayed on the LCD and enter them into Google Maps to get the location on the map as shown below. The complete working can also be found in the video given at the bottom of this page.

Hope you understood the tutorial and enjoyed building something useful with it. If you have any doubts you can leave them in the comment section below or use our forums for other technical queries.

Lora Sender Code

#include <SPI.h>
#include <LoRa.h>
#include <TinyGPS++.h>
#include <SoftwareSerial.h>

// Choose two Arduino pins to use for software serial
int RXPin = 3;
int TXPin = 4;

// Create a TinyGPS++ object
TinyGPSPlus gps;
SoftwareSerial gpsSerial(RXPin, TXPin);

void setup() {
  Serial.begin(9600);
  gpsSerial.begin(9600);
  while (!Serial);
  Serial.println("LoRa Sender");
  if (!LoRa.begin(433E6)) {
    Serial.println("Starting LoRa failed!");
    while (1);
  }
  LoRa.setTxPower(20);
}

void loop() {
  while (gpsSerial.available() > 0)
    if (gps.encode(gpsSerial.read()))
      if (gps.location.isValid()) {
        Serial.println("Sending to LoRa");
        LoRa.beginPacket();
        LoRa.print("Lat: ");
        LoRa.print(gps.location.lat(), 6);
        LoRa.print("c");
        LoRa.print("Long: ");
        LoRa.print(gps.location.lng(), 6);
        Serial.println("Sent via LoRa");
        LoRa.endPacket();
      }
}

Lora Receiver Code

/*Program to receive the value of temperature and Humidity via LoRa and print on LCD
 *Dated: 24-06-2019
 *For:
 */

void loop() {
  int packetSize = LoRa.parsePacket();
  if (packetSize) {
    // If packet received
    Serial.print("Received packet '");
    lcd.clear();
    while (LoRa.available()) {
      char incoming = (char)LoRa.read();
      if (incoming == 'c') {
        lcd.setCursor(0, 1);
      } else {
        lcd.print(incoming);
      }
    }
  }
}

Dec 24, 2019
I am unable to find a relevant antenna for the 433MHz LoRa module. Can you please provide me the link to purchase a LoRa antenna???
The meshgrid operation creates multi-dimensional coordinate arrays:

import numpy as np
x = np.arange(2)
y = np.arange(3)
ii, jj = np.meshgrid(x, y, indexing='ij')
ii
# Expected result
# array([[0, 0, 0],
#        [1, 1, 1]])

np.meshgrid() returns a list of arrays with the inputs broadcast across rows and columns (and higher dimensions if you pass more vectors in). Here's what the jj array looks like:

jj
# Expected result
# array([[0, 1, 2],
#        [0, 1, 2]])

I'll talk about the indexing='ij' argument below. But first, let's take a look at an example.

A realistic use case
Ok, let's look at an example. Say we have a simple linear regression model y = w0 + w1 * x where w0 is the intercept and w1 is the slope. If x is a 1-dimensional NumPy array of inputs and y is a 1-dimensional NumPy array of targets, we can calculate the mean-squared error (MSE) loss for a given w0 and w1 with the following:

yhat = w0 + w1 * x
loss = np.square(yhat - y).mean()

Notice, if our data is fixed, then the loss becomes a function of w0 and w1. That means we can plot the loss surface to see how it varies with w0 and w1. This is a perfect use case for meshgrid(). First, let's fix w0=3 and w1=2.5 and then generate 50 data points using this model plus some random noise:

x = np.random.normal(0, 1, size=50)
y = 3 + 2.5 * x + np.random.normal(0, 1, size=50)

A scatter plot of these points will look something like this. You can see visually that the y-intercept is roughly 3 and the slope is roughly 2.5. Now we want to create all possible values of w0 and w1 in the range [-5, 5] so we can compute the loss value for those pairs:

w0, w1 = np.meshgrid(
    np.linspace(-5, 5),
    np.linspace(-5, 5),
    indexing='ij',
)
w0.shape
# Expected result
# (50, 50)

At this point w0 and w1 both have shape (50, 50). That's because np.linspace() creates 50 evenly spaced values by default. If we flatten w0 and w1 and line up the elements, we get 2500 evenly spaced points on a grid.
Let’s take a look at the first 10 points: list(zip(w0.ravel(), w1.ravel()))[:10] # Expected result # [(-5.0, -5.0), # (-5.0, -4.795918367346939), # (-5.0, -4.591836734693878), # (-5.0, -4.387755102040816), # (-5.0, -4.183673469387755), # (-5.0, -3.979591836734694), # (-5.0, -3.7755102040816326), # (-5.0, -3.571428571428571), # (-5.0, -3.36734693877551), # (-5.0, -3.163265306122449)] Now, we can compute the model for each (w0, w1) pair for all 50 random data points. That allows us to compute MSE for each of the weight values: yhat = w0[:, None] + w1[:, None] * x loss = np.square(y - yhat).mean(-1) Here we expand the w0 and w1 arrays so that we can use NumPy broadcasting to compute yhat for all points at the same time. The result yhat has shape (2500, 50): it’s the predicted value for all pairs of (w0, w1) for all 50 points. After that, we use the mean function to reduce the column dimension and compute MSE for each point on the grid. Now, we can reshape loss to be (50, 50) and plot loss for each pair of weights: loss = loss.reshape(w0.shape) plt.contourf(w0, w1, loss, 30) plt.plot(3, 2.5, 'x', c='white', ms=7) plt.xlabel("w0") plt.ylabel("w1") plt.title("Loss surface"); That’s a nice looking loss surface! I’ve added a mark at the true values for w0 and w1. The loss surface will never have its optimum at exactly the true values since it’s learning the function on a random sample of all data, but this is pretty close! Indexing Ok let’s revisit indexing. The indexing='ij' argument tells NumPy to use matrix indexing instead of Cartesian indexing. For 2-dimensional meshgrids, matrix indexing produces coordinate arrays that are the transpose of Cartesian coordinate arrays. The question is whether those coordinate arrays should be interpreted as the indices of a matrix (matrix indexing) or points in Euclidean space (Cartesian indexing). 
Here’s a diagram that displays the index pairs for xy versus ij indexing: Notice for Cartesian indexing, the x-axis increases as your move “to the right” and the y-axis increases as you move “down”. Other tensor libraries In NumPy, you actually don’t need meshgrid() but it’s good to know it because the same function exists in PyTorch and TensorFlow. Although the API is more limited in both cases (and the default indexing for PyTorch is different) the basic usage is roughly same: import torch x = torch.arange(2) y = torch.arange(2) xx, yy = torch.meshgrid(x, y) xx # Expected result # tensor([[0, 0], # [1, 1]]) And: import tensorflow as tf x = tf.range(2) y = tf.range(2) xx, yy = tf.meshgrid(x, y, indexing='ij') xx # Expected result # <tf.Tensor: shape=(2, 2), dtype=int32, numpy= # array([[0, 0], # [1, 1]], dtype=int32)> One thing to watch out for: in NumPy and TensorFlow, the default indexing is Cartesian, whereas in PyTorch, the default indexing is matrix. If you want to avoid confusion, you can plan to set indexing='ij' whenever you call meshgrid() in NumPy or TensorFlow. You probably don’t want to force yourself to remember which indexing gets returned by default. 1 thought on “Ben Cook: NumPy meshgrid: Understanding np.meshgrid()” I was very satisfied with the try. The hair color is the same as my hair, if it is matte, even better. The front bangs are a bit long and a little less, and there are no other flaws. Good quality and cheap all 5 points.
Bug #10627: Output module template art::ProvenanceDumper now forces the detail class to do parameter set validation

Description
This might not be a bug but I am not sure how else to file it. Mu2e has an output module, Analyses/src/DataProductDump_module.cc. This module predates the file dumper output module and does similar work; we are keeping it around as an example. This module is created from an output module template:

namespace mu2e {
  class DataProductDumpDetail;
  typedef art::ProvenanceDumper<DataProductDumpDetail> DataProductDump;
}

class mu2e::DataProductDumpDetail {
  // contents deleted.
};

DEFINE_ART_MODULE(mu2e::DataProductDump);

We jumped from art v1_15_00 to art v1_17_02. In art v1_15_00 the module compiled. In art v1_17_02 we had to add an empty definition of the struct Config in order for the module to compile:

class mu2e::DataProductDumpDetail {
public:
  struct Config {};
  // contents deleted.
};

I think that the reason is that the module template now does checking of its parameter set. I think that this forces the detail class to check its parameter set. Our detail class does not use any parameters, so providing a correct Config class is trivial. This should be better documented. We should also have a discussion among the stakeholders about the desired behaviour in this situation: if a module template uses parameter set validation, do we want to require that the detail class does too? What options are there? In the long run I think that forced validation is good. The question is whether we need a different policy for the transition period (it is also possible that for this very narrow question the transition period is now over).
History

#1 Updated by Christopher Green about 4 years ago
- Subject changed from Output module template art::ProvenanceDumper now forces the detail class to do parameter set validatio to Output module template art::ProvenanceDumper now forces the detail class to do parameter set validation
- Category set to Documentation
- Status changed from New to Accepted
- Estimated time set to 2.00 h
- SSI Package art added
- SSI Package deleted ( )

The behavior and required actions will be documented as requested.

#2 Updated by Kyle Knoepfel about 4 years ago

#3 Updated by Kyle Knoepfel about 4 years ago
- Status changed from Accepted to Resolved
- % Done changed from 0 to 100

#4 Updated by Kyle Knoepfel about 4 years ago
- Status changed from Resolved to Closed
I have installed ActivePython 3.5 x64 on Windows 10 and XAMPP for the Apache server. In C:\xampp\apache\conf\extra\ I have created a file active-python-manual.conf with the content

After a restart of Apache I can reach the ActivePython documentation by typing in the browser..

I have a software requirement for version 3.1 of ActivePython -- does anyone know where I can find this download?

Hello, I had no idea there were different ActiveState Python typologies (i.e. FirstClass and "regular"). 1) Where can I get the latest "regular" ActiveState Python 2 for 32bit Windows XP SP3 and for 64bit Windows 7 SP1? and 2) Where can I get the latest "regular" ActiveState Python 3 for 32bit Windows XP SP3 and for 64bit Windows 7 SP1? Here and here I can read "ActivePython requires Windows XP, or later.". Thank you!

Here I currently don't see an installation file of ActivePython 2.7.13.2716 for Windows x86 32bit (Windows XP in my specific case). Here I can read that Windows XP is OK with 2.7.13. Can someone help? Thx

ActivePython 2.7.13.2715 (ActiveState Software Inc.) based on Python 2.7.13 (default, May 16 2017, 11:18:28) [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/__init__.py", line 24, in <module>
    from tensorflow.python import *

Just installed the latest version of ActivePython on my Windows 10. Can't use any of ipython, jupyter, ... out of the box from a cmd. Please help me find a global solution to my problem; I only succeeded in "hacking" the install.

Context:
--------
Fired a cmd.exe shell.

> ipython

Returns:
echo off
python.exe
no module ipython found.

Same for almost every package I tried.
First thing:
-----------
Please put an @ before "echo off" to avoid prompting => "@echo off"

Second thing:
------------
All .bat shortcuts seem to be broken.

I had AP 2.7.13.2714 installed and running just fine. I d/l'ed 2715, and tried to install it after uninstalling 2714 and rebooting. I removed c:\python27 completely. All options (Typical, Custom and Complete) immediately exit with an error and no system changes. I d/l'ed another copy, and it is identical to the first copy. I obtained another copy of 2714, but it likewise ends prematurely with an unknown error. Any suggestions? There is plenty of disk space, so that's not the problem.
I have a form in my very basic React app where I want to allow the user to enter text and create their username by updating state. However, when the "OK" button is clicked, the created username appears on the page for about half a second, then the page auto-refreshes and restores the default state. It only happens when the button is contained within a form element. It works fine and the page doesn't refresh when I remove the form element; however, I don't want to sacrifice the style and formatting of the Bootstrap form. Here's my render method:

render() {
  return (
    <div class="container-fluid text-center">
      <form class="form-inline">
        <input type="text" class="form-control" placeholder="UserName"/>
        <button onClick={this.createUser.bind(this)}>OK</button>
      </form>
      <h1>User: {this.state.userName}</h1>
      <h1>Points: {this.state.points}</h1>
    </div>
  )
}

Your button doesn't have a type attribute, so the default will be submit (see this page). This means that when you click your button the onClick handler will be called, but the default browser action of submitting the form will also happen. Try specifying a type of button instead:

<button type="button" onClick={this.createUser.bind(this)}>OK</button>

Also, if you don't specifically need a form element you could try changing it to a <div>. The Bootstrap form-inline style doesn't require a form, as mentioned in the documentation under the Inline form heading: Add .form-inline to your form (which doesn't have to be a <form>)...
A thread is an independent path of execution in a running program. When an Android program is launched, the system creates a main thread, which is also called the UI thread. This UI thread is how your app interacts with components from the Android UI toolkit. Sometimes an app needs to perform resource-intensive tasks such as downloading files, making database queries, playing media, or computing complex analytics. This type of intensive work can block the UI thread so that the app doesn't respond to user input or draw on the screen. Users may get frustrated and uninstall your app. To keep the user experience (UX) running smoothly, the Android framework provides a helper class called AsyncTask, which processes work off of the UI thread. Using AsyncTask to move intensive processing onto a separate thread means that the UI thread can stay responsive. Because the separate thread is not synchronized with the calling thread, it's called an asynchronous thread. An AsyncTask also contains a callback that allows you to display the results of the computation back in the UI thread. In this practical, you learn how to add a background task to your Android app using an AsyncTask. What you should already know You should be able to: - Create an Activity. - Add a TextViewto the layout for the Activity. - Programmatically get the id for the TextViewand set its content. - Use Buttonviews and their onClickfunctionality. What you'll learn - How to add an AsyncTaskto your app in order to run a task in the background of your app. - The drawbacks of using AsyncTaskfor background tasks. What you'll do - Create a simple app that executes a background task using an AsyncTask. - Run the app and see what happens when you rotate the device. - Implement activity instance state to retain the state of a TextViewmessage. You will build an app that has one TextView and one Button. 
When the user clicks the Button, the app sleeps for a random amount of time, and then displays a message in the TextView when it wakes up. Here's what the finished app looks like:

The SimpleAsyncTask UI contains a Button that launches the AsyncTask, and a TextView that displays the status of the app.

1.1 Create the project and layout
- Create a new project called SimpleAsyncTask using the Empty Activity template. Accept the defaults for all other options.
- Open the activity_main.xml layout file. Click the Text tab.
- Add the layout_margin attribute to the top-level ConstraintLayout: android:layout_margin="16dp"
- Add or modify the following attributes of the "Hello World!" TextView to have these values. Extract the string into a resource.
- Delete the app:layout_constraintRight_toRightOf and app:layout_constraintTop_toTopOf attributes.
- Add a Button element just under the TextView, and give it these attributes. Extract the button text into a string resource.
- The onClick attribute for the button will be highlighted in yellow, because the startTask() method is not yet implemented in MainActivity. Place your cursor in the highlighted text, press Alt + Enter (Option + Enter on a Mac) and choose Create 'startTask(View)' in 'MainActivity' to create the method stub in MainActivity.

AsyncTask is an abstract class, which means you must subclass it in order to use it. In this example the AsyncTask performs a very simple background task: it sleeps for a random amount of time. In a real app, the background task could perform all sorts of work, from querying a database, to connecting to the internet, to calculating the next Go move to beat the current Go champion. An AsyncTask subclass defines the work to run in the background and a callback that runs when that work has finished. When you create an AsyncTask subclass, you may need to give it information about the work which it is to perform, whether and how to report its progress, and in what form to return the result.
When you create an AsyncTask subclass, you specify the type of its input parameters, of its progress units, and of its result as three generic type parameters. For example, an AsyncTask subclass called MyAsyncTask with the following class declaration might take the following parameters:

- A String as a parameter in doInBackground(), to use in a query, for example.
- An Integer for onProgressUpdate(), to represent the percentage of the job completed.
- A Bitmap for the result in onPostExecute(), indicating the query result.

```java
public class MyAsyncTask extends AsyncTask<String, Integer, Bitmap> {}
```

In this task you will use an AsyncTask subclass to define work that will run in a different thread than the UI thread.

2.1 Subclass the AsyncTask

In this app, the AsyncTask subclass you create does not require a query parameter or publish its progress. You will only be using the doInBackground() and onPostExecute() methods.

- Create a new Java class called SimpleAsyncTask that extends AsyncTask and takes three generic type parameters. Use Void for the params, because this AsyncTask does not require any inputs. Use Void for the progress type, because the progress is not published. Use a String as the result type, because you will update the TextView with a string when the AsyncTask has completed execution.

```java
public class SimpleAsyncTask extends AsyncTask<Void, Void, String> {}
```

- At the top of the class, define a member variable mTextView of the type WeakReference<TextView>:

```java
private WeakReference<TextView> mTextView;
```

- Implement a constructor for SimpleAsyncTask that takes a TextView as a parameter and creates a new weak reference for that TextView:

```java
SimpleAsyncTask(TextView tv) {
    mTextView = new WeakReference<>(tv);
}
```

The AsyncTask needs to update the TextView in the Activity once it has completed sleeping (in the onPostExecute() method). The constructor for the class will therefore need a reference to the TextView to be updated. What is the weak reference (the WeakReference class) for?
If you pass a TextView into the AsyncTask constructor and then store it in a member variable, that reference to the TextView means the Activity can never be garbage collected and thus leaks memory, even if the Activity is destroyed and recreated as in a device configuration change. This is called creating a leaky context, and Android Studio will warn you if you try it. The weak reference prevents the memory leak by allowing the object held by that reference to be garbage collected if necessary.

2.2 Implement doInBackground()

The doInBackground() method is required for your AsyncTask subclass.

- Place your cursor on the highlighted class declaration, press Alt + Enter (Option + Enter on a Mac) and select Implement methods. Choose doInBackground() and click OK. The following method template is added to your class:

```java
@Override
protected String doInBackground(Void... voids) {
    return null;
}
```

- Add code to generate a random integer between 0 and 10. This is the number of milliseconds the task will pause. This is not a lot of time to pause, so multiply that number by 200 to extend that time.

```java
Random r = new Random();
int n = r.nextInt(11);
int s = n * 200;
```

- Add a try/catch block and put the thread to sleep:

```java
try {
    Thread.sleep(s);
} catch (InterruptedException e) {
    e.printStackTrace();
}
```

- Replace the existing return statement to return the String "Awake at last after sleeping for xx milliseconds", where xx is the number of milliseconds the app slept:

```java
return "Awake at last after sleeping for " + s + " milliseconds!";
```

The complete doInBackground() method looks like this:

```java
@Override
protected String doInBackground(Void... voids) {
    Random r = new Random();
    int n = r.nextInt(11);
    int s = n * 200;
    try {
        Thread.sleep(s);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    return "Awake at last after sleeping for " + s + " milliseconds!";
}
```

2.3 Implement onPostExecute()

Override the onPostExecute() method to take the result String and display that string in the TextView:

```java
@Override
protected void onPostExecute(String result) {
    mTextView.get().setText(result);
}
```

The String parameter to this method is what you defined in the third parameter of your AsyncTask class definition, and what your doInBackground() method returns.
Because mTextView is a weak reference, you have to dereference it with the get() method to get the underlying TextView object, and to call setText() on it.

3.1 Implement the method that starts the AsyncTask

Your app now has an AsyncTask class that performs work in the background (or it would if it didn't call sleep() as the simulated work). You can now implement the onClick method for the "Start Task" button to trigger the background task.

- In the MainActivity.java file, add a member variable to store the TextView:

```java
private TextView mTextView;
```

- In the onCreate() method, initialize mTextView to the TextView in the layout:

```java
mTextView = findViewById(R.id.textView1);
```

- In the startTask() method, update the TextView to show the text "Napping...". Extract that message into a string resource:

```java
mTextView.setText(R.string.napping);
```

- Create an instance of SimpleAsyncTask, passing the TextView mTextView to the constructor. Call execute() on that SimpleAsyncTask instance:

```java
new SimpleAsyncTask(mTextView).execute();
```

Solution code for MainActivity:

```java
package com.example.android.simpleasynctask;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.view.View;
import android.widget.TextView;

/**
 * The SimpleAsyncTask app contains a button that launches an AsyncTask
 * which sleeps in the asynchronous thread for a random amount of time.
 */
public class MainActivity extends AppCompatActivity {

    // The TextView where we will show results
    private TextView mTextView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        mTextView = findViewById(R.id.textView1);
    }

    public void startTask(View view) {
        // Put a message in the text view
        mTextView.setText(R.string.napping);

        // Start the AsyncTask
        new SimpleAsyncTask(mTextView).execute();
    }
}
```

Run the app, tap the Start Task button, and rotate the device while the task is running. There are several things going on here:

- When you rotate the device, the system restarts the app, calling onDestroy() and then onCreate().
- The AsyncTask will continue running even if the activity is destroyed, but it will lose the ability to report back to the activity's UI. It will never be able to update the TextView that was passed to it, because that particular TextView has also been destroyed.
- Once the activity is destroyed, the AsyncTask will continue running to completion in the background, consuming system resources. Eventually, the system will run out of resources, and the AsyncTask will fail.
- Even without the AsyncTask, the rotation of the device resets all of the UI elements to their default state, which for the TextView is the default string that you set in the layout file.

For these reasons, an AsyncTask is not well suited to tasks which may be interrupted by the destruction of the Activity. In use cases where this is critical you can use a different type of background class called an AsyncTaskLoader, which you will learn about in a later practical.

In order to prevent the TextView from resetting to the initial string, you need to save its state. You will now implement onSaveInstanceState() to preserve the content of your TextView when the activity is destroyed in response to a configuration change such as device rotation.

- At the top of the class, add a constant for the key for the current text in the state bundle:

```java
private static final String TEXT_STATE = "currentText";
```

- Override the onSaveInstanceState() method in MainActivity to preserve the text inside the TextView when the activity is destroyed:

```java
@Override
protected void onSaveInstanceState(Bundle outState) {
    super.onSaveInstanceState(outState);
    // Save the state of the TextView
    outState.putString(TEXT_STATE, mTextView.getText().toString());
}
```

- In onCreate(), retrieve the value of the TextView from the state bundle when the activity is restored:

```java
// Restore the saved state of the TextView, if there is one
if (savedInstanceState != null) {
    mTextView.setText(savedInstanceState.getString(TEXT_STATE));
}
```

Android Studio project: SimpleAsyncTask

Challenge: The AsyncTask class provides another callback, onProgressUpdate(), which lets a running task publish its progress to the UI thread while doInBackground() is still executing.
- An AsyncTask is an abstract Java class that moves intensive processing onto a separate thread. AsyncTask must be subclassed to be used.
- Avoid doing resource-intensive work in the UI thread, because it can make your UI sluggish or erratic.
- Any code that does not involve drawing the UI or responding to user input should be moved from the UI thread to another, separate thread.
- AsyncTask has four key methods: onPreExecute(), doInBackground(), onPostExecute() and onProgressUpdate().
- The doInBackground() method is the only method that is actually run on a worker thread. Do not call UI methods in the doInBackground() method.
- The other methods of AsyncTask run in the UI thread and allow you to call methods of UI components.
- Rotating an Android device destroys and recreates an Activity. This can disconnect the UI from the background thread in AsyncTask, which will continue to run.

The related concept documentation is in 7.1: AsyncTask and AsyncTaskLoader. See also the AsyncTask reference in the Android developer documentation.

Homework: Start from the SimpleAsyncTask app. Add a ProgressBar that displays the percentage of sleep time completed. The progress bar fills up as the AsyncTask thread sleeps, from a value of 0 to 100 (percent). Hint: Break up the sleep time into chunks. For help, see the AsyncTask reference.

Check that:
- The layout includes a ProgressBar that sets the appropriate attributes to determine the range of values.
- The AsyncTask breaks the total sleep time into chunks and updates the progress bar after each chunk.
- The AsyncTask calls the appropriate method and implements the appropriate callback to update the progress bar.
- The AsyncTask needs to know which views to update. Depending on whether the AsyncTask is implemented as an inner class or not, the views can either be passed to the constructor of the AsyncTask or defined as member variables on the Activity.

To find the next practical codelab in the Android Developer Fundamentals (V2) course, see Codelabs for Android Developer Fundamentals (V2).
For an overview of the course, including links to the concept chapters, apps, and slides, see Android Developer Fundamentals (Version 2).
https://codelabs.developers.google.com/codelabs/android-training-create-asynctask?hl=en
Introduction

Taking good pictures in poor lighting conditions can seem like magic to non-photographers. It takes a combination of skill, experience and the right equipment to accomplish low light photography. Images captured in low light lack color and distinctive edges. They also suffer from poor visibility and unknown depth. These drawbacks make such images inappropriate for personal use as well as for image processing and computer vision tasks. In this post, we will learn how to improve illumination in night-time images.

For everyone not blessed with photography skills, we can enhance such images using image processing techniques. A method was presented by Shi et al. for this purpose in their paper, "Nighttime low illumination image enhancement with a single image using bright/dark channel prior." This paper will act as the base of this post. For the layman, the solution for low lighting is using flash, but as you must have noticed, flash can sometimes result in unwanted effects like red-eye, glare, etc. We will be using the image given below throughout our explanation. The image is taken from the paper cited above.

- Introduction
- Theory
- Framework of Improving Illumination in Night Time Images
- Further Improvements
- Limitations
- Results
- Summary

Theory

We aim to utilise a dual channel prior-based method for low illumination image enhancement with a single image. Compared to using multiple images, image enhancement with a single image is simpler. Single image enhancement does not need additional assistant images or require exact point-to-point fusion between different images. Many conventional image processing techniques, such as the well-known histogram equalization-based methods, wavelet transform-based methods, and retinex-based methods, can be used to get brighter images.
However, they might lead to contrast over-enhancement or noise amplification. This is where the dual channel prior-based solution comes in. An image prior is, simply put, "prior information" about an image that you can use in your image processing problem.

You might wonder why we use a dual channel instead of utilising just the bright channel of a low light image, since it would contain the most leftover information. Taking the dark channel into consideration removes block effects in some regions; the artefacts left behind when only the bright channel is used are clearly visible in the image comparison below.

Framework of Improving Illumination in Night Time Images

Before we dive into enhancing our images, let us understand the steps involved. The flowchart below enlists the steps that we will be following in order to obtain an illuminated version of the night-time image.

This is done by first obtaining the bright and dark channel images. These are just the maximum and minimum pixel values in the local patch of the original image, respectively. Next, we compute the global atmosphere light since that gives us the most information about the relatively brighter parts of the image. We use the channels and atmospheric light value to obtain the respective transmission maps and take the darkness weights into consideration under special circumstances. We will discuss this in detail here. Now we are all set.

From step 5 in the flow chart, note that the improved illumination image can be found by using the formula:

I(x) = (I_night(x) - A) / t(x) + A    (1)

where I(x) is the enhanced image, I_night(x) is the original low-light image, A is the atmospheric light, and t(x) is the corrected transmission map.

Step 1: Obtaining the Bright and Dark Channel Prior

The first step is to estimate the bright and dark channel priors. They represent the maximum and minimum intensity of pixels, respectively, in a local patch. This procedure can be imagined as a sliding convolutional window, helping us find the maximum or minimum value of all channels.
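Before the formal definitions, the sliding min/max window can be sketched on a toy single-channel 4x4 "image" with a 3x3 patch (pure NumPy, hypothetical values):

```python
import numpy as np

img = np.array([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 8, 7, 6],
                [5, 4, 3, 2]], dtype=float)

w = 3
pad = w // 2
padded = np.pad(img, pad, mode='edge')  # replicate the border pixels

dark = np.zeros_like(img)
bright = np.zeros_like(img)
for i in range(img.shape[0]):
    for j in range(img.shape[1]):
        patch = padded[i:i + w, j:j + w]
        dark[i, j] = patch.min()    # dark channel: patch minimum
        bright[i, j] = patch.max()  # bright channel: patch maximum

# e.g. pixel (1, 1) sees the patch [[1,2,3],[5,6,7],[9,8,7]]
```

On a color image the same patch min/max is additionally taken across the three channels, which is exactly what equations (2) and (3) formalize.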
Estimating the dark channel prior:

I_dark(x) = min_{y in Omega(x)} ( min_{c in {r,g,b}} I^c(y) )    (2)

Estimating the bright channel prior:

I_bright(x) = max_{y in Omega(x)} ( max_{c in {r,g,b}} I^c(y) )    (3)

where I^c is a color channel of I, Omega(x) is a local patch centered at x, and y is a pixel in the local patch Omega(x). Let us dive into the code now!

Python

```python
import cv2
import numpy as np

def get_illumination_channel(I, w):
    M, N, _ = I.shape
    # padding for channels
    padded = np.pad(I, ((int(w/2), int(w/2)), (int(w/2), int(w/2)), (0, 0)), 'edge')
    darkch = np.zeros((M, N))
    brightch = np.zeros((M, N))

    for i, j in np.ndindex(darkch.shape):
        darkch[i, j] = np.min(padded[i:i + w, j:j + w, :])    # dark channel
        brightch[i, j] = np.max(padded[i:i + w, j:j + w, :])  # bright channel

    return darkch, brightch
```

We first import cv2 and NumPy and write the function to get the illumination channel. The image dimensions are stored in the variables M and N. Padding of half the kernel size is applied to the images to ensure their size remains the same. The dark channel is obtained using np.min to get the lowest pixel value in that block. Similarly, the bright channel is obtained by using np.max to get the highest pixel value in that block. We will need the values of the dark channel and the bright channel for further steps, so we return them. Similar code, written in C++, is given below.
C++

```cpp
std::pair<cv::Mat, cv::Mat> get_illumination_channel(cv::Mat I, float w) {
    int N = I.size[0];
    int M = I.size[1];
    cv::Mat darkch = cv::Mat::zeros(cv::Size(M, N), CV_32FC1);
    cv::Mat brightch = cv::Mat::zeros(cv::Size(M, N), CV_32FC1);
    int padding = int(w/2);

    // padding for channels
    cv::Mat padded = cv::Mat::zeros(cv::Size(M + 2*padding, N + 2*padding), CV_32FC3);
    for (int i=padding; i < padding + M; i++) {
        for (int j=padding; j < padding + N; j++) {
            padded.at<cv::Vec3f>(j, i).val[0] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[0]/255;
            padded.at<cv::Vec3f>(j, i).val[1] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[1]/255;
            padded.at<cv::Vec3f>(j, i).val[2] = (float)I.at<cv::Vec3b>(j-padding, i-padding).val[2]/255;
        }
    }

    for (int i=0; i < darkch.size[1]; i++) {
        int col_up, row_up;
        col_up = int(i+w);
        for (int j=0; j < darkch.size[0]; j++) {
            double minVal, maxVal;
            row_up = int(j+w);
            cv::minMaxLoc(padded.colRange(i, col_up).rowRange(j, row_up), &minVal, &maxVal);
            darkch.at<float>(j,i) = minVal;    // dark channel
            brightch.at<float>(j,i) = maxVal;  // bright channel
        }
    }
    return std::make_pair(darkch, brightch);
}
```

The dark and bright channels are obtained by initializing a matrix with zeroes and filling them with values from the image array, where CV_32FC1 defines the depth of each element and the number of channels. Padding is applied to the images by half the kernel size to ensure their size remains the same. We iterate over the matrix to get the lowest pixel value in that block, which is used to set the dark channel pixel value. Obtaining the highest pixel value in that block gives us the bright channel pixel value. cv::minMaxLoc is used to find the global minimum and maximum values in an array.

Step 2: Computing Global Atmosphere Lighting

The next step is to compute the global atmosphere lighting. It is computed using the bright channel obtained above by taking the mean of the top ten percent intensities.
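This "top ten percent" selection is easy to sanity-check on a toy image where we know which pixel is brightest (hypothetical values; the same logic the get_atmosphere function below implements):

```python
import numpy as np

# 10 pixels, 3 channels; pixel 0 is clearly the brightest
I = np.zeros((1, 10, 3))
I[0, 0] = [0.9, 0.8, 0.7]   # the one bright pixel
I[0, 1:] = [0.1, 0.1, 0.1]  # everything else is dark

brightch = I.max(axis=2)    # per-pixel bright channel
flatI = I.reshape(-1, 3)
flatbright = brightch.ravel()

p = 0.1  # keep the top 10% = 1 pixel here
searchidx = (-flatbright).argsort()[:max(1, int(flatI.shape[0] * p))]
A = flatI.take(searchidx, axis=0).mean(axis=0)
# A picks up only the brightest pixel's color
```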
Ten percent of the values are taken to ensure that a small anomaly does not affect the estimate too strongly.

Python

```python
def get_atmosphere(I, brightch, p=0.1):
    M, N = brightch.shape
    flatI = I.reshape(M*N, 3)       # reshaping image array
    flatbright = brightch.ravel()   # flattening image array

    searchidx = (-flatbright).argsort()[:int(M*N*p)]  # sorting and slicing
    A = np.mean(flatI.take(searchidx, axis=0), dtype=np.float64, axis=0)
    return A
```

To achieve this in code, the image array is reshaped, flattened and sorted according to maximum intensity. The array is sliced to include only the top ten percent of pixels, and then the mean of these is taken.

C++

```cpp
cv::Mat get_atmosphere(cv::Mat I, cv::Mat brightch, float p=0.1) {
    int N = brightch.size[0];
    int M = brightch.size[1];

    // flattening and reshaping image array
    cv::Mat flatI(cv::Size(1, N*M), CV_8UC3);
    std::vector<std::pair<float, int>> flatBright;
    for (int i=0; i < M; i++) {
        for (int j=0; j < N; j++) {
            int index = i*N + j;
            flatI.at<cv::Vec3b>(index, 0).val[0] = I.at<cv::Vec3b>(j, i).val[0];
            flatI.at<cv::Vec3b>(index, 0).val[1] = I.at<cv::Vec3b>(j, i).val[1];
            flatI.at<cv::Vec3b>(index, 0).val[2] = I.at<cv::Vec3b>(j, i).val[2];
            flatBright.push_back(std::make_pair(-brightch.at<float>(j, i), index));
        }
    }

    // sorting and slicing the array
    sort(flatBright.begin(), flatBright.end());
    cv::Mat A = cv::Mat::zeros(cv::Size(1, 3), CV_32FC1);
    for (int k=0; k < int(M*N*p); k++) {
        int sindex = flatBright[k].second;
        A.at<float>(0, 0) = A.at<float>(0, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[0];
        A.at<float>(1, 0) = A.at<float>(1, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[1];
        A.at<float>(2, 0) = A.at<float>(2, 0) + (float)flatI.at<cv::Vec3b>(sindex, 0).val[2];
    }
    A = A/int(M*N*p);

    return A/255;
}
```

Step 3: Finding the Initial Transmission Map

The transmission map describes the portion of the light that is not scattered and reaches the camera.
In this algorithm, it will be estimated from the bright channel prior using the following equation:

t(x) = (I_bright(x) - A_c) / (1 - A_c)    (4)

where A_c is simply the maximum channel value of the atmospheric light A.

Python

```python
def get_initial_transmission(A, brightch):
    A_c = np.max(A)
    init_t = (brightch - A_c)/(1. - A_c)  # finding initial transmission map
    # normalized initial transmission map
    return (init_t - np.min(init_t))/(np.max(init_t) - np.min(init_t))
```

In the code, the initial transmission map is calculated using the formula and then used to calculate the normalized initial transmission map.

C++

```cpp
cv::Mat get_initial_transmission(cv::Mat A, cv::Mat brightch) {
    double A_n, A_x, minVal, maxVal;
    cv::minMaxLoc(A, &A_n, &A_x);

    cv::Mat init_t(brightch.size(), CV_32FC1);
    init_t = brightch.clone();
    // finding initial transmission map
    init_t = (init_t - A_x)/(1.0 - A_x);
    cv::minMaxLoc(init_t, &minVal, &maxVal);
    // normalized initial transmission map
    init_t = (init_t - minVal)/(maxVal - minVal);
    return init_t;
}
```

Step 4: Using Dark Channel to Estimate Corrected Transmission Map

A transmission map is also calculated from the dark channel prior, and the difference between the priors is calculated. This is done to correct potentially erroneous transmission estimates attained from the bright channel prior. Any pixel x whose difference value I_difference = I_bright - I_dark is less than the set value of alpha (determined by empirical experiment to be 0.4) lies in a dark object, which makes its depth unreliable. This makes the transmission of pixel x unreliable. Hence the unreliable transmission can be corrected by taking the product of the two transmission maps.
Python

```python
def get_corrected_transmission(I, A, darkch, brightch, init_t, alpha, omega, w):
    im = np.empty(I.shape, I.dtype)
    for ind in range(0, 3):
        im[:, :, ind] = I[:, :, ind] / A[ind]  # divide pixel values by atmospheric light
    dark_c, _ = get_illumination_channel(im, w)  # dark channel transmission map
    dark_t = 1 - omega*dark_c  # corrected dark transmission map
    corrected_t = init_t  # initializing corrected transmission map with initial transmission map
    diffch = brightch - darkch  # difference between transmission maps

    for i in range(diffch.shape[0]):
        for j in range(diffch.shape[1]):
            if diffch[i, j] < alpha:
                corrected_t[i, j] = dark_t[i, j] * init_t[i, j]

    return np.abs(corrected_t)
```

The code above (Python) and below (C++) does precisely this. We use the get_illumination_channel function we created in the first code snippet to obtain the dark channel transmission map. The parameter omega, usually set to 0.75, is used to correct the initial transmission map. The corrected transmission map is initialized as the initial transmission map. Its value remains the same as the initial transmission map if the difference between the dark and bright channels is more than alpha, i.e. 0.4. If the difference at any place is less than alpha, we take the product of the transmission maps, as mentioned above.
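The alpha rule on its own can be checked with toy arrays (hypothetical values, alpha = 0.4): wherever the bright/dark gap falls below alpha, the two maps are multiplied; elsewhere the initial estimate survives:

```python
import numpy as np

alpha = 0.4
bright = np.array([[0.9, 0.5],
                   [0.8, 0.5]])
dark   = np.array([[0.7, 0.2],
                   [0.1, 0.45]])
init_t = np.full((2, 2), 0.6)  # initial transmission everywhere
dark_t = np.full((2, 2), 0.5)  # dark-channel transmission everywhere

diff = bright - dark
corrected = np.where(diff < alpha, dark_t * init_t, init_t)
# gaps: [[0.2, 0.3], [0.7, 0.05]] -> the product is used everywhere except (1, 0)
```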
C++

```cpp
cv::Mat get_corrected_transmission(cv::Mat I, cv::Mat A, cv::Mat darkch, cv::Mat brightch, cv::Mat init_t, float alpha, float omega, int w) {
    cv::Mat im3(I.size(), CV_32FC3);
    // divide pixel values by atmospheric light
    for (int i=0; i < I.size[1]; i++) {
        for (int j=0; j < I.size[0]; j++) {
            im3.at<cv::Vec3f>(j, i).val[0] = (float)I.at<cv::Vec3b>(j, i).val[0]/A.at<float>(0, 0);
            im3.at<cv::Vec3f>(j, i).val[1] = (float)I.at<cv::Vec3b>(j, i).val[1]/A.at<float>(1, 0);
            im3.at<cv::Vec3f>(j, i).val[2] = (float)I.at<cv::Vec3b>(j, i).val[2]/A.at<float>(2, 0);
        }
    }

    cv::Mat dark_c, dark_t, diffch;
    std::pair<cv::Mat, cv::Mat> illuminate_channels = get_illumination_channel(im3, w);
    // dark channel transmission map
    dark_c = illuminate_channels.first;
    // corrected dark transmission map
    dark_t = 1 - omega*dark_c;

    cv::Mat corrected_t = init_t;
    diffch = brightch - darkch;  // difference between transmission maps

    for (int i=0; i < diffch.size[1]; i++) {
        for (int j=0; j < diffch.size[0]; j++) {
            if (diffch.at<float>(j, i) < alpha) {
                // replace with the product of the two transmission maps
                corrected_t.at<float>(j, i) = abs(dark_t.at<float>(j, i)*init_t.at<float>(j, i));
            }
        }
    }
    return corrected_t;
}
```

Step 5: Smoothing Transmission Map using Guided Filter

Let us take a look at the definition of the guided filter. Guided image filtering is a neighborhood operation, like other filtering operations, but it takes into account the statistics of a region in the corresponding spatial neighborhood in the guidance image when calculating the value of the output pixel. In essence, it is an edge-preserving smoothing filter. I have used the implementation of this GitHub repository for it. This filter is applied to the corrected transmission map obtained above to get a more refined map.

Step 6: Calculating the Resultant Image

A transmission map and the atmospheric light value were required to get the enhanced image.
Now that we have the required values, the first equation can be applied to get the result.

Python

```python
def get_final_image(I, A, refined_t, tmin):
    # duplicating the channel of the 2D refined map to 3 channels
    refined_t_broadcasted = np.broadcast_to(refined_t[:, :, None], (refined_t.shape[0], refined_t.shape[1], 3))
    # finding result
    J = (I - A) / (np.where(refined_t_broadcasted < tmin, tmin, refined_t_broadcasted)) + A
    return (J - np.min(J))/(np.max(J) - np.min(J))  # normalized image
```

First, the single-channel refined transmission map is broadcast to three channels, so that the number of channels in the original image and the transmission map are the same. Next, the output image is calculated using equation (1), with the transmission clamped from below by tmin. This image is then max-min normalized and returned from the function.

C++

```cpp
cv::Mat get_final_image(cv::Mat I, cv::Mat A, cv::Mat refined_t, float tmin) {
    cv::Mat J(I.size(), CV_32FC3);
    for (int i=0; i < refined_t.size[1]; i++) {
        for (int j=0; j < refined_t.size[0]; j++) {
            float temp = refined_t.at<float>(j, i);
            if (temp < tmin) {
                temp = tmin;
            }
            // finding result
            J.at<cv::Vec3f>(j, i).val[0] = (I.at<cv::Vec3f>(j, i).val[0] - A.at<float>(0,0))/temp + A.at<float>(0,0);
            J.at<cv::Vec3f>(j, i).val[1] = (I.at<cv::Vec3f>(j, i).val[1] - A.at<float>(1,0))/temp + A.at<float>(1,0);
            J.at<cv::Vec3f>(j, i).val[2] = (I.at<cv::Vec3f>(j, i).val[2] - A.at<float>(2,0))/temp + A.at<float>(2,0);
        }
    }

    double minVal, maxVal;
    cv::minMaxLoc(J, &minVal, &maxVal);
    // normalized image
    for (int i=0; i < J.size[1]; i++) {
        for (int j=0; j < J.size[0]; j++) {
            J.at<cv::Vec3f>(j, i).val[0] = (J.at<cv::Vec3f>(j, i).val[0] - minVal)/(maxVal - minVal);
            J.at<cv::Vec3f>(j, i).val[1] = (J.at<cv::Vec3f>(j, i).val[1] - minVal)/(maxVal - minVal);
            J.at<cv::Vec3f>(j, i).val[2] = (J.at<cv::Vec3f>(j, i).val[2] - minVal)/(maxVal - minVal);
        }
    }
    return J;
}
```

Further Improvements

Although the image is full of color, it looks blurry, and sharpening would improve the picture.
We can utilize cv2.detailEnhance() for this task, but this will increase noise. So we can use cv2.edgePreservingFilter() to limit it. However, this function will still induce some noise, so it is not ideal to do this if the images were noisy from the beginning.

```python
img = cv2.detailEnhance(img, sigma_s=10, sigma_r=0.15)
img = cv2.edgePreservingFilter(img, flags=1, sigma_s=64, sigma_r=0.2)
```

For a deeper understanding of these techniques, refer to this article.

Limitations

The method does not perform well if there is any explicit light source in the images, such as a lamp, or a natural light source like the moon, covering a significant portion of the image. Why is this a problem? Because such light sources push up the value of the atmosphere intensity. Since we were looking for the top 10% of the brightest pixels, this causes those areas to overexpose. This cause and effect is visualized in the image comparison set shown below.

To overcome this, let us analyze the initial transmission map made by the bright channel. The task, then, is to reduce these intense spots of white, which cause those areas to over-expose. This can be done by capping values close to 255 down to some much smaller value.

Python

```python
def reduce_init_t(init_t):
    init_t = (init_t*255).astype(np.uint8)
    xp = [0, 32, 255]
    fp = [0, 32, 48]
    x = np.arange(256)  # creating array [0,...,255]
    table = np.interp(x, xp, fp).astype('uint8')  # interpolating fp according to xp over the range of x
    init_t = cv2.LUT(init_t, table)  # lookup table
    init_t = init_t.astype(np.float64)/255  # normalizing the transmission map
    return init_t
```

To implement this in code, the transmission map is converted to the range 0-255. A lookup table is then used to interpolate the points from the original values to a new range, which reduces the effect of high exposure.
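The effect of this lookup table can be verified on a few sample intensities (same control points as above: xp = [0, 32, 255], fp = [0, 32, 48]):

```python
import numpy as np

xp = [0, 32, 255]
fp = [0, 32, 48]
x = np.arange(256)
table = np.interp(x, xp, fp).astype('uint8')

# values up to 32 pass through unchanged; brighter values are squashed hard,
# so an intense white spot (255) ends up mapped to only 48
```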
C++

```cpp
cv::Mat reduce_init_t(cv::Mat init_t) {
    cv::Mat mod_init_t(init_t.size(), CV_8UC1);
    for (int i=0; i < init_t.size[1]; i++) {
        for (int j=0; j < init_t.size[0]; j++) {
            mod_init_t.at<uchar>(j, i) = std::min((int)(init_t.at<float>(j, i)*255), 255);
        }
    }

    int x[3] = {0, 32, 255};
    int f[3] = {0, 32, 48};
    // creating array [0,...,255]
    cv::Mat table(cv::Size(1, 256), CV_8UC1);

    // linear interpolation
    int l = 0;
    for (int k = 0; k < 256; k++) {
        if (k > x[l+1]) {
            l = l + 1;
        }
        float m = (float)(f[l+1] - f[l])/(x[l+1] - x[l]);
        table.at<uchar>(k, 0) = (uchar)(f[l] + m*(k - x[l]));
    }

    // lookup table
    cv::LUT(mod_init_t, table, mod_init_t);
    for (int i=0; i < init_t.size[1]; i++) {
        for (int j=0; j < init_t.size[0]; j++) {
            // normalizing the transmission map
            init_t.at<float>(j, i) = (float)mod_init_t.at<uchar>(j, i)/255;
        }
    }
    return init_t;
}
```

The graph below is a visual representation of how this tweak in the code affects the pixels. We can see the difference between the images obtained by enhancement using the method in the paper and the results obtained following the workaround we have just discussed.

Results

The final step is to create a function that combines all the techniques and returns the enhanced image.

Python

```python
def dehaze(I, tmin=0.1, w=15, alpha=0.4, omega=0.75, p=0.1, eps=1e-3, reduce=False):
    I = np.asarray(I, dtype=np.float64)  # convert the input to a float array
    I = I[:, :, :3] / 255
    m, n, _ = I.shape
    Idark, Ibright = get_illumination_channel(I, w)
    A = get_atmosphere(I, Ibright, p)

    init_t = get_initial_transmission(A, Ibright)
    if reduce:
        init_t = reduce_init_t(init_t)
    corrected_t = get_corrected_transmission(I, A, Idark, Ibright, init_t, alpha, omega, w)

    normI = (I - I.min()) / (I.max() - I.min())
    refined_t = guided_filter(normI, corrected_t, w, eps)  # applying guided filter
    J_refined = get_final_image(I, A, refined_t, tmin)

    enhanced = (J_refined*255).astype(np.uint8)
    f_enhanced = cv2.detailEnhance(enhanced, sigma_s=10, sigma_r=0.15)
    f_enhanced = cv2.edgePreservingFilter(f_enhanced, flags=1, sigma_s=64, sigma_r=0.2)
    return f_enhanced
```

C++

```cpp
int main() {
    cv::Mat img = cv::imread("dark.png");
    float tmin = 0.1;
    int w = 15;
    float alpha = 0.4;
    float omega = 0.75;
    float p = 0.1;
    double eps = 1e-3;
    bool reduce = false;

    std::pair<cv::Mat, cv::Mat> illuminate_channels = get_illumination_channel(img, w);
    cv::Mat Idark = illuminate_channels.first;
    cv::Mat Ibright = illuminate_channels.second;
    cv::Mat A = get_atmosphere(img, Ibright);

    cv::Mat init_t = get_initial_transmission(A, Ibright);
    if (reduce) {
        init_t = reduce_init_t(init_t);
    }

    double minVal, maxVal;
    // convert the input to a float array
    cv::Mat I(img.size(), CV_32FC3), normI;
    for (int i=0; i < img.size[1]; i++) {
        for (int j=0; j < img.size[0]; j++) {
            I.at<cv::Vec3f>(j, i).val[0] = (float)img.at<cv::Vec3b>(j, i).val[0]/255;
            I.at<cv::Vec3f>(j, i).val[1] = (float)img.at<cv::Vec3b>(j, i).val[1]/255;
            I.at<cv::Vec3f>(j, i).val[2] = (float)img.at<cv::Vec3b>(j, i).val[2]/255;
        }
    }
    cv::minMaxLoc(I, &minVal, &maxVal);
    normI = (I - minVal)/(maxVal - minVal);

    cv::Mat corrected_t = get_corrected_transmission(img, A, Idark, Ibright, init_t, alpha, omega, w);
    cv::Mat refined_t(normI.size(), CV_32FC1);
    // applying guided filter
    refined_t = guidedFilter(normI, corrected_t, w, eps);
    cv::Mat J_refined = get_final_image(I, A, refined_t, tmin);

    cv::Mat enhanced(img.size(), CV_8UC3);
    for (int i=0; i < img.size[1]; i++) {
        for (int j=0; j < img.size[0]; j++) {
            enhanced.at<cv::Vec3b>(j, i).val[0] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[0]*255), 255);
            enhanced.at<cv::Vec3b>(j, i).val[1] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[1]*255), 255);
            enhanced.at<cv::Vec3b>(j, i).val[2] = std::min((int)(J_refined.at<cv::Vec3f>(j, i).val[2]*255), 255);
        }
    }

    cv::Mat f_enhanced;
    cv::detailEnhance(enhanced, f_enhanced, 10, 0.15);
    cv::edgePreservingFilter(f_enhanced, f_enhanced, 1, 64, 0.2);
    cv::imshow("im", f_enhanced);
    cv::waitKey(0);
    return 0;
}
```

Take a look at the gif below showing some other images enhanced with this algorithm.

Summary

To sum it up, we started with understanding the problems associated with images taken in poor or low lighting conditions. We discussed, step by step, the method presented by Shi et al. to enhance such images. We also discussed further improvements and limitations of the technique presented in the paper. The paper presented an excellent technique to increase the illumination of low-light images. However, it works only on images with constant illumination throughout. As promised, we also explained a workaround to overcome the limitations for images with bright spots, such as a full moon or a lamp within the image.

For future development of this method, we can try to control this reduction via a trackbar. The trackbar would help users play around to better understand the appropriate values for enhancement and set the optimum values needed for an individual image.

We hope you enjoyed the discussion and explanation. Do let us know your experience and results by leaving a comment.
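As a closing sanity check, the core of the pipeline (bright channel, atmosphere, initial transmission, and equation (1)) can be exercised end-to-end on a synthetic dark image. This is a deliberately simplified sketch with no dark-channel correction and no guided filter:

```python
import numpy as np

rng = np.random.default_rng(1)
I = rng.random((40, 40, 3)) * 0.2            # synthetic low-light image in [0, 0.2)

bright = I.max(axis=2)                       # bright channel with a 1x1 patch
flat = I.reshape(-1, 3)
idx = (-bright.ravel()).argsort()[: flat.shape[0] // 10]
A = flat[idx].mean(axis=0)                   # atmosphere: mean of the top 10% pixels

t = (bright - A.max()) / (1.0 - A.max())     # initial transmission, eq. (4)
t = (t - t.min()) / (t.max() - t.min())      # normalize to [0, 1]
t = np.clip(t, 0.1, 1.0)[..., None]          # tmin guard, broadcast over channels

J = (I - A) / t + A                          # eq. (1)
J = (J - J.min()) / (J.max() - J.min())      # final normalization
# J spans the full [0, 1] range while the input stayed below 0.2
```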
https://learnopencv.com/improving-illumination-in-night-time-images/
Global data, while usually considered poor design, is nevertheless often a useful means to preserve state between related function calls. When threads come into play, the issue is unfortunately complicated by the fact that some access synchronisation is needed to prevent more than one thread from modifying the data at the same time. There are times when you will want a globally visible object while still having its data content accessible only to the calling thread, without holding off other threads that contend for the "same" global object. This is where thread local storage (TLS) comes in.

TLS is something the operating system / threading subsystem provides, and by its very nature it is rather low level. From a globally visible object (in C++) you expect that its constructors are called before you enter "main", and that it is disposed of properly after you exit from "main". Consequently one would expect a thread local "global" object to be constructed when a thread starts up and destroyed when the thread exits. But this is not the case! Using the native API, one can only have TLS that needs neither code to construct nor code to destruct. While at first glance this is somewhat disappointing, there are reasons not to automatically instantiate all these objects on every thread creation.

A clean solution to this problem is presented, for example, in the "boost" library, and the standard "pthread" C library also addresses it properly. But when you need to use the native Windows threading API, or need to write a library that makes use of TLS yet has no control over the threading API the client code is using, you are apparently lost. Fortunately this is not true, and that is the topic of this article: the Windows Portable Executable (PE) format provides support for TLS callbacks. Although the documentation is hard to read, this can be made to work with current compilers, i.e. MSVC 6.0, 7.1, ...
Since no one else seemingly was using this feature before, and not even the C runtime library (CRT) makes use of it, you should be a little careful and watch out for undesired behaviour. That the CRT does not use the feature does not mean it does not implement it; unfortunately there is a small bug in the MSVC 6.0 implementation, which is also worked around by my code. If it turns out that the concepts presented in this article prove workable in "real life", I would be glad if this article has helped to remove some dust from this topic and make it usable for a broader range of applications. I could, for example, imagine a generalized atexit_thread function that makes use of the concepts presented here. Before explaining the gory details, I want to mention Aaron W. LaFramboise, who made me aware of the existence of the TLS callback mechanism.

If you are using the precompiled binaries, you simply need to copy the *.lib files to a convenient directory where your compiler usually finds libraries, and the files from the include directory to a directory where your compiler searches for includes. Alternatively you may simply copy the files to your project directory. The following is a simple demonstration of usage, to get you started.

#include <process.h>

// first include the header file
#include <tls.h>

// this is your class
struct A
{
    A() : n(42) { }
    ~A() { }
    int the_answer_is()
    {
        int m = n;
        n = 0;
        return m;
    }
    int n;
};

// now define a tls wrapper of class A
tls_ptr<A> pA;

// this is the threaded procedure
void run(void*)
{
    // instantiate a new "A"
    pA.reset(new A);
    // access the tls-object
    int ans = pA->the_answer_is();
    // note, that we do not need to deallocate
    // the object. This is getting done automagically
    // when the thread exits.
}

int main(int argc, char* argv[])
{
    // the main thread also gets a local copy of the tls.
    pA.reset(new A);
    // start the thread
    _beginthread(&run, 0, 0);
    // call into the main thread's version
    pA->the_answer_is();
    // the "run" thread should have ended when we
    // are exiting.
    Sleep(10000);
    // again we do not need to free our tls object.
    // this is comparable in behaviour to objects
    // at global scope.
    return 0;
}

While at first glance it might appear more natural for the tls-objects not to be wrapped as pointers, the pointer interface is in fact the right one. Although the objects are globally visible, they are still "delegates" that forward to a thread local copy, and the natural way in C++ to express delegation is a pointer object. (The technical reason, of course, is that you cannot overload the "." operator, but "->" can be overloaded.)

You can use this mechanism when building a "*.exe" file, of course, but you can also use it when building a "*.dll" image. However, when you are planning to load your DLL via LoadLibrary(), you should define the macro TLS_ALLOC when building your DLL. This is not necessary when using your DLL by means of an import library. A similar restriction applies when delay-loading your DLL. Please consult your compiler documentation if you are interested in the reasons for this. (Defining TLS_ALLOC forces the use of the TlsAlloc() family of functions from the Win32 API.)

The complete API is kept very simple:

tls_ptr<A> pA;        // declare an object of class A
pA.reset(new A);      // create a tls of class A when needed
pA.reset(new A(45));  // create a tls of class A with a custom constructor;
                      // note that this also deletes any prior object
                      // that might have been allocated to pA
pA.release();         // same as pA.reset(0), releases the thread local object
A& refA = *pA;        // get a temporary reference to the contained object
                      // for faster access
pA->the_answer_is();  // access the object

Please note again that it is not necessary to explicitly call the destructors of your class (or release()).
This is very handy when you are writing a piece of code that has no control over the calling threads but must still be multithread safe. One caveat, however: the destructors of your class are called _after_ the CRT code has ended the thread. Consequently, when you do something fancy in your destructors that causes the CRT to reallocate its internal thread local storage pointers, you will be left with a small memory leak in the CRT. This is comparable in effect to creating a thread with the native Win32 API functions instead of _beginthread().

In principle that is all you need. But wait! I mentioned a small bug in version 6 of the compiler. Luckily it is easy to work around. I provided an include file, tlsfix.h, which you will need to include into your program; make sure it is included before windows.h. To be more precise: the TLS library must be searched before the default CRT library, so you may alternatively specify the library first on the command line and omit the inclusion of tlsfix.h.

I will not discuss the user interface here. It suffices to say that it is essentially the same as in the boost library. However, I omitted the feature of being able to specify arbitrary deleter functions, since this would have required including the boost library in my code, and I wanted to keep it small and just demonstrate the principles. My implementation also deviates from boost insofar as it features native compiler support for TLS variables, thus gaining an almost 4 times speed improvement. Needless to say, my implementation is of course Windows specific.

When thinking about TLS for C++, the main question is how to run the constructors and destructors. A careful study of the PE format (e.g. in the MSDN library) reveals that it has practically always provided for TLS support. (Thanks again to Aaron W. LaFramboise, who read it carefully enough.)
Of special interest is the section about TLS callbacks:

The program can provide one or more TLS callback functions (though Microsoft compilers do not currently use this feature) to support additional initialization and termination for TLS data objects. A typical reason to use such a callback function would be to call constructors and destructors for objects.

Well, it is true that the compilers do not use the feature, but nothing prevents user code from using it. One somehow must convince the compiler (to be honest, it is the linker) to place your callback in a manner so that the operating system will call it. It turns out that this is surprisingly simple (omitting the details for a moment):

// declare your callback
void NTAPI on_tls_callback(PVOID h, DWORD dwReason, PVOID pv)
{
    if( DLL_THREAD_DETACH == dwReason )
        basic_tls::thread_term();
}

// put a pointer in a special segment
#pragma data_seg(".CRT$XLB")
PIMAGE_TLS_CALLBACK p_thread_callback = on_tls_callback;
#pragma data_seg()

You can even add more callbacks by appending pointers to the ".CRT$XLB" segment. The fancy definitions are available from the windows.h and winnt.h include files.

Now about the details: you will find at times that your callbacks are not getting called. The reason is that the linker did not correctly wire up your segments, and it turns out that this coincides with not using any __declspec(thread) variables in your code. A further study of the PE format description reveals that the CRT's TLS support is only pulled in when the linker sees the special symbol _tls_used (decorated as __tls_used). Consequently, when the linker does not find the _tls_used symbol, it won't wire in your callbacks. Luckily this is easy to circumvent:

#pragma comment(linker, "/INCLUDE:__tls_used")

This will pull in the code from the CRT that manages TLS. When using a version 7 compiler, that is all you need. (Actually I tried this with 7.1.) It turns out, however, that a version 6 compiler does not work.
But the operating system cannot be the culprit, since code compiled by version 7 works properly. After a little guesswork you will find out that the CRT code from version 6 is slightly broken, because it inserts a wrong offset to the callback table. It is then easy to replace the erroneous code and convince the linker to wire in the workaround before the broken version from the CRT. You can study the tlsfix.c file from my submission if you are interested in the details.

Which is the first function of your program that gets called by the operating system? Of course it is not main(); that was easy. Then mainCRTStartup, specified as the entry point in the linker, comes to mind. Wrong again. Interestingly, the first function being called is the TLS callback with Reason == DLL_PROCESS_ATTACH. But don't rely on this: it is not true on WinXP. I observed this behaviour on Win2000 only.

I did not yet try the code on Win95/98, WinXP Home Edition and Win2003. I would be interested in feedback about using this code on these platforms. In principle it should work, because it is a feature of PE and not of the operating system, but ...

08.28.2004 Uploaded documentation, source and sample code.

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here.

#include <windows.h>
#include <process.h>

#if !defined(_INC_STDLIB) && !defined(_STDLIB_H_) && !defined(_STDLIB_H)
#include <stdlib.h>
#endif
#if !defined(_INC_STDIO) && !defined(_STDIO_H_) && !defined(_STDIO_H)
#include <stdio.h>
#endif

void NTAPI on_tls_callback( PVOID DllHandle, DWORD dwReason, PVOID Reserved )
{
    if( DLL_THREAD_ATTACH == dwReason )
        MessageBox( 0, "DLL_THREAD_ATTACH", "", 0 );
    else if( DLL_THREAD_DETACH == dwReason )
        MessageBox( 0, "DLL_THREAD_DETACH", "", 0 );
}

// put a pointer in a special segment
#pragma data_seg(".CRT$XLB")
//#pragma data_seg(".tls")
static __declspec( thread ) PIMAGE_TLS_CALLBACK p_thread_callback = on_tls_callback;
#pragma data_seg()

#pragma comment(linker, "/INCLUDE:__tls_used")

int main(int argc, char* argv[])
{
    printf("main\n");
}

void CTlsAlloc::OnDllMain(DWORD dwReason)
{
    switch(dwReason)
    {
    case DLL_THREAD_DETACH:
        ThreadTerm();
        TRACE0("Thread detaching from TlsAlloc!\n");
        break;
    case DLL_PROCESS_DETACH:
        ThreadTerm();
        TRACE0("Process detaching from TlsAlloc!\n");
        break;
    }
}

for (p = last; p != 0; p = p->prev)
    (*p->pdtor)(rgpv[p->id]);
http://www.codeproject.com/Articles/8113/Thread-Local-Storage-The-C-Way?msg=911151
4.6 Release Notes

Release 4.6.6 is the LAST PUBLIC RELEASE in the 4.6 release series from the RTEMS Project. Legacy support for the 4.6 releases is available from OAR Corporation. Release 4.6.6 of RTEMS is primarily a problem-fixing release, and the problems addressed are discussed here. The primary driving factor for 4.6.6 was a patch to address GCC optimizing code outside of interrupt disable critical sections. Other bug fixes and improvements were also committed to the branch, so this release includes them. The notable changes are that the SPARC LEON3 BSP received significant improvements, including support for SpaceWire and a new NIC from Gaisler Research, and that the PC386 BSP was modified to allow easy compile-time selection of VGA or COM1 as the console.

4.6.6 is available for download. Please be warned that the entire set of tools for all hosts can consume an enormous amount of space. For the 4.6 series, this is approximately 1.6 GB. The bulk of this is tool binaries for approximately 10 target CPU families on GNU/Linux, Cygwin, and Solaris. It is highly unlikely that any non-maintainer requires a full download. Please try to download just what is required.

In addition to the standard RTEMS feature set, the 4.6 release series includes these major improvements over previous RTEMS releases:

Tool Improvements
- Pre-built tools now installed into /opt/rtems-4.6 to avoid conflicts with tools from previous release series.
- All RPMs are now digitally signed to ensure their integrity and origin. The key is available for download.
- RPMs may be downloaded and updated using APT or Yum
- Tool binaries are available for:
  - Red Hat GNU/Linux
7.3 and later
  - all Fedora Core releases
  - Cygwin
  - Solaris

RTEMS Improvements
- New port to the ARM including network optimizations
- New port to the OpenCores.org OR32
- MIPS port redesigned to support more CPU models, including the space-hardened LSI 33000 derivative Mongoose V and the Toshiba TX3904
- New Board Support Packages for:
  - ARM simulator in GDB
  - ARM based Cogent EDP7312
  - ARM VegaPlus with support for on-CPU peripherals
  - MIPS Mongoose V and on-CPU peripherals
  - Motorola MCF5206Elite Coldfire
  - Motorola MTX603e, MVME2100, and MVME230x
  - Motorola MPC8260ADS
  - PC BSP variants optimized for 486, Pentium and above, and Athlon
  - Renesas/Hitachi SH4 generic BSP
  - SPARC ERC32 BSP variation for a CPU model without FPU, with support for on-CPU peripherals
  - SPARC LEON2 and LEON3 with support for on-CPU peripherals
  - Toshiba MIPS JMR3904 and on-CPU peripherals
- FAT filesystem supporting FAT12, FAT16, and FAT32
- ATA/IDE device support
- NFS client
- Dynamic downloading via CEXP shell
- Numerous ported third party packages including Python and TCL
- Many more improvements, including more POSIX support and much reorganization as part of the ongoing effort to move RTEMS to full automake support and to separate the CPU and BSP portions of RTEMS

Changes Per Point Release

Release 4.6.6 Changes

11 problem reports filed by users were closed between 4.6.5 and 4.6.6. Four (4) of these were BSP specific.
These problems can be roughly categorized as follows:

- Run-Time Problems (7)
  - 830/filesystem - termios ioctl(FIONREAD) reported wrong number of chars
  - 843/rtems_misc - memory corruption in web server when not using "internal" malloc
  - 849/networking - rtems portmapper stack overrun
  - 850/rtems - watchdog with delay of 1 failing to time out
  - 855/tests - a bug in the define of macro function sprint_time()
  - 886/filesystem - fcntl(F_GETFL) POSIX violation (non-blocking fd doesn't return O_NONBLOCK)
  - 890/networking - Webserver POST DoS vulnerability
- BSP and Port Specific Problem Reports (4)
  - 719/bsps - m68kpretaskinghook.c vs if ((unsigned long)&pointer == 0)
  - 834/bsps - PPC BSP exception handler stack popping fix
  - 837/bsps - 4.6.5 broken on motorola_powerpc (mvme23xx, 26xx, ..) other than mvme2100
  - 845/bsps - mvme2100 BSP has MMU disabled

Release 4.6.5 Changes

6 problem reports filed by users were closed between 4.6.4 and 4.6.5. Five (5) of these were BSP specific and only one was cross-platform. These problems can be roughly categorized as follows:

- Run-Time Problems (1)
  - PR 829/rtems - task variable dtor called with wrong argument
- BSP and Port Specific Problem Reports (5)
  - 527/bsps - mbx8xx BSP should be modified in various ways (same fix as PR 822)
  - 577/bsps - Lib Shared Console Close correction
  - 822/bsps - MBX8xx-BSP does not boot (same fix as PR 527)
  - 827/bsps - sparc BSP update
  - 833/bsps - PPC BSPs must not enable FPU across user ISR

Release 4.6.4 Changes

38 problems reported by users were fixed between 4.6.2 and 4.6.4. On a positive note, 3 of these issues were simple improvements in argument parsing, 8 were general improvements in either functionality or cleanup, and 12 were BSP or port specific. These problems can be roughly categorized as follows:

- Improved Parameter Validation (3)
  - PR 628/rtems - POSIX sigset of 0 should result in EINVAL
  - PR 749/networking - NULL pointer dereference in show_inet_route()
  - PR 750/pppd - NULL pointer dereference in wait_input()
- Run-Time Problems (10)
  - PR 577/bsps - Lib Shared Console Close correction
  - PR 692/rtems - Region Manager broken for blocking when empty
  - PR 745/rtems - internal timers are not always reinitialized
  - PR 772/networking - select() not waking up when socket becomes writable
  - PR 790/rtems - Extensions name are not correctly managed
  - PR 796/rtems - sem_timedwait uses relative time instead of absolute time
  - PR 805/rtems - It is not possible to create more than 11 posix timers
  - PR 807/rtems - Timer chain corruption with simultaneous use by different priority interrupts
  - PR 808/rtems_misc - printk crashes with field size smaller than value
  - PR 820/rtems - core msgq count not atomic with chain inserts
- Improvements (8)
  - PR 704/bsps - ide controller wrapper should only be once in the source tree
  - PR 721/filesystem - unlink does not yet work for DOSFS
  - PR 742/rtems - cpukit/score/include/rtems/system.h stringify pollutes application namespace
  - PR 744/filesystem - fix unlink for dosfs
  - PR 786/rtems - Backport mallocfreespace optimization to 4.6
  - PR 810/rtems - ide_part_table.h does not link with cpp
  - PR 817/rtems - rtems_gxx_recursive_mutex_init_function
  - PR 819/filesystem - symbol clash in ttyname.c and ttyname_r.c
- BSP and Port Specific Problems (12)
  - PR 581/bsps - PSIM BSP should migrate to powerpc new exception processing model
  - PR 617/bsps - cdtest sample and C++ constructors do not work on psim
  - PR 693/rtems_misc - mc146818a_ioreg.c is non-portable
  - PR 696/bsps - powerpc: old_exception_processing/cpu.c needs bsp.h
  - PR 697/bsps - Suspicious comments in helas403/flashentry/flashentry.S
  - PR 715/bsps - libchip/rtc/mc146818a.c should not include bsp.h
  - PR 717/bsps - motorola_powerpc bootloader miscompiled
  - PR 743/bsps - pc386 BSP build fails on rtems-4-6-branch
  - PR 777/bsps - psim: Add a Processor_Synchronize command in bsp.h
  - PR 778/bsps - score603e - modify SCORE_.. to BSP_..
These problems can be categorized as follows: - Improved Parameter Validation (8) - PR 618/rtems - add null checks to Classic API - PR 628/rtems - POSIX API sigset of 0 should return EINVAL - PR 629/rtems - POSIX mqcreate check msg size <= not < 0 - PR 651/rtems - Classic API task ident missing NULL check - PR 652/rtems - Classic API signal send does not detect empty signal set - PR 659/rtems - Heap get size does not check for out of range address - PR 660/rtems - thread stack allocation -- detect math overflows - PR 661/rtems - object mp name search -- fix invalid dereference - Run-Time Problems (6) - PR 609/rtems - Race condition between _Thread_Dispatch and _Thread_Tickle_timeslice - PR 626/networking - avoid printing 0xFFFFF for bytes > 127 - PR 641/rtems - Events sent to a task entering rtems_event_receive() may be lost - PR 654/rtems - Thread_Control watchdog is not initialized on reuse - PR 681/misc - stack checker dereferences bad pointer - PR 692/rtems - Region Manager broken for blocking when empty - PR 686/networking - if_fxp.c locks up - Improvements (7) - NO PR -- Added NFS client to Add-On Packages - PR 605/rtems - mips cpu.c uses C++ style comments - PR 619/bsps - (mpc6xx) account for latency when updating decrementer clock - PR 625/networking - small fixes to some libchip/network drivers - PR 676/networking - /etc/resolv.conf contains NTP instead of DNS servers - PR 676/misc - rtmonuse.h missing extern C wrapper - PR 688/bsps - Add libchip support for Motorola MC146818 time-of-day clock - BSP and Port Specific Problems (8) - PR 569/bsps - i386/pc386: pcibios.c lacks BusCountPCI() -- see 608 - PR 606/bsps - (PowerPC) Minor enhancements and fixes to PCI support - PR 607/networking - Adjusted dec21140 PCI config to minimum sane - PR 608/bsps - Preliminary fix for pc386 BusCountPCI() - PR 671/bsps - ARM _ISR_Set_level broken - PR 679/bsps - i386ex linkcmds missing newer sections - PR 680/bsps - mpc8260ads network driver does not compile - PR 691/bsps - 
gen405 has linkcmds buglet - Documentation Issues (3) - PR 627/doc - document behavior when stacksize < minimum - PR 682/doc - Getting Started -- minor corrections to PATH and tar examples - PR 683/doc - User's Guide preface has chapters misnumbered - Tool Issues (2) - PR 588/tools - Remove m505/roe multilib variant in 3.4.0 patch - PR 621/tools - powerpc-rtems-gcc >= 3.3.2 lacks -DUSE_INIT_FINI Release 4.6.1 Changes Fourteen problems reported () by users were fixed between 4.6.0 and 4.6.1. These problems can be categorized as follows: - General Build Failures (1) - PR574 (libcpu/i386/Makefile.am missing) prevented all i386 BSPs from building. - RTEMS Run-Time Problems (5) - PR582 (POSIX message queue memory allocation problem) fixed a problem which rendered POSIX message queues unusable. - PR584 (Critical section window in Event Timeout) closed a small window. - PR598 (MIPS FP context switch error) added the needed save and restore of the FPCS register. - PR599 (PPPD memory allocation issues) fixed places where memory was allocated and not freed. - PR604 (sparc cpu_asm.S has window resulting in corrupted CWP) addressed a small window in the interrupt task switch path which resulted in the CWP getting corrupted. - BSP Specific Problems (3) - PR505 (libbsp/m68k/shared/setvec.c) fixes a warning with gcc 3.3 and newer. - PR575 (motorola_powerpc: conflicting linkcmds) removed an unused and confusingly different linkcmds in the shared codebase. - PR602 (mpc8260ads irq.h typo) prevented this BSP from building the C++ RTEMS examples. - Documentation Issues (1) - PR576 (FAQ/embedded.t is outdated) removed an out of date file. - Test Issues (3) - PR583 (tm26 and tm27 broken) fixes these tests to address needing to be out of a dispatching critical section before allocating memory. - PR596 (sp32 broken for buffered output) lets this test run when bufer test IO mode is enabled. 
- PR595 (sp13 buffer overrun) addresses a buffer overrun - Tool Issues (1) - PR594 (GNAT missing) addresses the problem that rtems-4.6-gcc3.2.3newlib1.11.0-2 did not include GNAT. rtems-4.6-gcc3.2.3newlib1.11.0-3 tool binaries for GNU/Linux now include GNAT. Release 4.6.0 Changes This was the first release in the 4.6 Release Series and first to have all the major improvements listed at the top of this page.
https://devel.rtems.org/wiki/Release/4.6
You can choose from two ways to program with MSMQ. MSMQ exposes a C-level API as well as a set of ActiveX components. As a Visual Basic programmer, you're much better off using the ActiveX components when you program MSMQ. Note that some MSMQ functionality is accessible only through the C-level API. Fortunately, you can meet most of the common requirements for message passing in a distributed application by using the ActiveX components. In the rare circumstances in which you need to use the C-level API, you can use Visual Basic Declare statements and call into the API directly. However, this chapter will examine only the use of ActiveX components. Once you include a reference to MSMQ's type library (the Microsoft Message Queue Object Library) in your Visual Basic project, you can use these ActiveX components in your code. I won't cover every possibility for programming with MSMQ's ActiveX components. MSMQ provides several ActiveX components, and each one has quite a few properties and methods. Instead, I'll simply get you started by offering MSMQ programming examples that demonstrate the most common types of code required in a distributed application. You should use the HTML document MSMQ Programmer's Reference to complement the material presented in this chapter. Let's begin our tour of MSMQ programming by creating a queue. MSMQ lets you create both public queues and private queues. We'll start by creating a public queue. In this chapter, you should assume that all queues are public unless I indicate they are private. You can create a public queue in one of two ways. You can create a queue by hand using the MSMQ Explorer, or you can create a queue programmatically. We'll create one by hand first. Simply right-click on a computer in the MSMQ Explorer, and choose Queue from the New menu. You must give the queue a name and specify whether you want the queue to be transactional. I'll defer a discussion of transactional queues until later in this chapter. 
For now, just create a queue by giving it a name, and leave the Transactional check box deselected. Click OK to create the queue. After you create the queue, you can examine its attributes by right-clicking on it and choosing Properties. You'll see a tabbed dialog box in which you can modify various properties of the queue. When you examine the queue properties, you'll notice that MSMQ has automatically generated a GUID to identify the queue.

You can also create a queue programmatically using an MSMQQueueInfo object. First you must create the object and assign it a valid PathName. A queue's PathName should include the name of the computer and the name of the queue. For example, look at the following code:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = "MyComputer\MyQueue"
qi.Label = "My Queue"
qi.Create

This example uses an MSMQQueueInfo object to create a new queue. Once you set the PathName property, you can create a queue by invoking the Create method. This example also sets the Label property of the new queue. A label is optional, but it can be helpful when you need to locate the queue later on.

The Create method takes two optional parameters. The first parameter indicates whether the new queue is transactional. The second parameter, IsWorldReadable, lets you indicate whether the queue will be readable to users other than the owner. The default value for this parameter is False, which means that only the queue's owner is able to receive messages from the queue. If you pass True to this parameter, the queue can be read by all users. Whatever you pass, all users can send messages to the queue. You can also set queue security permissions by modifying the discretionary access control list (DACL) for the queue. You do this by opening the queue's Properties dialog box and navigating to the Security tab in the MSMQ Explorer.

Note that you can abbreviate the PathName for a local queue so that you don't have to hardcode the name of the computer.
You must do this when you want to write generic code that will run on many different computers. A single dot (as in .\MyQueue) signifies that the queue path is defined on the local computer. You can use this abbreviated form when you create and open a local queue. For example, you can rewrite the previous code as follows:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = ".\MyQueue"
qi.Label = "My Queue"
qi.Create

In addition to creating queues, you can use an MSMQQueueInfo object when you want to search for or open an existing queue. Let's say you want to get a little tricky and create a queue when one with a predefined caption doesn't already exist. First, you can run a query against the MQIS with an MSMQQuery object to determine whether a queue with a certain label already exists. You run a query by invoking the LookupQueue method, which returns an MSMQQueueInfos object. The MSMQQueueInfos object is a collection of MSMQQueueInfo objects that match your lookup criteria. Here's an example of conducting a lookup by a queue's caption:

Dim qry As MSMQQuery
Set qry = New MSMQQuery
Dim qis As MSMQQueueInfos
Set qis = qry.LookupQueue(Label:="MyComputer\MyQueue")
Dim qi As MSMQQueueInfo
Set qi = qis.Next
If qi Is Nothing Then
    ' The queue did not exist.
    Set qi = New MSMQQueueInfo
    qi.PathName = "MyComputer\MyQueue"
    qi.Label = "MyComputer\MyQueue"
    qi.Create
End If

In this example, a new queue is created only if a queue with the label MyComputer\MyQueue doesn't already exist. Note that you can also use other types of criteria when you run a lookup query.

Now let's open a queue and send a message. The next object you need to understand is an MSMQQueue object. At first, the relationship between MSMQQueueInfo objects and MSMQQueue objects can be a little confusing. It's reasonable to conclude that an MSMQQueue object represents a physical queue because of its name. However, you're better off thinking of it as a queue handle. For example, you can open three different MSMQQueue objects on the same physical queue:
For example, you can open three different MSMQQueue objects on the same physical queue: Dim qi As MSMQQueueInfo Set qi = new MSMQueueInfo qi.PathName = ".\MyQueue" Dim qSend As MSMQQueue Set qSend = qi.Open(MQ_SEND_ACCESS, MQ_DENY_NONE) Dim qPeek As MSMQQueue Set qPeek = qi.Open(MQ_PEEK_ACCESS, MQ_DENY_NONE) Dim qReceive As MSMQQueue Set qReceive = qi.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE) You can see that an MSMQQueueInfo object represents a physical queue and that an MSMQQueue object actually represents an open handle to the queue. When you call Open, you must specify the type of access you want in the first parameter. You can peek at as well as receive from a queue when you open it with MQ_RECEIVE_ACCESS. However, if you want to send messages while also peeking at or receiving from the same queue, you must open two MSMQQueue objects. Remember to invoke the Close method on an MSMQQueue object as soon as you've finished using it. You can use the second parameter to Open to specify the share mode for the queue. The default value of this parameter is MQ_DENY_NONE, which means that the queue can be opened by more than one application for receive access at the same time. You must use this setting when you open a queue using MQ_PEEK_ACCESS or MQ_SEND_ACCESS. However, when you open a queue with receive access, you can set the share mode to MQ_DENY_RECEIVE_SHARE to prevent other applications from receiving messages at the same time. When one application opens a queue with both MQ_RECEIVE_ACCESS and MQ_DENY_RECEIVE_SHARE, no other application can open the queue in receive mode. An application using this mode will be the only one that can remove messages from the queue. When you create a public queue, MSMQ assigns it an identifying GUID and publishes it in the MQIS. This allows other applications to open the queue by assigning the computer name and queue name to the PathName property. This also allows other applications to find the queue by running queries against the MQIS. 
However, the process of publishing a public queue takes up time and disk space and is sometimes unnecessary. Imagine an application that consists of hundreds or thousands of independent clients that all require a local response queue. In this situation, it makes sense to use private queues. Private queues must be created locally, and they are not published in the MQIS. They're published only on the computer on which they reside. As you'll see later in this chapter, you can send the information about a private response queue in the header of a request message. This lets you establish bidirectional communication between a client application and the server. More important, using private queues means that you don't have to publish all those response queues, which saves both time and disk space. You can create a private queue by adding Private$ to the queue's PathName, like this:

Dim qResponseInfo As MSMQQueueInfo
Set qResponseInfo = New MSMQQueueInfo
qResponseInfo.PathName = ".\Private$\MyResponseQueue"
qResponseInfo.Create

MSMQ applications can send messages to private queues on other machines as long as they can find the queues. This isn't as easy as locating public queues because you can't open a private queue using a PathName—it isn't published in the MQIS. Later in this chapter, I'll show you a technique for passing the response queue's information to another application in a request message. Another way that you can send messages to private queues on another computer is by using the FormatName property. This technique is valuable when you are dealing with private queues on disconnected clients. When a queue is created, MSMQ creates a FormatName for it. Here's an example of two different FormatName properties for a public queue and a private queue:

PUBLIC=067ce2cb-26fc-11d2-b56b-f4552d000000
PRIVATE=f38f2a17-218e-11d2-b555-c48e04000000\00000022

The FormatName of a public queue includes the GUID that identifies the queue in the MQIS.
A private queue doesn't have its own GUID. Instead, its FormatName includes the GUID that identifies the local computer and an extra computer-specific queue identifier. An application can send messages to a private queue across the network by assigning the FormatName before invoking the Open method. Of course, the application must know the FormatName ahead of time.

Let's send our first message. MSMQ makes this task remarkably easy. You can create a new MSMQMessage object and prepare it by setting a few properties. You can then invoke the MSMQMessage object's Send method, and MSMQ will route your message to its destination queue. Here's a simple example:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = ".\MyQueue"

Dim q As MSMQQueue
Set q = qi.Open(MQ_SEND_ACCESS, MQ_DENY_NONE)

' Create a new message.
Dim msg As MSMQMessage
Set msg = New MSMQMessage

' Prepare the message.
msg.Label = "My superficial label"
msg.Body = "My parameterized request information"
msg.Priority = MQ_MAX_PRIORITY

' Send message to open queue.
msg.Send q
q.Close

As you can see, MSMQ's ActiveX components make it pretty easy to open a queue and send a message. The message in the last example was prepared by setting three properties. The Label is a string property of the message header that distinguishes or identifies a particular message. The two other message properties are the message body and the message priority. In MSMQ, a message body is stored as an array of bytes. The body is typically used to transmit parameterized data between the sender and the receiver. This example demonstrates that you can simply assign a Visual Basic for Applications (VBA) string to a message body. The receiver can read this string from the message body just as easily. However, in many cases you'll use a message body that is more complex. For example, you might need to pass multiple parameters from the sender to the receiver.
I'll revisit this topic later in this chapter and discuss how to pack parameterized information into the message body. The last property used in the example is the message priority. A message has a priority value between 0 and 7; the higher the value, the higher the priority. MSMQ stores messages with higher priority levels at the head of the queue. For example, a message with a priority level of 6 is placed in the queue behind all messages of priority 7 and behind messages of priority 6 that have already been written to the queue. The new message is placed ahead of any message of priority 5 or lower. The MSMQ type library contains the constants MQ_MAX_PRIORITY (7) and MQ_MIN_PRIORITY (0). The default priority for a new message is 3.

You can use the MSMQ Explorer to examine the messages in a queue, as shown in Figure 11-5. You should see a list of all the messages that have been sent to the queue but have not been received. As you can see, messages with the highest priority are placed at the head of the queue. The message at the head is usually the first one to be received. You must have read permissions for a queue in order to see the messages in it with the MSMQ Explorer. There might be times when your installation of MSMQ doesn't give you these read permissions by default. You can modify the access permissions for a queue by right-clicking on it in the MSMQ Explorer and choosing Properties. If you navigate to the Security tab, you can change both the owner and the permissions for the queue so you can see the messages inside it. It's especially useful to look at the header attributes and bodies of messages when you're beginning to program with MSMQ.

Figure 11-5. You can examine the messages in a queue using the MSMQ Explorer. Messages with the highest priority are at the head of the queue.

Before I move on to the next section, I want to introduce a few other important message properties. The first is the Delivery property, which has two possible settings.
The default setting is MQMSG_DELIVERY_EXPRESS, which means that the message is sent in a fast but unreliable fashion. Express messages are retained in memory only while they're being routed across various computers toward their destination queue. If a computer crashes while holding express messages, the messages could be lost. To ensure that a message isn't lost while being routed to its destination queue, you can set the Delivery property to MQMSG_DELIVERY_RECOVERABLE. The message will be flushed to disk as it is passed from one computer to another. The disk I/O required with recoverable messages results in significant performance degradation, but the message won't be lost in the case of a system failure. When you send nontransactional messages, you must explicitly set the Delivery property if you want recoverable delivery. When you send transactional messages, the Delivery property is automatically set to MQMSG_DELIVERY_RECOVERABLE. When a message is sent to a queue, MSMQ assigns it an ID property. This property is a 20-byte array that uniquely identifies the message. MSMQ generates the ID by using two different values. The first 16 bytes of the ID are the GUID of the sending computer. (MSMQ assigns an identifying GUID to every computer during installation.) As you can see in Figure 11-5 (shown earlier), the first part of the message ID is always the same for any message sent from the same computer. The last 4 bytes of the ID are a unique integer generated by the sending computer. In most cases, you don't need to worry about what's inside the Byte array. However, if you need to compare two IDs to see whether they represent the same message, you can use VBA's StrComp function with the vbBinaryCompare flag. Each message also has a CorrelationID property. Like the ID, this property is also stored as a 20-byte array. Let's look at a problem to see why this property is valuable. Let's say that a client application sends request messages to a server. 
The server processes the requests and sends a response message for each request. How does the client application know which request message is associated with which response message? The CorrelationID property solves this problem. When the server processes a request, it can assign the ID of the incoming request message to the CorrelationID of the outgoing response message. When the client application receives a response message, it can compare the CorrelationID of the response message with the ID from each request message. This allows the sender to correlate messages. As you can see, the CorrelationID is useful when you create your own response messages. As you'll see later in this chapter, MSMQ also assigns the proper CorrelationID automatically when it prepares certain system-generated messages, such as an acknowledgment message.

To receive a message, you first open an MSMQQueue object with receive access, and then you invoke the Receive method to read and remove the first message in the queue:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = ".\MyQueue"

Dim q As MSMQQueue
Set q = qi.Open(MQ_RECEIVE_ACCESS, MQ_DENY_NONE)

Dim msg As MSMQMessage
' Attempt to receive first message in queue.
Set msg = q.Receive(ReceiveTimeout:=1000)
If Not (msg Is Nothing) Then
    ' You have removed the first message from the queue.
    MsgBox msg.Body, vbInformation, msg.Label
Else
    ' You timed out waiting on an empty queue.
End If
q.Close

There's an interesting difference between sending and receiving a message with MSMQ. You invoke the Send method on an MSMQMessage object, but you invoke the Receive method on an MSMQQueue object. (This doesn't really cause problems; it's just a small idiosyncrasy of the MSMQ programming model.) If a message is in the queue, a call to Receive removes it and returns a newly created MSMQMessage object. If there's no message in the queue, a call to Receive behaves differently depending on how the timeout interval is set.
By default, a call to Receive has no timeout value and will block indefinitely if no message is in the queue. If you don't want the thread that calls Receive to block indefinitely, you can specify a timeout interval. You can use the ReceiveTimeout parameter to specify the number of milliseconds that you want to wait on an empty queue. If you call Receive on an empty queue and the timeout interval expires before a message arrives, the call to Receive returns with a null reference instead of an MSMQMessage object. The code in the last example shows how to set a timeout value of 1000 milliseconds. It also shows how to determine whether a message arrived before the timeout expired. If you don't want to wait at all, you can use a ReceiveTimeout value of 0. A ReceiveTimeout value of -1 indicates that you want to wait indefinitely. (This is the default if you don't pass a timeout value.)

You can call Receive repeatedly inside a Do loop to synchronously remove every message from a queue. The following example shows how to receive all the messages from a queue and fill a list box with message labels:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = ".\MyQueue"

Dim q As MSMQQueue
Set q = qi.Open(MQ_RECEIVE_ACCESS, MQ_DENY_RECEIVE_SHARE)

Dim msg As MSMQMessage
Set msg = q.Receive(ReceiveTimeout:=0)
Do Until msg Is Nothing
    lstReceive.AddItem msg.Label
    Set msg = q.Receive(ReceiveTimeout:=0)
Loop
q.Close

You can set the share mode to MQ_DENY_RECEIVE_SHARE so that your application won't have to contend with other applications while removing messages from the queue. Use a timeout value of 0 if you want to reach the end of the queue and move on to other business as soon as possible. Sometimes you'll want to inspect the messages in a queue before removing them. You can use an MSMQQueue object's peek methods in conjunction with an implicit cursor to enumerate through the messages in a queue.
After opening a queue with either receive access or peek access, you can call Peek, PeekCurrent, or PeekNext. Peek is similar to Receive in that it reads the first message in the queue. However, Peek doesn't remove the message. If you call Peek repeatedly, you keep getting the same message. Another problem with Peek is that it has no effect on the implicit cursor behind the MSMQQueue object. Therefore, it is more common to work with PeekCurrent and PeekNext. You can move the implicit cursor to the first message in a queue with a call to PeekCurrent. As with a call to Receive, you should use a timeout interval if you don't want to block on an empty queue. After an initial call to PeekCurrent, you can enumerate through the rest of the messages in a queue by calling PeekNext:

Dim qi As MSMQQueueInfo
Set qi = New MSMQQueueInfo
qi.PathName = ".\MyQueue"

Dim q As MSMQQueue
Set q = qi.Open(MQ_PEEK_ACCESS, MQ_DENY_NONE)

Dim msg As MSMQMessage
Set msg = q.PeekCurrent(ReceiveTimeout:=0)
Do Until msg Is Nothing
    ' Add message labels to a list box.
    lstPeek.AddItem msg.Label
    Set msg = q.PeekNext(ReceiveTimeout:=0)
Loop
q.Close

The ReceiveCurrent method is often used in conjunction with PeekCurrent and PeekNext. You can enumerate through the messages in a queue by peeking at each one and comparing the properties of the current message against the criteria of the messages you want to receive and process. For example, after calling PeekCurrent or PeekNext, you can compare the label of the current message with a specific label that you're looking for. If you come across a message with the label you're looking for, you can call ReceiveCurrent to remove it from the queue and process it.

The examples I have shown so far of peeking and receiving messages have all used synchronous techniques for examining and removing the messages in a queue. These techniques are easy ways to read or remove all the messages that are currently in a queue.
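Here's a hedged sketch of that filtering pattern; the label "OrderRequest" is an arbitrary illustrative value, and q is assumed to be open with receive access. One assumption worth calling out: removing the message under the cursor advances the cursor, so the sketch peeks at the current position after each ReceiveCurrent instead of calling PeekNext:

' Walk the queue and remove only the messages whose label matches.
' Assume q is an open MSMQQueue object with receive access.
Dim msg As MSMQMessage
Set msg = q.PeekCurrent(ReceiveTimeout:=0)
Do Until msg Is Nothing
    If msg.Label = "OrderRequest" Then
        ' Remove the message under the cursor and process it.
        Set msg = q.ReceiveCurrent(ReceiveTimeout:=0)
        ' ... process msg ...
        ' Removing the message moves the cursor forward, so peek at
        ' the message now under the cursor rather than skipping ahead.
        Set msg = q.PeekCurrent(ReceiveTimeout:=0)
    Else
        Set msg = q.PeekNext(ReceiveTimeout:=0)
    End If
Loop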
They also let you process future messages as they are sent. The following code doesn't use a timeout interval; it blocks until a message is sent to the queue. It processes all messages until the queue is empty and then blocks until more messages arrive:

' Assume q is an open MSMQQueue object with receive access.
Dim msg As MSMQMessage
Do While True ' Loop forever.
    ' Wait indefinitely for each message.
    Set msg = q.Receive()
    ' Process message.
Loop

While this style of coding allows you to process messages as they arrive, it also holds the calling thread hostage. If you have a single-threaded application, the application can't do anything else. However, you can use MSMQ events as an alternative to this synchronous style of message processing. MSMQ events let your application respond to asynchronous notifications that are raised by MSMQ as messages arrive at a queue. You can therefore respond to a new message without having to dedicate a thread to block on a call to Receive or PeekNext.

Let's look at how MSMQ events work. The MSMQ eventing mechanism is based on the MSMQEvent component. To use events, you must first create an MSMQEvent object and set up an event sink. Next you must associate the MSMQEvent object with an MSMQQueue object that has been opened for either peek access or receive access. You create the association between the two objects by invoking the EnableNotification method on the MSMQQueue object and passing a reference to the MSMQEvent object. After you call EnableNotification, MSMQ notifies your application when a message has arrived by raising an Arrived event. You learned how to set up an event sink with Visual Basic in Chapter 6. As you'll recall, to create an event sink you must use the WithEvents keyword and declare the source object's reference variable in the declaration section of a form module or a class module.
The following code shows how to set up an event sink for a new MSMQEvent object and associate it with an open MSMQQueue object:

Private qPeek As MSMQQueue
Private WithEvents qPeekEvents As MSMQEvent

Private Sub Form_Load()
    Dim qi As MSMQQueueInfo
    Set qi = New MSMQQueueInfo
    qi.PathName = ".\MyQueue"
    Set qPeek = qi.Open(MQ_PEEK_ACCESS, MQ_DENY_NONE)
    Set qPeekEvents = New MSMQEvent
    qPeek.EnableNotification qPeekEvents
End Sub

This example uses peek access, but events work in a similar manner for receiving messages. Once you set up the MSMQEvent object's event sink and call EnableNotification, you will be notified with an Arrived event as soon as MSMQ finds a message in the queue. Here's an implementation of the Arrived event that adds the labels of new messages to a list box as they arrive in the queue:

Sub qPeekEvents_Arrived(ByVal Queue As Object, ByVal Cursor As Long)
    Dim q As MSMQQueue
    Set q = Queue ' Cast to type MSMQQueue to avoid IDispatch.
    Dim msg As MSMQMessage
    Set msg = q.PeekCurrent(ReceiveTimeOut:=0)
    If Not (msg Is Nothing) Then
        lstPeek.AddItem msg.Label
    End If
    q.EnableNotification qPeekEvents, MQMSG_NEXT
End Sub

Note that this example calls EnableNotification every time an Arrived event is raised. This is required because a call to EnableNotification sets up a notification for only the next message. If you want to receive notifications in an ongoing fashion, you must keep calling EnableNotification in the Arrived event. It is also important to pass the appropriate cursor constant when you call EnableNotification. This example passes the constant MQMSG_NEXT in order to advance the implicit cursor. The next time an Arrived event is raised, a call to PeekCurrent examines the next message in the queue. You should also note that the code in the example above peeks at every message that was stored in the queue when the MSMQEvent object was set up. In other words, MSMQ raises events for existing messages as well as future messages.
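If you want to ignore the messages that are already in the queue, one approach is to drain the implicit cursor synchronously before asking for notifications. This is a hedged sketch, reusing the qPeek and qPeekEvents variables from the Form_Load example above:

' Skip past every message already in the queue so that the first
' Arrived event corresponds to a message sent after this point.
Dim msg As MSMQMessage
Set msg = qPeek.PeekCurrent(ReceiveTimeout:=0)
Do Until msg Is Nothing
    Set msg = qPeek.PeekNext(ReceiveTimeout:=0)
Loop
' Now ask for a notification on the next message only.
qPeek.EnableNotification qPeekEvents, MQMSG_NEXT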
If you care only about future messages, you can synchronously advance the implicit cursor to the last existing message before calling EnableNotification.

When you prepare a message, you must often pack several different pieces of parameterized information into the body before sending it to a queue. On the receiving side, you must also be able to unpack these parameters before you start processing the sender's request. Up to this point, I've shown you only how to pass simple VBA strings in a message body. Now we'll look at how to pass more complex data structures. A message body is a Variant that is stored and transmitted as a Byte array. You can read and write the usual VBA data types to the body, such as Boolean, Byte, Integer, Long, Single, Double, Currency, Date, and String. MSMQ tracks the type you use in the message header. This makes it quite easy to store a single value in a message body. However, it doesn't solve the problem of packing in several pieces of data at once.

To pack several pieces of data into a message, you must understand how to use the Byte array behind the message body. Using an array behind the message body is tricky because it must be an array of bytes. If you assign another type of array to the message body, MSMQ converts it to a Byte array. Unfortunately, once your data has been converted to a Byte array, there's no easy way to convert it back to the original array type on the receiving side. This means that a simple technique such as sending your parameters in a String array won't work as you might hope. A Byte array is flexible because it can hold just about any binary or text-based data. If you don't mind working with a Byte array directly, you can pack the message body using code like this:

Dim msg As MSMQMessage
Set msg = New MSMQMessage

Dim data(11) As Byte
' Fill the array with parameterized data.
data(0) = 65: data(1) = 66
data(2) = 67: data(3) = 68
data(4) = 49: data(5) = 51
data(6) = 53: data(7) = 55
data(8) = 57: data(9) = 97
data(10) = 98: data(11) = 99

msg.Body = data
msg.Send q

Figure 11-6 shows the Body tab of the message's Property dialog box, which you can view using the MSMQ Explorer. The message body shown in the figure is the same one that was generated in the last code example. The Body tab shows the contents of the message in both hexadecimal format and ANSI format. How do you unpack this Byte array from the message body in a receiver application? It's pretty easy. All you have to do is create a dynamic array reference and assign the message body to it, like this:

Dim msg As MSMQMessage
Set msg = q.Receive()

Dim d() As Byte
d = msg.Body ' Now the Byte array is populated.

' For example, to inspect the value in position 2:
Dim Val As Byte
Val = d(2)

While it's important for you to understand that the message body is always stored as a Byte array, the technique I have just shown isn't always the best way to pack and unpack your parameterized information. Writing and reading Byte arrays gives you as much flexibility as MSMQ can offer, but it doesn't offer high levels of productivity.

Figure 11-6. A message body is always stored as a Byte array. The left side shows the hexadecimal value of each byte in the array; the right side displays the ANSI character that represents the value of each byte. The first byte in this body has the decimal value 65 and the hexadecimal value 41, and the letter A is its ANSI character representation.

It can also be tricky and time consuming to write the code for packing and unpacking several pieces of parameterized information into a Byte array. Several other techniques are easier and faster to program. You should work directly with Byte arrays only when the data being packed is fairly straightforward or no other technique can give you the results you need.
OK, let's put your knowledge of Byte arrays to work and pack several parameters into a single message body. Suppose you want to send a request message to submit a sales order. The body of the request message must include a customer name, a product name, and a requested quantity. How do you pack these three pieces of information into a message body? We'll look at three different techniques: using a string parsing technique, using a Visual Basic PropertyBag object, and using a persistent Visual Basic class to read and write an entire object into the message body. You've already seen that it's easy to write and read a VBA string to and from a message body. As you'll recall from Chapter 6, a VBA string is stored internally using a COM data type known as a basic string (BSTR). A BSTR maintains the actual string data with an array of Unicode characters. Because a BSTR is based on Unicode, it requires 2 bytes per character; ANSI strings require only 1 byte per character. Packing a VBA string into a message body is easy because MSMQ does the Byte array conversions for you behind the scenes. When you assign a string to a message body, MSMQ simply converts the Unicode characters array to a Byte array. On the receiving side, when you assign the message body to a string variable, MSMQ creates a new BSTR and populates it with the Unicode characters from inside the Byte array. The conversion going on behind the scenes is somewhat complicated, but things couldn't be easier in terms of the Visual Basic code that you must write. Now let's look at a simple string parsing technique to write the three parameters to a message body. You can simply create a long string by concatenating your parameters and using a character such as a semicolon (;) to delimit each one. This string can be easily written to and read from the message body. The only tricky part is writing the code to pack and unpack the string. 
Let's begin by packing the string:

Function PackMessage1(ByVal Customer As String, _
                      ByVal Product As String, _
                      ByVal Quantity As Long) As String
    PackMessage1 = Customer & ";" & Product & ";" & CStr(Quantity)
End Function

The PackMessage1 method takes three parameters and embeds them in a single VBA string. The embedded semicolons are used by the receiving code to unpack the string. The sending application can now use PackMessage1 to pack up a message and send it on its way:

Dim MsgBody As String
MsgBody = PackMessage1("Bob", "Ant", 100)
msg.Body = MsgBody
msg.Send q

On the receiving side, you must provide the code to unpack the string. The following UnpackMessage1 method walks the string and pulls out the packed parameter values one by one:

Private Sub UnpackMessage1(ByVal MsgBody As String, _
                           ByRef Customer As String, _
                           ByRef Product As String, _
                           ByRef Quantity As Long)
    Dim StartPosition As Integer, Delimiter As Integer
    StartPosition = 1
    Delimiter = InStr(StartPosition, MsgBody, ";")
    Customer = Mid(MsgBody, StartPosition, Delimiter - StartPosition)
    StartPosition = Delimiter + 1
    Delimiter = InStr(StartPosition, MsgBody, ";")
    Product = Mid(MsgBody, StartPosition, Delimiter - StartPosition)
    StartPosition = Delimiter + 1
    Quantity = CLng(Mid(MsgBody, StartPosition, Len(MsgBody) - Delimiter))
End Sub

Now that you have the code to unpack the string, the rest is fairly straightforward. You can receive or peek at a message and extract the request parameters from the body. Here's an example of using the UnpackMessage1 method in the receiving application:

Set msg = q.Receive()
Dim PackedMsg As String
PackedMsg = msg.Body

Dim Customer As String, Product As String, Quantity As Long
UnpackMessage1 PackedMsg, Customer, Product, Quantity
' Customer, Product, and Quantity are now populated.

Parsing strings offers much higher productivity than using a Byte array directly. While the code might be tedious to write, it usually isn't very complicated.
It's also much easier than working with Byte arrays. However, Visual Basic 6 has a few new options that you should consider before you decide how to pack and unpack your parameters. In the following sections, I'll present two other Visual Basic 6 techniques that offer higher levels of productivity than this parsing technique.

PropertyBag objects aren't new with Visual Basic 6. You might have used them if you programmed ActiveX controls with version 5. However, Visual Basic 6 is the first version that allows you to create PropertyBag objects with the New operator. This means you can create a stand-alone PropertyBag object to pack and unpack your parameters. A PropertyBag object is useful because it can automate most of the tedious work of packing and unpacking your parameterized information. Each PropertyBag object has a Contents property, which represents a Byte array. You can write named values into this Byte array using the WriteProperty method. Once you write all your parameters into a PropertyBag object, you can use the Contents property to serialize the Byte array into the message body:

Function PackMessage2(ByVal Customer As String, _
                      ByVal Product As String, _
                      ByVal Quantity As Long) As Byte()
    Dim PropBag As PropertyBag
    Set PropBag = New PropertyBag
    PropBag.WriteProperty "Customer", Customer
    PropBag.WriteProperty "Product", Product
    PropBag.WriteProperty "Quantity", Quantity
    PackMessage2 = PropBag.Contents
End Function

This method takes three parameter values and returns a Byte array. (Note that Visual Basic 6 can use the array type as a method return value.) The PropertyBag object writes your named values into a stream of bytes using its own proprietary algorithm.
You can use the PackMessage2 method in the sender application to pack a message body, like this:

Dim msg As MSMQMessage
Set msg = New MSMQMessage
msg.Body = PackMessage2("Bob", "Ant", 100)
msg.Send q

Once you pack up a Byte array in the sender application, you need a second PropertyBag object on the receiving side to unpack it. The UnpackMessage2 method unpacks the message using the ReadProperty method of the PropertyBag object:

Sub UnpackMessage2(ByRef MsgBody() As Byte, _
                   ByRef Customer As String, _
                   ByRef Product As String, _
                   ByRef Quantity As Long)
    Dim PropBag As PropertyBag
    Set PropBag = New PropertyBag
    PropBag.Contents = MsgBody
    Customer = PropBag.ReadProperty("Customer")
    Product = PropBag.ReadProperty("Product")
    Quantity = PropBag.ReadProperty("Quantity")
End Sub

Now you can use the UnpackMessage2 method in the receiver application to unpack the message:

Set msg = q.Receive()
Dim Customer As String, Product As String, Quantity As Long
UnpackMessage2 msg.Body, Customer, Product, Quantity
' Customer, Product, and Quantity are now populated.

As you can see, the PropertyBag object makes your life much easier because it packs and unpacks your parameters for you. This technique does carry some overhead compared to the string parsing technique, however. The PropertyBag object writes proprietary header information into the Byte array in addition to the name of each property. To give you an idea of how much overhead is involved, let's compare the two code examples above. The code for the string parsing technique created a message body 22 bytes long, and the PropertyBag technique created a message body 116 bytes long. The overhead of the PropertyBag technique depends on the size of the parameters being passed. The overhead becomes less noticeable as your parameter values become larger. Also keep in mind that the header information for each MSMQ message is quite large itself. An MSMQ message header typically contains 136 bytes or more no matter how big the body is.
You must weigh the trade-offs between productivity and efficiency. The last technique for passing parameterized information in a message body is perhaps the most exciting. MSMQ lets you read and write entire objects to the message body. However, the object must belong to a certain category. MSMQ can serialize the properties of an object into and out of a message body if the object implements either IPersistStream or IPersistStorage. These are two standard COM interfaces that derive from IPersist. The interface definitions for IPersistStream and IPersistStorage contain parameters that are incompatible with Visual Basic. You can't implement these interfaces in a straightforward manner using the Implements keyword. Fortunately, Visual Basic 6 has added persistable classes. When you create a persistable class, Visual Basic automatically implements IPersistStream behind the scenes. Persistable classes let you read and write objects in and out of the message body directly. Every public class in an ActiveX DLL and ActiveX EXE project has a Persistable property. You must set this property to Persistable at design time to make a persistent class. When you make a class persistent, the Visual Basic IDE lets you add a ReadProperties and a WriteProperties method to the class module. You can add the skeletons for these two methods using the wizard bar. (The wizard bar consists of two combo boxes at the top of the class module window.) You can also add the InitProperties method, although it isn't required when you use MSMQ. You can use the ReadProperties and WriteProperties methods to read your properties to an internal PropertyBag object. Visual Basic creates this PropertyBag object for you behind the scenes and uses it to implement IPersistStream. Remember, your object must implement IPersistStream in order for MSMQ to write it to a message body. 
When MSMQ calls the methods in the IPersistStream interface, Visual Basic simply forwards these calls to your implementations of ReadProperties and WriteProperties. Using persistable classes with MSMQ is a lot easier than it sounds. For example, you can create a new persistable class and add the properties you want to pack into the message body. Next you provide an implementation of ReadProperties and WriteProperties. Here's a Visual Basic class module that does this:

' COrderRequest: a persistable class.
Public Customer As String
Public Product As String
Public Quantity As Long

Private Sub Class_ReadProperties(PropBag As PropertyBag)
    Customer = PropBag.ReadProperty("Customer", "")
    Product = PropBag.ReadProperty("Product", "")
    Quantity = PropBag.ReadProperty("Quantity", 0)
End Sub

Private Sub Class_WriteProperties(PropBag As PropertyBag)
    PropBag.WriteProperty "Customer", Customer
    PropBag.WriteProperty "Product", Product
    PropBag.WriteProperty "Quantity", Quantity
End Sub

As you can see, it's pretty easy. Once you have a persistable class like the one shown above, you can pack it into a message body, like this:

Dim msg As MSMQMessage
Set msg = New MSMQMessage

' Create and prepare the object.
Dim Order As COrderRequest
Set Order = New COrderRequest
Order.Customer = txtPCS1
Order.Product = txtPCS2
Order.Quantity = txtPCS3

' Assign the object to the message body.
' Your WriteProperties implementation is called.
msg.Body = Order
msg.Send q

When you assign an object to the message body, MSMQ performs a QueryInterface on the object to see whether it supports either IPersistStream or IPersistStorage. Since your object supports IPersistStream, MSMQ knows that it can call a method on this interface named Save. Visual Basic forwards the call to Save to your implementation of WriteProperties. You write your parameters into the PropertyBag, and these values are automatically copied into the message body as an array of bytes. In the receiver applications, things are just as easy.
You can rehydrate a persistent object from a message body by creating a new reference and assigning the message body to it:

Set msg = q.Receive(ReceiveTimeOut:=0)

Dim Order As COrderRequest
Set Order = msg.Body

Dim Customer As String, Product As String, Quantity As Long
Customer = Order.Customer
Product = Order.Product
Quantity = Order.Quantity

When you assign a message body to a reference using the Set keyword, MSMQ creates a new instance of the object and calls the Load method of IPersistStream. Visual Basic forwards this call to your implementation of ReadProperties. Once again, you use the PropertyBag object to extract your data.

You should keep a few things in mind when you use persistable classes with MSMQ. First, this parameter-packing technique uses a bit more overhead than the technique using a stand-alone PropertyBag object, and it uses considerably more overhead than the string parsing technique. Second, you should create your persistable classes in ActiveX DLLs so that every application that sends and receives messages can leverage the same code.

One last thing to note is that you can use persistable classes with MSMQ only after you have installed Windows NT Service Pack 4. Earlier versions of MSMQ aren't compatible with Visual Basic's implementation of IPersistStream. In particular, your code will fail when you try to assign an object created from a persistable class to an MSMQ message body. This means you must install Windows NT Service Pack 4 (or later) on all your production machines as well as your development workstations when you start working with persistable classes.
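The pack-and-rehydrate round trip that IPersistStream provides isn't specific to Visual Basic or MSMQ. As a rough sketch of the same idea in Java — a hypothetical OrderRequest class and an in-memory byte array standing in for the message body, with DataOutputStream playing the role of the PropertyBag — the sender serializes named properties in a fixed order and the receiver reads them back in the same order:

```java
import java.io.*;

// Hypothetical order class; writeTo/readFrom play the roles of
// WriteProperties/ReadProperties from the Visual Basic example.
class OrderRequest {
    String customer;
    String product;
    long quantity;

    // "WriteProperties": serialize the properties into a byte array,
    // which stands in for the message body.
    byte[] writeTo() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DataOutputStream out = new DataOutputStream(buf)) {
            out.writeUTF(customer);
            out.writeUTF(product);
            out.writeLong(quantity);
        }
        return buf.toByteArray();
    }

    // "ReadProperties": rehydrate a new instance from the body bytes.
    static OrderRequest readFrom(byte[] body) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(body));
        OrderRequest o = new OrderRequest();
        o.customer = in.readUTF();
        o.product = in.readUTF();
        o.quantity = in.readLong();
        return o;
    }
}

public class Main {
    public static void main(String[] args) throws IOException {
        OrderRequest order = new OrderRequest();
        order.customer = "Bob";
        order.product = "Widget";
        order.quantity = 3;

        byte[] body = order.writeTo();                    // sender side
        OrderRequest copy = OrderRequest.readFrom(body);  // receiver side

        System.out.println(copy.customer + "," + copy.product + "," + copy.quantity);
    }
}
```

This is only an illustration of the serialize/rehydrate pattern; the class name, field names and byte layout are all invented for the example.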
https://flylib.com/books/en/4.220.1.65/1/
Introduction

The Reactive-Streams initiative becomes more and more known in concurrency/parallelism circles and there appear to be several implementations of the specification, most notably Akka-Streams, Project Reactor and RxJava 2.0. In this blog post, I'm going to look at how one can use each library to build up a couple of simple flows of values and, while I'm at it, benchmark them with JMH. For comparison and sanity checking, I'll also include the results of RxJava 1.0.14, Java and j.u.c.Stream.

In this part, I'm going to compare the synchronous behavior of the 4 libraries through the following tasks:

- Observe a range of integers from 1 to (1, 1000, 1.000.000) directly.
- Apply flatMap to the range of integers (1) and transform each value into a single value sequence.
- Apply flatMap to the range of integers (1) and transform each value into a range of two elements.

The runtime environment:

- Gradle 2.8
- JMH 1.11.1
- Threads: 1
- Forks: 1
- Mode: Throughput
- Unit: ops/s
- Warmup: 5, 1s each
- Iterations: 5, 2s each
- i7 4790 @ 3.5GHz stock settings CPU
- 16GB DDR3 @ 1600MHz stock RAM
- Windows 7 x64
- Java 8 update 66 x64

RxJava

Let's start with the implementation of the tasks in RxJava. First, one has to include the library within the build.gradle file.

For RxJava 1.x:

compile 'io.reactivex:rxjava:1.0.14'

For RxJava 2.x:

repositories {
    mavenCentral()
    maven { url '' }
}

compile 'io.reactivex:rxjava:2.0.0-DP0-SNAPSHOT'

Unfortunately, one can't really have multiple versions of the same ArtifactID so either we swap the compile ref or switch to my RxJava 2.x backport, which is under a different name and different package naming:

compile 'com.github.akarnokd:rxjava2-backport:2.0.0-RC1'

Once the libs are set up, let's see the flows (note JMH's parameter annotation is @Param):

@Param({ "1", "1000", "1000000" })
int times;

//...
Observable<Integer> range = Observable.range(1, times);

Observable<Integer> rangeFlatMapJust = range
    .flatMap(Observable::just);

Observable<Integer> rangeFlatMapRange = range
    .flatMap(v -> Observable.range(v, 2));

The code looks the same for both versions, only the imports have to be changed. Nothing complicated. Observation of the streams will generally be performed via the LatchedObserver instance which extends/implements Observer and will be reused for the other libraries as well:

public class LatchedObserver<T> extends Observer<T> {
    public CountDownLatch latch = new CountDownLatch(1);

    private final Blackhole bh;

    public LatchedObserver(Blackhole bh) {
        this.bh = bh;
    }

    @Override
    public void onComplete() {
        latch.countDown();
    }

    @Override
    public void onError(Throwable e) {
        latch.countDown();
    }

    @Override
    public void onNext(T t) {
        bh.consume(t);
    }
}

Since these flows are synchronous, we won't utilize the latch itself but simply subscribe to the flows:

@Benchmark
public void range(Blackhole bh) {
    range.subscribe(new LatchedObserver<Integer>(bh));
}

Let's run it for both 1.x and 2.x and see the benchmark results:

This is a screenshot of my JMH comparison tool; it can display colored comparisons of throughput values: green is better than the baseline, red is worse. Lighter color means at least +/- 3%, stronger color means +/- 10% difference. Here and in all the subsequent images, a larger number is better. You may want to multiply the times with the measured value to get the number of events transmitted. Here, Range with times = 1000000 means that there were ~253 million numbers emitted.

It appears RxJava 2.x can do quite the numbers better, except in the two RangeFlatMapJust cases. What's going on? Let me explain.

The improvements come from the fact that RxJava 2.x has generally less subscribe() overhead than 1.x.
In 1.x, when one creates a Subscriber, it will be wrapped into a SafeSubscriber instance and when the Producer is set on it, there is a small arbitration happening inside setProducer(). As far as I can tell, the JIT in 1.x will do its best to remove the allocation and the synchronization, but the arbitration won't be removed, which means more instructions for the CPU to execute. In contrast, in 2.x there is no wrapping and no arbitration at all.

Edit: (wrong explanation before) The lower performance comes from the serialization approaches the two versions use: 1.x uses the synchronized-based emitter-loop and 2.x uses the atomics-based queue-drain approach. The former is elided by the JIT whereas the latter can't be, and there is always a ~17 ns overhead per value. I'm planning a performance overhaul for 2.x anyways so this won't remain the case for too long.

In conclusion, I think RxJava does a good job both in terms of usability and performance. Why am I mentioning usability? Read on.

Project Reactor

Project Reactor is another library that supports the Reactive-Streams specification and provides a similar fluent API as RxJava. I've briefly benchmarked one of its earlier versions (2.0.5-RELEASE) and posted a picture of it, but I'm going to use the latest snapshot of it. For this, we need to adjust our build.gradle file.

repositories {
    mavenCentral()
    maven { url '' }
}

compile 'io.projectreactor:reactor-stream:2.1.0.BUILD-SNAPSHOT'

This should make sure I'm using a version with the most performance enhancements possible. The source code for the flows looks quite similar:

Stream<Integer> range = Streams.range(1, times);

Stream<Integer> rangeFlatMapJust = range.flatMap(Streams::just);

Stream<Integer> rangeFlatMapRange = range
    .flatMap(v -> Streams.range(v, 2));

A small note on the Streams.range() here. It appears the API has changed between 2.0.5 and the snapshot.
In 2.0.5, the operator's parameters were start+end (both inclusive), which is now changed to start+count and thus matches RxJava's range(). The same LatchedObserver can be used here, so let's see the run results:

Here, reactor2 stands for the 2.1.0 snapshot and reactor1 is the 2.0.5 release. Clearly, Reactor has improved its performance by reducing the overhead in the operators (by a factor of ~10). There is, however, a curious result with RangeFlatMapJust, similar to RxJava: both RxJava 1.x and Reactor 2.1.0 outperform RxJava 2.x and by roughly the same amount! What's happening there?

I know that flatMap in RxJava 1.x is faster in single-threaded use because it uses the emitter-loop approach (which utilizes synchronized), which can be nicely elided by the JIT compiler and thus the overhead is removed. In 2.x, the code currently uses queue-drain with 2 unavoidable atomic operations per value on the fast path. So let's find out what Reactor does.

Its flatMap is implemented in the FlatMapOperator class and what do I see? It's almost the same as RxJava 2.x flatMap! Even the bugs are the same! Just kidding about the bugs. There are a few differences, so let's check the same fast-path and why it can do 4-8 million values more.

The doNext() looks functionally identical: if the source is a Supplier, it gets the held value directly without subscription, then tries to emit it via tryEmit().

Potential bug: If this path crashes and goes into reportError(), the execution falls through and the Publisher gets subscribed to.

Potential bug: In RxJava 2.0, we always wrap user-supplied functions into try-catches so an exception from them is handled in-place. In Reactor's implementation, this is missing from doNext (but may be present somewhere else up in the call chain).

The tryEmit() is almost the same as well, with a crucial difference: it batches up requests instead of requesting one-by-one. Interesting!
if (maxConcurrency != Integer.MAX_VALUE && !cancelled
        && ++lastRequest == limit) {
    lastRequest = 0;
    subscription.request(limit);
}

The same re-batching happens with the inner subscribers in both implementations (although this doesn't come into play in the given flow example). Nice work Project Reactor!

In the RangeFlatMapRange case, which doesn't exercise this fast path, Reactor is slower although it uses the same flatMap logic. The answer is a few lines above in the results: Reactor's range produces 100 million values less per second. Following the references along, there are a bunch of wrappers and generalizations, but those only apply once per Subscriber so they can't be the cause for the times = 1000000 case. The reason appears to be that range() is implemented like RxJava 2.x's generator (i.e., SyncOnSubscribe). The ForEachBiConsumer looks tidy enough but I can spot a few potential deficiencies:

- Atomic read and increment is involved which forces the JIT'd code to re-read the instance fields from cache instead of keeping it in a register. The requestConsumer could be read into a local variable before the loop.
- Use == or != as much as possible because the other kind of comparisons appear to be slower on x86.
- The atomic decrement is an expensive operation (~10ns) but can be delayed quite a bit: once the current known requested amount runs out, one should try to read the requested amount first to see if there were more requests issued in the mean time. If so, keep emitting, otherwise subtract all that has been emitted from the request count.

RxJava's range doesn't do this latter at the moment; HotSpot's register allocator seems to be hectic at times: too many local variables and performance drops because of register spill (on x64!). Implementing this latter optimization involves more local variables and thus the risk of making things worse.
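The third bullet — delaying the atomic decrement until the locally-known demand runs out — can be sketched as follows. This is my own minimal, single-threaded illustration of the accounting pattern, not Reactor's or RxJava's actual range code:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicLong;

public class Main {
    static class Range {
        final int start, count;
        final AtomicLong requested = new AtomicLong();
        final List<Integer> out = new ArrayList<>();
        int index;

        Range(int start, int count) { this.start = start; this.count = count; }

        void request(long n) {
            // Only the 0 -> n transition starts the drain; other callers
            // just add to the counter (simplified, single-threaded here).
            if (requested.getAndAdd(n) == 0) {
                drain();
            }
        }

        void drain() {
            long r = requested.get();   // read demand into a local once
            long e = 0;                 // emitted since last reconciliation
            for (;;) {
                while (e != r && index != count) {
                    out.add(start + index++);
                    e++;
                }
                if (index == count) return;          // all values emitted
                r = requested.get();                 // new requests meanwhile?
                if (e == r) {                        // no: reconcile once
                    r = requested.addAndGet(-e);     // single atomic subtract
                    if (r == 0) return;              // no outstanding demand
                    e = 0;
                }
            }
        }
    }

    public static void main(String[] args) {
        Range range = new Range(1, 10);
        range.request(3);   // emits 1..3 with one atomic subtract
        range.request(7);   // emits the rest
        System.out.println(range.out);
    }
}
```

The point is that requested.addAndGet(-e) runs once per batch of emissions instead of once per value.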
In conclusion, Project Reactor gets better and better with each release, especially when it adopts RxJava 2.x structures and algorithms ;)

Akka-Streams

I believe Akka-Streams was the most advertised library from the list. With a company behind it and a port from Scala, what could go wrong? So let's include it in the build.gradle:

compile 'com.typesafe.akka:akka-stream-experimental_2.11:1.0'

So far so good, but where do I start? Looking at the web, I came across a ton of examples, in Scala. Unfortunately, I don't know Scala enough so it was difficult for me to figure out what to use. Plus, it doesn't help that with Eclipse, the source code of the library is hard to navigate because it's in Scala (and I don't want to install the plugin). Okay, we won't look at the source code.

It turns out Akka-Streams doesn't have a range operator, therefore I have to prepopulate a List with the values and use it as a source:

List<Integer> values = rx2Range
    .toList().toBlocking().first();

Source.from(values).???

A good thing RxJava is around. Akka-Streams uses the Source object as a factory for creating sources. However, Source does not implement Publisher at all! One does not simply observe a Source. After digging a bit, I found an example which shows one has to use runWith that takes a Sink.publisher() parameter. Let's apply them:

Publisher<Integer> range = Source
    .from(values).runWith(Sink.publisher());

Doesn't work; the example was out of date and one needs a Materializer in runWith. Looking at the hierarchy, ActorMaterializer does implement it, so let's get one.

ActorMaterializer materializer = ActorMaterializer
    .create(???);

Publisher<Integer> range = Source.from(values)
    .runWith(Sink.publisher(), materializer);

Hmm, it requires an ActorRefFactory. But hey, I remember the examples creating an ActorSystem, so let's do that.
ActorSystem actorSystem = ActorSystem.create("sys");

ActorMaterializer materializer = ActorMaterializer
    .create(actorSystem);

Publisher<Integer> range = Source.from(values)
    .runWith(Sink.publisher(), materializer);

Finally, no more dependencies. Let's run it! Doesn't work, crashes with missing configuration for akka.stream. Huh?

After spending some time figuring out things, it appears Akka defaults to a reference.conf file in the classpath's root. But both jars of the library have this reference.conf! As it turns out, when the Gradle-JMH plugin packages up the benchmark jar, it puts both reference.conf files into the jar and both of them end up in there under the same name; Akka then picks up the wrong one. The solution: pull the one from the streams jar out and put it under a different name into the Gradle sources/resources.

Sidenote: this is still not enough as by default Gradle ignores non-java files, especially if they are not under src/main/java. I had to add the following code to build.gradle to make it work:

processResources {
    from ('src/main/java') {
        include '**/*.conf'
    }
}

Config cfg = ConfigFactory.parseResources(
    ReactiveStreamsImpls.class, "/akka-streams.conf");

ActorSystem actorSystem = ActorSystem.create("sys", cfg);

ActorMaterializer materializer = ActorMaterializer
    .create(actorSystem);

List<Integer> values = rx2Range
    .toList().toBlocking().first();

Publisher<Integer> range = Source.from(values)
    .runWith(Sink.publisher(), materializer);

Compiles? Yes! Benchmark jar contains everything? Yes! The setup runs? Yes! Benchmark method works? No?!

After one iteration, it throws an error because the range Publisher can't be subscribed to more than once. I've asked for solutions on StackOverflow to no avail; whatever I've got back either didn't compile or didn't run. At this point, I just gave up on it and used a trick to make it work multiple times: defer().
I have to defer the creation of the whole Publisher so I get something fresh every time:

Publisher<Integer> range = s -> Source.from(values)
    .runWith(Sink.publisher(), materializer).subscribe(s);

In addition, as I suspected, there is no way to run Akka-Streams synchronously, therefore any benchmark with the other synchronous guys can't be directly compared. Plus, I have to use the CountDownLatch to await the termination:

@Benchmark
public void akRange(Blackhole bh) throws InterruptedException {
    LatchedObserver<Integer> lo = new LatchedObserver<>(bh);
    akRange.subscribe(lo);
    if (times == 1) {
        while (lo.latch.getCount() != 0);
    } else {
        lo.latch.await();
    }
}

Note: I have to use a spin-loop over the latch for times == 1 because Windows' timer resolution and wakeup takes pretty long milliseconds to happen at times and, without spinning, the benchmark produces 35% lower throughput.

Almost ready, we still need the RangeFlatMapJust and RangeFlatMapRange equivalents. Unfortunately, Akka-Streams doesn't have flatMap but has a flatten method on Source. No problem (by now):

Publisher<Integer> rangeFlatMapJust = s -> Source.from(values)
    .map(v -> Source.single(v))
    .flatten(FlattenStrategy.merge())
    .runWith(Sink.publisher(), materializer)
    .subscribe(s);

Nope. Doesn't work because there is no FlattenStrategy.merge(), despite all the examples. But there is a FlattenStrategy.concat(). Have to do. Nope, still doesn't compile because of type inference problems. Have to introduce a local variable:

FlattenStrategy<Source<Integer, BoxedUnit>> flatten =
    FlattenStrategy.concat();

Works in Eclipse, javac fails with ambiguity error. As it turns out, javadsl.FlattenStrategy extends scaladsl.FlattenStrategy, which both have the same concat() factory method but different numbers of type arguments. This isn't the first time javac can't disambiguate but Eclipse can!
We don't give up and use reflection to get the proper method called:

Method m = akka.stream.javadsl.FlattenStrategy
    .class.getMethod("concat");

@SuppressWarnings({ "rawtypes", "unchecked" })
FlattenStrategy<Source<Integer, BoxedUnit>, Integer> flatten =
    (FlattenStrategy) m.invoke(null);

Publisher<Integer> rangeFlatMapJust = s -> Source.from(values)
    .map(v -> Source.single(v))
    .flatten(flatten)
    .runWith(Sink.publisher(), materializer)
    .subscribe(s);

Finally, Akka-Streams works. Let's see the benchmark results:

Remember, since Akka can't run synchronously and we had to do a bunch of workarounds, we should expect the numbers to be lower by a factor of 5-10. I don't know what's going on here. Some numbers are 100x lower. Akka certainly doesn't throw an Exception somewhere because we'd see 5M ops/s in those cases, regardless of times.

In conclusion, I'm disappointed with Akka-Streams; it takes quite a hassle to get a simple sequence running and apparently requires more thought to reach reasonable performance.

Plain Java and j.u.c.Stream

Just for reference, let's see how the same task looks and works with plain Java for loops and j.u.c.Stream.
For plain Java, the benchmarks look simple:

@Benchmark
public void javaRange(Blackhole bh) {
    int n = times;
    for (int i = 0; i < n; i++) {
        bh.consume(i);
    }
}

@Benchmark
public void javaRangeFlatMapJust(Blackhole bh) {
    int n = times;
    for (int i = 0; i < n; i++) {
        for (int j = i; j < i + 1; j++) {
            bh.consume(j);
        }
    }
}

@Benchmark
public void javaRangeFlatMapRange(Blackhole bh) {
    int n = times;
    for (int i = 0; i < n; i++) {
        for (int j = i; j < i + 2; j++) {
            bh.consume(j);
        }
    }
}

The Stream implementation is a bit complicated because a j.u.c.Stream is not reusable and has to be recreated every time one wants to consume it:

@Benchmark
public void streamRange(Blackhole bh) {
    values.stream().forEach(bh::consume);
}

@Benchmark
public void streamRangeFlatMapJust(Blackhole bh) {
    values.stream()
        .flatMap(v -> Collections.singletonList(v).stream())
        .forEach(bh::consume);
}

@Benchmark
public void streamRangeFlatMapRange(Blackhole bh) {
    values.stream()
        .flatMap(v -> Arrays.asList(v, v + 1).stream())
        .forEach(bh::consume);
}

Finally, just for fun, let's do a parallel version of the stream benchmarks:

@Benchmark
public void pstreamRange(Blackhole bh) {
    values.parallelStream().forEach(bh::consume);
}

@Benchmark
public void pstreamRangeFlatMapJust(Blackhole bh) {
    values.parallelStream()
        .flatMap(v -> Collections.singletonList(v).stream())
        .forEach(bh::consume);
}

@Benchmark
public void pstreamRangeFlatMapRange(Blackhole bh) {
    values.parallelStream()
        .flatMap(v -> Arrays.asList(v, v + 1).stream())
        .forEach(bh::consume);
}

Great! Let's see the results:

Impressive, except for some parallel cases where the forEach synchronizes all parallel operations back to a single thread, I presume, negating all benefits. In conclusion, if you have a synchronous task, try plain Java first.

Conclusion

In this blog post, I've compared the three Reactive-Streams libraries for usability and performance in the case of a synchronous flow.
Both RxJava and Reactor did quite well, relative to Java, but Akka-Streams was quite complicated to set up and didn't perform adequately "out of the box". However, there might be some remedy for Akka-Streams in the next part, where I compare the libraries in asynchronous mode.
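As a closing illustration, here is a much-simplified sketch of the two serialization strategies discussed earlier — the synchronized-based emitter loop versus the atomics-based queue-drain. This is my own didactic code, not taken from RxJava or Reactor; both variants deliver the same values, they just pay for thread-safety differently:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class Main {
    // Emitter-loop: the first caller becomes the emitter; the (uncontended)
    // monitor can be elided by the JIT in single-threaded use.
    static class EmitterLoop {
        final List<Integer> out = new ArrayList<>();
        boolean emitting;
        List<Integer> pending;

        void onNext(int v) {
            synchronized (this) {
                if (emitting) {                  // someone else is emitting:
                    if (pending == null) pending = new ArrayList<>();
                    pending.add(v);              // leave the value for them
                    return;
                }
                emitting = true;
            }
            out.add(v);                          // emit outside the lock
            for (;;) {
                List<Integer> q;
                synchronized (this) {
                    q = pending;
                    pending = null;
                    if (q == null) { emitting = false; return; }
                }
                out.addAll(q);
            }
        }
    }

    // Queue-drain: every value goes through a lock-free queue plus an
    // atomic increment/decrement pair - these atomics can't be elided.
    static class QueueDrain {
        final List<Integer> out = new ArrayList<>();
        final Queue<Integer> q = new ConcurrentLinkedQueue<>();
        final AtomicInteger wip = new AtomicInteger();

        void onNext(int v) {
            q.offer(v);
            if (wip.getAndIncrement() == 0) {    // became the drainer
                do {
                    out.add(q.poll());
                } while (wip.decrementAndGet() != 0);
            }
        }
    }

    public static void main(String[] args) {
        EmitterLoop e = new EmitterLoop();
        QueueDrain d = new QueueDrain();
        for (int i = 1; i <= 5; i++) { e.onNext(i); d.onNext(i); }
        System.out.println(e.out);
        System.out.println(d.out);
    }
}
```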
http://akarnokd.blogspot.com/2015/10/comparison-of-reactive-streams.html
PythonCard, Python, and opinions on whatever technology I'm dabbling in these days like XML-RPC and SOAP. Do Top Lists have value? Radio has at least three public pages that show top 100 lists. These types of listings are sort of self-fulfilling if users visit them to find sites. Once a blog is listed on a top list, it will likely get additional referral traffic from those pages and then possibly subscriptions and/or bookmarks which helps reinforce the popularity of the blog. When I was still doing City.Net (Excite Travel, now defunct) we made a list of the most popular cities based on web log traffic. After that list was made public on the site there was very little variation to which cities appeared on the list, only slight variations of the rankings of the cities on the list. People were using the popular cities page to navigate directly and appeared to spend less time just browsing or searching for alternative destinations. In Radio blog terms this means that the entrenched sites that already have a lot of blog rolling going on will likely stay in the top 100 and as the Radio community installed base increases, the new blogs will have to play catch up. Perhaps the traffic numbers don't really matter except perhaps as ego enforcement for the most popular sites or a discouragement to new bloggers that expected to suddenly be popular. Or is all this just a form of voyeurism? I have to admit that I find myself drawn to the rankings, which might be due to my former life at a web startup where traffic was everything. Then I have to remind myself that the actual traffic is so small that what the numbers really show is how little readership even the most popular blogs (at least in the rankings above) get compared to entrenched name brand sites. 
When you compare a commercial site that gets a million page views per day, say roughly 700 page views per minute versus a blog getting 700 page views for a whole day (on a good day) it shows that blogs have a long way to go to get the same kind of reach as a name brand publication. Of course, it may be that writers like Dvorak get a very small fraction of the millions or tens of millions of page views that ZD gets each day and that writers for the New York Times have less web readership than something like Scripting News. But I don't have a Top 100 list to point to for the answer. :) SOAP WSDL Verifiers and Analyzers If you're writing client-side code and SOAP messages to talk to a web service described by a WSDL service, you'll probably benefit from seeing how one or more of the sites below parses the WSDL for a given service. If you're using a scripting language such as Python, the display of function calls and argument lists with appropriate namespace and soapaction is quite useful. This is a follow-up to my earlier post about frustrations with WSDL and SOAP interop.
http://radio.weblogs.com/0102677/2002/03/04.html
meta-packages can't be installed on 64-bit Gutsy

Bug Description

Using 64-bit Gutsy, I installed APTonCD via synaptic, and created a CD complete with a meta-package. Upon using the CD to update my secondary machine, which also runs 64-bit Gutsy, I found that while the individual packages are fine, the meta-package produced a "Wrong architecture 'i386'" error, as if it were 32-bit, and cannot be used.

could you please send us the result of
$ uname -m
command in your terminal?

x86_64

can you send these files .disk/info and aptoncd-metapackage stored in your backup dvd? Thank you

The .disk file contains nothing but the following single line of text:
APTonCD for ubuntu gutsy - amd64 (2008-02-18 22:21) CD1

The metapackage is attached. Thanks for your time and help.

I have had the same problem in 8.10 and 9.04: The AptOnCD metapackage works fine with 32-bit machines, but fails for 64-bit machines with the "Wrong architecture 'i386'" error. This is very frustrating, as there doesn't seem to be a workaround -- other than re-installing all the packages by hand.

The title ought to be updated to "meta-packages can't be installed on 64-bit platforms" and bumped up; surely this isn't a complicated fix...

As of 10.04, this is still an unresolved issue... For 64-bit machines, APTonCD is a totally broken and useless program.

Thank you for the report, a patch for that bug is going to be ready until next monday. Sorry about the delay to get it done.

Thank you for the patch. Sadly, as of 10.10, the metapackage has the "Wrong architecture 'i386'" error.

I don't know whether it could be useful, but in order to have the metapackage created with the right arch I modified the file /usr/lib/ /usr/lib/ or $srcdir/ (they are alike). I'm on Ubuntu 11.04 x86_64. Here's an excerpt from attachment (actually I simply added/edited a pair of lines):

@@ -17,6 +17,7 @@
[..]
 import gobject
+from APTonCD.core.utils import SystemInfo

 class MetaPackage(
     """
@@ -24,13 +25,14 @@
     """
[..]
     def __init__(self, filename=""):
+        util = SystemInfo()
         self.fileName = filename
[..]
         self.mtPriority = 'optional'
-        self.mtArch = 'i386'
+        self.mtArch = util.architecture
         self.

It works (at list as far as I tested)

two typos: "from /usr/lib/ should be "from pool/universe/ and "at list" --> "at least"

If it's just a meta-package, they should be creating it with the "all" architecture (or was it "any"?).
https://bugs.launchpad.net/aptoncd/+bug/193140
Why shared code across projects or assemblies?

There are multiple reasons to share code across projects or assemblies:

- Code Reuse: A shared library enables code reusability by placing common code into the library, so the developer does not have to write the same code more than once.
- Multiple front-ends: Multiple web applications can be built on top of the same shared assemblies, all using the same data access layer.
- Separate development: Assemblies can be developed and versioned independently.

There are two ways to share a library in a mixed development ecosystem:

- .NET Standard library: It can be directly shared between applications based on version match.
- Multi-targeting framework: It uses a multi-targeting, cross-compiled library for more than one platform.

Share Library through .NET Standard or concept of PCL

.NET Standard, introduced by Microsoft, provides a common standard environment for APIs across the Microsoft development system; it can be viewed as the successor to Portable Class Libraries (PCLs) and simplifies the business of targeting different platforms. .NET Standard provides curated sets of APIs, whereas PCLs are based on profiles defined by intersecting platform capabilities. Using this, we can create a library that can be referenced by both .NET Framework and .NET Core applications. Ensure that the .NET Standard Library NuGet package is added to any .NET Framework project which wants to reference a .NET Standard library.

There are more than half a dozen versions of .NET Standard and it is not always clear which version to target. Higher versions of .NET Standard provide a wider range of APIs, while lower versions run on a wider range of platforms. .NET Standard provides limited support for the .NET Framework, and each .NET Standard version is supported by a different minimum .NET Framework version.

Example: use .NET Standard 1.2, which is available to both .NET Core and .NET Framework 4.5.2 applications.
Version support below 1.3 is pretty sketchy.

Multi-targeting Framework

The main advantage of .NET Core is that it can be used cross-platform. In a bigger development environment, the .NET Framework cannot be guaranteed to be of recent vintage, so multi-targeting can serve a wide range of consumers with shared libraries. The developer can compile a single project natively for both .NET Standard and .NET Framework, at the cost of multiple sets of compiled output. In Visual Studio 2015 multi-targeting is pretty choppy because of the JSON-based old xproj format. The developer can compile for more than one framework but can't use a direct project reference in a .NET Framework project. This is difficult: you either had to migrate the target project to the xproj structure or distribute the dependency, and neither approach was ideal.

The developer has to manually edit the project file to get it working. Re-loading the project can be one of the solutions after making any manual change to the framework list. Refer to the following example, which shows the adjustment of the TargetFrameworks element to get a project to compile for more than one framework:

<TargetFrameworks>net452;netstandard1.3</TargetFrameworks>

After compiling the project, you will find two outputs in the BIN folder, one for each specified framework, and you can create a reference to the shared assembly for the project built with .NET Framework 4.5.2.

NuGet is a supportive tool for accessing third-party APIs. Using NuGet, a developer can also contribute their own library so that everyone can download, install and use it in their projects.

Let's start with an example. Start by creating a blank solution to put the class library project in. In Visual Studio, a blank solution serves as a container for one or more projects, and you can add related projects to the same solution.

Step 1: Create the blank solution

Open Visual Studio 2019. Open the start window and choose to create a new project.
In the Create new project page, search for the Blank solution template in the search box and then choose Next. Enter DemoLibrary in the Project name box on the Configure your new project page. Then press the Create button.

Step 2: Create a class library project

Add a new .NET class library project named DemoLibrary to the solution.

- Right-click on the solution and select Add -> New Project from Solution Explorer.
- Search for Library in the Add new project page search box. Select C# from the language list, and then select All platforms from the platform list. Select the Class Library template and click on the Next button.
- Enter DemoLibrary in the Project name box on the Configure your new project page and click on the Next button.
- Replace the contents of the default class file with the following code, rename the file to StringLibrary.cs, and save it.

using System;

namespace DemoLibrary
{
    public static class StringLibrary
    {
        public static bool StartsWithLower(this string str)
        {
            if (string.IsNullOrWhiteSpace(str))
                return false;

            char ch = str[0];
            return char.IsLower(ch);
        }
    }
}

The class library DemoLibrary.StringLibrary contains the StartsWithLower method. This method returns a Boolean value indicating whether the string starts with a lowercase character; Char.IsLower() returns true if a character is lowercase.

Select Build -> Build Solution or press CTRL+SHIFT+B to verify that the project compiles without error.

Step 3: Add a console application to the solution

Add a console application named DemoConsole that uses the class library. This application will prompt the user to enter a string and report whether it starts with a lowercase character.

- Add a .NET console application named DemoConsole to the solution. Right-click on the solution and select Add -> New Project from Solution Explorer.
- Search for Console in the Add new project page search box. Select C# from the language list, and then select All platforms from the platform list. Select the Console Application template and click on the Next button.
Enter DemoConsole in the Project name box on the Configure your new project page and click on the Next button.

Replace Program.cs with the following code:

using System;
using DemoLibrary;

namespace DemoConsole
{
    class Program
    {
        static void Main(string[] args)
        {
            int row = 0;
            do
            {
                if (row == 0 || row >= 25)
                    ResetConsole();

                string input = Console.ReadLine();
                if (string.IsNullOrEmpty(input))
                    break;

                Console.WriteLine($"Input: {input} {"Begins with lowercase? ",30}: " +
                                  $"{(input.StartsWithLower() ? "Yes" : "No")}\n");
                row += 3;
            } while (true);
            return;

            // Declare a ResetConsole local method
            void ResetConsole()
            {
                if (row > 0)
                {
                    Console.WriteLine("Press any key to continue...");
                    Console.ReadKey();
                }
                Console.Clear();
                Console.WriteLine("\nPress <Enter> only to exit; otherwise, enter a string and press <Enter>:\n");
                row = 3;
            }
        }
    }
}

The row variable counts the rows written to the console; when its value reaches 25, the code clears the console.

Step 4: Add a project reference

The console application cannot yet access the class library. To access the methods of the library, create a project reference to the class library project.

Right-click on the DemoConsole project's Dependencies node and select Add project reference in Solution Explorer. Select the DemoLibrary project and click OK in the Reference Manager.

Step 5: Run the application

Set the DemoConsole application as the startup project from Solution Explorer, then run the application by pressing CTRL+F5.

Conclusion

In this blog, we have discussed how to create a shared library in .NET Core, which lets developers reuse code as a library across projects, and we have walked through a complete example.
https://www.ifourtechnolab.com/blog/how-to-create-a-shared-library-in-net-core
/* Tree-dumping functionality for intermediate representation.
   Copyright (C) 1999, 2000, 2003 Free Software Foundation, Inc.  */

#ifndef GCC_TREE_DUMP_H
#define GCC_TREE_DUMP_H

#include "splay-tree.h"
#include "tree-pass.h"

typedef struct dump_info *dump_info_p;

/* Flags used with queue functions.  */
#define DUMP_NONE     0
#define DUMP_BINFO    1

/* Information about a node to be dumped.  */

typedef struct dump_node_info
{
  /* The index for the node.  */
  unsigned int index;
  /* Nonzero if the node is a binfo.  */
  unsigned int binfo_p : 1;
} *dump_node_info_p;

/* A dump_queue is a link in the queue of things to be dumped.  */

typedef struct dump_queue
{
  /* The queued tree node.  */
  splay_tree_node node;
  /* The next node in the queue.  */
  struct dump_queue *next;
} *dump_queue_p;

/* A dump_info gives information about how we should perform the dump
   and about the current state of the dump.  */

struct dump_info
{
  /* The stream on which to dump the information.  */
  FILE *stream;
  /* The original node.  */
  tree node;
  /* User flags.  */
  int flags;
  /* The next unused node index.  */
  unsigned int index;
  /* The next column.  */
  unsigned int column;
  /* The first node in the queue of nodes to be written out.  */
  dump_queue_p queue;
  /* The last node in the queue.  */
  dump_queue_p queue_end;
  /* Free queue nodes.  */
  dump_queue_p free_list;
  /* The tree nodes which we have already written out.  The
     keys are the addresses of the nodes; the values are the integer
     indices we assigned them.  */
  splay_tree nodes;
};

/* Dump the CHILD and its children.  */
#define dump_child(field, child) \
  queue_and_dump_index (di, field, child, DUMP_NONE)

extern void dump_pointer (dump_info_p, const char *, void *);
extern void dump_int (dump_info_p, const char *, int);
extern void dump_string (dump_info_p, const char *);
extern void dump_string_field (dump_info_p, const char *, const char *);
extern void dump_stmt (dump_info_p, tree);
extern void queue_and_dump_index (dump_info_p, const char *, tree, int);
extern void queue_and_dump_type (dump_info_p, tree);
extern void dump_function (enum tree_dump_index, tree);
extern void dump_function_to_file (tree, FILE *, int);
extern void debug_function (tree, int);
extern int dump_flag (dump_info_p, int, tree);

extern unsigned int dump_register (const char *, const char *, const char *,
                                   int, int);

#endif /* ! GCC_TREE_DUMP_H */
http://opensource.apple.com/source/libstdcxx/libstdcxx-39/libstdcxx/gcc/tree-dump.h
bool flag = false;

void loop() {
  if ( flag == false ) {
    for ( int pos = 180; pos >= 1; pos -= 5 ) {
      flag = !flag;
      Serial.print("pos = ");
      Serial.print(pos);
      Serial.print(", flag = ");
      Serial.println(flag);
    }
  }
}

void setup() {
  Serial.begin(9600);
}

#include <Servo.h>

Servo myservo;  // create servo object to control a servo
                // a maximum of eight servo objects can be created

int led = 12;
int button = 2;
int val = 0;
int pos = 0;    // variable to store the servo position
boolean counter = false;

void setup() {
  myservo.attach(9);  // attaches the servo on pin 9 to the servo object
  pinMode(led, OUTPUT);
  pinMode(button, INPUT);
}

void loop() {
  if (counter == false) {
    for (pos = 0; pos < 180; pos += 5)   // goes from 0 degrees to 180 degrees
    {                                    // in steps of 1 degree
      myservo.write(pos);                // tell servo to go to position in variable 'pos'
      delay(15);                         // waits 15ms for the servo to reach the position
    }
    for (pos = 180; pos >= 1; pos -= 5)  // goes from 180 degrees to 0 degrees
    {
      myservo.write(pos);                // tell servo to go to position in variable 'pos'
      delay(15);
    }
    counter = !counter;  // after the first loop, this will lock it out until the Arduino is
                         // manually reset, or reset by a soon-to-be-implemented button.
  }
  // **Add button here**: if the button is pressed, set counter = false and the code will loop again.
}

First, allow me to apologize for my ignorance. There is obviously a very simple thing that I'm just not seeing. To me, it looks as if the loop checks the variable and, if it is false, runs a sequence that changes the variable to true at the end. So, one change. But it's clear I'm mistaken.
http://forum.arduino.cc/index.php?topic=148166.0;prev_next=prev
3 uses for functools.partial in Django2021-05-05 Python’s functools.partial is a great tool that I feel is underused. (If you don’t know what partial is, check out PyDanny’s explainer.) Here are a few ways I’ve used partial in Django projects. 1. Making reusable fields without subclassing It’s common to have field definitions the same across many models, for example a created_at field tracking instance creation time. We can do create such a field like so: from django.db import models class Book(models.Model): created_at = models.DateTimeField( default=timezone.now, help_text="When this instance was created.", ) Copying this between models becomes tedious, and makes changing the definition hard. One solution to this repetition is to use a base model class or mixin. This is fine, but it scatters the definition of a model’s fields, prevents local customization of arguments, and can lead to complex inheritance hierarchies. Another solution is to create a subclass of DateTimeField and add it to every model. This works well but can lead to complications with migrations, as the migration files will refer to the subclass by import and we will need to update them all if we refactor. We can instead use partial to create a “cheap subclass” of the field class. Our models will still directly use DateTimeField, but with some arguments pre-filled. For example: from functools import partial from django.db import models CreatedAtField = partial( models.DateTimeField, default=timezone.now, help_text="When this instance was created.", ) class Book(models.Model): ... created_at = CreatedAtField() class Author(models.Model): ... created_at = CreatedAtField() partial also allows us to replace the pre-filled arguments, so we can override them when needed: class DeletedBook(models.Model): originally_created_at = CreatedAtField( help_text="When the Book instance was originally created.", ) We can also apply this technique to fields in forms, Django REST Framework serializers, etc. 2. 
Creating many upload_to callables for FileFields

When using the model FileField, or subclasses like ImageField, one can pass a callable as the upload_to argument to calculate the destination path. This allows us to arrange the media according to whatever scheme we want. If we are using multiple such fields that share some logic for upload_to, we can use partial to avoid creating many similar functions. For example, if we had a user model that allows uploading two types of image into their respective subfolders:

from functools import partial

from django.db import models


def user_upload_to(instance, filename, category):
    return f"users/{instance.id}/{category}/{filename}"


class User(models.Model):
    ...
    profile_picture = models.ImageField(
        upload_to=partial(user_upload_to, category="profile"),
    )
    background_picture = models.ImageField(
        upload_to=partial(user_upload_to, category="background"),
    )

3. Pre-filling view arguments in URL definitions

Sometimes we might have a single view that changes behaviour based on the URL mapping to it. In such a case we can use partial to set parameters for the view, without creating wrapper view functions. For example, imagine we have a user profile view, for which we are currently working on "version two" functionality:

def profile(request, user_id, v2=False):
    ...
    if v2:
        ...
    return render(request, ...)

We can map two URLs to the profile view function, switching v2 to True for the preview URL:

from functools import partial

from django.urls import path

from example.core.views import public

urlpatterns = [
    ...,
    path("profile/<int:user_id>/", public.profile),
    path("v2/profile/<int:user_id>/", partial(public.profile, v2=True)),
]

Related posts:

- Three more uses for functools.partial() in Django
- A Guide to Python Lambda Functions
- Better Python Decorators with wrapt

Tags: python, django
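As a closing check (my addition, plain Python, no Django required), the pre-fill-and-override behaviour that the first pattern relies on can be demonstrated with a hypothetical stand-in for a field class:

```python
from functools import partial


# Hypothetical stand-in for a Django field class, just to show
# partial's mechanics outside Django.
def field(default=None, help_text=""):
    return {"default": default, "help_text": help_text}


CreatedAtField = partial(field, default="now", help_text="When created.")

print(CreatedAtField())
# Pre-filled keyword arguments can still be overridden per call:
print(CreatedAtField(help_text="Overridden."))
```

Keyword arguments given at call time replace those stored in the partial, which is exactly what lets a model override help_text while keeping the shared default.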
https://adamj.eu/tech/2021/05/05/3-uses-for-functools-partial-in-django/
Question: Why is the method undefined if it's just there?

Details: I have a very simple mailer class:

class ProductMailer < ApplicationMailer
  def sample_email
    mail(to: "me@example.com") # I hardcode my own email just to test
  end
end

And a very simple call from ProductsController:

def sample_email
  ProductMailer.sample_email().deliver_later
  redirect_to @product, notice: 'Email was queued.'
end

The email fails to be sent. I am using Sidekiq to process emails in the background. The Sidekiq Web UI shows failed jobs in the Tries page and I can see why it failed:

NoMethodError: undefined method `sample_email' for ProductMailer:Class

I tried to rename the method and restart the server with rails server, but none of that removes the error. I am not using any namespaces.

Question: Why is the method undefined if it's just there?

Note: I found out by chance that the method is found if I name it notify, but maybe that's because I'm overwriting some method from the ActionMailer base class, I don't know.

It's because you've defined an instance method, and then you try to call it on a class. Change it to def self.sample_email ....

Answer: Restart Sidekiq

I created the mailer class before starting Sidekiq, but I renamed the sample_email method while Sidekiq was already running, so it seems that Sidekiq doesn't recognize new methods on the fly. I renamed the method because I am used to the development environment, where you can change almost anything on the fly...
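For background (my own illustration, not from the thread): in plain Ruby, calling an instance method on the class raises exactly this NoMethodError, and ActionMailer normally papers over the distinction by dispatching class-level calls to a fresh instance. A crude sketch of both halves:

```ruby
# Minimal stand-in, no Rails: an instance method is not a class method.
class ProductMailer
  def sample_email
    "mail"
  end
end

begin
  ProductMailer.sample_email
rescue NoMethodError => e
  puts e.message # prints the "undefined method" error, as in the question
end

# ActionMailer intercepts class-level calls (via method_missing on the
# class) and forwards them to a new instance; a crude sketch:
class SketchMailer
  def self.method_missing(name, *args)
    new.send(name, *args)
  end

  def sample_email
    "mail"
  end
end

puts SketchMailer.sample_email # mail
```

SketchMailer is only a toy: the real ActionMailer returns a deliverable message object rather than calling the method directly, but the class-to-instance forwarding is the part that makes `ProductMailer.sample_email` legal in Rails without `def self.`.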
http://www.devsplanet.com/question/35270172
We will walk through an example that involves training a model to tell whether a wine will be "good" or "bad" based on a training set of wine chemical characteristics. First, we're going to import the packages that we'll be using throughout this notebook. Then we'll bring in the CSV from my desktop. You can get the raw data from UCI's ML Database. We're also using scikit-learn. For more information on installing scikit-learn to use the sklearn packages, visit this website.

In:

# Importing required packages.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb
import numpy as np
%matplotlib inline

# Importing sklearn…

College is obviously expensive, but is it still a wise investment? We've all heard how expensive college is getting, along with plenty of criticism surrounding its value in a changing job market. Of course, there are many benefits beyond the monetary ones that should be considered when exploring college options, but for the purpose of this post I'm going to limit the scope and purely assess the financial benefit of attending college. The main financial benefit of attending college is the earnings differential received by a college graduate over a high school graduate; Payscale provides 20-year return on investment (ROI)…

Using Monte Carlo methods, we'll write a quick simulation to predict future stock price outcomes for Apple ($AAPL) using Python. You can read more about Monte Carlo simulation (in a finance context) here.

1) Pull the data

First, we can import the libraries and pull the historical stock data for Apple. For this example, I picked the last ~10 years, although it would be valuable to test sensitivities of different ranges as this alone is subjective.

# Import required libraries
import math
import matplotlib.pyplot as plt
import numpy as np
from pandas_datareader import data

apple = data.DataReader('AAPL', 'yahoo', start='1/1/2009')
apple.head()

# Next…
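To make the Monte Carlo idea concrete (a simplified sketch of my own, using only the standard library, geometric Brownian motion, and made-up drift/volatility parameters rather than ones estimated from the $AAPL data):

```python
import math
import random


def simulate_path(s0, mu, sigma, days, rng):
    """One simulated final price under geometric Brownian motion."""
    dt = 1 / 252  # one trading day as a fraction of a year
    price = s0
    for _ in range(days):
        shock = rng.gauss(0, 1)
        price *= math.exp((mu - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * shock)
    return price


rng = random.Random(42)  # fixed seed so the run is repeatable
finals = [simulate_path(100.0, 0.05, 0.2, 252, rng) for _ in range(1000)]
mean_final = sum(finals) / len(finals)
print(round(mean_final, 1))
```

In a real version the drift mu and volatility sigma would be estimated from the historical returns pulled above, and the distribution of the 1000 final prices (not just the mean) is what gives the range of predicted outcomes.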
https://gregjamesbrown.medium.com/?source=post_internal_links---------4----------------------------
Slender datastructures in Python for efficient work!

Project description

Slender

Slender provides chainable, type-safe, enhanced datastructures over the well-known built-ins.

- List is an enhanced list with all the functionality of the basic list built-in type, extended with a lot of useful functions.
- Set is a superset of the built-in one; it works like the general set type but does a lot more than that.
- Dictionary is a key-value pair container, like dict, built with heavy functionality.
- Tuple is a finite ordered list over the built-in tuple.

Install

pip install slender

Usage

from slender import List, Set

print(List([1, 2, 3, 4, 5]) \
    .delete_if(lambda x: x % 2 == 0) \
    .map(lambda x: x * 2) \
    .chain(['a', 'b']) \
    .each_with_index() \
    .to_list())
# => [[0, 2], [1, 6], [2, 10], [3, 'a'], [4, 'b']]

print((Set({1, 2, 2, 3, 6, 7, 8}) \
    .subtract(Set({3, 5, 10})) \
    .select(lambda x: x % 2 == 0) \
    << 4 \
    << 5 \
    ).map(lambda x: x * 2))
# => {4, 8, 10, 12, 16}

Documentation

For further information, read the documentation that can be found:

Contribution

- Fork it!
- Make your changes!
- Send a PR!

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
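For comparison (my addition, not part of the package description), the List pipeline above can be reproduced with plain built-ins, which shows exactly what each chained call stands for:

```python
# The List example above, step by step with plain built-ins.
xs = [1, 2, 3, 4, 5]
xs = [x for x in xs if x % 2 != 0]       # delete_if(even) -> [1, 3, 5]
xs = [x * 2 for x in xs]                 # map(*2)         -> [2, 6, 10]
xs = xs + ['a', 'b']                     # chain(['a','b'])
xs = [[i, x] for i, x in enumerate(xs)]  # each_with_index()
print(xs)  # [[0, 2], [1, 6], [2, 10], [3, 'a'], [4, 'b']]
```

The chainable API simply fuses these steps into one fluent expression.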
https://pypi.org/project/slender/2.0.1/
TZSET(3P)                  POSIX Programmer's Manual                 TZSET(3P)

SYNOPSIS
       #include <time.h>

       extern int daylight;
       extern long timezone;
       extern char *tzname[2];

       void tzset(void);

DESCRIPTION
       The tzset() function shall use the value of the environment variable
       TZ to set time conversion information used by ctime(3p),
       localtime(3p), mktime(3p), and strftime(3p).

       If a thread accesses tzname, daylight, or timezone directly while
       another thread is in a call to tzset(), or to any function that is
       required or allowed to set timezone information as if by calling
       tzset(), the behavior is undefined.

RETURN VALUE
       The tzset() function shall not return a value.

ERRORS
       No errors are defined.

       The following sections are informative.

EXAMPLES
       Example TZ variables and their timezone differences are given in the
       table below:

       ┌───────────┬────────────┐
       │ TZ        │  timezone  │
       ├───────────┼────────────┤
       │EST5EDT    │  5*60*60   │
       │GMT0       │  0*60*60   │
       │JST-9      │ -9*60*60   │
       │MET-1MEST  │ -1*60*60   │
       │MST7MDT    │  7*60*60   │
       │PST8PDT    │  8*60*60   │
       └───────────┴────────────┘

APPLICATION USAGE
       Since the ctime(), localtime(), mktime(), strftime(), and
       strftime_l() functions are required to set timezone information as if
       by calling tzset(), there is no need for an explicit tzset() call
       before using these functions. However, portable applications should
       call tzset() explicitly before using ctime_r() or localtime_r()
       because setting timezone information is optional for those functions.

RATIONALE
       None.

FUTURE DIRECTIONS
       None.

SEE ALSO
       ctime(3p), localtime(3p), mktime(3p), strftime(3p)

       The Base Definitions volume of POSIX.1‐2017, Chapter 8, Environment
       Variables

Pages that refer to this page: time.h(0p), daylight(3p), localtime(3p), mktime(3p), strftime(3p), timezone(3p)
https://man7.org/linux/man-pages/man3/tzset.3p.html
This is the mail archive of the libstdc++@sources.redhat.com mailing list for the libstdc++ project. Versions: gcc v2.95.2 as released 1999-10-24 ( dated 1999-10-25) libstdc++ v2.10.0 (per Makefile.in) dated 1999-04-02 (included with gcc) Context: sparc-sun-solaris2.5 and 2.6, Gnu make I have the following problems installing gcc v2.95.2. Since the relevant sources and scripts are in the gcc release, I'm posting to gcc-help, but since the issues appear to be with libstdc++ I'm cross-posting to that list. 1. Having created and changed to a build directory $objdir, I execute $srcdir/configure --prefix /usr/local/gcc/$vers --enable-shared \ --enable-languages=c++,f77,java It configures gcc (and its subdirectories), libiberty and texinfo, but does not configure libio or libstdc++, and the toplevel Makefile does not do those directories. From the source to `configure` I'm not able to figure out the --with switches, contingencies, etc. that would cause those directories to be configured coordinately with gcc. If I'm reading the source correctly, they *should* be recognized. My alternative seems to be to figure out how to suppress namespaces in gcc and to build libio and libstdc++ separately. Can anyone help me get `configure` to behave? 2. As compiled, g++ expects libstdc++ v3, but the distributed package is v2, neither of which got installed. What do I do, get a development snapshot of v3 or jigger the g++ specs file to use v2 by default? Or lie and make a symbolic link from g++-3 to g++-2? Verbose output: the third from last line is what leads me to believe that version 3 is sought. 
[root bamboo /m3/Gnu/gcc-obj 40] g++ -v hello.C /usr/local/bin/g++: exec /usr/local/gcc/2.95.2/bin/g++ -v hello.C Reading specs from /usr/local/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.5/2.95.2/specs gcc version 2.95.2 19991024 (release) /usr/local/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.5/2.95.2/cpp -lang-c++ -v -D__GNUC__=2 -D__GNUG__=2 -D__GNUC_MINOR__=95 -D__cplusplus __EXCEPTIONS -D__GCC_NEW_VARARGS__ -Acpu(sparc) -Amachine(sparc) hello.C /var/tmp/ccTgN0cY.ii GNU CPP version 2.95.2 19991024 (release) (sparc) #include "..." search starts here: #include <...> search starts here: /usr/local/include /usr/local/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.5/2.95.2/../../../../sparc-sun-solaris2.5/include /usr/local/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.5/2.95.2/include /usr/include End of search list. The following default directories have been omitted from the search path: /usr/local/gcc/2.95.2/lib/gcc-lib/sparc-sun-solaris2.5/2.95.2/../../../../include/g++-3 End of omitted list. hello.C:1: iostream.h: No such file or directory 3. We intend to use this gcc/g++ as a production compiler for our students and faculty. However, going over the Completion Checklist dated 1999-05-18 () I see that most functions in libstdc++ are in T status (implemented pending tests) or V status (passed test suite), but quite a number are S (stub only) and X (partially implemented or known buggy). Incomplete implementation of the "new" features described in ISO 14882 but not in the ARM is not of too much concern, but I have qualms about giving non-gurus a library with significant historic features that are broken, like va_end etc. and setjmp, even though they're uncommon. What is your opinion about using this release in a production environment in which ARM-compliant code has to work? 4. Good news for a change: we have tcl/tk 8.0 and thus are unable to compile dejagnu-1.3 (which uses tk 7.5 symbols), but I ran many of the tests from egcs-tests-1.1.2.tar.gz, scoring them manually. 
gcc failed only on its one Known Bug (if a structure that has a volatile data member is declared in a parameter list and has the register keyword, the keyword is treated as an error rather than being ignored). g++ test cases (that I ran) that didn't include header files were all compiled / executed / rejected correctly. James F. Carter Voice 310 825 2897 FAX 310 206 6673 UCLA-Mathnet; 6115 MSA; 405 Hilgard Ave.; Los Angeles, CA, USA 90095-1555 Internet: jimc@math.ucla.edu (finger for PGP key) UUCP:...!{ucsd,ames,ncar,gatech,purdue,rutgers,decvax,uunet}!math.ucla.edu!jimc
http://gcc.gnu.org/ml/libstdc++/2000-07/msg00207.html
What is LCM?

The LCM (Least Common Multiple) of two numbers is the smallest positive number that is divisible by both.

For example, the LCM of 30 and 20 is 60.

Explanation:
60 / 20 = 3
60 / 30 = 2

We can find the LCM in more than one way. In this post, we learn to find the Least Common Multiple (LCM) of two numbers using the simple method.

1. Program to Find LCM of Two Numbers (Simple Way)

We use a simple while loop and an if-else block to find the least common multiple of two numbers.

Source code:

import java.util.*

fun main() {
    val read = Scanner(System.`in`)

    println("Enter a:")
    val a = read.nextInt()

    println("Enter b:")
    val b = read.nextInt()

    val lcm = findLCM(a, b)
    println("LCM of $a and $b: $lcm")
}

private fun findLCM(a: Int, b: Int): Int {
    val biggerNum = if (a > b) a else b
    var lcm = biggerNum
    while (true) {
        // Break the loop when we get a number divisible by both a and b.
        if (((lcm % a) == 0) && ((lcm % b) == 0)) {
            break
        }
        lcm += biggerNum
    }
    return lcm
}

When you run the program, the output will be:

Enter a:
13
Enter b:
17
LCM of 13 and 17: 221

Here, we have created an object of Scanner. Scanner takes an argument which says where to take input from. System.`in` means take input from standard input, i.e. the keyboard. read.nextInt() reads whatever the user enters before a space or line break from standard input, and the values are stored in the variables a and b.

Let's assume a = 20, b = 30. So, the findLCM(20, 30) function is called.

At first, the variable lcm = 30 because 30 > 20. A while loop is run.

Note: the while loop is exited only when lcm is divisible by both numbers.

At first, lcm = 30. Since lcm is not divisible by 20, the ((lcm % a) == 0) && ((lcm % b) == 0) condition fails. The value of lcm is incremented by 30 (because 30 > 20).

Now, lcm = 60. lcm is divisible by both 20 and 30, so the ((lcm % a) == 0) && ((lcm % b) == 0) condition succeeds and the while loop breaks. Finally, 60 is returned.

We went through a Kotlin program to find the LCM of two numbers.
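As mentioned above, the LCM can also be found in other ways. One common alternative (my addition, not covered in the walkthrough) computes it from the GCD via Euclid's algorithm, using lcm(a, b) = a / gcd(a, b) * b:

```kotlin
// Alternative: compute the LCM via the GCD (Euclid's algorithm).
fun gcd(a: Int, b: Int): Int = if (b == 0) a else gcd(b, a % b)

// Dividing before multiplying keeps intermediate values small.
fun lcmViaGcd(a: Int, b: Int): Int = a / gcd(a, b) * b

fun main() {
    println(lcmViaGcd(20, 30)) // 60
    println(lcmViaGcd(13, 17)) // 221
}
```

This runs in O(log min(a, b)) steps, whereas the while-loop method may iterate many times when the two numbers share no common factors.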
https://tutorialwing.com/kotlin-program-to-find-lcm-with-example/
How to set up a GraphQL Server using Node.js, Express & MongoDB

So you are planning to start with GraphQL and MongoDB. Then you realize: how can I set up those two technologies together? Well, this article is made precisely for you. I'll show you how to set up a GraphQL server using MongoDB. I will show you how you can modularize your GraphQL schema, all this using MLab as our database. All the code from this article is available here. So now, let's get started.

Why GraphQL?

GraphQL is a query language for your APIs. It was released by Facebook back in 2015 and has gained very high adoption. It's the replacement of REST. With GraphQL, the client can ask for the exact data that they need and get back exactly what they asked for. GraphQL also uses a JSON-like query syntax to make those requests. All requests go to the same endpoint. If you're reading this article, I assume that you know a little bit about GraphQL. If you don't know, you can learn more about GraphQL here.

Getting started

First, create a folder, then start our project.

npm init -y

Then install some dependencies for our project.

npm install @babel/cli @babel/core @babel/preset-env body-parser concurrently cors express express-graphql graphql graphql-tools merge-graphql-schemas mongoose nodemon

And then @babel/node as a dev dependency:

npm install --save-dev @babel/node

Babel

Now we're gonna set up Babel for our project. Create a file called .babelrc in your project folder. Then, put the @babel/env there, like this:

{
  "presets": ["@babel/preset-env"]
}

Then go to your package.json and add some scripts:

{
  "scripts": {
    "server": "nodemon --exec babel-node index.js"
  }
}

We'll have only one script that we're gonna use in our project. "server" — It will mainly run our server.

Server

Now, in our root folder create the index.js file. It will be where we're gonna make our server. First, we're gonna import all the modules that we'll use.
import express from "express";
import expressGraphQL from "express-graphql";
import mongoose from "mongoose";
import bodyParser from "body-parser";
import cors from "cors";

Then, we're gonna create our connection with MongoDB using Mongoose:

const app = express();
const PORT = process.env.PORT || "4000";
const db = "Put your database URL here.";

// Connect to MongoDB with Mongoose.
mongoose
  .connect(
    db,
    { useCreateIndex: true, useNewUrlParser: true }
  )
  .then(() => console.log("MongoDB connected"))
  .catch(err => console.log(err));

What about that db const? This is where you're gonna put your database URL to connect MongoDB. Then you're gonna say to me: "But, I don't have a database yet." Yes, I got you. For that, we're using MLab. MLab is a database-as-a-service for MongoDB; all you need to do is go to their website (click here) and register. After you register, go and create a new database. To use it for free, choose this option: Choose US East (Virginia) as an option, and then give our database a name. After that, our database will show at the home page. Click on our database, then go to User and create a new user. In this example, I'm gonna create a user called leo and password leoleo1. After our user is created, on the top of our page, we find two URLs. One to connect using the Mongo Shell. The other to connect using a MongoDB URL. We copy the second one. After that, all you need to do is paste that URL on our db const at the index.js file. Our db const would look like this:

const db = "mongodb://leo:[email protected]:21753/graphql-mongodb-server";

Express

Now we're gonna finally start our server. For that, we've put some more lines in our index.js and we're done.
app.use(
  "/graphql",
  cors(),
  bodyParser.json(),
  expressGraphQL({
    schema,
    graphiql: true
  })
);

app.listen(PORT, () => console.log(`Server running on port ${PORT}`));

Now, run the command npm run server and go to localhost:4000/graphql and you'll find a page like this:

MongoDB and Schema

Now, in our root folder, make a folder named models and create a file inside called User.js (yes, with a capital U). Inside of User.js, we're gonna create our first schema in MongoDB using Mongoose.

import mongoose from "mongoose";

const Schema = mongoose.Schema;

// Create the User Schema.
const UserSchema = new Schema({
  id: { type: String, required: true, unique: true },
  name: { type: String, required: true },
  email: { type: String, required: true }
});

const User = mongoose.model("User", UserSchema);

export default User;

Now that we have created our User schema, we're gonna start with GraphQL.

GraphQL

In our root folder, we're gonna create a folder called graphql. Inside that folder, we're gonna create a file called index.js and two more folders: resolvers and types.

Queries

Queries in GraphQL are the way we ask our server for data. We ask for the data that we need, and it returns exactly that data. All our queries will be inside our types folder. Inside that folder, create an index.js file and a User folder. Inside the User folder, we're gonna create an index.js file and write our queries.
import { mergeTypes } from "merge-graphql-schemas"; import User from "./User/"; const typeDefs = [User]; export default mergeTypes(typeDefs, { all: true }); In case you have more than one type, you import that to your file and include in the typeDefs array. Mutations Mutations in GraphQL are the way we modify data in the server. All our mutations will be inside our resolvers folder. Inside that folder, create an index.js file and a User folder. Inside the User folder, we’re gonna create an index.js file and write our mutations. // The User schema. import User from "../../../models/User"; export default { Query: { user: (root, args) => { return new Promise((resolve, reject) => { User.findOne(args).exec((err, res) => { err ? reject(err) : resolve(res); }); }); }, users: () => { return new Promise((resolve, reject) => { User.find({}) .populate() .exec((err, res) => { err ? reject(err) : resolve(res); }); }); } }, Mutation: { addUser: (root, { id, name, email }) => { const newUser = new User({ id, name, email }); return new Promise((resolve, reject) => { newUser.save((err, res) => { err ? reject(err) : resolve(res); }); }); }, editUser: (root, { id, name, email }) => { return new Promise((resolve, reject) => { User.findOneAndUpdate({ id }, { $set: { name, email } }).exec( (err, res) => { err ? reject(err) : resolve(res); } ); }); }, deleteUser: (root, args) => { return new Promise((resolve, reject) => { User.findOneAndRemove(args).exec((err, res) => { err ? reject(err) : resolve(res); }); }); } } }; Now that all our resolvers and mutations are ready, we’re gonna modularize our schema. 
Modularizing our schema Inside our folder called graphql, go to our index.js and make our schema, like this: import { makeExecutableSchema } from "graphql-tools"; import typeDefs from "./types/"; import resolvers from "./resolvers/"; const schema = makeExecutableSchema({ typeDefs, resolvers }); export default schema; Now that our schema is done, go to our root folder and inside our index.js import our schema. After all that, our schema will end up like this: import express from "express"; import expressGraphQL from "express-graphql"; import mongoose from "mongoose"; import bodyParser from "body-parser"; import cors from "cors"; import schema from "./graphql/"; const app = express(); const PORT = process.env.PORT || "4000"; const db = "mongodb://leo:[email protected]:21753/graphql-server"; // Connect to MongoDB with Mongoose. mongoose .connect( db, { useCreateIndex: true, useNewUrlParser: true } ) .then(() => console.log("MongoDB connected")) .catch(err => console.log(err)); app.use( "/graphql", cors(), bodyParser.json(), expressGraphQL({ schema, graphiql: true }) ); app.listen(PORT, () => console.log(`Server running on port ${PORT}`)); Playing with our queries and mutations To test our queries and mutations, we’re gonna start our server with the command npm run server, and go to localhost:4000/graphql. Create user First, we’re gonna create our first user with a mutation: mutation { addUser(id: "1", name: "Dan Abramov", email: "[email protected]") { id name email } } After that, if the GraphiQL playground returns to you the data object that we created, that means that our server is working fine. To make sure, go to MLab, and inside of our users collection, check if there is our data that we just created. After that, create another user called “Max Stoiber”. We add this user to make sure that our mutation is working fine and we have more than one user in the database. 
Delete user To delete a user, our mutation will go like this: mutation { deleteUser(id: "1", name: "Dan Abramov", email: "[email protected]") { id name email } } Like the other one, if the GraphiQL playground returns to you the data object that we created, that means that our server is working fine. Get all users To get all users, we’re gonna run our first query like this: query { users { id name email } } Since we only have one user, it should return that exact user. Get a specific user To get a specific user, this will be the query: query { user(id: "2"){ id name email } } That should return the exact user. And we’re done! Our server is running, our queries and mutations are working fine, we’re good to go and start our client. You can start with create-react-app. In your root folder just give the command create-react-app client and then, if you run the command npm run dev, our server and client will run concurrently! Recommended Courses: ☞ Projects in Node.js - Learn by Example ☞ ChatBots: Messenger ChatBot with API.AI and Node.JS ☞ Master the MEAN Stack - Learn By Example Suggest: ☞ Learn GraphQL with Laravel and Vue.js - Full Tutorial ☞ Top 4 Programming Languages to Learn in 2019 to Get a Job ☞ Top 4 Programming Languages to Learn In 2019 ☞ Python Django Tutorial 2018 for Beginners ☞ Dart Programming Tutorial - Full Course ☞ Introduction to Functional Programming in Python
https://geekwall.in/p/S1spZJm0Q/how-to-set-up-a-graphql-server-using-node-js-express-mongodb
CC-MAIN-2020-40
refinedweb
1,706
67.55
Created on 2012-08-12 17:58 by ita1024, last changed 2013-08-01 20:55 by catalin.iacob.

On openSUSE 12.1, Python 3.3 installs its extensions in /usr/local/lib64/python3.3/lib-dynload/, but the .py files are in /usr/local/lib/python3.3/. When launching Python 3.3 one gets:

    Could not find platform dependent libraries <exec_prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    Python 3.3.0b1 (default, Aug 11 2012, 10:45:34)
    [GCC 4.6.2] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    Traceback (most recent call last):
      File "/etc/pythonstart", line 5, in <module>
        import atexit
      File "<frozen importlib._bootstrap>", line 1294, in _find_and_load
      File "<frozen importlib._bootstrap>", line 1258, in _find_and_load_unlocked
    ImportError: No module named 'atexit'

The same thing happens when installing with a --prefix. Moving the directory lib64/python3.3/lib-dynload to lib/python3.3/lib-dynload fixes the problem.

Please could you attach the configure options, the generated _sysconfigdata.py, and a log of the install step?

    > File "/etc/pythonstart", line 5, in <module>

This seems to be a patched/distro installation. Do you see this with an unpatched one as well?

I have not modified anything related to Python on my openSUSE install; I have only grabbed the latest tarball, compiled, and installed. Here is the result with Python 3.3.0 beta 2:

    ./configure
    make
    (sudo make install) > log

I am observing the same outputs:

    $ /usr/local/bin/python3.3
    Could not find platform dependent libraries <exec_prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    Python 3.3.0b2 (default, Aug 14 2012, 00:25:40)
    [GCC 4.6.2] on linux
    Type "help", "copyright", "credits" or "license" for more information.
    Traceback (most recent call last):
      File "/etc/pythonstart", line 5, in <module>
        import atexit
    ImportError: No module named 'atexit'
    >>> import atexit
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    ImportError: No module named 'atexit'
    >>>

The file /etc/pythonstart contains:

    # startup script for python to enable saving of interpreter history and
    # enabling name completion

    # import needed modules
    import atexit
    import os
    import readline
    import rlcompleter

    # where is history saved
    historyPath = os.path.expanduser("~/.pyhistory")

    # handler for saving history
    def save_history(historyPath=historyPath):
        import readline
        readline.write_history_file(historyPath)

    # read history, if it exists
    if os.path.exists(historyPath):
        readline.set_history_length(10000)
        readline.read_history_file(historyPath)

    # register saving handler
    atexit.register(save_history)

    # enable completion
    readline.parse_and_bind('tab: complete')

    # cleanup
    del os, atexit, readline, rlcompleter, save_history, historyPath

I will attach _sysconfigdata.py next (only one attachment).

Here is the generated Lib/_sysconfigdata.py file.

The configure step sets LIBDIR to /usr/local/lib64. Please find out why this is not set to /usr/local/lib.

This is also an issue on openSUSE 12.2 with the release version of Python 3.3 when compiling from sources. OBS (openSUSE Build Service) has RPMs for 3.3rc1. I'm assuming they've got a patch which fixes this issue; looking at the spec file (lines 61, 62):

    # support lib-vs-lib64 distinction
    Patch02: Python-3.3.0b2-multilib.patch

The URL for that patch is ... I haven't verified it, but I'm guessing that patch has something to do with the issue. The top level directory for the sources for their RPMs is at ...

I applied the openSUSE patch to the current cpython tip (3.4a0), rebuilt, then reinstalled. I verified that sys.path contains directories which contain "lib64". I can import the time module now, which failed before applying the patch.

The patch didn't apply perfectly (two chunks were slightly offset), so I generated a new patch, which is attached. It's been a while since I did anything with bug reports. I tweaked a few fields; let me know if I muffed anything. Sorry, I don't have access to anything but the openSUSE systems at work. I was just verifying that it solved my installation problems. I'll see if I can nudge some of the other things forward.

Skip, you are mixing two technologies: one is --libdir, which can be specified at configure time, and the other is the sub-directory name for libraries used as a path suffix to some prefixes. You could just adjust Python to use the user-specified path (--libdir); that is more flexible than hard-coding things in the configure script. Also, I could not understand why a new issue was opened. Just search for libdir; one hit is Issue 1294959, 7 years old, with the same idea. That it has stayed open for so long means that solution is not acceptable.

Before this goes any farther, let me make this clear: I *did not* develop this patch. It was developed by the openSUSE folks. I had trouble installing Python from source on the openSUSE system under my desk at work. I asked about the problem on python-list@python.org. Someone directed me to this open bug report. I saw the reference to the patch, applied it, and installed. I verified that the installed Python executable now works as I expected, and reported that. That is all I did.

Various responses seem to suggest that people think I wrote the patch. I did not. I didn't even look at the contents of the patch before applying it. I regenerated it relative to the current hg trunk simply because when I first applied it, a couple chunks were applied with offsets. If my feedback is not sufficient to help move this bug report forward, my apologies. I know next to nothing about writing configure scripts, let alone the right way to do that. I don't mess around at this level of things any more. Haven't in at least a decade. I don't know what a host macro is. I have never used --libdir. If people are looking to me to correct the flaws in this patch, they are likely going to wait for a while.

In reply to msg168184: LIBDIR is set to include lib64 instead of lib because openSUSE explicitly does it that way in their multilib implementation. More specifically, the CONFIG_SITE environment variable is set to /usr/share/site/x86_64-unknown-linux-gnu, which contains, among more stuff:

    catalin@opensuse:~/hacking/cpython> cat $CONFIG_SITE | grep libdir
    # If user did not specify libdir, guess the correct target:
    if test "$libdir" = '${exec_prefix}/lib' ; then
        libdir='${exec_prefix}/lib64'

/usr/share/site/x86_64-unknown-linux-gnu is owned by the package site-config, whose README says:

    site-config: Site Paths Configuration for autoconf Based configure Scripts
    ==========================================================================

    Site configuration for autoconf based configure scripts provides smart
    defaults for paths that are not specified. All autoconf based configure
    scripts will automatically resource site script using the CONFIG_SITE
    environment variable. It works without any explicit user interaction.

    Currently implemented features:

    Automatic libdir setup to $exec_prefix/lib or $exec_prefix/lib64
    ----------------------------------------------------------------
    Depending on architecture, the site script should correctly and
    automatically switch between lib and lib64 libdir.

    libexecdir setup to $exec_prefix/lib
    ------------------------------------
    Upstream libexecdir defaults to $exec_prefix/libexec. This directory is
    not part of FHS 2.2, so we change it to $exec_prefix/lib (yes, it is
    correct to set it to $exec_prefix/lib even for bi-arch platforms). Most
    projects add the package name to this path, so you most probably get what
    FHS 2.2 expects.
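A quick way to see whether a given Python install has this lib/lib64 split is to compare the pure-Python and platform-dependent stdlib paths reported by sysconfig (a diagnostic sketch, not part of the original bug report; the example paths in the comments are illustrative):

```python
import sysconfig

# On an affected install the pure-Python stdlib resolves under .../lib/...
# while platform-dependent files (like lib-dynload) resolve under
# .../lib64/..., so the two paths disagree.
stdlib = sysconfig.get_path("stdlib")          # e.g. /usr/local/lib/python3.x
platstdlib = sysconfig.get_path("platstdlib")  # e.g. /usr/local/lib64/python3.x

print("stdlib:    ", stdlib)
print("platstdlib:", platstdlib)
print("lib/lib64 split:", stdlib != platstdlib)
```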
http://bugs.python.org/issue15631
LV: What is meant by "package index script"? Code such as pkgIndex.tcl?

DGP: The default package unknown callback, [tclPkgUnknown], finds the index scripts for the packages it manages in files named pkgIndex.tcl, so yes, that's the most common example of an index script. Other package unknown callbacks might choose to retrieve or generate their index scripts from other sources, or by other methods.

Probably most important is that a package index script should never raise an error. This can be tricky because index scripts can be evaluated in just about any kind of Tcl interpreter, by any registered [package unknown] manager, so you should not depend on things you might depend on in your more day-to-day Tcl programming. Especially noteworthy along those lines is that a package index script might be evaluated in a Tcl interpreter for any release of Tcl from 7.5 on. This means you should not be using any Tcl 8 features in your index script until after the index script itself verifies that the interp has a recent enough Tcl in it.

Index scripts can rely on the [return] command to cleanly terminate evaluation of the index script. So, a useful technique in an index script is to check the interp for a recent enough Tcl release to support the rest of the index script, and to support the package indexed by the index script:

    if {![package vsatisfies [package provide Tcl] 8]} {return}

Likewise, an index script should not raise any of Tcl's other return codes like [break] or [continue]. It's best to think of those as causing undefined behavior, and just avoid them completely in index scripts. The actual behavior will probably depend on the internal details of the [package unknown] handler that the index script author really shouldn't know about, let alone depend on.

An index script can also depend on all of the commands built in to Tcl being available by their simple names. This does not mean the index script is evaluated in the global namespace. It only means that non-buggy [package unknown] handlers will not mask Tcl's built-in commands in the context they provide for index script evaluation.

An index script can rely on the existence of a variable known in the current context as dir. The contents of that variable are the absolute file system path to the installation directory associated with the package indexed by this index script. It is up to the [package unknown] manager to perform this initialization. Thus the [package unknown] manager keeps track of what installation directory goes with what package goes with what index script. The default [package unknown] handler, [tclPkgUnknown], achieves this by assigning to dir the name of the directory that contains a file named pkgIndex.tcl that contains the index script.

Since the index script does not know what namespace context or proc context it might be evaluated in, if it wants to access a global variable, it should either use the [global] command to bring that variable into current scope, or use a fully-qualified name for the variable (::tcl_platform) after verifying a Tcl 8 interpreter.

An index script should not call [package require] for any package. Evaluation of an index script is not about loading any package; it is only about registering a load script for later use. Other than performing that registration by calling [package ifneeded], an index script should strive to be free of side-effects.

An index script should not assume it is kept in a file and is evaluated by [source]. This means the index script should not depend on [info script] to return anything useful. The dir variable is the interface to use to discover the installation directory of the package, and that's all the index script should need to know.

Don't do this:

    package ifneeded foo 1.0 "load [file join $dir foo[info sharedlibextension]]"

That will break if $dir contains spaces. Do this instead:

    package ifneeded foo 1.0 [list load [file join $dir foo[info sharedlibextension]]]

Be sure to read the discussion at pkgIndex.tcl, which provides a Tcl 8.5 alternative preferred by DKF.

On 5-jan-2004 in c.l.t., Michael Schlenker explains, with respect to "Probably most important is that a package index script should never raise an error":

The problem is mainly with errors thrown when package require looks for packages, not when they are found and are about to be loaded; i.e., the following example code is deemed OK:

    proc loadMyPackage {dir} {
        if { ![CheckRequirement] } {
            # this is OK
            return -code error $errMsg
        }
        source [file join $dir sourceFile.tcl]
    }
    package ifneeded myPackage [list loadMyPackage $dir]

Of course, the error could also be raised inside the sourceFile.tcl.

DGP: The problem with either of those approaches is they do not play well with package's handling of multiple installed versions of a package. Essentially, the [package ifneeded] tells the [package] system that a package is available to be loaded, even though the attempt to load it is going to produce an error. If this index script is for myPackage version 1.2, [package] is going to prefer loading it over myPackage version 1.1, which might not have the "error on load" problem. That would mean we're ignoring a package that works in favor of one that doesn't. It's best if you can avoid [package ifneeded] registration of packages that you know cannot successfully load in the current interp.

In an environment where you can be sure there's only one version of your package "installed" (the internals of a Starkit, perhaps?), you can probably get away with that. But in that environment a more generally correct index script will also work, so why not? Another, perhaps less serious, problem is that the example does have the side effect of creating a [loadMyPackage] command.

Lars H: If the need is to perform several commands for loading a package, then the helper proc is quite unnecessary. A multiline package ifneeded script is straightforward to construct using format:

    package ifneeded myPackage 1.0 [format {
        package require sourceWithEncoding
        sourceWithEncoding utf-8 [file join %s myPackage.tcl]
    } [list $dir]]

Note how the helper sourceWithEncoding package is not loaded until myPackage is actually required. Also note that dir is list-quoted before it is passed to format, since it will appear as a complete word in the script (this is similar to how bind percent substitution works).

NEM: Or a lambda (in Tcl 8.5+): if {.

Lars H: The big disadvantage of that is of course that it's only suitable for packages that require Tcl 8.5. An advantage is that it lets you create variables that are local to the script.
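Putting the guidelines above together, a complete pkgIndex.tcl for a hypothetical pure-Tcl package might look like this (the package name, version, and file name are illustrative, not taken from this page):

```tcl
# pkgIndex.tcl sketch for a hypothetical package "myPackage".

# Guard first: on a pre-8 interp, stop cleanly with [return], never an error.
if {![package vsatisfies [package provide Tcl] 8]} {return}

# Register a load script only: no [package require], no other side effects.
# $dir is supplied by the [package unknown] handler; [list] quoting keeps
# paths containing spaces intact.
package ifneeded myPackage 1.0 [list source [file join $dir myPackage.tcl]]
```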
http://wiki.tcl.tk/5900
tl;dr -- Quick & easy method for a single terminal/command session

In Canopy 1.4 and above, you can make Canopy be the default Python for the duration of a single terminal session:

- Windows: Open a "Canopy Command Prompt" window from the Canopy Tools menu. Please see Note #3 below.
- Mac/Linux: Open a "Canopy Terminal" window from the Canopy Tools menu.

For more persistent settings, read on.

Background

At the end of Canopy's initial setup, there is an option, which is selected by default, to make Canopy's Python be your default Python at the terminal / shell / command prompt. (This option will not apply to any terminal sessions that are already open, but rather to terminal windows opened subsequently.)

You might reasonably decide not to accept this default, for example if you are currently using another Python distribution (such as EPD) for production work, or do not want .py files to be associated with Canopy's editor at this time. This will usually not affect operation inside the Canopy GUI application. However, not making Canopy your default Python does complicate your ability to use Canopy Python from a command prompt, including installing external packages, because it may not be immediately obvious where Canopy's User Python is actually located if you want to refer to it explicitly.

Canopy uses virtual environments as described here, so that there are actually three Canopy Pythons. You should only use the Canopy User Python! Running the wrong Python (or IPython) will lead to confusing misbehavior.

How to check whether Canopy User Python is your default Python

Start python or ipython from a terminal session, do `import sys; sys.prefix`, and check that the output value ends in "Enthought/Canopy[something]/User" (specifically, it should match most of the path shown in the platform-specific table below). It should also match the value of sys.prefix that is shown when you type the same command in the Canopy GUI's Python panel.

If you did not initially make Canopy your default Python, but would now like to make it so persistently, you have two options:

Set Canopy Preference - usually very easy but not for everyone:

Open the Canopy Preferences dialog (on the General tab) and click on "Set [Canopy] as default". Notes:

- This does not always succeed; sometimes this Canopy GUI dialog can't actually decide whether Canopy Python would be the default in a terminal. We apologize for the inconvenience, and are working to improve the behavior. If this happens to you, we suggest using one of the other options below.
- This will subsequently cause the OS to open all python files, by default, in the Canopy editor (file association). This is the safest approach because it protects you from inadvertently running malicious scripts. After a python file has been opened in the Canopy editor, you can run it with a single keystroke. Nonetheless, this will not be the desired behavior for all users, and in this case you should not use this functionality.
- (Windows): If you wish to use binary package installers such as Chris Gohlke's, this is your only option; such installers typically require that Windows registry entries be set, so just modifying PATH, as is done by the following options, will not suffice.

Manually set PATH - fairly easy:

To make Canopy be your default Python in all subsequent Terminal/Command sessions, you can manually prepend your PATH with the following platform-specific directory. To do this:

Linux or Mac OS X

Edit your ~/.bash_profile file, and uncomment (remove the initial "# " from) the line that looks like one of the following (depending on how early you installed Canopy; more details here):

    # source [...] /activate
    # VIRTUAL_ENV_DISABLE_PROMPT=1 source [...] activate

(On some systems, this line will be in the ~/.profile or ~/.bashrc file. Wherever you find it, you can usually just uncomment it when you want Canopy to be your default Python. Note that if you have both .bash_profile and .profile on OSX, then only .bash_profile will be used, unless another config file explicitly invokes .profile.)

Windows

Editing the Windows PATH through the Control Panel (System / Advanced / Environment / User) is awkward and error-prone. We recommend using a utility such as the free Rapid Environment Editor to back up your existing PATH settings, modify them, and/or switch between different PATH settings.

One possible source of confusion in Windows: if Canopy was installed for the current user, and you already have a Python (e.g. EPD 7.3) which was installed for all users, then even if you specified that Canopy should be your default Python, the "all user" python default will take precedence. The way to override this is to modify the System PATH environment variable to begin with the Canopy Python directory, or to remove Python altogether from this environment variable. (Again, Rapid Environment Editor makes this much easier to do.) You will need admin permission to do this.

macOS and Linux: Using Canopy Python in an automated cronjob

If you are automating the execution of a python script using cron, you may find that your script is being executed with the system python installation rather than the python provided by Canopy. This is because the cron job has a very limited scope of environment variables passed to it when execution begins. If you have manually edited your PATH variable in your .bashrc, .bash_profile, or .profile, then you will need to source that file before executing your python code in the cronjob. For example, to run a script called my_script.py with the python installed in the Canopy User python environment, assuming you have edited your .bash_profile with the correct PATH, you need to source this file and then execute the python code. The command you execute with cron would then look similar to this:

    source ~/.bash_profile && python my_script.py

Undoing Canopy as the default Python

To remove Canopy from your PATH, simply reverse the (OS-specific) manual process described above. Manually changing file associations (.py in this case) is more difficult. The usual process, which applies to most applications including Canopy, is that when you install another application which you wish to assume control of a particular file association, it will do so for you. Metaphorically speaking, you change color by re-painting rather than by attempting to scrape off the previous layer.

Please do not enter support requests in article comments

Please use article comments for suggestions to improve the article. For individual support requests, please follow these guidelines.

Anyway it's not completely clear. From the user's point of view, yes or no: is setting/unsetting Canopy Python as the default python EQUIVALENT to uncommenting/commenting this line in the user's .profile? VIRTUAL_ENV_DISABLE_PROMPT=1 source [...] activate

Hi. I very stupidly and very rashly deleted the lines above from my bash_profile. I have tried re-installing Canopy to replace the text, but this hasn't worked. I set up the environment in the default location, but did not make Canopy my default Python. Are you able to provide the text of the lines that Canopy adds to bash_profile so I can re-instate them?

@Marc, see the end of the "Environment setup" section of the Users Guide for your OS:

Hello, is there an "activate" script for c-shell users (especially in the recently released version 1.2)? I use tcsh as my shell for a number of reasons, so the bash-only activate script doesn't do me any good. I know that using c-shell makes me a horrible person, but I can't switch to bash and would really like to use canopy-installed python at the command line for my work. If there's no c-shell version of activate, could you please outline a useful workaround? Many thanks for your help!

@Deatrick, you don't need activate at all; just write an alias that prefixes <home-dir>/Enthought/Canopy_64bit/User/bin to your PATH.

I am unable to set Canopy as default right after installing. My system is 64-bit Mac OS X. I tried to uninstall and remove all associated files, then reinstall. I clicked on "Start using Canopy", but it was unable to start the application.

The first time I used it, I set Canopy to be my default Python. But now I want to cancel the option; how do I do that?

When I start python, I get the following:

    Enthought Canopy Python 2.7.6 | 64-bit | (default, Jan 29 2014, 17:35:36)
    [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2

Does this mean it is running GCC version 4.1.2? I am working with CUDA and that throws an error that gcc 4.6 and up are not supported. Is there any workaround for making GCC 4.1.2 system-wide? I am on a cluster and do not have root access! Thanks :)

Update: simplified the article, reflecting that Canopy 1.4 provides a "Canopy Command Prompt" or "Canopy Terminal" command in the Tools menu.

Under Win7 (64-bit) I forgot that I had Python 2.5 already on my computer when I installed Canopy 1.4.1975. I got the dreaded "Canopy is not your default Python environment" and the pushbutton 'Set as default' was disabled. An "import sys; sys.prefix" in python started from a terminal showed "C:\Python25", and in Canopy showed 'C:\Users\ACG\AppData\Local\Enthought\Canopy\User'. I read the above, but did not want to delete the previous install or work with the registry. My quick fix, without deleting anything or messing with the registry, was to rename the 'Python25' directory to 'Python25xx'. Then 'import sys; sys.prefix' showed 'C:\Users\ACG\AppData\Local\Enthought\Canopy\User' in both the terminal-launched python and Canopy. The 'Edit | Preference' tab dialog box in Canopy showed 'Canopy is your default Python environment' with the pushbutton labeled 'Unset as default' enabled. Hope that helps someone. ACG
CC-MAIN-2018-39
refinedweb
1,654
63.59
Is Perl Better Than a Randomly Generated Programming Language? 538 First time accepted submitter QuantumMist writes "Researchers from Southern Illinois University have published a paper comparing Perl to Quorum(PDF) (their own statistically informed programming language) and Randomo (a programming language whose syntax is partially randomly generated). From the paper: 'Perl users were unable to write programs more accurately than those using a language designed by chance.' Reactions have been enthusiastic, and the authors have responded." Better? (Score:5, Funny) Better? How about we start with distinguishable? Re:Better? (Score:5, Funny) Indeed. This is the reason why the Obfuscated Perl Contest is run by the Department of Redundancy Department. Re:Better? (Score:5, Informative) Yet another ridiculous summary. The study wasn't which language was better, it was in which language can first-time users write a program more accurately. My guess is that Cobol would beat any of the three - it is designed from the ground up to be readable. Quorum looks a lot like Pascal (Score:2) My guess is that Cobol would beat any of the three - it is designed from the ground up to be readable. So are Pascal and Python. In fact, Quorum looked a lot like Pascal from what I saw in the PDF. Re:Quorum looks a lot like Pascal (Score:5, Insightful) Languages that consider whitespace need to die. Re:Quorum looks a lot like Pascal (Score:4, Insightful) Well said. If you want your code properly indented, just indent it. It's like the Python apologists are incapable of formatting their code properly unless the language forces its particular version of "properly" on you. Before the trolls fire back: In the case of code written by others, run it through a pretty-printer. Problem solved. Oh, as a bonus, you can use that same tool to format code the way you prefer, and switch it back to whatever style your company requires at the press of a button. Why is this a bad thing? 
Re: (Score:2) Because in practice, the automated code cleaner results in almost every line of code in the file to have a difference highlighted by my company's source code repository diff generator. This obfuscates the true nature of the change to the logic in the code I am making in order to fix a bug or implement a feature. In turn, that makes it harder for people responsible for maintaining the code to determine what exactly changed from version to version. Re: (Score:3) I've always felt that version control systems should store syntax trees, but have never had the time to do the work to do that. Re: (Score:3) Just enforce a formatter on commit. If the formatted code is any different from the original file, abort the commit. Git makes this kind of thing easy. It also means the repository is always in a sane state. A simple script can reformat all changed files trivially before a commit operation. Re: (Score:3) No-one is saying that Python is good because it forces you to indent. Quite the opposite: all sane people indent their code anyway, whatever the language, so why not use that to indicate program structure? Re: (Score:3) Because not everyone uses the same indentation as everyone else. If indentation rules need to be worked out before starting a project, you're wasting more time than a language where indentation has no meaning. Re: (Score:3) My main objection to semantic indent can be summarized in this psuedo code example: //end foo //end fubu class fubu function foo(bar) start function more code all's well console.log("debug message") more real code Having that debug console statement out of band with the rest of the functional indents makes it easy to notice when scanning code. Now you might say one should never debug th Re: (Score:3) Re: (Score:3) Oh the fucking irony of it. I was trying to post the following using pre and code tags without success and just ended proving your point: Sure. 
Because def function(): if condition: while ok: do_something() end while end if end def Is much more readable than: def function(): if condition: while ok: do_something() Re: (Score:3, Insightful) Most languages consider whitespace. In most programming languages where both of the following are valid, they will have different semantics: 1: foo bar 2: foobar Quite a lot of languages even distinguish between different types of whitespace, e.g., C where the following two constructs are different, despite differing only in which particular kind of whitespace: 1: // foo(); bar(); 2: // bar(); foo(); Python may be unusual in which differences in whitespace it considers Re: (Score:2) I had no trouble parsing that :) But, yes you are correct. Re: (Score:3) Really, Python's problem is that both spaces and tabs are legal - if the language required one or the other, it would be fine, modulo subjective readibility arguments about braces. Re:Quorum looks a lot like Pascal (Score:5, Interesting) Fortran (at least, IV and earlier) totally ignored white space, even in the middle of an identifier. Of course, this led to problems like DO 10 I = 1.10 meaning "assign the floating point number 1.10 to variable DO10I", when the programmer meant to type DO 10 I = 1,10 meaning "loop from here to label 10 varying I from 1 to 10". An error something like this caused the Mariner II probe to Venus to go off course at launch and the Range Safety Officer hit the destruct. Re:Quorum looks a lot like Pascal (Score:4, Funny) Fortran is interesting, theologically - it considers God to be real unless declared integer. Re:Quorum looks a lot like Pascal (Score:5, Insightful) Re:Quorum looks a lot like Pascal (Score:5, Interesting) If those punctuation marks (or keywords) make the code more readable, then they're not gratuitous are they? I, for one, find brace-less languages fantastically hard to read, Python especially. 
I LUUUUURV Python so much that if it was legal I would marry it, but I completely agree. Curly braces to denote block starts and stops make the code easier to read and manage. I should not have to wonder whether a function or block continues past the bottom of the current screen's worth of code when it ends with a few lines of whitespace because I have to know the indentation level of the next line of code to know if it's in a different block context than the last line of code on the current page. I also should never have to wonder if I re-indented code correctly when cut/pasting or adding/removing a level of block nesting. I don't care if Python wants to keep the indentation requirements. Forcing the code of awful programmers to be more readable in this way is a good thing. Forcing all code to be less readable in another way is a bad trade-off. Just add in the damn braces! Then I can use tools to auto-indent for additional readability. Re: (Score:3) Re:Quorum looks a lot like Pascal (Score:5, Insightful) Personally, I find that curly braces make code easier to read on top of perfect indentation. In truth, though, it's not so much the braces, as it is the nearly-empty lines of code that are spend to put those braces (note: this specifically applies to ANSI-style brace layout only, not K&R style). It creates a kind of a visual box, clearly delimited, with body of the block in it - more so than plain indentation does by itself. That said, I wouldn't call Python "fantastically hard to read", quite the opposite - it tends to be one of the easiest languages to read. Not because of indentation, but because its basic syntax is rather clean. Re: (Score:3) That is only one of the two syntactic roles assigned to parentheses. The other is to disambiguate priority. For instance, you have to write (a + b) * c if you don't want it to mean add(a, mult(b, c)). But you see, combining multiple lines is very exactly this: priority disambiguation. Consider "if (cond) a; b". 
Priority is such that the statement is parsed like "(if (cond) a); (b)", because the if statement doesn't eat up semicolons. If you want it to mean "(if (cond) ((a); (b)))", then you could just put p Re: (Score:2) I remember monospaced fonts. Punch cards with FORTRAN used them. Remington typewriters used them too. Re: (Score:2) Get a language that can be programmed using with any text editor. Re: (Score:2) Way to misunderstand what was being said on purpose. You must be a fun guy at parties. indeed (Score:2) me: My hovercraft (pantomimes puffing a cigarette)... is full of eels (pretends to strike a match). them: Ahh, matches! me: Ya! Ya! Ya! Ya! Do you waaaaant... do you waaaaaant... to come back to my place, bouncy bouncy? Re: (Score:2) Quorum looked a lot like BASIC to me. Only the keywords were different. The headline for the article is horrible (as usual). The headline (and summary) neglect to mention that this test was given to people who had no experience in programming. We compared novices that were programming for the first time using each of these languages, testing how accurately they could write simple programs using common program constructs (e.g., loops, conditionals, functions, variables, parameters). My takeaway from this "research" is that Perl is not a good language for beginners. If you already know the general concepts of programming, Perl is fairly easy to pick up. Re: (Score:3) COBOL is designed to be readable, but it's hardly writable. (roughly 10 years of experience developing COBOL code). Re: (Score:3) No one writes COBOL anymore. We just tweak it. Re: (Score:3) As I understand it, there was only one original COBOL program ever written. Everyone else copied & modified it for their purpose. Re: (Score:3) Indeed. vim is impossible for a first-time user. That does not mean it is a terrible editor. Over-emphasizing day 1 productivity is a bad thing when most of your days will not be 'day 1'. It's the study participants. 
(Score:2) You know, the "study" (which I didn't read, this being slashdot 8-) probably involved exposing the languages in question to a hugely diverse and wide ranging number of College Undergrads That Fancy Themselves Programmers. As such, the fact that the quality of the code was not distinguishable despite the language chosen indicts the programmers more than the languages. The problem with most studies is that College Freshmen already know everything so any attempt to test them is doomed to fail. Re: (Score:2) Nonono, it's pseudorandom. They just used a very good function. Next question (Score:2) How does C++ fair? LOL Re:Next question (Score:5, Funny) How does C++ fair? Farely average. Re: (Score:2) I know C++ fairly well, trust me, it's the most pointlessly complex language on the planet. And the boilerplate goes on forever. C++ might have developed sanely if they'd introduced its major features in reverse order, i.e. lambdas way back in 1983, templates a bit later, and class methods only during the last decade. As it stands, there are basically two types of C++ code : code that badly emulates functional programming styles, and code consisting entirely of calls to simple wrappers around extern "C" f Re: (Score:3) Unfortunately, C++ remains the only language with a full-featured yet concise RAII, which is its main advantage when compared to C. And templates, while messy, are also extremely efficient in terms of generated code - more so than similar mechanism (generics etc) in pretty much any other language I know of. Re: (Score:2, Funny) How does C++ fair? LOL #%@$&#@^UGSOWDYRO&F@#L(EGFGP*$TW This Script written in Perl computes the answer.
[connellybarnes.com] Re: (Score:2) I've used Python, and Perl is my "goto" language (sorry, bad pun) so I tend to suspect they would do better than C/C++ in these areas too, but Re: (Score:3, Informative) The study cited has several biases in favor of the scripted languages that are acknowledged by the author in the references of your supplied link. Primarily: - The non-scripted languages (C, C++, Java) were tested under formal conditions in 1997 / 1998 (Java 1.1 I assume), the script programmers wrote their programs at home and self reported their times (and in most cases spent several days thinking about the problem before starting work, time which was not included). - The script programmers were told that the Re: (Score:2) Far more problems fall into what you seem to consider "easy". My guess is you don't know either language nor what a hard problem really is. Re: (Score:2) High end video games tend to involve some reasonably sticky problems... Re: (Score:2) I did not suggest they did not. Only that they were not the only source of such problems. Most video games seem to cheat their way out of lots of problems, something programs used for business cannot often do. A classic example of such cheating is instant hit bullets. Java? (Score:2) How is Java better than C++? Trick question? (Score:4, Funny) I always thought Perl was a randomly generated programming language. Re:Trick question? (Score:5, Funny) Re: (Score:2) Like everything else in Perl, the name is too long. Pathological Rubbish would have been more apropos. Re:Trick question? (Score:4, Informative) Hence the name: Pathologically Eclectic Rubbish Lister. Note for the ignorant... that REALLY IS what it stands for! Re: (Score:2) I think APL has the edge there. It went so far as to make up its own non-ASCII symbol set. Perl Is way better (Score:5, Informative).
It's possible to write bad unreadable code in anything, but it's just so much easier in Perl that I shudder anytime I get asked to look at someone else's Perl code. That has NEVER been a good experience. Re: (Score:2) Sounds more like an issue of EBCAK. You can make a program that's illegible; blaming Perl for the incompetence or sloth of the people that are writing the code is hardly fair. What about all those C programs where code is being run from random other files without concern for organization or maintainability? Re: (Score:2) Yeah but wasn't this supposed to be measuring the efforts of "first time users". Maintaining someone else's code is an entirely different problem. Trying to sort out someone else's code is generally a scary experience across the board. You can make spaghetti out of any language. Re: (Score:2) Trying to sort out someone else's code is generally a scary experience across the board. You can make spaghetti out of any language. IME it's easier to read Java code, even decompiled java code, than just about anything. C# can be easy too, but a lot more regex use, linq and such ugliness drag it down. Re: (Score:3) Re: (Score:3) use strict; Learn it, live it, love it. Re:Perl Is way better (Score:5, Insightful) I would suggest that perhaps Perl is particularly effective in separating good from bad programmers. In other languages, restrictions allow bad programmers to write code that *looks* good. But if you see readable, understandable Perl code, you know you've got a keeper. "if you see readable, understandable Perl code" (Score:2) One of these days that may happen to me. Re: (Score:2) I would suggest that perhaps Perl is particularly effective in separating good from bad programmers. In other languages, restrictions allow bad programmers to write code that *looks* good. But if you see readable, understandable Perl code, you know you've got a keeper. I've looked at Perl like I look at English.
It's possible to write really well done English that uses some obscure structures for emphasis, or to increase clarity. It is however more likely that someone will piece together the most incoherent confusing material into an English essay, and you will have difficulty following it. Illegible code in Perl is not a fault of the language, but rather a fault of the programmer. Whether the matter of Perl letting people write so hideously is a good or bad thing, it must Re:Perl Is way better (Score:5, Insightful) This! Perl is a "beautiful" language -- in the same way some people talk about certain human languages (e.g. romance languages, Russian, or Sanskrit) being beautiful, as opposed to merely functional. Other people will disparage those same languages as being too this, or not enough that... the same kind of debate we see with programming languages, particularly with Perl, which is kind of interesting. And for some of those human languages, you'll also hear people lament how horribly some non-native speakers butcher them, perhaps because those non-native speakers are using them merely as a "functional" language, rather than grasping the full depth of expression that is possible. This analogy has at least some merit I think, since Perl is a language that was designed "linguistically" at least in some sense, in that it has the same kinds of patterns that natural languages have and is chock full of idioms and expressions, that some programmers (myself included) find not only useful from a functional perspective, but actually enhance the creative process that happens when one writes code. I think part of that is due to Larry Wall's now venerable "Programming Perl" -- which is one of the few truly valuable programming books that's also actually fun to read -- especially if you're one of those people that thinks at least a little like Larry, and enjoys a dry wit. 
Anyway, so yes, I totally agree, programmers that need "restrictions" in a scripting language to have their code be readable are definitely a certain "kind" of programmer. Not that they are better or worse programmers, they just don't embrace the TMTOWTDI philosophy, which is something that the society at-large doesn't generally embrace, so it's no surprise that there seem to be a lot of people that shit all over Perl. I've seen (in my own code and in others) truly beautiful and elegant Perl code that reads like a story, and also the "line noise" code people complain about. Which is really all about regular expressions. Some people really love 'em, perhaps a little too much. Though as has been pointed out probably a billion times, there's nothing wrong with one-off throwaway code that looks like line noise, but if you're building a giant system, then your code should be all pretty and commented and generally sociable. It's both unfortunate (and I still hope... a mixed-blessing) that Perl 6 has taken so long to come about. In that PHP went and pretty much took over its niche as a web-development and "glue" language. Though the Perl community is still strong, if small, and I have no doubt that it will remain a living language for a long time to come, if for no other reason than the fact that CPAN is awesome, and there are zillions of lines of code written in Perl that a lot of people depend on every day, and that when Perl6 matures, I think it will enjoy a resurgence, within the Perl community, and perhaps much further, simply because of the simple and powerful philosophies that it encodes. Easy things should be easy and hard things possible. Re: (Score:2) Comments are supposed to tell you what's going on. In fact, Perl has a built-in self-documentation system that makes it a breeze to document and find the documentation you want. You don't maintain perl code by trying to understand it and tweak it.
You maintain it by replacing lines or blocks of code with better written code. And if you're not man enough to write better code, wtf are you doing trying to maintain it in the first place? Re: (Score:3) I certainly hope not. Whenever I see comments in C++ or Java code I'm thinking "why did you not write this in a more obvious way in the first place, wtf needs explanations here". There are a few cases where code needs comments IMO, and class-level and function-level docs are perfectly OK. But comments within source are a sign of a) something incredibly clever being done b) sloppy design or poorly written code that needs explanation on what's going on In 99% of Re: (Score:3, Insightful) Despite some of the ill founded comments in this discussion, natural language is not comparable to computer language. Programming is closer to mathematics than human language. In the same Re: (Score:3). One could - quite validly - say the same about the English Language. Now, I'll grant programming and spoken/written languages don't overlap perfectly with one another. That's why languages like LISP have such elegance; what they're designed to express is something far more abstracted and formalised in nature. It's possible to conceive of a complex structure and accompanying set of behaviours and properties simply by scanning a screenful of LISP, but English is narrative in nature. You don't scan across; you scan from top to bottom. It's possible to write bad unreadable code in anything, but it's just so much easier in Perl that I shudder anytime I get asked to look at someone else's Perl code. That has NEVER been a good experience. Perl can be difficult to grok, but it can be elegant as well. I've experienced revulsion looking at Perl code before, but never so consistently as with ASP and PHP. These are languages (and I use that term loosely) that simply cannot be made pretty. In the right hands, Perl can be as elegant and expressive (and opaque, and efficient) as Shakespearean English.
Argue however you like, the same is not true of many other languages. Python has clarity and simplicity. It's truly an engineer's language. LISP, as I've said, is beautiful in the same way architecture can be beautiful: taken as a whole, rather than a story. I didn't understand the appeal of Ruby until I learned that its inventor is Japanese. Then it all became clear. What seemed like awkward, nearly backward syntactical constructions suddenly made sense. In other words, horses for courses. But arguing that Perl is not readable in its very nature is like arguing that English is incomprehensible based entirely on watching Jersey Shore. Re: (Score:3) Again, this depends on the programmer who wrote the code, not the language. Sure, but all the Perl documentation I've ever seen (Camel Book, etc.) encourages Perl coders to concentrate on the result foremost, even at the expense of the process. Thinking about how to write well-structured code seems to be actively discouraged in the Perl community. Once it works, you're done. The Python community were among the first to point this out: Sure, there may be "more than one way to do it," as the Perl hackers like to say, but there's probably one good way to do it. If you don't even bother to Re: (Score:2) It depends on both. I mentioned specifically that you can write bad unreadable code in anything because it's true. But that's like saying you can kill people with a screwdriver. It's true, but it's an awful lot easier with a shotgun. Perl just seems to make it an awful lot easier to "be clever" and come up with something that nobody can understand later. I don't consider that a good thing. Re: (Score:3) Since it's an acronym, PERL is acceptable too. Perl = name of language. perl = name of compiler / interpreter PERL = acronym for Practical Extraction and Report Language Novices learning from whom...? (Score:2) Who taught them Perl? Where did they learn to call subroutines with an ampersand? A Perl 4 manual?
OK they're novices but even I didn't write loops using C-style loops as a novice Perl coder because I was reading that it was more readable to do for($a..$b) instead. Re:Novices learning from whom...? (Score:5, Informative) Yes it was Perl 4 [perl.org], which is one of the flaws in this study. Re: (Score:2) That's sort of the point. I'm not a good programmer, but when I code, I tend to use Perl, I focus on making the code legible and typically don't take on much with it. Perl works well with that, but there's plenty of folks that use Perl for things that it's not really intended for and don't have any idea what maintainable code should look like. Ultimately GIGO, you need more than a study like this to determine whether or not Perl is better than a randomly generated programming language. Ultimately, I woul Re:Novices learning from whom...? (Score:5, Informative) "we did not train participants on what each line of syntax actually did in the computer programs. Instead, participants attempted to derive the meaning of the computer code on their own." They were not trained. They were just shown code samples with no explanation. The code samples had 1-letter variable names and no comments. The Perl sample uses $_[0} for getting the first sub argument instead of shift, and "for ($i = $a; $i = $b; $i++)" to do a for loop instead of "foreach $i ($a .. $b)", so it is deliberately obfuscated Perl. Re: (Score:3) A shift would have been more intuitive? No, but perhaps a "my ($a,$b,$c) = @_;" would have been. Since I'm a long-time Perl programmer, I can't really speak for the newbie. But the use of the numerous $_[n]-lines is probably unclear. In any case, it is considered bad code, since it is both hard to read and error prone. Using a foreach, instead of the C-style for loop, is certainly easier and MUCH closer to the implementation used in Quorum and Randomo. So that, at least, was very poorly thought-out. And Randomo? Is it really random?
Or is it Re: (Score:3, Informative) It in fact has three disadvantages: it bypasses any prototype coercions, it passes @_ unmodified by default, and it's unidiomatic. All of these fencepost errors I've fixed argue otherwise. Not so fast.... (Score:5, Informative) Also... (Score:5, Insightful) While Perl has never had a particular reputation for clarity, the fact that our data shows that there is only a 55.2 % (1 - p) chance that Perl affords more accurate performance amongst novices than Randomo, a language that even we, as the designers, find excruciatingly difficult to understand, was very surprising. This is a complete misunderstanding of what a p value [wikipedia.org] means in statistical inference. The p value is not, and should not be interpreted as, the chance that "Perl affords more [or less] accurate performance." The p value is the chance, given that there is no difference, of obtaining a difference as large or larger. This is covered in first-year statistics. Re: (Score:2) ...of what a p value [wikipedia.org] means in statistical inference. The p value is not, and should not be interpreted as, [perl's divergence in] accurate performance." The p value is the chance, given that there is no difference, of obtaining a difference as large or larger. And they say English can't be obfuscated like programming languages. Re: (Score:3) If p is the chance, given no difference, of obtaining a result that is larger, what would you interpret (1-p) to mean? What are they trying to prove? (Score:3) Re: (Score:2) Re: (Score:2) That whitespace is a good way to delimit blocks. Re: (Score:2) What bad habits would one learn from, say, Python? Indentation as syntax. Re: (Score:2) Indentation is a good habit even if it's not necessary in a language's syntax. Re: (Score:2) Indentation is a good habit even if it's not necessary in a language's syntax. Too much work I'm lazy. Right click auto format puts in all the appropriate indents. Re: (Score:2) Exactly like a Python IDE.
Re: (Score:2) Sure but confusing it with syntax is a stupid idea. Re: (Score:3) First, a question: Why is it such a bad thing to use whitespace as syntax? Second, the act of indenting your code is not in itself a bad thing (when we talk "normal" languages like C), so why is it suddenly a bad habit when they pick it up in Python? In a hundred years we will see this as brilliant.. (Score:2) I keep reading the full paper (+points for publishing the whole thing!) and have yet to hit upon the definition of the word "accurate" they are using to measure the results. Apparently that is contained inside their previous paper with no direct link. On page 3 though, Perl is described as "A well-known commercial programming language". Really? C# is a commercial language, Perl Better is a strong word (Score:2) The participants didn't know the languages before. If anything, the study only proved that Perl has a steep learning curve. APL (Score:2) Didn't APL prove this a long time ago? Well written Perl (Score:4, Interesting) Re: (Score:2) Not each line. Only lines that need to be separated. There's no need for a semicolon if the next line is a closing curly, for example. Some insert them anyhow, and I can see the rationale for doing so. But unfortunately, that also encourages cut/paste programming, which is especially bad for perl. I remove superfluous semicolons precisely so I will have to stop and think before doing a cut/paste job. Southern Illinois University-Edwardsville (Score:2) long term benefits (Score:2) Perl and language (Score:3) Perl is a language, just like Dutch, Swedish, English, German and most of the others. In just about any language there is, to paraphrase a well-known Perl motto, more than one way to say something. That is in many ways a good thing, especially when it comes to using the language creatively as a novelist or poet or similar type of wordsmith does. 
It is true that this quality does tend to make Perl programs somewhat hard to grasp for the uninitiated in the programmer's style of writing. That is another quality the Perl language shares with those other languages mentioned above - did you understand all of Finnegans Wake the first time you read it? In other words, Perl is a writers' language. It is not an editors' language. Once you get into the right mood, Perl flows like your native language does. Done right, this can lead to great things. It can also lead to the sort of notes you made when attending those lectures you did not care about in the first place, and did not understand in the second. Use Perl for things you care about, and it will provide you the means to express yourself in just the right way (for you). If you give someone Lisp, (Score:3)
Olin university chapter/Software/PythonTutorial
From OLPC

Purpose

(This wiki is still under construction; please bear with the suckiness)

The Olin OLPC-software team includes a wide variety of people with a variety of skills. This page was created to give everyone the fundamentals of programming in Python and Pygame, in order to create activities for the XO. This tutorial only covers fundamentals, up to what is taught in Olin's Software Design course. For more complicated cool stuff, please refer to the online documentation. If you want to learn how to design cool things, take Software Design, website found here.

Structure

This tutorial contains five modules, including an extra module for anyone with no exposure to programming in general. Each day includes significantly more advanced material than the day before, so it's up to the reader whether to do all five days at once, or one per week. The writer also strongly recommends that the suggested projects at the end of each "day" not be skipped, and be done without time constraint, as there's no way you can learn a programming language without searching the documentation for every cool function you can think of. (Along that line of thought, Allen Downey's homework assignments are also sweet for extra practice.)

Downloading Python

This is the link for the Python download. Most Linux distributions should come with the latest Python version; in Windows, you have to dig around a bit.

Day 0: Fun in the Swamp

This extra module is intended for anyone who does not have much exposure to object-oriented programming (e.g. Java, Python, C++). If you just want to learn Python, you can skip this module. In this module, we will experiment with using other people's code to draw pretty pictures and get comfortable with dot notation.

Swampy

(Swampy is created by Allen Downey, and is used here with his permission)

Go to the Swampy install page and download Swampy.zip.
This is Allen Downey's sweet collection of various fun programs. Put the zip file in your folder of choice, and open the programs in your editor of choice. (IDLE comes with our laptops; in it you hit F5 to run.) Try running some of the programs suggested on the install page. Take some time to poke around the code if you're curious how things work. (Not all the programs will run in Windows, so if you are super curious, reboot into Linux and mess around with them there.)

TurtleWorld.py

TurtleWorld.py contains some fun features that let you actually use some of your cool programming skills.

- Run TurtleWorld.py. A GUI (graphical user interface) should pop up, with options like Print Canvas, Quit, Make Turtle, Clear, and Run File.
- Hit Run File. By default, the file loaded should be turtle_code.py. It should make a nice tree-like picture for you. This is Bob the Turtle's way of welcoming you to his world!
- When you're done with Tree-Bob, click Make Turtle. A table of controls should pop up, including 'bk', 'fd', 'lt', 'rt', 'pu', and 'pd', which stand for back, forward, left, right, pen up, and pen down (respectively). Mess with these controls until you get a feel for what they do. In the rather big text window, there should be code like

world.clear()
bob = Turtle(world)

- Open turtle_code.py and see if you can figure out what code Allen is using to make the little guy move and such. Test your theories in the box and click Run Code.
- When you get comfortable with the coding, open a file and write a script that moves the turtle around. See if you can do something fun, like write your name across the screen in different colors. Yay! You have just completed the first major Software Design homework!

Summary project

Create 4 functions that automate the process of writing the letters "H", "E", "L", and "O". Then write an all-encompassing function that writes "Hello" across the screen.
Let's start by discussing some things that make Python different from other languages. First off, Python is an object-oriented programming language. That is to say, like Java and C++, Python allows for features such as encapsulation, modularity, polymorphism, and inheritance. For more details on OOP, visit the wikipedia page. In addition, unlike Java and C, Python is an interpreted language, as opposed to compiled. In a compiled language, your code is first optimized and translated into an intermediate language before it is translated into machine code and run. In an interpreted language, the code is translated into byte code on the fly. What does this mean for you as a programmer? Most significantly, it allows you to test your code in the command line. Let's try a few examples. Using Python command line In Windows, you can go to Start >> run, and type 'cmd'. When the black box appears, type 'python'. Your left-hand-thingy should change from C:> to >>>. To exit, press Ctrl+Z. If for some reason, the command "python" doesn't work in the Windows command line, this probably means that you have to add it to your path. In order to do this, go to Control Panel --> System --> Advanced --> Environmental Variables. Under the Environmental Variables, select PATH, and hit Edit. DO NOT DELETE ANYTING! At the end of the list, append in the path address to your python file. It should be located in your C:, but check just to make sure. The program IDLE also comes with command line interface. You can open any file in Idle and press F5 - just running a program should pull up the command window. In Linux, just open the terminal and hit 'python'. Again, you will get >>>, and press Ctrl+D to exit. Ok! Let's start with the basics. Try some basic arithmetic. 
>>> 4+2
6
>>> 2*5
10
>>> 24253+25**234
13120851772591970218341729665918412362442230858130915344051855763613858400562
32050137162814324990843283325386797397736524945233295734089673870945050769957
55091452989029811052637862837845279926865242823532256606449119992115313321360
56422089272734917501243994926621269360992197686300019541030814629323231201851
74047946929931664878L

(Note how, in that last operation, ** stands for ^, since ^ is reserved for 'xor'. The trailing L just means Python stored the result as a long integer.)

Floating point division

Everything up until now should be somewhat intuitive. Let's try something else.

>>> 25/10
2

Hmm, that doesn't seem right, last time I checked. I wonder what could be wrong? Some of you may recognize this as the classic floating point division mistake. What's happening here is that Python has stored the values 25 and 10 as integers, and when it divides them, it can only return another integer. In this case, it throws the remainder away for you. In order to use floating point division for real, you have to actually use a decimal point.

>>> 25./10
2.5
>>> 25/10.
2.5

As soon as that decimal point is there, the float (a number that isn't an integer) is declared, and you can continue work as usual. Unlike many other languages, Python doesn't have doubles; it calls them all floats. But this brings up a fundamental difference between Matlab, a nice simulation environment, and Python, a programming language. Matlab is slow and clunky because it does many things for you, like convert integers to floats when necessary. Python is fast and flexible, but requires you to kind of know what you're doing.

Strings

Let's try something Matlab is not so good at dealing with: strings. Try some of the following stuff:

>>> blubbermonkeys
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in <module>
    blubbermonkeys
NameError: name 'blubbermonkeys' is not defined

Gee, I wonder why that didn't work. Wait, hang on. In most programming languages, strings aren't just whatever you type in.
They are actually a special type of object. If I type in 'blubbermonkeys' straight up, I'm actually asking Python "What's a blubbermonkey?" And since you've never previously defined a blubbermonkey, then, well, Python doesn't know. So instead, we use quotes to designate strings.

>>> "Blubbermonkeys"
'Blubbermonkeys'
>>> "I love OLPC!"
'I love OLPC!'

Yay! That worked! Now let's try more interesting things.

>>> 'Tank' + 'Nikki'
'TankNikki'

We fused together Tank and Nikki! Sweet!

For any of you who have formatted an output file from scratch, you know that one of the coolest things in the world is escape characters. In Python, as in many other languages, you use a '\' to indicate that you are 'escaping' from string mode into cool formatting world. What can you do with escape characters? Well, you can create a new line ('\n'), for one, or use tabs ('\t'). Let's attempt this.

>>> 'Tank\n Nikki\n Mel\n SJ'
'Tank\n Nikki\n Mel\n SJ'

Ewww... Ok, so just like in Matlab, where you can technically display stuff by not putting in that last semicolon, it isn't the most elegant way of doing it. In Matlab, we use 'disp' to display things intentionally. In Python, we use 'print'.

>>> print 'Tank\n Nikki\n Mel\n SJ'
Tank
 Nikki
 Mel
 SJ
>>> print 'Tank \t is a sophomore \n Nikki \t is a junior \n Mel \t is a grad \n SJ \t is a real adult'
Tank 	 is a sophomore
 Nikki 	 is a junior
 Mel 	 is a grad
 SJ 	 is a real adult

Sweet! That looks a ton better.

Variables

Hey, let's define a variable! For you Matlab lovers out there, it's just like Matlab. For you Java and C fans out there, you will love Python, because it's much easier. You don't need to explicitly state the type of a variable before declaring it; Python just kind of knows. Well, sort of. Let's look just at integers and strings.

>>> a = 3
>>> b = 4
>>> c = a**2 + b**2
>>> print c
25
>>> name = 'Yifan'
>>> print name + ' is Great!'
Yifan is Great!

As you are quickly discerning, anything can be defined as a variable.
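That point is worth a tiny stand-alone demonstration. The script below is a generic sketch (it isn't taken from the tutorial's files, and it's written so it runs under both Python 2 and Python 3):

```python
# A name in Python is just a label: it can point to any kind of object,
# and you can re-point it at a different kind of object at any time.
x = 3                     # x labels an integer
x = x * 2.5               # now x labels a float (7.5)
x = "seven and a half"    # now x labels a string

# type() tells you what kind of object a name currently points to;
# Python 2 shows <type 'str'>, Python 3 shows <class 'str'>.
print(type(x))
```

For the Matlab folks: this is just like how a Matlab variable can hold a scalar one minute and a string the next. For the Java and C folks: notice there is not a single type declaration anywhere.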
So far, all we really know how to play with are strings and numbers, but when we get to making classes, we'll see that we can go way beyond that.

Datatypes

At this point it is most prudent to look at some datatypes. As I have hinted already, Python has floats, integers, and strings (remember, no doubles). The other major datatypes are tuples, lists, and dictionaries.

Lists, as the name suggests, are just collections of stuff in order, and are usually represented in brackets. There are many kinds of lists, such as:

>>> mylist = [1,2,3,4,5]
>>> print mylist
[1, 2, 3, 4, 5]
>>> mylist = ['Elsa','Colin','Xy']
>>> print mylist
['Elsa', 'Colin', 'Xy']

Neat. I wonder if I can add to this list.

>>> mylist[4] = 'Aaron'
Traceback (most recent call last):
  File "<pyshell#8>", line 1, in <module>
    mylist[4] = 'Aaron'
IndexError: list assignment index out of range

Aww, crap! I guess that means I'll just have to replace Xy, then.

>>> mylist[3] = 'Aaron'
Traceback (most recent call last):
  File "<pyshell#9>", line 1, in <module>
    mylist[3] = 'Aaron'
IndexError: list assignment index out of range

Wait, what? This makes no sense now! My list is 3 elements long, I'm replacing the third element... Ah, but grasshopper, in the real world, three-element lists have indices 0, 1, 2, not 1, 2, 3. So, if we try again:

>>> mylist[2] = 'Aaron'
>>> print mylist
['Elsa', 'Colin', 'Aaron']

Sweet! I replaced Xy! And, for future reference, if I hadn't wanted to replace Xy, I could have added Aaron to the (original) list as follows:

>>> mylist.append('Aaron')
>>> print mylist
['Elsa', 'Colin', 'Xy', 'Aaron']

Neat. I can make a list and replace any element of it I want. What else can I do? Well, I can find out how long a list is:

>>> len(mylist)
4

Or, even better...

>>> for item in mylist:
	print item + ' is AWESOME!'

Elsa is AWESOME!
Colin is AWESOME!
Xy is AWESOME!
Aaron is AWESOME!

Hey, look at that! Lists allow you to use for loops! If it isn't evident yet, the for loop structure works by iterating through a list.
Actually, this is exactly what you do when you use a for loop in MatLab; when you say

for x = 1:10
    <do stuff>

what you actually are doing is instantiating a vector x including the integer values 1-10, and looping through each element in x. In Python, it's exactly the same, except the syntax is more like

for x in [1,2,3,4,5,6,7,8,9,10]:
    <do stuff>

(note the colon) Or, for shorthand,

for x in range(10):
    <do stuff>

(Note to audience: actually I am lying to you. range(10) returns a list of [0,1,2,3,4,5,6,7,8,9], not [1,2,3,4,5,6,7,8,9,10]. In the real world, everything starts with 0; in MatLab, they start with 1. How odd.)

Tuples

Tuples are very much like lists, except they aren't modifiable. They are instantiated using '()', ie:

>>> x = (1,2,3)

So what exactly are some differences between lists and tuples? Let's try some simple experiments.

>>> x[1] = 3

Traceback (most recent call last):
  File "<pyshell#36>", line 1, in <module>
    x[1] = 3
TypeError: 'tuple' object does not support item assignment

How sad. As it turns out, tuples are useful for things that you want one way forever, like the size of your window in a GUI. You should never try to change a tuple value. If you really, really, really want to, though, you can convert it to a list, as follows:

>>> a = (3,1,4)
>>> list(a)
[3, 1, 4]

What about for loops? You can, thankfully, use tuples in looping, ie:

>>> for n in range(len(x)):
	print 'I have', n, 'apples'

I have 0 apples
I have 1 apples
I have 2 apples

So tuples aren't that evil.

Dictionaries

Dictionaries are kind of sort of like lists, except that they are unordered, and like actual dictionaries, come in a key-value pair. That is to say they are mostly used for lookup. You indicate dictionaries using dict([(<key>,<value>), (<key>,<value>)]) or using curly brackets '{}'.

>>> dict([('Brian', 'Mod Con'), ('Mark', 'ICB'), ('Ben', 'Design Nature')])
{'Brian': 'Mod Con', 'Ben': 'Design Nature', 'Mark': 'ICB'}

Note how the order is not preserved, but the pairing is.
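Since the pairing is what matters, you look things up by key rather than by position. A small Python 3 sketch, with hypothetical names:

```python
# A dictionary maps keys to values; lookup is by key, not by index.
courses = {'Brian': 'Mod Con', 'Mark': 'ICB', 'Ben': 'Design Nature'}
print(courses['Mark'])              # -> ICB

# The same bracket syntax adds or overwrites an entry
courses['Xy'] = 'Python'
print(len(courses))                 # -> 4

# .get() returns a default instead of raising a KeyError
print(courses.get('Nobody', 'undeclared'))   # -> undeclared
```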
To iterate through a dictionary, you can use d.keys() to go through the keys, and d.values() to go through the values. To go through both, you can use d.items() to get a tuple pair. Examples:

>>> d = dict([('Brian', 'Mod Con'), ('Mark', 'ICB'), ('Ben', 'Design Nature')])
>>> for i in d.keys():
	print 'I love ' + i

I love Brian
I love Ben
I love Mark
>>> for i in d.values():
	print 'I love ' + i

I love Mod Con
I love Design Nature
I love ICB
>>> for k,v in d.items():
	print 'I love ' + k + ' and ' + v

I love Brian and Mod Con
I love Ben and Design Nature
I love Mark and ICB

Sweet! That was a ton of stuff for day 1. Let's take a deep breath and dive into the summary project!

Summary project: Organizing list of senators

Download the following file and put it in your directory of choice: List of Congress - THESE DON'T EXIST ANYMORE

In the spirit of the election, I have copy-pasted a list of US senators into listOfCongress.txt. If you open it you can see that I made no efforts whatsoever to make it neat - the original site organized it by alphabet, in two columns separated by a tab. At the end of each letter, there's an annoying '^ return to top' text.

- You are an announcer at some special meeting. It is your job to announce the name of each congressperson, alphabetically by last name, which state, and which district this person is from. Sadly, you get sweaty palms and tend to mess up when you get nervous, so you need to first write out word for word exactly what you would say before you say it. Arrange the text into such a format. Use this file to get started.
- Whew! That was fun! Now let's try something more interesting!
Organize the information in the following format:
- Each person is a key-value pair in a dictionary
- The keys are the "first, last" name of the congressperson
- The values are length-2 tuples, where the first value is the state that the congressperson is from, and the second is the district number
- Make a trivia game out of this with your own scoring system. Refer to this file again for help on command-line inputs.

Hint: You may want to look here for string manipulation help.

Solutions

(As feedback, it would be superly awesome if you were to upload your solution here somewhere, so I and others can see what you got out of this tutorial.)

Day 2: Classes, objects, functions

Encapsulation, modularity, polymorphism, and inheritance

Up until now, we've really only been working with cute tricks. By this point, however, we should be comfortable with the basic datatypes of Python, and capable of writing simple scripts to do just about anything tedious. But for large-scale programs, we need to take advantage of the higher-level tools Python offers us. As mentioned earlier, Python is unique in that it is an object-oriented programming language, meaning that it encompasses features like encapsulation, modularity, polymorphism, and inheritance (taken from the Wikipedia page).

Encapsulation is the ability to make things invisible. It would be very painful if we had to program everything in the bits and bytes of machine language that our computer actually uses to make things happen. Instead, we give our thanks to people who write assemblers and compilers so that we as programmers can write at a higher level, while the more nitty-gritty stuff happens invisibly to us.

Modularity gives us the ability to divide tasks into modules. From an outsider's view, a module is only a black box that takes in parameters and returns answers.
This makes dividing the work up easier; if you and I write a piece of code together, then I don't really care how you write your part as long as I know how I can use your part in my part, or vice versa.

Polymorphism is when an object can appear and be used as another object. Pretend that I had a function that performs a square root. I then instantiate two variables, a = 1 and b = 2, and then perform sqrt(a) and sqrt(b); the square root function views a and b as the same kind of thing. In this way, two objects of the same type can be used in the exact same way.

Finally, inheritance is, just as it sounds, the ability of an object to take on the attributes of another object. Say I were trying to program a whale. This could get very tedious. Thankfully, my friend has already programmed mammals in general. I can then inherit from the 'mammal' program in defining 'whale'.

Whew. That was a lot of theory. Let's see how these ideas apply to things like classes, objects, and functions.

Functions

Functions allow features like encapsulation and modularity to work. Just as you might expect, it's a way to take code and encapsulate it in a block that takes in parameters and returns outputs. The format for functions in Python is as follows:

def <function name> (<param 1>, <param 2>):
    <insert code here>
    return <output>

Say I wanted to write a function that adds two numbers, a and b. I could do something like

def add(a,b):
    c = a + b
    return c

or

def add(a,b):
    return a + b

Wow. That was rather roundabout. What if we programmed an entire calculator instead?
Maybe something more like:

def calculate(a,b,op):
    if op == '+':
        return a + b
    elif op == '-':
        return a - b
    elif op == 'x':
        return a*b
    elif op == '/':
        return (a+0.0)/b

And I can nest that function into another function and make a command-line calculator:

def calculator():
    a = input("Type the equation \n")
    b = a.split()
    print calculate(float(b[0]),float(b[2]),b[1])

Note that this function doesn't take in any parameters nor return any values. It is, in a sense, the 'main' function of this example. I can use that function now to do some fun things.

while(1):
    calculator()

If I run this, I get something that looks like

Type the equation
'8 + 2'
10.0
Type the equation
'5 - 1'
4.0
Type the equation
'2 x 5'
10.0
Type the equation
'2 / 8'
0.25

Congratulations, kids! You have created your first calculator! Interestingly enough, the first computer created by Intel (the 4004 chip) was basically a glorified calculator. And we just did it in 9 lines of code! Oh the power of encapsulation!

Classes and objects

One way to think of classes and objects is to think of it like taxonomy. Say that all programming things are living things. These living things are then divided into animals, plants, fungi, algae, and bacteria. The animals are then further divided into things with spines and things without spines. And so on, you get the picture.

It is easy to divide programs in the same way. Say that I wanted to write a program that would count my change for me. I might make a class, 'Coins'. All coins are metal, and most are silver, and hopefully they all fit in your pocket.

class Coins():
    madeOf = 'metal'
    color = 'silver'
    fitInPocket = True

I can then make several more specific classes of special coins, all of which inherit from "Coins".

class Quarters(Coins):
    value = 0.25

class Dimes(Coins):
    value = 0.1

class Nickles(Coins):
    value = 0.05

class Pennies(Coins):
    value = 0.01
    color = 'bronze'

Note that Pennies are kind of a mutation when it comes to color, so I redefined that.
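As a quick sanity check, here is a Python 3 sketch of this inheritance in action (in Python 3 you can also write the base class header as class Coins: with no parentheses):

```python
# Pennies inherits everything from Coins, then overrides color.
class Coins:
    madeOf = 'metal'
    color = 'silver'
    fitInPocket = True

class Quarters(Coins):
    value = 0.25

class Pennies(Coins):
    value = 0.01
    color = 'bronze'          # the one mutant attribute

q = Quarters()
p = Pennies()
print(q.color)                # -> silver  (inherited from Coins)
print(p.color)                # -> bronze  (overridden in Pennies)
print(p.madeOf)               # -> metal   (still inherited)
```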
On the flip side of things, I might want to do the same with bills. In this case, I can make more classes:

class Bills():
    madeOf = 'paper'
    color = 'green'
    fitInPocket = True

class OneDollar(Bills):
    value = 1.0

class FiveDollar(Bills):
    value = 5.0

class TenDollar(Bills):
    value = 10.0

class TwentyDollar(Bills):
    value = 20.0

Ok. So what? Well, now I can make things happen. Say I had 23 pennies, 25 nickles, a quarter, and a five dollar bill. I might calculate the total change as follows.

c = []
for n in range(23):
    p = Pennies()
    c.append(p)
for n in range(25):
    n = Nickles()
    c.append(n)
q = Quarters()
c.append(q)
f = FiveDollar()
c.append(f)

totalChange = 0
for n in c:
    totalChange += n.value
print totalChange

And if I run this, I get

6.73

Ok. Now you're thinking, "I can think of at least five hundred better ways to calculate this." You may be right, but what if, instead of a handful of change, you had a jar of change? Or, what if you were a store clerk and wanted to know how much stuff is in the register? And, what if instead of knowing how many coins you had, you had a program that could identify each coin by its weight (another attribute!), and count it up for you? Then this isn't such a bad way to approach it.

The bottom line is this: scripting is great for cute, fun, shortcut things, but for any real-sized program, you're going to need to use functions, classes, and objects in order to organize your work in some robust form.

Summary project (under construction)

Do this Software Design homework 07

Day 3: Files

Today we embark upon the fun adventure of dealing with files! After this module, you should be able to make permanent programs, ie programs that can read from a file and write to a file to save their state. Think of the possibilities! There's really not much for me to write about when it comes to files.
The theory is exactly the same as before: remember to make lots of functions, classes, and objects and to use all kinds of datatypes to make your code robust and easy to decode. Most of the goods on dealing with files can be found here. Here are some important examples:

Opening and reading from a file

f = open("hello.txt")
try:
    for line in f:
        print line
finally:
    f.close()

Or, according to the site above, if you have Python 2.5 or higher,

from __future__ import with_statement

with open("hello.txt") as f:
    for line in f:
        print line

You can also write to a file using the 'write' command. Really, that's all you need to know.

Summary project

Again, we steal Allen Downey's homework assignment: Markov Analysis

Day 4: GUIs

What are GUIs

So you now know how to make your computer do menial tasks for you. Clearly, though, that's not how the guys at Microsoft do it; when was the last time you had to use the Windows command line? (As in you had no other alternative?) If you haven't guessed already, GUIs stand for graphical user interfaces. These are what most software companies use to make their programs more appealing for the general public, who may be more comfortable with clicking a little thing that says "Would you like to save now, pretty please?" than just typing "save file" into a command line.

In this module, we'll be working with GUIs a bit. There are different packages you can use. It just so happens that Tkinter comes with most Python installations, and is what Allen Downey usually uses in his educational programs. I, personally, however, am more familiar with wxPython, so I'm going to show you some basic ways of getting comfortable with that.

Packing

The philosophy behind most GUI programming is packing stuff on your screen. There are many different ways to do this. We can, for example, stack them in neat little grids like we stack eggs or fruit (Grid Sizers).
Or, we could just shove them on top of each other in any way that fits, like we stack crap in our dorm rooms (Box Sizers). There are many different packing mechanisms, or sizers, some better than others in different situations.

So what exactly is it that you're packing? In the fruit industry, we are packing oranges. On moving day, we pack coffee cups, T-shirts, unused bars of soap, etc. In the GUI world, we pack widgets: scroll bars, menu bars, buttons, etc.

Some examples of packing in wxPython:

(Both images taken from [[1]], a very nice page to learn wxPython)

As you can see, GUI packing is a bit more layered than typical packing. If I'm just trying to store my books away, I might just shove them in a box. But in GUI packing, I'm probably going to shove my BoxOfBooks into another BoxOfStuff, perhaps adjacent to a BoxOfClothing that contains three boxes of its own for each type of clothing. Once you get comfortable with this idea of layered boxes, however, the rest is just syntax.

TKinter vs wxPython

The two major packages that support GUI building in Python are TKinter and wxPython. Although TKinter is a more established package that usually comes with your Python install, more and more people are using and developing wxPython. At any rate, as mentioned before, because I have personally used wxPython far more than TKinter, that's the package we're going to talk about. However, for your own benefit, there is a nice pros and cons list provided [here]. (Yes, it is on wxPython's website, and therefore is skewed, but pretty accurate nonetheless)

Existing Tutorials

There are several nice tutorials existing that go through how to program in wxPython, so I will start out by providing them. First, the [Getting Started] page is very good, and will walk you through the basic packing structure. You can also find [an older tutorial here].

Downloads

To start, first download and install wxPython.
Next, I'm going to suggest something you might kill me for, but also download and install wxGlade. Glade products are basically GUI-based programs for making GUIs. Although they are quite atrocious to use to make a GUI of any reasonable scale, I find it a sweet tool to use when just trying to figure out how to code a piece of something.

Hello World

To get started, and just to check that you installed the right version of wxPython, try running the following code:

import wx
app = wx.PySimpleApp()
frame = wx.Frame(None, wx.ID_ANY, "Hello World")
frame.Show(True)
app.MainLoop()

You should get a nice frame window that looks like:

Great! Now all we have to do is add some widgets!

Using Glade

First off: wxGlade is dumb. But just like pet rocks are dumb, if you're trying to break a window, they can still be useful. Remember this when you use wxGlade and realize just how dumb it is. It's not there to be your user-friendly tool to make your life easy, but if you understand it enough, you can use it to bypass a lot of confusing API.

The way that we're going to work through this is to first use Glade once to see how you would use it to build GUIs. Then, we'll look through the compiled code and see where Glade is dumb. An easier way is to first build the framework of your code by using some of the examples provided in the [Getting Started tutorial]. Then, when you need a particular widget, use Glade to see what the code and parameters for it are, a particularly useful method when you know what a widget looks like but not what it's called. When you get good at this, you will no longer need Glade, and can just look up the API for each widget directly.

Glade GUI-for-GUIs walkthrough

After you've installed wxGlade, boot it up. This is what it should look like.

The best way to start out is to make a frame. Click on the upmost leftmost icon on the panel, which should give you a frame with a free 1x1 box sizer inside. Sizers in general are the GUI's way of packing your stuff.
When you open any GUI, you will see that it's composed of buttons, pictures, text, toolbars, etc. All of these are lovingly called "widgets" in GUI world. These widgets go into sizers. There are many different kinds of sizers. GridSizers will put your stuff in an n by n grid. BoxSizers only allow you to choose the number of rows or columns. Any sizer will allow you to place another sizer within a cell, and for that reason, many people use BoxSizer, despite its limited ability, because it's easier to think of it in an embedded design paradigm.

By inserting a frame, you have a free box sizer. Note that if you click the third icon in the first row, you will get a free panel. If you want to stick anything in the panel, you need to insert a box sizer. But that's not what we're going to do in this walkthrough. Instead, we're going to play with the box sizer.

Right click the sizer in the Tree window, and click "Add slot". Your sizer is now split into two vertical rows. If, instead, you had wanted horizontal columns, you could delete the sizer, insert a new sizer, and set it as horizontal instead of vertical. But let's not do that in this walkthrough. Instead, let's experiment with placing widgets into sizers.

Select any widget and click on the upper cell. You can see that the cell size hugs the widget. You will also notice that some widgets, like menubars and toolbars, don't go into the cells, but instead are floating. That isn't the way they should appear on the GUI, but these two widgets have become so standardized that they don't need to go into the sizers.

You can also insert a widget in the bottom cell, but if you do that, you're done with the frame. So instead, we're going to embed a box sizer by selecting the middle icon in the bottom row and clicking the bottom empty cell. Keep the settings on wx.Horizontal, and select 2 for the number of slots. Great! Now fill these two cells with your widget of choice.

Sweet. Now let's look at some code!
In the Tree window, click Application. In the Properties window, scroll down. Make sure it's set for Language: Python, and compatibility: the version of Python you are now running. As you can see, on my version of Glade, it annoyingly only works with the even-number versions. In this case, yes, you do have to download a compatible version of Python. Click the "..." button and find the place you want to store the code. Then click "Generate Code" to get the goods. Sweet!

Cleaning it up

Whew! That was a lot of work! After all that effort, this is the code that was generated (the body of MyFrame1 depends on which widgets you picked, so it is elided with ... below):

#!/usr/bin/env python
# -*- coding: iso-8859-15 -*-
# generated by wxGlade 0.6.3 on Wed Jan 28 14:54:37 2009

import wx

# begin wxGlade: extracode
# end wxGlade

class MyFrame1(wx.Frame):
    ...

    def __do_layout(self):
        # begin wxGlade: MyFrame1.__do_layout
        ...
        sizer_1.Add(sizer_2, 1, wx.EXPAND, 0)
        self.SetSizer(sizer_1)
        sizer_1.Fit(self)
        self.Layout()
        # end wxGlade

# end of class MyFrame1

class MyMenuBar(wx.MenuBar):
    def __init__(self, *args, **kwds):
        # begin wxGlade: MyMenuBar.__init__
        wx.MenuBar.__init__(self, *args, **kwds)
        self.__set_properties()
        self.__do_layout()
        # end wxGlade

    def __set_properties(self):
        # begin wxGlade: MyMenuBar.__set_properties
        pass
        # end wxGlade

    def __do_layout(self):
        # begin wxGlade: MyMenuBar.__do_layout
        pass
        # end wxGlade

# end of class MyMenuBar

if __name__ == "__main__":
    app = wx.PySimpleApp(0)
    wx.InitAllImageHandlers()
    frame_1 = (None, -1, "")
    app.SetTopWindow(frame_1)
    frame_1.Show()
    app.MainLoop()

Try running the code. You will probably see a multitude of errors. Sadly, our pet rock, aka Glade, is really dumb and sucky in creating code that is optimal, and at times, correct. Therefore, we have to do some cleaning up. I went through and cleaned up this code a bit to give you an idea of what needs to be changed.

First off, there is a commented header generated that allows some compatibility with wxGlade:

#!/usr/bin/env python
# -*- coding: iso-8859-15 -*-
# generated by wxGlade 0.6.3 on Wed Jan 28 14:54:37 2009

Kill it.
Next, scroll to the very end and look at the very last block:

if __name__ == "__main__":
    app = wx.PySimpleApp(0)
    wx.InitAllImageHandlers()
    frame_1 = (None, -1, "")
    app.SetTopWindow(frame_1)
    frame_1.Show()
    app.MainLoop()

which, once fixed, should read:

if __name__ == "__main__":
    app = wx.App()
    wx.InitAllImageHandlers()
    frame_1 = MyFrame1(None, -1, "")
    app.SetTopWindow(frame_1)
    frame_1.Show()
    app.MainLoop()

There are a few things downright wrong with the generated code. First of all, by assigning frame_1 to a tuple, you don't really get a frame. So change the line

frame_1 = (None, -1, "")

to

frame_1 = MyFrame1(None, -1, "")

Now, instead of initializing a tuple, you have initialized a wxFrame with three arguments. At this point, your script should run. You should see a rather hideous GUI box with little widgets that overlay on top of each other.

The other thing that wxGlade isn't supporting is packing each sizer separately. In many ways, it's like wxGlade is shoving all of your valuables in a box with some cardboard, without really sealing up each individual box neatly. That's your job. Go to the __do_layout(self) function of the MyFrame class. Here, you will see the badly packed sizers. You will notice that, although there is a line for

sizer_1.Fit(self)

there is no equivalent for sizer_2. So, after everything has been added into sizer_2, add another line:

sizer_2.Fit(self)

Now, you should get a less hideous GUI that looks something more like:

Not the prettiest, but it is what you want.
A copy of the final code should look something like the following (again, the widget-specific body of MyFrame1 is elided with ...):

import wx

class MyFrame1(wx.Frame):
    def __init__(self, *args, **kwds):
        ...

    def __do_layout(self):
        ...
        sizer_2.Fit(self)
        sizer_1.Add(sizer_2, 1, wx.EXPAND, 0)
        self.SetSizer(sizer_1)
        sizer_1.Fit(self)
        self.Layout()

class MyMenuBar(wx.MenuBar):
    def __init__(self, *args, **kwds):
        wx.MenuBar.__init__(self, *args, **kwds)
        self.__set_properties()
        self.__do_layout()

    def __set_properties(self):
        pass

    def __do_layout(self):
        pass

if __name__ == "__main__":
    app = wx.PySimpleApp()
    wx.InitAllImageHandlers()
    frame_1 = MyFrame1(None, -1, "")
    app.SetTopWindow(frame_1)
    frame_1.Show()
    app.MainLoop()

What does it all look like?

Summary Project

Make a GUI for any of the previous programs you have made. These could include:
- Trivia
- Calculator
- Poker
- Markov Analysis

or anything else you can dream of!

Day 5: PyGame

By this point, you should be pretty comfortable with the general Python language. The next step in OLPC programming is to learn how to actually make a game. In this section, we will go over how to use pygame, and then we can make activities!

Alternatives to PyGame

The author of this wiki chose to give a tutorial on PyGame because
- it is one of the better documented game development packages for Python
- it is capable of many things
- it comes with the Sugar image

However, we do recognize that many people develop with other packages and in other ways, ie PyGTK, PyCairo. The current OLPC-Nepal deployment develops their activities using Flash. Comments about the merits of Flash aside, it is often the case that the immediacy of an activity outranks the need for high-quality activities, so it is often important to choose the package that best fits the needs.

Downloads

Go here for Pygame downloading options. Be sure that it is compatible with your current version of Python. There should be packages available for Windows, Mac, and all Linux distributions.
Existing Pygame Tutorials

We won't discuss much about the syntax idiosyncrasies, since there are many, many, MANY fantastic existing pygame tutorials that will do that for you. Many of them can be found at the Pygame documentation page, with rather famous ones like Line by Line Pummel-the-Chimp. Also, there is a nice collection of Pygame code examples, which you can go through and mess with, and see how you affect the program.

Using PyGame

There are many ways of using PyGame to further your gaming needs. We will discuss two distinct paradigms that we've observed, and hope that it helps you plan out your PyGame project.

Pygame: an action-filled adventure!

In programming games that require a lot of action and collision detection, most people use PyGame Sprites. Fundamentally, a sprite is
- a rect object (for position info)
- an image object

PyGame then allows for all these sprites to be placed in a group. While the game runs, the program iterates through these sprites to constantly update their position, color, and size, and to re-render them. Sprite objects also come with collision control; these features allow Sprites to commonly be used when the game requires high action, ie shooting, running around, etc.

Pygame: a world of puzzles and drawing

Alternatively, your program may not require so much collision control and high-speed rendering, but may require better manipulation of shapes. In this paradigm, it may be easier to use the pygame.draw functions, which make it easy to draw all sorts of shapes and colors. However, you can't exactly move a drawing; if you want an object to move, you have to erase and redraw it. This may be fine for slow-rendering situations like word games, puzzles, etc, but may not be suitable for action-filled games.

OLPC's Pygame Wrapper

Now you have a wonderfulicious Python program. Great! What the crap are you going to do with it though? Hey, I have an idea!
How about we bundle it into a .xo bundle, and then stick it onto one of those cute laptops and see if it works? Fantastic!

Making the bundle

Now, there are several ways we can go about this. If it is your intent to really get to know how to make .xo bundles, you may find this rather wonderful Sugar activity tutorial handy. If, instead, you're lazy and you just want your activity to work, you can download a Pygame Wrapper that will essentially take care of all of that for you.

Testing the bundle

Again, this is a problem that can be approached in many ways. Let's try some interesting things.

If you already have an XO, put your program onto a USB stick and stick it into the XO. Try to first run the Python program straight up (eg, Terminal --> python <game>.py). If that works, you may have to run a setup and reboot, but then your game should be playable.

If you don't, then you can try a number of things. If you have tons of space on your computer, and your game isn't terribly big and feature-filled, you can download an emulator or use jhbuild. If you want to try something super-awesome and you happen to have a 2G or bigger USB lying around unused, you can try your game on a Sugar On A Stick, an external boot partition.

If none of these work for you, please leave a note here with a description of your problems. We learn the most by seeing where the bugs are, so please help us!

Summary project

Go to neopets.com. Find your favorite game and re-make it. Don't steal their images or publicize it without their permission! Now make it a .xo bundle. Now try it on an XO. Fun and games! Woo!
http://wiki.laptop.org/go/Olin_university_chapter/Software/PythonTutorial
Random thoughts on all things .NET...

NAB 2009 rocked!! It was exhausting working the booth, standing on my feet all day, talking to so many people over the course of four days, but it was so exciting at the same time. We showed many cool things at the Microsoft booth, including Silverlight 3, IIS 7 Smooth Streaming, our Fast search engine, our Advertising solutions, a ton of solutions from our partners, and last but not the least - some very interesting open source starter kits that my team built, and that I will blog about in coming posts. But I am especially excited about a specific application I was directly involved in:

- It is a great proof point that Silverlight is not just for consumer-facing scenarios on the web, but is equally suited for more rigorous applications in broadcast media workflow.
- It is one of the first production applications to use the h.264 playback capabilities being enabled in Silverlight 3. MTV creates all their proxies as QuickTime .mov files, with the essence encoded using h.264 compression, either in SD or in HD resolutions. Since QT is essentially a variant of the MP4 container structure, and SL 3 supports parsing MP4 and decoding h.264 natively, we could play back all of MTV's QT content natively in SL3, without any further transcoding. This was a huge win-win for all involved, both from a time and cost perspective.

I provided some of the necessary technology guidance to a joint team from Microsoft, Vertigo and MTV in implementing the solution. You can read more about the general press release from Viacom CIO Joe Simon here and about the specific case study here.

Recently I needed to bind some embedded images to data templates. If you look at the Image type in Silverlight 2, you will see that it exposes a Source property of type ImageSource that can be set to the URI of an image either relative to the XAP, or in its absolute form.
If the URI is valid, the image stream is opened and read at runtime, an underlying BitmapImage is created around the stream and is then fed into Image.Source (BitmapImage derives from ImageSource). However this is all great when the images are on a web site. But what about images that are packaged with my XAP? It is actually not very hard to bind those either.

Assuming you have a XAP assembly named Foo (with Foo being the default namespace), and say your image, Bar.png, is stored in a project folder named Images; once you compile the assembly, the image is embedded into the assembly as a resource named Foo.Images.Bar.png. To get this done through Visual Studio, mark your image as an Embedded Resource. However, to access the embedded image and create a BitmapImage out of it, you will have to write some code like this:

BitmapImage bim = new BitmapImage();
bim.SetSource(this.GetType().Assembly.GetManifestResourceStream("Foo.Images.Bar.png"));

This BitmapImage instance can then be bound directly to the Image.Source property using a traditional binding expression. However, you may already know all of this, and in any case I wanted to make this a little more general purpose. I did not quite like the fact that the string literal representing the image (in the above case "Foo.Images.Bar.png") could not be directly fed to the Image element in my XAML like a URI. I had to create some sort of artificial property in some type that would instead expose a BitmapImage instance, and then bind that property to my Image.Source. In effect, what I wanted to do was something like this:

<Image Source="Foo.Images.Bar.png" />

but sadly that would not work. So instead I took the approach of using a value converter.
I wrote a converter that looks like this:

public class ImageResourceNameToBitmapImageConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        //check to see that the parameter types are conformant
        if (!(value is string) || targetType != typeof(ImageSource))
            return null;
        BitmapImage MenuItemImage = new BitmapImage();
        try
        {
            MenuItemImage.SetSource(this.GetType().Assembly.
                GetManifestResourceStream(value as string));
        }
        catch (Exception Ex)
        {
            return null;
        }
        return MenuItemImage;
    }

    public object ConvertBack(object value, Type targetType, object parameter,
        System.Globalization.CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}

Now in my XAML I can declare the converter:

<local:ImageResourceNameToBitmapImageConverter x:

and then an Image element like this:

<Image Source="{Binding Converter={StaticResource REF_ImageResourceNameToBitmapImageConverter}}"
       DataContext="Foo.Images.Bar.png" />

This does the trick. Since the Binding does not specify a path, the converter gets the DataContext (which is the qualified image resource name string) passed in as the value parameter, and all I do in the converter is use the same code as before to create and pass out the BitmapImage instance. Came in pretty handy, and saved me the grief of creating unnecessary CLR properties for each necessary image binding.

I have been inconspicuously absent on blogosphere for a while now. For my handful of readers - I hope you missed me :-). I am back to blogging and hope to put out some more interesting stuff for you guys in the coming weeks.

First for exciting news. We are close, very close to finishing up Silverlight 2. The product team just made the first release candidate available, and you can download it here. You can read more about the release in Scott Guthrie's blog post here. We also just announced the first service pack to Expression Encoder 2.
Video is very important to my work, and there are some tremendous changes included in the SP: H.264/AAC output, finally!!! Read more about it in Ben's blog post here and James' post here.

I have also been busy finishing my book on Silverlight 2, which I have been coauthoring with my teammate Rob Cameron. You can pre-order the book here. We are in the final phases of editing, but Apress is also planning to put out an e-book version in their Alpha book program that allows you to pre-purchase the book and progressively read chapters as they are being edited. You can find more details at the Apress web site. Rob and I are really excited about the book, and for those of you who decide to give it a try, we hope you enjoy reading it as much as we enjoyed writing it. It was hard writing a book targeting the RTM version of a technology that was still being built while we wrote. But we are pretty proud of the end product, and if you are planning to work with Silverlight 2, we are confident you will find the book useful. Until the next post…

Some folks pointed out that the code sample attachment in my previous post on the Async Multiple File Upload Control for Silverlight 2 was broken. I have fixed it in the post and have also provided a link below to the code on my Windows Live SkyDrive. Sorry for the inconvenience.

Note that upping the boundary may be a lucrative option for very large file uploads to achieve faster uploads, but this is client memory, and you can soon get OutOfMemory exceptions if you are not careful.

Also, the service now puts all uploaded files in a subfolder called Assets under the service directory. This is hardwired in the service code. Change it as you see fit, but do remember to change the service code accordingly as well.
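The memory trade-off mentioned above comes from how much of the file is read into client memory per network round trip. The post's actual uploader code is not reproduced here; the following is only a hedged sketch of the chunked-reading idea, in which the method name, the sendChunk callback, and the chunk size are all hypothetical:

```csharp
// Hedged sketch: read a file in fixed-size chunks so that only one chunk
// is held in client memory at a time. A larger chunk ("boundary") means
// fewer round trips but more memory per read.
void UploadInChunks(System.IO.Stream fileStream, int chunkSize,
                    Action<byte[], int> sendChunk)
{
    byte[] buffer = new byte[chunkSize];
    int bytesRead;
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // sendChunk would hand the bytes to the WCF upload service;
        // it is a placeholder here.
        sendChunk(buffer, bytesRead);
    }
}
```

Raising chunkSize from, say, 4 KB to 4 MB cuts the number of service calls by a factor of a thousand, at the cost of a thousand times more client memory per call, which is exactly the OutOfMemory risk the post warns about.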
There are a lot of things incomplete in the …

    <Button RenderTransformOrigin="0.625,2.55" Grid.
        <Button.Content>
            <Grid>
                <Grid.RowDefinitions>
                    <RowDefinition />
                    <RowDefinition />
                </Grid.RowDefinitions>
                <RadioButton Content="RB 1" Grid.
                <CheckBox Content="CB 2" Grid.
            </Grid>
        </Button.Content>
    </Button>

    <ListBox IsSynchronizedWithCurrentItem="True" HorizontalAlignment="Left"
             Margin="158,142,0,194" Width="96"
             Template="{DynamicResource CTListBox}">
        <ListBoxItem Content="Item 1"/>
        <ListBoxItem Content="Item 2"/>
        <ListBoxItem Content="Item 3"/>
        <ListBoxItem Content="Item 4"/>
    </ListBox>

    <ListBox IsSynchronizedWithCurrentItem="True" HorizontalAlignment="Left"
             Margin="158,142,0,194" BorderBrush="LightBlue" BorderThickness="7"
             Background="Black" Width="96"
             Template="{DynamicResource CTListBox}">
        <ListBoxItem Content="Item 1"/>
        <ListBoxItem Content="Item 2"/>
        <ListBoxItem Content="Item 3"/>
        <ListBoxItem Content="Item 4"/>
    </ListBox>

    <Button Content="Hello World" RenderTransformOrigin="0.625,2.55" Grid.

Scott Guthrie announced a CTP release for ASP.NET Extensions, available for public download. There are lots of exciting features that you can play with, including the new ASP.NET MVC Framework, as well as the Entity Framework.

- ASP.NET 3.5 Extensions CTP
- MVC Toolkit Extras
- Quick Starts

Also download the beta 3 build of the Entity Framework Tools for Visual Studio 2008 RTM from here: ADO.NET Entity Framework Tools Dec 07 Community Technology Preview.

It is becoming increasingly obvious that designing telecommunication architectures in a service-oriented way actually makes a lot of sense.
Joe Hofstader, another architect on our team, has tons of experience building service-based solutions for the telecom industry. Check out his article on CaaS here. Joe does an excellent job of positioning a CaaS reference architecture based on SIP and IMS.

Customers that I demonstrate Silverlight to often ask me about live streaming using Silverlight. It is actually pretty trivial to demo a live stream. To create the live source, all you need is a digital video camera that plugs into either a USB port or the 1394 port on your laptop. Your standard DV Handycam is fine (I have tried my Sony Mini DV successfully), and so are any of the typical clip-on style video conferencing cameras (like a Microsoft Lifecam). Obviously this provides an SD source; to get HD you will probably have to shell out some more money and get one of the newfangled HD consumer cameras. You will also need an encoder (either Expression Media Encoder or Windows Media Encoder works fine) and a MediaElement on a page pointing to the URL where the encoder publishes the stream. If you want to get fancy and demonstrate a somewhat more realistic scenario, you can also add Windows Media Services as the streaming service to this mix; I typically use Windows Media Services 2008 running in a VM.

Once you plug in the camera and switch to live streaming mode in your encoder, xMedia Encoder or WM Encoder will pick up the camera as both a video and an audio source. Below is a snapshot of xMedia Encoder using my Lifecam as a source. The default publishing option for xMedia Encoder is to broadcast over port 8080; make sure you check the Streaming checkbox in the Output tab. Your MediaElement declaration can look like so:

    <MediaElement x:

And below is the result: my handsome mug in all its glory :-). If you want to make the scenario a little bit more realistic, you may want to add a streaming service to the mix.
It is highly unlikely that a production environment would allow players to connect directly to an encoder. I have been pretty successfully using Windows Media Services 2008 on a Windows Server 2008 Enterprise RC0 VM. Once you install Media Services, you will need to add the Streaming Media Server role to the server instance. You will then need to take the following steps to set up Windows Media Services to stream your live content:

Once you are done, revisit Expression Media Encoder and change the Streaming settings in the Output tab to publish to the publishing point you just set up, by providing the URL to the publishing point. The URL is of the format http://[Media Server Name]:[Port You Selected]/[Publishing Point Name]. Clicking the Pre Connect button will confirm the connection, and may ask for credentials depending on your domain settings. With this out of the way, if you start encoding, the stream will automatically be pushed to the publishing point by Expression Media Encoder.

    <MediaElement x:

And that's all you need to do to get Windows Media Server into the streaming process.
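Both MediaElement declarations above were truncated in this copy of the post. As a hedged reconstruction, the same wiring can be done from C# code-behind; the element name "mePlayer", the server name, port, and publishing-point name below are placeholders following the URL format described above, not the post's original values:

```csharp
// Hedged sketch: point a MediaElement (declared in XAML as x:Name="mePlayer")
// at the publishing point. URL format per the post:
// http://[Media Server Name]:[Port]/[Publishing Point Name]
mePlayer.Source = new Uri("http://mediaserver:8080/LivePub");
mePlayer.AutoPlay = true;  // begin rendering the live stream as soon as it opens
```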
http://blogs.msdn.com/jitghosh/
Work with the application visualization

The Visualization tab displays the application nodes and dependencies in graphical format. The initial view on the Visualization tab is the main view. The circles represent nodes, and the arrows show dependencies and direction between nodes, incoming or outgoing. If you choose Explore alternate visualizations for grouping, Microservice Extractor groups your application nodes according to namespaces or islands. By default, no groups are created. The main view reflects any updates you make to your groupings using the alternate visualization.

Features of the AWS Microservice Extractor for .NET visualization tool

You can perform the following tasks from the Visualization (nodes and dependencies) page to help you group your application nodes to extract as a smaller service.

Create custom groups to visualize a segmentation of the service

You can create groups in the following ways:

- Drag and drop (main view only) — Select one or more nodes by clicking on them, then drag the node or nodes together.
- Choose or right-click (main and alternate views) — Choose or right-click a node to open the Actions menu. From the Actions menu, you can choose Add node to group. The Add node(s) to group pane appears on the right, where you can choose to add nodes to an existing group or create a new group, and select the Group name and, optionally, the Group color.

Groups are indicated by dotted rectangles. You can collapse and expand groups by choosing the minimize and maximize icons in the left corner of each rectangle. Collapsing a rectangle helps to reduce visual noise as you focus on other areas of the service.

View node details

Select one or more nodes. Selected nodes are indicated by a dotted circle. Incoming and outgoing dependencies for the selected nodes are highlighted as red (outgoing) or blue (incoming).
If you select more than one node, each selected node will appear as dotted, and the dependencies will be highlighted for all of the selected nodes. When you choose or right-click on a node, you can select View node details from the Actions menu. The Node details panel appears on the right. Node details include the following tabs and information for one or more selected nodes:

- General — Shows the selected nodes, their dependencies, and runtime profiling information. The arrows, or Edges, show the direction of the dependency, incoming or outgoing. The call count for each node dependency is also displayed.
- .NET Core portability — Shows the selected nodes and their .NET Core portability status. If a node is not compatible with .NET Core, hover over the status message to view the details and potential remediation.

Reset view

Choose Reset view to reset the visualization to the original state, as it was arranged when you first launched it. All new groups are removed, and all changes will be discarded.

Add nodes to a namespace group (Alternate visualization for grouping your application)

From the Alternate visualization for grouping your application view, under the Namespace view tab, choose or right-click a node or namespace to add it to a namespace group. From the Actions menu, choose Add node to group, and add the details about the group to which the node should be added in the Add node(s) to group pane on the right. When you return to the main view, your updates in the namespace view will be reflected. Note that you cannot drag and drop nodes in the Namespace view.

View namespace details (Alternate visualization for grouping your application)

From the Alternate visualization for grouping your application view, under the Namespace view tab, choose or right-click a node or namespace to view namespace details. From the Actions menu, choose View namespace details.
The Namespace details pane appears on the right, which displays the following information:

- General — Displays the selected nodes, their dependencies, and information about possible shared state access.
- Node summary — Displays the edges between nodes.

When you return to the main view, any updates you made in the namespace view will be reflected.

View islands, and add nodes to an island

The Island view displays the nodes arranged as independent islands of connected nodes. No dependencies are detected between nodes within an island and nodes within other islands. The Island view helps you to identify potential node groupings to extract as independent services. When you return to the main view, any updates you made in the Island view will be reflected. You cannot drag and drop nodes in the Island view.

View Legend

The Legend displays the meanings of the symbols in the visualization:

- A gray shaded circle indicates a node.
- A dotted circle indicates a selected node.
- A gray rectangle indicates a group.
- A gray rectangle that contains an expand icon indicates a collapsed group.
- A blue arrow indicates a dependency incoming to a node.
- A red arrow indicates a dependency outgoing from a node.

View Group classification

Choose Group classification from the bottom of the visualization to view the name, ID, and color assigned to each group in the visualization.

View runtime profiling information

You can view the number of call counts from the main view by hovering over the arrows in the visualization. From the Alternate visualization for grouping your application view, you can hover over the rectangle edges to view the number of nodes and the incoming and outgoing dependencies.

Search and filter

From the main visualization page, you can search and filter by Class ID or Group name by selecting either option from the dropdown list of the search bar, and then entering the Class ID or Group name in the search bar. You can clear your filters by selecting Clear filters.
Edit group name and color

After you have created a group, you can edit the name and color of the group by choosing or right-clicking it to open the Actions menu, then choosing Edit group name and color. You can update the group name and color in the Edit group name and color pane that appears on the right.

Main visualization

After you onboard an application, Microservice Extractor displays its nodes and dependencies as a graph. No groups are created by default. You can create groups, modify them, or create new groups to associate with a functionality that guides refactoring. Use the main visualization to view your groups and prepare for extraction after creating groups in the main visualization, or after exploring recommended groupings based on namespaces and islands using the alternate visualization. Manually remove node dependencies to prepare parts of your application for extraction as smaller services. The parts are displayed as groups in the graph.

Microservice Extractor can also extract API endpoints as separate services by isolating the code that underlies the API endpoints and replacing local calls with network calls. This creates a new implementation of the calling class in a new solution, while preserving the interface and original solution. You can then develop, build, and deploy the new repositories independently as services. For more information about actions you can take from the main visualization, see Features of the AWS Microservice Extractor for .NET visualization tool. For help with grouping your application by namespace or islands, use the alternate visualization.

Alternate visualization

You can explore suggested groupings for your nodes by choosing Explore alternate visualizations for grouping from the main visualization. The Alternate visualization for grouping your application page displays tabs for Namespace view and Island view.

Namespace view

The Namespace view displays nodes grouped by namespace.
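To make the idea of replacing local calls with network calls concrete, here is a hedged C# sketch. The interface, class, and endpoint names are invented for illustration, and the code Microservice Extractor actually generates will differ; the point is that the caller keeps depending on the same interface while the new implementation forwards the call over the network:

```csharp
// Before extraction: the caller invokes the dependency in-process.
public interface IPricingService { decimal GetPrice(string sku); }

public class LocalPricingService : IPricingService
{
    public decimal GetPrice(string sku) { /* original in-process logic */ return 0m; }
}

// After extraction: same interface, but the call crosses the network to the
// extracted service. Callers are unchanged because they depend on IPricingService.
public class RemotePricingService : IPricingService
{
    private static readonly System.Net.Http.HttpClient client =
        new System.Net.Http.HttpClient();

    public decimal GetPrice(string sku)
    {
        // The endpoint URL is a placeholder for the extracted service's API.
        string body = client.GetStringAsync("https://pricing.example.com/price/" + sku)
                            .GetAwaiter().GetResult();
        return decimal.Parse(body, System.Globalization.CultureInfo.InvariantCulture);
    }
}
```

Because both implementations satisfy IPricingService, swapping one for the other is a composition change rather than a change to every call site.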
Namespace groups are represented by dotted rectangles. You can add and remove nodes from a group by choosing or right-clicking a node, which displays a menu of options. This menu includes options to Add node to group and Remove node from group. When you select one of these options, you can customize the nodes to add or remove using the right-hand pane. If you are working with a large application and want to collapse a namespace group, choose the minimize symbol in the upper left corner of the rectangle. Choose the expand symbol to reopen it. To add all nodes from a single namespace to a group, right-click on a namespace and choose Add all nodes to group.

To view details about a namespace, choose or right-click a group or node, and choose View namespace details. The details will appear in the Namespace details pane on the right. If a class accesses state that is shared by classes that belong to multiple groups in the application, modification of the shared state may result in errors when you extract the nodes as a smaller service. If the Shared state access detected message appears next to a class, check whether the class accesses state that is shared by classes that belong to other groups. If so, update your application source code to remove access to the shared state.

You can view runtime profiling information (call count and dependency direction) by hovering over the edges of a group. Return to the main view at any time by choosing Back to main view. When you return to the main view, any updates you made in the namespace view will be reflected.

Island view

The Island view displays the nodes arranged as independent islands of connected nodes. No dependencies are detected between nodes within an island and nodes within other islands. The Island view helps you to identify potential node groupings to extract as independent services. You can return to the main view at any time by choosing Back to main view.
When you return to the main view, any updates you made in the Island view will be reflected. For more information about actions you can take from the alternate visualization, see Features of the AWS Microservice Extractor for .NET visualization tool.
https://docs.aws.amazon.com/microservice-extractor/latest/userguide/microservice-extractor-use-visualization.html
06 January 2010 17:53 [Source: ICIS news]

Correction: In the ICIS news story headlined “…”

HOUSTON (ICIS news)--As US biodiesel production grinds to a halt, prices may spike for glycerine - the chemical that goes into everything from toothpaste to polyols, sources said on Wednesday.

The biodiesel industry is the main source of crude glycerine, which is manufactured as a co-product of the renewable fuel. Now that biodiesel refiners are closing their doors amid a government delay in extending subsidies to the industry, crude glycerine production rates are approaching zero, market sources said.

Now that the biodiesel spigot is closed, glycerine refiners can expect to put up more money to acquire supply, said Daniel Oh, president of Renewable Energy Group (REG), the largest supplier of biodiesel in the US.

“Glycerine is obviously a huge output of the biodiesel industry, and there is going to be an impact,” Oh said.

Another possible wrench in the works is coming from Archer Daniels Midland (ADM), which is expected to start its renewable propylene glycol (PG) plant sometime during the quarter. The plant, with a nameplate capacity of up to 100,000 tonnes/year, is expected to be a huge draw on US crude glycerine. ADM did not immediately answer questions regarding the plant’s exact capacity or start-up date. A spokesman declined to comment on market conditions.

Overall US glycerine consumption is 40m lb/month (18m kg/month), while domestic production is 28m lb/month, according to industry statistics. Imports make up the difference.

Domestic glycerine sellers said prices could jump by as much as 10 cents/lb ($220/tonne or €154/tonne) in the next few months before suppliers in Asia and Europe - both regions awash with glycerine priced lower than in the US - decide to fill the breach.

“If ADM hits 100,000 tonnes/year, it will impact the domestic market in the short term until imports catch up,” said one large-volume glycerine buyer in the US.
“The imports are there and will step up over the quarter.”

Vegetable glycerine spot prices were assessed at 21-23 cents/lb FD (free delivered) northwest Europe on 23 December, according to data from global chemical market intelligence service ICIS pricing. In Asia, vegetable glycerine was assessed at 23-25 cents/lb CFR (cost and freight) northeast China. Even with shipping costs, that material from abroad could become more desirable if US glycerine values move much higher than their current levels. Vegetable glycerine was assessed at 27-31 cents/lb FOB (free on board) midwest on 23 December.

“When that ADM machine turns on - if it’s well built and keeps running - it will be an interesting year,” a buyer said.

Major US refined glycerine producers include ADM, Procter & Gamble, Cargill, Vantage Oleochems and VVF.

($1 = €0.70)
http://www.icis.com/Articles/2010/01/06/9323037/corrected-us-glycerine-prices-set-to-skyrocket-on-low-biodiesel-production.html