text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Announcing a Unified .NET Documentation Experience - | - - - - - - Read later Reading List Microsoft has announced its new .NET API browser. This browser will be the place to find all relevant .NET documentation. Reference documentation for the .NET Framework, .NET Core, .NET Standard, Xamarin, and the Azure NuGet packages are included in the first release. Additional areas of coverage will be added based on user feedback. Now there is a standardized way to search, present, discover, and navigate all .NET-compatible SDK documentation in one place. Since a search engine will no longer be needed to find the relevant documentation, and where to download the appropriate package, developer productivity will be improved. You can search for a namespace, class, or method by its full or partial name directly into the API Browser. This will work in a similar manner to IntelliSense. Typing a keyword will give a list of possibilities you can choose from. You can also pick a set of Quick Filters to narrow the search space, or you can search through all the APIs. Filters allow you to read the documentation with C#, VB, or C++ examples; there are also filters to specify which version of .NET you wish to search. You no longer have to worry if a type is contained in a certain version. At the top of the browser there is a table of contents for the APIs so that you can read through the documentation if necessary. A PDF download is also available. The documentation itself has been made more readable because you see the most important information first. The general overview and the programming examples (filtered by language) will come before the detailed documentation. Whether you are browsing a namespace, type, or member, the table of contents on the left side of the browser will show the children of what you are browsing. You will then be able to see the context of what you are looking at. Continuous integration tools allow accurate documentation to be available within hours of release. The managed reference documentation is auto-generated from the NuGet or framework distributions. This documentation is generated using open tools such as DocFX and mdoc. Community contributions to the documentation will be enabled within the next month. Since the documentation uses the ECMAXML format, there is a consistent format for all the SDKs. Content can be contributed using Markdown, embedded in the auto-generated documentation. You do not need to know anything about documentation file formats. The User Voice site allows users to request improvements, and vote on which suggestions are most important. This includes what additional documentation users want to see added to the API browser. The Twitter handle, @docsmsft, is available for quick updates. Rate this Article - Editor Review - Chief Editor Action
https://www.infoq.com/news/2017/04/dotnet-unified-documentation?utm_term=global
CC-MAIN-2017-22
refinedweb
459
57.87
I have a DateField and it's set to editable = false. Still it seems that I can change date value. I'm I only having this problem? If i set property enabled = false, controls is colored grey and value isn't readable. Any workaround on this? editable = true means you can type into the textfield present in DateField.false means,the textfield is not editable.Still you can change the date by selecting a data with the help of button.enabled=false means datefield cannot receive any user interaction What you're saying is true, but if I set enabled to false, control gets colored by grey and text in textfield is hard to see(read). You can try modifying disabledColor and disabledSkin to control the way DateField appears when it is disabled You can hide "downArrowButton" instance in custom skin or just set its "visible" property to false ("downArrowButton" can be accessed via mx_internal namespace). Or use the following code trick I used disabledColor to solve this issue.
https://forums.adobe.com/thread/770911
CC-MAIN-2018-30
refinedweb
169
65.22
A beautifully designed and Open Source React Native Framework Galio Galio is a beautifully designed, Free and Open Source React Native Framework. Galio is one of the coolest UI libraries you could ever use, licensed under MIT. Carefully crafted by developers for developers. Ready-made components, typography, and a gorgeous base theme that is easily adaptable to each project. Quick Start 1. Project Setup Go ahead and install the app version of Galio in order to play around with our components and screens! git clone cd galio git checkout examples npm install or yarn install 2. Project testing Terminal cli: expo start After initializing your local server you are now able to test the app inside your simulator by running: npm run ios or yarn run ios (or try an Android simulator) Use our iOS or Android app directly on your physical device by running it inside Expo! 3. SDK library instructions Use our awesome components inside your own projects by running: npm install galio-framework or yarn add galio-framework Import our UI components to your screens: import { Block, Button, Card, Icon, Input, NavBar, Text } from 'galio-framework'; Components Under Galio's belt: - Block - Button - Card - Checkbox - Icon - Input - NavBar - Radio - Slider - Text - Switch - GalioTheme Documentation The documentation for Galio is hosted at our our website Resources - Website: galio.io - Expo: expo.io - Built with Galio: Galio Apps - Issues: GitHub Issues Page Reporting Issues We use GitHub Issues as the official bug tracker for Galio. Here are some advices for our users that want to report an issue: - Make sure that you are using the latest version of Galio. Check for your fork's master branch status and see if it's up to date with the upstream/master (our repository) - Provide us with reproductible steps for the issue. - Some issues may be platform specific, so specifying what platform and if it's a simulator or a hardware device will help a lot.
https://reactnativeexample.com/a-beautifully-designed-and-open-source-react-native-framework/
CC-MAIN-2019-47
refinedweb
324
57.2
Proposal: stop removing language extensions from script names Currently, the Policy's section 10.4 contains: When scripts are installed into a directory in the system PATH, the script name should not include an extension such as .sh or .pl that denotes the scripting language currently used to implement it. In the Policy, should has the following meaning (sect. 1.1): Non-conformance with guidelines denoted by should (or recommended) will generally be considered a bug, but will not necessarily render a package unsuitable for distribution. As a consequence, programs with names such as foo.pl are actively renamed foo in many Debian packages. One of the main consequences of this policy is that a script developed on Debian and calling these programs will not run on other platforms such as Fedora, CenOS, Mac OS, etc. This proposal is to relax the Policy by stopping to ask to rename the scripts, in order to restore portability between Debian and other systems. Pros and cons Reasons for not renaming scripts in Debian Renaming programs create incompatibilities with systems where they were not renamed. Most major Free and non-Free platforms (Fedora, CentOS, Mac OS X, etc.) do not have Debian's policy of renaming programs. Therefore, a script developed on Debian will not run on these programs and vice-versa. This breakage comes as a surprise to non-Debian users, who are mostly unaware of our policy, but also to Debian users, who are not all aware either. The breakage does not affect only scripts, but also documentation, which is either not systematically corrected, or not correctable (on-line tutorials, books, etc.). Regression tests may also break, or fail to test the renamed scripts when they call the programs from the build directory directly instead of the PATH. Removing language name extensions increase the probability of namespace collisions. For the users who want to ensure compatibility with other platforms, discovering which program has been renamed is time-consuming and error prone. For the maintainers of packages applying the policy, the renaming gymnastic is taking extra time, and opens the possibility of introducing new bugs (for instance when there are a foo and a foo.bar program in the same package. Reasons for renaming scripts in Debian Users should not care what the implementation language is. It looks unprofessional and flies in the face of unix tradition. It's harder to type. If Upstream rewrites the script in a different programming language and does not implement a migration strategy, Debian users will benefit from the stability of the script name (for instance, foo.py becomes foo.pl, but in Debian the name foo is used constantly). Alternative benefit: the package maintianer does not have to implement a migration strategy for Debian a posteriori. Language name extensions are not needed in Debian to execute a script with the correct interpreter. Using language extensions is fundamentally wrong. Requiring good practice on the part of our upstreams is part of the "price" of being distributed as part of Debian. Reasons for not recommending upstream to distribute scripts without extension names If they want to support Windows, they need to include an extension in the file name. Reasons for not recommending upstream to distribute scripts without extension names Rewriting an API-compatible script in a different language is made easier, as it avoids the awkward situation where foo.bar would be written in the baz language.
https://wiki.debian.org/Proposals/StopRemovingScriptNameExtensions
CC-MAIN-2020-29
refinedweb
570
54.42
Message-ID and Mail.app Deep Linking on iOS and macOS Last week, we concluded our discussion of device identifiers with a brief foray into the ways apps use device fingerprinting to work around Apple’s provided APIs to track users without their consent or awareness. In response, a few readers got in touch to explain why their use of fingerprinting to bridge between Safari and their native app was justified. At WWDC 2018, Apple announced that starting in iOS 11 apps would no longer have access to a shared cookie store. Previously, if a user was logged into a website in Safari on iOS and installed the native app, the app could retrieve the session cookie from an SFSafari to log the user in automatically. The change was implemented as a countermeasure against user tracking by advertisers and other third parties, but came at the expense of certain onboarding flows used at the time. While iCloud Keychain, Shared Web Credentials, Password Autofill, Universal Links, and Sign in with Apple have gone a long way to minimize friction for account creation and authentication, there are still a few use cases that aren’t entirely covered by these new features. In this week’s article, we’ll endeavor to answer one such use case, specifically: How to do seamless “passwordless” authentication via email on iOS. Mail and Calendar Integrations on Apple PlatformsMail and Calendar Integrations on Apple Platforms When you view an email on macOS and iOS, Mail underlines detected dates and times. You can interact with them to create a new calendar event. If you open such an event in Calendar, you’ll see a “Show in Mail” link in its extended details. Clicking on this link takes you back to the original email message. This functionality goes all the way back to the launch of the iPhone; its inclusion in that year’s Mac OS X release (Leopard) would mark the first of many mobile features that would make their way to the desktop. If you were to copy this “magic” URL to the pasteboard and view in a text editor, you’d see something like this: "message:%3C1572873882024.NSHIPSTER%40mail.example.com%3E" Veteran iOS developers will immediately recognize this to use a custom URL scheme. And the web-savvy among them could percent-decode the host and recognize it to be something akin to an email address, but not. So if not an email address, what are we looking at here? It’s a different email field known as a Message-ID. Message-IDMessage-ID RFC 5322 §3.6.4 prescribes that every email message SHOULD have a “Message-ID:” field containing a single unique message identifier. The syntax for this identifier is essentially an email address with enclosing angle brackets ( <>). Although the specification contains no normative guidance for what makes for a good Message-ID, there’s a draft IETF document from 1998 that holds up quite well. Let’s take a look at how to do this in Swift: Generating a Random Message IDGenerating a Random Message ID The first technique described in the aforementioned document involves generating a random Message ID with a 64-bit nonce, which is prepended by a timestamp to further reduce the chance of collision. We can do this rather easily using the random number generator APIs built-in to Swift 5 and the String(_:radix:uppercase:) initializer: import Foundation let timestamp = String(Int(Date().time IntervalInterval Since1970 * 1000)) let nonce = String(UInt64.random(in: 0..<UInt64.max), radix: 36, uppercase: true) let domain = "mail.example.com" let MessageSince1970 * 1000)) let nonce = String(UInt64.random(in: 0..<UInt64.max), radix: 36, uppercase: true) let domain = "mail.example.com" let Message ID = "<\(timestamp).\(nonce)@\(domain)>" //"<[email protected]>"ID = "<\(timestamp).\(nonce)@\(domain)>" //"<[email protected]>" We could then save the generated Message-ID with the associated record in order to link to it later. However, in many cases, a simpler alternative would be to make the Message ID deterministic, computable from its existing state. Generating a Deterministic Message IDGenerating a Deterministic Message ID Consider a record structure that conforms to Identifiable protocol and whose associated ID type is a UUID. You could generate a Message ID like so: import Foundation func message ID<Value>(for value: Value, domain: String) -> String where Value: Identifiable, Value.ID == UUID { return "<\(value.id.uuidID<Value>(for value: Value, domain: String) -> String where Value: Identifiable, Value.ID == UUID { return "<\(value.id.uuid String)@\(domain)>" }String)@\(domain)>" } Mobile Deep LinkingMobile Deep Linking The stock Mail client on both iOS and macOS will attempt to open URLs with the custom message: scheme by launching to the foreground and attempting to open the message with the encoded Message-ID field. Generating a Mail Deep Link with Message IDGenerating a Mail Deep Link with Message ID With a Message-ID in hand, the final task is to create a deep link that you can use to open Mail to the associated message. The only trick here is to percent-encode the Message ID in the URL. You could do this with the adding method, but we prefer to delegate this all to URLComponents instead — which has the further advantage of being able to construct the URL full without a format string. import Foundation var components = URLComponents() components.scheme = "message" components.host = Message ID components.string! // "message://%3C1572873882024.NSHIPSTER%40mail.example.com%3E"ID components.string! // "message://%3C1572873882024.NSHIPSTER%40mail.example.com%3E" Opening a Mail Deep LinkOpening a Mail Deep Link If you open a message: URL on iOS and the linked message is readily accessible from the INBOX of one of your accounts, Mail will launch immediately to that message. If the message isn’t found, the app will launch and asynchronously load the message in the background, opening it once it’s available. As an example, Flight School does this with passwordless authentication system. To access electronic copies of your books, you enter the email address you used to purchase them. Upon form submission, users on iOS will see a deep link to open the Mail app to the email containing the “magic sign-in link” ✨. Other systems might use Message-ID to streamline passwordless authentication for their native app or website by way of Universal Links, or incorporate it as part of a 2FA strategy (since SMS is no longer considered to be secure for this purpose). Unlike so many private integrations on Apple platforms, which remain the exclusive territory of first-party apps, the secret sauce of “Show in Mail” is something we can all get in on. Although undocumented, the feature is unlikely to be removed anytime soon due to its deep system integration and roots in fundamental web standards. At a time when everyone from browser vendors and social media companies to governments — and even Apple itself, at times — seek to dismantle the open web and control what we can see and do, it’s comforting to know that email, nearly 50 years on, remains resolute in its capacity to keep the Internet free and decentralized.
https://nshipster.com/message-id/
CC-MAIN-2021-49
refinedweb
1,181
50.67
© Jeremy Singer Guessing Game Now let’s put together everything we have learned this week. We are going to write a moderately long Haskell program, consisting of multiple functions and I/O actions. Guessing Game The program is going to be a guessing game, called Starman. In this single-player, text-based game, there is a word which the player needs to guess. For each turn of the game, the player guesses a single letter. If that letter is correct, then the guessed letters are displayed in the correct places in the word. If that letter is incorrect, then the user loses a star. Once the user has no stars left, they have lost the game. However if the user guesses all the letters in the word, they have won the game. Because this game is quite long, we should use a texteditor (like Notepad++ on Windows, TextEdit on Mac or Gedit on Linux). Well, I actually use emacs … if you’ve heard of it. Start by creating an empty text file called starman.hs — the hs extension is to indicate that this file contains Haskell source code. Key Functions The heart of the game involves checking the player’s guess. We want to know whether the guess was right. This outcome is a Bool value, either True or False. We need to update the displayed word, if the guess was right, by replacing appropriate dashes in the displayed word with the correctly guessed character. Therefore the result type of the function is a pair (Bool, String). The first element of the pair is the guess outcome. The second element is the String to display to the user for the next round. Now, the checking function needs to know: - The secret word, a String - The current display, also a String - The character guessed by the player These are the inputs to the checking function. So now we can state the type of the function: check :: String -> String -> Char -> (Bool,String) Here is a great programming tip. It’s always helpful to work out the type of a function first. This focuses your attention on what the function is supposed to compute, and what data it needs to do it. Good software engineers do specification before implementation. What will the check function body look like? The player’s guess is correct if and only if the guessed character c is in the target word. So the guess is correct if c `elem` word The new displayed word will be: [(if x==c then c else y) | (x,y) <- zip word display] This is a list comprehension, where we select each letter from either the actual word or the old display. The word is plaintext, whereas the display starts with all dashed characters. check :: String -> String -> Char -> (Bool, String) check word display c = (c `elem` word, [if x==c then c else y | (x,y) <- zip word display]) The next function we will define is the turn function. This is the function that will be called each time it is the player’s turn to enter a guess. First we need to check how many guesses the player has left: if n == 0 If there are any guesses left, then we need to see whether the player is correct or not: if word == display So we will have two if checks, each followed by putStrLn status messages and the end of the function calling sequence (since it is the end of the game). However if neither of the if conditions is true, then the player can take a turn, so we call another function to get another character from the user input. turn :: String -> String -> Int -> IO () turn word display n = do if n==0 then putStrLn "You lose" else if word==display then putStrLn "You win!" else mkguess word display n Note that there is a neater way to write the turn function, using Haskell guards, but we won’t learn about guards until next week. mkguess word display n = do putStrLn (display ++ " " ++ take n (repeat '*')) putStr " Enter your guess: " q <- getLine let (correct, display') = check word display (q!!0) let n' = if correct then n else n-1 turn word display' n' What is the type of mkguess? Can you work it out and add it before the function definition? We grab a line of user input, but only use the first character for the guess. This will fail if the user just hits ENTER without typing any characters, since q will be an empty string. OK, so now we just need a top-level function. Let’s call this starman: starman :: String -> Int -> IO () starman word n = turn word ['-' | x <- word] n This function takes two arguments, the first is the word to be guessed, and the second is the number of incorrect guesses the player is allowed. Running the Code Let’s put all these four functions into a textfile, called starman.hs Save the file, then start ghci perhaps by typing ghci into a DOS command prompt, running WinGHCi or typing ghci in a terminal window (macOS or Linux). If you are in the correct directory, i.e. the one where you saved starman.hs, you should be able to type :l starman.hs and the program should load. It will either say something like: [1 of 1] Compiling Main ( starman.hs, interpreted ) Ok, modules loaded: Main. or report an error if you have made a mistake in the source code anywhere. Check and make corrections if necessary. An error might look like this: [1 of 1] Compiling Main ( starman.hs, interpreted ) (some error report) Failed, modules loaded: none. The error report should have a line number, so you can see where the mistake is. Try to fix it by following the instructions, or comparing your code with what’s written above. Let us know in the comments section if you have any problems here. When you get Ok from ghci, then you can run the program. At the ghci prompt type starman "functionally" 5 and start playing the game! You will return to the GHCi prompt when the starman function completes. We have provided the Haskell source code for starman.hs as a download below, along with some comments. You could use this, but it would be much better to type in the program yourself and try to understand it. Possible Extensions A real improvement to the game would be to generate a random word, perhaps from a list of words or a dictionary file. If you are feeling ambitious, you might try this. It would involve generating a random number i and read in the ith word from a dictionary. You might import System.Random and use a Haskell random number generator. © University of Glasgow
https://www.futurelearn.com/courses/functional-programming-haskell/0/steps/27208
CC-MAIN-2020-05
refinedweb
1,130
80.72
-rel git cl try -b mac-rel git cl try -b win7-rel The pixel tests are a bit special. See the section on running them locally for details. are a special case because they use an external Skia service called Gold to handle image approval and storage. See GPU Pixel Testing With Gold for specifics. TL;DR is that the pixel tests use a binary called goldctl to download and upload data when running pixel tests. Normally, goldctl uploads images and image metadata to the Gold server when used. This is not desirable when running locally for a couple reasons: Additionally, the tests normally rely on the Gold server for viewing images produced by a test run. This does not work if the data is not actually uploaded. The pixel tests contain logic to automatically determine whether they are running on a workstation or not, as well as to determine what git revision is being tested. This should mean that the pixel tests will automatically work when run locally. However, if the local run detection code fails for some reason, you can manually pass some flags to force the same behavior: In order to get around the local run issues, simply pass the --local-run=1 flag to the tests. This will disable uploading, but otherwise go through the same steps as a test normally would. Each test will also print out file:// URLs to the produced image, the closest image for the test known to Gold, and the diff between the two. Because the image produced by the test locally is likely slightly different from any of the approved images in Gold, local test runs are likely to fail during the comparison step. In order to cut down on the amount of noise, you can also pass the --no-skia-gold-failure flag to not fail the test on a failed image comparison. When using --no-skia-gold-failure, you'll also need to pass the --passthrough flag in order to actually see the link output. Example usage: run_gpu_integration_test.py pixel --no-skia-gold-failure --local-run=1 --passthrough If, for some reason, the local run code is unable to determine what the git revision is, simply pass ‘--build-revision aabbccdd’. Note that aabbccdd must be replaced with an actual Chromium src revision (typically whatever revision origin/master is currently synced to) in order for the tests to work. This can be done automatically using: run_gpu_integration_test.py pixel --no-skia-gold-failure --local-run --passthrough --build-revision `git rev-parse origin/master`. Be sure to use the correct swarming dimensions for your desired GPU e.g. “1002:6613” instead of “AMD Radeon R7 240 (1002:6613)” which is how it appears on swarming task page. You can query bots in the chromium.tests.gpu pool to find the correct dimensions: python tools\swarming_client\swarming.py bots -S chromium-swarm.appspot.com -d pool chromium.tests.gpu-rel, mac-rel, win7-rel build step will contain either one or more links titled gold_triage_link for <test name> or a single link titled Too many artifacts produced to link individually, click for links, which itself will contain links. In either case, these links will direct to Gold pages showing the image produced by the image and the approved image that most closely matches it. Note that for the tests which programatically check colors in certain regions of the image (tests with expected_colors fields in pixel_test_pages), there likely won't be a closest approved image since those tests only upload data to Gold in the event of a failure. If your CL adds a new pixel test or modifies existing ones, it's likely that you will have to approve new images. Simply run your CL through the CQ and follow the steps outline here under the “Check if any pixel test failures are actual failures or need to be rebaselined.” step. If you are adding a new pixel test, it is beneficial to set the grace_period_end argument in the test‘s definition. This will allow the test to run for a period without actually failing on the waterfall bots, giving you some time to triage any additional images that show up on them. This helps prevent new tests from making the bots red because they’re producing slightly different but valid images from the ones triaged while the CL was in review. Example: from datetime import date ... PixelTestPage( 'foo_pixel_test.html', ... grace_period_end=date(2020, 1, 1) ) You should typically set the grace period to end 1-2 days after the the CL will land. Once your CL passes the CQ, you should be mostly good to go, although you should keep an eye on the waterfall bots for a short period after your CL lands in case any configurations not covered by the CQ need to have images approved, as well. All untriaged images for your test can be found by substituting your test name into:<test name>.
https://chromium.googlesource.com/chromium/src/+/7fed2b3a20996b3751bf23798f52beffe7956fd9/docs/gpu/gpu_testing.md
CC-MAIN-2020-29
refinedweb
826
59.84
Excellent! Thanks. On Thu, Apr 9, 2009 at 12:19 PM, Dino Viehland <dinov at microsoft.com> wrote: > There is an implicit conversion from PythonType -> Type. So (Type)(PythonType)o will work here. Alternately you can just strongly type a function in C# as taking a Type and call that from Python and the conversion will happen automatically. > >> -----Original Message----- >> From: users-bounces at lists.ironpython.com [mailto:users- >> bounces at lists.ironpython.com] On Behalf Of Alex News >> Sent: Thursday, April 09, 2009 9:14 AM >> To: users at lists.ironpython.com >> Subject: [IronPython] RuntimeType from PythonType in C# code >> >> I have embedded IronPython 2.01 into my C# application, and am >> extremely impressed by the ease of integration and the >> interoperability with the .NET types. However I am a bit stumped by >> something that probably is quite simple. >> >> 1) In a python script, I import one of my application types. >> 2) From that script I call a function implemented in C#. I pass in >> the type (not an instance of the type) >> 3) My C# function sees a PythonType. I would like to get the >> associated RuntimeType, but can't figure out how to do so. >> >> For example (using a system type): >> >> python: >> from System import String >> myfunction(String) # calls C# delegate defined below >> >> C#: >> delegate(object o) >> { >> // o is of type PythonType. I would like to get the associated >> System.RuntimeType (in this case System.String) >> } >> >> I've searched the web, but most of the advice seems outdated: >> >> PythonType.UnderlyingSystemType is internal, so I can't access it. >>- >> December/006130.html >> >> I'm not sure what "Ops" is. I suspect it is related to the namespace >> IronPython.Runtime.Operations, but I can't find an equivalent >> class/function in there (although there is quite a bit there, so I may >> have missed it). >>- >> May/000668.html >> >> Any help would be greatly appreciated. >> >> Thanks, >> Alex >> _______________________________________________ >> Users mailing list >> Users at lists.ironpython.com >> > > _______________________________________________ > Users mailing list > Users at lists.ironpython.com > >
https://mail.python.org/pipermail/ironpython-users/2009-April/010005.html
CC-MAIN-2019-47
refinedweb
335
67.76
Finally we want to make our address bar do something. We will use connect a Qt signal from the line edit into a method which will load the address. Below self.window.show() add this connect line and the following method: QObject.connect(self.addressBar, SIGNAL("returnPressed()"), self.loadUrl) def loadUrl(self): print "Loading " + self.addressBar.text() self.web.load( QUrl(self.addressBar.text()) ) Qt signals are emitted by objects when interesting things happen. The QLineEdit documentation tells us about the returnPressed() signal. So we connect that from our addressBar line edit into a method we make called loadUrl() In C++ signals are connected to special methods called Slots but in Python we can connect them to any method. Our loadUrl() method will print out the contents of the address bar onto your terminal, then will load the QWebView with the address. (Remember to include the http:// at the start of the address.) Our completed web browser, loading a different URL. « Back to Part 6 | On to conclusion »
https://techbase.kde.org/index.php?title=Development/Languages/Python/PyKDE_WebKit_Tutorial/Part7&oldid=34448
CC-MAIN-2015-22
refinedweb
168
58.48
Subject: Re: [hwloc-users] hwloc 1.5, freebsd and linux output on the same hardware From: Sebastian Kuzminsky (seb_at_[hidden]) Date: 2012-10-05 19:03:54 On Fri, Oct 5, 2012 at 5:01 PM, Samuel Thibault <samuel.thibault_at_[hidden]>wrote: > Sebastian Kuzminsky, le Sat 06 Oct 2012 00:55:57 +0200, a écrit : > > binding to CPU0 > > could not bind to CPU0: Resource deadlock avoided > > Mmm, from what I read in the freebsd kernel: > > /* > * Create a set in the space provided in 'set' with the provided > parameters. > * The set is returned with a single ref. May return EDEADLK if the set > * will have no valid cpu based on restrictions from the parent. > */ > > _cpuset_create(struct cpuset *set, struct cpuset *parent, const cpuset_t > *mask, > cpusetid_t id) > { > > if (!CPU_OVERLAP(&parent->cs_mask, mask)) > return (EDEADLK); > > Could it be that due to administration rules lstopo is not allowed to > bind on cpu 0-9 ? In that case the x86 backend can not detect anything > there. > Hm. It may be that we're doing something funny and reserving those CPUs. I'll run some tests on Monday and get back to you.
http://www.open-mpi.org/community/lists/hwloc-users/2012/10/0743.php
CC-MAIN-2015-32
refinedweb
187
71.34
Hi all, this is my first post here. I'm trying to parse an int and a String into a method using the Scanner and I'm not sure if this is possible? This is what the simple code looks like: import java.util.Scanner; class Actions { public void move(String direction, int movement) { System.out.println("Moving " + movement + " metres"); } } public class Robot { public static void main(String[] args) { Actions robot1 = new Actions(); Scanner scan = new Scanner(System.in); robot1.move(scan.nextLine()); System.out.println("Enter direction and distance to move"); } } As you can see I can get user input using scan.nextLine() or nextInt - but is it possible to parse both at the same time? Hope this makes sense. Thanks for any replies.
http://www.javaprogrammingforums.com/file-i-o-other-i-o-streams/39688-basic-scanner-question.html
CC-MAIN-2018-51
refinedweb
125
67.86
Looking to explore Red Hat OpenShift streamlined CI/CD workflows to run your ACE container natively on Red Hat OpenShift? In this post we show how to run a simplistic pipeline on OpenShift. You don’t have an OpenShift cluster? No worries. You can also follow this tutorial on your local workstation using Minishift (for OpenShift 3) or CodeReady (for OpenShift 4) which gives you a local cluster for development purposes. You can download CodeReady Containers and Minishift from; Red Hat CodeReady Containers Key concepts and definitions Before we start, we suggest you refresh your Kubernetes and OpenShift knowledge. Below we summarized some of the key concepts that are referred to from OpenShift documentation. Image: An image holds a set of software that is ready to run, while a container is a running instance of a container image.. Image repository: An image repository is a collection of related container images and tags identifying them. For example, the OpenShift Jenkins images are in the repository. Imagestream: An imagestream provides a way of storing different versions of the same basic image. An imagestream and its associated tags provide an abstraction for referencing container images from within OpenShift Container Platform. Imagestreams do not contain actual image data, but present a single virtual view of related images, similar to an image repository. Project: A project allows a community of users to organize and manage their content in isolation from other communities. Application: You can deploy an OpenShift application from components that include source or binary code, images, and templates. As a result of application creation OpenShift objects that will build, deploy, and run the application will also be created. Template: Another concept unique to Openshift is the Template object. An OpenShift Template bundles some number of objects that are related from a functional point of view. In terms of application promotion, Templates can serve as the starting point for setting up resources in a given staging environment, especially with the parameterisation capabilities. Additional post-instantiation modifications are very conceivable though when applications move through a promotion pipeline. Deployment Config: Deployments and DeploymentConfigs in OpenShift. 1.1 Why CI/CD for integration? There are many places to read about Continuous Integration and Continuous Delivery so we won’t cover them here. Instead, let’s take a step back and ask the question why. CI/CD practices emerged as one of the key enablers of agility. It is essential to streamline and automate the cycle of understanding a requirement, implementing that requirement, and indeed implementing changes in the requirement to the delivered solution. Whilst CI/CD is most commonly discussed in the context of application development, it is just as relevant to integration development. In fact, considering that integration solutions are the enablers for many business-critical applications to interact with other systems effectively, it is crucial that the integration artefacts can be created with the same agility as the application components. We need to simplify and accelerate the building and deployment of the integrations yet retain quality assurance and adhere to security policies. The good news is Red Hat OpenShift and IBM App Connect work together to make it easy to build your CI/CD pipeline for your containerized integration solutions. A CI/CD pipeline includes many different steps, from compilation and verification of the source code to testing and then on to the creation of an executable and ultimate deployment into an environment. Ideally, we want to get to a position where all of this can be triggered from a change in a source code repository or initiated via command or user interface. By the end of this article you have learnt how to set up such a pipeline for an IBM App Connect integration. For further reading, we can highly recommend reviewing “Accelerating Modernization with Agile Integration” to understand the importance of CI/CD for the adoption of agile integration and to explore different hands-on examples. 1.2 Why Red Hat OpenShift for CI/CD? Red Hat OpenShift includes streamlined CI/CD workflows to help teams get to production faster. Whilst Jenkins is the widespread way of approaching CI/CD, Red Hat OpenShift is evolving to offer OpenShift Pipelines, which is based on the upstream project Tekton and is available as a Developer Preview release. Thus, you have the choice of proceeding with a generic and mature solution based on Jenkins pipelines or exploring OpenShift Pipelines which is a Kubernetes-native framework aiming to better support pipeline-as-code and GitOps approaches common in cloud-native solutions in the future. Before going into further detail, there are few concepts about the OpenShift build process that have to be understood. A build is the general term for the process of transforming the application’s source or binary artefacts from some input source into a resulting object which most often is a runnable container image. A BuildConfig is a configuration object which defines the characteristics of a specific build process. A build strategy is one of the choices specified in the BuildConfig, which decides which of the pre-defined types of build will be performed. A build can be triggered via a number of inputs such as: - when the source code changes (e.g. via a webhook call from a GitHub repository) - when the base image used for the build changes - when a user manually requests a new build (e.g. via command line or UI). Once built, resources are published as specified by the build configuration (BuildConfig), typically to the OpenShift registry. The process is characterized by the build strategy specified in the BuildConfig and one or more input sources. OpenShift supports a number of build strategies: - Docker build invokes the docker build command, and it expects a Dockerfile and all required artefacts in it to produce a runnable image. - Source-to-Image (S2I) build uses a specially prepared base image that accepts application source code to be injected so that it can assemble a new runnable image. - Custom build can run plain Docker images as a base embedded with build process logic to produce objects that are specified. - Pipeline build allows you to define a CI/CD pipeline workflow for execution by the Jenkins pipeline plugin. The workflow is defined by a Jenkinsfile either embedded directly in the build configuration or supplied in a Git repository and referenced by the build configuration. The product documentation is a great place to grasp a more in-depth understanding. 1.3 Scenarion Overview In this example we will demonstrate how to build a containerized ACE application on OpenShift using native Jenkins pipelines. The following diagram shows the steps included in the example pipeline: When a commit triggers a pipeline execution: - OpenShift takes the BAR artefact from GitHub Repository and builds a Docker image by layering the BAR file on top of ACE image pulled from DockerHub - The new Docker image is then pushed to local image repository with a new tag. - The tagged image is deployed in a fresh new container in the environment. Note that we have the option to add another stage of building the BAR file as part of the pipeline but for this example we will just focus on building the image. We will follow along using both the web console and the command-line interface so that you can choose whichever option best fits your need. 1.3.1 Create Jenkins Application Red Hat OpenShift provides an application template for running Jenkins in a container image, which can be used to set up a basic flow for continuous testing, integration, and delivery. Perform the following steps to create a Jenkins application. - Log in to Red Hat OpenShift web console. - Click the Create Project button to create a new project. Alternatively, you can create your project using CLI: oc project ace - Click the Catalog tab and select Jenkins (Ephemeral). We have selected jenkins-ephemeral which is practical for development and test purposes. It uses ephemeral storage meaning that on pod restart, all data is lost. If you want your Jenkins application data to survive a pod restart you should choose the Jenkins which uses a persistent volume store. - Pop up window appears to configure Jenkins Application. The first page displays information about the template. Click Next to continue to the next page. - Accept the default values and click Create - Review the results and click Continue to the project overview. Alternatively, you can create your Jenkins application using CLI: oc new-app jenkins-ephemeral As a result, Jenkins deployment will be created in your project, and a pod will be running with two services created: one for the Jenkins webui and the other for the jenkins-jnlp service for Jenkins slave/agent. - Open a Web browser on your workstation and type the route URL provided in the Jenkins service. As OAuth integration is enabled by default you can log in to Jenkins web console using your OpenShift credentials. Click Log in with OpenShift. - Authorize Jenkins to access to your OpenShift account. You will be redirected to Jenkins web console. 1.3.2 Create Docker build configuration As mentioned earlier in order to build a docker image on OpenShift platform we can make use of BuildConfig which can be defined as yaml or json. - Create a new file named ace-app-bc.yaml with the following content: - Navigate to Add and click Import YAML. - Upload the BuildConfig file by dragging & dropping, selecting it, or pasting from the clipboard and click create. - Outcome of a successful import is a BuildConfig which you can access using the OpenShift Web Console, choose Builds -> Build Configs Alternatively, you can create the BuildConfig using CLI: oc create -f ace-app-bc.yaml 1.3.3 Create Jenkins Pipeline In this stage we will create a minimalist pipeline workflow that will first build an ACE image and then deploy. So, our pipeline consists of just two steps. We will consume ace-app-bc BuildConfig to trigger the ACE image build. - Create a build configuration file named ace-app-pipeline.yaml with the following content: Note that jenkinsfile section is explained further down the page. - Navigate to the Add to Project and click Import YAML. - Upload the BuildConfig file by dragging & dropping, selecting it, or pasting from the clipboard and click Create. Alternatively, you can create the pipeline using CLI: oc create -f ace-app-pipeline.yaml - Outcome of a successful import is a BuildConfig which you can access using the OpenShift Web Console, choose Builds -> Build Configs Let’s explore what we describe in Jenkinsfile section of the ace-app-pipeline BuildConfig; Stage 0: Declare Jenkins Agent We will declare Jenkins agent (also known as slave) in which the pipeline will be executing. OpenShift provides three images that are suitable for use as Jenkins agents: Base, Maven, and Node.js images. We will be using Node.js agent to execute our pipeline. Stage1: Build At this stage, openshift.selector selects the build config named ace-app which we defined in the previous step. Using that build config, it initiates a build. After the build is complete, you will see an image named ace-app tagged as latest. Stage 2: Deploy At this stage, openshift.newApp creates a new application named ace-app using the image produced at build stage. Note that this step is executed only in first time execution of pipeline. In next pipeline builds, this stage is skipped if the ace-app already exists in the namespace. But still, a deployment will be automatically done after each build is completed successfully due to the Image Change Trigger capability of Deployment Config. Meaning, after each new image build, an automated deployment will initiate. We will talk about this in the later posts. 1.3.4 So-LET’S TRY! Start the pipeline with the OpenShift Web Console. Navigate to the Builds → Pipeline section and click Start Pipeline. Alternatively, you can start the pipeline using CLI: oc start-build ace-pipeline Once the pipeline is started, you should see the following actions performed within your project: - A job instance is created on the Jenkins server. - A Node.js pod is launched, as our pipeline requires Node.js as Jenkins agent. - “Build” stage of the pipeline launches a build and upon completion a new image is pushed to imagestream. - “Deploy” stage of pipeline creates and exposes an application using the image produced in the previous stage. If the application already exists, the steps are skipped. - Node.js Jenkins agent pod is deleted by default after the pipeline execution completes or is stopped. You can visualize the pipeline execution by viewing it directly in OpenShift Web Console, choose Builds -> Builds. (or from Jenkins Web Console.) Outcome of a successful pipeline execution is a new deployment configuration and a new pod and related service and the routes for ACE dashboard and integration service. You can access the routes using the OpenShift Web Console, choose Networking -> Routes Just go ahead and test your API exposed by your containerized ACE application. That’s it You might ask at this point how about deploying to IBM Cloud Pak for Integration. Well, in that case we just need to replace “oc new-app” command with “helm install” command in our deploy script. As we said at the beginning, this is just a warmup that you can try in your local workstation and make yourself familiar with ACE containers. We will post more examples in the future. -
https://developer.ibm.com/integration/blog/2020/03/13/create-your-integration-application-on-openshift-using-jenkins-pipeline/
CC-MAIN-2020-40
refinedweb
2,240
53.31
Zato services as containers for Python functions and methods Acting as containers for enterprise APIs, Zato services are able to invoke each other to form higher-level processes and message flows. What if a service needs to invoke a hot-deployable Python function or method, though? Read on to learn details of how to accomplish it. Background Invoking another service is a simple matter - consider the two ones below. For simplicity, they are defined in the same Python module but they could be very well in different ones. from zato.server.service import service class MyAPI(Service): def handle(self): self.invoke('create.account', data={'username': 'my.user'}) class CreateAccount(Service): def handle(self): self.logger.info('I have been invoked with %s', self.request.raw_request) MyAPI invokes CreateAccount by its name - the nice thing about it is that it is possible to apply additional conditions to such invocations, e.g. CreateAccount can be rate-limited. On the other hand, invoking another service means simply executing a Python function call, an instance of the other service is created (a Python class instance) and then its handle method is invoked with all the request data and metadata available through various self attributes, e.g. self.request.http, self.request.input and similar. This also means that at times it is convenient not to have to write a whole Python class, however simple, only to invoke it with some parameters. This is what the next section us be about. Invoking Pythom methods Let us change the Python code slightly. from zato.server.service import service class MyAPI(Service): def handle(self): instance = self.new_instance('customer.account') response1 = instance.create_account() response2 = instance.delete_account() response3 = instance.update_account() class CustomerAccount(Service): def create_account(self): pass def delete_account(self): pass def update_account(self): pass There are still two services but the second one can be effectively treated as a container for regular Python methods and functions. It no longer has its handle method defined, though there is nothing preventing it from doing so if required, and all it really has is three Python functions (methods). Note, however, a fundamental difference with regards to how many services are needed to implement a fuller API. Previously, a service was needed for each action to do with customer accounts, e.g. CreateAccount, DeleteAccount, UpdateAccount etc. Now, there is only one service, called CustomerAccount, and other services invoke its individual methods. Such a multi-method service can be hot-deployed like any other which makes it a great way to group utility-like functionality in one place - such functions tend to be reused in many places, usually all over the code, so it is good to be able to update them at ease. Code completion There is still one aspect to remember about - when a new instance is created through new_instance, we would like to be able to have code auto-completion available. If the other service is in the same Python module, it suffices to use this: instance = self.new_instance('customer.account') # type: CustomerAccount However, if the service whose methods are to be invoked is in a different Python module, we need to import for its name to be know to one’s IDE. Yet, we do not really want to import it, we just need its name. Hence, we guard the import with an if statement that never runs: if 0: from my.api import CustomerAccount class MyAPI(Service): def handle(self): instance = self.new_instance('customer.account') # type: CustomerAccount Now, everything is ready - you can hot-deploy services with arbitrary functions and invoke them like any other Python function or method, including having access to code-completion in your IDE.
https://zato.io/blog/posts/service-container.html
CC-MAIN-2022-27
refinedweb
612
54.83
Fragmentation in the Windows World 413 Greyfox writes "While various members of the Industry press have been raising the spectre of potential Linux fragmentation, we've been seeing some very real fragmentation in the Windows world. This story details the fact that there are now 7 different versions of the Windows 98 second edition and they're not all the same product! Add that to the assorted versions of Wince, NT (3.whatever to 5.0 betas) 95, and the die-hard 3.1 users who are STILL out there and you have a real mess on your hands. And programs for most of these versions of Windows are much less portable at the source code level than UNIX programs are. I've got a fair shot at taking any given UNIX program (Say, Gnome) from Linux to HP/UX to SCO or Solaris and having it work without any (or any major) changes to the source code. Most of the time you'll have to write your windows code from scratch. " GNOME is just starting to be portable (Score:1) So... it's 95% working on Solaris now, and I'm happy, but... Do you know how many times I've found #include linux/*.h in gnome code? or how many times I found that shell scripts with #!/bin/sh really meant #!/usr/local/bin/bash (sorry, linux users sh and bash are not the same!) I'd seen some #define CFLAGS -m486 stuff in there as well or sound stuff hard coded to use I can't blame linux developers for writing software to take advantage of linux features like glibc2.1 and linux headers, but be aware that the result is code that needs to be ported to other unices. Re:Rewrite Windows code from scratch? (Score:1) Ugh. Re:Rewrite Windows code from scratch? (Score:2) Most code would benefit from a recompile and some minor changes, but it's not necessary. My experience with Win98 SE (Score:1) 1) Fresh install on a Celeron 400 with an unusual zenon motherboard with built-in components. Runs fine. In fact, it is my main system which I do everything on, including software developement, and I never turn it off. It stays up for weeks at a time without any system crashes. 2) Upgrade from Win95 OSR2 on a Pentium 166MMX with standard components. This system runs fine as well. In fact, the Win98 install/upgrade went off without a single problem and it cleared up several nagging problems I was having with Win95. I also happen to use win98 SE on several systems at work. I am quite pleased with the product and I would recommend it to anyone. I especially love the internet gateway feature they've built in. Now I don't need a linux system to get my home network on the internet. That is my 2 bits on win98. Re:ah, good old build 950... (Score:1) Re:40% incompatibility for W2K? (Score:2) Re:Visual Basic version madness (Score:1) However, even after I had a full upgrade path to 6.0 (what the customer wanted it re-coded in), I STILL had a problem with legacy controls inside the forms. It was a monsterous project, to port a simple futures calculator. After that fiasco, I decided to stick to MFC only (when I actually program under Windows Where I work, we had problems with Word and Access macros too. If you don't upgrade version by version, you'll be left behind in the Microsoft world... Win98 SE (Score:1) I am sure fragmentation doesn't help either, the more fragmentation, the more codebase. In the world of Linux, fragmentation happens within several different companies (like distribution) and sometimes in various code-bases, but the fragmentation isn't covered by any one company, like with MSoft. The extra effort hurled into supporting and upgrading all these different fragmented code-bases, Win9x in all it's forms, winNT, W2k, WinCE etc. etc. Maybe it is time for Windows to stop charter everyone and their kitchen sink, and start putting effort into releasing systems that fix things while not breaking other things. Win98 SE got the cute nick-name: Shit Edition after it refused to work on several peaces of hardware which its "old, buggy" version works just fine at. In some 15 installs, the new edition has only worked on 2. Someone will probably scream: change your hardware, but seriously, I would rather just change to an OS that actually work with what I have (it seems like 98 SE has a special affinity for not working with various network cards). It's about time Windows got it's stuff together, soon it won't even be good for playing games at if this is to continue, thus loosing it's usability all together. Sorry, minor typo (Score:1) Sorry... Re:all those versions.. (Score:1) My experience is much the same. Win95 SR1 was crashing on me at least once every 15 minutes. I then upgraded to 98 RC 2, and crashed maybe once an hour. After further upgrading to RC 3, the machine ran better than it ever had for almost 9 months. At that point i was installing a new HD and doing general cleanup and decided "hey, while i'm at it i'll just put 98 full ver on" Big mistake: 80% of the time the machine doesn't boot properly. 3/4 of the time i get at least 2 blue screens, even on a simple restart. every time this happens i have to reset, boot into safe mode, _then_ restart, at which point normal mode comes up fine. But it works great after that! And people wonder why i leave the damn thing on all the time. RC 3 was/is far more stable than the final ver in my opinion. Re:How about porting from win32 to win16? (Score:2) Re:Windows CE (Score:2) This means that there are many different versions of Linux in your view. Not to mention how many hundreds of OS/processor combinations of UNIXes there are...makes Windows look positively unified. Re:Rewrite Windows code from scratch? (Score:1) I feel REALLY stupid now.... (Score:1) LESS THAN OR EQUAL TO! I keep forgetting. (preview, PenguinDude, preview) Sorry..... Re:Not true (Win16 to Win32) (Score:1) Who is going to go out and develop a new Win31 project who doesn't already have 16bit development tools? The last release of Microsoft's 16bit compiler (8.0c) runs fine under Win98 and WinNT, even supports long filenames. As for "API reading from a serial port" changing, the only real changes were a few function names (open, close, read, write all changed to standard file i/o versions). All the wierd DCB stuff is still supported with no changes! I'm near the end of porting a big Win16 program to Win32 (maintaining both at the same time from the same source). Very little of the work is API oriented. Most of the work is rewriting 16bit assembly language, and wierd 16bit specific stuff with FAR or HUGE pointers and other kludgey hacks. Re:Pro Linux FUD?? (Score:1) Which is a common occurence. Take the office I just stopped working at. There were five computers there: two Dells with P166's, an older Gateway, a pair of brand new Gateways with PII's, and a no-name clone. The old Gateway flakes out three or four times before lunch, but that's because the user is a serious 'cute little applet' freak. One of the Dell's can't run for more than 20 minutes without a GPF some days, and other days complains not at all. The new Gateways haven't had time to get twitchy and run happily all day long with little complaint. The other Dell (which used to be mine before I got a new Gateway from the boss to shut me up) once had to be rebooted 10 times in a day. The administrative intern in the office told me I was the only person she'd ever heard complain about Windows, and that she'd been using it for years with no problems. Fact is, there's no telling some days. It took me a long time to get over using a computer very gingerly once I installed Linux at home, because I was tired of things breaking every ten minutes. Other people use their Win boxes as functional multitasking workstations and have little trouble. I've given up arguing with anyone about it because I've learned that 'works for me' is the great argument ender. People will denigrate happy Windows users to the high heavens as having anemic skills and no clues, or as being brainwashed victims of the Redmond thought control sattelites, when the fact is, toy operating system or not, it gets the job done for people. Especially (but not exlusively to) people who are sort of timid about doing much but using what their box came with. I personally don't like Win 9x much as a result of my experiences, and have the usual complaints about Microsoft's business practices, but I've learned my experiences are on the negative side of a pretty wide spectrum of opinions from all sorts of people. I just wish people would stop worrying about Microsoft and get on with their lives. As much as I despise the apparent need some people have to define themselves in social terms based on their computer's OS, it's hard to ignore, because the fixation on the topic permeates the discourse of the community. ---------- mphall@cstone.nospam.net Re:Pro Linux FUD?? (Score:2) Compare that to Linux binary compatibility. Just look at the mozilla binaries directory. Just for the x86, there's separate binaries for glibc and libc5 and libc4 systems. Plus there's the fact that none of those will run on old a.out systems. Re:Rewrite Windows code from scratch? (Score:1) Rewrite Windows code from scratch? (Score:2) M$ is tearing their ass up to maintain backward compatibility, and that's one of the reasons they aren't getting on. And as of the different Win98 versions, what do you have to recode to get it from one to the other? The changes are cosmetic, because having no source code every little freaking change on source level needs a completely different version. First ? -- "The use of COBOL cripples the mind. Its teaching, therefore, should be Wine is crap? (Score:1) Half of the stuff runs? Like, what might that be? From the apps I've tried, nearly all of them worked to what I expected. -xk My favorite part: (Score:1) A confused Win98 user, discussing the random "shutdowns" experienced under Win98 SE: She sounded mildly surprised! -k Re: Fragmentation in the Windows world (Score:1) Backwards INcompatibility (Score:1) That's absolutely correct. I can write one program and have it crash with exactly the same error code under every version of Windows! ;-) Re:example (Score:1) S.S.D.D. (Score:1) Re:Pro Linux FUD?? (Score:1) Wordperfect. Coincidence? Re:My favorite part: (Score:1) First of all, it doesn't look like you read the article in question. Second, I ran Win95 for about a year total, and let me tell you, it crashed every single day, sometimes more than once. I don't have any odd hardware, just an NE2000 ethernet card, an SB AWE32, and an ATI video card. If that's not standard hardware, supported by just about every OS, then I don't know what is. And I did run NT for a time. What drove me nuts (other than the weird, super hard crashes that occurred every couple of weeks or so) was the configuration. Every little thing required a reboot--that doesn't drive you insane? And I bet the specs on NT uptime wouldn't be what they are (and they are pretty poor to begin with) without the constant rebooting. But back to Linux. Since I have installed Linux, my computer has been like a well trained puppy--always doing exactly what I want it to do, and never shitting on the rug, if you know what I mean. It is the best OS for the desktop. -k Re:Not true (Score:1) Fragmentation isn't just about applications... (Score:2) The fact that Wince, Win98, and WinNT all have entirely different source code and are administered differently is IMHO a big liability. If I outgrow my Linux box I can upgrade to a big SGI (or Sun or IBM) Unix box and it still works more-or-less the same way. Sure the GUI admin tools look different, but their not so radically different that I can't adapt in a day or two. On the other hand, if my company wants to change all it's Win98 desktops for WinNT, I have to learn a different operating system entirely. Unlike Unix, the similarities are cosmetic and the differences are fundamental. Changing technologies like this is a big deal for companies that have large existing staff that either need to be replaced (in a very competitive market) or retrained. Windows CE (Score:1) Re:Pro Linux FUD?? (Score:1) I think the story poster was trying to highlight that the IT media seems to be concentrating on potential linux fragmentation without looking at other OS's. That said, as far as the topic goes, if it is fud, it is fud-light. I've personally experienced significant issues moving our source base forward from Win16->Win32, and then sprinkling "if (HIBYTE(HIWORD(::GetVersion())) lessthanssymbol 0x80)"'s arround my printing code. If I was to bitch it would be with printing and gdi in particular. (I can't seem to get the less than symbol to appear when posting "plain old text" I must be an idiot.) =) Also, the various releases of comctrl32.dll have subtly changed certain message behavior of some controls our software uses. Admittadly it was somewhat cosmetic, but we have a commitment to making sure our UI is consistant across software releases, so it was something we had to spend time working around. Tell the guy... (Score:1) We just moved at pretty large app from 3 to 6 at work, so I know for a fact that it's possible -- Re:Simple recompiles (Score:1) It's NOT A LIE. It's using their definition. (Score:1) Calm down, calm down. Alot of what the author said was false. Alot of it was FUD. BUT.. I think everyone is missing the point. The point here is that it IS true that if a software vendor designs a program SPECIFICALLY for, say, Win95, there's a fairly good chance that it will NOT work on ALL versions of Windows. Sure, with a little effort, you could probably solve the problem. Sure, a clueful developer would not design software that way. THAT DOESN'T MATTER. M$ and the Linux FUD-meisters are going on and on about Linux fragmentation, yet the issues Linux faces if the distributions become much more different are IDENTICAL to the ones the various Windows distributions (NT, CE, 98, 3.x) face TODAY. ie: MINOR issues that a smart software developer can work around and/or suggest user-implemented fixes for. So let's keep it apples to apples (no pun intended). EITHER Linux is in no danger of fragmenting OR Windows is fragmented already. You can't have it both ways and this article, in my opinion, is merely an attempt to attack Windows using the same logic and language used to attack Linux. Indeed: Windows is not a unified platform (Score:2) Among the two major product lines, consumer Windows (95/98/98SE) and small business Windows (NT3.51, NT4, NT2000), there are lots of differences and incompatibilities; even major chunks of the APIs are different. Then, there is CE, which is really just a completely different operating system. With all that complexity, Microsoft's operating systems still don't scale over a large range of hardware: all you get is systems that run from hefty PDA to small server. Linux spans that range with just a single OS and API. UNIX/POSIX systems more generally run from on anything from small, embedded, real-time system to the largest scientific supercomputers, parallel machines, and mainframes. Compared to UNIX/POSIX, I find that both the interoperability and the scalability of Windows platforms are disappointing. Re:It's this kind of crap that kills us all (Score:1) Fragmentation a good thing? (Score:1) What if we could get the Windows market to fragment in the same fasion? Could we end up with a Windows that might actually be usable and stable? Jack Re:I feel REALLY stupid now.... (Score:1) Ok, deep breaths, PenguinDude. Count to 10... Re:Not true (Score:1) Microsoft included an entire separate CD-ROM of Visual C++ Version 1.52 in the shrinkwrapped box that Visual C++ 4.0 came in, because VC 4 can't build Win16 (or DOS) binaries. Re:Rewrite Windows code from scratch? (Score:2) Re:It's this kind of crap that kills us all (Score:1) Win9x has UNICODE (Score:1) The difference is: Win9x internals use ASCII WinNT internals use UNICODE if you decide to use the other one, it has to convert it to make use of it. Re:Rewrite Windows code from scratch? (Score:1) Actually the 'Basic' API isn't all that portable either - if you start from Windows NT and want to use all of it's features. For instance, I once took advantage of the 32-bit GDI coordinate space under NT4 for a map function (Considering 4Bx4B is _far_ bigger than say 1024x768, this works really well and allows smooth scrolling/zooming.) It could be modified for Win9x, but it wouldn't work nearly as well. Same goes for many of the other cool NT features... I wouldn't be suprised if most of the things in the 1st edition of "Advanced Windows NT" wouldn't work on 95 quite right. :) Re:Pro Linux FUD?? (Score:1) Am I alone in this happening to me? (yes, well, obviously if it happened to another person I know, I'm not really alone in it, but I was speaking within the context of slashdot. I think. Anyway, it's coffee time. mmmm.... raspberry chocolate.......) Re:Visual Basic version madness (Score:1) The main problem I tried to illustrate was the fact that ealier versions of VB used controls that were no longer supported in the later versions. As any VB programmer knows, when you got a ton of controls all squeezed onto one form, maintenence becomes a real bear. Anyone know of a way to port the old controls over to VB 5.0/6.0? I'd rather use the new controls for 5.0/6.0 that replaced the legacy ones, however, if there is a faster way to simply import the legacy controls, that would make the job a piece of cake. Re:Pro Linux FUD?? (Score:1) I test Windows 2000 and every single Win32 app has worked besides games which the installers were written in a stupid way, instead of checkig for DirectX they check for Windows 9.x and then fail....Thats not Microsoft's fault, that is the people who write the software. Re:Not true (Score:1) I may be forgetting a few of the details in this mess (nitpickers, have at it! we'll all be fascinated by your recall of arcana!) but I remember what a horrible mess it seemed at the time. XCDRoast was great, once I had it installed. My own experiance. (Score:1) I just got a full time Linux Programming job. I will be leaving a Windows only company. Here is my experiance. First let me say that I work for a realy big company. The second largest in the world. Here is what I found about windows programming and the reason I can't wait to delete all the windows programming software off my Hard drives at home. First I will start with C code. At a large company you have a lot of programs that are old as dirt. Like one C program I had to kinda redo to get to compile with MSVC++ 5.0. When I looked at the code for the first time I thought It was very well written. There were even If defs on the reusable source files that could be re-compiled to OS/2 and Unix. It was these files that caused the most problems. Why you ask? Because Microsoft in a attempt to kill portable C code has changed alot of functions that were ANSII Standard functions to functions that Now have a underline in the frount and capitals in the function name. This turned out to be a nightmare since the Help files for Microsoft sometimes did not make the new names clear. Not to mention the fact that now the code was not nearly as portable as it was befour. since most of these functions were part of the code that should not have needed to be changed to compile on other OS's. After 4 years writing C code that should be as easy as a recompile to another OS this deeply affected my opion of Microsoft. Okay Okay... so this is a pretty easy problem to fix.... It was time consuming and still sucked. It is also a reason why I will be using a different compiler then MSVC++ in the future. ( if i ever do windows again ). Second problem. This time MSVB. From VB4.0 16bit to VB6.0 32bit. This time a control from a 3rd party that worked perfect under windows 3.11 did not work at all under windows95. So... I had the task to compile to VB 32 bit. When trying to compile under VB6.0 I was presented with a ton of other problems. Seems that all the other 3rd party Objects that was used in the VB program would not work at all under VB6.0. Seems that Microsoft had purchased all these controls and put them into VB6.0 with the same names only now they all used the Variant data types. This nightmare extended it self to all most every method that displayed data to the user. So yet again. Not a real hard problem, but very time consuming and it sucked. I have done quite a bit of Unix type programming in my time. Most of the time I would write things like BSD Socket services on linux first. Almost every time I would take my Linux source and move it to UnixWare, SCO, or BSD. All I had to do was to change the compile options in the Makefile. The *nix code I wrote 4 years ago can still be compiled on all most any *nix box. I can't say that for my C code for Windows. Now I don't even try to make windows code portable. I don't think that the "Hard Core" windows programmers even care about running there code on other platforms. So maybe they are more inclined to dink around every time Microsoft comes out with a new Dev Package. I'm not. Thats why I'm leaving the world of MS Windows. Besides InfoWare ( or Web based applications ) will make the OS a client is using a mute point anyway. Also. since I work for a really large company that has based there application on MS products there will be quite alot of programming needed to be done when they try to get the current application Security model working on Win2000. I'm glad I wont be around for that! Re:Overloaded system (Score:1) Re:Not true (Score:1) This is true. Portability is a non-issue for 99% of Windows developers; my Win32 program will run on any version of Windows 9x without any changes. In many cases it will also run on NT. Portability is a must in the *nix world where there really is fragmentation between operating systems, window toolkits, &c. Incidentally, I see a bunch of questions about porting a program from Win32 to Win16, which is just plain stupid; it almost never needs to be done. You might as well port it to MS-DOS. If you have a win32 program that absolutely needs to run under Windows 3.x, you install the Win32s library, which allows Win 3x to run most 32bit apps. One of the reasons I have not started programming Linux yet is that I am loath to waste my precious time distributing two versions for just KDE and Gnome. Clarification on POSIX Support (Score:1) The posix subsystem is separate from the win32 subsystem. That means that you can't make any calls to the win32 API if your program uses the posix subsystem! No graphics, none of the standard windows tools. As such the POSIX subsystem is almost entirely useless, and the only place I have seen it used is in the latest 'get administrator privs' crack. Enh; that's not really the problem... (Score:2) Now, in practice it's not that big a problem for most people (the differences are not normally a compatibility issue), but it has been a serious headache for the folks working on winelib with ANSI compilers that don't like the Microsoft-isms. --- Re:Rewrite Windows code from scratch? (Score:1) Now, the reverse is Not true, but can be if you are moderately careful. porting VB is a major pain in the ass (Score:1) -A setup program for a VB3 program made with the VB3 setup wizard works in 3.1, crashes in 95 -the grid and database controls changed significantly between VB3 and 5, requiring a great deal of recoding to port the program. -One VB5 program which, while compiled as a -One VB5 program which ran fine in the VB IDE, but crashed when run as a And they wonder why they're losing developers..... Re:Article (Score:1) Create a blank text file on your formatted harddrive called "ntldr"; no ".txt", just "ntldr". Install from your Windows 98SE upgrade CD. Win98 will see that you have NT on your system and will "upgrade" it, and replace any and all portions that are "missing". You will need to have the CD drivers on a boot floppy or boot from the CD. For Win95 create a text file with these lines: [Setup] ProductType=1 ccp=0 Run D:\win95\setup.exe C:\"filename".txt from DOS. This will install a full version. It's important to note that you must have a legal right to do so, but ability does not equal authority. All of this assumes, of course, that you want to install Win 9x on a system. I personally wouldn't want major software upgrades cluttering a system regardless of the OS. If we've all followed good housekeeping procedures with regards to partitioning system and data, this shouldn't be a traumatic experience. Re:Not necessarily fragmentation, but still a pain (Score:1) I've changed the motherboard twice on my NT4 machine with the same installation. I had to make sure that the drives stayed on the same Controller and in the same master/slave configuration. My ISA modem had to be set up again, but that was it. The only time I've reinstalled was for shits and giggles and to reconfigure where the OS and the data lived. No. Fragmentation in a closed system not good... (Score:1) How would you suggest that we "[take] a copy [of Windows] and rewr[i]te it and ma[k]e it better and better"? Without the source, we cannot improve things. It just means that Microsoft's (and their OEMS') already overworked undercapable support structures will become worse and more monolithic. This whole situation is evenn further compounded by Windows' braindead approach and culture around shared libraries. Would any Linux users here be happy if, when an application was installed by an unprivileged user, without asking the user, it went and replaced libc.so or any of the X libs with a newer, incompatible version? This _is_ the accepted way of doing things on Windows. The net result is you have a complex system of applications, all stepping on each others' toes. Would you like to be providing the support for this system? FUD (Score:1) yea, i had that happen... (Score:1) it's werd to one day bootup your system and get 'error no operating system found'. this really freaked me out. and was one of the meny reasons i moved to linux. nmarshall #include "standard_disclaimer.h" R.U. SIRIUS: THE ONLY POSSIBLE RESPONSE Re:Not necessarily fragmentation, but still a pain (Score:1) The only thing that was more difficult in Linux was my soundcard, but only because it comes with a special install disk for Windows. Without that disk, Windows has problems with it, even though it's a genuine SB. The only thing I had to manually enter any settings for was my soundcard. Every thing else is automatically figured out at boot. Re:right !!! (Score:1) Complete bulshit ! Re:how multiuser is 3.1 (not 3.11)? (Score:1) because adobe takes same "multiuser" policy as microsoft." Well, I know I'm going to get flamed for this.. but.. NT _is_ multiuser. Each process has an owner (which can be any user on the system), and inherits the permissions of that owner. Processes running on the machine at the same time do not need to have the same owner. In fact, when you're logged onto an NT server, there's usually processes owned by three other users (System, WWW, and maybe Administrator) running continuously during your session. In this way, NT is somewhat like UNIX. However, from NT 3.x to 4, there has been no way for multiple users to interactively log on concurrently. The windowing system is basically based off of the same code that was introduced in Windows 3.0, which doesn't even pretend to be multiuser. The only functionality that NT 4 really lacks compared to Linux in "multiuser-ness" are a remotely displayable windowing system, a text-based way to log in remotely (telnetd/ssh/what have you), and multiple virtual consoles. This changed a little bit with the introduction of RConsole in NT 4 Resource Kit, which provides a text-based interactive login for remote users, and VDesk, which provides up to 9(?) multiple virtual desktops (in which you can be logged in as one user on one desktop and another user on the other). However, Windows 2000 is where a Microsoft OS will finally have all the 'real' multiuser perks. Remotely displayable GUI (which I'm sure the Samba guys or someone else will produce a Linux/X client for) will be included (in Win2k Advanced, at least), and get this.. a telnetd. Plus the functionality of VDesk. As for the fact that Photoshop doesn't save your settings correctly, blame Adobe and not Microsoft. In the Registry (which, yes, is evil), there's HKEY_LOCAL_MACHINE and HKEY_CURRENT_USER. When you log into an NT (or even 95 or 98) box, it reads in the values of HKEY_LOCAL_MACHINE, and then overrides those values with the values in HKEY_CURRENT_USER, if they exist. Applications are supposed to store settings like window position, etc, in HKEY_CURRENT_USER, and only the out-of-the-box defaults in HKEY_LOCAL_MACHINE (in case a user hasn't used that particular app before). However, application vendors are often stupid and store all their settings in HKEY_LOCAL_MACHINE, which means that no matter who's logged in, the app uses the same settings. I don't mean to sound like an NT cheerleader here.. I actually use Linux at home and prefer it. However, it annoys me to see people bashing NT for things that (in my mind) it actually does correctly (such as multiuser-ness in the registry and dynamic web pages). One would sound much more intelligent by bashing NT for things it sucks at (such as being a mail server or not having to reboot). Re:oh.... (Score:1) Matt Re:Best Method to Upgrade Windows 9.X (Score:1) Re:Apps and kernel versions (Score:1) yeah, closed-source binary only kernel modules from don't count!" ipchains. It requires at least a 2.1something kernel (though you can patch a 2.x kernel to work with it) and doesn't dig pre 2.0 kernels at all. SunOS Fragmentation (Score:1) Among other things, I've had to deal with the Sun WorkShopPro family of compilers, particularly f77 and f90, versions 3.0, 3.2, 3.5, 4.0, 4.2, and 5.0. Taken pair-wise, that's 12 combinations -- no two of which are link-compatible. (Of course, Sun claims that they are... :-) I wish they'd reimburse me for about $12K in extra labor that incompatibility has cost us just this last week;-( And the PHB's are Sun-bigots, in love with their stuff. It's hard to make them listen to reason. fwiw Re:Hmmm, i just wana to say.... (Score:1) Re:Not true (Score:1) Embedded) that have next to no support for the win32 API." When will people understand that WinCE (the most appropriately named Microsoft product ever) and NT Embedded are not Windows and Windows NT? They're completely different operating systems, with different code bases, and similar but different APIs, that try to share a common look and feel with MS's other products. It's kind of like complaining that porting your NeXTSTep wharf applications to LiteStep isn't seemless. Similar look, completely different guts. Re:Not necessarily fragmentation, but still a pain (Score:1) Move your NIC, windows will install a second driver for it, losing ALL the settings for the thing... Install a new mother board, when you boot windows you will have no cdrom, and it will try to install drivers for the disk controller, but you can't use the cdrom to access your friggin win98 disk! To fix this, boot off a floppy and load some dos cdrom drivers, then boot windows and you have access to your cdrom again. in linux... Install new mother board. Boot Linux. Linux sees the disk controller just fine, finds all the hardware you had in your machine and boots without a problem. Re:THIS STORY IS A SERIOUS ISSUE! (Score:1) Wow...a tad touchy, aren't we? In reply to the rest of your comment: Let me guess...you also believe that little green men abduct people on a regular basis. MS has played a big brother role...that's not in question. But you toss out the idea that MS distributes unseen patches...now, correct me if I'm wrong but something like that would leave finger prints...such as extra packets being transmitted, changed dll's that some AV program would pick up. You might suggest that the Windows API allows for such modifications. Well, considering the quality of the rest of Windows, I think evidence to that point would have already come about. When you have proof of such calls, then you can start throwing out your "theories." When it comes to MS specific URL requests, I ran a couple request on altavista and yahoo using both NS and IE on a Mac at my place of emplyoment. Identical results (at least the first 20). Let me play devil's advocate for a second: kfm:KDE::ie:Windows Without kfm's web intergration, KDE would be a significantly different desktop enviornment. Are you telling me that KDE can do web integration, and MS can't? As far as the "Windows 95 Compatable" deal goes, its MS..its a big corportaion, and like most big corporations, its multifacited(sp?). One comittee judges 3rd party software, and another one actually writes the OS. Such problems are to be expected. Wow, I sound like a Microserf here. Makes me feel kinda dirty. -------------------------- Re:7 different versions of SE? Pish. (Score:1) We're not talking here about WinNT vs Win9x vs Win3.1. When a customer buys one of those products, they know what they're getting, just as customers understand that SuSE and Redhat may be considerably different. But what about two Win98SE people, who each got it by a different method? Something might work for one, but not the other, because of subtleties related to the install. Confusing, no? backwards compatibility != lack of fragmentation (Score:2) Backwards compatibility isn't the same as lack of fragmentation. Major chunks of the APIs are different or missing between the different Windows versions (even leaving CE out of the equation). What is shared is often incompatible in subtle or not so subtle ways. And a lot of the backwards compatibility is only a short-term workaround and impractical (do you really think you can use an 8.3 version of Word on NT for long before it becomes a logistical nightmare?). Similarly, in theory, you can keep your old source code and still compile it. But in a commercial development environment, that's not really an option. When Microsoft comes out with MFC, ATL, or a new database access library, you have to use the new stuff to be able to take advantage of the new features and remain competitive, and that often requires a fundamental rewrite of your application. Which matters to you more as a developer and user depends. But I think it is accurate to say that the Windows platform is quite fragmented, even though it may offer a lot of backwards compatibility. Re:Fragmentation isn't just about applications... (Score:2) The last time I had an IRIX box on my desk, I missed a few fine points from GNU system utils (on my Linux box). No problem, they all compiled and ran 'out of the box'. Linux can run SCO apps (I understand that SCO can now run Linux apps as well). If different OSes from different vendors with diametrically opposed philosophies can manage to be compatable, why can't two teams within the same company manage to do the same? Re:Rewrite Windows code from scratch? (Score:2) Where I work, we develop Win32 apps to run on Windows 95/98 machines. The same code base typically will work for these platforms. However, in our experience, it is rare that the same code base will produce a properly functioning application on any variant of NT, or on Win 3.x. In fact, one of the developers on our staff is a big NT proponent and develops exclusively on that platform -- developing components whose target platform is Win95/98. It is far and away the norm that the stuff he writes will work under NT but not on the target platform. He is always surprised at this. Likewise, apps the rest of us are developing under Win98 typically misbehave/break under NT. Maybe we're morons, but I don't think so. Sure MS is "tearing their ass up" to maintain backward compatibility. That doesn't mean they are accomplishing their goal. And, in fact, it is well known that apps often don't work on one platform that do run on the other. Hell, they seem to have enough trouble keeping apps running under different iterations of Windows 9x! I do think the article stretches its point a bit when defining 7 variations of Windows. It's a bit unfair to call separate distribution methods different variants. Sure, based on the distribution method you choose, you may wind up with a different set of code than what someone else got, but the same point could easily be made for other OS's. How many different ways can you aquire Redhat Linux, for instance (source CD, RPM CD, download source, download RPMs, download tarballs, just to name a few)? His broader point is valid, however. We have at least Windows 3.x, WfW 3.x, Win 95, Win 95 OSR2, Win 98, Win 98 SE, Win NT 4, W2k, and Win CE. These platforms could be fairly judged to be at least as fragmented as some are claiming exists with Linux. I think the level of fragmentation on the Windows side is actually higher. Re:Rewrite Windows code from scratch? (Score:1) Re:How about porting from win32 to win16? (Score:1) an recompile What's the motivation for porting a NT/Win2k application back to Win3.11? What exactly have you been smoking? It's like porting a big Linux application to Minix. Re:But Variety is the spice of life... (Score:1) Here's my theory. If you set up ANY operating system in a shitty and haphazard manner, it will crash. This is true for both Linux and NT. Linux comes out of the box (or the tar file, or the ftp site) set up in a fairly well-done manner. It's also relatively easy to set up Linux in a well-done manner (it's hard to do it perfectly, but that's hard on any os). Windows 95/98/NT, however, comes out of the box set up in a shitty and haphazard manner. It's also fairly difficult to set up in a well done manner (and hard to do in a perfect manner, also). However, if you really know what you're doing with NT (which I do, thank you), you'll actually get a groovy thing called uptime. The NT server here has been up since the day they released Service Pack 5, and before that, was up till the day they released SP 4. (and it handles a lot of shit. Shared drives and printers, WWW Server, SQL server, etc). The Linux box here has been up since I moved the kernel to 2.2.10, and before that, was up since I moved the kernel to 2.2.5 (it also handles a lot of shit.. Mail, DNS, WWW, FTP). So.. as I see it, NT can be just as stable as Linux. The difference is, Linux usually comes to you already stable. NT you have to put a lot of work into. Re:Cyrix?? (Score:1) Cyrix chips suck. Theory: Crappy companies that release crappy hardware only write drivers or design hardware for Windows consider the driver bug-free if it doesn't crash within 5 minutes of light use. Then they go out of business or get bought and eviscerated by the company that bought them. Linux does better with this crappy hardware than Windows does because many drivers for Linux are written by people who really can't afford good hardware, but hate having their box crash. Therefore, they first write drivers that expect the hardware to do exactly what it should, and then write workarounds for particular pieces of crappy hardware so that they'll work correctly, too. Ever compiled a kernel? There's stuff like this all over it, and actually, that's a very good thing. So in many cases, the Linux drivers/code actually work better than the manufacturer supplied drivers/code. Windows sucks with this crappy hardware because (a) the company doesn't care, or (b) if you don't have money, MS doesn't really care about you, and since they can certainly afford the top of the line, they aren't plagued by crashes from crappy hardware. If MS has to write a driver, they just write code that expects the hardware to do what it should, and freaks out if the hardware doesn't. Re:Visual Basic version madness (Score:2) The experience was vile. All our old VBX controls were replaced by OCXs. In the case of one of the controls we used (Truegrid, if anyone cares), the entire interface was rewritten. As a result, every form I had using a grid had to be rewritten. In addition, the database interface had some bizarre bugs that required major changes in the tools I used to generate SQL queries. Suffice it to say that the whole thing was a hideous mess. My best advice after going through that is to stick to whatever version of Visual Basic you first started using. Do not upgrade under any circumstances. D ---- Re:Not true (Score:2) So what happens when you compile a multithreaded application on a kernel-0.99-libc4-System, sucker? You install the pthreads library first? What a load of FUD (Score:2) Others have addressed the way that many Windows applications port more easily between 95/98/NT[1] than UNIX applications port between e.g. Linux/AIX/HPUX. Heck, thanks to glibc-maintainer incompetence, even portability between Linux distros is often questionable. What I'd like to point out is the issue of driver portability. Windows 98 adopted the "Windows Driver Model" which is a minor variation on the model NT had used all along. While this doesn't necessarily mean that the same driver binary will work with both, the changes required are trivial. Compare this to the situation in UNIX-land. Driver models are drastically different between UNIX versions, even between those that supposedly use standards such as DDI/DKI or DLPI. Occasionally a driver can be ported with little change, but more often 30-50% of the driver code has to be different for each platform. The saddest part is that there have been efforts to agree on a common UNIX driver model, but people right here always shout them down as a way to make it easier for vendors to ship closed binaries. *sigh* You want openness, you take fragmentation with it as a necessary consequence, and I wish everyone who tries to have their cake and eat it too (in this or other contexts) would just choke to death. [1] As an owner of and programmer for a CE device, I think it's fair to say that CE isn't truly part of the "Windows family". Yes, it has the same GUI, mostly, but the internals are totally different. Re:But Variety is the spice of life... (Score:2) it handles a hell of a lot more that linux does. But what Linux handles, it handles well. That in spite of the driver writer generally having NO documentation, NO vendor assistance (occasionally outright vendor hostility), and charging nothing for it. MS handles a lot of hardware because the hardware vendors wrote the drivers. On the other hand, when's the last time you booted NT on your Indy or SparcServer? Re:Fun with VB (Score:2) Now that we have 400mhz machines running it, it's fast again. But of course we haven't upgraded to VB 6. My recommendation is that we don't. D ---- Re:How many linux paches are there? (Score:2) Probably as many as there are Windows patches. The difference is, most of the Windows patches are MS internal. Most of the Linux patches are also Linux internal, it's just that with open source, internal only means freely available. Re:Not necessarily fragmentation, but still a pain (Score:3) Re:Not true (Score:2) Office 2000 (Score:2) - Standart - Professionnal - Premium - Developpers + Now, double that for the academic versions. + Also add 4 more for the upgrade versions. (but you can't upgrade an academic version). * 2 because there are french translations of all the products, they are all in the same box as most linux apps do. = Total: 24 different boxes of Ms Office 2000. But wait! You can also buy all the products individually (frontpage, word, excel, whatever..) Imagine how fun it is, and how clients are usually pissed when they need to buy these products. Now that's what I call Marketting: This forces most stores to buy a big load of Ms Products, since to get a better "cost" price they have to buy alot. But they also have to buy all the different versions. Therefore, they have to store these boxes somewhere, and that's why in all stores you see a wall full of boxes, since it's either they have alot of this crap, or they won't have any. Therefore, stores are forced to make free publicity for Microsoft products and they make it sound very important because they have a big pile of boxes. But they don't have a choice but to have a big pile of boxes since there are so many different products to display! (that was hard to explain, don't be too supprised if I typed it all wrong Re:Rewrite Windows code from scratch? (Score:2) DEVICEHIGH=C:\FORTE16\DRIVERS\CDMKE.SYS Forte16 is the soundcard, and this non-IDE CD-ROM connects through the soundcard, so i assume these are the CD drivers. It's a 16-bit soundcard, and the drivers are 16-bit DOS drivers, yet are still loaded in CONFIG.SYS, and appear to work fine. Pro Linux FUD?? (Score:2) there are some issues relating to driver incompatibility between NT and 9x, but other then that, all win32 platforms are compatible. Apps don't need recompiling. NT and 9x can run win3.1 code, and 9x can run DOS code. (NT can run some DOS, but not all. I think) I realize that a UNIX program is not going to compile out of the box, (although NT is postix compliant, I'm not sure if its a very good implementation or not though) But neither will Mac or BeOS program. what does that have to do with anything? I've seen this type of anti-MS FUD before, in a document stating that win64 OS's wouldn't support win32 properly. This was because the original version of win NT didn't support win16 very well (NT 3.5 has an emulation layer, or something). It was blatantly false, however, considering that windows NT currently supports 16 bit code, and 9x is half built of it (not that that's a good thing....) Please realize that when you are dishonest, it calls into question everything you say. I for one never believed anything Apple said in there information, because some of the things they said weren't true at all (or no longer true). If you don't want to alienate potential linux users, you must not tell them things that they know aren't true. If you don't use windows, or have any experience with it, don't say things about it (the same is true for win* crashing all the time. It's not 100%, but it's not 0% ether) Disclaimer: I use win98, I hate NT, I think linux is cool, and plan to install it, and I dislike Microsoft. "Subtle mind control? Why do all these HTML buttons say 'Submit' ?" Re:Rewrite Windows code from scratch? (Score:2) Even when I create KDE/QT apps main is redundant. You really don't need to deal with main! --------------------------- ^_^ smile death approaches. Re:Rewrite Windows code from scratch? (Score:2) Many NT 3.1 apps broke moving even to NT 3.5. When Office 95 came out, it was accompined by a NT service pack (for 3.51) which, again broke certain things, presumably because NT's Win32 was changed to be compatible with 95's Win32. I'd be real curious if MS Office 4.2 for NT (~1994) would even run on NT 4.0SP5 or Win2K. It ouuughhhht to, but the number of NT workstation users was so low back in those days that the lack of backwards compatiblity probably doesn't affect anyone. -- Re:Rewrite Windows code from scratch? (Score:2) I know of certain Win 3.1 client-server programs that relied on special network drivers and the like that broke under 95. (Much of this stuff was NetWare ODI/NETX stuff with a *.386 driver to go along with it.) As far as CD-ROM drives and SCSI cards, MS put alot of effort in Win95 to get the old DOS drivers to run under 95. (And yes, you can try to run the Novell DOS stuff, although it's troublesome.) The fact that you might want run those old drivers was the primary rational for MS developing Win95 instead of just dropping DOS/Win and just going with NT a long time ago. -- Re:Talking out your ass (Score:2) Yeah, that's why Win3.1 had no mouse or video card support. -- Re:Rewrite Windows code from scratch? (Score:2) From what I heard, the problems with NT/9x apps on W2K are primarly due to the fact that they're ignorant of security and try to do things like installing their DLLs in the system directory. NT4 got around this by shipping with pretty loose security (especially on parts of the registry). MS got smart and realized that if they are ever going to make an omelette they're going to have to break some eggs. For what it's worth, Win2000 is going to ship with a bunch of scripts for the obvious problems (Office 95,97!, etc.), and a tool to switch to box into low security mode. -- NT POSIX (Score:2) Well, if you don't like notepad, you can run vi. -- Re: Fragmentation in the Windows world (Score:2) The fact that NT doesn't do a full blue screen unless the kernel has crashed proves you're full of it. You're example is purely a DOS/Win behavior. Basically, I'm sick of hearing slashdotters try to extrapolate their 9x horror stories to NT because it makes them sound more legitimate. Hopefully people here are smart enough to see past the similar GUI. -- Re:how multiuser is 3.1 (not 3.11)? (Score:2) Actually, blame Microsoft too. Just like Adobe, MS Office 95 and 97 isn't multi-user aware either. (Office 2000 is.) So, if Microsoft can't even code their own applications to take advantage of NT features which have existed for 6 years, why would 3rd parties even bother? -- Exchange/Outlook (Score:2) Umm, Exchange is an server application that runs under a service account on NT. It has nothing do with NT's multiuserness. And to answer your question, it can't because MS Outlook and Exchange groups at MS are too retarded to be even on the same page, so you get a sucky client-server product. Exchange does allow server-side scripts, but they all run under the same privlege level as the server. Not good for user mailboxes. So, your options are to either run Lotus Notes (which has a server-side security model) or a brain-dead type user-level mailstore such as traditionally found on Unix systems. -- Why MS Linux/BSD will never happen (Score:2) MS SQL 7, MS Exchange, MS SMS, COM, DCOM, ActiveDirectory, ADO, CDO, IIS, MFC, WDM, DirectX, etc. Despite this Think of Apple a few years back. Many folks (including some on the board of directors) thought they should drop the Mac and just make high end WinTel boxes. They chose to make Macintoshes, even if meant going out of business. You don't think Microsoft is just as arrogant to do the same with Windows? -- Who needs fragmentation anyway? (Score:2) The question is: is fragmentation a technical problem or a marketing problem? A more important question is: how does fragmentation actually harm a platform? There seems to be a general fear of fragmentation about in the Linux community. But fragmentation doesn't seemed to have harmed Microsoft any; If anything, the confusion just encourages their customers to go out and buy the very latest of everything "just to be on the safe side". In the realm of Consumer Linuxism, this would translate into the latest Redhat or the latest Caldera, wouldn't it? Personally I want everybody to do it my way.
https://slashdot.org/story/99/08/06/0251209/fragmentation-in-the-windows-world
CC-MAIN-2018-05
refinedweb
9,399
73.07
Please convert gnome-keyring to multiarch Bug Description [Impact] Several applications are relying on this package as seen in comments 5 and 6. [Test case] Attempt to install both i386/amd64 versions of libp11- 1. Run the command 'wine notepad'. Notice the warning from p11-kit. 2. sudo apt-get install libp11- 3. Run the command 'wine notepad' again. Notice that the warning is gone. / Actual xsession-error logged on Precise i386: WARNING: gnome-keyring:: couldn't connect to: /tmp/keyring- GNOME_KEYRING_ GPG_AGENT_ GNOME_KEYRING_ GPG_AGENT_ SSH_AUTH_ GNOME_KEYRING_ GPG_AGENT_ SSH_AUTH_ GNOME_KEYRING_ GPG_AGENT_ SSH_AUTH_ but this path exist and owned by user (with -rwxrwxr-x rights for pkcs11 file, and drwx------ for the folder). The files: control, gpg, pkcs11 & ssh are all 0 bit long. So i'm wondering about a timing race. precise; making a symbolic link doesn't help It seems this package (and dependencies) must be available as a 'gnome- Pretty Please fix :-) Is this still in progress? Is it deferred to after Quantal release? (Quantal needs it too I believe) I've got some Wine apps in the wild that are breaking cause of it (see linked duplicate bugs), plus the aforementioned apps in this report. This is on my radar to do but will probably make it in Quantal updates at a later date. I should have some free cycles next week to look more into this issue. Thank you for your patience Adam rdeps testing gksu_2. gnome-python- gnome-session_ gvfs_1. seahorse_ ubuntu- virt-manager_ --- This is the error with ubuntu-sso Traceback (most recent call last): File "setup.py", line 328, in <module> **extra) File "/usr/lib/ __requires( File "/usr/lib/ __add_ File "/usr/lib/ if alias.name and __external_ File "/usr/lib/ mod = __import_ File "/«PKGBUILDDIR» from ubuntu_sso import utils File "/«PKGBUILDDIR» from ubuntu_sso.logger import setup_logging File "/«PKGBUILDDIR» os. File "/usr/lib/ makedirs(head, mode) File "/usr/lib/ makedirs(head, mode) File "/usr/lib/ mkdir(name, mode) OSError: [Errno 13] Permission denied: '/sbuild- dh_auto_build: python setup.py build --force returned exit code 1 make: *** [build] Error 1 dpkg-buildpackage: error: debian/rules build gave error exit status 2 Sessions still open, not unmounting ─────── Build finished at 20121015-1749 Thanks svn also affected: karp@karp: p11-kit: couldn't load module: /usr/lib/ after upgrade from 11.10 to 12.04 Thanks I'm going to do a precise diff today so sponsors team can look at them both. Adam Packaging changes from precise 3.2 to quantal/raring 3.6.0 will take me some more time to get sorted out for mult-arch. Your diff for raring doesn't appear to add gnome-keyring-bin as a dependency fo gnome-keyring. I'd add this myself, but there may be a better way to go about this, and you would need to make the same change for precise and quantal as well. Unsubscribing ubuntu-sponsors for now, please re-subscribe when this is fixed up. Status changed to 'Confirmed' because the bug affects multiple users. Hi, for me the dirty workaround was to download the gnome-keyring_ from https:/ /usr/lib/ Yours, Steffen Didn't work for me as Spotify still isn't able to validate server certificate: err:ole: err:ole: err:ole: err:winhttp: I don't think the two proposed debdiffs here are correct for implementing multi-arch for gnome-keyring. While the plug-ins for gnome-keyring itself, and the pam module, could be built as multi-arch, it isn't necessary to do so. In fact, building the gnome-keyring plug-ins (currently in usr/lib/ This is a better patch to add multi-arch support for gnome-keyring. It moves the p11-kit module to a new package, moves the pam module to multi-arch, and correctly marks the gnome-keyring package as multi-arch foreign, as it includes the main binary, and data files. Please be sure to doublecheck compatibilty with p11-kit 0.14 which requires a .module extension for PKCS#11 autoloading while you are making large changes to this package. Andreas, that seems like it should be a separate patch, and something that should go upstream in GNOME (and possibly is already fixed there), while this multiarch patch currently only affects the packaging. This bug was fixed in the package gnome-keyring - 3.6.2-0ubuntu2 --------------- gnome-keyring (3.6.2-0ubuntu2) raring; urgency=low * debian/control: - Move the PKCS#11 module into a separate package. (LP: #1094319) - Convert to use multi-arch. (LP: #859600) * debian/ - The pkcs11 module needs to be a separate package. * debian/ - Use the multi-arch directory for the PAM module. -- Rodney Dawes <email address hidden> Mon, 07 Jan 2013 15:03:31 -0500 64-bit Wine no longer throws an error message every time it runs with new libp11- Is this going to be fixed for quantal too? What about precise (said to be LTS)? please fix for ubuntu 12.10 also, i have same bug for 1.5.27 wine versions too. I apologize for the delay, all debdiffs are uploaded and proper teams subscribed. I'll check back on its progress next week and push harder for review if not done by next Wednesday. Thanks Adam s/uploaded/attached to bug/ Thanks for the patch, I have two comments about it though: - d/control is autogenerated, please modify d/control.in instead; - shouldn't "usr/bin" be removed from gnome-keyring. > - d/control is autogenerated, please modify d/control.in instead; I only altered control.in, however, I believe d/control was auto-updated during the source package build. > - shouldn't "usr/bin" be removed from gnome-keyring. Nice catch Ill fix that up and re-submit. Thanks Adam Should be fix in precise debdiff now Why are you making debdiffs against an older gnome-keyring package, that are incompatible with the changes already in raring? My bad, was on autopilot for a moment. The other's still apply I believe. Thanks Adam The quantal debdiff you posted is against an older version as well, as is the precise version. Also, if the proposed debdiffs were pushed out as updates on those platforms, it would break upgrades to the newer Ubuntu versions, as the changes are incompatible. -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 On 04/08/2013 11:02 AM, Rodney Dawes wrote: > The quantal debdiff you posted is against an older version as well, > as is the precise version. Also, if the proposed debdiffs were > pushed out as updates on those platforms, it would break upgrades > to the newer Ubuntu versions, as the changes are incompatible. > Ok ill fix it, Thanks! - -- Adam Stokes "Don't salt your green beans before you try them, some may think you make rash decisions." -----BEGIN PGP SIGNATURE----- Version: GnuPG v1.4.11 (GNU/Linux) Comment: Using GnuPG with Thunderbird - http:// iQEcBAEBAgAGBQJ sxnR15CtIc1ibU8 2IivsQH/ dpibpVy+ 2jMUx8eKe701bTN q9dISHyoiBb6LnS =nnBB -----END PGP SIGNATURE----- Here's a debdiff for Quantal with the same changes that are in Raring already. Hopefully these dont suck :) Thanks! Adam Looks like you are pulling the gnome-keyring from the release pocket, and not from -proposed, as should be. So you now just applied my changes to the release packages, rather than the updated versions that were released on Quantal/Precise. See the Quantal backport patch I attached for the appropriate changes to the latest version on Quantal. I'm unsure if an updated package has been released for Ubuntu 12.04 (Precise Pangolin) yet but the currently available version of gnome-keyring:i386 is not installable due to libgcr-3-common not being multiarch (bug #998715). Thanks jhansonxi, I'll see about getting libgcr multiarch this week. Adam You wouldn't be able to install gnome-keyring:i386 on x86_64 with these changes in raring either even, without replacing a very large portion of the system with the 32-bit packages. Nor do I see any reason why one would ever want to, either. There's no reason do that. At this point I think we can say that a quantal SRU is out of scope now. Precise has a longer life-time, so keeping that one open. @Martin, Quantal is still supported for almost a full 12 months more. Is it really out of scope to push SRUs to it after only 6 months in? If this were Raring, and there were only a couple months left of support, then I would agree it might be out of scope to push such a change. But given the work here is done, I don't see a good reason to not push it, unless something is actually broken, which I doubt, given the same changes are included in Raring already, and it works fine there. :) Hello Riku, or anyone else affected, Accepted gnome-keyring:/ Looks odd, this is gnome-keyring 3.2.2, but libp11- Breaks: gnome-keyring (<< 3.6.2-0ubuntu2~) Replaces: gnome-keyring (<< 3.6.2-0ubuntu2~) This two should contain the version of THIS package, e.g. 3.2.2-2ubuntu4.2. And for coinstalling x86_64 and i386 variants, see below. I've added proposed as suggested in https:/ # sudo apt-get install gnome-keyring: Reading package lists... Done Building dependency tree Reading state information... Done Selected version '3.2.2-2ubuntu4.2' (Ubuntu:-keyring:i386 : Depends: libcap-ng0:i386 but it is not going to be installed E: Unable to correct problems, you have held broken packages. Removed from precise-proposed due to the above regression. Please fix and reupload with a bumped version number. Please resuscrbibe the sponsoring team once fixed. Thanks :) Looks like sid has quite a bit of changes to gnome-keyring compared with what's in precise, but multiarch is not one of them. However, the wine version in sid does not seem to suffer this problem with the library. 'wine notepad' only gives me an error in precise, but not in sid. sid has wine 1.4.1. Why change pam/gnome-keyring to mutliarch if it would be enough to just split of the pkcs11 package and make it multiarch? @eraserix have you looked at the diff? That's exactly what is done in 13.04, and what the diff is meant to backport to the older releases.. Yes, I did. But I don't see why gnome-keyring is marked as Multi-Arch: foreign and libpam- I see it, it is sufficient to make 'libp11- Multi-Arch: same and be done with it. To resolve the warning from wine you would have to explicitly install the new libp11-kit package for i386. I hope I'll get around to posting a working patch for 12.04 later today. BTW, how is the new libp11- to the affected users? Will wine declare a dependency on libp11- On Mon, Jun 24, 2013 at 2:14 AM, Rodney Dawes <email address hidden> wrote: > @eraserix have you looked at the diff? That's exactly what is done in > 13.04, and what the diff is meant to backport to the older releases.. > > -- > You received this bug notification because you are subscribed to the bug > https:/ > > Title: > Please convert gnome-keyring to multiarch > > Status in “gnome-keyring” package in Ubuntu: > Fix Released > Status in “gnome-keyring” source package in Precise: > In Progress > Status in “gnome-keyring” source package in Quantal: > Won't Fix > Status in “gnome-keyring” source package in Raring: > Fix Released > Status in “gnome-keyring” package in Debian: > New > > Bug description: > [Impact] > Several applications are relying on this package as seen in comments 5 and 6. > > [Test case] > Attempt to install both i386/amd64 versions and preferably verify if the reported applications in this bug are able to run successfully > > / > attached patch does the conversion. > > To manage notifications about this bug go to: > https:/ @eraserix: gnome-keyring is marked Multi-Arch: foreign so that, for example, an amd64 gnome-keyring can satisfy an i386 package's dependency on it. The version numbers in Breaks and Replaces look wrong to me too. As jhansonxi wrote in comment 48, libp11- So, my take ;). This patch just splits the pkcs11 library away from gnome-keyring binary package and leaves the rest of the packages alone. I still don't see the use case for doing more, the warning when wine is started will be gone with this patch. And as far as I can tell, libgcr is still not multiarch on raring. For the testcase, I'd say this should be something like that: [Test case] (on an amd64 system) 1. Run the command 'wine notepad'. Notice the warning from p11-kit. 2. sudo apt-get install libp11- 3. Run the command 'wine notepad' again. Notice that the warning is gone. I also had a go at this today. :) I built a gnome-keyring package for Precise in my PPA: https:/ It is adam-stokes' last attempt plus the following changes: - Change gnome-keyring's Recommends on libp11- to Depends. - Change libp11- gnome-keyring to (<< ${binary:Version}). @eraserix: I agree that installing libp11- @eraserix: as for how is the new libp11- to the affected users? Both wine and cups got updates in Raring to Recommends and Depends the new libp11- wine1.4 (1.4.1-0ubuntu5) raring; urgency=low * Add new libp11- - Should fix a few apps (LP: #859600) -- Scott Ritchie <email address hidden> Tue, 12 Feb 2013 17:12:01 -0800 cups (1.6.2-1ubuntu2) raring; urgency=low * debian/control: Added libp11- the cups-binary package (LP: #1157904). -- Till Kamppeter <email address hidden> Wed, 20 Mar 2013 19:58:01 +0100 I guess these will need to be SRU'd as well. I'm still inclined to have gnome-keyring Depends on libp11- Yes, I've been holding back on a Wine SRU because I wanted it to integrate this fix. Similarly, I will also want to backport wine1.6 to precise when it releases within a few weeks, and it would be very nice to have this done by then. I've updated the description with more specific instructions on how to test this SRU. As for Depends vs Recommends from debian policy manual: The Depends field should be used if the depended-on package is required for the depending package to provide a significant amount of functionality. The Recommends field should list packages that would be found together with this one in all but unusual installations. I'm kind of inclined to stay with recommends. I can still store passwords in the keyring without the pkcs11 library, no? @graham: Thanks for the info about how the new multiarched package finds its way to the users. For the gnome-keyring package to be considered multi-arch in Precise, surely libgck and libgcr need to be multi-arch as well? Then we can close LP: #998715 for Precise in this SRU. My other concern is what will happen when someone picks up libp11- Quantal's gnome-keyring will overwrite libp11- So, the solution imagined here is not to fix the two duplicates of this bug, but to introduce a new feature, namely make every library in gnome-keyring multiarch? What's to be gained by this? OK. Everyone, just relax. First of all, this bug is not for the p11kit issue. That bug is #1094319 as clearly mentioned in the changelog entry. This bug is specifically about enabling Multi-Arch for the gnome-keyring source package, so that some of its binaries may be co-installable as needed. While I had fixed both bugs at the same time in Raring, they are not the same issue. And on top of that, the gcr source package being converted to Multi-Arch is another separate issue. Please do not conflate them all into being the same bug. They are separate issues, even if they have dependencies on one another. The update issue is another issue on top of those, which will also need to be carefully considered, and which means all of these fixes will need to be SRUed into Quantal first, indeed. Or we simply maintain the status quo in Precise and Quantal for these 3 issues, as it is a non-fatal issue, and only a slight annoyance, only when using certain software which requires use of 32-bit pieces while running on 64-bit. I see no good reason to force N complex updates upon the user base, to fix a very minor annoyance (which isn't this bug anyway). I've built another gnome-keyring package for Precise in my PPA that I hope fixes all three issues (multi-arch gnome-keyring, multi-arch gcr and split libp11- The test case (on an x86_64 system) would be something like: $ wine notepad You should see the error message: p11-kit: couldn't load module: /usr/lib/ $ sudo apt-get install libp11- $ wine notepad The p11kit error no longer appears. $ sudo apt-get install libgcr-3-1:i386 The following packages are installed: libgck-1-0:i386 libgcr-3-1:i386 and libgcr-3-common I can attach a debdiff if needed. To be clear the error message isn't the annoyance part, it's that it's a real error that does prevent some apps from running (and there's no way to make them run on 64-bit at all in current precise). @graham: I think the usual way to do this is that you crate a debdiff to the current version in precise. Attach it to this bugreport and subscribe ubuntu-sponsors when done. Name/Version of the patch should be something like that: gnome-keyring_ I will be able to work on this and gcr again in a couple of days' time. Hopefully early next week I'll attach debdiffs. In the meantime, you are welcome to test gnome-keyring from my PPA. This is Rodney's Quantal patch from comment #41 with the following changes: The gnome-keyring package depends on libp11- The libp11- I believe this needs to be fixed in Quantal before it can be fixed in Precise so that the upgrade path is not broken. Marking 'new' in Quantal and Precise. @Adam Conrad (adconrad) Updated SRU uploaded into precise-proposed queue. Status changed to 'Confirmed' because the bug affects multiple users. OK, quick update on status for Quantal and Precise. The Quantal patch was never uploaded to the archive. The Precise patch was uploaded by xnox, but was rejected because of the ${binary:Version} issue below. I've just uploaded a modified patch from comment 75 to Quantal only. I'd like to get Quantal landed before doing Precise, because as people have mentioned, the upgrade path gets complicated otherwise. Here are the changes I made to Graham's Quantal patch: - You shouldn't specify Breaks or Replaces lines with automatically generated versions like ${binary:Version}. This means that every time the package is built, it will say it breaks previous versions. Instead, just hardcode the version that you put in debian/changelog. - When using DEB_HOST_MULTIARCH, please also make sure to manually set it [1] as recommended in the multiarch conversion doc [2]. This is because while normally the buildd will set it for us, it's not a requirement. For example, when dep8 tests rebuild the source, the variable is not set for us. Not likely an issue in practice , but still good to do. - No need for quantal-proposed in the changelog entry. You can just say quantal now and LP will do the right thing. Minor point, but just saying. - No need to manually add multiarch-support to the Pre-Depends line. It comes in via the ${misc:Pre-Depends} bit. Again, minor point. So let's see how the Quantal upload goes. [1] DEB_HOST_MULTIARCH ?= $(shell dpkg-architecture -qDEB_HOST_ [2] https:/ Hello Riku, or anyone else affected, Accepted gnome-keyring ran 'wine notepad' and saw the following: p11-kit: couldn't load module: /usr/lib/ I installed gnome-keyring and libp11- I ran 'wine notepad' again and did not see the p11-kit warning. @mterry: thanks for the fixes and the upload. I can update my patch for precise, but as it stands, it also converts gcr, which was split out of gnome-keyring into its own package in quantal, to multiarch. Before we can convert gnome-keyring (including gcr) to multiarch in precise, we would first need to convert gcr to multiarch in raring and quantal, see LP: #998715. This bug was fixed in the package gnome-keyring - 3.6.1-0ubuntu1.1 --------------- gnome-keyring (3.6.1-0ubuntu1.1) quantal; urgency=low * Backport the following changes from gnome-keyring in Raring: - Move the PKCS#11 module into a separate package. (LP: #1094319) - Convert gnome-keyring to multi-arch. (LP: #859600) -- Graham Inggs <email address hidden> Sat, 06 Jul 2013 11:07:07 +0200. Is it really still important to be able to upgrade from 12.04 to 12.10? 12.04 is supported for 3 more years, 12.10 for something like a month. Once 14.04 is out, you will be able to upgrade from 12.04 directly. Status changed to 'Confirmed' because the bug affects multiple users.
https://bugs.launchpad.net/ubuntu/+source/gnome-keyring/+bug/859600
CC-MAIN-2015-27
refinedweb
3,523
72.87
Here's another nice PuzzlOR problem which lets us look at different ways to tackle an optimization problem, including deterministic and stochastic approaches.? Problem set-up We'll import the necessary libraries and write some utility functions for modeling this problem. %matplotlib inline from matplotlib import pyplot as plt import seaborn as sns import numpy as np np.set_printoptions(precision=2) neighborhoods = ('A6', 'B2', 'B4', 'B5', 'B7', 'C5', 'C10', 'D9', 'E2', 'E6', 'E8', 'F3', 'F5', 'G8', 'G9', 'H3', 'H5', 'H7', 'H8', 'J4') tuple_to_alphanumeric = lambda i, j: '{}{}'.format('JIHGFEDCBA'[i], range(1, 11)[j]) alphanumeric_to_tuple = lambda n: ('JIHGFEDCBA'.index(n[0]), int(n[1:]) - 1) Here are the starting positions of the neighborhoods in array form: N = np.array(map(alphanumeric_to_tuple, neighborhoods)) N array([[9, 5], [8, 1], [8, 3], [8, 4], [8, 6], [7, 4], [7, 9], [6, 8], [5, 1], [5, 5], [5, 7], [4, 2], [4, 4], [3, 7], [3, 8], [2, 2], [2, 4], [2, 6], [2, 7], [0, 3]]) As in any optimization problem, we wish to minimize some loss function. Here we are trying to minimize a cost defined as a constant times a sum of distances, so the main building block for our loss function will actually be a distance metric. We will define a function that, given an $(x,y)$ power plant coordinate $s$ and a collection of $(x,y)$ neighborhood coordinates $N$, will return an array of Euclidean distances between the power plant $s$ and each neighborhood $n \in N$. def dists(s, N): """ determine each neigborhood's distance from a given point s -- an array-like point (x, y) for a single power plant N -- an array of neighborhood placements """ inds = N - np.array(s) return np.sqrt(inds[:, 0] ** 2 + inds[:, 1] ** 2) As an example usage, plugging in $(5,5)$ for a single power plant will return the distances from that point to every single neighborhood: dists((5, 5), N) array([ 4. , 5. , 3.61, 3.16, 3.16, 2.24, 4.47, 3.16, 4. , 0. , 2. , 3.16, 1.41, 2.83, 3.61, 4.24, 3.16, 3.16, 3.61, 5.39]) As usual, we'll represent our problem in matrix form as a convenient way to assemble many simultaneous pieces of information at once. To that end, we'll build on our previous function and define a new one that takes a collection of power plant $(x,y)$ coordinates and returns a matrix of every power plant-neighborhood distance. Each row will correspond to one power plant location, and each column will correspond to one neighborhood location. Practically speaking, we are simply stacking rows from the function above. def coords_to_dist_matrix(ss, N): """ returns the (N power plants x N neighborhoods) matrix of all Euclidean distances ss -- an array-like of multiple (x, y) power plant placements N -- 2D array of neighborhood locations """ return np.vstack(dists(s, N) for s in ss) And here is an example usage, picking some arbitrary starting points. ss0 = [(1, 2), (0, 6), (8, 3)] coords_to_dist_matrix(ss0, N) array([[ 8.54, 7.07, 7.07, 7.28, 8.06, 6.32, 9.22, 7.81, 4.12, 5. , 6.4 , 3. , 3.61, 5.39, 6.32, 1. , 2.24, 4.12, 5.1 , 1.41], [ 9.06, 9.43, 8.54, 8.25, 8. , 7.28, 7.62, 6.32, 7.07, 5.1 , 5.1 , 5.66, 4.47, 3.16, 3.61, 4.47, 2.83, 2. , 2.24, 3. ], [ 2.24, 2. , 0. , 1. , 3. , 1.41, 6.08, 5.39, 3.61, 3.61, 5. , 4.12, 4.12, 6.4 , 7.07, 6.08, 6.08, 6.71, 7.21, 8. ]]) In this problem, the distance (or loss) for each neighborhood is based on the closest power plant; anything farther away doesn't matter. We can think of this as the closest power plant to a neighborhood being responsible for that neighborhood. Here we'll define a function which takes the $S \times N$ matrix of distances from each power plant $s$ to each neighborhood $n \in N$, and return an array that tells us which power plant is responsible for each neighborhood. def responsibilities(ss, N): """ given a list of points where power plants are located, return an array of which power plant is closest to each neighborhood ss -- an array-like of multiple (x, y) power plant placements N -- 2D array of neighborhood locations """ ds = coords_to_dist_matrix(ss, N) return np.argmin(ds, axis=0) responsibilities(ss0, N) array([2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 0, 1, 1, 0, 0, 1, 1, 0]) Having done that footwork, we can get to the heart of the issue: what is the sum of distances from each power plant to the neighborhoods for which it is responsible? def responsible_dist_sums(ss, N, rs=()): """ returns the sums of edge lengths for each power plant, accounting for which is closest. ss -- an array-like of multiple (x, y) power plant placements N -- 2D array of neighborhood locations rs -- optional forced assignments of responsibility """ if not len(rs): rs = responsibilities(ss, N) sums = [dists(s, N)[np.argwhere(rs == i)].sum() for i, s in enumerate(ss)] return sums responsible_dist_sums(ss0, N) [11.255832815336875, 11.003896913132159, 33.329311428233581] The total loss is just the sum of those sums times a constant of \$1,000,000 which we will ignore. For certain applications, loss is thought of as the energy state of the system (lower is better). Simulated annealing is one of those, so we'll go by that convention and call the loss function $E$. def E(ss, N, rs=()): """ objective function, returns the cost (in millions) to connect all neighborhoods to electricity ss -- an array-like of multiple (x, y) power plant placements N -- 2D array of neighborhood locations rs -- optional forced assignments of responsibility """ return sum(responsible_dist_sums(ss, N, rs=rs)) E(ss0, N) 55.589041156702613 We now have everything necessary to solve this problem: constraints, decision variables, and a loss function (or "objective function"). That being said, it's helpful to picture what's going on in the problem for at least two reasons—as a sanity check to make sure that there aren't any obvious bugs, and to solidify intuition about what's actually happening. Here's a function that will take some power plant placements, the fixed data about where neighborhoods are, and depict distances between power plants and the neighborhoods for which they are responsible. def plot_placements(ss, N, rs=()): """ plot the neighborhoods and power plants together ss -- an array-like of multiple (x, y) power plant placements N -- 2D array of neighborhood locations rs -- optional forced assignments of responsibility """ plt.figure(figsize=(6, 6)) colors = ['#30a2da', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b'] # the problem puts the (0, 0) origin in the bottom left, while our matrix # calculations use an origin in the upper left; we'll use the height just # to flip everything vertically for display purposes height = N[:, 0].max() if not len(rs): rs = responsibilities(ss, N) # plot the neighborhoods for i, (y, x) in enumerate(N): # plot the responsibility edges sy, sx = ss[rs[i]] plt.plot((x, sx), (height - y, height - sy), c=colors[rs[i]], zorder=1) # plot the neighborhoods themselves plt.scatter(x, height - y, marker='s', s=100, c='gray', lw=2, edgecolor=colors[rs[i]], zorder=2) # plot the power plants for i, (y, x) in enumerate(np.asarray(ss)): plt.scatter(x, height - y, s=100, c=colors[i], zorder=3) # tweak the settings plt.grid(True) plt.xlim(-0.5, 9.5) plt.ylim(-0.5, 9.5) plt.xticks(np.arange(10), np.arange(1, 11)) plt.yticks(np.arange(10), 'ABCDEFGHIJ') plt.show() plot_placements(ss0, N) How to solve it First of all, this problem is a clear cut instance of a combinatorial optimization problem. We have 100 places to put power plants, and can think of having 100 decision variables $x_{ij} \in \{0,1\}$ for whether or not we have a power plant on that grid point. While the puzzle we are working on is a toy problem, it is also related to the Euclidean minimum spanning tree and Steiner tree problems, both of which are interesting, important, and used to model real world problems—including power plant placement! Generally speaking, there are two approaches to this kind of optimization problem: deterministic and stochastic. Deterministic methods include brute force search as well as integer program algorithms such as branch and bound. Popular stochastic optimization approaches include simulated annealing, genetic algorithms, and many other variations on the same idea. We'll solve it both ways, deterministically and stochastically. Deterministic approach: brute force The classic approach to solving this kind of integer program (IP) would be to set up the model (constraints, decision variables, and objective function) and use a solver specifically designed for solving linear programs. Now, we could set up a linear program here. (In a previous post, I showed how to set up and solve a linear program in Python.) But before we do that, let's review a couple of facts about this problem: For the grid specified in the problem description, there are $10^2=100$ possible locations to put power plants. Each location's decision variable has two possible states $\{0,1\}$: not having a power plant or having one. Without applying any knowledge of constraints (number of power plants, non-colocation), that means there are $2^{100}$ possible configurations. That's more possibilities than there are stars in the universe. However, applying the constraints of placing only three power plants and requiring that they be non-colocated, that number decreases to $\dbinom{100}{3}$. How big is that? from scipy.misc import comb comb(100, 3) 161700.0 So there are really not that many candidate solutions; our loss function is fast enough that we can just try them all and pick the best. import itertools possibilities = itertools.product(range(10), range(10)) combinations = list(itertools.combinations(possibilities, 3)) %%time results = np.zeros(161700, dtype=np.float32) for i, c in enumerate(combinations): results[i] = E(c, N) CPU times: user 32.2 s, sys: 17.3 ms, total: 32.2 s Wall time: 32.2 s So now we have an array of all values of the loss function: results array([ 123. , 114.34, 109.14, ..., 97.71, 97.75, 106.11], dtype=float32) Out of curiosity, we can also see how values of this loss function are distributed: sns.distplot(results) plt.xlabel('loss') plt.ylabel('frequency') plt.show() And the reveal: which answer was best? idx_opt = results.argmin() brute_opt = combinations[idx_opt] brute_opt ((3, 3), (4, 7), (8, 4)) Or, in our alphanumeric grid system: [tuple_to_alphanumeric(*s) for s in brute_opt] ['G4', 'F8', 'B5'] And here is what that optimal solution looks like: plot_placements(brute_opt, N) loss_opt = E(brute_opt, N) # guaranteed to be optimal, we tried everything! loss_opt 35.62746370694812 So there's the answer. But let's be clear: brute force was only possible here because we're dealing with a tiny feasible region. The computational complexity of brute force search makes it inappropriate for any non-trivial combinatorial optimization problems. Even if the domain isn't intractably large, a common problem is that the loss function is expensive to run. Ours takes microseconds; what if it took minutes? We would no longer be able to explore exhaustively, instead we would have to sample. Stochastic approach: simulated annealing Simulated annealing is a powerful stochastic optimization method with deep connections to MCMC. It uses a clever analogy to the physical process of annealing metal as inspiration for choosing acceptance probabilities in the proposal step. There's a ton of great material out there discussing SA, but here are the basic ideas: We start at some candidate solution and also initialize a variable to keep track of the best solution we ever saw. At each iteration, we propose a new candidate solution by perturbing the current candidate in some way. If the proposed solution is better, we always "move" (that is, start using it as the current proposal). If not, we might move anyway because we want to explore the domain; if we're strictly hill-climbing then we risk getting stuck in a local optimum. (This is the fundamental idea of the Metropolis-Hastings algorithm.) Our chance of moving to a worse proposal is positively related to the current temperature and negatively related to how much worse the proposal is than the current solution. We'll start with a high temperature in order to explore widely, but gradually let the temperature decay so we can focus on optimizing locally. Sometimes, we will re-anneal by temporarily bumping the temperature way back up. This is where the metallurgy analogy comes in; in order to bring out flexibility in metals they may be repeatedly heated and cooled. In this way, SA tries to balance twin goals of (1) exploring widely and (2) squeezing the last bit of optimization out of probable winners. I'll adapt some simulated annealing Python code from a previous project, available directly as optimization.py in our project's Github repo. Given a loss function ( energy), a function to make proposals by perturbing the current state ( perturb), and a starting state ( n0), this simulated annealing function will continue to explore the problem domain. def sim_anneal(energy, perturb, n0, ntrial=100000, t0=2.0, thermo=0.9, reanneal=1000, verbose=True, other_func=None): print_str = 'reannealing; i[{}] exp(dE/t)[{}] eprev[{}], enew[{}]' # initialize values temp = t0 n = n0 e_prev = energy(n) # initialize our value holders energies = [e_prev] other = [other_func(n)] if other_func else [] # keep track of the best e_best = e_prev n_best = n0 for i in xrange(ntrial): # get proposal and calculate energy propose_n = perturb(n) e_new = energy(propose_n) deltaE = e_prev - e_new # store this for later if it's better than our best so far if e_new < e_best: e_best = e_new n_best = propose_n # decide whether to accept the proposal if e_new < e_prev or np.random.rand() < np.exp(deltaE / temp): e_prev = e_new n = propose_n energies.append(e_new) if other_func: other.append(other_func(n)) # stop computing if the solution is found if e_prev == 0: break # reanneal if necessary if (i % reanneal) == 0: if verbose: print print_str.format(i, np.exp(deltaE / temp), e_prev, e_new) # re-anneal up to fraction of temperature temp = temp * thermo # if temp falls below minimum, bump back up if temp < 0.1: temp = 0.5 return n_best, np.array(energies), np.array(other) So how to perturb the current solution? There are many different ways we could do this. Typically, the best ways will allow enough random change to help explore the domain but preserve some similarity between the current and proposed candidates—otherwise, we're just doing a completely random exploration which defeats the purpose. Here, we'll perturb the current solution by randomly choosing one of the power plants and adding $\mathcal{N}(0,1)$ noise to its current position. def perturb(ss): # pick one to change change_idx = np.random.randint(0, ss.shape[0]) # now add some noise to it to_add = np.zeros_like(ss) to_add[change_idx] += np.random.normal(size=ss.shape[1]) return ss + to_add And now we're ready to go: %%time # set up energy function as partially applied E where we # make sure the best solution is still optimal when # snap solutions back to the grid points energy = lambda ss: E(np.round(ss), N) # create a tracker to keep on eye on placements as they move other_func = lambda x: x # set up the starting point as a 3x2 numpy array of floats n0 = np.array(ss0).astype(np.float32) result_anneal, energies_anneal, placements = sim_anneal(energy, perturb, n0, ntrial=20000, other_func=other_func) reannealing; i[0] exp(dE/t)[0.0593907764853] eprev[55.5890411567], enew[61.2362738421] reannealing; i[1000] exp(dE/t)[0.0408912512961] eprev[37.8542831783], enew[43.6085936367] reannealing; i[2000] exp(dE/t)[1.0] eprev[48.3335861023], enew[48.3335861023] reannealing; i[3000] exp(dE/t)[0.0206678320062] eprev[40.541702378], enew[46.1975421478] reannealing; i[4000] exp(dE/t)[0.738177605231] eprev[37.2618773221], enew[37.2618773221] reannealing; i[5000] exp(dE/t)[0.610439234241] eprev[41.9258414077], enew[42.508745412] reannealing; i[6000] exp(dE/t)[1.43678946261] eprev[37.0126172164], enew[37.0126172164] reannealing; i[7000] exp(dE/t)[0.07539378505] eprev[36.5180406271], enew[38.9908647128] reannealing; i[8000] exp(dE/t)[0.015727570433] eprev[39.0553598151], enew[42.6302522689] reannealing; i[9000] exp(dE/t)[0.249633246856] eprev[39.206677435], enew[40.2819726496] reannealing; i[10000] exp(dE/t)[0.00446724071565] eprev[37.2000580164], enew[40.9734451818] reannealing; i[11000] exp(dE/t)[0.00171647529362] eprev[36.8503862652], enew[40.8467531218] reannealing; i[12000] exp(dE/t)[0.000153432337517] eprev[36.1617494573], enew[41.1224835513] reannealing; i[13000] exp(dE/t)[2.01545654804] eprev[37.2038039252], enew[37.2038039252] reannealing; i[14000] exp(dE/t)[0.012635535274] eprev[36.3689550951], enew[38.3689550951] reannealing; i[15000] exp(dE/t)[2.18151210147e-07] eprev[35.6274637069], enew[41.9434119429] reannealing; i[16000] exp(dE/t)[2.41844518407e-06] eprev[36.2198695632], enew[41.0126639254] reannealing; i[17000] exp(dE/t)[2.92088966703e-08] eprev[35.7765492389], enew[41.5631285326] reannealing; i[18000] exp(dE/t)[1.563048236e-06] eprev[35.7765492389], enew[39.7897413671] reannealing; i[19000] exp(dE/t)[4.15299476428e-05] eprev[36.0126639254], enew[38.7384383931] CPU times: user 4.66 s, sys: 43 µs, total: 4.66 s Wall time: 4.66 s plot_placements(result_anneal, N) Now we can obey the constraints of the original problem by "snapping" the power plants' optimized positions in $\mathbb{R}$ to the closest grid points: rounded = np.round(result_anneal) plot_placements(rounded, N) So we got to the brute force solution. How quickly though? # plot the energies over time plt.plot(energies_anneal) # plot the rolling minimum of all proposals so far plt.plot(np.minimum.accumulate(energies_anneal), 'r', alpha=0.5, label='minimum so far') # plot the optimal value (we know this from brute force, in real world # stochastic optimization we probably wouldn't) plt.axhline(y=loss_opt, c='g', ls='--', label='optimal') plt.ylim(ymin=loss_opt) plt.xlim(xmax=energies_anneal.size) plt.xlabel('accepted proposal number') plt.ylabel('energy') plt.legend() plt.show() As we can see, this algorithm started getting pretty good answers almost immediately, and fairly quickly converged down towards the optimum. Not only that, we actually found an optimal solution. And it's important to remember that we weren't even perturbing the current candidate in a smart way. One of the reasons simulated annealing is so widely used is that in practice it's robust over many possible choices of parameters, which include how to make proposals, starting temperature, number of times to re-anneal, fraction of temperature to re-anneal up to, etc. Curveball solution: using k-means clustering One of the interesting things about this problem is its many connections to other problems. Thinking about what we're really doing as we move the power stations around, it becomes clear that we are trying to break up neighborhoods into 3 clusters and place the power plants as close to the centroid of each cluster as possible. This is an instance of k-means clustering. Here, $k=3$, and we'll momentarily relax the assumption from the original problem that power plants must be at grid points (we'll fix it later). %%time from sklearn.cluster import KMeans km = KMeans(n_clusters=3) km.fit(N) CPU times: user 46.8 ms, sys: 8.01 ms, total: 54.8 ms Wall time: 52.5 ms km.cluster_centers_ array([[ 2.83, 2.67], [ 4.12, 7.12], [ 8. , 3.83]]) plot_placements(km.cluster_centers_, N) Once again, we'll snap these real-valued placements to the nearest grid points to get an acceptable solution: rounded = np.round(km.cluster_centers_) plot_placements(rounded, N) And again, we arrive at the optimum. Any comments or suggestions? Let me know.
http://isaacslavitt.com/2015/02/26/combinatorial-optimization/
CC-MAIN-2018-13
refinedweb
3,346
54.12
Integrating GXT with Adobe AIR Integrating GXT with Adobe AIR Hello Commnity, A couple of months ago i started a personal project to enable GXT developers to be able to deploy their application inside adobe AIR. the library is almost done and here are some of the features : 1) A FilesystemExplorer component that enable to browse local filesystem (see sreenshot) 2) An Adapter for GXT so there is no Securty Error from Adobe AIR (eval ) 3) An Adapater to make GWT RPC possible inside AIR (You still can use normal AJAX though) 4) Export : GXT Chart to PDF, HTML, PNG 5) Export GXT Grid to EXcel 6) Drag and Drop from the OS to the App 7) and more to come.... and off cource the application still can run as a normal web app (Without the AIR functions) The library is nt finische yet, but i m planing to relaseae it end of the month (Open Source and Free of course) as a POC i build a lil application (See sreenshots) I would like to have some feedback, to know if there are peoples out there, who need something line that(Hopefully i did nt word for nothing ) Greets E Sorry for my poor english Hello E, I m pretty new to GXT and to this forum. I m doing some research for my company and and i saw your post. What exactly is the purpose of your library ? We have a customer, who would like to export the GXT Chart to PDF. Can you confirm that your library can do that ? How is that hapening ? on the server using a pdf library ? when exactly are you going to release ? Sorry if i ask a lot of question. It s because we really need this Regards Sam Hi Sam, The idea of the library is not really new. I wanted to created a web application that can run on both: browser and desktop using the java language I ve searched the internet and the forum on how to integrate gxt and adobe air. but i couldnt find a good solution, so i decided to start my own project. the generated pdf was generated on the client using adobe AIR API. no Server involved. But this only work on the AIR client. If u wanted to generate pdf in the web app then u should use a server side technology. Like i said in the post before the library will include some adapters to make GXT work inside AIR and some other utiliy tools(Notification for example..) It also include a Dev Mode for AIR so u cant debug your app inside AIR. I hope i will release it by the end of the month, but i cant tell for sure , end of september is the target. But i also dont want to release something people do not need or allready hav, that why i was asking for feedback. Glad somebody has the need Greets E Please go ahead with your development, I think the community will love it especially if it has good features and is open source. I am looking forward to its release.Odili Charles Opute Proudly Nigerian Cotributions Ext.ux.Image Ext.ux.Wizard Ext.plugin.ModalNotice Ext.plugin.ComboLoader Ext.ux.form.ScreenshotField Hi Ekambos, as I see your project is wonderful. It will be great if the open source is available. features will be very useful in all projects. I'm waiting you impatiently.... Hello Guys, Thanks for your words. I m working hard to get the project out by end of september. So stay tune Here is an Example of how programming with the library will look like Code: public class Demo implements EntryPoint { public void onModuleLoad() { Button b = new Button("Click Me"); b.addSelectionListener(new SelectionListener<ButtonEvent>() { @Override public void componentSelected(ButtonEvent ce) { if( ! Runtime.isAIRRuntime()){ MessageBox.alert("Hello From the Browser", "I m running inside a web Browser", null); }else{ final File f = File.getDesktopDirectory(); f.browseForOpen("Choose a File"); f.addEventListener(Event.SELECT, new AIREventListener<Event>() { @Override protected void onAIREvent(Event event) { String fileName = f.getName(); MessageBox.alert("Hello From Adobe AIR", "I m running inside air and : <b>" + fileName + "</b> was selected", null); } }); } } }); RootPanel.get().add(b); } } Greets E browser.pngfileselected.pngfileprompt.png That looks pretty promising and impressive I m also looking forward for your release. Keep up the good work. Sam Hello people, i m still working to release the toolkit. But because of some personal issue i really have not worked on it the last two weeks. The other thing is i m really new on releasing softwares, actually i never did. can anyone tell me wich license will make more sence. i want the thing to stay free but any contribution will hav to benefit to the others too. here is another entry i ve posted on the adobe site: like allways feedback are very welcome Greets Alain Hi there This looks very interesting! I'm looking forward for your first release. Is there any update on your schedule? Can't wait. Hang in there! Greets oxy
http://www.sencha.com/forum/showthread.php?109597-Integrating-GXT-with-Adobe-AIR
CC-MAIN-2014-35
refinedweb
851
63.19
GameFromScratch.com Now we are going to look quickly at using a camera, something we haven’t used in any of the prior tutorials. Using a camera has a couple of advantages. It gives you an easier way of dealing with device resolution as LibGDX will scale the results up to match your device resolution. It also makes it easier to move the view around when your scene is larger than a single screen. That is exactly what we are going to do in the code example below. I am using a large ( 2048x1024 ) image that I obtained here. Alright, now the code: package com.gamefromscratch;.input.GestureDetector; import com.badlogic.gdx.input.GestureDetector.GestureListener; import com.badlogic.gdx.math.Vector2; public class CameraDemo implements ApplicationListener, GestureListener { private OrthographicCamera camera; private SpriteBatch batch; private Texture texture; private Sprite sprite; @Override public void create() { camera = new OrthographicCamera(1280, 720); batch = new SpriteBatch(); texture = new Texture(Gdx.files.internal("data/Toronto2048wide.jpg")); texture.setFilter(TextureFilter.Linear, TextureFilter.Linear); sprite = new Sprite(texture); sprite.setOrigin(0,0); sprite.setPosition(-sprite.getWidth()/2,-sprite.getHeight()/2); Gdx.input.setInputProcessor(new GestureDetector(this)); } public void dispose() { batch.dispose(); texture.dispose(); public void render() { Gdx.gl.glClearColor(1, 1, 1, 1); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); batch.setProjectionMatrix(camera.combined); batch.begin(); sprite.draw(batch); batch.end(); public void resize(int width, int height) { public void pause() { public void resume() { public boolean touchDown(float x, float y, int pointer, int button) { // TODO Auto-generated method stub return false; public boolean tap(float x, float y, int count, int button) { public boolean longPress(float x, float y) { public boolean fling(float velocityX, float velocityY, int button) { public boolean pan(float x, float y, float deltaX, float deltaY) { // TODO Auto-generated method stub camera.translate(deltaX,0); camera.update(); return false; public boolean zoom(float initialDistance, float distance) { public boolean pinch(Vector2 initialPointer1, Vector2 initialPointer2, Vector2 pointer1, Vector2 pointer2) { Additionally in Main.java I changed the resolution to 720p like so: = "camera"; cfg.useGL20 = false; cfg.width = 1280; cfg.height = 720; new LwjglApplication(new CameraDemo(), cfg); } When you run it you will see: Other then being an image of my cities skyline, its pan-able. You can swipe left or right to pan the image around. The code is mostly familiar at this point, but the important new line is: camera = new OrthographicCamera(1280, 720); This is where we create the camera. There are two kinds of cameras in LibGDX, Orthographic and Perspective. Basically an orthographic camera renders what is in the scene exactly the size it is. A perspective camera on the other hand emulates the way the human eye works, by rendering objects slightly smaller as they get further away. Here is an example from my Blender tutorial series. Perspective: Orthographic: Notice how the far wing is smaller in the perspective render? That’s what perspective rendering does for you. In 2D rendering however, 99 times out of 100 you want to use Orthographic. The values passed to the constructor are the resolution of the camera, the width and height. In this particular case I chose to use pixels for my resolution, as I wanted to have the rendering at 1280x720 pixels. You however do not have to… if you are using physics and want to use real world units for example, you could have gone with meters, or whatever you want. The key thing is that your aspect ratio is correct. The rest of the code in create() is about loading our image and positioning it about the origin in the world. Finally we wire up our gesture handler so we can pan/swipe left and right on the image. The next important call is in render(): batch.setProjectionMatrix(camera.combined); This ties our LibGDX camera object to the OpenGL renderer. The OpenGL rendering process depends on a number of matrix to properly translate from the scene or world to screen coordinates during rendering. camera.combined returns the camera’s view and projection matrixes multiplied together. If you want more information about the math behind the scenes you can read here. Of course, the entire point of the Camera classes is so you don’t have to worry about this stuff, so if you find it confusing, don’t sweat it, LibGDX takes care of the math for you. Finally in our pan handler ( huh? ) we have the following code: camera.translate(deltaX,0); camera.update(); You can use translate to move the camera around. Here we move the camera along the X axis by the amount the user swiped. This causes the view of the image to move as the user swipes the screen/pans the mouse. Once you are done modifying the camera, you need to update it. Without calling update() the camera would never move. There are a number of neat functions in the camera that we don’t use here. There are functions to look at a point in space, to rotate or even rotate around ( orbit ) a vector. There are also functions for projecting to and from screen to world space as well as code for ray casting into the scene. In a straight 2D game though you generally won’t use a lot of this functionality. We may take a closer look at the camera class later on when we jump to 3D. Programming Java, LibGDX, Tutorial
http://www.gamefromscratch.com/post/2013/11/06/LibGDX-Tutorial-7-Camera-basics.aspx
CC-MAIN-2017-13
refinedweb
898
56.35
Newbie to Cairngormkiran7881 May 25, 2010 3:23 AM Hi , I am newbie to Cairngorm , Please excuse if questions were silly . As i am seeing the coding of ModelLocator of cairngorm : I observe that some authors simply implement the IModelLocator interface and some wont , can anybody suggest what is the correct approach ?? For example : public class ModelLocator public class ModelLocator implements IModelLocator - 2. Re: Newbie to CairngormAnsury May 25, 2010 4:42 AM (in response to kiran7881) Cairngorm "best practices" are fuzzy at best and non-existent at worst. I tended to use this architecture when using that bloated framework: I suspect IModelLocator may come from a different (newer) version of the architecture. Since I've sworn off the use of such a boilerplate demanding monstrosity, I can't say whether it is newer or not. If you really want to adhere to the most defensible "correct" approach (there is none really), I'd suggest you do what the latest official Cairngorm examples do. If there's inconsistency even there, then really you'd have to ask the maintainers of the architecture what gives. If anything I've been moving away from more structured 'frameworks' more and more over time. Swiz is one of my favorite since it's so lightweight and flexible (definitely different from Cairngorm), but for some Flex apps even that may not be needed. 3. Re: Newbie to CairngormUbuntuPenguin May 25, 2010 8:34 AM (in response to kiran7881) kiran7881 wrote: Hi , I am newbie to Cairngorm ... See , you've messed up already .... Save yourself some pain , learn a seccond generation framework ( Mate , Swiz ) not Cairngorm and certainly not PureMVC. 4. Re: Newbie to CairngormKarl_Sigiscar_1971 May 25, 2010 8:44 AM (in response to UbuntuPenguin) The trend is toward inversion of control (IoC aka Dependency Injection) frameworks for Flex. Parsley Spring ActionScript Swiz ... Adobe Consulting has created Cairngorm 3 libraries on top of Parsley 5. Re: Newbie to CairngormUbuntuPenguin May 25, 2010 9:10 AM (in response to Karl_Sigiscar_1971) IoC FTW. I hadn't heard of Spring Actionscript. Looking over the documentation , it looks pretty simple. But it also seems like it has some of the same annoyances of Cairngorm , PureMVC. That being that they are descended from the Java world , which is HUGE ( JPL,J2EE,Java Imaging,JDBC ) compared to the AS3/Flex world. Sometimes, it seems like this leads to such an abstraction, you end up with a mountain of boilerplate code to get anything simple done , IMHO. 6. Re: Newbie to CairngormKarl_Sigiscar_1971 May 25, 2010 9:45 AM (in response to UbuntuPenguin) For simple applications, architectural frameworks are not required. They uselessly complexify everything. For more complex, data-driven applications, especially in the enterprise, you have to use one or you risk creating your own boilerplate code to cope with complexity... The Java world and the Flex world are intermixed. LCDS and Blaze DS are implemented with Java and run on Java Application Servers, just like other Adobe LiveCycle products. Go to Java One to get a feel of it. 7. Re: Newbie to CairngormAnsury May 26, 2010 5:22 AM (in response to Karl_Sigiscar_1971) I concur with the sentiment above hinting that Cairngorm is mostly a has-been architecture largely due to requiring too much boilerplate code. I will add, think carefully before drinking the Mate kool-aid (possibly the Spring Flex framework?), as even the Java world is starting to realize the insanity of XML configuration ("XML programming" if you're a critic) schemes and has been moving away from this. In fact take anything, other than IoC/dependency injection (the true reason for Spring's early success IMO), related to Spring with a grain of salt since the name recognition inspires so many bandwagon jumpers attempting to capitalize on Spring's early success with IoC. Just because it has the 'Spring' stamp of approval on it doesn't automatically make it a good idea or a good design. 8. Re: Newbie to CairngormUbuntuPenguin May 26, 2010 7:07 AM (in response to Ansury) So what architecture type/frameworks do you like ? I have seen Google Guice online line and even toyed with it. I have yet to build an application that runs with it though. 9. Re: Newbie to CairngormAnsury May 26, 2010 9:36 AM (in response to UbuntuPenguin) Like anything related to Spring or Apple, I take anything stamped with "Made by Google" with a grain of salt. Not to put down Google at all, they have plenty of good things going for them, but some complete rubbish too. They had a Flex app page at one point I believe and it was rubbish, made Flex look like it was a crude toy (although given how much of a fetish Google has with JavaScript and Ahax "development", perhaps this was the goal...) Anyway as for Guice, I *think* I had good impressions when I briefly looked at it, but I've yet to use it or even look into it in very much depth. It doesn't seem to be "taking off" in any big way yet, unfortunately. Unlike many I do not give supposed "industry standards" (such as Spring) the benefit of the doubt, and having seen Spring evolve over the years, I am looking to move on to greener and more productive pastures while taking the core lessons learned (IoC, TDD). So currently I generally use Spring in Java, Swiz (or nothing) in Flex. 10. Re: Newbie to CairngormKarl_Sigiscar_1971 May 27, 2010 7:54 AM (in response to Ansury) Spring is still heavily used in the Java community. There are some new Java projects that use it in the enterprise. As everything, it has been created to solve some problems. Then, as you use something, you see its shortcomings and move on to something else. 11. Re: Newbie to CairngormAnsury Jun 3, 2010 1:14 PM (in response to Karl_Sigiscar_1971) Karl_Sigiscar_1971 wrote: As everything, it has been created to solve some problems. And created some new ones in the process. (Too many people assume "the Spring way" == "the best way".) I do wonder why Spring has such an XML fetish - didn't their mommas teach them that compile-time errors are greatly preferable to run-time errors? 12. Re: Newbie to CairngormUbuntuPenguin Jun 3, 2010 2:25 PM (in response to Ansury) How would you get a run-time error with XML injection but not a compile time , you can see what classes/interfaces you are injection , no ? And we must remember how the framework jungle looks in Flex/AS3 as of today. When you take into account trying to structure an app with things like Cairngorm and PureMVC , the cure is worse than the disease. I would take (m)xml over those two any day. But I saw something on a blog which really summed it up , "No architectural framework can make a bad programmer write good code". I've seen people use frameworks as a crutch , when I mentioned the lack of separation of concern to a former co-worker , his response was "but we're using PureMVC !" It all boils down to the basics and people learning "best practices" of the language and programming in general , regardless of the framework. It just turns out that some frameworks start out as the inverse of best practices. </Rant against garbage programmers> Ubu 13. Re: Newbie to CairngormAnsury Jun 3, 2010 3:44 PM (in response to UbuntuPenguin) UbuntuPenguin wrote: How would you get a run-time error with XML injection but not a compile time , A simple text typo (or even screwed up xml tag), entirely forgotten bean, refactored class name (if you do a quick refactor), wrong class type, forgotten injection (although admittedly this wouldn't be solved by dumping xml config), produce those annoying run time errors. Especially annoying in web dev since you have to deploy your entire app to a slow, bloated web container before you find out about any run time errors (and even then, it's only one at a time, if you have 5 errors we're talking 5 deploys to fix them all--ugh!). Setting up Spring Security is especially painful thanks to XML shenanigans. And don't even get me started with dependencies and "jar hell"... (more run time errors). The naive may make claims like "you're using the compiler as a crutch!", but I call bull on that--why throw away a useful tool that can instantly point out problems for you? Also when I say "XML" don't take that as a rant against "MXML"-- actually I think mxml is fantastic. A very simple and ingenious solution, which actually *reduces* the amount of code we write. MXML does produce compile-time errors, so it's a win-win. Not so with a Spring (or any) XML "application context" or config file. Now I'm not completely against them (some mild usage, especially like BlazeDS or GraniteDS does, doesn't hurt much), but when we're actually replacing a ton of source code with "XML programming", alarm bells go off. I'm sure some Spring apologist will claim TDD "fixes" their run time error deficiency but, meh--not really (sounds like a real crutch to me ). Like I probably said above I think I'm with you on moving away from frameworks, or at least using non-intrusive, lightweight, low-configuration frameworks that don't impose knee-jerk patterns on you. Swiz being the example I have in mind (although honestly I'm not sure how well it scales up for very large apps). 14. Re: Newbie to CairngormAnsury Jun 3, 2010 3:50 PM (in response to Ansury) I've not used it yet, but here's an example of what I'll probably look into as an alternative to "standard" Spring (w/xml). A "pure Java" way of doing Spring DI. Still "Spring" in name, but not all "Spring products" are equal--some are good, some are for the kool aid drinkers IMO. 15. Re: Newbie to Cairngormdaslicht Jun 7, 2010 5:57 AM (in response to UbuntuPenguin) Hi, How to get SWIZ RC1 running with Flash Bulder 4 ? As far as I add the swc I get those warnings and the project will not be runnable nor can I see something in the design view: Description Resource Path Location Type Design mode could not load swiz-framework-1.0.0-RC1.swc. It may be incompatible with this SDK, or invalid. (DesignAssetLoader.CompleteTimeout) I even tried if this might be fixed in Flex SDK 4.1 but its the same. Any Idea how to fix ? Cheers Marc
https://forums.adobe.com/thread/646119
CC-MAIN-2018-30
refinedweb
1,776
67.99
The wmemcpy function was wrong so I changed it in cwchar. It wasn't incrementing the src pointer so only the first character of src was being copied. Kostas Pagratis Consulting Engineer Rogue Wave Software a Quovadx(TM) Division (w) 303-545-3268 (c) 303-817-1605 -----Original Message----- From: Martin Sebor (JIRA) [mailto:jira@apache.org] Sent: Thursday, October 13, 2005 4:53 PM To: stdcxx-dev@incubator.apache.org Subject: [jira] Created: (STDCXX-48) [IRIX] std::wmemcpy() copies only the first character [IRIX] std::wmemcpy() copies only the first character ----------------------------------------------------- Key: STDCXX-48 URL: Project: STDCXX Type: Bug Components: 21. Strings Versions: 4.1.2 Environment: IRIX 6.5 Reporter: Martin Sebor Assigned to: Martin Sebor Fix For: 4.1.3 The program below aborts at runtime when compiled with SGI MIPSpro and run on SGI IRIX 6.5. Note that IRIX doesn't define wmemcpy (the macros _RWSTD_MO_WMEMCPY and _RWSTD_MO_WMEMCPY_IN_LIBC are both defined) so the called function must be ours. $ cat t.cpp && nice gmake t -r && LD_LIBRARY_PATH=../lib ./t #include <cassert> #include <cwchar> int main () { wchar_t dst [4] = { 0, 1, 2, 3 }; std::wmemcpy (dst, L"abc", 3); assert (L'a' == dst [0]); assert (L'b' == dst [1]); assert (L'c' == dst [2]); assert (L'\0' == dst [3]); } CC -c -I/build/sebor/dev/stdlib/include/ansi -D_RWSTDDEBUG -D_REENTRANT -D_RWSTD_USE_CONFIG -I/build/sebor/mipspro-7.41-15d/include -I/build/sebor/dev/stdlib/include -I/build/sebor/dev/stdlib/examples/include -g -ansiW -woff1429,1460,1521,3150,3333 t.cpp CC t.o -o t -LANG:std=off -Wl,-woff,84 -L/build/sebor/mipspro-7.41-15d/lib -lpthread -L/build/sebor/mipspro-7.41-15d/lib -lstd15d -lm rm t.o Assertion failed: L'b' == dst [1], file t.cpp, line 10, pid B86430 Abort (core dumped) -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: - For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/stdcxx-dev/200510.mbox/%3C4FA4B7B3231C5D459E7BAD020213A942030A3347@bco-exchange.bco.roguewave.com%3E
CC-MAIN-2015-06
refinedweb
328
50.63
Today I'm working in Spyder just like any other day but when I try to import arcpy, I get this: import datetime import os import arcpy Traceback (most recent call last): File "<ipython-input-3-5467a3dc9fe3>", line 1, in <module>. I imported datetime & os just to see if they worked okay and they do; I get this error if I let the console window sit long enough. I'm curious what the connection is with arcpy, spyder, python and Portal/ArcGIS Online is and how I can avoid this in the future.... eta moments later.... Shut down spyder and reopened it: cured.... (?) Happened to me, once or twice... waiting and/or rebooting fixed the issue: You will have to follow the long and circuitous path in your installation path to arcpy C:\Your_path_to_arcgis_pro\Resources\ArcPy\arcpy which contains Bin, Resources and Support the __init__.py files starts the imports, which imports stuff... including _base.py amongst others. You can follow the trail which generally leads to a dead end (ie *.pyd files usually) and try to figure out why any connection would be needed. In short... you probably shouldn't know or want to know. I have explored some of this Import arcpy.... what happens to check namespace, load time and bloat issues when just importing arcpy
https://community.esri.com/thread/222141-arcpy-fails-to-import-in-spyder
CC-MAIN-2018-43
refinedweb
218
73.58
RAD That Ain't Bad: Domain-Driven Development with Trails By now, there is a good chance you have at least heard of Ruby on Rails. For those who haven't, Rails is a framework using the Ruby language that allows one to create database-driven web applications in a fraction of the time it would normally take. I'm not going to cover Rails in this article, as Curt Hibbs has already done a masterful job in " "">Rolling on Rails." Instead, this article will focus on how we can do Rails-esque "rapid application development the right way" in Java. I first heard of Rails as I was hanging out after a meeting of my local Java users group with my good friend Jim Weirich. Jim is a well-known Ruby nut and all around good, smart guy. So when he started talking very excitedly about "this new Ruby web thing I can't remember the name of," I decided to hang out a little longer and take a look at a video he wanted to show me. I probably wouldn't have bothered to read a long article about it, but heck, even I can spare a few minutes to watch a video. The video was a screen capture of a developer creating a fully functional database-driven web application in ten minutes. As a Java developer my first reaction to Rails was sheer, unabashed envy. Developing a web application in Java, even with best-of-breed technologies such as Spring, Hibernate, and Tapestry, is still much more difficult than cranking out a Rails application. My next reaction was to think about how we can bring some of the brilliant ideas of Rails to Java. I'm here to say with certainty that we can, and I've spent the last several months working to make it possible for any Java developer to do it. The fruit of this effort is a framework named, unoriginally enough, "">Trails. Despite the name, Trails is in no way a port of Rails. Rather, it is a framework designed to bring a similarly radical productivity increase to J2EE web application development. What's the Problem? The first thing we need to figure out is what makes Java development with our current technologies and methods more difficult than we would like. To highlight the problem, let's imagine we are developing a J2EE web application using Spring and Hibernate, and that we need to add a new type of domain object called Person to the application. The precise steps will vary depending on what web framework we select, but here are the steps we would typically need to perform: - Create Personclass. - Create PersonDAOclass. - Create Persontable in database. - Define PersonDAOin Spring application context XML file. - Create Personpage or action class. - Add Personpages to web framework XML configuration files. - Create personListpage to list Person instances. - Create personEditpage to edit Person instances. Of course, these steps will vary, depending on our specific application design and the frameworks we select, but in general they are representative. I'm hoping that seeing these list of steps has you thinking "Phew, that's a lot of work!" Can we do better? What's the Solution? What is the real problem here? I'm going to suggest that we need to stop repeating ourselves. In fact, this point is so important that it's worth saying again: We need to stop repeating ourselves. All we really want is to add a new type of entity to our system, yet we have at least eight different things we need to do. What if we could dramatically reduce the number of steps required? What if we could reduce the number of steps to: - Create Personclass? How could that be possible? Well, let's think about those eight steps again. I'm going to propose that all of the information we really need to produce a simple, working application is contained in the Person class. From it, we can determine: - What kind of attributes a person can have. - The name of each attribute. - The type of each attribute. Using just this information, we can make enough assumptions to produce a working application. What if we assume: - For each entity, we want screens in our application to perform basic operations such as create, retrieve, update, and delete (CRUD). - We want each entity to be persisted in a database. - We want a database table to be created for each entity. - We would like screens to manage the relationships between different entities. Of course, these assumptions will not always be correct, but in many applications they will be. If we had a framework that could use these assumptions to produce a working application based on our domain model, we could greatly accelerate development in many cases. Furthermore, if this framework let us easily override these assumptions where necessary, we could quickly produce a working prototype application and "flesh it out" into our final application. Introducing Trails Trails is a domain-driven development (DDD) framework for Java. Its goal is to allow us to develop J2EE web applications with the fewest redundant steps. The term "domain-driven development" refers to the process of developing an application with a rich domain model: in the most basic example of a Trails application, the domain model will be the only code we write! Trails uses this domain model as the only source of information it needs to dynamically create a basic application. As mentioned above, it makes a lot of assumptions to be able to do this, and we'll explore how to override those assumptions in a future installment. But that's getting ahead of ourselves. First, let's start with a very simple Trails application. If you have not already done so, download and unzip Trails. You will also need the following installed on your system to use Trails: - The Java 5 JDK. - Tomcat 5.5 or later. - Ant 1.6. Note: Be sure to have your ANT_HOMEsystem property set correctly. Trails will use this property to add a .jar to ANT_HOME/lib. Our First Trails Application For this article, we will gradually recreate the Recipe application from Curt Hibbs' RoR article. To create our application, you need to be in the same directory where you unzipped Trails. In this directory, do ant create-new-project. You will be prompted for the following: - Base directory - Name of project For the name of the project, enter "recipe." This will build a project with the following directory structure: Be sure, if you have not already done so, to add a user to Tomcat with privileges to use the manager application, as the Trails Ant build will be unable to deploy our application otherwise. By default, Tomcat does not have such a user, but it's easy to add one. Edit your conf/tomcat-users.xml file (relative to where you installed Tomcat). Add a line like this: [/prettify][/prettify] [prettify] <user name="craigmcc" password="secret" roles="standard,manager" /> To complete the setup process, edit the build.properties file in the directory where you installed Trails. Below is a list of the properties in this file. How 'Bout Some Code? Alright, now that setup is out of the way, let's write some code. If you have been paying attention, you can probably guess what code we'll develop first. If you said "domain object," give yourself a pat on the back. Domain objects in Trails are just plain old Java objects (POJOs). Because Trails uses Hibernate to persist our domain model into a relational database, we will also need to add some JSR-220 persistence annotations to tell Hibernate some extra information it needs. For a first domain object, let's create a Recipe class, as follows: package org.trails.recipe; import java.util.Date; import javax.persistence.Entity; import javax.persistence.GeneratorType; import javax.persistence.Id; @Entity public class Recipe { private Integer id; private String title; private String description; private Date date; @Id(generate=GeneratorType.AUTO); } } This is a fairly simple JavaBean: we've got properties for Title, Description, and Date, and an Id property. We also have two JSR-220 annotations. We have an @Entity annotation that tells Hibernate that this class is persistent. We also have an @Id(generate=GeneratorType.AUTO) that tells Hibernate which property is the identifier property (a "primary key," in database parlance), and that we want this property to be automatically generated. Notice that we don't need to explicitly mark our other properties as persistent. This is because Hibernate, in conformance to the EJB3 spec, will assume all of our properties are persistent unless we explicitly mark them as @Transient. Step One ... and We're Done! Believe it or not, we've now developed our first Trails application. Let's deploy it and see it in action. If it is not already running, start your Tomcat instance. Next, go into the directory where you created the project and do . This will build our application and deploy it in our. This will build our application and deploy it in our ant deploy running Tomcat instance. For maximum simplicity, Trails uses HSQL as a simple in-memory database and lets Hibernate create all the tables (this is, of course, configurable). To see our application in action, we simply point your browser at. For nothing more than the cost of our simple domain class, Trails gives us a simple application that will lets us work with recipes. The default home page of a Trails application will show a list of all of the domain object types in our application, as seen in Figure 1. "Application Home Page" /> Figure 1. Application home page Following the List Recipes link takes us to a page which, if you can believe it, lists all the recipes. As you can see in Figure 2, there aren't any yet. Figure 2. List recipes Following the New Recipe link takes us to a screen that will let us create a new recipe, shown in Figure 3. Notice the date widget provided for us at no extra charge. Figure 3. Edit recipe Not bad for just coding one class, eh? How It Works Some of you already thinking "How is all that code being generated?" There's a short and simple answer to that question: It's not. Trails eschews code generation for the simple reason that code generated is still code to maintain. Rather than generate code, Trails dynamically creates your application at runtime. For each domain class, a set of metadata is built up through a combination of reflection and Hibernate mapping information. Intelligent web components then use this metadata to produce a UI. Sounds simple enough, but as you probably can guess, there's a lot going on under the covers. Fortunately, Trails has a lot of help. One of the key goals of Trails is to eliminate unnecessary code, so it only makes sense that Trails avoids reinventing wheels wherever possible. In fact, Trails leverages other frameworks to do a lot of the heavy lifting. Trails uses: - Hibernate for persisting domain objects to the RDBMS, as well creating the tables. - Spring for dependency injection and configuration. - Tapestry as the web component application framework. Relationships We have an application, but it's not very interesting. What makes an application interesting is not objects in isolation, but objects and their relationships. Let's introduce the concept of categories to our domain model and assert that a Recipe is in exactly one Category. We'll start by creating a Category class: package org.trails.recipe; import javax.persistence.Entity; import javax.persistence.GeneratorType; import javax.persistence.Id; import org.apache.commons.lang.builder.EqualsBuilder; @Entity public class Category { private Integer id; private String name; @Id(generate=GeneratorType.AUTO) public Integer getId() { return id; } /** * @param id The id to set. */ public void setId(Integer id) { this.id = id; } public String getName() { return name; } /** * @param name The name to set. */ public void setName(String name) { this.name = name; } public boolean equals(Object obj) { return EqualsBuilder.reflectionEquals(this, obj); } public String toString() { return getName(); } } Like Recipe, this is a basic POJO with two annotations. Notice that we have overridden two methods from Object: toString() and isEquals(). These methods are necessary for Trails to build a web interface for objects that are related to Category. The toString() method is necessary to tell Trails how to display a Category. The isEquals() method is necessary for Category objects to work properly when used in a List. We will see how these are important shortly. Now that we have created a Category class, we can add a category property to our Recipe class: private Category category; @ManyToOne public Category getCategory() { return category; } public void setCategory(Category category) { this.category = category; } Nothing fancy here, just a simple JavaBean property with an annotation to tell Hibernate what kind of relationship this is. Now we'll probably need to actually get into the nitty gritty and start typing some HTML, right? Nope, not yet. Trails will give us an application that manages the Recipe- Category relationship for free, no grunt coding required. Don't believe me? Run ant redeploy. When you visit the initial page, notice the link to List Categories. Follow this link and click on the New Category link. Create a couple of categories. Now go back to the Recipe page and create a new Recipe. You should see now see a Category field on the Edit Recipe page, as seen in Figure 4. "Assigning a Category to a Recipe" /> Figure 4. Assigning a category to a recipe Trails will give you, free of charge, a list of all of the Category objects for you to choose which Category a Recipe belongs in. This is where the toString() and isEquals() methods come into play. The toString() method is used to display the label in the select list, and isEquals() is used to determine which Category was selected. Conclusion Let's take stock of what we've done. We've built a complete (though simple) J2EE application that lets us manage recipes and assign them to categories. The only code we've written has been our domain model, and in return, we have an application that includes a web UI and database persistence. We have solid architecture that builds on proven frameworks such as Spring, Hibernate and Tapestry. And we've built this in just a few minutes. In the next installment we will explore Trails in greater depth. We will learn how to customize a Trails application to override the assumptions Trails makes. We will also see how Trails also handles relationships more complex than a simple many-to-one. And finally, we'll explore how Trails supports validation by annotating your domain classes. Resources - Code from this article: - For those who want to play along with our contestants at home, a "">.zip of the source - For those who always skip to the answers at the back of the book, the ""> completed .war file ready to deploy - Trails - Tapestry (see also O'Reilly CodeZoo: Tapestry) - Hibernate (see also O'Reilly CodeZoo: "">Hibernate) - Spring (see also O'Reilly CodeZoo: "">Spring) - Ruby on Rails - "">"Rolling on Rails," by Curt Hibbs - Login or register to post comments - Printer-friendly version - 10767 reads
https://today.java.net/pub/a/today/2005/06/23/trails.html
CC-MAIN-2015-40
refinedweb
2,550
57.06
This discussion came up on the alt.net list last night and I had this post about half way done so I agreed to push it to the top of my stack and get it done for today given the timeliness of the information. I apologize that I lied about what the next post would be. I will have the follow up on the last DDDD post next. Consider the following code: Repository<Customer> repository = new Repository<Customer>(); foreach(Customer c in repository.FetchAllMatching(CustomerAgeQuery.ForAge(19)) { } The intent of this code is to enumerate all of the customers in my repository that match the criteria of being 19 years old. This code is fairly good at expressing its intent in a readable way to someone who may have varying levels of experience dealing with the code. This code also is highly factored allowing for aggressive reuse. Especially due to the aggressive reuse the above code is commonly seen in domains. Developers are trained that reuse is good and therefore tend towards designs where reuse is applied. The reuse can be seen two-fold. The first is in the definition of an IRepository<T> interface something like: interface IRepository<T> { IList<T> FindAllBy(IQuery<T> query); void Add(T item); void Delete(T item); … } Then people using Object Relation Mappers such as Hibernate will tend to make a generic implementation of this interface since the ORM does most of the heavy lifting for them ex: Repository<T> Show me the polymorphism! A main reason why one would favor a generic contract that is possibly specialized such as in the example of IRepository<T> is that one could write code that operated upon IRepository<T> directly to perhaps do things like be a “generic object editor”. That is it uses the various repositories in a polymorphic fashion. Quite simply, where and more importantly why would someone want to do this? Finding a place or a reason for how this polymorphism would be used is extremely difficult under the guise of domain driven design. Perhaps in some sort of admin interface, but even this would fail to the forms over data complexity test and is likely better off being done in another methodology such as Active Record. As if the utter lack of necessity of a shared interface were not enough the introduction of such an interface actually causes further issues. C(?)R(?)U(?)(D(?) Some objects have different requirements than others. A customer object may not be deleted, a PurchaseOrder cannot be updated, and a ShoppingCart object can only be created. When one is using the generic IRepository<T> interface this obviously causes problems in implementation. Those implementing the anti-pattern often will implement their full interface then will throw exceptions for the methods that they don’t support. Aside from disagreeing with numerous OO principles this breaks their hope of being able to use their IRepository<T> abstraction effectively unless they also start putting methods on it for whether or not given objects are supported and further implement them. A common workaround to this issue is to move to more granular interfaces such as ICanDelete<T>, ICanUpdate<T>, ICanCreate<T> etc etc. This while working around many of the problems that have sprung up in terms of OO principles also greatly reduces the amount of code reuse that is being seen as most of the time one will not be able to use the Repository<T> concrete instance any more. Revisiting the intent. What exactly was the intent of the repository pattern in the first place? Looking back to [DDD, Evans] one will see that it is to represent a series of objects as if they were a collection in memory so that the domain can be freed of persistence concerns. In other words the goal is to put collection semantics on the objects in persistence. The key here is that as a rule there should be no persistence logic within the domain. This allows the domain to be more easily tested, tested independently of persistence and moved easily from a persistence mechanism (this is more important as a long term maintainability goal as opposed to “we want to use XML files now”). Simply put the contract of the Repository represents the contract of the domain to the persistence mechanism for the aggregate root that the repository supports. The realization that the Repository is less of an object and more of a contract to infrastructure is both a subtle and important one. The importance of the “Contract” The Repository represents the domain’s contract with a data store (another common word we may use here is that it is the seam). This is extremely important as one can tell every possible way that the domain interacts with the data store by looking at the contract. When it comes time to optimize the database for performance as an example one can look at the repositories and figure out what the domain requires of the data store. The unfortunately is only useful if the contract is narrow and specific. Consider the conceptual difference between the following. Repository<T>.FindAllMatching(QueryObject o); CustomerRepository.FindCustomerByFirstName(string); In terms of figuring out what the contract to the data store actually is the first example gives us no information.). Simply put: allowing query objects to be passed into the repository widens the contract to a point of uselessness. But reuse is good? None of us like writing the same code over and over. However a repository contract as is an architectural seam is the wrong place to widen the contract to make it more generic. You will note that nothing has precluded the use of Repository<T> only really of IRepository<T>. So the answer here is to still use a generic repository but to use composition instead of inheritance and not expose it to the domain as a contract. Consider: public interface ICustomerRepository { IEnumerable<Customer> GetCustomersWithFirstNameOf(string _Name); } In the customer repository composition would be used. Public class CustomerRepository { private Repository<Customer> internalGenericRepository; Public IEnumerable<Customer> GetCustomersWithFirstNameOf(string _Name) { internalGenericRepository.FetchByQueryObject(new CustomerFirstNameOfQuery(_Name)); //could be hql or whatever } } The key here is that our seam as exposed to the domain is a very specific architectural seam as opposed to the open/generic seam that allows us to do anything. Composition is used as opposed to inheritance to gain reuse while minimizing the width of the repository contract within the domain. Hint: Minimize the complexity at the entry/exist seams of your domain by making the seams as explicit as possible. yes lots of different vocabulary for them. Narrowing contracts is the important part. I’ve stumbled across this problem before and solved it by narrowing the contract to specific cases – remember, Repositories are not DBALs – they shouldn’t be provided with queries, its repositories themselves which – basing on domain requirements – are constructing the abstract data queries and pass them into DBAL ( what you called the “generic repository” ) here. I would not use inheritance and use composition instead because if you did that the contract of your repository would include GetByCriteria which I don’t want Greg, not sure if you’re still reading this, but why would you use composition for the generic repository instead using inheritance like the following: public class CustomerRepository : GenericRepository, ICustomerRepository { } GenericRepository will implement general methods defined in ICustomerRepository without any extra work, but you still get the freedom to explicitly implement any methods in CustomerRepository. Pingback: Event oriented databases : a new kind of persistence paradigm - Design Matters - ZenModeler's blog Pingback: planetgeek.ch » What is that all about the repository anti pattern? Pingback: Interesting links from 2011-01-22 (Kanban) | What is going on under my hat thanks for response: the question is I saw some implementation whereas IList _repository maintained. say when I am constructing my Repository I am filling it with _repository = LoadAllItems() in ctor. Then in my Find(lambda expression) function I do _repository.Where(lambdaEx). my main headache actually comes when I have subcategory as my customers has orders.. so what is the convenient way of loading? my agregate root is customer should I keep a reference to order as an object or just id number. I know there is a way of achieving lazy loading however I just want pretty plain solution. I mean if I want to follow DDD and do everything through Root then my Customer should actually hold all orders as list of objects.. assume that I have thousands of customers? thank you in advance @rusl: I am not sure I understand your question. You mention an internal collection, if your repository is over say SQL server why would you need an internal list? hi Greg, one question, maybe a bit off topic or naive… I just was wondering should I have to maintain internal collection in my repository. I mean when I have a function like getallcustomers() should I keep in my memory and further refer to this list.. or my function should only return a list and thats it. in that way my repository in loading data acts just like some DAL function. is not it? could you please help me with this… I got tons of other question however this one of essential ones.thanks. Besnik, What is the actual contact? what do they produce back? How wide is that contract? Have you looked at CQRS? Why would you want to do this in your domain, that is pretty obviously a report. Hi Greg, I’m trying to solve this problem (and querying of the data in respect to Open-Closed principle) in open source project. The idea is to separate specifications (queries) from the repository interface. You would load the domain entity like this: = customerRepository.Specify () IList .NameStartsWith(“Greg”) .OlderThan(18) .ToResult() .Take(3) .ToList(); Very nice distinction and explanation — for someone just getting started in DDD. I like the more specific v. generic repositories but share Christian’s worry about a very large contract. What if we defined a middle ground with constants in the domain repository interface instead of a multitude of methods, ex. findStudentsBy(FIRST_NAME)? It would keep the contract simple and the implementation in the infrastructure. Can’t we just define our query using the Specification pattern and centralized them into the repository assembly. Then have the Application Service assembly pass the specification query to the Find method on the repository? So, that we have less permutation of Find** methods … Thanks Regards kitchai yong Greg, Thanks for writing this. I agree with most of the things you’ve pointed out. I do have one question, however, about specifying the contract/seam between a domain model and a data store layer. From what I understand, we do not want to have a “generic” contract (like the example you gave) because it becomes too “wide” and renders it useless as a contract. I agree. However, if we have methods such as “findStudentByFirstName”, “findStudentByLastName”, “findStudentByOrganization”, “findStudentBasedOnHairColor”, etc, etc, won’t we end up with a rather large contract? For a sufficiently large domain aggregate, it’s possible to have hundreds of those very specific “find*” methods. What would be an elegant solution for something like this? NlFKyt first 3 letters of the title DDD Certainly most of DDD is overkill for pure CRUD but you shouldn’t be using it for CRUD either But do you think to many layer of encapsulation and the application services function just wrapping what repository have to do is useless for CRUD scenario? I dont think we get so much benefit to have custom repository. ryzam I believe the reasons why have been answered in this post. An application service is not a replacement for proper encapsulation of your domain. Hi Greg Why dont we let our service that control that behaviour. For example instead we have all custom repository handle the related persistence job, we move to services to control that behavior Ex: public class LecturerService : ILecturerService .Save(lecturer); { void CanCreateLecturer(Lecturer lecturer) { genericRepo } } Good post! I have written a similar item on my blog a while ago. My only “objection” is that I find a bit too much having to keep all the custom repositories only to constrain the amplitude of the generic one. Services do that for me, and clients only have direct access to them but never to the repositories. So instead of a CustomerRepository I would have a CustomerService making the calls to the generic repository. Interestingly enough I was coming to the same conclusion as you! I kept on looking at my Repository classes thinking about how I could generalize them, but ended up fighting that urge to generalize because I like the fact that my UserRepository had some methods on it specific to User concerns. The only difference is that I am using inheritance rather than composition. (eg UserRepository : BaseRepository , IRepository ) I thought it better to write up my view as a whole and then we can discuss it on Alt.Net. I read alt.net mails too. And I think people are confused when they see Repository word in both CustomerRepository and Repository . What I understand is Repository is a contract between DataMapper and Domain Model while CustomerRepository is a contract between Repository and Presentaion Layer( UI). the Repositories are different in terms of purpose of usage. right? I recommend everyone that reads this article to make sure they click on “Think Before Coding” to see what he has to add to this pattern. Greg, This was a great write up, I just recently came across the Repository pattern while looking through the S#arp Architecture project and I definitely saw the strengths of it (and the perhaps over zealousness of it in some aspects) but using it in delegation seems like a perfect fit. This is a manual pingback… Repositories and IQueryable the paging case. This is an extension of the discution about leting IQueryable flow oustide of repositories or not. I just wrote a post where I came to a similar conclusion, but using granular repository traits (ICan , ICanSave etc) to build up an interface that exposes a subset of methods from a mainly-generic concrete implementation. “A common workaround to this issue is to move to more granular interfaces such as ICan , ICanUpdate , ICanCreate etc etc. This while working around many of the problems that have sprung up in terms of OO principles also greatly reduces the amount of code reuse that is being seen as most of the time one will not be able to use the Repository concrete instance any more.” Not true – my concrete ProjectRepository subclasses Repository , so it uses all generic public methods. Common methods we do want like GetById() and Save() pass straight through to the generic base. Common methods we don’t want like Remove() aren’t exposed by IPersonRepository, so the domain doesn’t know about them. Heaps of re-use, and without having to write a whole load of little bridge methods from your CustomerRepository to the Repository inside. (last ) I feel the same way in so much as internal repositories should never be exposed to the client (e.g. the one developing in the domain). However, I still see plenty of merit in an inheritance-based public repository model instead of composition. I’m a firm believer in building for the rule and not the exception, and the examples you provided (non-updatable domain objects, the shopping cart) are the exception – it feels unnecessary to change the repository API for a few objects which may not conform to the rule. For most data-driven applications, a concrete inheritance-based repository model with an internal generic repository is a perfectly suitable solution. It’s not too advanced for developers unaccustomed to S.O.L.I.D, and it’s flexible enough to support more advanced scenario’s while giving developers their generic juice. As an example, lets take a newsfeed style service into consideration. Each story involves 1) an author, and 2) a target domain object on which the story is about. We’ll need the target domain object of each story, and this is incredibly easy to do in a simple loop over any number of domain objects when your public repositories are of Repository . Without the support of polymorphism, I might have to rely on reflection, or still revert back to IReadRepository style of inheritance (which always feels like smell to me). Also you made a very important point when you said “[make] the seams as explicit as possible”. If you’re going to use query objects, don’t expose them in your contract! Use them the same way as you would an internal repository – the repository exists to say “this is how you can find instances of me”, and providing an arbitrary query mechanism can mean an arbitrary death to your persistent storage (especially if it’s a relational database). At the end of the day, either solution is equally workable, and there is no one-size fits all pattern. My experience is generally against entirely mutable domain objects, with a few that are not (.e.g categories). I completely agree with this. It has always bothered me when people implement generic repositories in this way because you are now shifting potentially database specific concerns back out of the repository. By exposing a query object and allowing them to be built anywhere within the domain it makes it impossible to control (and limit) what kind of queries are formed. Your solution through composition is a nice middle ground that would still reduce a huge amount of repetitive code. Excellent post. Good stuff, as you know I feel exactly the same. Few comments: 1) Reuse – Couldn’t agree more, I’ve had massive reuse using concrete repositories for years using exactly this sort of technique (though I consider the composed repository to really be just a persistence helper class, an artifact of the implementation). and ISave etc). I found the polymorphism made my persistence tests cleaner, for example SaveTestHelper took an ISave , but as you said on the forums delegates might have worked too (though I find they can lead to less readable test code and had complaints from other devs about my existing use of them in test code). etc, agree you can’t use Repository but you can still get lots of reuse. I think its a valid approach. or QueryObject). 2) Polymorphism – I agree its not necessarily that useful and if you need it you can get it with concrete repositories (I 3) C(?)R(?)U(?)(D(?) – ICan 4) The importance of the “Contract” – Nicely put, agree completely. When you say “every possible way that the domain interacts with the data store” its also the way clients of the domain do, which is very useful. 5) Specifications – I think passing them in can be useful, but only if they are constrained (so not just allowing clients to pass in Specification On DDDD, it might be worth backtracking and explaining why we need it and where. Understanding the problems you see it fixing for the average DDD project would be very useful. Great post. I think the point you made about making the seams explicit is a great one. This point is really made clear in this post when talking about repositories. Thanks!!! “I avoid using sql databases as my transactional store and as such don’t have these issues.” Are you saying that when using as relational database, the wheels are turning and that a generic repository is a viable solution in case one can use fetching strategies à la Udi for optimizing load? I agree with your options for an RDBMS, but for me, selling an OODB or distributed hashtable is mission impossible. Again, great mind provoking post and looking forward to the one(s). Greg: We have long-since used BOTH the Repository implementation to get our reuse AND an ICustomerRepository to get the best of both worlds. ICustomerRepository would have the ‘GetCustomerByFirstName(…) method defined on it and all consumers of customers would use just this repository interface only. CustomerRepository *dereives* from Repository but *implements* ICustomerRepository. This way, it gets the benefit of the reuse of the generics-enabled repository base-class but since consumers depend only on the ICustomerRepository interface, they aren’t ‘aware’ of the .FindAll (…) etc. methods that are actually there on the class. This gives us both the benefit of the reusable generics-enabled Repository base class AND the intention-revealing methodnames hanging off the interface for the consumers of the repository to access. Curious your opinion on this approach…? yes by readonly I mean in terms of operations you are performing to it (it is updated by some process that keeps it in sync but as far as you are concerned you see it as being readonly). @Greg, re: “…hitting a read only reporting model…” Describing the reporting model as read-only surprised me. As changes are made to the transactional model, in many cases you would want those changes reflected in the reporting model as well correct? Can I assume that you mean read-only in the transactional context of DDD/repositories? Me thinks another post will be required on that? here are some options.. Object Database Unstructured storage (think bigtable/couch db) Event stream persistence (reassemble object based on their histories). Greg What, if not sql database, do you use as your transactional store? “I avoid using sql databases as my transactional store and as such don’t have these issues.” I don’t get this, but I expect this is a whole series of blog posts unto itself. Oh you silly Habs fans 😉 Are your domain repositories ignorant to the persistence/ORM technology? Do they just rely on your infrastructure repositories to handle the details or are the infrastructure repositories just for code re-use? Nick I believe the problem you are running into is a function of the impedance mismatch. I avoid using sql databases as my transactional store and as such don’t have these issues. In general Udi’s solution (I have seen it before) is a pretty good one to the problem though still painful in general. Shaun a better question would be why are you running reports in your domain. Note that by report here I am referring to something that sounds like it should be hitting a read only reporting model as opposed to your transactional model. Adhoc queries have no place in a domain that is modeling transactional behaviors. Greg I curious if your domain repositories are ignorant to the persistence/ORM technology and just rely on your infrastructure repositories to handle the details? If so, how do you effectively abstract ad-hoc queries to your infrastructure repositories? “So the answer here is to still use a generic repository but to use composition instead of inheritance and not expose it to the domain as a contract.” Hehe, I was thinking of suggesting just that while reading your post. This is how I’ve been doing things until now for pretty much the reasons you’ve posted, but have always had trouble figuring out how to optimize queries (which I why I started the thread on alt.net.) E.g. I can have a CustomerRepository.GetCustomerByLastName used in many different use cases. Some use cases are more perfomant if the customer’s orders are eager loaded and others more performant if not. How do I handle this? The options I can think of are: 1. Udi’s solution of interfaces. (Sorry his site seems to be down so can’t get the link.) 2. One repo method per use case. 3. Having a parameter on the repo method indicating the fetch strategy. Anyway, still stumped. huh? Does this mean that the method Find on List is just as bad? What exactly is the difference between l = getallcustomers(); internalGenericRepository.FetchByQueryObject(new someclass(_name)) and List var c = l.Find(new someclass(_name).finder) ???
http://codebetter.com/gregyoung/2009/01/16/ddd-the-generic-repository/
CC-MAIN-2018-51
refinedweb
4,002
52.6
List All the Notes Now that we are able to create a new note, let’s create a page where we can see a list of all the notes a user has created. It makes sense that this would be the homepage (even though we use the / route for the landing page). So we just need to conditionally render the landing page or the homepage depending on the user session. Currently, our Home containers is very simple. Let’s add the conditional rendering in there. Replace our src/containers/Home.js with the following. import React, { Component } from "react"; import { PageHeader, ListGroup } from "react-bootstrap"; import "./Home.css"; export default class Home extends Component { constructor(props) { super(props); this.state = { isLoading: true, notes: [] }; } renderNotesList(notes) { return null; } renderLander() { return ( <div className="lander"> <h1>Scratch</h1> <p>A simple note taking app</p> </div> ); } renderNotes() { return ( <div className="notes"> <PageHeader>Your Notes</PageHeader> <ListGroup> {!this.state.isLoading && this.renderNotesList(this.state.notes)} </ListGroup> </div> ); } render() { return ( <div className="Home"> {this.props.isAuthenticated ? this.renderNotes() : this.renderLander()} </div> ); } } We are doing a few things of note here: Rendering the lander or the list of notes based on this.props.isAuthenticated. Store our notes in the state. Currently, it’s empty but we’ll be calling our API for it. Once we fetch our list we’ll use the renderNotesListmethod to render the items in the list. And that’s our basic setup! Head over to the browser and the homepage of our app should render out an empty list. Next we are going to fill it up with our API. If you liked this post, please subscribe to our newsletter, give us a star on GitHub, and check out our sponsors. For help and discussionComments on this chapter
https://branchv21--serverless-stack.netlify.app/chapters/list-all-the-notes.html
CC-MAIN-2022-33
refinedweb
296
58.28
Hi Everyone! Just a quick update this time. We've seen more people posting about JavaScript extensions and scripts and we love hearing about what you're doing with them and seeing people embrace NatVis, 'dx', and LINQ. One of the things you've been asking us about is getting IntelliSense when writing their scripts. We've pulled together an IntelliSense definition file that you can use in VS Code to get IntelliSense on all the host.* debugger object APIs. To use the IntelliSense definition: - Download the IntelliSense definition - JSProvider.d.ts.zip - Extract JSProvider.d.ts to the same folder as your script - Add “/// <reference path="JSProvider.d.ts" /> ” to the top of your script With that in your script, VS Code will automatically give you IntelliSense on the host APIs provided by JSProvider in addition to the structures in your script. Just type “host.” and you’ll see IntelliSense for all the available APIs. If you hit any problems, don't hesitate to comment below. -Andy @aluhrs13 Have a problem with currentProcess \ currentThread – not avalilable 3: kd> dx -r3 Debugger.State.Scripts.testscript Debugger.State.Scripts.testscript Contents : [object Object] host : [object Object] diagnostics : [object Object] namespace currentSession : Remote KD: KdSrv:Server=@{},Trans=@{1394:Channel=10} currentProcess : Unknown exception currentThread : Unknown exception memory : [object Object] Shoot an email to WinDbgFB@microsoft.com with more details like WinDbg version, host, and target OS versions and we can take a look. Is this the latest edition ?
https://blogs.msdn.microsoft.com/windbg/2017/05/18/intellisense-for-debugger-javascript-apis/
CC-MAIN-2018-17
refinedweb
246
57.57
15 articles on home security Bajet $30-250 USD I am requesting 15 articles written concerning the topic of home security.. We have a large number of ongoing projects and the right candidate can look forward to additional writing projects. is ASCII text file (.txt). Using Windows line breaks that is a carriage return (cr) character followed by a new line (nl) character. ___. ___ Keep paragraphs between 2 and 6 sentences in length. ___ After the title write a. ___ At the top of the article place the keyword phrase followed by a colon (:) and then). *** A sample of the article structure I am requesting for the articles is at the very bottom of this description. ABOUT THIS PROJECT --------------------------------- Main Phrases: home security Overall Theme: The typically reader is a home owner who is concerned about the safety and security of my family in our home and is researching various home security options to protect his home and family...
https://www.my.freelancer.com/projects/articles-home-security/
CC-MAIN-2017-47
refinedweb
160
61.97
On 06/13, Roland McGrath wrote:>> > Oh. And another problem, vfork() is not interruptible too. This means> > that the user can hide the memory hog from oom-killer.>> I'm not sure there is really any danger like that, because of the> oom_kill_process "Try to kill a child first" logic.But note that oom_kill_process() doesn't kill the children with thesame ->mm. I never understood this code.Anyway I agree. Even if I am right, this is not very serious problemfrom oom-kill pov. To me, the uninterruptible CLONE_VFORK is bad byitself.> > But let's forget about oom.>> Sure, but it reminds me to mention that vfork mm sharing is another reason> that having oom_kill set some persistent state in the mm seems wrong.Yes, yes, this was already discussed a bit. Only if the core dump is inprogress we can touch ->mm or (probably better but needs a bit more locking)mm->core_state to signal the coredumping thread and (perhaps) for somethingelse.> > Roland, any reason it should be uninterruptible? This doesn't look good> > in any case. Perhaps the pseudo-patch below makes sense?>> I've long thought that we should make a vfork parent SIGKILL-able.Good ;)> (Of> course the vfork wait can't be made interruptible by other signals, since> it must never do anything userishYes sure. That is why wait_for_completion_killable(), not _interrutpible.But I assume you didn't mean that only SIGKILL should interrupt theparent, any sig_fatal() signal should.> I don't know off hand of any problem with your> straightforward change. But I don't have much confidence that there isn't> any strange gotcha waiting there due to some other kind of implicit> assumption about vfork parent blocks that we are overlooking at the moment.> So I wouldn't change this without more thorough auditing and thinking about> everything related to vfork.Agreed. This needs auditing. And CLONE_VFORK can be used with/without allother CLONE_ flags... Probably we should mostly worry about vfork ==CLONE_VM | CLONE_VFORK case.Anyway. ->vfork_done is per-thread. This means that without any changesdo_fork(CLONE_VFORK) can return (to user-mode) before the child's threadgroup exits/execs. Perhaps this means we shouldn't worry too much.> Personally, what I've really been interested in is changing the vfork wait> to use some different kind of blocking entirely. My real motivation for> that is to let a vfork wait be morphed into and out of TASK_TRACED,I see. I never thought about this, but I think you are right.Hmm. Even without debugger, the parent doesn't react to SIGSTOP. Say, int main(voif) { if (!vfork()) pause(); }and ^Z won't work obviously. Not good.This is not trivail I guess. Needs thinking...Oleg.
http://lkml.org/lkml/2010/6/14/282
CC-MAIN-2014-52
refinedweb
452
68.36
/* $NetBSD: wwadd.c,v 1.5 1997/11/21 08:36:54add.c 8.1 (Berkeley) 6/6/93"; #else __RCSID("$NetBSD: wwadd.c,v 1.5 1997/11/21 08:36:54 lukem Exp $"); #endif #endif /* not lint */ #include "ww.h" /* * Stick w1 behind w2. */ void wwadd(w1, w2) struct ww *w1; struct ww *w2; { int i; struct ww *w; w1->ww_order = w2->ww_order + 1; w1->ww_back = w2; w1->ww_forw = w2->ww_forw; w2->ww_forw->ww_back = w1; w2->ww_forw = w1; for (w = w1->ww_forw; w != &wwhead; w = w->ww_forw) w->ww_order++; for (i = w1->ww_i.t; i < w1->ww_i.b; i++) { int j; unsigned char *smap = wwsmap[i]; char *win = w1->ww_win[i]; union ww_char *ns = wwns[i]; union ww_char *buf = w1->ww_buf[i]; int nvis = 0; int nchanged = 0; for (j = w1->ww_i.l; j < w1->ww_i.r; j++) { w = wwindex[smap[j]]; if (w1->ww_order > w->ww_order) continue; if (win[j] & WWM_GLS) continue; if (w != &wwnobody && w->ww_win[i][j] == 0) w->ww_nvis[i]--; smap[j] = w1->ww_index; if (win[j] == 0) nvis++; ns[j].c_w = buf[j].c_w ^ win[j] << WWC_MSHIFT; nchanged++; } if (nchanged > 0) wwtouched[i] |= WWU_TOUCHED; w1->ww_nvis[i] = nvis; } }
http://opensource.apple.com//source/shell_cmds/shell_cmds-18/window/wwadd.c
CC-MAIN-2016-40
refinedweb
195
85.99
Next: nc_put_vara_ type, Previous: nc_put_var1_ type, Up: Variables The nc_put_var_ type family of functions write all the values of a variable into a netCDF variable of an open netCDF dataset. This is the simplest interface to use for writing a value in a scalar variable or whenever all the values of a multidimensional variable can all be written at once. The values to be written are associated with the netCDF variable by assuming that the last dimension of the netCDF variable varies fastest in the C interface. The values are converted to the external data type of the variable, if necessary. Take care when using the simplest forms of this interface with record variables when you don't specify how many records are to be written. If you try to write all the values of a record variable into a netCDF file that has no record data yet (hence has 0 records), nothing will be written. Similarly, if you try to write all of a record variable but there are more records in the file than you assume, more data may be written to the file than you supply, which may result in a segmentation violation. int nc_put_var_text (int ncid, int varid, const char *tp); int nc_put_var_uchar (int ncid, int varid, const unsigned char *up); int nc_put_var_schar (int ncid, int varid, const signed char *cp); int nc_put_var_short (int ncid, int varid, const short *sp); int nc_put_var_int (int ncid, int varid, const int *ip); int nc_put_var_long (int ncid, int varid, const long *lp); int nc_put_var_float (int ncid, int varid, const float *fp); int nc_put_var_double(int ncid, int varid, const double *dp); ncid varid tp up cp sp ip lp fp dp Members of the nc_put_var_ type family return the value NC_NOERR if no errors occurred. Otherwise, the returned status indicates an error. Possible causes of errors include: Here is an example using nc_put_var_double to add or change all the values of the variable named rh to 0.5 in an existing netCDF dataset named foo.nc. For simplicity in this example, we assume that we know that rh is dimensioned with time, lat, and lon, and that there are three time values, five lat values, and ten lon values. #include <netcdf.h> ... #define TIMES 3 #define LATS 5 #define LONS 10 int status; /* error status */ int ncid; /* netCDF ID */ int rh_id; /* variable ID */ double rh_vals[TIMES*LATS*LONS]; /* array to hold values */ int i; ... status = nc_open("foo.nc", NC_WRITE, &ncid); if (status != NC_NOERR) handle_error(status); ... status = nc_inq_varid (ncid, "rh", &rh_id); if (status != NC_NOERR) handle_error(status); ... for (i = 0; i < TIMES*LATS*LONS; i++) rh_vals[i] = 0.5; /* write values into netCDF variable */ status = nc_put_var_double(ncid, rh_id, rh_vals); if (status != NC_NOERR) handle_error(status);
http://www.unidata.ucar.edu/software/netcdf/old_docs/docs_3_6_1/netcdf-c/nc_005fput_005fvar_005f-type.html#nc_005fput_005fvar_005f-type
crawl-003
refinedweb
451
54.05
... -vars.html ... -in-python What I got out of this was, every object is backed up by a __dict__. Alright, but is a __dict__ an object, function, etc? Does it mean a dictionary? What does the __dict__ do? Its an attribute ok, but could someone explain an attribute better please? I got this out of the docs.python.org: - Code: Select all Attribute | Meaning __dict__ func_dict | The namespace supporting arbitrary function attributes. Could someone explain? I think this definition means the string or field, etc. that supports something about a function. Again, I know an attribute is a reference to I think the name of a function in a module. So then, it supports certain functions from a module? Is it like the namespace itself?
https://www.hackthissite.org/forums/viewtopic.php?p=88121
CC-MAIN-2020-29
refinedweb
125
78.96
I have to set my cin's and cout's to work in a while loop. The way it is supposed to work is, the loop will end once someone enters "-999" as the int number... as you can see below. I have yet to add in the function call and other things, but I think I have come far enough to be able to ask this. I think if I put the cin << number; in the loop, it will not terminate because it will not check until it has gone thru everything. Can someone tell me what to do please? Code:#include <iostream> #include <iomanip> using namespace std; double commission (double actualsales, double basesales, double compercent); //function for the program int main () { //the int main has begun string name; int number; double basepay, compsales, printsales, compcom, printcom, totalcom, totalpay; cout << "Welcome to the APSU Computer Company Commission Statement program!" << endl; cout << "Please Enter Salesperson's Identification or -999 to terminate." << endl; cin >> number; cout << "Please Enter Salesperson's Name." << endl; cin.ignore (80, '\n'); getline (cin, name); cout << "Please Enter Salesman Base Salary." << endl; cin >> basepay; cout << "Please Enter Personal Computer Sales." << endl; cin >> compsales; cout << "Please Enter Printer Sales." << endl; cin >> printsales; }//the int main has ended. oh my! double commission (double actualsales, double basesales, double compercent) {//birth of a function double comm; if (actual sales > basepay) { comm = actualsales * compercent; //and you get to keep your job :) } else { comm = 0; } return comm; }//death of a function
http://cboard.cprogramming.com/cplusplus-programming/84670-need-help-while-loop-pretty-please.html
CC-MAIN-2014-23
refinedweb
248
70.84
- Using Vectors in classes - public access - Debugging in Visual C++ 2005 Express. - only first character is displated from string - Path della dll in uso! - How to convert std::wstring to const WCHAR* - Set Output data in a file with Visual c++ - Paradox Language driver change - how to insert a new line in a label, when the first line is imported from a dll file? - copy constructor doubt - Copy construtor - Multidimensional arrays in C - C++ library for SCADA project (Linux/Unix, Windows, OpenVMS) - Line reader function - scanf() quesion? - large and sparse matrices - save iterator as a DWORD - Template array with pointers - Need a computer - Why construtors dont have return type? - vector subscript out of range - Generic container where type is known only at runtime - wxDialog not displaying control - how to pass a struct to a function - how to op text file use stl string? - Iterators - C++ image Help - help with DB2 prep and binding in C++ - static SQL - Making Includes - How Does One Discover the Full Path Name of the Current Directory? - suggestions please - "Virtual functions allow polymorphism on a single argument" ? - Help: Problem with SDL code - msvc and strdup? - Generic programming - Polymorph in Place - WildCard - Problem searching an array with String Compare - OpenGL + C++ : 3D triangulation - need help - Mechanical Engineer with VC++ Skills - .Net VC++ Java C++ Windows Internals Unix Internals - .Net VC++ Java C++ Windows Internals Unix Internals - Mechanical Engineer with VC++ Skills - Neeeeeeed A Helpppppppp - Why specify a virtual function in a derived class? - Need Answer For my Question - Creating .txt file - Binary or text file - Invoking a clr event from a non-clr library - Tally9 - Self-reproducing program in Ken Thompson's Turing paper - Please expplain this declaration - Build error - inline a function from a .s file? - vector class string sort - pointer error - How to replace a string in #include define by using #DEFINE - isctype.c assert - want to access I/O ports through C - How to read a ISO-9660 file stored in H.D. through C - why does initialise with vector not work here ? - read directory and sub-directory - what on earth can c++ do? - Survey: ATI vs. NVIDIA - Survey: ATI vs. Nvidia - a c++ program printing squares and cube but using if statement only - Using a 'while' loop to control input - pointer to char - Preincrement operator - Function inp / outp - outtextxy is stopping floodfill - Fast way to read a text file line by line - Printf function problem - Matricies in C++ - Run external program - Exception Handling - assigning to void pointers - Avoid nested "if's"? - File operation problem - How To Add Noise To Data - Cat Chasing Rat In Maze - Scanning USB devices and reading USB descriptors - Differences between Copy on Write (COW) and counted reference - Pointers to key and value of a map - weird problem with new - how to write a message based producer consumer program - get 1D row of 2D vector array - OutputIterator value_type - templates, inheritance and [] operator overloading - How to read a short int variable? - Problem with C++ dll... - Problem with template class on GCC - MFC Code problem - Backup Of Database Using VC++ - Problem in Button - Sudoku Solver - To Create Scrolling Banner - C++ - exit() from within C++ program - Help With Changing Integer Data Type Into Character? - Variable Truncation - How to improve C skill? - IsAnagram - Magic Number !! - Delete Folder (C++) - question on program using functions and arrays - Rad C++ - Not sure where this goes - question about software - Efficient way to search through points - Dev-C++_IDE_settings - wxWidgets on CodeBlocks - Linker Errors (release build) - Adding a function's argument while preserving the API compatibility? - char[] and char* - Warning message - sprintf function - boost::variant Binary visitation - Newsread test - Yahoo Hacked! Credit Card information stolen OMG! - Call base class constructor? - how to pass this parameter into the function - weird C++ "ambiguous call to overloaded function" - A simple question on heap objects. - Wii - Integers vs floats - A Template that rejects classes. - Exception - Function pointer problem - pointers in function - problem of reading a file which contains a structure return in it - passing file - Display array values in visual c++ ? - aggregate return warnings - Stream operator in namespace masks global stream operator - Dealing with naive malloc() implementations - can a container contain references? - memory reading problem in c++ - Is it possible to identify caller function from a callee? - I/O Audio library (cross platform) - declare member variable for each type in a typelist - Urgent openings in MNC's for experienced Professionals - Code to display Rbtree - Output and operator precedence in C - controlling a application throgh external key(vc++) - Freeing dynamic memory of lex tokens - Huge memory allocation - An overloaded operator& needs the address of its argument - need a concept - _efficiently_ obtain n-th word from a std::string - Searching Windows Directories in C programming - I have question with Toolbar - problem - Usage of "auto". What for? - Compile time vs runtime? - can a static member function be overloaded?? - problem - what is pass by ref function? - Sniffer C - Read data from File in columnwise Emergency - problem - network - Can this cause a program to crash? - i want to use % (mod ) operator for a variable of type double - static and function calls help+ - Read unicode text file - bytes to unsigned long - Question on passing a structure to a function - intrenal process of malloc and free? - intrenal process of malloc and free? - Generic iterators to specific types - access of memory beyond allocation, not causing segmentation fault... - How to memcpy specifying starting and ending points... - C/C++ interview question - Need custom software - pay good! - Need custom software - pay good! - Text File Manipulation - Frequency of most repeated element from input. - c text loader - c++ Execute - Lang C - Help With Changing Integer Data Type Into Character? - segmentation error fprintf in c - General File management Library - How to Port C++ dll in Linux to Windows? - structure and malloc - Efficient communication between PHP and C++ ? - Parsing - wont run - dynamic charecter in c & output format - Problem with checksum in RAW SOCKET c - DB question - Problem with A Tiiinnny Word processor C++ - Parallell struct dependancy in C++ - Grammar: What's in a type-specifier-seq? - warning: taking address of temporary - warning: taking address of temporary - Custom Pool Allocator Question - Error checking using sscanf - from c++ to python - Static member in a class - subtraction and division using srand - Using "this" in a base initializer list? - NUMERIC CENTER - printf - Puzzle: make new compilers understand what g++ 2.95.3 compiled - Dereferencing a null-pointer allowed? - Format specifier - Puzzle: make new compilers understand what g++ 2.95.3 compiled - Linker error: undefined reference - DrawItem in VC++ - How to delete a locked file ? - Tool for reviewing c++ code ? - Binary values assignment - handling input and output data in files - random no. generator - Please help me with this program. - Access Violation. - Computer Graphics-programming C++ - Question about Inline functions. - Purpose of #define inside struct definition? - Can size of array allocated through new be changed ?? - xml file reader - Method of writing If conditions in C/C++ - (un)connected udp socket question - #defines - operator << for std::string in #define - C++ Vs VC+ - C++ Lists - trying to stop this loop after condition is meet - Export the contents of a SqlDataReader to CSV file in ASP.NET - Another Group - Dialog Box - Can you fix this code? - Comparison issues using fstat and very large files in C++ - Boost lib problem in debug model, VC++ - How can I write graceful codes? - Code & W.Form - Explicit typedef - Palindrome - Need information on graphics programming using C - Pointer to vector<T> contents - Sorting a map<T*> - Implementation of rollDie() method - error C2228: left of '.rent' must have class/struct/union type - numerical integration of sin(x)/x - Where to find the exact requirement of the operand for C operators? - Console width and height, MS Visual C++ - String array() in C - String problems - program 4 skool project - program 4 skool project - program 4 skool project - exiting a socket and reading a file queries - Problem with template - want to write a program for myself - CAN'T USE GOOGLE SEARCH ENGINE - Dynamically stored strings. What is the exact method? - Conversion of datatypes (probably not what you think :p) - Which form of namespace usage is better and why? - OpenMP - dynamic and static? - Namespace goes with template? - C++ alternative for PHP's $$variable - character constant is tool long of its type - Custom Software Development - Custom Software Development - switch case to a jump table - read operation in c++ - Implementation of SNMPv2 - Virtual Machine implementation problem, Please help me to spot the bug - SIGSEGV in malloc() - c to m-file - use of #define (...) #undefine ? - how can wecalculate the average for sparse matrix - Can someone please explain to me what is my teacher asking me? - How to scan linked source directory using cscope - operator overloading - trouble understanding a problem when using strcat - operator evaluation question - segmentation error - segmentation error - Question about comment parsing between C and C++ compiler - Trouble with setting environment variables. WIN32 using system(); - Directly reading/writing a HDD - switch not assigning value problem - trouble printing 'Hello World' - New Skype 4.1 Alpha released for linux. - New Skype 4.1 Alpha released for linux. - a function to verifies if the values are correctly ordered in the tree - Running processes and open ports - Problem overloading extraction operator - copying a deque with dynamically allocated objects - memory leak? - Help with command-line prog - Printing 2 dimentional arrays - getting integers from unknown number of lines from a file. - use of STL has problems - give me some hints - help me with this - Partial Specialization Method - ctime clock - Passing a temporary object to a reference parameter - Linking code with templates and template libraries - a compiler error with template function - All what you are looking for and much more - All what you are looking for and much more - I want help on some keywords in C language - how is string::c_str() implemented? - simple program on execv() function - Did you read about that? - how to draw somthing (e.g. a line or pixel) on a dialog box? - creating an iterator - threads - Aix Stl - Structs - Simple code for reading word by word from a .txt or .doc file - Simple code for de-comment a .C file - signigicance of VOID in C language - Guys don't miss this. - Did you read about that? - How to get a templated class to determine template argument list on it's own? - Help learning C++ - C++, Win32 API, .NET, Java, J2ee in UK-Work permit - Get variable from file - Need Help - What am I doing wrong ? - Running .EXEs From A String Path - what does __declspec does exactly? - Error - Classes and structures - loops - How do I output this pattern? - fibonacci series using recursion - class and array of struct - Help with templates and code generalization - How to write PHP server - copy the string - Precedence of operators - difference between Rad and C respect delphi - Sending console output to printer - delete, hash_map - Problem with writing and reading from serial device ttyS0 - Save Game - Vector - Interdependent classes - namespaces, - "istream" problem - how to add header files - new() and memcpy() - Initializing an array comprised of very long strings - handling with arrays, what will be faster??? - Program crashes need help - Help with reinterpret_cast in g++ - Variadic Functions - passing command line argument to array - return address of string from a function - memory aligment problem - File comparison in c++ - skips through - inclusion guards - Temporary and non-const member function - CFP: ISMM'07 (Montreal, Oct 07) - CFP: ISMM'07 (Montreal, Oct 07) - please solve this query for me. - find the bugs - find the bugs - macros and side effects - switch and friends - ambiguous superclass functions when specializing STL class - how to access parts of string in c - Shared objects with configuration files in them - Possible to do the following with Exceptions ? - How to communicate between different classes? - Binary set - how to calculate offset? - Find Closest value searching an array with linear search - Compiling C++ on MS Windows - Query on pointers - how can i create google desktop api using C++.plz help! - Use of Switch Case - scanf right square bracket - MFC classes: CDatabase, CRecordset - simple quiery on strings - ACL- Access Control List - nth biggest number in unsorted array - destructor invoked without constructor. - Virtual Variables? - converting string to char* - overriding placement new + memory management - function pointers - default assignment operator - why template class member fuction definition should be in header file?? - Converting string to charater type - c program for cramer's rule, gaussian elimintion - static array as a class member - c++ rpg - Dynamic allocation to static memory allocation - optimization techniques- help me - why does this call the destructor? - palindrome using pointers? - memory problem.. - Compilation error - Configuration Management - Error C2065 - how to copy/store system func return data - Managed and Unmanaged C++ Interop - CLRSCR(); and GOTOXY(); - reading words from a file in C that was stored in an array(PROGRAM )i require - For Function passing LPCTSTR or CString which is better?? - C++ program unable to utilise close() in linux environment - Switch Case or If Else If - serialize a class - how long can a temporary object live? - overload "operator->" - help with logic - How to make a loop that asks user if they want to enter more information? - printf function - skips through script - question on a simple for loop - How to read whole line including white space? - help with ofstream - Again help... - memory - Binary trees - Binary trees - memory - reference member variable question - C# Method Call Help - storing 2d array - looking for code to read screen parameters from command prompt - System call and library call - Mobile communication middleware - asking about MFC - polymorphism; dummy parameters - Help - List View - Locating build-in functions - Problem with static variable in template class - Problem with a list operation - Alternative to "system()" in C++ - Passing a pointer of constructor - Simple C# validation. - Can anyone tell me the Syntax of GET POST - How to read a part of an integer - How to control movement on the screen - VS2005 compile error "C2039" - What is the diference between #include <cstdio.h> and #include <stdio.h> - aCC compiler error. - inheritance is not for code-reuse (??) - smart pointers -- C++ Primer - algorithms and c programs for crammer's rule, gaussian elimination - Decimal places in a numer - C++ Subroutines - char* to type and file - nicely printf a matrix - How to get the last key pressed? - string reverse - A simple unit test framework - error - Searching for stack based containers - reading words from a file in C that was stored in an array(PROGRAM )i require - checking isnumaric for a string variable - fatal error C1083 - Bit manipulation - ((u32) (0-1) << (24)) ? - Weird outside class function definition in C++? - Class or structure, which is efficient in C++? - why we use Void main () - Problem in the code - how to clear display - Using STL maps with custom constructor in a class - need help to make application crypto-steganography - oops for chess board game. - Convert exponential string to non-exponential format - Help needed converting C++ to C# - error LNK2019 - newton raphson method - unsigned char - please help - Need help using fgets and sscanf to parse input - LNK2001 : Unresolved external symbol - storing results into an 2D array - VC++ 6 IDE Tutorial REQUIRED - Read file in C - Plz Help Its Urgent - Get Files from Folders in a share using C++ - How to modify a file which contains a vector? - error LNK2001: Unresolved external symbol - Private members? - Meaning of the word hard coded - Function Arguments - How to pass various structures to function? - C program for Inter Process communication ( Tx and RX) - i dont know what the problem is... - Visual C++ and large 2d arrays - Construction/assignment problem involving templates. - two types of heap allocated objects--any benefit? - ... the most common database used in Linux - Why segfault and no NULL match in for loop? - whats wrong with my function? - Derived Class Constructors - Template question - LNK2019 on you own non-static class member functions Oo - I need help for std::codecvt<> - Its not really C++ but programming logic - Vectors and Data files - function quick question - C program- How to output in zip format? - Mapping - expected primary-expression error - static_cast better than C cast ? - Opening raw character device in DOS/Windows 98 - Structure Definition Tool - Linking problems in debug mode - Extern union - Mapping Class Diagram to C++ coding - need help with undefined reference errors - Strange include order problems - how to initialize static references to an object, second attempt - bad_alloc error... - Skipping parameters in a printf() - Reading many data files - problem declaring template - catch() block not working... - memory for global variable... - Object construction question - Sorting of vector according to a string which contains date in a particular format - const member variables in a class... - Pointer on templated class - template template parameters - how to read from a csv file and write in another file ? - how to convert a 'const char *' to 'char *' in vc++ - Multiplication instead of pointer declaration - Does default function will return any value? - Who calls main() in C++ programs - Sending mail without using outlook express - Whether a Template function is possible in a normal class - Can't edit a project convert from VC 6 to VS 2005 ! - malloc call not returning - Mac modifier - regarding the project title - help for turbo c++ program-amicable numbers - error correcting - Unresolved external 'atoi' - save code for a .pnm image editor - very simple pass by reference question - Some thoughts on polymorphism - store values inputted with getline() in variables in struct - error: aggregate 'Window wnd' has incomplete type and cannot be defined - Source Code - boolean expression - queue like operation on array of structs - virtual function declaration - Change single value object to multiple value object - Does Qt4 TCP Socket client can communicate with POSIX Server ?
https://bytes.com/sitemap/f-308-p-52.html
CC-MAIN-2019-43
refinedweb
2,892
50.77
Codes are wrong The above code is totally wrong, so correct it!!! Program is for create a new file file.createNewFile() should be replaced as file.deleteFile(). Java already has a method to remove file Java already has a method to remove file File f = new File("filename"); f.delete(); Error in the Program In the program u have not mentioned any delete() function so how can it be able to delete the file....please update the code import java.io.File; import java.io.*; public class DeletingFile { public static void main(String args[])throws IOE io io write a program in java to accept a filename from user and print... Scanner(System.in); System.out.println("Enter file: "); String str=input.nextLine(); File file=new File(str); FileReader fr=new IO in java the directory as before along with a file containing a set of queries. Each line...IO in java Your search mechanism will take as input a multi-word... the words specified appear closer to one another. For example, if you search java binary file io example java binary file io example java binary file io example Working With File,Java Input,Java Input Output,Java Inputstream,Java io Tutorial,Java io package,Java io example . The instance of this class represents the name of a file or directory on the host file... to the existence of a corresponding file/directory. If the file exist, a program can... it, deleting it, reading from or writing to it. The constructors of the File class Delete file or Directory Delete file or Directory  ... or a directory by using a java program. We pass the file name or the directory name to which... the delete() on this file, If it is not a file then we check for the directory and also Moving file or directory from one directory to another Moving file or directory from one directory to another... from one directory to another. In this section an example is given...; If you want to move file or directory from other directory io io create a meanu based text editor in java having features of creating file,viewing file,deleting file,renaming file,copying file and cutting file Getting Current Directory from java - Java Beginners Getting Current Directory from java I'm designing an application in linux where i need to get the current directory of my java file... I'm using.../example/java/io/GetCurrentDir.shtml Thanks Java - Deleting the file or Directory deleted; false otherwise. delete() Deletes the file or directory denoted... Java - Deleting File  ... of file and directory pathnames. Explanation This program takes Java IO Path called the delimiter. The file separator varies from O/S to O/S for example... a file uses the complete directory path from root node to file is an absolute... not contains the complete directory path from root node to the file which has
http://roseindia.net/tutorialhelp/allcomments/5038
CC-MAIN-2014-41
refinedweb
484
57.47
Video President Trump: Going to achieve even more in 2018 President Trump holds a cabinet meeting. **Want FOX News Halftime Report in your inbox every day? Sign up here.** On the roster: Can Trump triangulate? – Issa calls quits, boosts Dem chances for House takeover – Budget deal grows with GOP push for defense spending – Judge checks Trump on DREAMers – Icy Hot, IRL CAN TRUMP TRIANGULATE? These are said to be anxious times for the populist insurgents who toppled the Republican establishment in 2016 and propelled Donald Trump into the presidency. Their onetime champion in the White House, Steve Bannon, has quickly descended from a perch as lofty as the National Security Council to banishment and disgrace after helping deliver back-to-back special election defeats in Alabama for his former boss and getting caught once too often trashing Trump’s progeny in public. Meantime, the forces of the old order are ascendant. Having delivered in the nick of time a major first-year legislative success for the president with tax cuts and having brought new measures of purpose and order to the once-rampageous White House, the corporatist claque that traditionally steers the GOP seems to have regained the rudder. Team Chaos is in retreat, damaged not just by – duh – chaos, but also the ongoing depredations of the probe into Russian interference in the 2016 election. America’s head prefect, Special Counsel Robert Mueller, has already laid low some key figures of early populist Trumpism and many of the others are now preoccupied with contemplating multipliers on the hourly billing rate sheets for Washington criminal defense lawyers. Saying you’re “wired” used to be a good thing in Washington. Not so much these days. House Speaker Paul Ryan and Senate Majority Leader Mitch McConnell have played a relatively long game with Trump. After weathering the early turmoil of “American carnage” rhetoric, unforced errors on the Russia probe, hit-and-miss appointments, Bannonian over-swings on policy and the triple-Lindy bellyflop on ObamaCare without getting to full rupture, Ryan, McConnell and Trump have established – for now – a fruitful partnership. Mitt Romney is gearing up a Senate run, earmarks are back on the board, Trump is getting ready to head for Davos and Bannon is heading for a late lunch at Denny’s. All that says a great deal about the manner of MAGA in the first weeks of 2018. Who’s draining whom, here? But nothing, of course, says as much about the condition of Trumpism as the issue that launched the movement in the first place: Immigration. A great alarm that went up among the presidents earliest and most ardent supporters at his remark Tuesday in a televised immigration negotiation with congressional Republicans and Democrats, during which Trump spoke of an immigration policy of “love” (sorry, Jeb Bush) and, most significantly saying of a deal that would trade tougher enforcement for amnesty for illegal immigrants already in the country: “If you want to take it that further step, I’ll take the heat. I will take all the heat. You are not that far away from comprehensive immigration reform.” Now, it would be a mistake to take any president too literally in a moment like this one, doubly so for one to whom such pronouncements are as steamed spinach is to a Vitamix. But, coming as it did after the unpersoning of Bannon and weeks of celebrating tax cuts with the GOP establishment, it caught Trump’s most nationalistic supporters just wrong. Warnings of “one-term” and “betrayal” and the like flew up on social media and on air. If Trump cut a deal on the more than 10 million illegal immigrants, they said they would make him pay. Bannon, already plotting his return (presumably muttering into his order of Moons Over My Hammy) sent word through one of his many, many press contacts that he was already building a political network to “help” Trump after the establishment abandoned him. According to our Bannonese to American translator, “help” means come after Trump’s family, generate a bajillion self-serving stories about Republican civil war and then lose safely held Republican seats. The problem for those who believe they now hold Trump hostage over immigration is that they are in the small minority of Americans on the subject, even within their own party on some issues. We should remember from the outset that Americans overwhelmingly favor a fairer, more orderly and better enforced policy on immigration. One of the main reasons that Trump is president today is that leaders in both parties somehow forgot that obvious truth between 2011 and 2016. But when it comes to what to do about the so-called DREAMers – young adults brought to the United States as minors – there’s also overwhelming sentiment in favor of mercy. December polls from WSJ/NBC News and Quinnipiac University found similar results with supermajorities in both surveys wanting to let the DREAMers stay. Quinnipiac even asked about letting them and offering a chance to become citizens – 77 percent favored that plan. That’s reflective of the last Fox News poll on the subject, taken in September, when our pollsters inquired about all illegal immigrants, not just those brought as minors. What should become of those millions already working in the U.S.? Should they be deported in as large a number as possible or offered a way to become legal residents? Eighty-three percent favored legalization over deportation for all illegal immigrants and 79 percent favored outright citizenship following background checks for DREAMers. Just 12 percent supported the administration’s threatened revocation of work permits. With numbers like these, it’s hardly unreasonable to think that Trump could get away with doing on immigration what Bill Clinton did on welfare and steal the issue right out from under the Democrats’ noses. And if Trump gets a sturdy section of border wall and keeps ratcheting up enforcement of existing laws, those numbers would likely look even better. And just wait until Democrats start howling about how tough and unfair the demands of the White House demands are. Throw in some media think pieces on the subtle racism/gender inequality of immigration reform and you’ll have opposition down to single digits! As much as the Blue Team would hate to lose the issue as a political tool, the same nationalist/populist factions that helped launch the Trump 2016 dirigible would hate it even more. Immigration outrage has been the top topic of the anti-establishment wing of the GOP since at least 2007, when they stuffed habaneros up George W. Bush’s nose over the subject. None of this is to say that it is certain or even likely that Trump will execute a Clintonian (him, not her) pivot as his presidency progresses. This has been an especially reactive, jolty administration and crafting a deal like that one requires relatively long spells of steadiness. And, of course, the razor-sharp asterisk of the Russia probe could slice things to bits any given day. But what if Trump got even half a deal? Or even showed considerable progress toward an agreement? Bannon may be right that the political establishment would drop Trump like a hot rock when he comes too inconvenient. But what if the lift gets easier and lighter? What if Trump gets even a bit more popular as the worst predictions of his fiery and furious term fail to come to pass and Americans learn to tune out more of this splenetic venting? Again, neither certain nor likely. We have seen many a man and woman go wrong betting that this was finally the Trump pivot. But having been betrayed by the leader of his self-styled palace guard and gifted success by his former establishment foes, if ever there was a moment for Trump to try triangulation, this would surely be it. THE RULEBOOK: MMMMHMM… “Sometimes whole circles are defaulters; and then they increase the mischief which they were instituted to remedy.” – Alexander Hamilton and James Madison, Federalist No. 19 TIME OUT: ALMOST HEAVEN Atlantic: “In 2007, astrophysicists at West Virginia University stumbled upon something strange… [And] On Wednesday, [Jason Hessels] and other members of an international team of astronomers announced new results from their observations … [from] the Arecibo Observatory in Puerto Rico and the Green Bank Telescope in West Virginia. ….” Flag on the play? – Email us at SCOREBOARD Trump net job-approval rating: -22 points Change from one week ago: down 1.4 points [President Trump’s score is determined by subtracting his average job disapproval rating in the five most recent, methodologically sound public polls from his average approval rating, calculated in the same fashion.] ISSA CALLS QUITS, BOOSTS DEM CHANCES FOR HOUSE TAKEOVER The Hill: “Rep. Darrell Issa by about 7 points. ‘Throughout.” Poll shows Arpaio and McSally neck and neck in Arizona – KNXV: .” Hoyer urged a candidate to end his primary campaign – Colorado Politics: “Minority Whip Steny Hoyer of Maryland, the No. 2 Democrat in House leadership, encouraged Democrat Levi Tillemann to end his primary campaign in Colorado’s 6th Congressional District during a December meeting at a Denver hotel, saying that state and national congressional and party leaders had decided ‘very early on’ to consolidate their resources behind another Democrat, Jason Crow, to run against Republican incumbent Mike Coffman, according to detailed notes Tillemann wrote immediately after the meeting.” Silver: ‘Are Democrats’ Senate Chances In 2018 Overrated?’ – FiveThirtyEight: ‘toss.” Trump provides boost to Scott Senate run with drilling relief – Politico: ‘Winter White House’ escape at Mar-a-Lago in Palm Beach.” Last Republican running in Ohio gives fundraising boost to himself – WOSU: “The remaining Republican running for U.S. Senate wants to show he’s serious about his bid to win the GOP’s nomination, and ready to take on Democratic incumbent Sherrod Brown this fall. Cleveland-area businessman Mike Gibbons has heard the criticism that he’s not able to raise the money needed to take on Brown..” Vance is now ‘seriously considering’ running for Senate – Columbus Dispatch: “Columbus resident and Hillbilly Elegy author J.D. Vance is more than flirting with a run for the Republican nomination for U.S. Senate following the exit of Treasurer Josh Mandel from the race. ‘J.D. is seriously considering a run at this point,’ said Jai Chabria, a former top adviser to Gov. John Kasich who now is working with Vance, who has never sought or held elective office. … ‘There’s been an unbelievable amount of support both from Ohio Republican leaders and people with an interest in the Ohio Senate race from a national perspective,’ Chabria said Wednesday morning.” Judges toss North Carolina gerrymander – [Raleigh] News and Observer: “A.” SCOTUS listen to Ohio plan targeting inactive voters – ‘trying to protect their voter rolls…What we’re talking about is the best tools for that purpose.’” BUDGET DEAL GROWS WITH GOP PUSH FOR DEFENSE SPENDING WSJ: “The brewing budget deal on Capitol Hill could get much larger thanks to a late Republican push for a bigger boost in defense spending. As the spotlight has been on immigration negotiations, lawmakers and aides have been working behind the scenes to hammer out a two-year deal that would not only prevent the budget limits known as the sequester from kicking in, but potentially raise spending beyond that. While many expected lawmakers to eventually reach an agreement adding $200 billion over two years over sequester levels, that sum would grow if a GOP push to ramp up military spending even more is successful. Both Republicans and Democrats want to prevent spending limits established in 2011 from kicking in. Without congressional action, defense spending would be shaved by $54 billion and nondefense spending would see a $37 billion cut in fiscal 2018, which ends in September.” Womack gets initial nod for Budget chairman – The Hill: “Rep. Steve Womack (R-Ark.) has won support from the Republican Steering Committee to chair the House Budget Committee, Speaker Paul Ryan (R-Wis.) announced Tuesday evening. If approved by the full GOP conference, Womack will replace budget Chairwoman Diane Black (R), who is stepping down from the post to focus on her 2018 bid for Tennessee governor. The Steering Committee, a powerful panel of leadership allies led by Ryan, also recommended that Rep. Darin LaHood (R-Ill.) replace retiring Rep. Pat Tiberi (R-Ohio) on the Ways and Means Committee, the influential tax-writing panel announced Tuesday.” JUDGE CHECKS TRUMP ON DREAMERS Fox News: “President Trump blasted America’s ‘broken’ and ‘unfair’ court system Wednesday, just hours after a federal judge blocked the administration from turning back the Obama-era Deferred Action for Childhood Arrivals (DACA) program, which shielded more than 700,000 people from deportation. ‘It just shows everyone how broken and unfair our Court System is when the opposing side in a case (such as DACA) always runs to the 9th Circuit and almost always wins before being reversed by higher courts,’ Trump tweeted Wednesday morning. The president’s tweet was in reference to an injunction made by U.S. District Judge William Alsup from San Francisco, who ruled late Tuesday that the DACA program must stay intact until a final judgment is reached — meaning those already approved for DACA protections and work permits must be allowed to renew them before they expire. The court’s order doesn’t change the administration’s ‘position on the facts,’ the Justice Department said.” 7-Eleven stores are new target for immigration crackdown – WaPo: who may have unauthorized workers on their payroll. ‘Today.” Immigration hardline Dems gum up votes – WashEx: .” MUELLER ADDS MEMBER FOCUSED ON CYBER ISSUES TO TEAM WaPo: .” ‘Sneaky Dianne’ – Bloomberg: “President Donald Trump said Democratic Senator Dianne Feinstein’s release of testimony on the controversial dossier alleging connections between the president and Russia was ‘underhanded and possibly illegal,’ and he called on Republicans to ‘take control’ of congressional probes. ‘The fact that Sneaky Dianne Feinstein, who has on numerous occasions stated that collusion between Trump/Russia has not been found, would release testimony in such an underhanded and possibly illegal way, totally without authorization, is a disgrace. Must have tough Primary!’ Trump wrote Wednesday on Twitter. … The committee’s Russia probe has dissolved into bitter partisan sniping, with Republicans and Democrats tangling over the origins of the dossier, which was funded in part by Trump’s political opponents, and how the FBI and other agencies may have used it in their investigations.” Cohen files lawsuits against Fusion GPS, BuzzFeed – Politico: “Michael Cohen, longtime attorney for President Donald Trump, filed defamation lawsuits Tuesday evening against research firm Fusion GPS and BuzzFeed. The Fusion GPS lawsuit, which was filed at a federal court level, seeks $100 million in damages. A separate amount in damages for the BuzzFeed lawsuit, which was filed at the state court level, will be determined at trial, according to court documents obtained by POLITICO. ‘Enough is enough of the #fake #RussianDossier,’ Cohen tweeted Tuesday. ‘Just filed a defamation action against @BuzzFeedNews for publishing the lie filled document on @POTUS @realDonaldTrump and me!’” PLAY-BY-PLAY Morris Fiorina: ‘The Meaning of Trump's Election Has Been Exaggerated’ – RCP Manchin to back Trump’s Health secretary pick – WashEx Administration will destroy voter data collected by abortive fraud probe – Politico Rep. Jackie Speier brings “Me Too” movement to Hill, invites lawmakers to wear black to Trump's State of the Union – The Hill Republican Governor Baker rocks a 74 percent approval rating in deep-blue Massachusetts – WBUR WH aides have been notified to make decision: Leave now or stay through midterms – CBS News AUDIBLE: REBOUND “Discontinued.” – The judge presiding over what had been the divorce proceeding between former Clinton top aide Huma Abedin and disgraced former congressman Anthony Weiner, according to the NY Post, which reports that the couple has withdrawn their divorce petition. Share your color commentary: Email us at [email protected] and please make sure to include your name and hometown. ICY HOT, IRL Daily Hive: “A room in Canada’s very own ice hotel caught fire on Monday evening. Hôtel de Glace, located 35 minutes from Quebec City, is known for its beautiful rooms… made of ice. However, on Monday night, a fire was reported in one of the rooms occupied by two guests. Luckily, no one was seriously injured. ‘A small fire occurred at the Ice Hotel of Valcartier overnight on Monday, forcing the evacuation and temporary closure of the establishment,’ said the hotel in a press release sent to Daily Hive. ‘The fire began in a suite occupied by two customers. They were transported by ambulance and have been discharged by medical authorities.’ As a result of damage caused by the fire, the hotel will be closed on January 9 and. The post Can Trump triangulate? appeared first on Shareabler. This post first appeared on Shareabler, please read the originial post: here
https://www.blogarama.com/politics-blogs/1287588-shareabler-blog/23533754-can-trump-triangulate
CC-MAIN-2022-40
refinedweb
2,816
58.21
Why Does JavaFX Get to Cheat? Why Does JavaFX Get to Cheat? Join the DZone community and get the full member experience.Join For Free Atomist automates your software deliver experience. It's how modern teams deliver modern software. Why does JavaFX get to do things that would be frowned upon if done in plain old Java? The prime example is it's treatment of layout managers and panels. In JavaFX a layout manager is bound together with a JPanel to create one cohesive unit. The API is then tailored specifically around the needs of that combonation of layout manager and panel. Now I look at the code and say to myself, "Hey that's convenient", and it IS, but that's not that point. Someone at some point in the existence of Java decided that the layout logic should be separate from the storage of components in a container. I'm sure there are good reasons for this separation. Now JavaFX seems to be making the statement that the separation was not a good idea. In essence saying that an easy to use API trumps this particular design pattern and justifies tight coupling. I can't say I disagree in this circumstance. I just want to lever the playing field. If JavaFX gets to break the rules so can I. Panel Classes. Throwing caution to the wind I used every dirty trick I know to try and create some easy to use panels for plain old Java. Here is what I have so far. - BorderPanel - CardPanel - GridPanel - BoxPanel - GBFPanel (Grid Bag Factory Panel) BorderPanel, CardPanel and GridPanel The border and card panels just add some convenient methods methods and are not that exciting. I have methods like setNorth() for BorderPanel and next() and previous() for CardPanel. With GridPanel I decided to create a constructor that takes a two dimensional array of components. In this way you can specify the exact location of every component in the grid in a way that actually reflects the layout of the final product. GridPanel Example: Component[][] grid = new Component[][] { { new JButton("1"), new JButton("A") }, { GridPanel.spacer(), new JButton("B"), new JButton("Blue") }, { new JButton("3") } }; GridPanel gp = new GridPanel(grid, 5, 5); As an added bonus this constructor will automatically add empty space when a row is under sized. BoxPanel With BoxPanel I was able to get a litte more creative. I added an enum called DIRECTION which defines spacer(), glue() and build() methods. I also added a varags version of add so that multiple components can be added in the same line. Box Example 1: - This code shows the new addSpacer() method, and the varargs version of add. BoxPanel bp = new BoxPanel(X_AXIS); bp.addSpacer(5); bp.add(new JLabel("Section 1"), X_AXIS.spacer(5), new JButton("A"), X_AXIS.spacer(5), new JButton("B")); bp.addSpacer(10); bp.add(new JComboBox()); bp.addGlue(); bp.add(new JLabel("Section 2"), X_AXIS.spacer(5), new JButton("1"), X_AXIS.spacer(5), new JButton("2")); bp.addSpacer(5); Box Example 2: - This code shows the new build() method of Direction. It also nests BoxPanels to make a grid. BoxPanel rowsPanel = new BoxPanel(Y_AXIS); rowsPanel.add(Y_AXIS.spacer(10)); for(int rows = 0; rows < 10; rows++ ){ if(rows % 2 == 0){ rowsPanel.add(X_AXIS.build(X_AXIS.spacer(5), new JLabel("Row :" + rows), X_AXIS.spacer(5) , new JComboBox(), X_AXIS.spacer(5))); }else{ BoxPanel oddColumns = new BoxPanel(X_AXIS); oddColumns.add(X_AXIS.spacer(5), new JLabel("Row :" + rows), X_AXIS.spacer(5) ,new JButton("Button 1"), X_AXIS.glue()); oddColumns.addSpacer(10); oddColumns.add(new JTextField(), X_AXIS.spacer(5) ,new JButton("Button 2"), X_AXIS.glue(), X_AXIS.spacer(5)); rowsPanel.add(oddColumns); } rowsPanel.add(Y_AXIS.spacer(10)); } GBFPanel - GridBagFactoryPanel GridBagLayout is by far one of the most complicated layout managers in the JDK. I decided that most of the problems come from the GridBagConstraints object. It just seems that the add(Component comp, Object constraint) method of Container is just not flexible enough to handle the kinds of constraints GridBag really needs. I wanted to deal with only the individual constraints that are relevant to the component I'm dealing with. Here is what I came up with. public class DemoPanel extends GBFPanel { private JButton button1 = new JButton("Button 1"); private JButton button2 = new JButton("Button 2"); private JButton button3 = new JButton("Button 3"); private JButton button4 = new JButton("Button 4"); private JButton button5 = new JButton("Button 5"); private JButton button6 = new JButton("Button 6"); private JButton button7 = new JButton("Button 7"); private JButton button8 = new JButton("Button 8"); private JButton button9 = new JButton("Button 9"); private JButton button10 = new JButton("Button 10"); public DemoPanel(){ addDefaultConstraints(FILL.BOTH); add(button1, WEIGHT.x(1)); add(button2, WEIGHT.x(1)); add(button3, WEIGHT.x(1)); add(button4, GRID.WIDTH.REMAINDER); add(button5, GRID.WIDTH.REMAINDER); add(button6, GRID.WIDTH.RELATIVE); add(button7, GRID.WIDTH.REMAINDER); add(button8, GRID.HEIGHT.REMAINDER, WEIGHT.y(1)); add(button9, GRID.WIDTH.REMAINDER); add(button10, GRID.WIDTH.REMAINDER); } } Recognize this code? No? It's the GridBagLayout javadoc example. See? And in just 10 lines of layout code, with no side effects to keep track of. Ok let me explain what's going on. I started by creating an interface defining the job of a single constraint. public interface GridBagFactoryConstraint {Then using a combination of enums and static factory methods I implemented a class representing every constraint from GridBagConstraints. Here is the implementation of the FILL enum as an example. public void apply(GridBagConstraints gbc); } public enum FILL implements GridBagFactoryConstraint {I know what you are thinking and yes, I really did all of them. The next step was to create an add method in GBFPanel that would accept a variable number of these constraint objects. HORIZONTAL(GridBagConstraints.HORIZONTAL), VERTICAL(GridBagConstraints.VERTICAL), BOTH(GridBagConstraints.BOTH), NONE(GridBagConstraints.NONE); private int gbcValue; private FILL(int val) { gbcValue = val; } public void apply(GridBagConstraints gbc) { gbc.fill = gbcValue; } } public void add(Component comp, GridBagFactoryConstraint... constraints) {This is the add method used in my example code above. Now some of you may have noticed the call to addDefaultConstraints(). This method lets you specify constraints that you wish to be applied to every component added to the panel. Its handy for factoring out common constraints. The constraints passed in the add method will always override any default values. Some time ago there was a "layout manager shoot-out" over on java.net. Here is how a GBFPanel entry might have looked. .... } public class AddressShootout extends GBFPanel { private JTextField lastNameTF = new JTextField(15); private JTextField firstNameTF = new JTextField(15); private JTextField phoneTF = new JTextField(15); private JTextField addressTF = new JTextField(20); private JTextField emailTF = new JTextField(15); private JTextField stateTF = new JTextField(2); private JTextField cityTF = new JTextField(15); public AddressShootout() { addDefaultConstraints(ANCHOR.E, INSETS.top(5), INSETS.left(5), INSETS.right(5)); add(new JLabel("Last Name")); add(lastNameTF, FILL.HORIZONTAL, WEIGHT.x(1)); add(new JLabel("First Name")); add(firstNameTF, FILL.HORIZONTAL, WEIGHT.x(1), GRID.WIDTH.REMAINDER); add(new JLabel("Phone")); add(phoneTF, FILL.HORIZONTAL, WEIGHT.x(1)); add(new JLabel("Email")); add(emailTF, FILL.HORIZONTAL, WEIGHT.x(1), GRID.WIDTH.REMAINDER); add(new JLabel("Address")); add(addressTF, FILL.HORIZONTAL, WEIGHT.x(1), GRID.WIDTH.REMAINDER); add(new JLabel("City"), INSETS.bottom(5)); add(cityTF, FILL.HORIZONTAL, WEIGHT.x(1), INSETS.bottom(5)); add(new JLabel("State"), INSETS.bottom(5)); add(stateTF, FILL.HORIZONTAL, WEIGHT.x(1), INSETS.bottom(5)); } } I reimplemented both the javadoc and Java tutorial examples for GridBagLayout and discovered two things. 1. This gets the job done in a lot less code. 2. The examples are almost totally useless. I then went on to reimplement a number of other "example layouts" from some of the 3rd party layout managers. I again discovered two things. 1. This gets the job done in close to the same amount of code. 2. Their examples are much more focused on actual development situations. Conclusions I think GridBag benefits the most from this evil tight coupling. The other layouts became easier to read but gained little real power. It's safe to say that the more complicated a layout manger the more it will benefit from tight coupling. Other possibilities for this treatment include SpringPanel, OverlayPanel, FlowPanel, GroupPanel, TablePanel, MigPanel, FormPanel and PagePanel. So this is evil, right? JavaFX gets to do it. Why shouldn't we? Is this an architectural compromise that's worth it? Get the open source Atomist Software Delivery Machine and start automating your delivery right there on your own laptop, today! Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/why-does-javafx-get-cheet
CC-MAIN-2019-04
refinedweb
1,440
51.75
In this tutorial I’ll continue making my Android SQLite Address Book app. I hope this code is useful enough to use as a cheat sheet when you need to use an SQLite database in Android. I cover a ton of topics: How to Create a SQLite Database and Tables, How to Issue Queries, How to Insert Data, How to Update Data, How to Delete Data, SQLiteOpenHelper, execSQL, SQLiteDatabase, ContentValues, rawQuery, Cursor, and more. The code below will help. If you like tutorials like this, it helps to tell Google+ with a click here Code From the Video DBTools.java // DBTools.java package com.newthinktank.contactsapp; import java.util.ArrayList; import java.util.HashMap; import android.content.ContentValues; import android.content.Context; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteOpenHelper; // SQLiteOpenHelper helps you open or create a database public class DBTools extends SQLiteOpenHelper { // Context : provides access to application-specific resources and classes public DBTools(Context applicationcontext) { // Call use the database or to create it super(applicationcontext, "contactbook.db", null, 1); } // onCreate is called the first time the database is created public void onCreate(SQLiteDatabase database) { // How to create a table in SQLite // Make sure you don't put a ; at the end of the query String query = "CREATE TABLE contacts ( contactId INTEGER PRIMARY KEY, firstName TEXT, " + "lastName TEXT, phoneNumber TEXT, emailAddress TEXT, homeAddress TEXT)"; // Executes the query provided as long as the query isn't a select // or if the query doesn't return any data database.execSQL(query); } // onUpgrade is used to drop tables, add tables, or do anything // else it needs to upgrade // This is droping the table to delete the data and then calling // onCreate to make an empty table public void onUpgrade(SQLiteDatabase database, int version_old, int current_version) { String query = "DROP TABLE IF EXISTS contacts"; // Executes the query provided as long as the query isn't a select // or if the query doesn't return any data database.execSQL(query); onCreate(database); } public void insertContact(HashMap<String, String> queryValues) { // Open a database for reading and writing SQLiteDatabase database = this.getWritableDatabase(); // Stores key value pairs being the column name and the data // ContentValues data type is needed because the database // requires its data type to be passed")); // Inserts the data in the form of ContentValues into the // table name provided database.insert("contacts", null, values); // Release the reference to the SQLiteDatabase object database.close(); } public int updateContact(HashMap<String, String> queryValues) { // Open a database for reading and writing SQLiteDatabase database = this.getWritableDatabase(); // Stores key value pairs being the column name and the data")); // update(TableName, ContentValueForTable, WhereClause, ArgumentForWhereClause) return database.update("contacts", values, "contactId" + " = ?", new String[] { queryValues.get("contactId") }); } // Used to delete a contact with the matching contactId public void deleteContact(String id) { // Open a database for reading and writing SQLiteDatabase database = this.getWritableDatabase(); String deleteQuery = "DELETE FROM contacts where contactId='"+ id +"'"; // Executes the query provided as long as the query isn't a select // or if the query doesn't return any data database.execSQL(deleteQuery); } public ArrayList<HashMap<String, String>> getAllContacts() { // ArrayList that contains every row in the database // and each row key / value stored in a HashMap ArrayList<HashMap<String, String>> contactArrayList; contactArrayList = new ArrayList<HashMap<String, String>>(); String selectQuery = "SELECT * FROM contacts"; // Open a database for reading and writing SQLiteDatabase database = this.getWritableDatabase(); // Cursor provides read and write access for the // data returned from a database query // rawQuery executes the query and returns the result as a Cursor Cursor cursor = database.rawQuery(selectQuery, null); // Move to the first row if (cursor.moveToFirst()) { do { HashMap<String, String> contactMap = new HashMap<String, String>(); // Store the key / value pairs in a HashMap // Access the Cursor data by index that is in the same order // as used when creating the table contactMap.put("contactId", cursor.getString(0)); contactMap.put("firstName", cursor.getString(1)); contactMap.put("lastName", cursor.getString(2)); contactMap.put("phoneNumber", cursor.getString(3)); contactMap.put("emailAddress", cursor.getString(4)); contactMap.put("homeAddress", cursor.getString(5)); contactArrayList.add(contactMap); } while (cursor.moveToNext()); // Move Cursor to the next row } // return contact list return contactArrayList; } public HashMap<String, String> getContactInfo(String id) { HashMap<String, String> contactMap = new HashMap<String, String>(); // Open a database for reading only SQLiteDatabase database = this.getReadableDatabase(); String selectQuery = "SELECT * FROM contacts where contactId='"+id+"'"; // rawQuery executes the query and returns the result as a Cursor Cursor cursor = database.rawQuery(selectQuery, null); if (cursor.moveToFirst()) { do { contactMap.put("firstName", cursor.getString(1)); contactMap.put("lastName", cursor.getString(2)); contactMap.put("phoneNumber", cursor.getString(3)); contactMap.put("emailAddress", cursor.getString(4)); contactMap.put("homeAddress", cursor.getString(5)); } while (cursor.moveToNext()); } return contactMap; } } Hello Derek. Thanks for the beautiful tutorials. Just want to pay your attention to a small issue about auto_complete in eclipse. When you want to type for example android:textColor=""you can start from teand press ctrl+space and it will give a list of usefull suggestions. Another example: android:text="@string/add_contact – Type tethen ctrl+space for android:text=”" – Type @sthen ctrl+space for @string/ – Type adthen ctrl+space for add_contact This is for windows. I don’t know what is the equivalent in Mac. Thank you. There are many short-cuts. Just few more: For android:layout_weight="1" Just type wthen Ctrl+Space and it will give you a list of suggestions,so that you can easily pick it out. For android: – Type pathen Ctrl+Space for android:padding="" – Type @dthen Ctrl+Space for @dimen/ – Type pthen Ctrl+Space for pendding_5dp Hope that help you instead of jumping from place to place to copy things, or jump to Graphical layout just for the purpose of searching for a string value. Thank you for the great tips 🙂 Thank you for the great tips 🙂 I very much appreciate them! Wow, the range of things you are covering with this ContactsApp is great, learning so much in just this 5 part series alone! I’m a bit of a slow learner but the way you present this is brilliant. Derek, thanks x 100 for the wealth of information on your site, and for sharing your expertise! Thank you 🙂 Yes I’m doing my best to cover as many topics as possible in each app I make. I’m glad you are finding them useful Thank you for this nice tutorial.:) You’re very welcome 🙂 One more question: why do we need to close the database after an insert operation but not after a delete? -Jay You should actually close it in both situations, but since this was a very simple app I didn’t run into any issues. Hi Derek, I managed to solve the problems I had with the interface and the positioning of the buttons. I just uninstalled and jdk for Eclipes reinstalled following the tutorial Install Android Development Tools. However now I have a problem, for the tutorial 12 I am not able to make a file that is named DBTools.java. Will I need to install additional plug-in? I have all of the code in this one package. If you import it into Eclipse it will be easy to see how everything is laid out. I hope that helps 🙂 Great tutorials, I can give you another shortcut tip. When you have to import any library just press: CTRL + SHIFT + O(the letter), and if you want to format and indent your code just press: CTRL + SHIFT + F. Have a great day! Thank you for the tips. I know about the shortcuts, but I decided not to use them in tutorials because I can never trust that anyone else knows them. I hope that makes sense 🙂 Hi Derek, Have you worked on using a pre-populated database e.g. android.db is your database name. You created all the tables and inserted data using SQLite Database browser or from command line. You saved it to assets/android.db directory. From there, your program can copy it over to android database location /data/data//database/. I tried the solution from the following article, however, I was not successful. It always fails because a database table already exists. Do you know how to do this? Please share your solution. Thanks a lot! Hi, You basically just save your database to the assets folder in your Android project. Then just use it like it was created in your app. How do you call a database you’ve already created? And where do you put it in your app? Thank you in advance. Here you go Hi Derek, Thanks a lot for these tutorials, they are really much more helpful than any book on the topic that i have read. I tend to get bored very fast with java books, they simply contain to much nonsense between the lines. Actually, the only good programming book that i have read were about Phyton (Python Programming for the Absolute Beginner). In each chapter you creat a program from start to finish, with no unanswered questions after the chapter. Similar to what you are doing with your tutorials! Ive been looking for such a java book for years now, and 6 java books later i found your tutorials by chance. The only regret is that i were trying so long with the books. Keep up the good work! Hi Oskar, Thank you for the very nice compliments 🙂 I’m making a new Android tutorial as well very soon. What’s a hashmap how it works. Here is a tutorial on Hash tables which are similar. Well I see I didn’t got it completely but why did you created DBTools.java in other project than the Address Book Application? Can I create it in the later? Tip : Do add the next video page link on your website. For example – there should be a link on this page pointing to Part 13 page. I always try to separate out any classes that I can to keep everything modular. I have every Android video on this page Android video tutorial
http://www.newthinktank.com/2013/06/android-development-tutorial-12/
CC-MAIN-2016-50
refinedweb
1,666
55.64
Hello frens, I am new to ASP.Net and I am trying to create an application where I want to send my form contents to my email ID. I used the 2 approaches. 1) HTML form: <form enctype="multipart/form-data" runat=server <p>Enter your name below:<br /> <input type="text" name="names" size="100" /></p> <p><input type="submit" value="Submit Feedback" /><br /> <input type="reset" value="Reset Feedback" /></p> </form> While the above code does send me a email with the name entered in the text box "text1", it open MS Outlook when I click on "Submit" button and there I have to again click the Send Button in the Outlook Window. Is there anyway that I can send an email directly just by clicking the Submit button once rather than having to send through Outlook? Something like as soon as I click on the Submit button on my form, it send a email and gives a confirmation that the message was sent. 2) ASP.Net namespace System.Web.Mail: using System.Web.Mail; public partial class email : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { SmtpMail.Send("abcd@efg.com", "ijkl@mnop.com", "Hiiiii", "body text here"); Response.Write("Success..!"); } } This approach does not work at all. It gives me the following exception. System.Runtime.InteropServices.COMException: The "SendUsing" configuration value is invalid. Do I need to give some SMTP server destination? Please help...! Thanks Sharon.
https://www.daniweb.com/programming/web-development/threads/183221/email-application
CC-MAIN-2017-09
refinedweb
243
65.83
Representation of an email. More... #include "config.h" #include <stdbool.h> #include "mutt/mutt.h" #include "email.h" #include "body.h" #include "envelope.h" #include "tags.h" Go to the source code of this file. Representation of an email. email.c. Definition at line 41 of file email.c. Definition at line 68 of file email.c. Strictly compare message emails. Definition at line 84 of file email.c. compute the size of an email Definition at line 110 of file email.c. Drop a private list of Emails. The Emails are not freed. Definition at line 123 of file email.c. Definition at line 144 of file email.c.
https://neomutt.org/code/email_8c.html
CC-MAIN-2020-05
refinedweb
110
83.02
packaging - china manufacturer guide - we help buyers source chinese suppliers, products or services -- Web ChinaProductguide.com China - packaging Bolex Adhesive Tape Co.Ltd [China] - Packaging City: Shenzhen Second adhesive tape factory in china mainland. Was founded in 1984, devoted to the promotion of various packaging materials in local market. In 1992, adhesive tape factory was built, began to produce BOPP adhesive tape and sell them widely in china mainland to meet China's joining WTO and quick-increasing overseas demands, Bolex (Shenzhen) adhesive products co., ltd. Was set up in 1995. We specialize in manufacturing and exporting various adhesive products including BOPP packing tape, stationery tape, masking tape, double sided tape etc. With advanced fully automatical coating lines and slitting equipment, we have now become one of the leading adhesive tape manufacturers in china. We export directly 400 more containers each year to over 50 countries around the world, mainly to Europe, Middle East, North Africa, Oceania and South America.We are a well-organized and responsible exporter and manufacturer in this line. Our experienced professionals are always committed to providing efficient customer service and excellent products at competitive prices. Your early visit to us is warmly welcomed. Taizhou Taihua Plastic Products Co., Ltd. [China] - Packaging City: Taizhou Our factory is a bigger factory produced all kinds of fibc in china. We can produce 300 thousand pieces of fibc in a month and our products are saled to the entire world. The most import, our price is cheaper than others. If you are interested in our products, you may e-mail me. Shanghai Yuhong Petrochemical Engineering Co. Ltd. [China] - Packaging City: Shanghai We can supply various kinds of composite craft paper J&D Electrical Co., Ltd. [China] - Packaging City: Shenzhen We dedicate to provide better living style to the consumer and tailor-made business solution to customer. With our continuous effort in product innovation and customer service, now we have established extensive connection with customers worldwide. Our products spectrum covers equipments for industrial use (bottle making, water treatment-total solution for pure water, carbonated water and juice, bottling and packing) and appliance for home use (water cooler, mini cooler and water purifier). Kan Specialities Material Co., Ltd [China] - Packaging City: Hangzhou Our group is one of biggest paper manufactory in China, we could supply you many good quality papers. There fore, with the strict quality management system-ISO9002, we can provide the first class products in the world for customers.Our company's main products are: 1. Teabag filter paper2. Electrolytic capacitor paper 3. Special paper for mask4. Filter paper for desiccating agent bag5. Cotton paper for double-side adhesive Tape6. Vacuum cleaner paper for vacuum cleaner bag7. Carbon paper 8. Fluorite9. Grey back white back paper for outside packing. Prosoft Scitechnology Co., Ltd [China] - Packaging City: Xiamen We offer a wide range of CD and DVD packaging products. We can supply standard CD jewel cases, slim-line jewel cases, plastic wallets and paper wallets from quantities of 100 to thousands. Most items are in stock and can be dispatched within two days. Also available are CD ejector cases, c-shells, spiders, empty CD spindles, mini CD cases, business card CD jewel cases and multi-CD vacpacks. Shenzhen Yost Industrial Co., Ltd [China] - Packaging City: Shenzhen Our company as a manufacture mainly producing all kinds of packing tape, the details as below: Gummed kraft paper, Waterless kraft tape, BOPP packing tape, Stationery tape, Printed tape, Double coated tape, Wrinkle coated tape, Nylon tape for machine use and hand use, Shrink packing film, PE protect film, Polypropylene plastic rope, paper roller and packing equipment. Changhzou Heyi Plastics Co, .Ltd [China] - Packaging City: Changzhou Sino-Japanese cooperative venture exclusively manufacturing plastic seedling pot (sprout pot, flower pot, plant pot). Our technology is advanced but our products' price are competitive in the world. Why don't you have a look at the price and contact us! (Picture attached) It is one of the earliest companies in China to produce the plastic seedling pots to the domestic market. Annual production is about 1, 000 tons. The products are mostly sold to markets in USA, Japan and Europe. The companys technology and equipment is capable of producing 60/75/80/90/105/120/135 and other size specs of seedling pots. Qingdao Full-Trust International Co., Ltd [China] - Packaging City: Qingdao We specialize in the following bags: 1. Bags for Tesun Mechanical [China] - Packaging City: Guangzhou Manufacturer of 2", 3/4" capseal, capseal tools for industrial steel drums, with Hi quality & Woooooo... price. Hebei Fuhua Plastic Co., Ltd. [China] - Packaging City: Shijiazhuang We are the largest professional manufacturer of flexible intermediate bulk container bags in North China, a Joint Venture established with Malaysia with solid technical force, advanced processing equipments and strict quality control system. Our main products are FIBC Bags/Jumbo Bags/Bulk Bags, Industrial Cement Slings and PP/PE Woven Bags, etc, in annul output of 2, 000, 000pcs bulk bags and 30 million PP woven bags. Our Bulk Bags are widely used for economical and efficient way to pack, store and handle products, as package for the bulk goods in granule or powder, such as chemical raw materials, ore, fertilizer, etc; and building materials such as cement, sand, cinders, waste; foodstuffs such as grains, flour, sugar, peanut, seed and so on. Weight capacity from 25kgs to 2500kgs. With 16 years' experience in this line, we can design and produce Jumbo Bags according to customers' detailed requests. We are especially good at U-panel Type, Baffled Bags, Over-Lock and Chain Stitch Sewing, as well as Circular Type bags, Sift-Proof seams. Our FIBC bags mainly export to USA, Canada, Italy, Japan, Malaysia, the Middle East Area, etc. Our bags have been well recognized for the reliable quality and reasonable prices home and abroad. As the direct manufacturer and exporter, we can assure the bottom quotation and the best service, in any time we persist in the goal Never Sacrifice Quality for Price. We are ready to be your sincere and reliable partner! Zhejiang Shengtong Plastics Co., Ltd. [China] - Packaging City: Shaoxing We are BOPP film and bopet film manufacturer in china, at the moment we have two production line of BOPP film with annual capacity of 50, 000 m.tons and one production line of bopet with annual capacity of 16, 000 m.tons. We can offer plain BOPP film for printing and laminating, for tape grade, and so on. We can offer bopet for metallizing, and so on. Ready Packaging Co., Ltd. [China] - Packaging City: Nanjing We are professional manufacturer of paper products in china which include paper bag, jewelry box, gift box, watch box, cosmetic box, carton box, paperboard box and others. With nearly 10 years experience, till now we have developed products more than thousands items, covering all kinds of packaging box. Our products sell well on the Market of Hong Kong, USA, Europe, Japan, Australia, Malaysia, and South Africa. All depend on our advanced machinery, strict quality control system, best service and lowest price. Ruian City Hoping Packing Machinery [China] - Packaging City: Ruian We are one of the leading manufactures of pharmaceutical products in china. We are therefore looking for an agent with an efficient sales organization in that area to help us to market our products. We are sure that you will be quite satisfied with our services and the excellent qualities of our goods. Please let me know if you are interested in import machinery from us or want to know more information about us. Qingdao Ruifeng Plastics Factory [China] - Packaging City: Qingdao Thank you very much for your time. We are the professional manufacturer of multi-layer laminated & colors printed plastic bags and pouches in Qingdao, China. Our main products are as). We can also produce mesh bags for packing onion, potato, garlic, etc.Our shopping bags include pp woven shopping bags and PP non woven bags. They are both on a good market in the Europe and America. Ningbo Asia Plastics Technology Co., Ltd [China] - Packaging City: Ningbo We are a Taiwan-invested enterprises locating in Ningbo of China, with a total investment value of USD7000. As one of the largest manufactories and exporters of BOPP film in China, we have two BOPP film production lines imported from Bruckner company of Germany with annual production capacity of 50000MTS. Our BOPP film are widely used for printing, packing, lamination, tape, and so on. The thickness of our BOPP film is from 12micron to 60micron.If you are interested in establishing business relations with us in this line, please let us know your specific requirement, we assure you of our best attention to any inquires from you, and anticipate your prompt response in this respect. Jinguan Metal Products Co., Ltd [China] - Packaging City: Nanhai, Guangdong Province, China We are very glad to introduce our company for you. We are a private-owned enterprise specializing in aluminum cases. Our products range from cosmetic case, brief case, tool case, polit case, barbecue case, portable computer case, CD case, photographic case, and gun case etc. With a factory area of 12, 000 square meters, advanced facilities and skilled staff, we produce about 250, 000 cases each month. Our products are selling well in the USA, Europe, Japan, South Africa, Russia, etc. OEM is also welcome. We are looking forward to having a good cooperation with you. Elegant Packaging International Supplier Co., Ltd [China] - Packaging City: Shanghai We are the import and export company in Shanghai, China dealing with all kinds of plastic bags for more than 16 years. Please visit our website for plastic bags. Dalian Shengda Plastice Packaging Products Plant [China] - Packaging City: Dalian The largest manufacturer of multi-layer co-extrusion film in China. In order to provide high quality products and economically meet customers requirements, SD have introduced 1 seven-layer co-ex machine from Brampton Canada. Products? Our multi-layer co-ex films are custom tailored to your product's barrier and protection requirements. We combine a variety of materials such as Nylon, EVOH, PE and so on, to form structures, which create oxygen and water vapor barriers to maintain the integrity of your product. Shandong Oriental International Trading Corp., Ltd [China] - Packaging City: Qingdao We are a large trading corp. located in Qingdao, China. We deal with the packaging products. We can supply you various kinds of paper bags, packaging tissue paper, plastic bags and so on. Any buyers from the world please contact me for more information. Jiangyin Yongjia Plastic Products Co., Ltd [China] - Packaging City: Jiangyin We are the manufactory of plastic bags in China. We avail ourselves of this opportunity to approach you for the establishment of trade relations with you.We are sino-foreign joint-venture with factory area 6, 600 sqm, factory building area 3, 500 sqm, total investment 2 million dollars and it has the first class of domestic production capacity and advanced equipment.All machinery equipments are imported from Taiwan including 48 sets of 24 items, the main technical process devices are 7 sets of amphibious high speed film blowers, 2 sets of mini type high speed film blowers, 5 sets of automatic hot cutters and cold cutters, 7 sets of point cutters, 2 sets of edge sealing machines and 5 sets of link automatic press.The main products of our corp. are HDPE t-shirt bag, garbage bag, hand tear bag, purchase bag, varies of LDPE packing bag, PP garments packing bag, PP food packing bag, etc. we also accept the processing business. Our Corp has 3, 000 tons annual production capacity, 80% of which are exported.As you know, it is our policy to trade with the people of all countries on the basis of equality and mutual benefit, we believe we shall be able, by joint efforts, to promote friendship as well as business.We hope they will reach you in due course and will help you in making your selection. Please advise what articles you are interested in at present. I will quote the lowest price for you. Guangdong Provincial Guanghong Foods Industrial Co., Ltd. [China] - Packaging City: Guangzhou Our company can supply Sell diversiform cargo lashing.ratchet lashings are used for tying down loads while transporting, shifting or moving them. They have replaced traditional jute ropes, chains and wires used for transportation and for a variety of other applications. One side of webbing is sewn to the ratchet hand, the other is free to passing around the load or through narrowing openings and inserting into the ratchet spool. Fabricated for any practical. Web length plus 6 inches additional length for end hold. Hongda Hologram Co., Ltd. [China] - Packaging City: Wuxi The company specializes in laser holographic technology, products research, equipment making, packaging printing and the main products are wide width laser glazing film, transfer aluminum foil paper, anti-fake base, laser label, holographic equipment making, technology transfer and packaging presswork. The company printed anti-fake logo for State Tax Bureau, Public Security Bureau, Supervision Bureau and so on. It is the member of China Anti-fake Association and the designated unit for printing cigarette and pharmaceutical labels in the province. Wuxi City Hengtong Instrument Packaging Factory [China] - Packaging City: Wuxi We supply various packing boxes for aluminum alloy instruments and EVA shockproof inner linings. Shanghai Henglei Hologram Co,. Ltd [China] - Packaging City: Shanghai Company produces hologram stickers, hologram machines, hologram masters, hologram cards, including all kinds of temper evident holograpghic stickers, holographic film, transparent holographic cards, 2D/3D and dot-matrix masters, photopolymer, hologram embosser (1200mm soft hologram embosser and hard hologram embosser), electronic forming machine, hologram mastering lab, laminator and die-cutting punch, etc. Hologram Mall [China] - Packaging City: Shanghai Supply holographic stickers, Lenticular, hologram machines, hologram masters. Wuxi City Hengtong Instrument Packaging Factory [China] - Packaging City: Wuxi We supply various aluminum alloy instruments cases and EVA shockproof inner linings. Wuxi City Hengtong Instrument Packaging Factory [China] - Packaging City: Wuxi We supply various briefcases, CD cases, tool cases, cosmetic cases, computer cases, gun cases, camera cases, Jewellery cases.), Shopping bags and handbags.These are commonly used for packing feed, fertilizer, grain, rice, flour, salt, sugar, cement, mineral and other lumpy and fine materials.Welcome to visit our website and we hope to establish friendship with the friends from all over the world! Tian Jin Mei-I Plastics Ind., Ltd [China] - Packaging City: Tian Jin, China Professional manufacturer of high quality packing materials in China. We are one of the leading packing product manufacturer and exporter. Our products include PP strapping band, PE Film, BOPP film, PP split yarn (filler of cable), PP soft tape, PP 3-ply soft rope, PP/PE twisted twine, PP tape and so on? They are mainly exported to Japan, Europe, North and South America, Australia, South-east Asia, Middle-east, South-east Africa and India, etc. Free counter
http://www.chinaproductguide.com/packaging.html
crawl-002
refinedweb
2,482
54.93
. An example of this would be one of the images shown before: For this temporary example, I will use the following image: You are encouraged to use your own image. As usual, our starting code can be something like: import numpy as np import cv2 img = cv2.imread('watch.jpg',cv2.IMREAD_COLOR) Next, we can start drawing, like: cv2.line(img,(0,0),(150,150),(255,255,255),15) cv2.imshow('image',img) cv2.waitKey(0) cv2.destroyAllWindows() The cv2.line() takes the following parameters: where, start coordinates, end coordinates, color (bgr), line thickness. The result here: Alright, cool, let's get absurd with some more shapes. Next up, a rectangle: cv2.rectangle(img,(15,25),(200,150),(0,0,255),15) The parameters here are the image, the top left coordinate, bottom right coordinate, color, and line thickness. How about a circle? cv2.circle(img,(100,63), 55, (0,255,0), -1) The parameters here are the image/frame, the center of the circle, the radius, color, and then thickness. Notice we have a -1 for thickness. This means the object will actually be filled in, so we will get a filled in circle. Lines, rectangles, and circles are cool and all, but what if we want a pentagon, or octagon, or octdecagon?! No problem! pts = np.array([[10,5],[20,30],[70,20],[50,10]], np.int32) # OpenCV documentation had this code, which reshapes the array to a 1 x 2. I did not # find this necessary, but you may: #pts = pts.reshape((-1,1,2)) cv2.polylines(img, [pts], True, (0,255,255), 3) First, we name pts, short for points, as a numpy array of coordinates. Then, we use cv2.polylines to draw the lines. The parameters are as follows: where is the object being drawn to, the coordinates, should we "connect" the final and starting dot, the color, and again the thickness. The final thing you may want to do is write on the image. This can be done like so: font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV Tuts!',(0,130), font, 1, (200,255,155), 2, cv2.LINE_AA) Full code up to this point would be something like: import numpy as np import cv2 img = cv2.imread('watch.jpg',cv2.IMREAD_COLOR) cv2.line(img,(0,0),(200,300),(255,255,255),50) cv2.rectangle(img,(500,250),(1000,500),(0,0,255),15) cv2.circle(img,(447,63), 63, (0,255,0), -1) pts = np.array([[100,50],[200,300],[700,200],[500,100]], np.int32) pts = pts.reshape((-1,1,2)) cv2.polylines(img, [pts], True, (0,255,255), 3) font = cv2.FONT_HERSHEY_SIMPLEX cv2.putText(img,'OpenCV Tuts!',(10,500), font, 6, (200,255,155), 13, cv2.LINE_AA) cv2.imshow('image',img) cv2.waitKey(0) cv2.destroyAllWindows() The result: In the next tutorial, we're going to cover basic image operations that we can perform.
https://pythonprogramming.net/drawing-writing-python-opencv-tutorial/
CC-MAIN-2021-39
refinedweb
482
67.25
gosuMembers Posts11 Joined Last visited gosu's Achievements Newbie (1/14) 6 Reputation 1 Community Answers Auto-scroll for GSAP draggable gosu replied to prashantpalikhe's topic in GSAPThat's really nice feature I was going to do it myself.. perfect timing - Okay it turns out the extern file provided is almost doable. I needed to add some support for Draggable, may help someone: var TweenMax = {}; var Draggable = {}; Draggable.create = function () {}; Draggable.prototype.hitTest = function () {}; Draggable.prototype.applyBounds = function () {}; Draggable.prototype.onDragStart = function () {}; Draggable.prototype.onComplete = function () {}; Draggable.prototype.onStart = function () {}; Draggable.prototype.onDrag = function () {}; Draggable.prototype.onPress = function () {}; Draggable.prototype.onRelease = function () {}; Draggable.prototype.onDragEnd = function () {}; TweenMax.prototype.lagSmoothing = function () {}; - Are there any new GSAP extern files, I guess this is quite outdated? - This issue may not be related to greensock directly (more of a jQuery issue). - Try this: var $elements = $('div'); $elements.on("mouseover", function() { TweenMax.to($elements.not($(this)), .3, {opacity:0}); }).on("mouseout", function() { TweenMax.to($elements, .3, {opacity:1}); }); Draggable hitTest ordered by overlapping area gosu posted a topic in GSAPHey, I am using Draggable with hitTest() to determine possible drop spots. I needed more advanced solution to get the drop spots ordered by percent covered (if draggable object is dropped on 2 or more droppable spots - I needed the one which has the most overlap area covered). Otherwise you end up with let's say 1 draggable element being dropped on 2 possible droppable elements (both having the minimum threshold), it won't matter if you cover one of the elements waaaay more than the other. For this I modified Draggable.js if (isRatio) { threshold *= 0.01; area = overlap.width * overlap.height; // edit here return [(area >= r1.width * r1.height * threshold || area >= r2.width * r2.height * threshold), area]; } return (overlap.width > threshold && overlap.height > threshold); and onDragEnd: lastOverlapScore = 0; var highestScoreObject = null; $.each(droppables, function (i, obj) { var overlap = draggable.hitTest(obj, '25%'); // some threshold if (Array.isArray(overlap) && overlap[0] === true && lastOverlapScore < overlap[1]) { lastOverlapScore = overlap[1]; var highestScoreObject = obj; } }); // highestScoreObject is the element with highest coverage area This may help someone looking for similar result or may be in use if you decide to add way to get the score. It would be less hacky solution of I could get the area from Draggable without having to modify it, but though I'm happy it works. How to refresh position of element when drag? gosu replied to vyquanghoa's topic in GSAPI think he's trying to say that when you try to drag the green square by holding it at the very right bottom edge, and you go outside red bordered rectangle (#placeholder) and the square gets resized, the position of the cursor stays very far from the center of the dragged square. Staggerto for Owl Carousel .Owl-Item class not working? gosu replied to Demorus's topic in GSAPI don't see any greensock api used in your example? I can't seem to get a div moving. gosu replied to rotaercz's topic in GSAPI don't think your selector is good. Try with var mydiv = $('img'[data-name=test]'); Draggable groups gosu replied to gosu's topic in GSAPThanks! I didn't realize this simple solution could work so good. Same solution with jQuery UI shows alot of lag when dragging -');
https://staging.greensock.com/profile/33402-gosu/
CC-MAIN-2022-40
refinedweb
553
60.11
The print() will not give feedback so i can’t see if some thing went wrong While the code resolve without a error. import random card_one = random.randint(1, 9) card_two = random.randint(1, 9) money_one = 100 money_two = 100 def high_card(player_one, player_two, bet_one, bet_two): if card_one <= card_two: result = "you lose" total_bet = (money_one - bet_one) + money_two + bet_two + bet_one if card_one >= card_two: result = "you won" total_bet = (money_two - bet_two) + money_one + bet_one + bet_two else: result = "it's a ti" print("Player one flipt a %s Player two flipt a %s Player one called %s Player two called %s player one You %s and now have $%.2f, player two You %s and now have $%.2f" %(card_one, card_two, player_one, player_two, result, total_bet)) money = high_card(2, 3, 30, 80)
https://discuss.codecademy.com/t/game-of-chance-cart-pick-why-wont-it-print/464846
CC-MAIN-2020-16
refinedweb
122
60.85
- restrict(direct3d) is renamed to restrict(amp). We shared that here. - Replaced grid, tiled_grid with extent and tiled_extent instead. We shared that here. - Added amp_graphics.h with concurrency:graphics namespace with Short Vector Types and Textures. We shared that here. - Changed default queuing _mode not to be immediate, and changed enum names slightly. We shared that here. - Added .wait(…) overloads to tile_barrier for finer grained control. We shared that here. - Replaced <writeonly<…> as option for array_view with .discard_data(). We shared that here. - Moved math functions to amp_math.h in fast_math and precise_math namespaces. We shared that here. - Replaced accelerator_restriction with .supports_double_precision + another property. We shared that here. - Removed all overloads that accepted accelerator instead use .default_view. We shared that here. - Added .set_default() to accelerator and changed how we pick the default. We shared that here. - Replaced global get_all_accelerators function with accelerator::get_all(). We shared that here. - Removed .z, .y, .x and instead use the subscript operator [0], [1], [2]. We shared that here. - Added tiled_extent::pad(). We shared that here. - Added tiled_extent:truncate. We shared that here. - Added accelerator_view_removed exception to help with TDR case. We shared that here. - Added .synchronize_async() to array_view, and made the other _async() methods work. We shared that here. - Added create_marker() to accelerator_view. We shared that here. - Removed .project() on array_view and instead you can still use subscript operator. We shared that here. - Changed the signature for a couple of the atomic operations. We shared that here. - Changed the type of D3D buffer underling C++ AMP arrays (minor impact to interop). We shared that here. - Made triple-digit number of bug fixes including perf improvements. We shared the bits, try re-running your existing code and we love feedback in our MSDN Forum. If you are moving code from the Developer Preview to the Beta and encounter any additional breaking changes or new features not listed above, then please let us know. Just got VS 11 and playing with AMP. So cool. Got 900 single prec. GFLOPS on a GTX 470 (which is close to theoretical max.) It is just some brain-dead code, but it took no time to create (i.e. I didn't have to create separate .cu files, or compile and launch OpenCL kernels etc.) I can hardly wait to try the debugger etc. This is going to be a game changer. Also, concurrency::get_accelerators() is now accelerator::get_all() eric, very good to hear, thanks for sharing. David, yes that is point #11, thanks for noticing. Whoops…didn't notice, thanks.. P.So. All Sample projects are updated by their owner at the blog post where the sample is posted. Please leave your comment under the blog post where you tried to get a sample that is not updated, so its owner can help you and everybody else can benefit from the help on that particular sample project. Many thanks. Thanks for the advice. I have just done that. Is there still no GPU debugging on Windows 7 with VS11 beta? If so, will it change in RTM version? Hi pauljurczak, yes, with Visual Studio 11 Beta you need Windows 8 Consumer Preview in order to debug your restrict(amp) code. We are still working to enable GPU debugging on Windows 7, and I am very confident we will get there. What I still don’t have at this point is a timeframe for when that will happen, sorry. How about intrinsic compiler support for the float16 "half" type? Hi asdf, no, that is not in the planned for this release – thanks for the feedback.
https://blogs.msdn.microsoft.com/nativeconcurrency/2012/02/29/changes-in-vs-11-beta-for-c-amp/
CC-MAIN-2016-44
refinedweb
594
71
Adding Community Runtime Anypoint Studio comes bundled with an embedded Enterprise runtime with a 30-day Enterprise trial license. You can add Community runtime to your Anypoint Studio instance to build and test applications in the free, open-source Mule runtime environment. In Studio, under the Help menu, select Install New Software. In the Work With field of the Install wizard, use the drop-down menu to select: Mule ESB Runtimes for Anypoint Studio Check the box to select Anypoint Studio Community Runtimes, and click Next. Click one or more runtimes you want to install. If you only want one version, expand the item and click the version you want. After selecting one or more versions, click Next. Click to agree to the terms and conditions of the license agreement, the Runtime of an Existing Project Complete the procedure above to install additional Mule runtimes on your instance of Studio. In the Package Explorer in Studio, double-click the mule-project.xmlfile to open it. Use the drop-down menu next to Server Runtime to select a new Mule runtime. If changing from Enterprise to Community, Studio displays a warning to advise that it may initiate updates to the namespace, and asks for your permission to proceed; click Yes to continue. Studio saves the change. Close the mule-project.xmlfile.?
https://docs.mulesoft.com/anypoint-studio/v/5/adding-community-runtime
CC-MAIN-2018-13
refinedweb
220
64.61
Packages from CDNsPackages from CDNs Because Deno supports remote HTTP modules, and content delivery networks (CDNs) can be powerful tools to transform code, the combination allows an easy way to access code in the npm registry via Deno, usually in a way that works with Deno without any further actions, and often enriched with TypeScript types. In this section we will explore that in detail. What about deno.land/x/? The deno.land/x/ is a public registry for code, hopefully code written specifically for Deno. It is a public registry though and all it does is "redirect" Deno to the location where the code exists. It doesn't transform the code in any way. There is a lot of great code on the registry, but at the same time, there is some code that just isn't well maintained (or doesn't work at all). If you are familiar with the npm registry, you know that as well, there are varying degrees of quality. Because it simply serves up the original published source code, it doesn't really help when trying to use code that didn't specifically consider Deno when authored. Deno "friendly" CDNsDeno "friendly" CDNs Deno friendly content delivery networks (CDNs) not only host packages from npm, they provide them in a way that maximizes their integration to Deno. They directly address some of the challenges in consuming code written for Node: - They provide packages and modules in the ES Module format, irrespective of how they are published on npm. - They resolve all the dependencies as the modules are served, meaning that all the Node specific module resolution logic is handled by the CDN. - Often, they inform Deno of type definitions for a package, meaning that Deno can use them to type check your code and provide a better development experience. - The CDNs also "polyfill" the built-in Node modules, making a lot of code that leverages the built-in Node modules just work. - The CDNs deal with all the semver matching for packages that a package manager like npmwould be required for a Node application, meaning you as a developer can express your 3rd party dependency versioning as part of the URL you use to import the package. esm.shesm.sh esm.sh is a CDN that was specifically designed for Deno, though addressing the concerns for Deno also makes it a general purpose CDN for accessing npm packages as ES Module bundles. esm.sh uses esbuild to take an arbitrary npm package and ensure that it is consumable as an ES Module. In many cases you can just import the npm package into your Deno application: import React from " export default class A extends React.Component { render() { return <div></div>; } } esm.sh supports the use of both specific versions of packages, as well as semver versions of packages, so you can express your dependency in a similar way you would in a package.json file when you import it. For example, to get a specific version of a package: import React from " Or to get the latest patch release of a minor release: import React from " esm.sh uses the std/node polyfills to replace the built-in modules in Node, meaning that code that uses those built-in modules will have the same limitations and caveats as those modules in std/node. esm.sh also automatically sets a header which Deno recognizes that allows Deno to be able to retrieve type definitions for the package/module. See Using X-TypeScript-Types header in this manual for more details on how this works. The CDN is also a good choice for people who develop in mainland China, as the hosting of the CDN is specifically designed to work with "the great firewall of China", as well as esm.sh provides information on self hosting the CDN as well. Check out the esm.sh homepage for more detailed information on how the CDN can be used and what features it has. SkypackSkypack Skypack.dev is designed to make development overall easier by not requiring packages to be installed locally, even for Node development, and to make it easy to create web and Deno applications that leverage code from the npm registry. Skypack has a great way of discovering packages in the npm registry, by providing a lot of contextual information about the package, as well as a "scoring" system to try to help determine if the package follows best-practices. Skypack detects Deno's user agent when requests for modules are received and ensures the code served up is tailored to meet the needs of Deno. The easiest way to load a package is to use the lookup URL for the package: import React from " export default class A extends React.Component { render() { return <div></div>; } } Lookup URLs can also contain the semver version in the URL: import React from " By default, Skypack does not set the types header on packages. In order to have the types header set, which is automatically recognized by Deno, you have to append ?dts to the URL for that package: import { pathToRegexp } from " const re = pathToRegexp("/path/:id"); See Using X-TypeScript-Types header in this manual for more details on how this works. Skypack docs have a specific page on usage with Deno for more information. Other CDNsOther CDNs There are a couple of other CDNs worth mentioning. UNPKGUNPKG UNPKG is the most well known CDN for npm packages. For packages that include an ES Module distribution for things like the browsers, many of them can be used directly off of UNPKG. That being said, everything available on UNPKG is available on more Deno friendly CDNs. JSPMJSPM The jspm.io CDN is specifically designed to provide npm and other registry packages as ES Modules in a way that works well with import maps. While it doesn't currently cater to Deno, the fact that Deno can utilize import maps, allows you to use the JSPM.io generator to generate an import-map of all the packages you want to use and have them served up from the CDN. ConsiderationsConsiderations While CDNs can make it easy to allow Deno to consume packages and modules from the npm registry, there can still be some things to consider: - Deno does not (and will not) support Node plugins. If the package requires a native plugin, it won't work under Deno. - Dependency management can always be a bit of a challenge and a CDN can make it a bit more obfuscated what dependencies are there. You can always use deno infowith the module or URL to get a full breakdown of how Deno resolves all the code. - While the Deno friendly CDNs try their best to serve up types with the code for consumption with Deno, lots of types for packages conflict with other packages and/or don't consider Deno, which means you can often get strange diagnostic message when type checking code imported from these CDNs, though skipping type checking will result in the code working perfectly fine. This is a fairly complex topic and is covered in the Types and type declarations section of the manual.
https://deno.land/manual/node/cdns
CC-MAIN-2022-21
refinedweb
1,196
57.1
The Web Version This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit. We have a blog that we can kind of use through the command line, but really blogs are on the web and used through a browser. Flask is the simplest web framework for Python. With just a few lines we can take what we have and run it locally as if it was on the internet. Since Flask is a 3rd party library we need to install it before we can use it. pip install --user flask If you are using Mac OS X/Linux you will need to type pip3instead of pip Your First Flask App Use your editor to create interface.py and type in the following basic flask application: from flask import Flask app = Flask(__name__) @app.route("/") def hello(): return "Hello Internet!" if __name__ == "__main__": app.run(debug=True) There is that __name__ again. We are close to explaining it. app is an instance of a class provided by the Flask framework and app.run() runs the development web server which allows us to test out how this Flask thing works. You can run it the same way you ran the blog.py program. $ python interface.py This program doesn't return to the command prompt, it outputs: * Running on and a blank line. The server will keep running until you type Control (or CTRL) and the letter "c". While the server is running you can type into your browser URL bar and test out your program! Reusing your blog.py code with Flask We have a blogging engine in blog.py and an web interface, now lets combine by implementing the index part of our online blog. To do that we will have to import the code from blog.py. We have imported code from the Python standard library and Flask but this out first time doing it from our own file. In interface.py alter the top section: from flask import Flask from blog import index, get_post app = Flask(__name__) The import line makes the index and get_post available to us in the interface.py file. if __name__ == __main__:comes in right here. When you import a file like blog.pyit runs the code in the file, but we don't want to run the code which provided our command line interface. When you run a program directly python file.py, then __name__is set to "main" by Python. This condition allows us to define code in a file which only get run when we run the file directly. We can use the index function we defined earlier to create a view (a function which returns HTML) in flask to start our blog app. Replace the hello function with: @app.route("/") def home(): slugs = index() return "<br>".join(slugs) The interface.py file should look like this now: from flask import Flask from blog import index, get_post app = Flask(__name__) @app.route("/") def home(): slugs = index() return "<br>".join(slugs) if __name__ == "__main__": app.run(debug=True) Blog Post Page We need another view to look at individual blog posts. Just like with the command line interface we need to be able to tell the view which blog post we want to view using the slug like we did last time. Flask lets us do this through the route function/decorator. Decorators are a more advanced concept, but using them very straight forward. For the curious a decorator take a function as its argument and returns another function. Let's define a post view below the home view: @app.route("/<slug>") def post(slug): content = get_post(slug) return content When you go to see your index page and you can manually to each post with and any other slug you have defined. Cool. The next step is to hook up it up using templates and links so you can go from the index page to each post. Templates Most web frameworks have a built in templating system. Templates enable you to write some of the more repeative aspects of HTML once and reuse them. Think of the header, footer or navigation on your favorite website. They apper on on every page, but we want want to duplicate it. If we did then a change in the footer would have to be made across all the pages in a web application! Flask uses a template language called Jinja2 and we will cover some of it today, but there is more when you are ready to learn about it. If you are interested in Django, a more powerful web framework written in Python, its template language is similar to Jinja2, and the latest release of Django works with Jinja2 directly. Flask looks for your template files in folder called templates right beside interface.py. Create the folder called templates either with the commandline or your file explorer. Use your editor to create a file called base.html in templates and type in the following: <!DOCTYPE html> <html> <head> <title>My Blog</title> </head> <body> {% block content %} {% endblock content %} </body> </html> This is basically the start and end of the two blog posts you wrote in the last section, but instead of the main part, the header and text, we have put in a placeholder {% block content %}. Jinja2 will pass through the HTML we write in our templates but it will process two kinds of special markup. The first is block which we just saw. The second is inserting data which uses a syntax like {{ foo }}. We will see an example shortly. Next create a file called post.html in templates and type in the following: {% extends "base.html" %} {% block content %} {{ post|safe }} {% endblock content %} This template has no HTML in it at all, but it is still valid. {% extends "base.html" %} tells Jinja2 to use that template first and then apply this template. What that means is inside post.html we can override blocks in base.html. The only block in base.html was {% block content %}. In base.html it contained nothing, but here we are placing {{ post|safe }} inside of it. {{ post }} get the value post out of the context (which we will see below) and inserts into the HTML. The |safe says to pass the content in post through the safe filter. There are filters for date and time formatting among other things. They are like functions you can use in the template. The safefilter tells Jinja2 that if the value, in this case postcontains HTML let it through. Normally Jinja2 would escape it to prevent malicious HTML from untrusted soruces, but we trust ourselves since we are writing the blog posts. Instead of {{ post|safe }} we could have written <h1>First Post</h1> <p>Hello, Internet!</p> But we want to write only one template for all blog posts instead of a template for each one. The next step is to use the template in your post function. Type out the new import and the changes in the post function: from flask import Flask, render_template # Change this line from blog import index, get_post app = Flask(__name__) @app.route("/") def home(): slugs = index() return "<br>".join(slugs) @app.route("/<slug>") def post(slug): content = get_post(slug) return render_template('post.html', post=content) # Change this line if __name__ == "__main__": app.run(debug=True) post=content is where we define the context of which varibles are available in our template. This should work, except the content inside each of your blog post file includes <html> and <head> element which aren't needed any more. Edit your "first-post.html" to remove everything in the base.html: <h1>First Post</h1> <p>Hello, Internet!</p> Do the same for any other blog posts you have written. Now you can run the server again, and you should see you first post again at. When you call post, the file is read and put into the content variable. Then in the render_template line we pass content in as }} part in the template takes the value in post, what was in content and inserts it into the resulting HTML. Better Home Page We have come a long way and you have done a great job! The last step is to make the index of the blog link to each post with a template. Use your editor to create templates/home.html and type in the following: {% extends "base.html" %} {% block content %} <h1>My Blog</h1> <ul> {% for slug in slugs %} <li><a href="{{ url_for('post', slug=slug) }}">{{ slug }}</a></li> {% endfor %} </ul> {% endblock content %} This template has two new things, first is a for loop. It works the same as the for loop you are familar with in Python, though it is specific to Jinja2. The second is the url_for which is a helper function provided by flask to construct the correct url given the name of a function view and the data passed into it. Update the home function in interface.py: @app.route("/") def home(): slugs = index() return render_template("home.html", slugs=slugs) # Change this line Re-run your application and you should have a blog you can surf around. After well deserved break we are going to put it on the Internet!
http://watpy.ca/learn/learn-to-code/4_Flask.md
CC-MAIN-2018-13
refinedweb
1,549
74.19
You can subscribe to this list here. Showing 2 results of 2 On Tue, Nov 04, 2003 at 03:20:51PM -0500, Curtis D. Gayheart wrote: > I might be outside the scope of what I'm supposed to be doing, but I've got > a script that defines one class than extends a compiled java class. The > script also has one method outside the class to create and return a new > instance of that class. From java, I create a proxy to the scripted class > with an interface and all is well; method calls end up at either the > superclass or the script appropriately. What I can't figure out, though, is > how to call the superclass from the script (analogous to super.myMethod() in > java). Is there a reserved word or invoking mechanism that I'm missing that > will allow this? You should be able to use super.myMethod() just as in Java, from the scripted subclass... Isn't that working? What happens? Actually, BeanShell does some fancy stuff to get that to work.. since there is not way to do it through the Java reflection API. We generate accessor methods into class automatically and call them for you when you say super.whatever. Pat Hi All, I have a weird requirement and pretty new to bean shell (I have been playing with it for couple of days) I am having the following code. ------------------------------------------------------ import java.util.Properties; import bsh.EvalError; import bsh.Interpreter; public class PropertyTest { public static void main(String[] args) throws EvalError { Properties property = new Properties(); property.put("value1", new Integer(2)); property.put("value2", new Integer(3)); property.put("value2", new Integer(0)); Interpreter i = new Interpreter(); // Construct an interpreter i.set("property", property); i.eval("property.value3 = property.value2 * property.value1;"); // I want to intercept the calls to Properties getter and setter values and call the Property.get("value1") etc // Can I do it? If yes, please tell me. //The following line should print 6 System.out.println("The value3:" + property.get("value3")); } } ------------------------------------------------------ Basically, what I want is to intercept the getter and setter called by bean shell and translate it to properties.get("value1") or properties.get("value2") or properties.set("value3", value) Is it possible? How would I go about it. One option I figured out is to create commands for set and get and use it to set and get values. But the expression would have to change it to some thing like set(property, get(property, "value2") * get(property, "value1")); But my requirement is to let the user define it as in code above ["property.value3 = property.value2 * property.value1;"]. Please help. Thanks in advance, _Jp _________________________________________________________________ Great deals on high-speed Internet access as low as $26.95. (Prices may vary by service area.)
http://sourceforge.net/p/beanshell/mailman/beanshell-users/?viewmonth=200311&viewday=6
CC-MAIN-2014-42
refinedweb
465
59.8
- Investing in a Gold Mine...Called Anastasia - Securitizing Human Capital - Are College Graduates Truly Wealthier? - The Best-Paying Careers? - Minor Initial Differences Magnify over Time - How Investing in Human Capital Pays - College Grads Learn to Buy Different Assets - Could the Fortunes of College Graduates Wane? - Does the Ivy League Pay Greater Dividends? - Distinct Groups of Students—And "Fun Capital" - Did Anastasia Accept the Offer? - Summary: The Four Principles in Action One of my rather bright undergraduate students from a few years ago, who stayed in touch over the years, decided after some years in the labor force to invest (more) in her human capital by returning to graduate school. She wanted to get a master's degree in advanced mathematical finance. I encouraged her to look broadly and consider all the top schools around the world. After a grueling application and admissions process, she was finally accepted to one of the best graduate schools, which happens to be located in the U.S. Midwest. This was quite an achievement for her, and I suspect her acceptance letter has been framed for posterity. Unfortunately, a few weeks after the good news came in the mail, she got a follow-up letter from the school with the financial details and a huge invoice. She was facing a total cost of almost $80,000 for a graduate education that takes less than two years to complete. (She probably didn't want to frame the invoice.) And although she understood that this was a great investment in her human capital, at the same time, she didn't have $80,000 sitting in a bank account ready to be withdrawn. Furthermore, this was a full-time, intense program that would limit her ability to earn any outside labor income while she was in school. So, like most prospective students, she began investigating various private loans through banks and organizations such as Sallie Mae and government-run student loan programs such as the Federal Direct Loan Program. The paperwork was daunting, the money wasn't free, and she started having some doubts. Was $80,000 in additional debt really worth it? After all, she was still paying off some of her undergraduate student loans. Investing in a Gold Mine... Called Anastasia When I found out about the dilemma she was facing—and the possibility she might abandon her educational plans—I offered Anastasia (not her real name, obviously) a deal, which I will outline here. I knew she was a bright and hard-working student who would do extremely well in graduate school and complete the program in the top of her class. My estimate was that she would then go on to a successful career in the financial services industry and likely earn thousands of dollars a year in salary and bonus. In my view, she had the potential of a high-producing gold mine or oil well, and I personally wanted the opportunity to invest in her human capital. So, I offered her $50,000 in cash—to finance the majority of her tuition—in exchange for a mere 10 percent of her pretax earnings during the first ten years after she graduated. To my way of thinking, the money I offered wasn't a loan or any type of debt. It was an investment. As long as she was in school, and wasn't earning any money, she owed nothing. I invited her to think of this as accepting a slightly higher tax rate in the future in exchange for a deeply subsidized education today. Should We Allow Human Capital Derivatives? As you saw in the Introduction, "Human Capital: Your Greatest Asset," investing time and money to develop your human capital pays off on average. But the dividends and investment returns—especially given student loan interest payments—might be less than in previous years, as the cost of investing in human capital (getting a college degree) continues to increase faster than inflation (as measured by the consumer price index, or CPI).1 This is especially true for elite private colleges and universities, in which tuition has risen the most and the fastest. College graduates in aggregate have more student loan debt than ever before and are entering the labor force with thousands of dollars in balance sheet liabilities, well before they have taken out their first mortgages. According to an article in the Wall Street Journal on September 3, 2009, today, almost two thirds of all college students have borrowed money to pay for their tuition; their debt load upon graduation is an average of slightly more than $23,000. Is there an alternative? I think so. Here's what I'm thinking: Perhaps there will come a day in the not-too-distant future when current and future students can sell a fraction of their (extraordinarily valuable) human capital when they are young to finance the costs of investing in education and going to school. These young students would get a lump sum of cash in advance or, alternatively, spread out over their years in school. These sums would not be considered a loan, or "bond-like." Rather, the funds would be considered "stock-like"—that is, similar to a company or a small business issuing shares (via an IPO or seasoned equity offering) to finance its expansion and investment opportunities. The money would be repaid by the student, eventually, in the form of preferred dividends for a predetermined period starting after graduation. I'm calling this concept Human Capital DerivativeS (HuCaDS), or, with tongue in cheek, Human Capital Daddy of Sugar. Here's how I figured the math. When Anastasia graduated in approximately 24 months, I anticipated she would be earning at least six digits—given her previous experience and the typical salary structure for specialists in her field. And, even if her salary remained constant at $100,000 per year (pretax) for the next ten years, that would yield me $10,000 for ten years on an initial investment of $50,000. To analyze this more precisely, I calculated something called the internal rate of return (or IRR) in my Excel spreadsheet program. A cash outflow of 50,000 today followed by zero cash flows for two years (while she is in school) and then by a positive cash flow of $10,000 from years 2 through 12 represents an annualized return of 10.25 percent. That investment return is much better than the rates at my local bank. In fact, this deal could turn out even better for both of us. Let's imagine that Anastasia performs better than expected, and by her fifth year back in the labor force, she is earning $200,000 per year. So, for the first five years I would receive $10,000 in dividends and for the remaining five years of our HuCaDS agreement, she would be sending me $20,000 each year to pay back my investment. That works out to an internal rate of return—for me—of 15.2%. This is better than you can hope for even in the most irrational of stock market bubbles! Of course, my HuCaDS arrangement would also leave me exposed to some downside risk as well. Anastasia might decide to shelve her completed master's degree and backpack across Europe or India for five years after graduation, which might satisfy her lifetime ambition to travel the world but would generate zero dividends for me. In that case, her return to the labor force would leave me only five years of cash flows in the contract term. Alternatively, she might decide to join the U.S. Peace Corps—or take a minimum-wage job at McDonalds paying only $25,000. Then the internal rate of return from my $50,000 investment would be zero, or possibly even negative. In those cases, I would have been better off putting the $50,000 under my mattress than investing in a HuCaDS with Anastasia. That is the risk and return trade-off for me: On the upside, I can get returns in the double digits—and on the downside, I could lose it all. Now I obviously would invest only a small fraction of my total net worth in human capital derivative arrangements, but at the same time, I would also derive some psychic dividends from having helped finance a student's education. I mention this (true) story because I think it could serve as an alternative for future students to onerous and anonymous student loan debt, with potentially crushing interest payments. In my teaching career, I see firsthand how current levels of student loan debt force people into jobs and careers they don't want or like, simply because they have to make the loan payments. My HuCaDS proposal would enable graduates to accept any job they truly want, knowing that they owe only a floating fraction of their salary as opposed to a fixed and unyielding obligation, as with a student loan.
http://www.informit.com/articles/article.aspx?p=1409812&amp;seqNum=5
CC-MAIN-2018-09
refinedweb
1,496
58.21
. Full sample code Complete example code for everything in this post can be found on Github at this repository: Project setup Start by creating a new .NET Core console application. Then add a package reference to the PowerShell SDK. At the time of this writing, I’m using .NET Core 3.1 with version 6.2.4 of the PowerShell SDK. This is the foundation step for all three of the example scenarios below. Install-Package Microsoft.PowerShell.SDK Example 1 (Default runspace) For the first example we are going to show the bare minimum code to run a script with a couple of input parameters, under the default runspace, and printing the resulting pipeline objects. Sidebar: What is a runspace? A PowerShell runspace executes your script code and maintains context/state for the session (loaded modules, imported functions, remote session state, etc). You can view the current runspaces in any PowerShell session by running the Get-Runspace cmdlet. In this example we use the default runspace. Calling PowerShell.Create() will get you a new hosted instance of PowerShell you can use within your .NET process. We call the .AddScript() and .AddParameters() methods to pass our input, and then call .InvokeAsync() to execute the pipeline. After the pipeline finishes we print any objects that were sent to the pipeline/output stream. /// ) { // create a new hosted PowerShell instance using the default runspace. // wrap in a using statement to ensure resources are cleaned up. using (PowerShell ps = PowerShell.Create()) { // specify the script code to run. ps.AddScript(scriptContents); // specify the parameters to pass into the script. ps.AddParameters(scriptParameters); // execute the script and await the result. var pipelineObjects = await ps.InvokeAsync().ConfigureAwait(false); // print the resulting pipeline objects to the console. foreach (var item in pipelineObjects) { Console.WriteLine(item.BaseObject.ToString()); } } } Example 2 (Custom runspace pool) The previous example works fine in many simple hosted script scenarios. However if you need to handle additional stream output (warnings, non-terminating errors, etc), or if you need to execute many scripts simultaneously, this example will demonstrate those usage patterns. Sidebar: What are PowerShell streams? PowerShell uses Piping (|) to pass objects from one cmdlet to another. In order to differentiate between pipeline objects and other output (errors, warnings, traces, etc), PowerShell writes objects to the different streams (or pipes) for these different types of data. That way the next cmdlet in the chain can just watch the Output stream and ignore the other streams. In this example we initialize and use a RunspacePool. Maintaining a runspace pool is helpful for multithreading scenarios where you need to run many scripts at the same time and have control over the throttle settings. It also allows us to import modules for each runspace used, configure thread re-use options, or specify remote connection settings. We also watch for three of the other output streams (warning, error, and information). This is helpful if we need to monitor things other than pipeline output, like Write-Host/Write-Information. using System; using System.Collections.Generic; using System.Management.Automation; using System.Management.Automation.Runspaces; using System.Threading.Tasks; namespace CustomRunspaceStarter { /// <summary> /// Contains functionality for executing PowerShell scripts. /// </summary> public class HostedRunspace { /// <summary> /// The PowerShell runspace pool. /// </summary> private RunspacePool RsPool { get; set; } /// <summary> /// Initialize the runspace pool. /// </summary> /// <param name="minRunspaces"></param> /// <param name="maxRunspaces"></param> public void InitializeRunspaces(int minRunspaces, int maxRunspaces, string[] modulesToLoad) { // create the default session state. // session state can be used to set things like execution policy, language constraints, etc. // optionally load any modules (by name) that were supplied. var defaultSessionState = InitialSessionState.CreateDefault(); defaultSessionState.ExecutionPolicy = Microsoft.PowerShell.ExecutionPolicy.Unrestricted; foreach (var moduleName in modulesToLoad) { defaultSessionState.ImportPSModule(moduleName); } // use the runspace factory to create a pool of runspaces // with a minimum and maximum number of runspaces to maintain. RsPool = RunspaceFactory.CreateRunspacePool(defaultSessionState); RsPool.SetMinRunspaces(minRunspaces); RsPool.SetMaxRunspaces(maxRunspaces); // set the pool options for thread use. // we can throw away or re-use the threads depending on the usage scenario. RsPool.ThreadOptions = PSThreadOptions.UseNewThread; // open the pool. // this will start by initializing the minimum number of runspaces. RsPool.Open(); } /// ) { if (RsPool == null) { throw new ApplicationException("Runspace Pool must be initialized before calling RunScript()."); } // create a new hosted PowerShell instance using a custom runspace. // wrap in a using statement to ensure resources are cleaned up. using (PowerShell ps = PowerShell.Create()) { // use the runspace pool. ps.RunspacePool = RsPool; // specify the script code to run. ps.AddScript(scriptContents); // specify the parameters to pass into the script. ps.AddParameters(scriptParameters); // subscribe to events from some of the streams ps.Streams.Error.DataAdded += Error_DataAdded; ps.Streams.Warning.DataAdded += Warning_DataAdded; ps.Streams.Information.DataAdded += Information_DataAdded; // execute the script and await the result. var pipelineObjects = await ps.InvokeAsync().ConfigureAwait(false); // print the resulting pipeline objects to the console. Console.WriteLine("----- Pipeline Output below this point -----"); foreach (var item in pipelineObjects) { Console.WriteLine(item.BaseObject.ToString()); } } } /// <summary> /// Handles data-added events for the information stream. /// </summary> /// <remarks> /// Note: Write-Host and Write-Information messages will end up in the information stream. /// </remarks> /// <param name="sender"></param> /// <param name="e"></param> private void Information_DataAdded(object sender, DataAddedEventArgs e) { var streamObjectsReceived = sender as PSDataCollection<InformationRecord>; var currentStreamRecord = streamObjectsReceived[e.Index]; Console.WriteLine($"InfoStreamEvent: {currentStreamRecord.MessageData}"); } /// <summary> /// Handles data-added events for the warning stream. /// </summary> /// <param name="sender"></param> /// <param name="e"></param> private void Warning_DataAdded(object sender, DataAddedEventArgs e) { var streamObjectsReceived = sender as PSDataCollection<WarningRecord>; var currentStreamRecord = streamObjectsReceived[e.Index]; Console.WriteLine($"WarningStreamEvent: {currentStreamRecord.Message}"); } /// <summary> /// Handles data-added events for the error stream. /// </summary> /// <remarks> /// Note: Uncaught terminating errors will stop the pipeline completely. /// Non-terminating errors will be written to this stream and execution will continue. /// </remarks> /// <param name="sender"></param> /// <param name="e"></param> private void Error_DataAdded(object sender, DataAddedEventArgs e) { var streamObjectsReceived = sender as PSDataCollection<ErrorRecord>; var currentStreamRecord = streamObjectsReceived[e.Index]; Console.WriteLine($"ErrorStreamEvent: {currentStreamRecord.Exception}"); } } } Shared App Domain One final example I created explores the shared .NET Application Domain. Since a hosted PowerShell runspace lives in the same .NET app domain as your calling application, you can do the following: - Send real (not serialized) class object instances defined in your hosting application as parameters into a script. - Instantiate class object instances in the script (for types defined in your hosting application). - Call static methods on types defined in your hosting application. - Return class objects to the pipeline. The sample code for this one is pretty lengthy, so head over to the Github repo to view the full example. Note: There is no special configuration or setup required to enable a shared app domain. The shared app domain can be leveraged in all runspace use cases. The main purpose of this code sample is to demonstrate how to leverage it. You can leverage the code in this example to do things like: - Pass rich objects between different scripts in a multi-step pipeline. - Provide pre-configured complex class instances in as parameters that may be easier to configure in .NET than in PowerShell. - Return a results/summary class object to the pipeline, instead of a large array of pipeline objects you need to interpret to determine if the script was successful or not. Common problems Here are a few of the common pitfalls that you might encounter when running hosted PowerShell scripts: Execution policy Executing scripts can fail if your execution policy is configured to prevent scripts from running. You may need to change your execution policy to resolve this. Read more here. Pipeline errors Terminating errors are the fatal errors that can occur inside your script code. If this happens the pipeline stops immediately and .InvokeAsync() will throw an exception. Implement error handling in your scripts to resolve this. Read more here. Pipeline output types If your hosting application prints the pipeline output objects to the console with .ToString() like the examples above do, then you may see type names printed in some cases. For example: a script calling the Get-Service cmdlet would show this in your console output repeated several times: “System.ServiceProcess.ServiceController“. This is expected behavior because the pipeline returns real class objects. The Get-Service command returns this type. For .NET types that do not implement a custom ToString() override, the type name will be printed instead. To solve this you can reach into the object and pull out or print other properties you need (like <serviceobject>.ServiceName). Module loading problems Trying to load a module inside your script and it isn’t loading? Check the basics: - Is the module actually installed on the system? - Is the module supported in PowerShell Core? Or is it a legacy PowerShell module? - Is it the correct bitness? PowerShell comes in x86 and x64 architectures, and some modules only work in one architecture. Hi! Great article, it helped me somewhat… I say somewhat because what I intend to do is to execute a lot of scripts over Azure. But, here’s the catch, the first command is Login-AzAccount, and when I use your code, the app stalls. On the powershell command line, a warning is shown when I execute that command, telling me to open the browser, etc, etc, etc. But on your sample code, nothing is output in any of the streams, and the app just stalls. Can you help me? Jorge from Portugal. LikeLiked by 1 person To solve this problem you will need to run Login-AzAccount but provide additional parameters to authenticate using a service principal with a password or certificate. This way the command can run unattended (no dialog prompts or interaction). You can read more about that process here: Hi I am trying to run some azureAD commands in function app 3.0 in C# and I am getting an exception “ClassName”: “System.Management.Automation.CommandNotFoundException”, “Message”: “The ‘Get-AzureRmTenant’ command was found in the module ‘AzureRM.Profile’, but the module could not be loaded. For more information, run ‘Import-Module AzureRM.Profile’.”, “Data”: null, and when I try to load the module using this cmdlet Import-Module AzureRM.Profile’ it gives an error saying incorrect cmdlet To cross-check whether the commands are available or not I ran Get-commands that listed whole lot of AzureRM commands So I haven’t run into this particular problem before, but I’m guessing that it might be caused by trying to load AzureRM cmdlets from dotnet core / PowerShell Core. I would recommend trying to use the newer dotnet core supported Azure commands available in the ‘Az’ module. The equivalent/replacement command is Get-AzTenant. This also assumes that you have the Az module installed. There is a restriction in function apps. It does not allow to add/install modules from nuget or powershell gallery. Either i download and manual upload and then try to import module not sure it will work thats too hacky. Function apps pwershell execution using C# has too many limitations. My basic doubt is if module is not available why the command is listed ? Anyway i am going to open issue on github let see if get answers. Thanks Man Hey, I am trying to achieve something similar. I have explained my problem scenario at. Can you please check and help me? Not quite sure what’s going on there, sorry– I haven’t tried to get this working within a linux hosted function app before. I may be missing something – I’m trying to understand the Shared AppDomain example, and I don’t see any difference between that and the default runspace example. Is there code missing here, or does the default runspace use the same AppDomain as the caller anyway? LikeLiked by 1 person Hi Jon– yeah great question. The shared appdomain example does in fact use the same code sample as the default runspace. Meaning you don’t have to do anything special to the runspace setup/configuration to enable this, it just works out of the box. The main difference in the project files here is in the program.cs file has different script code, and the the output handling after the runspace is called. I will add some clarifying comments in the Github repo and the post to make this more clear. Thank you – that’s *really* helpful to know 🙂 LikeLiked by 2 people Thank you for this great article,but i run the code and it is run successful without any errors but it is not effecting at all. my script is creating new local user and when i test it directly from the power-shell it is works and the user created but when i try it from this code it is not creating any user. LikeLiked by 1 person Its likely that there is an error in your script when it is being executed in the runspace. To help troubleshoot I would recommend setting: $ErrorActionPreference = ‘Stop’, at the top of your script, that way any terminating errors bubble up. Its possible there are errors you are not seeing. Great article, thanks a lot. I have a question about the bitness. Apparently our VS environment opens the x86 PowerShell architecture, and then a call to MSOnline fails because it only works in x64. Setting the target plattform of the project to x64 does not seem to help. Do you have any suggestions on how to force PowerShell to run in the x64 architecture? Thanks! If you set the platform target in the project properties to x64 (instead of anycpu) that should normally help. Is it just happening while launching in Visual Studio debug mode? Or does it also run incorrectly when launched outside of Visual Studio? If it only behaves incorrectly in Visual Studio, perhaps it could be the Debug launch settings in VS (for example its specifically launching an x86 debug process) Thanks a lot for your reply! The error is also happening when running the Release build of the executable outside Visual Studio. Thanks! Hmm not really sure what’s happening then, I would recommend trying a new stackoverflow question where you can post the code and more details. This is a really good article. Thanks Keith for putting this together. Is there a way I can run PowerShell 2 commands using this? I have some legacy scripts that I need to run via a .netcore app. And they require running using PowerShell 2. LikeLiked by 1 person @Siva — great question. The answer depends on the cmdlets used in your scripts. Using the runspace/SDK method described in this article means you can only run PowerShell Core compatible commands. Some of the commands used in your existing PS 2.0 scripts might be compatible, some might not be. I would review the breaking changes lists for PS 6.0 to help identify those potential issues: If you have commands that would break under PowerShell Core, then the only other options are to update your PowerShell scripts to be compatible, or to use the Process.Start() in .NET to launch a shell process to call Windows PowerShell. You just won’t get the integration used in the examples here. Thank you Keith for your reply. We are to run some of the PS scripts in here – And we do not plan to refactor these scripts at this stage. One of the script that I was running as using `Get-WmiObject` which definitely seem to have been v1 cmdlet. So that indicates that I only have the option of using Process.Start()! I might give that a go. LikeLiked by 1 person Thank you for the article. I am getting an error though, which is it seems to not be finding any of the cmdlets I have in my script. “ErrorStreamEvent: System.Management.Automation.CommandNotFoundException: The term ‘Connect-AzAccount’ is not recognized as the name of a cmdlet, function, script file, or operable program.” I have posted a question on stack overflow as well and have yet to get an answer. If you could help with this it would be greatly appreciated. The command not found error means it can’t load the Az module. I would check to make sure that the computer running the application has the Az module installed either globally, or in the user profile context that that your .net core application runs. Thank you for this really good article. It was great help for us. We have implemented a .net core PowerShell host that uses the Microsoft.PowerShell.SDK, is delivered as a self-contained exe and can execute all kinds of PowerShell scripts allowed using our intern data exchange api, executed via the included PowerShell SDK. So far it works great. In our scripts we want to use the “ThreadJob” module, which does not work on our host because the PS module “ThreadJob” is not included in Microsoft.PowerShell.SDK. Now we’ve found a nuget package: ThreadJob 2.0 Can we include this Nuget package in my host (in VisualStudio), and if so, how? How can we import (where is it physically) and deliver the module in our self-contained host? If you need to import a PowerShell module (like ThreadJob), then that module must be installed on the target machine where your code will run. You don’t package the ThreadJob module with your application, so ignore any nuget packages, that will not help. You need to run the ‘Install-Module’ cmdlet at some point and install the dependency. So your application code can call ‘install-module’ the first time it runs, OR you can do it via some sort of setup/installation process. An even better option would be to just remove the dependency on ThreadJob entirely and manage the parallelism via runspace pools and execute multiple scripts at once. I am getting Following Error when i tried to run Powershell Script thru Web API. using above mentioned Method 1. I am able to Connect Azure AD with Credential if my Connect Azure AD command is in Main Script. it is not working if my Script is like below Error Message:”The term ‘Connect-AzureAD’ is not recognized as a name of a cmdlet, function, script file, or executable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. ” Script like this: Import-Module MSOnline Import-Module Microsoft.Online.Sharepoint.Powershell -DisableNameChecking Import-Module ExchangeOnlineManagement Import-Module MicrosoftTeams function BoardingOpenConnections() { $script:allConnectionsReady = $false # Connect To Azure $script:AzureConnected = $false write-host “Connecting to Azure AD” Connect-MSolService -Credential $scriptCred Connect-AzureAD -Credential $scriptCred $script:AzureConnected = $true # Connect to Exchange OnLine $script:ExchangeConnected = $false Connect-ExchangeOnline -Credential $scriptCred -ShowProgress $False $script:ExchangeConnected = $true $script:allConnectionsReady = $True } Try { BoardingOpenConnections() } catch { } finally { } The “Connect-AzureAD” cmdlet is from the AzureAD module. I see your script installs other modules, but not AzureAD. This error means that either AzureAD is not installed in that system, or that module cannot be loaded (for other reasons). If i put the code of BoardingOpenConnections Function below Import Statement (Out side of Try catch ) then it is Connecting to Azure and MSolService but if i put same code inside Try catch blog then it is not working and throwing Error like ‘The term ‘Connect-MSolService’ is not recognized as the name of a cmdlet, function, script file,’ If the modules are installed in the system, then they don’t appear to be loading. I would confirm that the modules you are using are supported/working in PowerShell Core. If you are still stuck after that, then I would try posting a question on the Microsoft support forums or StackOverflow.
https://keithbabinec.com/2020/02/15/how-to-run-powershell-core-scripts-from-net-core-applications/?replytocom=2649
CC-MAIN-2021-31
refinedweb
3,256
57.37
Important: Please read the Qt Code of Conduct - <SOLVED>NCURSES not working properly from within IDE? The following code compiles and runs within the QTC IDE but does not clear screen. @#include <iostream> #include "term.h" #include "unistd.h" #include "ncurses.h" using namespace std; void Clear_Screen(bool RESET= true); int main() { Clear_Screen(); cout << "Hello World!" << endl; Clear_Screen(); cout << "Hello Again!" << endl; return 0; } void Clear_Screen(bool RESET) { if (!cur_term) { int result; setupterm(NULL,STDOUT_FILENO,&result); if (result <=0) return; } putp(tigetstr("clear")); if (RESET == true) putp(tigetstr( "rs1")); }@ As above After succesful compile the run from inside QTC gives @ Hello World! Hello Again! @ When run straight from XTerm result is @ Hello Again! @ debug shows "result" with value -1. Is it the way that QTC opens the application that is causing this error?? After some research i believe the TTY being applied by running it from inside QTC and directly from xterm are diff??? Just guessing????? setupterm() the way you have used it requires the presence of a TERM environment variable which in all likelihood is absent. You will also need to set "Run in Terminal" in your project settings in order to get a Curses compatible terminal (the Output Pane in Qt Creator is not). Thanks for your response. Already have "Run in Terminal" selected. bq. setupterm() the way you have used it requires the presence of a TERM environment variable which in all likelihood is absent. so what is the difference between running inside the IDE (absent) vs opening it directly from XTerm (present)?? The environment you have told Qt Creator to pass to your program does not contain a TERM environment variable, so one is not present. Xterm establishes an environment for its child processes that contains the TERM variable. Where am I telling QTC to pass this environment and can I change it so that running the application works correctly from inside the IDE?? I am assuming that QTC calls on like a shell program that opens the current project with one or two parameters like the one that " Press <return> to close this window" comes from. If the program can not be run properly from within the IDE I can not see how this would be a bug? The environment settings are in the same place as the "Run in terminal" option. It is fully configurable with several presets that you can work from. So are you saying that there is an environment setting or parameter that I need to set for this small application to work correctly from within the ide? I would love to know what it is! Solved It would appear that you have to insert TERM into your RUN ENVIRONMENT by clicking on ADD. Then give it the value of XTERM.
https://forum.qt.io/topic/33348/solved-ncurses-not-working-properly-from-within-ide
CC-MAIN-2021-43
refinedweb
460
73.27
In a multi-symbol cBot, will the OnTick() method trigger for each incoming tick on every symbol used, or just on incoming ticks of the associated chart? Do I need to write code to check if Bid/Ask price changed on each symbol, and trigger an event for such occurrence? Spotware said: Dear Trader,It's not possible. You could collect bid and ask prices in RunTime and then access them. Dear Trader, It's not possible. You could collect bid and ask prices in RunTime and then access them. OnTick() method is triggered on each incoming tick of the Symbol your cBot is set to run. Dear all how i can draw the value only of the rsi on the chart for multi time frame for this indicator ? using System; using cAlgo.API; using cAlgo.API.Indicators; using cAlgo.API.Internals; namespace cAlgo.Indicators { [Levels(30, 70, 80, 20)] [Indicator(IsOverlay = false, TimeZone = TimeZones.UTC)] public class MultiSymbolMA : Indicator { private RelativeStrengthIndex ma1, ma2, ma3, ma4; private MarketSeries series1, series2, series3, series4; private Symbol symbol1, symbol2, symbol3, symbol4; [Parameter(DefaultValue = "USDCAD")] public string Symbol1 { get; set; } [Parameter(DefaultValue = "EURAUD")] public string Symbol2 { get; set; } [Parameter(DefaultValue = "EURJPY")] public string Symbol3 { get; set; } [Parameter(DefaultValue = "GBPUSD")] public string Symbol4 { get; set; } [Parameter(DefaultValue = 5)] public int Period { get; set; } [Output("MA Symbol 1", Color = Colors.Red, PlotType = PlotType.Line, Thickness = 1)] public IndicatorDataSeries Result1 { get; set; } [Output("MA Symbol 2", Color = Colors.Magenta, PlotType = PlotType.Line, Thickness = 1)] public IndicatorDataSeries Result2 { get; set; } [Output("MA Symbol 3", Color = Colors.Yellow, PlotType = PlotType.Line, Thickness = 1)] public IndicatorDataSeries Result3 { get; set; } [Output("MA Symbol 4", Color = Colors.White, PlotType = PlotType.Line, Thickness = 1, LineStyle = LineStyle.Dots)] public IndicatorDataSeries Result4 { get; set; } protected override void Initialize() { symbol1 = MarketData.GetSymbol(Symbol1); symbol2 = MarketData.GetSymbol(Symbol2); symbol3 = MarketData.GetSymbol(Symbol3); symbol4 = MarketData.GetSymbol(Symbol4); series1 = MarketData.GetSeries(symbol1, TimeFrame); series2 = MarketData.GetSeries(symbol2, TimeFrame); series3 = MarketData.GetSeries(symbol3, TimeFrame); series4 = MarketData.GetSeries(symbol4, TimeFrame); ma1 = Indicators.RelativeStrengthIndex(series1.Close, Period); ma2 = Indicators.RelativeStrengthIndex(series2.Close, Period); ma3 = Indicators.RelativeStrengthIndex(series3.Close, Period); ma4 = Indicators.RelativeStrengthIndex(series4.Close, Period); } public override void Calculate(int index) { ShowOutput(symbol1, Result1, ma1, series1, index); ShowOutput(symbol2, Result2, ma2, series2, index); ShowOutput(symbol3, Result3, ma3, series3, index); ShowOutput(symbol4, Result4, ma4, series4, index); } private void ShowOutput(Symbol symbol, IndicatorDataSeries result, RelativeStrengthIndex RelativeStrengthIndex, MarketSeries series, int index) { int index2 = GetIndexByDate(series, MarketSeries.OpenTime[index]); result[index] = RelativeStrengthIndex.Result[index2]; string text = string.Format("{0} {1}", symbol.Code, Math.Round(result[index], 0)); ChartObjects.DrawText(symbol.Code, text, index + 1, result[index], VerticalAlignment.Center, HorizontalAlignment.Right, Colors.Yellow); } private int GetIndexByDate(MarketSeries series, DateTime time) { for (int i = series.Close.Count - 1; i >= 0; i--) if (time == series.OpenTime[i]) return i; return -1; } } } We do not provide coding assistance services. We more than glad to assist you with specific questions about cAlgo.API. You can contact one of our Partners or post a job in Development Jobs section for further coding assistance. Dear team, Is multi symbol still not avaiable for back testing yet? Thanks Is it still not avaiable for backtesting yet? Dear Trader,OnTick() method is triggered on each incoming tick of the Symbol your cBot is set to run. Hi Guys, This is a problem, a trading strategy should not be limited to one symbol ontick feed. We have a 1 to many relationship here as I have 1strategy being applied accrosss many symbols AND timeframes. I would siggests a Pubish subscribe pattern were so my bot can register to all the symbol ontick data feeds it wants to receive. Otherwise my other symbols which is not related to the bot's chart is only getting their updated data bassed on the chart's symbol so i will definitaly be delayed on my other symbols for data. Hope im making sense, This is a HUGEt issue for me... A timer is not an option as I'm trading fundamentals (calendar news) and with a huge news events every micro second count. Any suggestions? This is a big one for me. You cant have a trading strategy tied to 1 chart (one symbol and time frame). a trading strategy can be applied to many symbols and many timeframes. So you have a 1 to many relationship here apose to 1 to 1. If my cbot trades 5 pairs and I register the bot to 1 chart of the 4 AND with what you telling me is that I will only receive ontick events for the 1 symbol on the chart, right? AND what you suggesting is to get the other symbols data on the Chats OnTick event... then the 4 symbols would only be processed when symbol 1 has a OnTick event. THIS IS A HUGE RISK TO ME I would like to trade the calendar fundamentals and a micro split second is very important. Thus a timer based approach is also not feasable.. Is there no pub & sub (publish and subscribe pattern) were I can subscribe to for Ontick feeds for multiple symbols?
https://ctrader.com/forum/whats-new/1531?page=4
CC-MAIN-2019-09
refinedweb
845
51.14
Another benefit here, is if you ever have to login to the server, you can run the command from the terminal very easily. That's why I usually setup the Artisan command, I don't want to have a lot of steps, I want it to be simple to execute. 4th March, 2018 ahuggins left a reply on Daily Statistics • 1 week ago Another benefit here, is if you ever have to login to the server, you can run the command from the terminal very easily. That's why I usually setup the Artisan command, I don't want to have a lot of steps, I want it to be simple to execute. ahuggins left a reply on Daily Statistics • 1 week ago Well I usually make an Artisan command, and then you can use the Scheduler in the app/commands/kernel.php (I think) to set the timing: Then you can at least test the command by running the Artisan command. Now if you want a cron on your local env...not sure on that one. Haven't used windows in a while, but you aren't really worried about the Cron...that is going to be a function of the server software. Where as the command that is executed is what you should be focusing on. ahuggins left a reply on Setting Up A Subscription Service With Free Sub-users • 1 week ago If you use Spark, you can setup a subscription, but do not put a limit on the amount of people on the team, this would allow X number of people to be on the account/team. Read over this for more information: ahuggins left a reply on Access Route Parameters From Global Middleware • 1 week ago Post your middleware class, how is the $request variable being set? Are you instantiating the Request object, either by constructor injection or method injection? Are you getting any error, or are the above options returning null? ahuggins left a reply on Daily Statistics • 1 week ago So if you want like "Yesterdays Sales" I would create a cron job, that looks at the sales (orders) table, for all sales and sums up the total. You could store this in a new table, order_statistics or something like that, and then you can just grab a stored value. The idea being that once the clock passes into the new day...no more sales will happen on yesterday's date. And this way you are not dealing with calculating that data on the fly each page request. Then I would probably look at cacheing that view file, for 12 hours or something, so that you really aren't even hitting the db every time, because again, that order_statistics table should not change except when the cron job adds a new row. This would also provide you historical data that you could put in a graph, and then maybe you could have "current day sales" that you calculate on the fly, maybe once an hour. This gives you (or your client) a sort of "live" look at the sales coming in and can easily compare to last 30 days....but would not create a lot of load on the server. Then later you could get really fancy and use websockets to update the dashboard live anytime a sale came in. But I tend to think every hour or half hour of updating when the page loads is probably acceptable in most situations. 16th February, 2018 ahuggins left a reply on Save Via Relationship Is Not Saving ID • 1 month ago I wonder if you can use attach() with polymorphic relations and if that would populate the user_id...might be worth trying ahuggins left a reply on Save Via Relationship Is Not Saving ID • 1 month ago The polymorphic relationships only populate the usable_id and usable_type, or at least that's what I understand. Basically whatever the -able field is. Where the other types of relationships are looking for a specific field like user_id based on the model name. 12th February, 2018 ahuggins left a reply on Save Via Relationship Is Not Saving ID • 1 month ago I guess you could use a mutator on the Profile model. Something like: public function setUserIdAttribute($value) { if ($this->usable_type == 'App\User') { $this->attributes['user_id'] = $this->usable_id; } } The condition is to only set the user_id if the model that the profile is being saved with is a User model. You may have to adjust the namespace if it is not App\User. This should set the user_id field on save. If that doesn't work you could do the same thing but maybe move it to an Eloquent event, saved so that the data would have already been saved. 10th February, 2018 ahuggins left a reply on TSL 1.2 On Guzzle Requests • 1 month ago If you need your requests to come from an https url, you could always do valet secure in the root directory of your project. This means that Valet will serve your app via https. You might have to update your APP_URL in your .env file as well. As far as Guzzle, this might help: ahuggins left a reply on Save Via Relationship Is Not Saving ID • 1 month ago Why are you using a Polymorphic relationship? A profile seems like a pretty straightforward HasOne/BelongsTo relationship. Also, I think your problem might be how you are trying to save this data. Shouldn't $profile->user()->save($user); be more like $user->profile->save($cleanedData); Assuming you have the user at this point. Or I guess, the way I am suggesting, is based on a HasOne/BelongsTo relationship. ahuggins left a reply on Spark With Another Template, How Hard? • 1 month ago I think it will be tricky, or would be rather time consuming. But it is possible. The one thing is that you can do whatever you want with your Spark app. The "dashboard" is pretty easy to customize, but if you want to control the "settings" pages, that is where the tricky-ness will be. But since you are using a Bootstrap theme, it might be easier than I initially think. Most Bootstrap themes use quite a bit of the base styles, and provide a "theme" css file for its custom styles. It might take a little work to get things playing nicely, but the "settings" area uses Bootstrap. ahuggins left a reply on Laravel-snappy: File Was Not Created • 1 month ago what are the permissions on the output directory? 8th February, 2018 ahuggins left a reply on Spark 6.0 Need Extra Charge? • 1 month ago At Laracon online yesterday, I think Taylor said there was quite a bit of changes coming in Spark 6.0. Enough to warrant a new major release. He also said that any Spark licenses for 5.0, would be given a 50% discount to upgrade to 6.0. That would be $49.50 for single site, and $149.50 for multi-site licenses. To me that's not too bad to pay for about 12-24 months of of updates...especially considering that Spark should be being used on a SaaS that should be generating income. Not to mention, you do not have to upgrade. You can continue to use the 5.0 version as long as you want. Not sure I understand the problem. 1st February, 2018 ahuggins left a reply on Laravel Socialite Avatar (twitter) Not Updating • 1 month ago You should do something like: $user = User::updateOrCreate(['email' => $socialiteUser->email], [ 'name' => $socialiteUser->name, 'email' => $socialiteUser->email, 'contributor_code' => str_random(8), 'username' => $socialiteUser->nickname, 'twitter_avatar' => str_replace('http://','https://',$socialiteUser->avatar_original), 'verified' => 1, ]); // Leave this for you to figure out // $userRole = Role::whereName('basic')->first(); // $user->assignRole($userRole); // } ahuggins left a reply on Laravel Socialite Avatar (twitter) Not Updating • 1 month ago So this handleProviderCallback is used when your user tries to authenticate using Twitter...but you are not updating the user...only creating a new User this means that the second time they are authenticating with Twitter, this will not update their user. ahuggins left a reply on What To Do? • 1 month ago Do you have the resources/assets/sass/app.scss file in your repo? If that doesn't exist, then there is nothing for Webpack to compile 18th January, 2018 ahuggins left a reply on Require Selecting A Plan When Creating A New Team • 1 month ago So the individual user is on a plan of his own...then creates a team with a separate plan? Is that basically a feature that allows him to have collaborators and that's what the team plan covers? ahuggins left a reply on Require Selecting A Plan When Creating A New Team • 1 month ago I am assuming you are using the No card up front option? I would guess you could make a middleware, that checks if the created team has a plan...if not redirect the user to a page that requires sign up for a plan. Then just make sure the middleware is applied on the appropriate routes. That would be my initial take based on info provided, but hard to know if that is suitable for your situation. 21st December, 2017 ahuggins left a reply on Password Reset - Email Never Reaches The Recipient. • 2 months ago when you say "configured the env file" what do you mean? How are you setting up your mail? Using SMTP, mailgun, sparkpost, php mail function? I did this the other day, and it was tricky to actually get an email to send. I ended up using SMTP, so had the username and password in the env file, but also had to setup the MAIL_HOST, the DRIVER, PORT, and ENCRYPTION. 25th November, 2017 ahuggins left a reply on Examples Of Problems That Can Be Solved The Object Oriented Way? • 3 months ago I would say a Cart/Product is a pretty simple OOP example. ahuggins left a reply on Queue: Delay Dispatching Throws Error • 3 months ago What version of Laravel are you on? ahuggins left a reply on Queue: Delay Dispatching Throws Error • 3 months ago actually, it should be ok, not sure why it is not working for you ahuggins left a reply on Queue: Delay Dispatching Throws Error • 3 months ago I think this is a limitation of the dispatch() helper function. That function, returns a new PendingDispatch object, but the constructor of the PendingDispatch class does not return the object, so I don't think you will be able to chain method calls on it. You might be able to do ProcessOrder::dispatch($orderData)->delay(10) instead, or something along those lines. 17th October, 2017 ahuggins left a reply on Adding Username To Database(migration) Using Auth::user • 4 months ago You could do something like this: \\PostController.php public function store(Request $request) { request()->validate([ 'post' => 'required', 'location' => 'required', ]); Auth::user()->post()->create($request->all()); return redirect()->route('post.index') ->with('success','Post created successfully'); } And you would have to add a relationship on your User model for post(). Something like: \\ App\User.php public function post() { return $this->hasMany(Post::class, 'username', 'username'); } This assumes that you have a username column on both your User and Post models. 16th October, 2017 ahuggins left a reply on Call To Undefined Method Laravel\Lumen\Application::booted() • 5 months ago The Lumen Application class doesn't have the booted() method, which the Propaganistas package is looking for. You could extend the Application class that comes with Lumen and add a booted method that takes a call back and ties the passed callback to the Container...Might want to look at the Laravel Application class to see how it works: But you would also have to use the isBooted() method, and maybe some other things. 24th July, 2017 ahuggins left a reply on Testing Custom Artisan Command • 7 months ago use the Artisan Command to generate a test like php artisan make:test NameOfYourTest Or actually @themsaid probably explains it better here: 28th June, 2017 ahuggins left a reply on Nothing Showing, No Login, No Page, No Errors. • 8 months ago 22nd June, 2017 ahuggins started a new conversation Nothing Showing, No Login, No Page, No Errors. • 8 months ago Posting this in case anyone else experiences this. I think I had an older version of Spark, and tried to upgrade, but then the app wouldn't show anything. No pages, and very curiously, no errors in the console. Eventually I tracked it down to something involving the v-cloak directive. No link was turning up anything that was explaining what was happening. So I did a fresh install of Spark and everything was working. So I began trying to find what was causing the problem. Looks like Spark was built before Vue 2.0, and was using inline-templates, and maybe there was another change that the v-cloak directive needs to be on the actual Vue instance div. In older versions of Spark, the v-cloak directive was on the body tag, and moving it to the the spark-app div, then the site begins appearing again. Unfortunately, this does not seem to be mentioned in the upgrade guide. 12th June, 2017 ahuggins left a reply on Can't Find My Mailable Class In Controller • 9 months ago you try a composer dump-autoload? Figure you have, but also didn't say it, so worth asking. It looks right, so I would try the dump autoload and see if that helps 29th May, 2017 ahuggins left a reply on How To Insert Multiple Rows With Multiple Fields • 9 months ago What about just this: foreach ($inputs as $input) { StudentMark::create(array( 'course_id'=>Input::get('course'), 'semi_written_test'=>$input['written'], 'semi_reading_test'=>$input['read'], 'semi_class_activity'=>$input['activity'], 'semi_homework'=>$input['homework'], 'semi_total'=>$input['total'], 'student_id'=>$input['studentId'], )); } StudentMark::create($marks); It's not the best, since you are making a full roundtrip request for each $input....but it should at least create it in the db. Then you can work toward a better solution, that doesn't make as many requests. 25th May, 2017 ahuggins left a reply on I Am Getting "This Page Isn’t Working. Localhost Is Currently Unable To Handle This Request. HTTP ERROR 500" • 9 months ago what's the url you are trying to access? ahuggins left a reply on I Am Getting "This Page Isn’t Working. Localhost Is Currently Unable To Handle This Request. HTTP ERROR 500" • 9 months ago Is this the only site on the server? Can you visit the other ones? This looks like your local server is not running. ahuggins left a reply on I Am Getting "This Page Isn’t Working. Localhost Is Currently Unable To Handle This Request. HTTP ERROR 500" • 9 months ago What local server are you using? php artisan serve Homestead Valet Wamp, Mamp, Xampp, Ampps? ahuggins left a reply on 2 Variable In 1 Foreach • 9 months ago Are you using Eloquent Models? It looks like you are only using the Query builder. If you were using Eloquent Models, I would suggest using relationships then you can return the products and reference the relationship of the other tables you are trying to get the data of. ahuggins left a reply on 2 Variable In 1 Foreach • 9 months ago You have a problem in your product function. You are setting $k = within a foreach. This overwrites the $k value every time, so when in the view...there should only be one value in $k. ahuggins left a reply on I Am Getting "This Page Isn’t Working. Localhost Is Currently Unable To Handle This Request. HTTP ERROR 500" • 9 months ago Is your local server running? You're going to ? 27th April, 2017 ahuggins left a reply on Creating A Custom Facade / Service Provider - Please Help • 10 months ago Because someone could do this...if you remove a , it can not be from the last entry in the providers array. It needs to be one that would cause an error. ahuggins left a reply on Creating A Custom Facade / Service Provider - Please Help • 10 months ago Ran across this, because I experienced similar issue in 5.4. Posting it in case anyone else comes across this post. For me, I added the package service provider in the config/app.php, like we all know to do. Kept getting that the class was not found. At some point I ran all these commands in the terminal: composer dump-autoload php artisan config:clear php artisan cache:clear php artisan clear-compiled I think it was the config:clear, but I am not 100% sure on that. One thing you can do is go to your config/app.php file, go to the providers array, and remove a , from the end of a line. If you refresh your browser/app, and do not get a parse error. You know that you have a cache issue. Be sure to add the , back when you are done though. 20th April, 2017 ahuggins left a reply on Scheduler Problem • 10 months ago ahuggins left a reply on Scheduler Problem • 10 months ago crontab is mostly a Unix thing, so since you are using CMD/Windows...probably isn't going to work out for you too well. Do you know the server software you are using? IIS, Apache, Nginx? Are you using Wamp, Mamp, Ammps, or some other local server package? 28th January, 2017 ahuggins left a reply on Npm Install Sweetalert Missing .min.js File In Dist Folder • 1 year ago HA, turns out, I excluded *.min.js files from being searched in Sublime, and it also doesn't show them in the file tree. So that is why it was not showing up. ahuggins left a reply on Npm Install Sweetalert Missing .min.js File In Dist Folder • 1 year ago I've deleted and installed the sweetalert module a few times ahuggins started a new conversation Npm Install Sweetalert Missing .min.js File In Dist Folder • 1 year ago I am working on a Spark application, I run npm install and it looks like things install. But when I run gulp, the Elixir/Mix file tries to publish this .copy('node_modules/sweetalert/dist/sweetalert.min.js', 'public/js/sweetalert.min.js') but when I look in the public/js folder, there is no sweetalert.min.js file, and so I go look in the node_modules/sweetalert/dist/ folder the sweetalert.min.js file is missing too. I go look on Github for the Sweetalert repo and it has the sweetalert.min.js file listed...so does anyone know why I am not getting the min.js file in the dist folder? 14th December, 2016 ahuggins left a reply on WYSIWYG Editor For Page Content • 1 year ago So you are asking, how do you get your page content into a WYSIWYG editor? Usually, you have that as a property on your Page model, where you store the html, then in the edit/create page you have your WYSIWYG editor connect to a textarea field, you store that in your db, then when you edit, you set the property as the value of the textarea and have the WYSIWYG connect to the text area again. I've done this many times. Also, there is not a WYSIWYG that is going to automatically handle file uploads into your Laravel app. If there is, I am unaware. Most likely there might be one that has the majority of this built in the front end side, but you will most likely have to handle the upload yourself. 12th December, 2016 ahuggins left a reply on How To Get Base Url Form Json In Laravel 5.3? • 1 year ago Friendly reminder, you don't want your .env file in your Git repository ahuggins left a reply on How To Get Base Url Form Json In Laravel 5.3? • 1 year ago in the .env file, there is a key APP_URL. For your local development, you can set this to APP_URL= where on production you would set this to the actual domain. In your case you might want to set it to: APP_URL= but I can't say 100% that will work. ahuggins left a reply on Laravel Socialite Throws A Fatal Error When I Cancel The Authorization Process With Facebook • 1 year ago when Fb redirects you, it usually has a code in the url...I'm guessing when the user clicks "not now" it does not have this code in the url? ahuggins left a reply on WhereBetween In Laravel 5 • 1 year ago You need to make the $start and $end variables instances of Carbon...or make sure that the format matches what you have in the DB. ahuggins left a reply on WYSIWYG Editor For Page Content • 1 year ago I can also say, it would be unlikely that you would be able to enter Blade syntax in any WYSIWYG editor. They most likely are not programmed to process Blade. You could try to program it yourself, but not sure what you are really trying to do in order to need this. Add some more context and people might be able to suggest more. ahuggins left a reply on WYSIWYG Editor For Page Content • 1 year ago What system are you using? There is not a WYSIWYG in Laravel, which package or CMS or whatever? Want to change your profile photo? We pull from gravatar.com.
https://laracasts.com/@ahuggins
CC-MAIN-2018-13
refinedweb
3,597
71.44
help get Substring from String C get Substring from String This section illustrates you how to get the substring using... have consider will as str2 and the whole string as str1. The function strstr C Tutorials substring. C String lowercase...; C get Substring from String This section illustrates you how to get...; C String Substring In this section, you will learn how to get ; | and substring string and substring bbbnnnmmm*ccc remove second asterick and display it as bbb*nnn mmm*ccc and substring string and substring my code is bbbnnnmmm*ccc how to remove the asterick at second position and display the code as bbb*nnn mmm*ccc Substring in java Substring in java example:output 0f the code.. Input String:University of the Cordilleras Input Substring:of Output:1 ...use non static methods,don't use (.Substring).. ..please help me.. ..thank you.. deadline:November Taking Substring SubstringExample1.java C:\convert\rajesh\completed>java SubstringExample1 String : Rajesh kumar String after 3rd index: esh kumar Substring (1,2): a Download... Taking Substring   find and substring String"); String findstring = keybord.readLine(); String substring = keybord.readLine(); findreplace(findstring,substring); } catch(IOException io...find and substring **import java.io.BufferedReader; import Use of <fn:subString(String, int, int)> Tag of JSTL ; <c:out... Use of <fn:subString(String, int, int)> Tag of JSTL... and return string type. Syntax : java.lang.String substring method example .style1 { font-size: medium; } String substring() method example... a string. String class provide methods for finding substring from string. Method...:String = "Rose India"; var substring:String = str.substring(5,13 JPA Substring() Function () function. The substring() function returns the some part of string arguments..., second is starting index and third is length of string . JPA Substring... JPA Substring() Function   C++Tutorials other tutorials, such as C++: Annotations by Frank Brokken and Karel Kubat...; The CPlusPlus Language Tutorial These tutorials explain the C++ language... Borland C++ Builder 3 Page -Tutorials This section of my Builder web c++ c++ Write a console based C++ program that reads student information... name with the minimum GPA 7) Display Student GPAs as Letter Grades (A, B, C, D...: ID: int FirstName: string SecondName: string FirstExam: int SecondExam find a substring by using I/O package find a substring by using I/O package Write a java program to find a sub string from given string by using I/O package Java SubString ; String class provides number of methods for manipulating the strings. substring... of the original string as String object. substring provides a way of making a copy of a piece of a String. This method has two forms: substring(int beginIndex):  PHP get character from String !"; $a = substr($string, 0); $b = substr($string, 1); $c = substr($string... PHP Get Character From String To get sub string from a String we can use substr() method. Structure of this method is : string substr (string $var Taking Substring C Replace String C Replace String In this section, you will learn how to replace a string in C. Here... of pointer type are declared. The st represents the string, orig represents the  copy string to string objective c copy string to string objective c How to copy a value of NSString to another string in Objective C? I have to different views and i am will to show the NSString value in both of them. Thanks PHP SubString Function ($superString,5); echo "Sub-string is :".$substring."<br...); echo "Sub-string is :".$substring."<br/>"; ?>...-string is :".$substring."<br/>"; ?> Output: Sub c c++ - Java Beginners ]; String name = f[1]; String c = f[2]; String note = f[3...]; String name = st[1]; String c = st[2]; String note = st[3]; if (id.equals... = f[1]; String c = f[2]; String note = f[3]; if (!id.equals(t1 c c++ - Swing AWT c c++ Write Snake Game using Swings Hi Friend, Try...); } public static void main(String[] args) { new SnakeGame(); } } 2... gameOver(Graphics g) { String msg = "Game Over"; Font small = new Font("Helvetica C++ C++ How can i write this in dev c C# question C# question In C#, string is a class or primitive type. Explain c++ c++ use a prgrm as an example to xplain-: a)class b)object c)message d)cope resolution operator++ Describe the principle advantages of deploying a linked list versus a static array when implementing a Queue or a Stack c++ c++ write a prgrm tht accepts 3 integer values,then with a function to swap the values C++ C++ I have been asked to write a programme that gets five values,gets the sum,average and product please help me C# C# i need a code that will make program perform the basic operations upon the data Listview save, update and cancel c++ c++ write a programme that calculates the area and circumference of a rectangle #include<conio.h> #include<iostream.h> class rect { int l,b; int area; int peri; public: void get_rect() { cout<<"\n c programing c programing Write 2 implementations of C's string copy routine, one with a loop and the other using recursion. What is the security risk with this routine? The function should have the signature: char* strcpy(char* destination convert char to string objective c convert char to string objective c Converting char to string in objective c NSString *s = [[NSString alloc] initWithBytes:arr + 2 length:3 encoding:NSUTF8StringEncoding C++GraphicsTutorials C++ Graphics Tutorials  ... in this document is correct. C/C++ Windows programmers who want to learn... MFC. OpenIL in Dev-C++ OpenIL C and C++ books -fashioned C string) and in the second case a zero-terminated array of XMLCh's... In this section for example-based C tutorials that introduce you to ODBC... C and C++ books   classes in c++ classes in c++ 1- design and implement a class datatype...) { return (day + days) % 7; } public String toString... ""; } public static void main(String[] args C and C++ books-page8 C and C++ books-page8 The ODBC from C Tutorial In this section for example-based C tutorials that introduce you to ODBC API programming in C. Each tutorial contains String lastIndexOf(String str) the last index of a Substring from the String, we have applied lastIndexOf(String... occurrence of the given substring within the string. Here is the code... String lastIndexOf(String str)   String indexOf(String str) of a Substring from a String, we have applied indexOf(String str); method and then we have passed the Substring from the specified String in that to find its index as shown... of the given substring within the string. Here is the code of the program C program - SQL C program Thank You . Write some C programs using pointers . Write some C programs using files. Please reply me . Hi Friend, Please visit the following link: all c library functions all c library functions hi dear c coders, i need all c library functions with a detailed explanation. Who can give me usefull link? thanks beforehand! Hi Friend, Please go through the following link: C Tutorials about c and java :// c and java i need java and c language interview and objective questions with answers for the fresher.please provide to me Please Select string from array in objective c Select string from array in objective c Hi, I wanted to select a string from the database table in objective c. But somehow it is not working..can-you suggest anything regarding the fetch query in objective c? Thanks.   c program - Java Beginners c program Write a program to encrypt a string and decrypt match string to regular expression - Objective C match string to regular expression - Objective C How to match string value to regular expression in objective c? Use NSPredicate. See the example given below... NSString *searchString = @"loginStatus"; NSString C String to Int C String to Int In this section, you will learn how to convert a string represented value into an integer represented value in C. You can see in the given example that we c postfix - Framework :// Thanks...c postfix q: write a c program to read apostfix expression store it in array of character then evaluate this expression using eval and get_token C String Copy C String Copy In this section, you will learn how to copy the string in C. You can see in the given example, two string variables st1 and st2 are created. A string c program - IoC c program given the string "wordprocessing",write a program to read the string from the terminal and display the same in the following formats: a) word processing b) word processing c) w.p. String indexOf method example of the substring from the string. The indexOf() method start to search from beginning of the string and matching the substring from substring, if substring... .style1 { font-size: medium; } String indexOf() method example String Expressions String Expressions Name ______________________ Assume the following: String a = "abc"; String h = "Hello"; String name = "Michael Maus"; Show what...) ); __________ System.out.println( "Tomorrow".substring(2,4) ); __________ System.out.println( h.charAt(1 C String length C String length In this section, you will learn how to determine the length of a String in C. You can see in the given example, we have declared a string and initialize c/c++ - Development process c/c++ Creating animation using c/c++. The code for moving an object using c/c Why NSString Objective C C and how to define and initialize the NSString Class? Why NSString Objective C NSStrings is a class that deals with strings in objective c... = @"hello world"; You can also compare a string using the "isEqualToString Summary - String of these prototypes, i and j are int indexes into a string, s and t are Strings, and c... substring that matches regexStr with String t sa = s.split(regexStr...Java: Summary - String String Concatenation Following table shows how C# Error - Framework C# Error Please Solve The Following Error & Post Reply Error... button1_Click(object sender, EventArgs e) { string... for c# error Ask Questions? If you are facing any programming issue, such as compilation errors or not able to find the code you are looking for. Ask your questions, our development team will try to give answers to your questions.
http://www.roseindia.net/tutorialhelp/comment/94949
CC-MAIN-2013-20
refinedweb
1,699
63.09
Created on 2011-07-15 15:40 by jschneid, last changed 2018-10-15 18:10 by terry.reedy. This issue is now closed. The C compiler that comes with HP/UX 11 has some shortcomings that prevent building Python 3.2.1 out of the box. I am attaching patches here as I work through issues. The first patch fixes namespace shortcomings when trying to use struct termios. This patch works around the problem underlying issue 5999 by making sure the __STDC_VERSION__ macro is defined and has a value of at least 199901 You may the number of this issue in a comment of your patch. Oops. You may *add* the number of this issue in a comment of your patch. Workaround for compiler bug; HP/UX compiler refuses to past (implicitly char *) string constants with wide (wchar_t *) string constants. This patch is also pasted to issue 12561 (which should be closed). Note: There is disagreement as to the best way to proceed on this issue. Stinner (aka haypo) has a patch that should work with non ASCII character sets, and my patch will almost certainly not work in those cases. YMMV. Sorry - last comment should have been "compiler refuses to past*e*", not "past". I think that getpath.patch is wrong (it's just a workaround hiding a real bug): see msg140422 of the issue #12561. This patch just reduces compiler noise by explicitly casting pointers to void *. I expect the Visual Studio C/C++ compiler suite also issued these warnings, too. The HP/UX C compiler grumbles when a symbol that is declared static is later defined without the static storage class specifier. The attached patch just adds the missing "static" keywords. I'm adding the original listeners for issue 5999 to this one. The fileutils.patch patch attached to this issue directly addresses what's wrong in issue 5999; I'd consider it closed, but as I didn't open it, and I'm not actually part of the python project, that's not my call to make. From issue 12561 (which I will be closing): Author: STINNER Victor (haypo) * Date: 2011-07-15 15:36 Use >L"" CONSTANT< to decode a byte string to a character string doesn't work with non-ASCII strings. _Py_char2wchar() should be used instead: see for example this fix, commit 5b6e13b6b473. This is in reference to getpath.patch. I have no need to support internationalized versions of the constant strings my patch addresses, so Stinner's commit is overkill for me. Minor note: we prefer unified diffs, and when possible, all related changes in one file. See I can’t review the patches, as I don’t know C nor HP/UX. I’m not sure why you opened this report instead of following on the other one :) Update to getpath.patch (issue 12561) - this version uses _Py_char2wchar where possible. Unfortunately, as lib_python is declared and defined statically, this can't be used in both cases where the HP/UX compiler has issues. >The fileutils.patch patch attached to this issue directly addresses what's wrong in issue 5999; I'd consider it closed, ... When an patch is committed that fixes that issue, say so there and I or someone will close it. >From issue 12561 (which I will be closing):... so Stinner's commit is overkill for me. Issues sometimes expand beyond the original request. It is unclear to me whether Victor thinks that that issue is finished. If he thinks more should be done, then I think it should be left open. Terry - I apologize for jumping the gun a bit, and let me be a bit more clear. When I realized that the HP/UX compiler was going to have as many problems as it does compiling python 3, I decided it would be best to create a single issue for all of the compiler issues. In retrospect, opening an issue for the single compiler bug discussed in issue 12561 was a mistake, and I had hoped to migrate the discussion of that issue here. Please don't manage independent bugs in a single issue, even if they are related. The way this is done here will it make hard to track what exactly has been done, and what needs to be done. As it stands, please combine all patches into a single one, and we won't commit anything until the entire patch passes. Martin - I don't have time to manage your project's administrative requirements with respect to my fixes. I'm providing them out of the hope they will be of use to others who need to build on HP/UX, but I don't really care if they make it into the main branch or not. Here’s a unified diff. Arg, I thought I removed a duplicate patch but it was actually an updated version. Sorry about that; the link in the history at the bottom of this page still links to the file. Updated unified diff attached. Jim: Sorry if we reacted first with process remarks instead of thanking you for the patches and reviewing them. We value contributions, and we try to be welcoming, but sometimes we forget what it’s like to enter this community. Some things have become basic for us, like working with one unified diff instead of many context diffs, so we try to guide contributors so that their files can be easily read and fed to tools (such as code review or version control) and we can all move faster in the discussion leading to the commit. This is not about project administration, merely common formats to make work easier for all parties involved (for example, for users generating one diff for a whole checkout is easier than generating and uploading one diff per file). So, I hope that the diff I generated from yours will let people review it quickly, with no hurt feelings. I believe this issue can be safely closed. It is a no-brainer to compile Python from master on HP-UX with aCC these days. It works for me at least. Thank you for the update. I think any problems with the current HP/UX compiler are best reported on a new issue.
https://bugs.python.org/issue12572
CC-MAIN-2021-49
refinedweb
1,044
71.34
I want to call mymod.ipynb that I created from mytest.ipynb program and execute it. In Notebook, the error ModuleNotFoundError: No module named'mymod' Change the extension to .py and try it with IDLE software, it works fine. Is there a way to run ipynb extension in Notebook?Error message ModuleNotFoundError: No module namedApplicable source code python mymod.ipynb def myfunc (): print ("Hello!") mytest.ipynb import mymod mymod.myfunc () Change the extension to .py and try it with IDLE software, it works fine.Supplemental information (FW/tool version etc.) Please provide more detailed information here. - Answer # 1 - Answer # 2 I saw this tweet. A tool that allows you to extract a script from a Jupyter Notebook into a Python/R/Julia file, and then modify the file back to Jupyter. Now you can edit the Jupyter script with your favorite editor. jupytext Source Related articles - python - how to use the magic command %% time in jupyter notebook - python error using jupyter notebook - python 3x - construction of jupyter notebook environment on raspberry pi4 - i want to run downgraded python with jupyter notebook - python - jupyter notebook gives an error on import - python - the jupyter notebook on mac doesn't start - i cannot confirm the python module - how to embed a website in jupyter notebook (lab) - python - i want to use pyxel on ubuntu, but it says i can't find the module even though it is installed - i want to use the jupyter lab, a python environment construction on a mac - unable to connect to postgresql from python (no module named'psycopg2') - python - i want to start jupyter lab in the terminal - python 3x - about the solution of "no module named'mysql'" - python - i want to pass the module path when using the virtual environment created by virtualenv with jupyter notebook -'] You may not be able to do it, but you usually don't. I don't know how to answer. Jupyter notebook is not dedicated to Python, but is designed to be used in other languages such as R and Ruby, so code that can be used in Python is not always written. So it doesn't seem to be a mechanism that can be imported in general.
https://www.tutorialfor.com/questions-100740.htm
CC-MAIN-2020-40
refinedweb
367
70.63
I have been using Hadoop a lot now a days and thought about writing some of the novel techniques that a user could use to get the most out of the Hadoop Ecosystem. Using Shell Scripts to run your Programs I am not a fan of large bash commands. The ones where you have to specify the whole path of the jar files and the such. You can effectively organize your workflow by using shell scripts. Now Shell scripts are not as formidable as they sound. We wont be doing programming perse using these shell scripts(Though they are pretty good at that too), we will just use them to store commands that we need to use sequentially. Below is a sample of the shell script I use to run my Mapreduce Codes. #!/bin/bash#Defining program variablesIP="/data/input"OP="/data/output"HADOOP_JAR_PATH="/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.5.0.jar"MAPPER="test_m.py"REDUCER="test_r.py"hadoop fs -rmr -skipTrash $OPhadoop jar $HADOOP_JAR_PATH \-file $MAPPER -mapper "python test_m.py" \-file $REDUCER -reducer "python test_r.py" \-input $IP -output $OP I generally save them as test_s.sh and whenever i need to run them i simply type sh test_s.sh. This helps in three ways. - It helps me to store hadoop commands in a manageable way. - It is easy to run the mapreduce code using the shell script. - If the code fails, I do not have to manually delete the output directory The simplification of anything is always sensational. - Gilbert K. Chesterton Using Distributed Cache to provide mapper with a dictionary Often times it happens that you want that your Hadoop Mapreduce program is able to access some static file. This static file could be a dictionary, could be parameters for the program or could be anything. What distributed cache does is that it provides this file to all the mapper nodes so that you can use that file in any way across all your mappers. Now this concept although simple would help you to think about Mapreduce in a whole new light. Lets start with an example. Suppose you have to create a sample Mapreduce program that reads a big file containing the information about all the characters in Game of Thrones stored as /data/characters/ But you don't want to use the dead characters in the file for the analysis you want to do. You want to count the number of living characters in Game of Thrones grouped by their House. (I know its easy!!!!!) One thing you could do is include an if statement in your Mapper Code which checks if the persons ID is 4 then exclude it from the mapper and such. But the problem is that you would have to do it again and again for the same analysis as characters die like flies when it comes to George RR Martin.(Also where is the fun in that) So you create a file which contains the Ids of all the dead characters at /data/dead_characters.txt/: Whenever you have to run the analysis you can just add to this file and you wont have to change anything in the code. Also sometimes this file would be long and you would not want to clutter your code with IDs and such. So How Would we do it. Let's go in a step by step way around this. We will create a shell script, a mapper script and a reducer script for this task. 1) Shell Script #!/bin/bash#Defining program variablesDC="/data/dead_characters.txt"IP="/data/characters"OP="/data/output"HADOOP_JAR_PATH="/opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/contrib/streaming/hadoop-streaming-2.0.0-mr1-cdh4.5.0.jar"MAPPER="got_living_m.py"REDUCER="got_living_r.py"hadoop jar $HADOOP_JAR_PATH \-file $MAPPER -mapper "python got_living_m.py" \-file $REDUCER -reducer "python got_living_r.py" \-cacheFile $DC#ref \-input $IP -output $OP Note how we use the -cacheFile option here. We have specified that we will refer to the file that has been provided in the Distributed cache as #ref. Next is our Mapper Script. 2) Mapper Script import sysdead_ids = set()def read_cache():for line in open('ref'):id = line.strip()dead_ids.add(id)read_cache()for line in sys.stdin:rec = line.strip().split("|") # Split using Delimiter "|"id = rec[0]house = rec[2]if id not in dead_ids:print "%s\t%s" % (house,1) And our Reducer Script. 3) Reducer Script import syscurrent_key = Nonekey = Nonecount = 0for line in sys.stdin:line = line.strip()rec = line.split('\t')key = rec[0]value = int(rec[1])if current_key == key:count += valueelse:if current_key:print "%s:%s" %(key,str(count))current_key = keycount = valueif current_key == key:print "%s:%s" %(key,str(count)) This was a simple program and the output will be just what you expected and not very exciting. But the Technique itself solves a variety of common problems. You can use it to pass any big dictionary to your Mapreduce Program. Atleast thats what I use this feature mostly for. Hope You liked it. Will try to expand this post with more tricks. The codes for this post are posted at github here. Other Great Learning Resources For Hadoop: Also I like these books a lot. Must have for a Hadooper.... Large Scale Machine Learning with Python and Hadoop: The Definitive Guide: Storage and Analysis at Internet Scale The first book is a guide for using Hadoop as well as spark with Python. While the second one contains a detailed overview of all the things in Hadoop. Its the definitive guide Link to original article: Hadoop Mapreduce Streaming Tricks and Techniques
https://www.commonlounge.com/discussion/7f178a88128342c0b4a5feb4e30b2151
CC-MAIN-2019-47
refinedweb
948
64.91
Sometimes. Here is the thread that explains the bug: I originally reported this as a bug on containers issue tracker, but we seem to have concluded that this is probably a bug in the GHC optimizer itself. I think the shortest repro so far is this: import qualified Data.Set as S main = print $ let {-# noinline f #-} f () = T2 in S.fromList [f (), f ()] data T = T1 | T2 | T3 | T4 | T5 | T6 | T7 | T8 | T9 deriving (Show, Read, Eq, Ord, Bounded, Enum) which prints fromList [T2,T2] The person who derived this from my original repro says: And as I said earlier, comment out the T9 constructor => prints fromList [T2] as it should. Another interesting quote: Can confirm. Tested with ghc-8.6.1, containers-0.6.0.1 and leancheck-0.7.5 (so it does not seem to depend on the testing framework). Error occurs: - with ghc -O1 and -O2 (but not with -O0) - and if data type has at least 9 elements So, likely a bug in ghc's optimizer. in some cases, input has duplicates, but not always. This is a bad one, makes GHC 8.6.1 totally unusable for me.). GHC doesn't come with dynamic object files/libraries compiled with profiling thus it's not possible to compile code with -dynamic or -dynamic-too and -prof. This must be a known issue, but I couldn't find a ticket opened specially for it. I also don't know why these files are not included by default in the bindists. It looks like this was a false alarm. I re-run the script with the older GHC bindist and got single test suite run. Something has changed but the change is not caused by GHC. So, one test suite (test-threaded) is not run anymore for unknown reason. The other test suite (test) is run. They are almost identical BTW: test-suite test type: exitcode-stdio-1.0 hs-source-dirs: test main-is: Main.hs ghc-options: -Wall -fno-warn-missing-signatures -fno-warn-name-shadowing -fno-warn-unused-do-bind -fno-warn-unused-matches build-depends: base >= 4.3 && < 5 , stm , stm-delay test-suite test-threaded type: exitcode-stdio-1.0 hs-source-dirs: test main-is: Main.hs ghc-options: -Wall -threaded -fno-warn-missing-signatures -fno-warn-name-shadowing -fno-warn-unused-do-bind -fno-warn-unused-matches build-depends: base >= 4.3 && < 5 , stm , stm-delay There was also a bug in detection of failing packages (those that failing not during building but during configuration step were not included in the report at all), there 27 failing packages, but this is not relevant in this case. I have adjusted the setup to exclude any changes but those introduced by GHC: stackin binary form on every run without controlling whether it's the same stackevery time. stackbinary is now fixed and cannot change between runs. stackage-curator. snoyberg/stackage:nightlyalso changes frequently and so I created a different image that I won't update unless needed. stack updateon every execution of the script, which may have affected the results. I included this command in Dockerfileinstead so updates of Hackage indices are controlled as well. This should eliminate influence of external factors on results that we observe. So, again, sorry for the false alarm!. Good points about the logs. Right now we use information that stackage-curator gives us which isn't plenty. This probably can be improved. I'll investigate the std-delay failure tomorrow and if necessary download the actual logs from the cache. Sorry, for throwing it here without sufficient explanations. Here are some clarifications: And a blog post that describes everything in details is coming. If the sum of succeeding/failing test suites is decreases this indicates that some test suites failed to build. Hashes are GHC commit hashes on master branch (GHC compiled at those commits is used during execution of the build plan). How is it determined that this build failure is GHC's fault and not the fault of say, Stackage or a downstream dependency? The build plan we use is fixed, so versions of packages and their dependencies cannot change. Any change observed is thus a result of some change in GHC. Version of stm-delay is 0.1.1.1, the version is determined by the build plan, nigthly-2018-04-05, found here:. Please see this PR:. Looks like this has been merged. Currently we lack any 32-bit test environment on CircleCI. This will need to change as we still provide 32-bit binary distributions and Harbormaster currently does have a 32-bit test target. Here is how to reproduce the issue with GHC 8.2.2: $ cat Main.hs module Main (main) where main = case (1+1) :: Int of 1 -> return () $ ghc -Wall -Werror -fno-code Main.hs [1 of 1] Compiling Main ( Main.hs, nothing ) Main.hs:3:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () | 3 | main = | ^^^^ <no location info>: error: Failing due to -Werror. $ ghc -Wall -Werror Main.hs [1 of 1] Compiling Main ( Main.hs, Main.o ) Main.hs:3:1: warning: [-Wmissing-signatures] Top-level binding with no type signature: main :: IO () | 3 | main = | ^^^^ Main.hs:4:3: warning: [-Wincomplete-patterns] Pattern match(es) are non-exhaustive In a case alternative: Patterns not matched: p where p is not one of {1} | 4 | case (1+1) :: Int of | ^^^^^^^^^^^^^^^^^^^^... <no location info>: error: Failing due to -Werror. Note that missing-signature is reported both times, but the incomplete-patterns is not reported with -fno-code. Status update: I succeeded in running full builds with tests on AppVeyor using a private build cloud. Here is the PR I opened yesterday: I've got admin access to GHC AppVeyor now, but it needs to have premium plan and "private build cloud" feature enabled. For this we should disable premium plan for our (Tweag) fork and enable it for GHC AppVeyor account. This needs actions from Mathieu who is on vacation right now, so there may be a little delay with it. I could not find already existing ticket and so I'm creating this one. Ben mentioned (in his latest status update on GHC devops mailing list) that we need to do building on non-Debian-based systems on CI. To quote: e. Support for building on non-Debian-based systems (e.g. Fedora), which is necessary if we want to produce our binary distributions via CI. So this ticket is to track progress on that. I'm planning to start working on this tomorrow perhaps. The test profiling/should_run/scc001 fails on CI: module Main (main) where main :: IO () main = do print $ f True print $ g 3 print $ h 'a' f :: a -> a f x = x g :: Int -> Int g x = x h :: Char -> Char Just h = Just id And it outputs: True 3 ghc-stage2: ghc-iserv terminated (-11) instead of True 3 'a' I guess I'll mark it as an expected failure for now in my PR. Currently, --make in handy for compiling programs with boot files using only one ghc invocation. But it behaves strange. If I pass A.hs, B.hs, and A.hs-boot (where B.hs imports module A with that {-# SOURCE -#} thing), I get an error message complaining about module A defined twice. Surely GHC should be able to figure out that if the second file has the extension hs-boot, it's a boot file and should be used as such. After all, GHC is perfectly capable of finding boot files if it's given import directories to search in (with -i). We can't however easily use that because it so happens that if we tell ghc about such directories, they can contain files that should not be visible. I even tried a workaround: copy all boot files to a separate location and then point to that location with -i. To my surprise in that case ghc does not pick up those boot files. (And with more complex nesting of modules I expect more problems with this approach.) So it would be great if we could just feed all the relevant files into ghc and it would be smart enough not to complain about modules defined twice if one of the modules is in a file with hs-boot or lhs-boot extension, but use it as a boot file for corresponding module.
https://gitlab.haskell.org/trac-mrkkrp.atom
CC-MAIN-2021-17
refinedweb
1,413
65.32
Introduction: Pac-Man LED Pixel Panel Costume First Prize in the Halloween Contest 2017 Intro First off, let's state the obvious: we like to do family costumes. Last year we did space "aliens" which really looked more like people dressed in silver with LED strip accents. We really liked the LEDs since they are unique and make the kids easy to see when they are running around at night. We decided to do something that lights up again, but weren't sure what. It was my daughter that suggested Pac-Man, since she had recently played the Namco gem at a bowling alley a few weeks prior. That suggestion was all we would need. It was a great idea and we had to make it happen. Note: Please excuse the lack of great process photos.I hadn't initially planned on doing a write up while I was making this. I hope it all still makes sense! Step 1: Planning The first step is always planning, but hey it's worth noting, especially if you want to do a different 8-bit character. Choose the Characters We chose Pac-Man characters for the family. Pac-Man, Ms. Pac-Man, and the ghosts Blinky and Pinky. I wanted it to be accurate and authentic to the original 8-bit characters so I began by looking up the original game sprites. These help ensure we are getting the correct pixel pixel layout and shows each frame of the animations. For instance you can see the different frames of the ghost legs and Pac-Man's mouth in the sprites above. I then laid out each frame of the animations in Excel so we know where the number of LEDs and the overall layout of each board. The individual numbers of the LEDs are needed later for programming the Ardunio board. Number of LEDs The Ghosts cover 170 pixels, Pac-Man is 137, and Ms. Pac-Man is 151, which means we needed 628 LEDs for the pixels. (170+170+137+151) Size of Panels You can see by the layout in Excel that the Ghosts are 14 pixels in height by 14 pixels in width. Pac-Man is 13x13 and Ms.Pac-Man is 14x14. Once we knew the width and height by pixels we have to decide how big we wanted to make the panels. I wanted them all to be the same size relative to one another, so each pixel would be the same size on each costume. The strings of LEDs we purchased had a distance of about 3" (75mm) between LEDs. This gives us the high end of how big we can make each pixel. To scale them properly we started by making sure it would be the right size for the smallest family member, my son. I measure him and held up some cardboard cutouts to see what was the largest size he could still comfortably put his arms over and still walk. We ended up at 22" as the size of the ghosts in height and width, but you may want to do a different size. That set our scale of our pixels at 1.57" square (22" / 14 pixels, for the Ghost). Step 2: Materials We wanted the costumes to be relatively easily to wear and not too terribly cumbersome, so we opted for light materials rather than durable ones. Base: 1/8" cardboard, laser cut and hot glued Translucent Cover: Drafting Paper (Vellum or Mylar) LEDs : 12mm DC5V WS2811 Individually Addressable LEDs (Strings of 50) Board: Ardunio Uno R3 Power: 5V DC Portable Power Supply (Like a portable phone charger) + USB to DC Adapter Note: You see a different one in the photos, I explain this in the wiring section Misc: DC Power Jack Adapters Breadboard jumper wires Velcro Woven Duct Strap Duct Tape DC5V Power Supply, not needed, but helpful for testing Step 3: The Panel. The base is made from 1/8" cardboard sheets purchased from the local art/hobby store. Our laser cutter has a 32"x18" bed so I had them cut the sheets down to that size. Panel The panel is made up of a few main parts: the base, the slats, the cover, and the strap. The base holds the LEDs, the slate provide the pixels and overall stability of the panel, the cover diffuses the light, and the strap makes it portable. Modeling To make the files for the laser cutter I modeled the base using Rhino. I modeled the all the parts of the panel very accurately in an effort to let the laser cutter do as much work as possible. I made each panel so that each vertical and horizontal fin would slide into each other and they would notch also into the base. Setting it up this way allows the parts of the panel to slide and notch together without the need for much glue, while also ensuring we would get perfect squares for the pixels. Notes: The notches and joints have a 1/64" clearance on both sides to let the cardboard slide together nicely. The slots end up being 5/32" wide (1/8" + 1/64" + 1/64"). I have added the 3d model in DWG format to the files for those interested. Laser Cutting As mentioned before our laser cutter is 32"x18" so the CAD files I made to do the laser cutting are set on that size. I have added the .dwg file for the Ghost costume. You can use it to laser cut your own, or you can print it out and use it as a template to hand cut the cardboard. If you are going to do it by hand you can easily lay this out and cut strips of cardboard. You can could measure it out yourself, the pixels are about 1.5" square and 2" deep with a 12mm hole in the middle for the LEDs. You can also use the provided PDF as a template for cutting. Notes: We used 12mm LEDs and that's what size the holes are. The LEDs went in nicely, but since it is cardboard they got a little loose, especially if I had to take the LEDs out and put them back in. If I were to redo it I would make those holes slightly smaller. Assembly The base had to be cut in two pieces since our laser cutter bed's small dimension is 18", which is smaller than the 22" x 22" dimension of the base. I used good ole duct tape at the join to keep it together initially. This is really to keep it in place while adding the LEDs and slats. Once the slats were added to the base they provided the real support and stability. Each slat is labeled by scoring either "H#" or "V#" for Horizontal (H) and Vertical (V) and which number (#) in order. The slats slide together nicely since we added additional 1/32" clearance. My first tests left a lot less room where they slid together and made it impossible to put it all together. Once the were all slid together we put the slats on the base. They notched in where the plus sign shaped holes in the base are. Once it was on the base I duct taped the slats to the base on the ends. Then I added a small bead of hot glue the top left corner where the slats meet to keep them in place and ensure they wouldn't slip. The notches for the slats extend through the base so there is a small tab on the back of the base. I also added a bead of hot glue to these tabs to keep the slats securely on the base. After thought As you can see in the 3d model diagram I did not plan for an enclosure on the back of the panel. This was a mistake! I noticed this a couple of hours before tick-or-treating and I hastily cut 2.5" cardboard strips and added them to the back of the panel. to give something other than the LED wires to rest against the body. This helped, but really it should have been fully enclosed. When walking around with the panels with the back open the LEDs tend to get pushed in. You can see in some of the pictures where the pixels are no longer really pixels, but point lights. This is where the LEDs were pushed or fell in. I stopped the family a few times to fix these over the night or tick-or-treating. Note: If you are going to make these, plan for a back panel to enclose the LEDs. Don't make the same mistake. GhostLaserCutterFiles.dwg Step 4: LED Layout The planning for the LEDs strings occurred in Excel where I laid out the 170 individual LEDs in order from 0-169 for the Ghosts. Excel Layout The important things to note are the distances between pixels and the start point. If you don't want to cut and rewire the pre-strung lights you need to make sure you can reach from one hole to the other in your sequence. In my case distance between the individual LEDs on the pre-strung bundle is about 3 inches (75mm). This meant the pixels had to be adjacent to one another when laying them out. For the starting point just make sure it is close to where you can mount the Arduino and power supply. Placing The LEDs Following the pattern laid out in Excel we push the LEDs from the back into place in the cardboard. Note: The layout is mirrored since the excel layout it looking from the front, and we are pushing them in from the back. The 0 LED is on the left in Excel and on the right in the picture of the base from the back. The LEDs have a small flange that keeps them in place. I found that the cardboard is a bit flimsy so the LEDs did move around a bit when walking around. If you want to keep them in place more securely you could put a dab of hot glue on them. Step 5: Wiring Power Supply The best way to power this is with a 5V DC rechargeable battery (aka: a portable phone charger). Then use a USB to DC adapter to plug into your lights which should now be wired with the female DC adapter. I made a mistake and used a 8xAA Battery Pack with an On/Off switch that I had from the previous year's costumes. It was only while writing this did I realize it has a 12V output and I was using 5V LEDs. (I am no rocket surgeon but I think this is bad). Since I had the battery pack around I just picked it up and plugged it in and everything worked and I didn't second guess it ¯\_(ツ)_/¯ Interestingly enough the battery packs I used gave us only about 2 hours of on-time. After figuring out that they were the incorrect power supply for the panels I tested the panels with the correct 5VDC portable phone charger and they lasted much much longer. I tested a 2200 mAh and a 7800 mAh portable charger and the panels ran for 3h 40m and 12h 43m respectively. This means with a cheap "lipstick charger" you can get nearly 4 hours of on time. Note: The on-time I got from my test were with the ghost costumes (170 LEDs) running at 40 brightness out of 255 (The brightness is set in the code section) LEDs I used strings of 12mm WS2811 individually addressable LEDs for outdoor signage. I bought them on Amazon and they came in strings of 50. Since I needed over 600 I bought the 500 pack which came with a bit of a discount. Wiring The wiring of the power to the lights and then to the board is done as shown in the diagram and photo. The first LED on the string has a power-in connection. There are two wires (hot/red and ground/white). These are wired to a female power jack adapter. The power supply is plugged into this adapter. The 5V, GND, and DATA from that first LEDs 3pin JST connector/input port are connected to the 5V, GND, and 3 PIN connections on the Arduino board. Make sure you check your string of LEDs before wiring it up. My LED wires were Red(5V), Green (DATA), White(GND), but yours may be different. Notes: When using the USB to power the Arduino, unplug the 5V connection, but leave the GND Do not mix up the 5v and GND - that can fry your board. I know from experience, it doesn't smell good :( Step 6: Code Once you are all wired up the lights won't do anything when you plug them in. Maybe they will briefly blink. They need to be told what to do first and this is how I did it. I am not covering how to program an Arduino board. If you don't know how to do this there is a great article on Arduino's Website: Getting Started with Arduino FastLED I used the FastLED library for these. Its easy to use and easy to find examples to help figure everything out. It was useful that they use Web Colors so you can set any LED to any color by using the name. My Code Below is the code for the Ghost, I have also uploaded the Arduino code file. The basic outline for the logic is to fill all LEDs with the main color, then change the color of, and turn off the individual sets of LEDs to make each "frame". Once we have a set for each "frame" of the animation, we loop through them at X speed X number of times to make up the whole animation. There may be a smarter way of doing this, but this is how I did it and it worked for me. Notes: You can turn the brightness up and down it will have an effect on how long the batteries last. Change the //fill body color to change the color of the Ghost (its Red below) You can change the speed of the animations by changing the number and speed of the loops //ANIMATED PACMAN GHOST LED #include "FastLED.h" #define NUM_LEDS 170 #define DATA_PIN 3 CRGB leds[NUM_LEDS]; void setup() { delay(2000); FastLED.addLeds(leds, NUM_LEDS); FastLED.setBrightness(40); //Number 0-255 FastLED.clear(); } //looking left int eyes_pixels_l[] = {17, 20, 21, 24, 37, 38, 39, 40, 41, 47, 48, 49, 94, 102, 103, 106, 119, 120, 121, 122, 123, 129, 130, 131}; int pupils_pixels_l[] = {15, 16, 22, 23, 92, 93, 104, 105}; //looking right int eyes_pixels_r[] = {38, 39, 40, 46, 47, 48, 49, 50, 63, 66, 67, 75, 120, 121, 122, 128, 129, 130, 131, 132, 145, 148, 149, 152}; int pupils_pixels_r[] = {64, 65, 76, 77, 146, 147, 153, 154}; //remove pixels around feet int void_pixels_1[] = {8, 29, 30, 31, 83, 84, 85, 86, 138, 139, 140, 161}; int void_pixels_2[] = {7, 31, 55, 56, 57, 112, 113, 114, 138, 162}; int sad_ghost[] = {11, 26, 35, 48, 49, 53, 60, 64, 65, 80, 89, 104, 105, 109, 116, 120, 121, 134, 143, 158}; int eyes_seconds = 2; int reg_ghost_seconds = 10; int feet_delay = 120; //delay in ms b/w feet v1 v2 int eye_loop = (eyes_seconds*1000)/feet_delay; // how many times to look left and right before switching int reg_ghost_loop = reg_ghost_seconds/eyes_seconds; int sad_ghost_loop = 50; int sad_ghost_blink_loop = 10; void loop() { for(int i = 0; i < reg_ghost_loop; i++){ for(int i = 0; i < eye_loop; i++){ //fill body fill_solid(leds, NUM_LEDS, CRGB::Red); //set eyes for (int i = 0; i < 24; i++){ leds[eyes_pixels_l[i]] = CRGB::White; } //set pupil for (int i = 0; i < 8; i++){ leds[pupils_pixels_l[i]] = CRGB::White; } //set pupils for (int i = 0; i < 8; i++){ leds[pupils_pixels_l[i]] = CRGB::DarkBlue; } //remove around feet (v2) for (int i = 0; i < 10; i++){ leds[void_pixels_2[i]] = CRGB::Black; } FastLED.show(); delay(feet_delay); } for(int i = 0; i < eye_loop; i++){ /2) for (int i = 0; i < 10; i++){ leds[void_pixels_2[i]] = CRGB::Black; } FastLED.show(); delay(feet_delay); } } //sad ghost regular for(int i = 0; i < sad_ghost_loop; i++){ //fill all red fill_solid(leds, NUM_LEDS, CRGB::Blue); //set eyes for (int i = 0; i < 20; i++){ leds[sad_ghost[i]] = CRGB::Yellow; } /); } //sad ghost blinking for(int i = 0; i < sad_ghost_blink_loop; i++){ //fill body fill_solid(leds, NUM_LEDS, CRGB::Yellow); //set eyes for (int i = 0; i < 20; i++){ leds[sad_ghost[i]] = CRGB::Red; } /); } } Step 7: Add the Cover I recommend leaving the cover to the end since it is made of paper and can rip. It is also useful to be able to reach the LEDs from the front when you are putting them in place or making adjustments. I used a roll of drafting paper I had lying around. I am pretty sure it is vellum, but it might be mylar. Either way what you want to a translucent paper that is wide enough to cover the whole thing so you don't have to deal with a seam. You can get rolls of drafting paper at an art store or online. To keep the paper in place I put a line of regular old Elmers glue along the top edge of the vertical fins. I didn't do the ones horizontal ones as all you really need is for the paper to stay mostly in place. Keeping the paper glued down makes sure the grid read through and make the panel look like individual pixels. After you get the glue down lay the paper on top and put books or something heavy-ish on top to keep it in place and let the glue set. Once the glue sets you can cut the excess paper off. Leave a little extra bit of paper around the edges to fold over and tape down, this helps ensure the edges won't peel up or come loose. Step 8: Road Ready For the finishing steps we get it ready to go on the road. - Mount the Arduino and power supply to the back with Velcro - Tape the connections together so they don't come loose - Add a strap to carry it around. I used woven duct strap for this (like the drafting paper I had it around). - Load up some Pac-Man music and sound effects on your phone and bring a bluetooth speaker Step 9: Profit Once you are all done you can send out your 8-bit family into the neighborhood to impress the neighbors and collect your sweet sweet candy profits. Thanks for reading! If you have any questions feel free to ask! Recommendations We have a be nice policy. Please be positive and constructive. 34 Comments Incredible !!! Thanks for sharing. Congratulations! These are so wonderfully executed! So much fun! Bravo! Simply wonderful! What version of FastLED Library did you use? I used the latest version 3.1.6 I am insanely jealous. Outstanding work! The use of vellum as a diffusion material is sheer genius! Your warning about disconnecting the 5V connection to the strip when you are programming the Arduino is excellent advice. I have fried Arduino boards by pulling to much current the USB port when I forgot the strip was plugged in. May I offer a suggestion? If you place a diode between the 5V connection of the strip and the Arduino it will not allow the USB port to power the strip. The battery pack 5V can still pass through the diode and power the Arduino. You will experience a small voltage drop (about 0.7v for a 1N914B diode) but the Arduino will still work perfectly. Just connect the anode to the Arduino side, the cathode to the strip side and it all works safely. Congrats for being a super-cool dad, the world needs more Maker families! I did not know about the diode setup, I will have to test that out. Thanks for the suggestion and for checking out my project! I love these! I love the thought put into the configuration. My daughter saw me reading this and wants a Pinky costume! Now I just need to build/buy a laser cutter. Thank you! A couple of options. You could build it by hand by printing the PDFs and laying it out over cardboard to cut out. There are also companies that will do the laser cutting for you if you send them the files.
http://www.instructables.com/id/Pacman-LED-Pixel-Panel-Costume/
CC-MAIN-2018-09
refinedweb
3,456
77.87
I've found a problem with pivot_root that worked fine in 2.6.13.3, butMiklos Szeredi put these lines there with the following comment: fails for me, starting in 2.6.14-rc3 (haven't tried rc1 or rc2). This is for LTSP.org (Linux Terminal Server Project) thin clients. In our initramfs, we have a '/init' script that creates a mountpoint for a 2nd ramfs, and i'm trying to pivot_root to that mount point. I'm getting: pivot_root: Invalid Argument This worked perfectly in 2.6.13.3, so I looked at the 2.6.14-rc3 patch, and I found the code in fs/namespace.c that is causing it to fail for me: @@ -1334,8 +1332,12 @@ asmlinkage long sys_pivot_root(const cha error = -EINVAL; if (user_nd.mnt->mnt_root != user_nd.dentry) goto out2; /* not a mountpoint */ + if (user_nd.mnt->mnt_parent == user_nd.mnt) + goto out2; /* not attached */ if (new_nd.mnt->mnt_root != new_nd.dentry) goto out2; /* not a mountpoint */ + if (new_nd.mnt->mnt_parent == new_nd.mnt) + goto out2; /* not attached */ tmp = old_nd.mnt; /* make sure we can reach put_old from new_root */ spin_lock(&vfsmount_lock); if (tmp != new_nd.mnt) { The first of the 2 new tests are causing the pivot_root to fail for me. If I comment out those lines, it works again. I'm thinking that somebody put those lines there for a reason, so there's possibly something wrong with the way i've been doing this for a long time, and the tightening of the code has uncovered my problem. I'll explain how we use the initramfs/nfsroot:googeling a little bit I found the following link: [...] Somebody recently told me that pivot_root has been put in the 'evil way to do things' category, and that there was a new way, but he couldn't remember what that was.
http://lkml.iu.edu/hypermail/linux/kernel/0510.1/0013.html
CC-MAIN-2019-35
refinedweb
302
66.84
.1-1988 (``POSIX'').] [ANSI C X3.159-1989] FOPEN(3) OpenBSD Programmer's Manual FOPEN(3) NAME fopen, fdopen, freopen - stream open functions SYNOPSIS #include <stdio.h> FILE * fopen(char *path, char *mode); FILE * fdopen(int fildes, char *mode); FILE * freopen(char *path, be- ginning char- acter or as a character between the characters in any of the two-charac- ter strings described above. This is strictly for compatibility with AN- SI X. If fdopen() fails, the file descriptor fildes is not affected in any way. The freopen() function opens the file whose name is the string pointed to by path and associates the stream pointed to by stream with it. The original stream (if it exists) is always closed, even if freopen() fails. The mode argument is used just as in the fopen function. The primary use of the freopen() function is to change the file associated with a stan- dard), fseek(3), funopen(3) STANDARDS The fopen() and freopen() functions conform to ANSI X3.159-1989 (``ANSI C''). The fdopen() function conforms to IEEE Std1003.1-1988 (``POSIX''). CAVEATS Proper code using fdopen() with error checking should close(2) fildes in case of failure, and fclose(3) the resulting FILE * in case of success. FILE *file; int fd; if ((file = fdopen(fd)) != NULL) { /* perform operations on the FILE * */ fclose(file); } else { /* failure, report the error */ close(fd); } OpenBSD 2.6 June 4, 1993 2
http://www.rocketaware.com/man/man3/fopen.3.htm
crawl-002
refinedweb
236
73.98
Error Handling¶ Errors are inevitable when working with the API, and they can be correctly handled with try...except blocks in order to control the behaviour of your application. Pyrogram errors all live inside the errors package: from pyrogram import errors RPCError¶ The father of all errors is named RPCError and is able to catch all Telegram API related errors. This error is raised every time a method call against Telegram’s API was unsuccessful. from pyrogram.errors import RPCError Warning It must be noted that catching this error is bad practice, especially when no feedback is given (i.e. by logging/printing the full error traceback), because it makes it impossible to understand what went wrong. Error Categories¶ The RPCError packs together all the possible errors Telegram could raise, but to make things tidier, Pyrogram provides categories of errors, which are named after the common HTTP errors and are subclass-ed from the RPCError: from pyrogram.errors import BadRequest, Forbidden, ... Single Errors¶ For a fine-grained control over every single error, Pyrogram does also expose errors that deal each with a specific issue. For example: from pyrogram.errors import FloodWait These errors subclass directly from the category of errors they belong to, which in turn subclass from the father RPCError, thus building a class of error hierarchy such as this: - RPCError - BadRequest MessageEmpty UsernameOccupied ... - InternalServerError RpcCallFail InterDcCallError ... ... Unknown Errors¶ In case Pyrogram does not know anything about a specific error yet, it raises a generic error from its known category, for example, an unknown error with error code 400, will be raised as a BadRequest. This way you can catch the whole category of errors and be sure to also handle these unknown errors. In case a whole class of errors is unknown (that is, an error code that is unknown), Pyrogram will raise a special 520 UnknownError exception. In both cases, Pyrogram will log them in the unknown_errors.txt file. Users are invited to report these unknown errors in the discussion group. Errors with Values¶ Exception objects may also contain some informative values. For example, FloodWait holds the amount of seconds you have to wait before you can try again, some other errors contain the DC number on which the request must be repeated on. The value is stored in the x attribute of the exception object: import time from pyrogram.errors import FloodWait try: ... # Your code except FloodWait as e: time.sleep(e.x) # Wait "x" seconds before continuing
https://docs.pyrogram.org/start/errors
CC-MAIN-2021-49
refinedweb
412
53.92
I want to create an option inside my application for dark theme. I don't want to change the theme automatically using the default device theme. Only when the user will select it. So i am in face two options. First option: <StackLayout> <Frame BackgroundColor="{AppThemeBinding Light=Light, Dark=Dark}" /> </StackLayout> Second option. Create a static variable and a class. public class ThemeHelper { public static string ThemeName = "Dark"; public static Color FrameBackgroundColor = InitTheme(ThemeName); private static Color InitTheme(string themName) { switch (themName) { case "Dark": return Color.Black; case "Light": return Color.White; default: return Color.White; } } } } And then isnide xml <StackLayout> <Frame BackgroundColor="{x:Static local:ThemeHelper.FrameBackgroundColor}" /> </StackLayout> Does second way has any performace issue? Second way seems more easy to me cause i can implement more than one theme. Answers Based on your description, the first option is better. Yes, use static variables will take up part of the memory. AppThemeBindingalong with UserAppThememake it really easy to handle theme modes in your Xamarin.Forms apps. This works not only for colors, but images and other resources as well. Xamarin forums are migrating to a new home on Microsoft Q&A! We invite you to post new questions in the Xamarin forums’ new home on Microsoft Q&A! For more information, please refer to this sticky post.
https://forums.xamarin.com/discussion/187533/darktheme-vs-binding-a-color
CC-MAIN-2021-10
refinedweb
218
52.26
API Microversions¶ Background¶ Zun uses a framework we call has as its value a string containing the name of the service, container, and a monotonically increasing semantic version number starting from 1.1. The full form of the header takes the form: OpenStack-API-Version: container 1.1 If a user makes a request without specifying a version, they will get the BASE_VER as defined in zun/api/controllers/versions.py. This value is currently 1.1 and is expected to remain so for quite a long time. When do I need a new Microversion?¶ A microversion is needed when the contract to the user is changed. The user contract covers many kinds of information such as: the Request the list of resource urls which exist on the server Example: adding a new container/{ID}/foo which didn’t exist in a previous version of the code the list of query parameters that are valid on urls Example: adding a new parameter is_yellowcontainer/{ID}?is_yellow=True the list of query parameter values for non free form fields Example: parameter filter_by takes a small set of constants/enums “A”, “B”, “C”. Adding support for new enum “D”. new headers accepted on a request the list of attributes and data structures accepted. Example: adding a new attribute ‘locked’: True/False to the request body the Response the list of attributes and data structures returned Example: adding a new attribute ‘locked’: True/False to the output of container/{ID} the allowed values of non free form fields Example: adding a new allowed statusto container/{ID} the list of status codes allowed for a particular request Example: an API previously could return 200, 400, 403, 404 and the change would make the API now also be allowed to return 409. See 2 for the 400, 403, 404 and 415 cases. changing a status code on a particular response Example: changing the return code of an API from 501 to 400. Note Fixing a bug so that a 400+ code is returned rather than a 500 or 503 does not require a microversion change. It’s assumed that clients are not expected to handle a 500 or 503 response and therefore should not need to opt-in to microversion changes that fixes a 500 or 503 response from happening. According to the OpenStack API Working Group, a 500 Internal Server Error should not be returned to the user for failures due to user error that can be fixed by changing the request on the client side. See 1. (except in 2). The reason why we are so strict on contract is that we’d like application writers to be able to know, for sure, what the contract is at every microversion in Zun. If they do not, they will need to write conditional code in their application to handle ambiguities. When in doubt, consider application authors. If it would work with no client side changes on both Zun versions, you probably don’t need a microversion. If, on the other hand, there is any ambiguity, a microversion is probably needed. - 2(1,2) The exception to not needing a microversion when returning a previously unspecified error code is the 400, 403, 404 and 415 cases. This is considered OK to return even if previously unspecified in the code since it’s implied given keystone authentication can fail with a 403 and API validation can fail with a 400 for invalid JSON request body. Request to url/resource that does not exist always fails with 404. Invalid content types are handled before API methods are called which results in a 415. When a microversion is not needed¶ A microversion is not needed in the following situation:. In Code¶ In zun/api/controllers/base.py we define an @api_version decorator which is intended to be used on top-level Controller methods. It is not appropriate for lower-level methods. Some examples: Adding a new API method¶ In the controller class: @base.Controller.api_version("1.2") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an OpenStack-API-Version of >= 1.2. If they had specified a lower version (or not specified it and received the default of 1.1) the server would respond with HTTP/406. Removing an API method¶ In the controller class: @base.Controller.api_version("1.2", "1.3") def my_api_method(self, req, id): .... This method would only be available if the caller had specified an OpenStack-API-Version of >= 1.2 and OpenStack-API-Version of <= 1.3. If 1.4 or later is specified the server will respond with HTTP/406. Changing a method’s behavior¶ In the controller class: @base.Controller.api_version("1.2", "1.3") def my_api_method(self, req, id): .... method_1 ... @base.Controller.api_version("1.4") # noqa def my_api_method(self, req, id): .... method_2 ... If a caller specified 1.2, 1.3 (or received the default of 1.1) they would see the result from method_1, and for 1.4 or later they would see the result from (commonly accessed with pecan.request). Every API method has an versions object attached to the request object and that can be used to modify behavior based on its value: def index(self): <common code> req_version = pecan.request.version req1_min = versions.Version('', '', '', "1.1") req1_max = versions.Version('', '', '', "1.5") req2_min = versions.Version('', '', '', "1.6") req2_max = versions.Version('', '', '', "1.10") if req_version.matches(req1_min, req1_max): ....stuff.... elif req_version.matches(req2min, req2_max): ....other stuff.... elif req_version > versions.Version("1.10"): ....more stuff..... <common code> The first argument to the matches method is the minimum acceptable version and the second is maximum acceptable version. If the specified minimum version and maximum version are null then ValueError is returned. Other necessary changes¶ If you are adding a patch which adds a new microversion, it is necessary to add changes to other places which describe your change: Update REST_API_VERSION_HISTORYin zun/api/controllers/versions.py Update CURRENT_MAX_VERin zun/api/controllers/versions.py Add a verbose description to zun/api/rest_api_version_history.rst. There should be enough information that it could be used by the docs team for release notes. Update min_microversionin .zuul.yaml. Update the expected versions in affected tests, for example in zun/tests/unit/api/controllers/test_root.py. Update CURRENT_VERSIONin zun/tests/unit/api/base.py. Make a new commit to python-zunclient and update corresponding files to enable the newly added microversion API. If the microversion changes the response schema, a new schema and test for the microversion must be added to Tempest. Allocating a microversion¶ If you are adding a patch which adds a new microversion, it is necessary to allocate the next microversion number. Except under extremely unusual circumstances and this would have been mentioned in the zun spec for the change, the minor number of CURRENT_MAX_VER CURRENT_MAX_VER.
https://docs.openstack.org/zun/latest/contributor/api-microversion.html
CC-MAIN-2022-21
refinedweb
1,138
56.05
This project is archived and is in readonly mode. Installation Error of Psycopg2 on windows 7 Reported by chandramouli | November 15th, 2012 @ 11:40 AM I was trying to iinstall Psycopg2 on win 7 (64 - bit, installed python2.7 32 bit) from - Installtion stops saying Python is not found in registry. - Have tried giving .Zip and extracted all files, placed in C:/Python27/ getting error "%1 is not a standard win32 command" on invoking psycopg2 any help would be highly appreciated as am stuck on this for a while. Daniele Varrazzo November 15th, 2012 @ 12:31 PM Hi, not getting your config. Win7 64 bit and you have Python 2.7 32 bit installed. Are you trying to install Psycopg 32 or 64 bit? I assume it must be 32 but it seems you are installing the 64 bit. Please clarify. chandramouli November 15th, 2012 @ 12:39 PM Hi Daniele , I was trying out with both 64 as well as 32 bit versions.. Both installers fail saying "Python2.7 is not found in the registry" Then I have unzipped them manually and placed them in C:\Python27 (my python instal dir) if i try to invoke psycopg2 it gives error "%1 is not win32 recognized command" Daniele Varrazzo November 15th, 2012 @ 01:46 PM psycopg is not to be invoked: it's not a program, it's a library. For me it just sounds like a screwed up Python installation. I'll ask Jason to take a look at this ticket: I can't help you further with Windows. chandramouli November 15th, 2012 @ 04:25 PM thanks for you time Daniele, yeah invoking it is a dumb (desperate)attempt am using python code which has import Psycopg2 statement in it, which is failing.. Daniele Varrazzo November 15th, 2012 @ 04:35 PM I've asked Jason to take a look at your case, but if it is possible, I'd suggest you to uninstall all the Python installation instances from your machine and try re-installing python again. Unpacking the .exe (which you have already discovered it is a .zip) and copying manually the psycopg2 directory and its contained file _psycopg.dll into a PYTHONPATH directory should work as well, provided Python is correctly installed and the 32/64 bit option and Python version match. Jason Erickson November 15th, 2012 @ 08:50 PM A few questions come to mind: - Is your python 2.7 installed from the installation package at python.org? - Are you using virtualenv? - Are you using python from a 3rd party package (ie, like ArcGIS, ABACUS, etc)? The python.org installation sets up the registry keys in one of the two locations for your environment: HKEY_CURRENT_USER\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Python\PythonCore\2.7\InstallPath Being that the psycopg2 installation is not finding the installation keys leads me to believe that there is something unusually with your setup. From your description (32bit Python 2.7), you would need the following psycopg2 package: Can you remove all instances of psycopg2 that you installed by hand and start the python interpreter?: C:\Python27\python.exe And issue the following in the python interpreter: import psycopg2 We should see the following error (We want it to fail at this point) ImportError: No module named psycopg2 If we don't, it means that python is attempting to install psycopg2 from another location and you will have to remove those files. What I am looking for is that python is not grabbing psycopg2 from somewhere else, and that we are starting from a clean slate. Once we are at this point, we can extract the files by hand. Using a program like 7zip, open up the above psycopg2 installation package. Go into the PLATLIB directory. At this point, there will be a psycopg2 directory and a psycopg2-2.4.5-py2.7.egg-info file. The "psycopg2" directory should be extracted and placed in the "C:\Python27\lib\site-packages" directory. Start the python interpreter again and issue the import psycopg2, this time it should be successful with importing the module. (Fingers crossed). If you are still having problems, issue the following commands at the python interpreter and let me know the output: import sys sys.version Daniele Varrazzo November 24th, 2012 @ 10:58 PM - State changed from new to invalid.
https://psycopg.lighthouseapp.com/projects/62710/tickets/139
CC-MAIN-2015-18
refinedweb
724
62.07
I had an PHP script to update the webserver from a local directory. I'm migrating the script into Python, it works almost OK but after a PUT command the size of the file in the server isn't the same as the local file. Once I download again the file from the FTP server, the only difference is the CR/LF mark. It annoys me because the same script is comparing the size of the files to update. Also, it works perfectly in PHP vía ftp_put. from ftplib import FTP ftpserver = "myserver" ftpuser = "myuser" ftppass = "mypwd" locfile = "g:/test/style.css" ftpfile = "/temp/style.css" try: ftp = FTP(ftpserver, ftpuser, ftppass) except: exit ("Cannot connect") f = open (locfile, "r") try: (ftpfile) except: pass # ("TYPE I") #("STOR %s" % ftpfile, f)("STOR %s" % ftpfile, f) f.close() (ftpfile) ### Any suggestions? TIA, Pablo
http://ansaurus.com/question/2311-file-size-differences-after-copying-a-file-to-a-server-v%C3%ADa-ftp
CC-MAIN-2018-30
refinedweb
141
65.32
IRC log of workshop on 2004-06-02 Timestamps are in UTC. 00:00:28 [dino] DA: Are you making this problem look simpler than it really is? 00:00:31 [dino] s/you/we/ 00:00:34 [JibberJim] DA: Mobile guys want it to be simpler - can we making it simpler than the problems we have now. 00:01:03 [dino] MB: Java could solve the problem, but to clarify... 00:01:08 [JibberJim] MB: Java can solve the problem 00:01:13 [dino] MB: Interoperability is very important. 00:01:27 [dino] MB: It would be fantastic to write an applications and deploy to multiple platforms. 00:01:50 [dino] MB: [compares to Pattern programming] 00:02:04 [dino] MB: I want people to use that type of approach forWebApps. 00:02:17 [dino] MB: Tax form is exactly the app you want to use a pattern for. 00:02:27 [dino] MB: few simple business rules. 00:02:58 [dino] MB: The Adobe VBS is just a heirarchical search pattern 00:03:13 [dino] MB: Would be good to have a simple declarative language for this. 00:03:22 [dino] MB: We are constantly reinventing the wheel 00:03:44 [dino] MB: HTML worked because of cut & paste. 00:03:54 [Tantek] and view source 00:03:59 [dino] MB: How can we cut and paste a visual search? That's what we need. 00:04:18 [dino] Laszlo guy suggests MB look at LBX 00:04:23 [JibberJim] LZX ? 00:04:30 [shayman] LZX it is 00:04:36 [dino] whatever :) 00:04:36 [JibberJim] Lots of talk about languages, nothing about runtimes. 00:04:54 [dino] Laszlo guy's name? 00:05:04 [dino] Laszlo: Java has not taken on the client. 00:05:06 [JibberJim] "We love runtimes" 00:05:32 [JibberJim] Laszlo: What's the loading order? 00:06:03 [dino] Laszlo: We need to define the runtime. This is what XAML/.NET will do, but it is Windows only. 00:06:11 [dino] John Boyer - PureEdge 00:06:37 [JibberJim] John Boyer: we have Java now - a quick case study 00:07:51 [JibberJim] JB: we did an application that uses declaritive mark-up, we compared declaritive to what they could build in house, the demo app - the in house spent a day saying it would take a month to give an estimate - the declaritive solution was implemented in a day. 00:08:56 [dino] MM: Java didn't work on client because it was too slow. It is still too slow. 00:09:16 [JibberJim] Matt May - the javascript player is adding weight to make it too slow. Why not put this functionality into the player. 00:09:48 [JibberJim] [I'd say because then the web-app authors loses control, we're constrained to what we're given, with script we get what we're able to produce!] 00:10:01 [JibberJim] AH: CLI is an ECMA Standard 00:10:05 [JibberJim] [334 IIRC] 00:11:16 [Chris] as with most languages, the libraries rather than the language provide the real value 00:11:50 [JibberJim] AH: People have created a fragile platform when you get advanced systems... 00:12:45 [JibberJim] AH: getting features to a high quality is tough in declaritive situations. 00:14:03 [JibberJim] AH: we can get to 80-90% before you need script. 00:14:12 [JibberJim] AH: in form applications./ 00:15:18 [JibberJim] hakon lie: Java would've been a clean model, but it didn't work out - javasript won. 00:15:21 [JibberJim] [Hurrah!] 00:15:42 [JibberJim] HL: We hate javascript as much as everyone else here. We can right neat javascript. 00:15:56 [JibberJim] HL: Any spec W3 produces someone is going to abuse it. 00:17:07 [JibberJim] TVR: a tax code is best implemented in declaritive since it's easy to author by tax code guys rather than programmers. 00:17:33 [JibberJim] TVR: Focus on the data. 00:17:49 [sac] sac has joined #workshop 00:18:24 [klotz] Peter ??? Adobe PDF manager? 00:18:31 [JibberJim] Vincent Hardy: Java is a failure on the desktop - but it's on most phones. 00:18:39 [JibberJim] Kandaces I think klotz 00:19:17 [JibberJim] VH: Java is available on many clients - there's a JCP program to ensure it's available and the same on different platforsm. 00:19:33 [JibberJim] VH: API is very complicated! 00:20:07 [schepers] Kacandes 00:21:47 [JibberJim] Stever Zilles: What are the 5 things we need to focus on to get something better than we're at now, and how much better to we need to be? 00:22:52 [JibberJim] Josh France Telecom: Java on phones isn't really usable because they have to be written differently for different phones, we need more layers in there, to cope with different screen sizes and capabilities. 00:23:20 [JibberJim] Matt May: Sorry for giving impression Java was a failure... 00:24:05 [JibberJim] MM: With the beginning of the web we took a deep breath - we now can't breat in any more, we need to breathe out and in again, get the new things the body needs. 00:24:30 [JibberJim] MM: We need to ensure the people who are developing the web need the tools we need. 00:24:54 [JibberJim] How do we get XHTML into IE? 00:25:44 [schepers] that was IBM's Rich Schwarzfegger 00:26:25 [Chris] .... existing HTML - not XHTML - existing infrastructure (ie, no new stuff) team is busy with security stuff 00:26:35 [JibberJim] AHL we'll see how things pan out... 00:27:42 [JibberJim] Rich: How do we get MS to address it - they're not in the WG for XHTML. 00:28:11 [JibberJim] Tantek: We are in the XHTML WG, but they aren't doing what we want... 00:28:21 [Chris] TC: proorities at MS are HTML4 eratta and a revised spec - not XHTML 2 00:28:43 [Chris] SP: But that is your action item 00:29:28 [JibberJim] TC: Where's the test suites, errata's etc. 00:29:37 [JibberJim] TVR: Shouldn't you be doing this TC... 00:30:00 [JibberJim] dino: bringing us back on topic... 00:30:26 [JibberJim] Daniel, Sun: Any solution that we come up with that meets todays requirements will be really complicated. 00:30:44 [schepers] David Temkin, Laszlo 00:31:19 [JibberJim] David Temkin Laszlo: I agree things are complicated, but web-apps are a much smaller set than people are suggesting. We need an incredibly reliable RISC machine. 00:31:21 [bjoern] rrsagent, pointer 00:31:21 [RRSAgent] See 00:31:30 [JibberJim] DT: we don't need photoshop as a web-app. 00:31:58 [JibberJim] Bert Bos: Most things are too simple for Java. 00:32:17 [JibberJim] BB: Isn't the only reason java isn't used is that it's too much like a real programming language? 00:33:04 [Hixie] Steven: note that XHTML 1.0 didn't even describe how it should be implemented until well after it was a REC: 00:33:13 [Hixie] Steven: and that isn't even in a published document yet 00:33:29 [Hixie] Steven: so the core of how XHTML1 is to be implemented depends on a post to www-html 00:33:43 [Hixie] Steven: (which is pretty ridiculous imho) 00:35:55 [schepers] Paul Topping, Design Science 00:36:06 [JibberJim] MB - XForm like things normally get implemented again and again by huge teams. 00:36:35 [JibberJim] PT: Every real declaritive application runs into a break wall that ends up needing imperative languages. 00:36:38 [JibberJim] PT++ 00:37:11 [schepers] so do what you can declaratively, and solve the rest with script 00:37:20 [JibberJim] PT: declaritive folk seem to want a simpler world than reality... 00:38:04 [Chris] TVR: declarative allows the amount of imperative code to be reduced, goal is not to eliminate it entirely 00:38:22 [JibberJim] TVR: use the scripting experience of the past to move into declaritive systems. 00:38:30 [schepers] Steven noted that we should factor out the most common tasks, such as dependancy graphs, into declarative 00:40:31 [JibberJim] Sun guy Java's hard to write, however that doesn't stop us writing applications in an easier way that runs on top of java 00:41:10 [schepers] Chet Huss, Sun? 00:41:48 [JibberJim] Leigh Klotz - Maybe what we should be doing is XHTML+CSS+SVG+XForms client in Java, CLR, maybe flash. 00:42:05 [Chris] sort of like XSmiles, which already exists? 00:43:04 [JibberJim] John: if we wrap javascript into a VM is that good enough - no since we need rules in the declaritive forms, rather than imperative rules. 00:43:19 [schepers] seems that declarative is "safer" than imperative 00:44:57 [JibberJim] We're winding up now... thanks to everyone... and we're closed. 00:45:50 [JibberJim] Think about next steps over dinner. 00:46:08 [AntoineQuint] AntoineQuint has left #workshop 00:54:02 [robin] robin has joined #workshop 01:42:26 [bjoern_h] bjoern_h has joined #workshop 05:00:46 [Hixie] Hixie has joined #workshop 05:03:23 [bert_lap] bert_lap has joined #workshop 05:15:33 [dbaron] dbaron has joined #workshop 05:18:27 [Hixie] Hixie has joined #workshop 05:19:59 [Tantek] Tantek has joined #workshop 05:26:43 [tanteko] tanteko has joined #workshop 05:27:46 [tanteko] hello 14:51:37 [RRSAgent] RRSAgent has joined #workshop 14:51:44 [dino] rrsagent, make logs world-access 14:51:51 [dino] thanks buddy 15:41:11 [JibberJim] JibberJim has joined #workshop 15:42:42 [Hixie] Hixie has joined #workshop 15:43:23 [dbaron] dbaron has joined #workshop 15:43:43 [AntoineQuint] AntoineQuint has joined #workshop 15:44:31 [schepers] schepers has joined #workshop 15:47:55 [dino] bjoern, should be worky now 15:48:04 [bjoern] yes, it does, thanks 15:48:16 [Dave] Dave has joined #workshop 15:48:18 [dino] it seems that rrsagent doesn't listen to me 15:48:33 [JibberJim] That's fair though, neither do the rest of us. 15:48:41 [bjoern] hehe 15:54:04 [mimasa] mimasa has joined #workshop 15:57:53 [schepers] schepers has joined #workshop 16:03:45 [bert_lap] bert_lap has joined #workshop 16:04:03 [Steven] Steven has joined #workshop 16:04:11 [heycam] heycam has joined #workshop 16:05:26 [tvraman] tvraman has joined #workshop 16:06:14 [MDubinko] MDubinko has joined #workshop 16:11:16 [JibberJim] Good Morning, and welcome to day 2. 16:12:07 [JibberJim] RRSAgent, pointer 16:12:07 [RRSAgent] See 16:17:29 [Chris] Chris has joined #workshop 16:17:31 [bert_lap] Good morning, JibberJim and everybody. 16:17:40 [jrandall] jrandall has joined #workshop 16:17:40 [schepers] good morning 16:20:23 [robin] robin has joined #workshop 16:20:38 [dino] who is running this show? 16:20:42 [dino] what a mess~ 16:20:44 [dino] ! 16:20:56 [schepers] Steve Zilles asked what would mean that WebApps/CompDocs had gotten good enough to make a difference 16:21:32 [jrandall] jrandall has joined #workshop 16:22:07 [Chris] a possible answer - that there were authoring tools generating something that worked on multiple implementations 16:23:41 [JibberJim] Schepers: It will be a success when people who were writing HTML in 97 are doing it. 16:24:03 [JibberJim] Patrick Smitz: People don't have apps to say, they have things to say. 16:24:21 [JibberJim] ChrisL: Users may want to produce apps that do something! 16:25:24 [JibberJim] ChrisL: There's something more than interactive documents that people would want to provide, even without being a complete programmer. 16:25:51 [Chris] CL: people may want to produce small apps that are more than 'interactive documents' but less than 'full blown programs' 16:26:10 [JibberJim] Schepers: Webapps today are more complicated, but they could be small componentised reusable widgets that you can tie together. 16:26:32 [JibberJim] Glen, Ideaburst: Code's crap but people use it, tools could exist that people can create webapps. 16:27:45 [JibberJim] G, I: We spent most of our time on interface rather than creating applications. 16:29:28 [JibberJim] John Boyer, Pure Edge: Spreadsheet users aren't programmers, but they use declaritive stuff to do a lot of stuff. Point+click design folk are cheaper than programmers. 16:31:32 [Chris] I agree with John and he made my point better than I did 16:32:05 [shayman] shayman has joined #workshop 16:33:14 [JibberJim] Widgets we make are generally slow if we have to wack them all in script, and are Device dependant, a standard can obviate these needs. 16:33:51 [JibberJim] ^ That was Glen from Ideaburst 16:34:46 [JibberJim] Teppo Jalava 16:35:48 [JibberJim] TJ: shows SMIL/XForms - mixed docs are hard to read! 16:36:20 [JibberJim] TJ: Timesheets are like CSS for temporal layout. 16:37:44 [klotz] klotz has joined #workshop 16:38:15 [JibberJim] TJ is demoing an app using a timesheets impl. in X-Smiles 16:41:08 [JibberJim] Patrick Shmitz: Have you thought much about applying timesheets in compound docs and the issues? 16:42:11 [JibberJim] MarkBirkbeck: Is CSS tied to the timesheet? Can you change more than just display? 16:42:21 [JibberJim] PS: Yep you can change classes. 16:42:31 [JibberJim] TJ: Yep you can change classes. 16:44:30 [JibberJim] ChrisL: This is similar to something floated for DVD's XSS, what's the syntax? 16:44:36 [JibberJim] TJ: XML/SMIL like. 16:45:11 [dbaron] Hixie: cites 16:45:17 [JibberJim] IH: CSS WG has been looking at this in CSS3 16:46:37 [JibberJim] Erik Henmann on behalf of OASIS. 16:46:53 [JibberJim] EH: data standard for compound docs 16:48:07 [JibberJim] EH: DITA - Darwin information typing arch. 16:48:31 [Tantek] Tantek has joined #workshop 16:49:07 [JibberJim] EH: Granular/ strongly typed / referential - topics have semantic relationships, originally developed for software user assistance - useful for human readable content with formal definitions 16:55:15 [ph] ph has joined #workshop 17:02:19 [dino] Michael Pediaditakis 17:02:25 [dino] University of Kent 17:02:34 [dino] Slides at: 17:03:30 [JibberJim] Proposing a Generic XML Presentation, see the slides! 17:10:28 [Chris] alt and case look like smil switch, except for batch preprocessing 17:11:19 [JibberJim] I like this, very much agree with "declaritive layers implemented imperatively" 17:14:11 [tjjalava] tjjalava has joined #workshop 17:16:33 [dino] Paul Topping - Design Science 17:16:38 [dino] showing MathPlayer 17:16:40 [JibberJim] On MAthPlayer 17:16:43 [dino] no slides online yet 17:16:58 [JibberJim] PT: It demonstrates some of the problems of plugins etc. 17:17:26 [JibberJim] uses MS behaviours, DOM and loads of other interfaces in IE. 17:19:01 [JibberJim] PT: very loud speech reader of the MathML content. 17:19:27 [JibberJim] PT: means the screen reader or search needs to get through the multi-layers of the system. 17:20:09 [JibberJim] PT: Ideally we want to implement mathplayer in all browsers using standardised interfaces. 17:20:51 [JibberJim] PT: Mah is more complicated than the regular "include an image" as we need to place it appropriately sized within the paragraph, need comms between the HTML/MathML rendering layers. 17:22:40 [JibberJim] PT: We need to indicate to server the capabilities, can the player not only implement MathML, we need to know it can maybe play it or whatever. 17:23:01 [tjjalava] tjjalava has joined #workshop 17:23:27 [JibberJim] [I disagree, we're never going to get anywhere if we cabal-ise the web, authors can maybe make allowances to support bugs in a couple of major UA's, they can't author docs for every single player.] 17:24:08 [Dan] Dan has joined #workshop 17:24:42 [JibberJim] PT: MathML is a good use case for compound docs - you can't just display it as a picture, you need to search in it etc. 17:25:40 [JibberJim] PT: Let's not start on XForms and XAML without learning from the past. 17:26:06 [JibberJim] PT: Support for XML is not enough, need to allow support multiple vendors 17:26:29 [JibberJim] PT: We need a cross-platform runtime, but don't look to a hardware vendor. 17:36:55 [Steven] Steven has left #workshop 18:00:21 [jrandall] jrandall has joined #workshop 18:02:49 [JibberJim] Steven Pemberton on XForms 18:03:07 [JibberJim] SP: More impls on day of release than any previous W3 spec. 18:03:25 [JibberJim] SP: Lots of users, lots of vendors. 18:05:53 [JibberJim] SP: data is one or more XML docs, input/output/memory model etc. 18:06:11 [JibberJim] SP: Abstract input/output controls, impls can style it, or bound to SVG or ... 18:07:30 [JibberJim] SP: Imperative vs declaritive - in 80's developed several clocks which had different views 18:07:42 [JibberJim] SP: the shortest analogue clock in imperative was 1000 lines. 18:08:04 [JibberJim] [it would be less than 10 lines of javascript in SVG] 18:08:51 [AntoineQuint] where javascript = standards-compliant DOM scripting 18:08:54 [JibberJim] SP: Shows simple ticking clock code. 18:09:10 [JibberJim] Yes AntoineQuint 18:09:17 [naviuser] naviuser has joined #workshop 18:09:18 [JibberJim] SP: Some XForms impl just have the model. 18:09:29 [AntoineQuint] Jim, don't use the javascript term so much, people tend to think of the broken APIs and practices burried behind it :) 18:09:30 [JibberJim] SP: CSS or SVG for the impl, no commitment 18:09:50 [JibberJim] Sorry Antoine, but I still want to make it respectable... 18:09:57 [bjoern] "ECMAScript" 18:10:21 [JibberJim] SP: DI, Accessible, mostly declaritive but can be imperative. 18:10:23 [JibberJim] C 18:10:24 [AntoineQuint] JibberJim: I agree it is! 18:10:38 [JibberJim] C# is also an ECMA Script language though bjoern now. 18:11:43 [JibberJim] SP: Demos a couple of XForms examples 18:12:40 [JibberJim] Paul Topping - design topping: Your rendering of one of the examples is poor? ... 18:12:53 [JibberJim] SP: XForms is independant of the rendering 18:13:02 [JibberJim] SP: CSS 3 and SVG will address this. 18:13:30 [Hixie] (Well. CSS1 addressed the baseline issue back in 1996...) 18:13:41 [JibberJim] Cameron McCormack: Can you effect things other than the XML model, e.g. CSS or positioning? 18:14:05 [JibberJim] So it was a non-conformant CSS1 implementation in IE there Hixie? 18:14:16 [robin] just use SVG :) 18:14:23 [Hixie] JibberJim: i'd have to look at the exact markup to say, but yeah, probably 18:14:36 [JibberJim] SP: Yep, but it's not really what it primarily designed for. 18:14:58 [Hixie] JibberJim: it might be one of the undefined edge cases (form controls aren't well defined right now, as SP said, CSS3 is addressing that better) 18:15:26 [schepers] clock: it would also be less than 10 lines of declarative code in SVG, not just script, JibberJim 18:16:17 [JibberJim] John Boyer is demoing some games in XForms. 18:16:59 [JibberJim] JB: Wizard like data entry for tax forms. 18:17:24 [robin] schepers: you can't do the clock just declarative (I think) 18:17:59 [robin] there's an SVG analog clock on fwiw 18:18:01 [JibberJim] I think you can as long as they load it at a specific time... 18:19:17 [robin] you may be able to use wallclock(...) to pretend your animation started at some point in the past and rely on the SMIL engine to compute where the animation should be now 18:19:26 [JibberJim] JB: doesn't matter which form I'm editing, the model reflects changes everywhere. 18:19:49 [JibberJim] JB: Wrap the most appropriate host/rendering language around the rendering. 18:19:56 [JibberJim] er the model. 18:21:54 [JibberJim] JB: XForm skins are what we want to control the presentation. 18:24:20 [JibberJim] Kelvin from IBM: how are you rendering this? 18:24:38 [JibberJim] JB: We're using the Pure Edge XForms renderer - a plugin to the browsers. 18:28:16 [JibberJim] Micah Dubinko on XForms. 18:29:33 [Steven] Steven has joined #workshop 18:29:55 [shayman] shayman has joined #workshop 18:32:36 [JibberJim] Do we have a link to XForms basic? 18:32:48 [Hixie] 18:33:24 [JibberJim] Robin, you also mentioned a good intro doc to this? 18:33:26 [Tantek] HL: XForms spec is too complex. Authors will have a problem authoring XForms as it is now. Could we do something with the basic profile of XForms, remove namespaces, remove XPath? 18:34:10 [Tantek] ?? (from IBM): What is that people are screwing up with namespaces? 18:34:18 [robin] I can't find it 18:34:24 [robin] Leigh ought to know 18:34:24 [JibberJim] Too many to fit in your head. 18:34:26 [Tantek] MD: Namespace URIs are too long. More stuff than you can fit in your head. 18:34:29 [Hixie] Chris, another difference is that xpath is way way more powerful than selectors 18:34:50 [Tantek] MD: How XPath addressses Namespaces, doesn't use the default. 18:35:15 [JibberJim] hmm, so if code size is important, throw out CSS and just use an XPath based selector styling system? 18:35:53 [Hixie] unfortunately that would mean that the web didn't render, so it's not a viable business plan 18:35:57 [klotz] XPath in Java is 85K bytecodes. Do a google search. 18:35:57 [Hixie] (for a web browser) 18:36:28 [JibberJim] well, didn't render so sweetly... 18:36:31 [Hixie] klotz: including dynamic DOM changes in the face of script? 18:36:51 [Steven] (Spontaneous applause; only the second this meeting!) 18:37:15 [robin] JibberJim: I found something but unfortunately it's member only 18:38:19 [robin] how about an XPath Basic that has just the information contained in CSS Selectors? 18:38:31 [robin] then use straightforward conversion 18:38:31 [Hixie] klotz: (and does it implement the xforms extensions to xpath? note that mozilla's xpath implementation is more like 300k) 18:38:38 [JibberJim] Dave Raggett: 18:38:44 [robin] björn managed the reverse very easily 18:38:55 [JibberJim] DR: I'd like to break apps outside of the browser 18:39:03 [JibberJim] DR: more various apps. 18:39:34 [JibberJim] DR: cheaper, more flexible 18:39:41 [JibberJim] DR: Easily adapted 18:40:26 [JibberJim] DR: Author defined controls and Multi-modal interface 18:40:40 [JibberJim] DR: Let people use their own XML 18:40:43 [dbaron] actually Mozilla's is 170K (elf32 binary) 18:40:52 [dbaron] (I just gave Ian an incorrect number a minute ago) 18:41:08 [JibberJim] DR: Zinf demo - differently themed mediaplayer 18:41:50 [JibberJim] DR: no chrome, windows irregular shaped 18:41:57 [JibberJim] DR: Describe Object model in XML 18:42:28 [JibberJim] DR: need way to close etc. 18:42:37 [MDubinko] XForms validator is at 18:42:54 [robin] XPath in pure Perl is 39k, including complete docs and installer :) 18:43:05 [JibberJim] DR: This is beyond XForms, split UI into an abstraction and to a theme for pres. 18:43:09 [Hixie] robin: and does _that_ work with dynamic DOM changes? 18:43:49 [robin] no idea 18:43:57 [Hixie] that's the hard bit 18:44:02 [robin] hence my proposal to piggy back the CSS code that does that 18:44:24 [Hixie] certainly using selectors instead of xpath in xforms would be an interesting thing to look into 18:44:30 [MDubinko] but why is dynamically-updating XPath relevant to XForms? 18:44:34 [JibberJim] DR: Layout intentions use SVG/XBL for delivering such controls, so we can create novel controls. 18:44:46 [Hixie] MDubinko: because the whole point of xforms is that it is dynamic? :-) 18:44:55 [MDubinko] (XForms has an explicit reevaluate-XPath-now step) 18:45:07 [JibberJim] DR: Layout delegated down to the layout manager rather than doing it all yourself. 18:45:14 [robin] I'd rather not have CSS Selectors instead of XPath in XForms 18:45:16 [heycam] i looked at dependencies in xpath, to make sure expressions are evaluated when they need to be 18:45:16 [JibberJim] DR: declaritive treatment of behaviour 18:45:27 [robin] it would require all Full implementations to support both 18:45:33 [Hixie] so drop xpath 18:45:35 [heycam] it's not too difficult, but you do need to walk the path to work out which nodes are covered 18:45:36 [JibberJim] DR: Simple event binding, state transition between named states. 18:45:54 [robin] you can't drop XPath, it's extensible 18:46:00 [robin] where CSS Selectors aren't 18:46:05 [Hixie] sure they are 18:46:15 [Hixie] mozilla and opera have both extended them 18:46:24 [robin] that doesn't quite count :) 18:46:28 [Hixie] why not? 18:46:30 [JibberJim] DR: Could invoke methods or update data, or raise events. 18:46:32 [robin] well I know of no implementation in which you can plug in your own functions 18:46:39 [Hixie] that's an implementation detail 18:46:40 [robin] whereas XPath has that all over the place 18:46:49 [robin] no, it's also a specification detail 18:46:56 [robin] XPath supports it natively 18:46:57 [Hixie] (and in any case it is arguably one of the problems of xpath) 18:47:12 [Hixie] anyway xpath and selectors do different things 18:47:19 [Hixie] for example selectors has no equivalent of xpath true() 18:47:21 [robin] given a defined svg prefix bound to a namespace, I can add svg:bbox(foo) and it'll work 18:47:28 [Hixie] and can't do maths expressions 18:47:30 [Hixie] etc 18:47:32 [robin] right 18:47:38 [JibberJim] DR: we can treat all input uniformally as events. 18:47:44 [robin] XPath is more an incomplete superset 18:47:52 [JibberJim] DR: EMMA XML lang. for interpreted input. 18:47:55 [Hixie] it's just a different language imho 18:48:06 [robin] yeah but there's overlap 18:48:08 [Hixie] sure 18:48:20 [Hixie] selectors is about going from an element in a dom, and a selector, and telling you if it matches 18:48:27 [Hixie] whereas xpath is about an expression language that is dom-aware 18:48:28 [robin] overlap that should be consolidated so as to make things using either more implementable on small footprint targets 18:48:47 [Hixie] would have been nice if the xpath people hadn't reinvented the wheel, true 18:48:49 [heycam] xpath for matching is just what is used in xslt 18:49:04 [robin] yup, XPath does matching 18:49:18 [JibberJim] DR: Various stuff already going on, Voice Browser folk, DI etc. 18:49:20 [JibberJim] DR: 18:49:22 [JibberJim] DR: 18:49:26 [Hixie] well, it can return a node set, and you can examine the nodeset for membership 18:49:33 [robin] Hixie: either they're different languages and they haven't reinvented the wheel, or they're not and they have, but you can't say both in the same breath :) 18:49:33 [Hixie] which means you can do matching with it 18:49:40 [JibberJim] DR: use xml for descriping apps not jsut data - XBL and XForms 18:49:44 [robin] Hixie: nope 18:49:48 [robin] that's a usage detail 18:49:49 [JibberJim] DR: Layout interntions, and what else? 18:50:15 [robin] for instance XML::XPath has a matches(xpath, node) that doesn't work that way 18:50:18 [Hixie] robin: they invented a language which had overlap with an existing language (although slightly different goals) without working on that existing language 18:50:42 [robin] it reads the XPath "in reverse" as it were and tells you it matches 18:51:17 [Hixie] ok 18:51:29 [JibberJim] Schepers: What can't XForms do - what more does it need? 18:51:29 [Hixie] so it in fact explicitly has some of the same goals in that case 18:51:40 [Hixie] in which case it really is reinventing (and extending) the wheel 18:51:48 [robin] not much point in going crying over split milk (provided there is any -- XPath does a lot more than CSS did back then or even now). Better look into convergence now 18:51:54 [Hixie] sure 18:52:21 [Hixie] don't really see much of a way to do that though, i mean the languages don't even have a compatible syntax subset 18:52:41 [robin] define the subset of XPath that is sufficient to express CSS 18:52:46 [JibberJim] John Boyer: xml-sig, wrap it all up into a single doc is important 18:53:00 [robin] use that in things like XForms Basic so that CSS implementations are basically reusable 18:53:07 [robin] with minimal overhead 18:53:15 [shayman] shayman has joined #workshop 18:53:23 [Hixie] robin: then what? the existing web (you know, those n billion documents i like to talk about) uses CSS selectors, and they aren't going to just switch 18:53:25 [JibberJim] JB: ability to wrap attachments into the same underlying data. 18:54:12 [bjoern] Why would they need to switch? 18:54:18 [robin] Hixie: where is that a problem? they can keep using it. Point is that XForms can keep using XPath (so that there isn't a kludgy "two languages supported here") and be implementable easily on mobile 18:54:28 [robin] exactly, they'd never need to switch 18:54:52 [JibberJim] SP: There's lots of more requirements coming in future XForms specs 1.1/2 etc. 18:54:54 [Hixie] robin: how does this help? Now you have a bunch of implementations that don't do full xpath, and those implementations will be continuously told to support full xpath. 18:55:11 [robin] it's just a smaller profile 18:55:17 [Hixie] dino: it's off topic for web apps imho 18:55:25 [dino] ok 18:55:39 [Hixie] robin: but as you know, i think profiles are bad :-) 18:56:00 [robin] yeah, but I still have to hear an argument on that one :) 18:56:11 [Hixie] david gave one in yesterday's presentation :-) 18:56:22 [robin] which I missed since I flew in late... 18:56:35 [dino] but we have lots of arguments on the positive side 18:56:35 [Hixie] ah. then see 18:57:30 [robin] hmmmm 18:57:37 [JibberJim] TVR: XForms has datatypes and abstract layers - rather than a presentation 18:58:29 [robin] that document conflates formats and profiles, I think that's stretching it 18:58:44 [JibberJim] TVR: More stuff shouldn't be done in XForms, they should be done in another spec which says how to combine this all together to create webapps 18:58:45 [Hixie] i think the point is they are equivalent 18:58:56 [Hixie] (to authors) 18:59:01 [robin] I don't see it making claims supporting that point 18:59:10 [Hixie] *shrug* 18:59:20 [robin] and agitating XML Schema is too easy :) 19:01:48 [JibberJim] SP: XForms is the intent of the structure, not the rendering. 19:02:45 [JibberJim] CL: As soon as you bring in a host language you have different versions of the XForms packaged is it really of value? 19:03:44 [JibberJim] SP: Yes we need a way to keep things seperate, but also need to package some how. 19:05:23 [JibberJim] MB - XForms isn't perfect, but the missing things are mostly outside of the domain - how do we make a range control look like a thermometer when it's possible, but it can fallback to somethign else. 19:06:05 [bert_lap] More about Dave's "layout intentions" to add to CSS: ideas about flexible "glue and springs" from XUL ( ) and grid templates, simplified from the NOTE-layout draft ( ) 19:06:30 [schepers] MB: example of an SVG map that lets you choose a city, but it not available, a dropdown list of cities is given instead 19:07:52 [JibberJim] MB: Here's an XHTML 2.0 doc with XForms and then have it rendered in SVG or something. 19:08:40 [dino] thanks to the irc monkeys for scribing while Leigh is at the mic. 19:09:27 [JibberJim] MB: we can start spreading events to different applications that reflect the model. 19:09:51 [JibberJim] Peter from SE: XForms aren't on mobile devices because we're focussed on imaging and entertainment, no real use case yet. 19:10:37 [Hixie] Peter from SE: We would implement inputmode if it was its own spec 19:10:46 [Hixie] TVR: It is its own appendix! 19:11:05 [JibberJim] Leigh Klotz: We're using XForms internally on lots of products. XForms+XHTML+XSLT works well on embedded 19:11:35 [JibberJim] LK: you're not going to write DOOM in XForms. 19:12:20 [JibberJim] LK: If you're sitting on SVG/XForms implementations get something out there please! 19:12:41 [schepers] LK: I'm putting my money where my mouth is 19:13:06 [JibberJim] LK: focus on what we could do with one out there. 19:13:51 [JibberJim] CL: XForms could let the mobile guys make money, gamble, book tickets etc. 19:16:30 [JibberJim] Lunch! 19:41:29 [bert_lap] bert_lap has joined #workshop 20:31:47 [xover] xover has joined #workshop 20:34:49 [robin] robin has joined #workshop 20:36:02 [JibberJim] Well we've just got a few more speakers, and then getting out into general discussion. 20:36:46 [bert_lap] bert_lap has joined #workshop 20:37:14 [JibberJim] Bert, will you be at Chaal's image desc meet next week? or indeed any one else. 20:51:23 [MDubinko] MDubinko has joined #workshop 20:59:05 [Hixie] Hixie has joined #workshop 21:03:12 [Tantek] Tantek has joined #workshop 21:05:03 [JibberJim] Cameron McCormack up.... 21:05:21 [naviuser] naviuser has joined #workshop 21:05:31 [JibberJim] CM: First up, compound docs, we have profiles for a complicated combination of specs. 21:05:54 [JibberJim] CM: We need some sort of extensibility to remove the scalability problem of having to do everything in a single UA. 21:06:04 [bjoern] RRSAgent, pointer 21:06:04 [RRSAgent] See 21:07:37 [JibberJim] CM: we need to seperate out rendering that only needs to only implement parts and have other parts rendered to SVG as the base rendering environment as the doc using XBL like framework. 21:07:46 [Steven] Steven has joined #workshop 21:07:49 [JibberJim] CM: Give us a common framework for compositing everything together. 21:08:40 [Dan] Dan has joined #workshop 21:10:04 [JibberJim] CM: I've not read the all new XBL spec as it's not public yet... 21:10:12 [ph] ph has joined #workshop 21:10:32 [JibberJim] CM: how do events get handled can events get mapped into different events at different levels? 21:10:39 [shayman] shayman has joined #workshop 21:10:45 [nms] nms has joined #workshop 21:11:22 [nms] nms has left #workshop 21:12:25 [JibberJim] CM: now about layout for webapps. 21:12:59 [JibberJim] CM: biggest SVG pain was layout. Laszlo layout was pretty easy 21:14:52 [JibberJim] CM: we need some SVG suggestions 21:15:27 [JibberJim] TVR: Your talk that for compound docs you want to map XHTML MathML etc. getting mapped to SVG compositing. 21:16:07 [JibberJim] TVR: Will the XBL mapping maintain both DOMs so it's accessible. 21:17:06 [JibberJim] CM: based on RCC sure, who knows on XBL 21:17:44 [JibberJim] DBaron: The original DOM sticks around, you also get a 2nd tree of the rendering. 21:18:10 [JibberJim] TVR: Can I be Tarzan jump from tree to tree? 21:18:15 [JibberJim] CL and all: Sure! 21:18:56 [JibberJim] Schepers: Layout in XBL is likely too expensive? 21:19:06 [JibberJim] General agreement that it's better. 21:19:39 [JibberJim] MarkBirkbeck: XBL that maps to XBL that maps to XBL that could map to SVG - it's a multi-layered thing. 21:20:45 [JibberJim] MB: we need to keep the abstract later, and a shadow tree is already onto the renderer? 21:20:55 [MDubinko] s/Birkbeck/Birbeck/ 21:21:05 [JibberJim] Cheers, apologies Mark 21:21:54 [JibberJim] Hixie: We shouldn't let XBL etc. give us the chance to send proprietary mark-up over the web, focus on sending standards. 21:22:23 [JibberJim] Glen Gersten, Ideaburst now. 21:22:28 [Chris] Chris has joined #workshop 21:22:53 [JibberJim] GG: We're hooked on SVG 21:23:04 [JibberJim] GG: Some example webapps 21:23:25 [Chris] facilities management application in SVG 21:23:55 [Chris] s/GIF/PNG/g 21:25:00 [tjjalava] tjjalava has joined #workshop 21:25:17 [JibberJim] GG: Problems with HTML->SVG communication. 21:25:33 [Chris] GG: proble4ms with integrating HTML renderers and SVG renderers - not crossplatform, no standards 21:25:45 [Chris] 'it works on windows' 21:26:10 [JibberJim] GG: online ticket purchasing app, pricing and availability in an event in a show. 21:26:17 [JibberJim] 'only with IE' 21:26:52 [JibberJim] regular HTML form that produces HTML ticket and map and info to print. 21:27:26 [JibberJim] GG: A developer wouldn't want to creates progs to distribute, needs to work anywhere. 21:27:56 [Chris] javascript execution error - resource scheduling 21:28:07 [Chris] CL: use an svg Load event) 21:28:16 [JibberJim] GG: dodgy script error, (to illustrate the need of cross "frame" events.) 21:28:54 [JibberJim] I think he was illustrating that it needs to know that in the HTML document CL, rather than it not actually being possible to actually do. 21:29:15 [Chris] mapquest-ng using svg (faster download and better quality than the real one) 21:29:33 [Chris] yes JJ probably 21:30:21 [JibberJim] now showing a facillities management svg tool. 21:30:43 [Chris] well supported runtime environment 21:30:45 [JibberJim] GG: What we need is a well supported runtime environemnt 21:30:48 [Chris] true cross platform 21:30:49 [Chris] fast 21:30:53 [bert_lap] Hmm, proprietary GUIs. Either it needs a "submit" button or it should not run in a browser. I hate things that look like forms but behave like applications :-( 21:30:54 [Chris] easy to install 21:30:55 [JibberJim] GG: fast easy to install and cross platofrm 21:30:57 [Chris] well deployed 21:31:07 [Chris] **** quality development tools **** 21:31:21 [JibberJim] bert_lap++ 21:32:00 [JibberJim] GG: we need standard widgets, layout manager, 2 way comms, windows, ready state checks, write and retrieve data from local client, and security/IP protection 21:33:25 [JibberJim] GG: we need dev tools, Full fledged debugger, event tracking, performance optimisations, variety of runtime enviroments and extensible 21:33:35 [Chris] GG: SVG needs more work in runtime environment area 21:33:59 [JibberJim] Tantek: Standard set of interface widgets? what do you mean? 21:34:11 [Chris] GG: real widgets (things that could bind to XForms abstract widgets) 21:34:23 [JibberJim] GG: we need widgets that are rendered, rather than having to redesign everything each time. 21:34:29 [Chris] Tantek references CSS3 Basic UI module, appearance property 21:35:31 [JibberJim] Mark (?) : We don't need a standard set of widgets, we need a standard framework so look and feel isn't the same everywhere. 21:35:33 [Chris] MV: look and feel has to be customisable 21:35:40 [Chris] Marc Verstaen 21:35:45 [Chris] , Beatware 21:37:15 [JibberJim] GG: A standard looking widget, we just need something without having to write our own. 21:37:16 [Tantek] I also pointed out that when the user navigates from one concert to another concert in the stadium application, the URL stays the same. What happens when you hit the Back button? 21:37:30 [Chris] 21:37:33 [Tantek] The answer was, you go to the previous page before the stadium application. To which I said seems broken. 21:37:34 [JibberJim] Sorry, yes Tantek - it seems just a criticism of that implementation... 21:37:38 [dbaron] 3 states for buttons? normal, :hover, :active, :hover:active, :disabled, ... ? 21:37:53 [Tantek] No it is not a criticism for that implementation. It is the same problem as Frames. 21:38:17 [Chris] AH: sliding scale of complexity, where do you stop. 21:38:27 [Hixie] the problem tantek mentioned is definitely something that needs to be addressed for web applications, however complex they are 21:38:30 [robin] Tantek: just ditch the browser :) 21:38:34 [Hixie] (web applications _now_ have this problem) 21:38:48 [Tantek] robin, this has nothing to do with the browser 21:39:17 [Steven] Steven has joined #workshop 21:39:18 [JibberJim] Yes Hixie, but I think that's a weakness of the implementations... 21:39:20 [robin] the back button is a feature of the browser 21:39:40 [Hixie] JibberJim: the web apps implementations, or the browser implementations? 21:39:45 [JibberJim] the web app impl. 21:39:59 [Hixie] JibberJim: sure, but it should be easier for them to work with that environment 21:40:08 [JibberJim] agreed 21:40:28 [Hixie] easier serialisation of state, e.g., and notification of "going back", "going forward", "came from forward", "came from back" 21:40:30 [JibberJim] Doug Schepers? - Why SVG? 21:40:49 [JibberJim] hixie++ 21:41:05 [Hixie] 'course i want to add that stuff to HTML. ;-) 21:41:40 [JibberJim] DS:CSS cannot antcipate everything and we can't count on UA authors for everything, we need control over stuff to make our own UI's 21:41:52 [JibberJim] DS: HTML is great for what it is, but it's not much good for apps, charts etc. 21:41:57 [robin] imho it needs to go into a UA spec, not HTML 21:42:22 [JibberJim] DS: we need keyboard events, we need to get to local files. 21:43:18 [Tantek] robin, people want the back button 21:43:24 [JibberJim] DS: Got to be some way around Security letting you write to files. 21:43:38 [Tantek] people like being able to copy/paste URLs that contain state (like "this concert") and send them to friends 21:43:55 [JibberJim] Yes Tantek, but "the UA navigation events" aren't document level - no need in HTML, they're UA level, and apply beyond HTML, a UA spec could address them 21:44:08 [Tantek] some people might even *link* to such URLs, perhaps even on blogs. :) 21:44:24 [robin] that being said, the UA events would be dead useful 21:44:47 [robin] ASV has done some experiment with those, but it's not complete by any margin 21:45:06 [Tantek] the point is, that that example (stadium), and many (most?) so-called web applications almost always break the user's expectations with regards to Forward/Back and URLs. 21:45:21 [robin] on this we agree 21:45:30 [JibberJim] I think we all do. 21:45:38 [JibberJim] DS: we need save and resume ability 21:45:43 [robin] I just happen to believe that it is mostly a problem with the browser not giving app authors sufficient control over that 21:47:13 [Tantek] no, it is mostly a problem with authors using <a href="javascript:void(document.location='...')">... type gorp rather than using http URLs, fragment identifiers, etc. that work well with the infrastructure (favorites, bookmarks, forward/back, copy/paste etc.) 21:47:40 [JibberJim] I thought authors were all authoring well structured XHTML+CSS now Tantek? 21:47:59 [Tantek] the ones that are getting paid well are, in my experience 21:48:12 [Tantek] never said "all", just the trend 21:48:27 [JibberJim] Hixie: wouldn't it be better to integrate HTML and SVG better. 21:49:23 [JibberJim] CL: how do we implement cross domain scripting in multiple platforms 21:50:09 [JibberJim] location="..." leaves the back/forward button working anyway Tantek... 21:50:20 [robin] SVG developers don't do the javascript URL style gorp, but they still can't do enough on the client side while also maintaining expectations re browser chrome and URLs 21:50:42 [JibberJim] Steve Zilles: 21:51:36 [JibberJim] SZ: I'm a doc guy 21:51:54 [JibberJim] SZ: co-chair of XSL but also on HTML and CSS 21:52:34 [JibberJim] SZ: Compound docs lead you into doing webapps 21:52:55 [JibberJim] SZ: they need to put things together, and typically no product supports them all 21:54:26 [JibberJim] SZ: mixing components - property propogation, event propogation, formatting and line breaking model, timing and animation, spell check, search, accessibility 21:56:04 [JibberJim] SZ: a simple interface for property propogation can solve a lot, should be small 21:56:33 [JibberJim] SZ: computed value of property, if it's inherited, and identify what properties it's interested in 21:57:37 [JibberJim] SZ: who's in charge: Which "document"? 21:58:03 [JibberJim] SZ: SMIL/XHTML/SVG/XFORMS all have reasons. 21:58:16 [JibberJim] SZ: timing/the document/layout/interactions/ 21:58:51 [JibberJim] SZ: don't look at it that way, that's a mistake, which capability provider is in charge? the OS? The Browser? on a Platform independant VM? 21:59:08 [JibberJim] browser's weak, a lot of interest in a Platform independant VM 21:59:18 [JibberJim] SZ: what then should we be doing? 21:59:37 [JibberJim] SZ: A practical combination of the various pieces and explore the VM idea 21:59:55 [JibberJim] SZ: A combination of XHTML+SMIL+SVG maybe XFORMS .... 22:00:35 [JibberJim] SZ: what the mobile guys have been looking at - this has an immediate utility, and it gives us the experience of integrating these few elements. 22:00:58 [JibberJim] SZ: which will then give us the experience of a practical virtual machine 22:01:12 [JibberJim] SZ: er experience to create a practical VM./ 22:05:22 [JibberJim] donkey sounds appear in the room. 22:05:50 [JibberJim] we break for early coffee to give us a full last session to hammer out what to do,. 22:08:03 [tjjalava] tjjalava has joined #workshop 22:24:00 [robin] robin has joined #workshop 22:33:03 [tjjalava] tjjalava has joined #workshop 22:37:14 [jrandall] jrandall has joined #workshop 22:38:03 [tjjalava] tjjalava has joined #workshop 22:43:26 [jrandall] jrandall has joined #workshop 22:44:00 [naviuser] naviuser has joined #workshop 22:44:04 [Chris] Chris has joined #workshop 22:45:14 [JibberJim] JibberJim has joined #workshop 22:45:29 [Tantek] DA: Would like to mix all those things. Not sure about profiles though. 22:46:38 [Tantek] TVR: Device specific profiles are a bad idea. 22:46:55 [Chris] once again, small profiiles are not device specific 22:47:01 [JibberJim] JibberJim has joined #workshop 22:47:11 [Chris] xhtml basic is not mobile specific or device specific, etc 22:48:18 [Tantek] PS: I don't like the term device specific profile. However, tiny, basic, is good. 22:48:19 [Chris] PS: xhtml svg and smil as proposed in last session is a good set to start with 22:48:49 [Chris] PS: next year we do svg basic 22:48:52 [Chris] :) 22:49:03 [Hixie] chris, if SVG Tiny is not device-specific, why is the URI " " ? :-) 22:49:27 [Chris] marketing 22:49:32 [Hixie] uh huh :-) 22:49:44 [JibberJim] Hakon Lie - we need a DOM, and a programming language for webapps. 22:49:57 [Chris] agree with hakon 22:50:06 [JibberJim] Definately! 22:50:13 [bert_lap] bert_lap has joined #workshop 22:51:05 [JibberJim] JF: we should have a nice clean arch. 22:51:34 [JibberJim] MB: We should combine things first so we get something to work, but we need a vehicle to bring them together? 22:51:49 [JibberJim] MB: What's the mime-type for this type of doc? 22:52:21 [MDubinko] application/kitchen-sink 22:52:30 [Tantek] text/xml ? 22:52:39 [Hixie] MDubinko: isn't that image/svg+xml ? 22:52:58 [JibberJim] no, they won't give us application for SVG docs. 22:53:17 [JibberJim] someone: We have DOM issues - we need a DOM WG. 22:53:29 [Chris] text/* is deprecated 22:53:32 [Chris] for xml 22:53:44 [MDubinko] application/plain :) 22:53:46 [Chris] several: DOM WG went away, its a problem 22:53:46 [JibberJim] Steven Pemberton reiterates the point for work in the DOM workspace. 22:53:59 [Chris] MDubinko++ 22:54:31 [bert_lap] HT 22:54:48 [JibberJim] A Strawpool on SVG+HTML+SMIL+CSS+XForms 22:55:00 [JibberJim] Robin Berjon - CSS is just a module of HTML [chuckle] 22:55:46 [Steven] Steven has joined #workshop 22:56:01 [JibberJim] CL: All proposals have taken an existing profiles SVGt+XHTMLb+SMILt and combined to do them. 22:57:09 [JibberJim] AH: That's an interesting point - the existing specs aren't complete they don't give people interop - we don't need a new profile we need to improve the interop. 22:57:38 [JibberJim] CL: yes, you need to do that for specs, but need to also ensure they work together so they can be proved with test cases. 22:57:56 [JibberJim] CL SVG+SMIL+XHTML is an interesting enough test case to proof this. 22:58:09 [JibberJim] JF: the biggest part is an interop test suite. 22:58:17 [JibberJim] Hakon: Agrees to. 22:58:31 [schepers] Hakon++ 22:58:34 [JibberJim] HL: microsoft, the specs are done, you can begin implementing... 22:59:04 [Steven] ( 22:59:04 [Steven] Pot calling the kettle black? :-) ) 22:59:15 [Chris] now Opera have an offer of programming support from x-port.net they can start, too :) 22:59:47 [shayman] shayman has joined #workshop 23:00:09 [JibberJim] Steve Zillis: working groups don't have enough time to integrate with other specs as tightly as they need to do it so a WG is valuable to give that time. 23:00:21 [JibberJim] SZ: but are there the workers available to do it? 23:00:45 [Chris] Charles Ying, Openwave 23:01:08 [JibberJim] Charles Ying: The effort you start out with should be even smaller than SVG+SMIL+XHTML etc. perhaps just 2 23:01:18 [JibberJim] CY: We'd suggest XHTML+SVG but we're mobile biassed./ 23:03:04 [MDubinko_] MDubinko_ has joined #workshop 23:03:07 [shayman] shayman has joined #workshop 23:03:10 [Tantek] CY: I really need a conformance test suite for that. 23:03:14 [JimJibber] JimJibber has joined #workshop 23:03:15 [tjjalava_] tjjalava_ has joined #workshop 23:05:04 [MDubinko__] MDubinko__ has joined #workshop 23:05:05 [tjjalava_] tjjalava_ has left #workshop 23:05:09 [heycam] heycam has joined #workshop 23:05:09 [dbaron] dbaron has joined #workshop 23:05:12 [klotz] klotz has joined #workshop 23:05:18 [tjjalava_] tjjalava_ has joined #workshop 23:05:20 [JibberJim] JibberJim has joined #workshop 23:05:21 [naviuser] naviuser has joined #workshop 23:05:24 [Tantek] JF: Focus on external combination first, inline later. External is easier. Inline combinations have more issues. 23:05:38 [schepers] schepers has joined #workshop 23:06:20 [JibberJim] VH: It's hard to implement test guesses, who's going to do the work? 23:06:44 [JibberJim] VH: take a humble approach limit the scope. 23:07:24 [JibberJim] Do it quick... 23:07:39 [JibberJim] next question: 23:08:14 [JibberJim] Dino: If a WG is chartered to look at a virtual machine? 23:08:17 [jrandall] jrandall has joined #workshop 23:08:36 [bert_lap] Effective Web documents profile is more like HTML 4.01 (+ XHTML 2.0 (+ SMIL 1.0 (+ SVG tiny (+ MathML 2.0 (+ XForms 1.0 (+ CSS 2.1(+ PNG 2e (+ JPEG (+ MP3 (+ UTF-8 (+ HTTP/1.1 (+...)))))))))))) 23:08:40 [Chris] Chris has joined #workshop 23:08:40 [JibberJim] Dino: declaritive is most welcome, but imperative important. 23:09:28 [mdubinko] bert_lap, you should s/XForms 1.0/XForms 1.0 Basic/ 23:09:36 [ph] ph has joined #workshop 23:09:42 [JibberJim] Dino: should charter a WG to work in this space? 23:10:58 [JibberJim] Dan Austin: the first requirement for this Group would be to examine the requirements - the first thing they'd decide is that compound docs need to get solved first. 23:11:47 [JibberJim] Glen Girsten: I'd be in favour to have a group, but it can't take a long time. 23:12:40 [JibberJim] GG: existing spec's aren't looking at application space. need more info about what a runtime environment is. 23:13:06 [JibberJim] Hakon Lie: FAST! 23:13:28 [JibberJim] HL: I don't think this workshop could create 4 working groups... 23:13:43 [JibberJim] HL: a group to create use cases would be a good starting point. 23:14:08 [bert_lap] What are the other 3 groups? 23:14:12 [schepers] correction: Hakon doesn't think it could start only 1 WG, but several 23:14:21 [dbaron] I thought HL said that he doesn't think this ws could successfully create 1 WG, but could create 4. 23:14:33 [JibberJim] sorry, that was very poor scribing! 23:14:53 [JibberJim] remove "don't" 23:16:08 [JibberJim] GG: there are a variety of problems and a compound doc isn't necessary before webapps work can be started. 23:16:31 [JibberJim] GG: the DOM API is specified in a heavy way - we could convert the DOM to declaritive. 23:16:41 [JibberJim] GG: XML events is a reflection of DOM 2 events. 23:16:54 [Steven] Steven has joined #workshop 23:16:56 [JibberJim] GG: XML events is more manageable 23:17:13 [JibberJim] GG: referencing XML events spec is easier than referencing DOM 2 events. 23:17:25 [JibberJim] me either 23:18:03 [JibberJim] TVR: is the document the interface? should we be asking document or app? 23:18:16 [JibberJim] TVR: It's not about inventing new, it's about stitching together. 23:18:27 [robin] it may be easier for authoring tools, that's one thing I can think of 23:18:56 [Hixie] TVR: Let's write use cases 23:18:57 [JibberJim] I took it that he was saying SVG spec people had easier life because XML events existed? 23:19:08 [JibberJim] TVR: find the low hanging fruit. 23:19:08 [Hixie] TVR: Let's write actual web apps with angle brackets that are the use cases we have 23:19:25 [JibberJim] TVR: no new spec needed. 23:19:27 [robin] you mean the spec we patched? ;) 23:19:32 [schepers] TVR: start with use cases, then end with test cases 23:19:58 [JibberJim] Steve Zillis: Do we need to define what an app is, and if it's different from a doc - no it's irrelevant, what's relevant is what people need. 23:20:11 [robin] JibberJim: it does make some things easier, for instance for sXBL in authoring tools. They don't need to run script to know which events are there, that sort of stuff 23:20:43 [robin] or they could just run the scripts for the Bind* events 23:21:26 [JibberJim] hmm, but if the script inside an event want's to start listening to another event (e.g. mousedown starts listening for mousemove) then they'll need to execute the script in the first one. 23:21:55 [Hixie] dbaron: There are two problems: how to make the specs work together is a separate issue than how to get multiple implementations implementing difference namespaces 23:22:30 [Hixie] dbaron: e.g. DOM Events defines how to do DOM Events across namespaces, and CSS defines how to do inheritance across namespaces 23:22:40 [JibberJim] CL: I agree, but sometimes link you still want things to work, and sometimes you want to stop inheritence, you want to overide the normal difference 23:23:26 [JibberJim] AH: Creating use cases for apps - I'm concerned they tend to be very simple, demonstrating we can build a calculator isn't much, you need to look at sophisticated things that people have tried to pull of. 23:24:19 [JibberJim] AH: simple use cases won't answer the bigger question. 23:24:37 [schepers] I don't necessarily agree with that 23:24:55 [JibberJim] Mark Birbeck: on "we have the specs" the specs suggest you have 1 system which does it all 23:25:17 [JibberJim] MB: but we have modular bits of software 23:25:28 [JibberJim] MB: XForms might be too complicated for XForms... 23:25:31 [bert_lap] Re "simple cases": but maybe simple and complex cases aren't solved with the same technology. 23:26:49 [JibberJim] MB: the browser today doesn't provide the mechanisms for combining things to work together - CSS doesn't really say how a module knows it's green or red. we need to define how that the cross-module communication works. 23:27:14 [Chris] dynamic infoset model - hmmm can you say more about that? 23:28:06 [JibberJim] TVR: What did we conclude from those questions? 23:29:47 [JibberJim] Straw polls on what needs to be done is suggested: 23:29:49 [Chris] application/plain was suggested ..... 23:30:49 [JibberJim] LK: on XHTML+SVG+SMIL combination the only browser comm would be name value pairs. 23:31:08 [JibberJim] LK: Is the complexity going to rise or fall if you combine or keep seperate? 23:31:40 [JibberJim] er, that was Suresh from nokia sorry. 23:32:17 [JibberJim] TVR: We've had the broad brush, we should identify specific problems 23:33:04 [JibberJim] TVR: and solve them which will enable it to happen. 23:33:32 [JibberJim] CL: hardware turnover quickly in mobile world gives us the chance to test and get users out there quickly. 23:33:50 [JibberJim] CL: a crazy w3 browser gets 50 users worldwide. 23:34:04 [bert_lap] On the MIME type for the profile: we need a more catchy name. Application/plain is a good joke, but something like Techdoc or Webdoc is easier to remember. 23:34:06 [JibberJim] CL: 500 million phones gives us the chance. 23:34:32 [JibberJim] Suresh - we need profiles rather than do everything spec. 23:34:49 [schepers] and if it works on phones, it can work on the desktop 23:34:54 [xover] bert_lap: If you pick a MIME Content-Type based on how "catchy" it is, so help me I will hunt you down and *kill* you! :-) 23:35:06 [JibberJim] CL: the mobile down really push down what they want to implement, but what they implement they implement it all of. 23:35:40 [JibberJim] Dan Austin: Profiles will increase the number Adhoc systems 23:36:14 [JibberJim] DA: are we going to profilerate the number of specs? 23:36:49 [JibberJim] Marc V. : Is the W3c interested in doing the work, as someone will do the work to integrate the specs other than the W3c 23:37:01 [JibberJim] TVR: Does the W3 need to do the work? 23:37:23 [JibberJim] CL: Vodaphone have stated that october's phones will ship with XHTML+SVG etc. 23:37:32 [JibberJim] AH: isn't it too late? 23:37:34 [dbaron] no "etc."? 23:37:44 [JibberJim] CSS ? 23:37:59 [JibberJim] CL:no not too late, software's still to come. 23:38:03 [tjjalava] tjjalava has joined #workshop 23:38:19 [JibberJim] CL: we need to specify this so that the vodaphone profile isn't unique, but covers other areas to. 23:38:27 [Chris] we want a profile for all the vendors not just a single one 23:38:39 [Chris] and we can learn a lot from building and testing it 23:39:00 [robin] robin has joined #workshop 23:39:46 [schepers] Rich S. from IBM: put authoring tool details in the Spec 23:39:59 [JibberJim] JF: What %age of HTML is hand authored? 23:40:06 [Chris] agree, authoring tool support is a success criterion 23:40:36 [Hixie] why is "XForms" the only one that is underlined in red on dean's projection? ;-) 23:41:22 [schepers] JF: agree, need good criteria for authoring tools 23:43:21 [Chris] hixie - because his spell checker doesn't recognize it 23:43:28 [schepers] Marc Verstaen, Beatware: we have the tools, that's not the problem 23:43:35 [bjoern] RRSAgent, pointer 23:43:35 [RRSAgent] See 23:45:53 [Chris] Options: 23:46:03 [Chris] a) XHTML.b + SVG.t 23:46:20 [Chris] b) XHTML.b + SMIL.b + SVG.t 23:46:38 [Chris] c)XHTML.b + SMIL.b + SVG.t + XForms.b 23:47:58 [schepers] Question 2: Charter an incubator group to address: use cases for WebApps; roadmap (strategy for where to go); keep tool vendors in mind 23:49:04 [Chris] rrsagent, pointer? 23:49:04 [RRSAgent] See 23:51:35 [Chris] a) XHTML.b + SVG.t + CSS.m 23:51:45 [Chris] b) XHTML.b + SMIL.b + SVG.t + CSS.m 23:51:57 [Chris] c) XHTML.b + SMIL.b + SVG.t + XForms.b + CSS.m 23:55:08 [CY] CY: hmmm, i probably shouldn't suggest ECMAScript integration 23:56:30 [schepers] yes! we need it! (JibberJim not schepers) 23:56:41 [schepers] also schepers 23:59:56 [Chris] ls
http://www.w3.org/2004/06/02-workshop-irc
crawl-002
refinedweb
10,524
58.01
This forum is closed. Thank you for your contributions. I have an application that periodically cannot find the following key in our WMI service (Windows 2008 R2 by the way): Win32_LogicalShareSetting A google search finds no informaiton on this. I thought maybe it was a descripter installed by some third party, but the only one I thought it may be from is Neverfail, and they claim it is not theirs but they use it to detect shares on the system. Can anyone help? Can I get some details on this? What namespace should it be in, etc? cheers Thanks for the reply. The company that develops the software has gotten back to me. It was a diagnostic output error describing a key that does not exist. The correct key is Win32_LogicalShareSecuritySetting Thanks Hi, Please make sure the spelling of the key is correct. Or please perform full disk scan using your antivirus software to verify if the system is virus free. Regards,
https://social.technet.microsoft.com/Forums/en-US/4e545e2d-424f-4c0b-ad65-8063d5df22a7/what-is-win32logicalsharesetting-in-wmi?forum=itprovistaapps
CC-MAIN-2022-27
refinedweb
162
74.69
I'm a little lost (still working with Ron Jeffries's book). Here's a simple class: public class Model{ private String[] lines; public void myMethod(){ String[] newLines = new String[lines.length + 2]; for (i = 0, i <= lines.length, i++) { newLines[i] = lines[i]; } } } Model myModel = new String[0] myModel.myMethod() myModel.lines lines[0] null I think your example is probably not the same as your actual code based on your description. I think the problem is that arrays are zero-based and thus an array initialized as: string[] lines = new string[0]; has no elements. You need to change your loop so that you check that the index is strictly less than the length of the array. As others have indicated you also need to make sure that the array itself is not null before trying to reference it. My take on your code: public class Model{ private String[] lines = new string[0]; public Model( string[] lines ) { this.lines = lines; } public void myMethod(){ int len = 2; if (lines != null) { len = len + lines.length; } String[] newLines = new String[len]; for (i = 0, i < lines.length, i++) { newLines[i] = lines[i]; } } }
https://codedump.io/share/EoY5CZW5KUOZ/1/how-many-dimensions-in-an-array-with-no-value
CC-MAIN-2017-17
refinedweb
191
75.1
You can try this: //Code Starts $(document).ready(function() { $("#btnDisable").click(function(e) { $("*").attr("disabled", "disabled"); e.preventDefault(); }); }); //Code Ends Ember-auth isn't necessarily just for apps. You can configure the endpoints like signInEndPoint and signOutEndPoint to point to the urls that need to be hit on your server to do authentication. Also there isn't any reason to bundle the client-side and server-side in the same repository here. The demo does this because the asset pipeline support in rails is very convenient for demoing, typically for avoiding a front-end build step. You can treat it as just a library file that you include alongside ember. I want to understand where the Razor View Engine actually generates the HTML from the templates that we create in the view It happens on the web server. Once the HTML is fully created on the web server, this HTML is sent to the client browser. The solution i used: First I use a 3rd party javascript library to select the crop area like jCrop. Once i got the coordinates (x1,x2,y1,y2), i draw a copy of an image to a canvas. var canvas = document.getElementById('drawcanvas'); var context = canvas.getContext('2d'); canvas.width = canvas.width; // clear canvas var imageObj = new Image(); imageObj.onload = function() { // draw cropped image // ... context.drawImage(imageObj, sourceX, sourceY, sourceWidth, sourceHeight, destX, destY, sourceWidth, sourceHeight); var dataURL = canvas.toDataURL(); }; imageObj.src = // image url After i drew the canvas, i converted the canvas to a DataURL which is in base64 format. On Is this a website or an application that you need to hit geocoding servers on every request? If so, you should use client-side. If you have some predefined latLng values then you can use server side geocoding and cache the results for a couple of hours. This way, you probably won't need to worry about quota. From Google Developers' site: When to Use Client-Side Geocoding The basic answer is "almost always.". See here You can use php DateTime for this. The current time The midday time The midnight time Here is an example. $now = new DateTime; $morning = new DateTime('08:00'); $night = new DateTime('22:00'); // next time is night if( $now > $morning && $now < $night ) { $time = $now->diff($night); } // next time is day else { $time = $now->diff($morning); } $target = $time->format('%h:%i:%s'); // returns a string. The request made from the client side is somewhat similar to this if I'm right: The post data from your client side is of the form: &content=valueIWantToSend In the ROR application, in the def create method, params[:content] will equal valueIWantToSend. Either change params[:post] to params[:content] on the server-side or change content to post on the client-side (Inside the try block) ASPX: <%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns=""> <head runat="server"> <title></title> <script src="js/jquery-1.10.2.min.js" type="text/javascript"></script> <script type="text/javascript"> $(document).ready(function () { $('#lblCheck').hide(); $('#btnProceed').click(function () { var $this = $('#chkTerms') if ($this.is(':checked')) { $('#lblCheck').hide(); return true; } else { $('#lblCheck').show(); Use SignalR is one option but if you wanna do yourself instead of inherit your controllers class from Controller, inherit it from AsyncController. Controllers that inherit from AsyncController can process asynchronous requests, and they can still service synchronous action methods. public class HomeController : AsyncController { public void ExtensiveTaskActionAsync() { AsyncManager.OutstandingOperations.Increment(); Task.Factory.StartNew(() => DoExtensiveTask()); } private void DoExtensiveTask() { Thread.Sleep(5000); // some task that could be extensive AsyncManager.Parameters["message"] = "hello world"; AsyncManager.OutstandingOperations.Decrement(); } Use can do like this : $('a').click(function () { var data = { userName: $(this).attr("id") }; var dataVal = JSON.stringify(data); $.ajax({ type: "POST", url: "Default.aspx/loadNewPage", contentType: "application/json; charset=utf-8", data: dataVal, dataType: "json", success: function (id) { alert("loaded"); } }); }); [WebMethod] public static void loadNewPage(int id) { // do the loading of the new HTML, etc. } You can know more about Jquery AJAX call over here. You have to modify you method public ActionResult Media() { //do some stuff here return View(); } to something like public JsonResult Media() { //do some stuff here return Json(new { myData = RenderPartialViewToString("ViewName", optionalModel), errorMessage = error }); } Add following method with reference to ASP.NET MVC Razor: How to render a Razor Partial View's HTML inside the controller action protected string RenderPartialViewToString(string viewName, object model) { if (string.IsNullOrEmpty(viewName)) viewName = ControllerContext.RouteData.GetRequiredString("action"); ViewData.Model = model; using (StringWriter sw = new StringWriter()) { ViewEngineRe First of all, I would suggest JSON rather than XML to be used for exchange format, it is much easier to parse JSON at the javascript side. Then, speaking about the architecture of your app, I think that it is better to write a server web application in Python to generate JSON content on the fly than to modify and serve static files (at least that is how such things are done usually). So, that gives us three components of your system: A client app (javascript). A web application (it does not matter what framework or library you prefer: django, gevent, even Twisted will work fine, as well as some others). What it should do is, firstly, giving the state of the points to the client app when it requests, and, secondly, accepting updates of the points' state from the next app and storing the Client.js <script src="/socket.io/socket.io.js"></script> <script> var socket = io.connect(''); $("#urAnchorTagId").click(function(){ socket.emit('urEvent','urData'); }); </script> Server.js var app = express(); var server = http.createServer(app); var io = require('socket.io').listen(server); server.listen(8080); io.sockets.on('connection', function (socket) { socket.on('urEvent', function (data) { function init(){ //ur task }; }); }); You should add loader for this kind of functionality. I hope you should be using update panel. You can add update progress control for the time request is processing on server side. It will hint the user to wait. You can also use ajax request by jquery to do similar kind of work. You haven't mentioned jQuery in your tags, but that is what I would look at. Specifically, the jQueryUI library, which runs on top of jQuery. jQuery is a javascript library that is more efficient (less typing, do more with less code) than classic javascript. See the jQueryUI sortable widget, here. For jQuery tutorials: Alex Garret's jQuery tutorials at the thenewboston.com Alex Garret's website with many more tutorials. You have to add callbacks for success / error to trigger an event when the user accepts to share their location: navigator.geolocation.getCurrentPosition(success, error); function success(position){ var lat = position.coords.latitude; var lon = position.coords.longitude; document.getElementById('location').value = lat + ',' + lon; } function error() { // do something with the error message } There. A valid approach would be to serialize them and assign it to a runat="server" hidden field. You'll be able then to access the array deserializing Value property. You can easily handle serialization with json format or comma separated values. I believe people start using client side frameworks to manage growing client side needs (Starting with backbone, ending with Angular/Ember). If your needs are not so severe then you might want to use client side temples like handlebars or mustache or underscore You might have one gsp file filled with various client templates included on page, then just use javascript to paint these templates on page using sample data or data from server. I do not agree that gsp templates lose in this case unless you are building fully dynamic experience or single page app which is usually not the case with grails. EDIT: I have just had a project when I had to consider if I need to move to client side templates all together, was using backbone. And there was one case when having server-side templates w Doing clientside i18n is growing - as implementing clientside applications in web gets more mature. you might get inspired by. The model layer should be interacting with data source. In case of client-server setup where you have two separate and independent triads, the data source for client's model layer would be server's presentation layer. Basically, your client-side's model layer becomes the user of server-side. There are few options: Douglas Crockford's original JSON library has plugin called cycle.js that resolved circular references. Also see this answer which has enhanced version of method to resolve references: Resolve circular references from JSON object. Cereal which handles circular references and object graphs better. CircularJSON which is very small and does exactly what it says. Having said that, I would probably refactor my design to avoid circular references because JSON is usually core component and you probably want to use mainstream libraries which is very well tested and supported. One way to avoid circular references is just to create lightweight shim objects and serialize them instead. Alternatively implement custom interface (if you have access to class) that overrides seri Was to long for a comment I guess the tricky part is indeed the point you mentioned f we link each other, they will perfectly look like a complex web app. Because each MVC framework uses a different approach to tackle usual problems you have in modern web-apps, like routing, data binding, application state and rendering DOM elements, so I think you would end up having multiple frameworks doing tasks that overlap substantially, thus forcing you to deactivate or disable some of the built-in functionality of one or the other framework making your frankenstein-app :) very difficult to maintain. A good example is jQuery-mobile & ember.js, both have a routing system, jQuery uses the DOM to hold state ember.js holds it's state completely in javascript which is much faster. I had a s Not sure if this would help you, but look at It is showing using of the hand-written exposure of properties. If your property were always call "magic" and you return the appropriate value for magic within your code would that get you what you need? That's because if android:orientation of LinearLayout is horizontal, the android:layout_gravity's value right and left will be not working.You can only set it to top or bottom. And if LinearLayout is vertical, you can set layout_gravity to left or right, but not top or bottom. So, I think you should use RelativeLayout here... Are there any alternatives creating the cookie via server side code or via javascript? No. Actually my server side is a asp.net rest service so probably this isn't a good idea? REST services usually do not deal or know about cookies. They could but it's not a common practice. REST services are using different techniques of authentication, such as sending some value in the Authorization HTTP request header. Are there any advantages or disadvantages in using either method? It would really depend on your scenario and what kind of information you are attempting to store. For example with a server side script you could create an HttpOnly cookie which is not accessible by client scripts, only by server side scripts. In any case you should not be storing sensitive information in Try likes this , foreach (GridViewRow row in GridView1.Rows) { string _rcbDeptSelectedValue = (row.FindControl("rcbDept") as RadComboBox).SelectedValue; } Use the AutoHideGroup.AutoHideType attached property: <dxd:DockLayoutManager> <dxd:LayoutGroup <dxd:LayoutGroup <dxd:LayoutPanel ItemWidth="200" dxd:AutoHideGroup. <TextBox BorderThickness="0" /> </dxd:LayoutPanel> <dxd:LayoutPanel ItemWidth="200" dxd:AutoHideGroup. <TextBox BorderThickness="0" /> </dxd:LayoutPanel> <dxd:LayoutPanel ItemWidth="200" dxd:AutoHid You can export to Excel2007 - its just an xml file format. WE have some details on our blog:. Keep in mind however, it generates an XML file, which when you double click on, will correctly open up excel (Assuming you have excel 2007 or the reader installed). That depends on what you mean by "are server side or client side". It is not a pure Javascript library. They use features of the MVC framework in order to do their magic, so I'd say they are both (but not just one side). More specifically, they implement custom controls that you include in your views. The controls have settings and such that you set on server side. However, the code generated relies on Javascript so they run on the client (using ajax calls when necessary). See documentation and examples here: link No seu projeto você tem alguma Pasta ou Classe com nome Facebook (criados por você)? Caso sim, renomeie. Deve estar dando conflito. Acredito que isso resolva. In your project, do you have any folder or class named Facebook (created by you)? If yes, rename. Should be giving conflict. I think this solves. Yes the DevExpress ASP.NET controls can be data bound to anything that supports IEnumerable. Online help docs: WebForms DevExpress ASP.NET GridView MVC Data Binding Hope that helps. If you have other questions, please feel free to contact the DevExpress support team: Very interesting question! Let's dig in. The root cause The root of the difference is in how Node.js evaluates these statements vs. how the Chrome development tools do. What Node.js does Node.js uses the repl module for this. From the Node.js REPL source code: self.eval('(' + evalCmd + ')', self.context, 'repl', function(e, ret) { if (e && !isSyntaxError(e)) return finish(e); if (typeof ret === 'function' && /^[ s]*function/.test(evalCmd) || e) { // Now as statement without parens. self.eval(evalCmd, self.context, 'repl', finish); } else { Wsdl.exe or "Add Service Reference..." generate proxy classes, datacontracts & config based upon exposed metadata. Commons ways to expose metadata is by using a mex endpoint or exposing a wsdl. Basically, Behaviors are simply not exposed. That's why you can't generate the same endpoint behaviors on client side. What is important here, is that many behaviors are "local settings only" (for a service OR for a client ). It does not tell to clients how to call the service, but how the service should run. Yes, with the caveats that you will still need to supply Breeze metadata either on the client or the server, and of course, you will not be able to use any of the EntityQuery.methods like 'where', 'take', 'skip', 'orderBy' etc. The Breeze samples include an "Edmunds" sample that talks to an arbitrary REST api. Excerpted here: var serviceName = ""; // edmunds var ds = new breeze.DataService({ serviceName: serviceName, hasServerMetadata: false, useJsonp: true, jsonResultsAdapter: jsonResultsAdapter }); var entityManager = new breeze.EntityManager({dataService: ds}); var q = EntityQuery.from("vehicle/makerepository"); // this will call -> "" myEntityManager.executeQuery(q).then(...); You need to use Phonegap 3.0, as stated on the Github page, and if you look at the Example/index.html you can see that they get the push object using: var push = window.plugins.pushNotification; Then push.registerEvent('registration', callback) should work. To make elements hidden/visible on the screen I do inline checks in my template, something like: <% if (user.isInRole('ADMIN', 'MNGR')) { %> <li <% page == "store" ? print('class="active"') :'' %>> </li> <% } %> and added the following helper function inside my user model to check for the permissions: isInRole: function (rr) { var self = this; $.each(rr, function(i) { if (rr[i] === self.currentRole) { alert('pass'); } }); } I assume this is secure-enough, since the actual check for required permission happens again on the server side. By hiding some controls I'm just guiding the user through the application and not letting him to be confused him with actions for he/she doesn't have the required privileges.
http://www.w3hello.com/questions/-Client-Side-Controls-In-Datagrid-ASP-Net-
CC-MAIN-2018-17
refinedweb
2,596
58.18
very good programming Post your Comment break and continue break and continue hi difference between break and continue break and continue break and continue hi i am jane pls explain the difference between break and continue Continue and break statement Continue and break statement How can we use Continue and break statement in java program? The continue statement restart the current... is an example of break and continue statement. Example - public class ContinueBreak JavaScript Break Continue Statement JavaScript Break-Continue Statement: Generally we need to put the break... of the program. On the other hand continue helps us to continue the flow.... Example 1(Break): <html> <head> <title>Write your title here What is the difference between a break statement and a continue statement? What is the difference between a break statement and a continue statement? Hi, What is the difference between a break statement and a continue... types of controlling statements such as: break statement, continue statement C break continue example C break continue example In this section, you will learn how to use break statement with continue statement in C. The continue statement provides a convenient way Java Break continue Java Break continue Java has two keywords break and continue in its branching... control from the point where it is passed. Break and Continue in Java PHP Continue Continue Control Structure: Continue is another type of control structure... iteration, continue is used within looping structures (like for, foreach, while, do-while and switch case)to avoid rest of the code. If we want to continue break . Using jumping statements like break and continue it is easier to jump out of loops...; continue Continue statement is just similar to the break... of break statement but here we will use continue instead of break. The continue php do while break php do while break receiving a Fatal saying cannot break/continue.. please suggest. Thank U Java Continue Java Continue refers to Continue statement in java, used for skipping the current iteration of Boolean expression. Java Continue is a form of expression used in Boolean expression. Sometimes Java Continue is used with break statement Continue in java Resource: Java - Continue statement in Java Java Break continue Continue... iteration of the loop. Continue statement in Java successfully used in many... between break and continue statement that the break statement exit control Java Break loop Java Break loop  ... of opposite nature, break and continue respectively. In the following example break... in Java public class Java_Break_loop { public static void main(String args Java Break keyword and for handling these loops Java provides keywords such as break and continue respectively. Among these Java keywords break is often used in terminating the loops... Java Break keyword   Continue statement in java Continue statement in java In this section we will discuss about continue statement in java. continue is one of the branching statement used in most.... Difference between break and continue is, break exit from the loop The continue statement The continue statement is used in many programming languages such as C . There is the difference between break and continue statement that the break statement exit control...; C:\chandan>javac Continue.java C:\chandan>java Continue chandan...The continue statement The continue statement is used in many programming Java - Continue statement in Java Java - Continue statement in Java  ... the statements written after the continue statement. There is the difference between break and continue statement that the break statement exit control from the loop Java Break Lable Java Break Lable In Java, break statement is used in two ways as labeled and unlabeled... for Break Labels in Java public class Java_Break_lable { public static Java - Break statement in java Java - Break statement in java  ...; 2.The continue statement 3.The return statement Break: The break...;javac Break.java C:\chandan>java Break The Prime number in between 1 - 50 Java Break Java Break Many programming languages like c, c++ uses the "break" statement. Java also... of the controlling statement like continue and return. Break statement can be used in while loop Java Break command . . break is often used with label continue, which also comes under Java branching... Java Break command Java Break command is commonly used in terminating looping statements How to use 'continue' keyword in jsp page ? */ continue; else /* use break statement to transfer control out... How to use 'continue' keyword in jsp page ? This is detailed java code Loop in java on the following link to know all about it. While Loop in Java Do-While Loop in Java For Loop in Java Break statement in java Continue statement in Java Java Break example these loops Java provides keywords such as break and continue respectively.... Java Break continue Java has two... Java Break example   What is BREAK? What is BREAK? What is BREAK? Hi, BREAK command clarify reports by suppressing repeated values, skipping lines & allowing for controlled break points. Thanks Java Break out of for loop statements. These Java labels are break and continue respectively. 'break' is mainly used... Java Break out of for loop  ...; brief introduction about Java break label is demonstrated, along with this concise break image click the imageview break image click the imageview i clcik the button than image break in multiple progammingRavindra kumar Chakravarti November 11, 2011 at 5:55 PM very good programming Post your Comment
http://roseindia.net/discussion/22618-Java-Break-continue.html
CC-MAIN-2016-18
refinedweb
885
56.66
CodePlexProject Hosting for Open Source Software Hello I'm newbie under PRISM 4. I block on problem. In a simple MVVM applicaton with no modules. With one Shell, two User control. My problem is i want to use aggregator between this two user controls. How can I do to send unity container at two VM but I want to keep "blendability" ? Thanxs a lot for your help. Nobody for help me ? Hi, If the usercontrols reside in two different modules, event aggregator could be the appropriated approach to communicate these two components, though in your scenario you could use regular .Net events. Therefore, if you need to use Event Aggregator in your application, you could take a look at the following documentation section on MSDN. Take into account these links assume that the application has a bootstrapper that properly configures the event aggregator. Your scenario if you are only using MVVM (e.g. MVVM RI or MVVM Quickstart) you might not have this bootstrapper in place: There is no guidance for blendability using DI containers, so far. Regarding to blendability you could check the following document in the Prism4.chm of the latest drop: Not taking into account blendability you could could check the MVVM RI, which use MEF as the composition container. Please let me know if this helps. Fernando Antivero Hello, Sorry for my late answer. I read various articles that you gave me in the comment. I find a solution but i don't know if it respect MVVM design pattern. My solution : I have a static class in where i have a static property where i put a instance of container. In the ConfigureContainer bootstrapper, I update this property. I use in my differents viewmodels. About my blendability, in the modelview ctor, i test if i was in indesignmodel or not. And if I not in design a call the container throught my 'infrastructure' property. What do you find about this approach ? If you are using Unity, you could use DI to retrieve your container from everywhere (ViewModel) using Constructor Injection, Dependency Property, etc. You could see this in the following pseudo-code: public class VM { ctor (IUnityContainer container) { //your code here } } Regarding to your approach, it seems that has not issues with MVVM pattern, but that approach is not use in the Quickstarts or Reference Implementation in Prism. The team is actually working on this, as you can check in today's drop 8 of Prism, since the document for achieving blendability and the MVVM RI have been updated with a new approach. So I would suggest to you to take a look at this new drop (you could download the lasted version here). Fernando Antivero but if i use this approach I lost the blendability because blend not found a default ctor. I download and look the Drop 8 of PRISM. Thanxs for your help Are you sure you want to delete this post? You will not be able to recover it later. Are you sure you want to delete this thread? You will not be able to recover it later.
http://compositewpf.codeplex.com/discussions/224396
CC-MAIN-2017-04
refinedweb
517
65.12
I have to below code that I came up with but instead of using an array I need to use a linked list. What I am trying to do is take linked list of letters a-i and reverse them in a new linked list. I had posted this on Dreamincode.net but they are having problems with the site. Can some one help me out here. I have been searching many different sites for the past few days trying to figure this out on my own and can't seem to figure it out. Code: #include <iostream> using namespace std; int main(){ char a[9] = {'a','b','c','d','e','f','g','h','i'}; for (int i=1;i<=9;i++){ a[i-1]; } cout<<"\n\n Here they are in reverse order: \n"; for (int i=8;i>=0;i--) cout<<a[i]<<endl; char c; cin>>c; }
https://cboard.cprogramming.com/cplusplus-programming/123861-help-converting-array-program-link-list-program-printable-thread.html
CC-MAIN-2017-09
refinedweb
150
87.65
MVC3 gives you great control over how URLs are mapped to your controllers. It gives you the ability to define your URLs in a human readable SEO (Search Engine Optimization) friendly fashion, to remap old URLs to new functionality and side-by-side utilize classic ASP.NET sites inside of MVC3. MVC3 Basics MVC3 routes are defined in the Global.asax.cs file of your MVC3 web application. By default, there is only one route defined in the RegisterRoutes method that looks like the line below. routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults );. Custom MVC3 Routes One of the many factors frequently considered by search engines to determine the relevance of a particular page to a particular search term is whether or not the URL link itself includes a particular term. In a classic ASP.NET site for a magazine,3 Routing. Another frequent use of custom routing is to allow multiple sites to link to the same location while providing additional data about where they came from. For example, if you had a link to a product page and you wanted to provide custom co-branding based on which one of your partners linked in to a page, you could do so by including a "partner name" variable in the link. A link like this could be to or for example. This would allow two different sites to link to the same article while providing their own information in the process. The code block below defines a new route called ArticleRoute that defines a new parameter called article in addition to the standard controller and action. routes.MapRoute( "ArticleRoute", "{article}/{controller}/{action}", new { article="Unknown", controller = "Home", action = "Index" } ); You can access this custom article part of the route in your controller by accessing the RouteData object. RouteData.Values["article"] To access the RouteData object in your Razor views, use the @Url.RequestContext.RouteData object. When you are constructing URLs that use string data such as article titles or author's names, you will need to use some form of URL-friendly encoding. The easiest method is to use HttpUtility.UrlEncode("Your String") which will replace all of the URL unfriendly characters with the appropriate HTML escapes.B This method is web spider friendly but not necessarily human readable URL friendly. It is very important to remember that any data in a URL is easily user manipulated and shouldn't be trusted. It is not an appropriate place to pass application variables between pages unless they are of a nature that it would be acceptable if the user manipulated them. Re-mapping Routes From time to time it is necessary to re-route an old URL to a new location. Traditionally you would use a redirect page to let users know to update their bookmarks and include a link to the new location. Sometimes it is impractical or even impossible to change URLs as is the case when you have 3rd party software systems set up to access and scrape particular URLs. If you wanted to route MyRazor.cshtml page to your MVC3 Home/Index controller, you can do so by defining a route like the one below. routes.MapRoute("RT", " MyRazor.cshtml ", new { controller = "OtherController", action = "Packed" }); Including Classic ASP.NET as a Sub-directory in an MVC3 Web Application If you have a large classic ASP.NET web site that you need to incrementally transition to MVC3 you can put the entire classic ASP.NET web site in your MVC3 web site as a sub-directory. You can then call the IgnoreRoute method seen in the code block below to tell MVC3 to not handle that particular URL path. routes.IgnoreRoute("OldClassicASP/"); Constraints You can define additional constraints to your defined routes to insure that the values passed for particular parts of your route are valid. This is useful not only for general security related needs but is also useful for scenarios where you might want to have additional routing logic in place for particular routes. routes.MapRoute( "ColorPath", // Route name "{color}/{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults , new { color="blue" } ); In the example route above, a new anonymous object is added to the MapRoute path call that adds a constraint to this path. The route will only be used if the color is blue. Namespaces In large ASP.NET MVC3 applications you can potentially have hundreds of controllers. This can become problematic because the .NET Framework looks in the Controllers folder and all sub-folders looking for controllers to match up with the defined routes. To help you organize your code, you can add namespaces to your routes to constrain the controllers the route will match to particular namespaces. The route below will only use controllers defined in the Home.Guru namespace. routes.MapRoute("NamespacedRoute", "Cool/{controller}/{action}", new { controller = "Home", action = "Index", id = UrlParameter.Optional } , null, , new string[] { "Home.Guru" }); Global Filters If you want to define code that runs before or after your routing calls, you can define a global filter. Global filters are registered in the Global.asax.cs file by adding a line to the RegisterGlobalFilters method. This method, by default, registers the HandleErrorAttribute that is used to handle error conditions in ASP.NET MVC3 applications. If you wanted to add a copyright notice at the bottom of all of your pages, you could add a global filter attribute that overrides the OnResultExecuted method which is run after your page has run. public class CopyrightNoticeAttribute : ActionFilterAttribute { public override void OnResultExecuted(ResultExecutedContext filterContext) { filterContext.HttpContext.Response.Write(String.Format("<h1>Copyright {0}</h1>",DateTime.Now.Year)); } } Once your custom attribute has been written you can add it to the global.asax.cs file. public static void RegisterGlobalFilters(GlobalFilterCollection filters) { filters.Add(new CopyrightNoticeAttribute()); filters.Add(new HandleErrorAttribute());//Default in MVC3 } Conclusion Understanding how to customize and control the routing in ASP.NET MVC3 helps you organize your solution and gives you powerful tools to maintain your web application. Additional tools like global filters give you the ability to add code that cross-cuts your MVC3 views and can even add sophisticated logic to re-route your controllers.
https://mobile.codeguru.com/csharp/article.php/c18645/MVC3-Routing.htm
CC-MAIN-2019-09
refinedweb
1,048
55.13
Fabulous Adventures In Coding Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric. (This is part three of a four part series; part two is here, part four is here.) Part Three: Bad hierarchical design The reason we humans invented hierarchies in the first place is to organize a complicated body of stuff such that there’s a well-defined place for everything. Any time you see a hierarchy where there are two levels with the same name, something is messed up in the design of that hierarchy. And any time you see a hierarchy where one of the interior nodes has a single child, again, something is probably messed up. Krzysztof points out in the annotated Framework Design Guidelines that the fundamental point of namespaces is not actually to allow you to disambiguate two things with the same name. (Ideally there would never be a situation where two things had the same name in the first place; coming up with a mechanism to enable that problem and then deal with it seems counterproductive.) Rather, the point of namespaces is to organize types into a hierarchy that is easy to understand. That is, the point of namespaces is not just to keep similarly-named things separated, but rather, to group things that have something in common together so that you can find them. If you don’t think that there are two or more things that could go into a namespace, then it is probably not a good namespace. My original example was a namespace MyContainers.List containing a class List. Could any other class go into MyContainers.List? No. The right design is to either move the class List into the MyContainers namespace, or to make the namespace MyContainers.Lists, plural, and have it contain more than one thing, say, MutableList and ImmutableList. The commonality that groups a set of types into a namespace could be anything. System.Collections.Generic groups collection types by an implementation detail: they’re all generic. System.IO groups types by related functionality. Top-level namespaces like “System” and “Microsoft” group things by whether they are part of the core functionality of the platform, or are Microsoft-specific extensions to it. But the point is that each of these namespaces groups a large number of things by some shared characteristic. A namespace containing a type of the same name indicates a failure in the design of the hierarchy. Next time: It makes a bad situation worse. I think you could make a pretty darn convincing argument that namespaces are in fact intended to disambiguate different things with the same name. Why are they called "namespaces" and not "categories" or "taxonomies"? It actually makes sense if you think about it. In many languages (C, Objective-C, PHP) the big problem with libraries is that there's no way to make sure that the names used in one library won't also be used in somebody else's library. For example, the HTML library has good reason to have a Table abstraction and the DB library has good reason to have a completely unrelated Table abstraction. Unfortunately, the only way to be able to use them together would be to have each library use a prefix, so you'd have HtmlTable and DbTable. But then you have to hope that everybody uses a prefix and that the prefixes don't collide, which could be a problem when Dan Brown decides to use Db as the prefix for his HTML library. And since the prefix has to be typed in every time one of the names is used it has to be short, which encourages collisions. Hierarchical namespaces are the perfect solution to this problem. As for organizing types into an easy-to-understand hierarchy, why would the compiler care? The compiler doesn't need to understand the hierarchy; programmers do. At that point it's a documentation issue which could be addressed with attributes or documentation comments, and the compiler wouldn't have to care if you use the same name for both the genus and species. Of course you could make the analogy to a filesystem: A filesystem has hierarchical directories to group files together, not to allow two files to have the same name. However, that's different because I frequently browse the filesystem hierarchy directly, while I only browse the BCL namespace hierarchy directly when I use Reflector. Otherwise I just browse the documentation which need not have any relation to how the compiler sees the classes the same way the Win32 API documentation is hierarchical even though it is all in a flat namespace. Furthermore, it seems like if the only purpose was to organize types, a type should be allowed to be in multiple places. Why should I have to decide whether ImmutableList<T> goes into MyContainers.Lists, MyContainers.Generic, or MyContainers.Immutable? I should be able to categorize it as Generic, List, and Immutable all at the same time. I dont think Eric ever said that namespaces arent there to disambiguate different things. Of course namespaces are there to use in such way and should in fact be used that way. The point is that it should not ONLY be used thinking in that particular functionality. If you do, then having a class with the same name as the containing namespace is obviously not a problem, same as having a file named as the containing folder isnt a problem at all (to follow your analogy). But when you look at the bigger picture of namespaces and that apart from helping avoid name-collisions they should be used to represent logical and well thought heirarchies, then having a namespace containing a class with the same name doesn't seem such a good idea. I wasn't commenting on the actual purpose of namespaces; I was just making the argument that if somebody said "come up with a way to disambiguate different things of the same name", I would have come up with namespaces and called them "namespaces". If somebody had said "come up with a way to organize types into a hierarchy that is easy to understand", I would have come up with something entirely different which wouldn't be called "namespaces" and probably wouldn't be useful for disambiguation. As for naming a class the same as a namespace, let's say I have a class called Animal, and from that I derive classes like Mammal and Amphibian, and so on. What's a good namespace for this collection of classes? "Animals" seems like as good a name as any.. Is my hierarchy so ill-thought-out that I have ended up with a name collision? Or is it just that sometimes logical names happen to coincide? Consider this completely contrived hypothetical example: "namespace First.Second { class Hour{} class Minute{} class Second{} }". The one time this bit me was in generated code. It automatically put all the generated types into a namespace Foo.Types, and one of the generated types ended up with the name Types. Well, for me AnimalCollection seems far better than Animals because it makes it obvious that it's collection of animals. Moreover it's not a good idea to have two classes within the same namespace that differ only in single letter. If you stick to .NET naming guidelines (namespaces in plural, collections with Collection suffix) then you won't run into this kind of problems. I can agree that as you said "sometimes logical names happen to coincide", but in my opinion it's a sign that your hierarchy wasn't as well thought out in the first place, so it's probably time for some refactoring. BTW, your second example is so contrived that it should be taken out and shot. ;) >. That's why we have common naming guidelines, which say that you should have called your class "AnimalCollection", and the namespace, "Animals". > Or is it just that sometimes logical names happen to coincide public Color Color { get; set; } > The one time this bit me was in generated code. It automatically put all the generated types into a namespace Foo.Types, and one of the generated types ended up with the name Types. Which is why C# (and VB) still does let you reference such a type, even if not in the most convenient way. @Gabe sorry but the animals example doesnt work for me. The class should be names AnimalCollection which is the way the Framework is designed too: System.Windows.Forms.FormCollection, not System.Windows.Forms.Forms I see no name-collision at all. Having to name a class slightly differently without losing one single bit of the name's meaning is not such a big deal IMHO. I guess my point wasn't quite clear enough. I meant that having a class named Animals in a namespace named Animals doesn't indicate a failure in the design of the hierarchy -- it merely indicates a failure to follow the .Net naming conventions. Well, it depends on how you look at it. I consider it a flawed hierarchy because it has a flawed naming convention, even if its conceptually sound. Naming is many times just as important. You could create a great hierarchy from a conceptual standpoint but still manage to make it almost completely useless through horrible naming conventions. So IMHO having a class named Animals in a namespace named Animals IS a flawed hierarchy...not conceptually but still flawed by a serious usability issue. When you think up the hierarchy its easy to spot the Animals issue beforehand and name the class AnimalCollection. If you validate hierarchy design only from a conceptual standpoint then I understand your point of view. I try to not separate conceptaul correctness from sound naming when I design as both can normally go hand in hand without it being a problem. Well, I'm (mostly) with Gabe on this one. Namespaces are primarily about partitioning the namespace. Any sort of taxonomy is a secondary concern. That's not to say it is unimportant, but it is not the *primary* purpose of the language feature. Perhaps C#'s namespaces were more inspired by Java packages, which are more about the classification. However overdoing the hierarchical thing can lead to more fragile code as too many assumptions are often pushed into a non-functional aspect of the code. I had a similiar discussion on Stackoverflow here: Design is what is more of us (starters) do not care a lot about and are more interested in getting things done. Do appriciate your effort as you higlight some very basic things which are of really great importance. I like the namespaces which ends with "Model", such as XY.ObjectModel, XY.ServiceModel, XY.TableModel, XY.SecurityModel and so on. I found it rather practical, becasue it express some kind of 'abstraction', which are independent by its nature, and they are 'wide' enough to serve as a base for 'taxonomization'. The namespace 'authoring' (hehe) is usually easy if you develop a platform, but much harder to find good namespaces if you create a complex business application, which has complex dependencies betwenn components, through workflows for example. You create namespaces for responsibility layering usually, and for 'componentization', but they are normal to each other. My question is which goes fist? Responsibility layering namespaces or logical component namespaces? Gabe: "And since the prefix has to be typed in every time one of the names is used it has to be short, which encourages collisions." You can create short namespace aliases for solving this problem. eg.: using OM = XY.ObjectModel; OM.User user = OM.User.Create(...); Using short aliases for namespaces are rather common. When you create n-tier application, where the entitites has type representation in every tier in worst case, that can be tricky scenario. Or when you create DTO-s for your entities, you also face with this problem. Gabe: "However, that's different because I frequently browse the filesystem hierarchy directly, while I only browse the BCL namespace hierarchy directly when I use Reflector." No! When yo got a new library you should browse the contained types and ns sturcutre with the VS Object Browser at least, or browse the API doc, it takes only 5 or 10 mintutes. If that library is well structured, that provide a big picture about the library. You will know the main contepts, ideas by simply visiting how the types are strutured in the assembly. It gives some important information about the 'maturity' of that component. If you find some clear concepts, or guiding principle in the first 5-10 minutes, that library is a mature product probably. Well created namespace structures are very important. Fix: "I like the namespaces which ends with "Model", such as XY.ObjectModel, XY.ServiceModel, XY.TableModel, XY.SecurityModel and so on. I found it rather practical, becasue it express some kind of _aspects_, which are independent by its nature, and they are 'wide' enough to serve as a base for 'taxonomization'." Separating types by aspects can be good idea. ps. Sorry for flooding. Szindbad: When I said "And since the prefix has to be typed in every time one of the names is used it has to be short, which encourages collisions.", I was talking about languages like C and Objective-C that don't have namespaces. That's why all functions in Xlib start with X and all Cocoa functions start with NS. Nobody would want to type XWindowSystemCreateWindow, but if they had namespaces they would be able to have something like "using XWindowSystem; CreateWindow..." And when I said I only browse the BCL namespace directly with Reflector, I meant that I use the documentation or an object browser to look at the hierarchy. In other words, I use some other tool to show me the hierarchy, which tool could have generated the documentation from some metadata besides the namespace mechanism. And the metadata that generates this documentation need not form a strict tree, as the namespace mechanism does;it could form a graph instead, for example. As I said before, Foo.ImmutableList<T> should be able to be categorized under GenericCollections, Immutable, and Lists. A strict tree structure does not allow this but other forms could.
http://blogs.msdn.com/b/ericlippert/archive/2010/03/15/do-not-name-a-class-the-same-as-its-namespace-part-three.aspx
CC-MAIN-2014-49
refinedweb
2,386
62.07
01-31-2009 07:46 AM Hello! << I apologize sincerely if this has been answered but I've done searches and haven't found a solution (or at least one I can grasp). This may in fact just be an issue with me not understanding the data type and the names for everything. Anyhow: >> I am looking to retrieve a contact from ACT! based on a field value (in this instance it's by email, but could possibly be by first/last name, phone, or what-have-you). I have faced and accomplished the following thus far, but please let know if I need to include anything else to tackle this solution: - Acquired Act.Framework.dll from the CD - Referenced Act.Framework.dl & Act.Shared.Collections.dll - Established a connection (Act.Framework.ActFramework & .LogOn) From there, I am looking for a way to set out a search to grab a contact that matches the field value I want. I'm running 2007 9.0 is that helps, any advice or code is very much appreciated. Thank you, Brad 02-02-2009 01:45 AM [Visual Basic] Overloads Public Function LookupContactsReplace( ByVal criteria As Criteria, ByVal includePrivate As Boolean, ByVal includeUsers As Boolean ) As ContactLookup 'Example 1 Dim cLookup As ContactLookup Dim lCriteria() As Criteria Dim includePrivate As Boolean Dim includeUsers As Boolean . . . cLookup = ActFwk.Lookups.LookupContactsReplace(lCriteria, includePrivate, includeUsers) 'Example 2 'This example populates a ContactList with a lookup of 'contacts based on the State and Last Name fields Dim sFieldName1, sFieldName2 As String Dim cColumn1, cColumn2 As CriteriaColumn Dim oOperator1, oOperator2 As OperatorEnum Dim sValue1, sValue2 As String Dim cLookup As ContactLookup Dim cList As ContactList 'initialize some varables sFieldName1 = "BUSINESS_STATE" sFieldName2 = "LASTNAME" oOperator1 = OperatorEnum.EqualTo oOperator2 = OperatorEnum.StartsWith sValue1 = "WA" sValue2 = "H" 'Get the column(s) to lookup on cColumn1 = ActFwk.Lookups.GetCriteriaColumn("TBL_CONTACT", sFieldName1, True) cColumn2 = ActFwk.Lookups.GetCriteriaColumn("TBL_CONTACT", sFieldName2, True) 'Create an array to hold all of the lookup criteria. Dim lCriteria() As Criteria = {New Criteria(LogicalOperator.And, 0, 0, cColumn1, oOperator1, sValue1), _ New Criteria(LogicalOperator.End, 0, 0, cColumn2, oOperator2, sValue2) _ } 'Create the Lookup cLookup = ActFwk.Lookups.LookupContactsReplace(lCriteria, True, True) 'Set our contact list to the lookup cList = cLookup.GetContacts(Nothing) MessageBox.Show(cList.Count.ToString()) Tom 02-03-2009 04:57 AM Thank you very much Tom. i was looking though those but didn't see the example I needed I guess. Works terrific now, I appreciate your help! 02-04-2009 02:41 PM Okay, so I got that working (thank you) but have run in to a new problem: I am attempting to programatically update a record in ACT by pulling the contact by email address, altering it [the email address], re-saving it, then attaching a note. Now, i have the grab part (to an Act.Framework.Contacts.Contact object) but when i attempt to alter the .Fields["TBL_CONTACT.BUSINESS_EMAIL",true] and call .Update() my contact's gone. this has to be me not doing things in the right order, or in a manner that ACT! is not liking. Can someone point me in the right direction? I have the following thus far: public Act.Framework.Contacts.ContactList GetContactsFromEmail(String Email) { if (this.m_Act.IsLoggedOn) { Act.Framework.Lookups.CriteriaColumn actColumn = this.m_Act.Lookups.GetCriteriaColumn("TBL_CONTACT", "BUSINESS_EMAIL", true); Act.Framework.Lookups.Criteria[] actCriteria = { new Act.Framework.Lookups.Criteria(Act.Framework.Lookups.LogicalOperator.End, 0, 0, actColumn, Act.Framework.Lookups.OperatorEnum.EqualTo, Email) }; Act.Framework.Lookups.ContactLookup actLookup = this.m_Act.Lookups.LookupContactsReplace(actCriteria, true, true); Act.Framework.Contacts.ContactList actContacts = actLookup.GetContacts(null); return actContacts; } return null; } public bool AttachNote(Act.Framework.Contacts.Contact actContact, String actNote) { if (this.m_Act.IsLoggedOn) { Act.Framework.Notes.NoteType actNoteType = new Act.Framework.Notes.NoteType(Act.Framework.Notes.SystemNoteType.Note); Act.Framework.Notes.Note actNewNote = this.m_Act.Notes.CreateNote(actNoteType, actNote, System.DateTime.Now, false, actContact); return (actNewNote != null); } return false; } public bool MarkAsBad(Act.Framework.Contacts.Contact actContact) { if (this.m_Act.IsLoggedOn) { try { // Update the email as bad... actContact.Fields["TBL_CONTACT.BUSINESS_EMAIL", true] = "BAD - " + actContact.Fields["TBL_CONTACT.BUSINESS_EMAIL", true]; actContact.Update(); // now, add the note... return this.AttachNote(actContact, "Bounced email"); } catch (Exception ex) { throw new Exception(String.Format("MarkAsBad failed with {0}", actContact.FullName), ex); } } return false; } The idea being you use GrabContactsByEmail to get the contact list, then iterate through the list and update (MarkAsBad) the contacts to prefix the email with "BAD_-_" (where _ = space), then add a note (AttachNote). Any pointers? Thanks once again, Brad 02-05-2009 09:10 AM Boiled down what you were doing to this.... Contact cContact = ActFwk.Contacts.GetMyRecord(); // Update the email as bad... cContact.Fields["TBL_CONTACT.BUSINESS_EMAIL", true] = "BAD - " + cContact.Fields["TBL_CONTACT.BUSINESS_EMAIL", true]; cContactContact); And it works fine. Scoping problem? What is the error your getting? Hope this helps, 02-11-2009 12:39 PM As you can see from the code, I'm using a few function to go back and forth with the contact information and references. I don't see anythign odd-ball that would do this. I have to be transferring some level of detail with the Update call as it kills the contact I'm referencing. When you pull the record data using the search I method listed above, is that grabbing all data? All these examples list "GetMyRecord" but could I see the same example with a search matching the email "bchristie@abc.com"? 02-11-2009 02:46 PM Since I am not looking at your project and seeing the complete picture it is always hard to say where the problem is. While I don't know if this will help, I do know that it works. Ran it on the demo to change all the @CHTechONE.com to 'bad' email addresses. //Save Current Changes Contact cContact = ActApp.ApplicationState.CurrentContact; cContact.Update(); //Performing Lookup ////////////////////////////////// CriteriaColumn cColumn1; string sValue1 = "@CHTechONE.com"; ContactLookup cLookup; ContactList cList; //Get the column(s) to lookup on cColumn1 = ActFwk.Lookups.GetCriteriaColumn("TBL_CONTACT", "BUSINESS_EMAIL", true); //Create an array to hold all of the lookup criteria Criteria[] lCriteria = new Criteria[] {new Criteria(LogicalOperator.End, (byte)0, (byte)0, cColumn1, OperatorEnum.Contains, sValue1)}; //Create the Lookup cLookup = ActFwk.Lookups.LookupContactsReplace(lCriteria, true, true); //Set our contact list to the lookup cList = cLookup.GetContacts(null); //Options..... //ActApp.ApplicationState.SetCurrentContactList(cList, null); //ActApp.UIContactManager.ShowDetailView(); //Change 'Bad' emails foreach (Contact c in cList) { // Update the email as bad... c.Fields["TBL_CONTACT.BUSINESS_EMAIL", true] = "BAD - " + c.Fields["TBL_CONTACT.BUSINESS_EMAIL", true]; c); } Hope this helps, 02-12-2009 12:02 PM Essentially, I have these methods in a self-contained class that when instantiated established the ACT framework connection and checks for confirmation of connection. Likewise, on destroy it cleans up the connection. within this class, I use the methods to query and update the values in ACT! (based on the "Contact" object being passed through the methods). For instance: // init ActDDB ACT = new ActDB("user","pass"); // grab the contactList based on the email value (realistically // this will 99.9% of the time be one contact coming back, so I // will check for at least one member, and grab the first element // in the list ContactList cList = ACT.GetContactFromEmail(varWithEmail); // Okay, check for a valid return with at least one member. use // that member and update it if (cList != null && cList.Count > 0) ACT.MarkAsBad(cList[0], "Bounced Email"); That's in my form under a click event. I have a MIME email being parsed grabbing email addresses and then taking those addresses and finding them in ACT!. if a match is found, I want to update those emails with the email prefix, and add a note. Again, this deletes the contact every time I've tried it. Am I passing these objects wrong? I don't fully understand the object model, so I'm in the dark as to how the ACT object does updated and referencing, but from what I can collect this looks feasible. Again, thank you very much for your patience and assistance. Brad 02-16-2009 11:27 AM Contact c = HoweverYouSetYourContact(); // Update the email as bad... c.Fields["TBL_CONTACT.BUSINESS_EMAIL", true] = "BAD - " + c.Fields["TBL_CONTACT.BUSINESS_EMAIL", true]; c.Update(); I see nothing in your provided samples that should be deleting contacts. Did you try the sample above out on the demo? 02-18-2009 07:58 AM Okay, i think I got it this time. It may have just been my search criteria to confirm changes (I would search, by email, in act to confirm the contact information, then run the program, then return to act and do a refresh only to find the contact absent). But just as confirmation, because I assume rights would propagate to the DSK, if I have add and modify rights only I should be safe against deleting an entire contact, correct? Once again, I appreciate all the informaiton you've given and the time you've taken to answer my questions. I think I'm getting a more firm understanding on how this SDK works, with the exception of SortCriteria. Not sure I understand the basis of that yet, but I think that just involves tinkering and trial and error...
https://community.act.com/t5/Act-Developer-s-Forum/Pull-contact-by-field-name/td-p/34034
CC-MAIN-2018-30
refinedweb
1,526
51.04
current position:Home>Pandas: how to group and calculate by index Pandas: how to group and calculate by index 2022-01-30 11:52:26 【Stone sword】 You can use the following methods in pandas Group by one or more index columns and perform some calculations . Method 1: Group by an index column df.groupby('index1')['numeric_column'].max() Copy code Method 2: Group by multiple index columns df.groupby(['index1', 'index2'])['numeric_column'].sum() Copy code Method 3: Group by index column and general column df.groupby(['index1', 'numeric_column1'])['numeric_column2'].nunique() Copy code The following example shows how to use the following multi indicator pandas DataFrame To use each method . import pandas as pd #create DataFrame df = pd.DataFrame({'team': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B'], 'position': ['G', 'G', 'G', 'F', 'F', 'G', 'G', 'F', 'F', 'F'], 'points': [7, 7, 7, 19, 16, 9, 10, 10, 8, 8], 'rebounds': [8, 8, 8, 10, 11, 12, 13, 13, 15, 11]}) #set 'team' column to be index column df.set_index(['team', 'position'], inplace=True) #view DataFrame df points rebounds team position A G 7 8 G 7 8 G 7 8 F 19 10 F 16 11 B G 9 12 G 10 13 F 10 13 F 8 15 F 8 11 Copy code Method 1: Group by an index column The following code shows how to find 'points' The maximum value of the column , And pass 'position' Index columns are grouped . #find max value of 'points' grouped by 'position index column df.groupby('position')['points'].max() position F 19 G 10 Name: points, dtype: int64 Copy code Method 2: Group by multiple index columns The following code shows how to pass " The team " and " Location " Index columns to find " integral " The sum of the columns . #find max value of 'points' grouped by 'position index column df.groupby(['team', 'position'])['points'].sum() team position A F 35 G 21 B F 26 G 19 Name: points, dtype: int64 Copy code Method 3: Group by index column and general column The following code shows how to ' rebounds ' The number of unique values found in the column , And through the index column ' The team ' And normal columns ' integral ' grouping . #find max value of 'points' grouped by 'position index column df.groupby(['team', 'points'])['rebounds'].nunique() team points A 7 1 16 1 19 1 B 8 2 9 1 10 1 Name: rebounds, dtype: int64 Copy code author[Stone sword]
https://en.pythonmana.com/2022/01/202201301152237752.html
CC-MAIN-2022-27
refinedweb
410
52.83
In this article a few simple applications of Markov chain are going to be discussed as a solution to a few text processing problems. These problems appeared as assignments in a few courses, the descriptions are taken straightaway from the courses themselves. 1. Markov Model of Natural Language Problem Statement Use, frequencies as probabilities. For example, if the input text is "gagggagaggcgagaaa", the Markov model of order 0 predicts that each letter is 'a' with probability 7/17, 'c' with probability 1/17, and 'g' with probability 9/17 because these are the fraction of times each letter occurs. The following sequence of letters is a typical example generated from this model: g a g g c g a g a a g a g a a g a a a g a g a g a g a a a g a g a a g ... A Markov model of order 0 assumes that each letter is chosen independently. This independence does not coincide with statistical properties of English text because there a high correlation among successive letters in a word or sentence. For example, 'w' is more likely to be followed with 'e' than with 'u', while 'q' is more likely to be followed with 'u' than with 'e'. We obtain a more refined model by allowing the probability of choosing each successive letter to depend on the preceding letter or letters. A Markov model of order k predicts that each letter occurs with a fixed probability, but that probability can depend on the previous k consecutive letters. Let a k-gram mean any k consecutive letters. Then. Claude Shannon proposed a brute-force scheme to generate text according to a Markov model of order 1: “ To construct [a Markov model of order 1],. ” Our task is to write a python program to automate this laborious task in a more efficient way — Shannon’s brute-force approach is prohibitively slow when the size of the input text is large. Markov model data type. Create an immutable data type MarkovModel to represent a Markov model of order k from a given text string. The data type must implement the following API: - Constructor. To implement the data type, create a symbol table, whose keys will be Stringk-grams. You may assume that the input text is a sequence of characters over the ASCII alphabet so that all char values are between 0 and 127. The value type of your symbol table needs to be a data structure that can represent the frequency of each possible next character. The frequencies should be tallied as if the text were circular (i.e., as if it repeated the first k characters at the end). - Order. Return the order k of the Markov Model. - Frequency. There are two frequency methods. - freq(kgram) returns the number of times the k-gram was found in the original text. - freq(kgram, c) returns the number of times the k-gram was followed by the character c in the original text. - Randomly generate a character. Return a character. It must be a character that followed the k-gram in the original text. The character should be chosen randomly, but the results of calling rand(kgram) several times should mirror the frequencies of characters that followed the k-gram in the original text. - Generate pseudo-random text. Return a String of length T that is a randomly generated stream of characters whose first k characters are the argument kgram. Starting with the argument kgram, repeatedly call rand() to generate the next character. Successive k-grams should be formed by using the most recent k characters in the newly generated text. Use a StringBuilder object to build the stream of characters (otherwise, as we saw when discussing performance, your code will take order of N2 time to generate N characters, which is too slow). To avoid dead ends, treat the input text as a circular string: the last character is considered to precede the first character. For example, if k = 2 and the text is the 17-character string "gagggagaggcgagaaa", then the salient features of the Markov model are captured in the table below: frequency of probability that next char next char is kgram freq a c g a c g ---------------------------------------------- aa 2 1 0 1 1/2 0 1/2 ag 5 3 0 2 3/5 0 2/5 cg 1 1 0 0 1 0 0 ga 5 1 0 4 1/5 0 4/5 gc 1 0 0 1 0 0 1 gg 3 1 1 1 1/3 1/3 1/3 ---------------------------------------------- 17 7 1 9 Taken from an example from the same assignment. Note that the frequency of "ag" is 5 (and not 4) because we are treating the string as circular. A Markov chain is a stochastic process where the state change depends on only the current state. For text generation, the current state is a k-gram. The next character is selected at random, using the probabilities from the Markov model. For example, if the current state is "ga" in the Markov model of order 2 discussed above, then the next character is 'a' with probability 1/5 and 'g' with probability 4/5. The next state in the Markov chain is obtained by appending the new character to the end of the k-gram and discarding the first character. A trajectory through the Markov chain is a sequence of such states. Below is a possible trajectory consisting of 9 transitions. trajectory: ga --> ag --> gg --> gc --> cg --> ga --> ag --> ga --> aa --> ag probability for a: 1/5 3/5 1/3 0 1 1/5 3/51/5 1/2 probability for c: 0 0 1/3 0 0 0 0 0 0 probability for g: 4/5 2/5 1/3 1 0 4/5 2/5 4/5 1/2 Taken from an example from the same assignment. Treating the input text as a circular string ensures that the Markov chain never gets stuck in a state with no next characters. To generate random text from a Markov model of order k, set the initial state to k characters from the input text. Then, simulate a trajectory through the Markov chain by performing T − k transitions, appending the random character selected at each step. For example, if k = 2 and T = 11, the following is a possible trajectory leading to the output gaggcgagaag. trajectory: ga --> ag --> gg --> gc --> cg --> ga --> ag --> ga --> aa --> ag output: ga g g c g a g a a g Text generation client. Implement a client program TextGenerator that takes two command-line integers k and T, reads the input text from standard input and builds a Markov model of order k from the input text; then, starting with the k-gram consisting of the first k letters of the input text, prints out T characters generated by simulating a trajectory through the corresponding Markov chain. We may assume that the text has length at least k, and also that T ≥ k. A Python Implementation import numpy as np from collections import defaultdict class MarkovModel: def __init__(self, text, k): ''' create a Markov model of order k from given text Assume that text has length at least k. ''' self.k = k self.tran = defaultdict(float) self.alph = list(set(list(text))) self.kgrams = defaultdict(int) n = len(text) text += text[:k] for i in range(n): self.tran[text[i:i+k],text[i+k]] += 1. self.kgrams[text[i:i+k]] += 1 def order(self): # order k of Markov model return self.k def freq(self, kgram): # number of occurrences of kgram in text assert len(kgram) == self.k # (check if kgram is of length k) return self.kgrams[kgram] def freq2(self, kgram, c): # number of times that character c follows kgram assert len(kgram) == self.k # (check if kgram is of length k) return self.tran[kgram,c] def rand(self, kgram): # random character following given kgram assert len(kgram) == self.k # (check if kgram is of length k. Z = sum([self.tran[kgram, alph] for alph in self.alph]) return np.random.choice(self.alph, 1, p=np.array([self.tran[kgram, alph] for alph in self.alph])/Z) def gen(self, kgram, T): # generate a String of length T characters assert len(kgram) == self.k # by simulating a trajectory through the corresponding str = '' # Markov chain. The first k characters of the newly for _ in range(T): # generated String should be the argument kgram. #print kgram, c # check if kgram is of length k. c = self.rand(kgram)[0] # Assume that T is at least k. kgram = kgram[1:] + c str += c return str Some Results m = MarkovModel('gagggagaggcgagaaa', 2) generates the following MarkovChain where each state represents a 2-gram. Input: news item (taken from the assignment) Microsoft said Tuesday the company would comply with a preliminary ruling by Federal District Court Judge Ronald H. Whyte that Microsoft is no longer able to use the Java Compatibility Logo on its packaging and websites. Markov Model learnt Generated output: random news item, using input as an order 7 model Microsoft is no longer able to use the Java language,” added Tod Nielsen, General Counsel for Microsoft’s Developers use the Java Compatibility Logo on its packaging and websites for Internet Explorer and Software Developer Relations Group/Platform Marketing.Microsoft to be Microsoft Corporation. “We are disappointed with the Court’s order.” Microsoft is no longer able to use the Java language. Providing the best tools and provide them the tools and programming options will continue to listen to our customers and provide them the tools and programming option of taking advantage of Windows features when writing software using the Java Compatibility Logo on its packaging and websites for Internet Explorer and Software using the Java programming option of taking advantage of Windows features when writing software Developer Relations Group/Platform Marketing.Microsoft to be in full compliance with its contract with Sun,” stated Tom Burt, Associate General Manager for Microsoft’s goal. “We will continue to listen to our customers and programming language. Providing the Java language,” added Tod Nielsen, General Manager for Microsoft is no longer able to use the Java language. Noisy Text Correction Imagine we receive a message where some of the characters have been corrupted by noise. We represent unknown characters by the ~ symbol (we assume we don’t use ~ in our messages). Add a method replaceUnknown that decodes a noisy message by replacing each ~ with the most likely character given our order k Markov model, and conditional on the surrounding text: def replaceUnknown(corrupted) # replace unknown characters with most probable characters Assume unknown characters are at least k characters apart and also appear at least k characters away from the start and end of the message. This maximum-likelihood approach doesn’t always get it perfect, but it fixes most of the missing characters correctly. Here are some details on what it means to find the most likely replacement for each ~. For each unknown character, you should consider all possible replacement characters. We want the replacement character that makes sense not only at the unknown position (given the previous characters) but also when the replacement is used in the context of the k subsequent known characters. For example we expect the unknown character in "was ~he wo" to be 't' and not simply the most likely character in the context of "was ". You can compute the probability of each hypothesis by multiplying the probabilities of generating each of k+1 characters in sequence: the missing one, and the k next ones. The following figure illustrates how we want to consider k+1 windows to maximize the log-likelihood: Using the algorithm described above, here are the results obtained for the following example: Original : it was the best of times, it was the worst of times. Noisy : it w~s th~ bes~ of tim~s, i~ was ~he wo~st of~times. Corrected (k=4): it was the best of times, it was the worst of times. Corrected (k=2): it was the best of times, in was the wo st of times. 2. Detecting authorship This problem appeared as an assignment in the Cornell course cs1114 . The Problem Statement In this assignment, we shall be implementing an authorship detector which, when given a large sample size of text to train on, can then guess the author of an unknown text. - The algorithm to be implemented works based on the following idea: An author’s writing style can be defined quantitatively by looking at the words he uses. Specifically, we want to keep track of his word flow – that is, which words he tends to use after other words. - To make things significantly simpler, we’re going to assume that the author always follows a given word with the same distribution of words. Of course, this isn’t true, since the words you choose when writing obviously depend on context. Nevertheless, this simplifying assumption should hold over an extended amount of text, where context becomes less relevant. - In order to implement this model of an author’s writing style, we will use a Markov chain. A Markov chain is a set of states with the Markov property – that is, the probabilities of each state are independent from the probabilities of every other state. This behavior correctly models our assumption of word independence. - A Markov chain can be represented as a directed graph. Each node is a state (words, in our case), and a directed edge going from state Si to Sj represents the probability we will go to Sj when we’re at Si. We will implement this directed graph as a transition matrix. Given a set of words W1, W2, …Wn, we can construct an n by n transition matrix A, where an edge from Wi to Wj of weight p means Aij = p. - The edges, in this case, represent the probability that word j follows word i from the given author. This means, of course, that the sum of the weights of all edges leaving from each word must add up to 1. - We can construct this graph from a large sample text corpus. Our next step would be finding the author of an unknown, short chunk of text. To do this, we simply compute the probability of this unknown text occurring, using the words in that order, in each of our Markov chains. The author would likely be the one with the highest probability. - We shall implement the Markov chain model of writing style. We are given some sample texts to train our model on, as well as some challenges for you to figure out. Constructing the transition matrix Our first step is to construct the transition matrix representing our Markov chain. First, we must read the text from a sample file. We shall want to create a sparse array using the scipy csr sparse matrix. Along with the transition matrix, we shall be creating a corresponding vector that contains 2 word frequencies (normalized by the total number of words in the document (including repeated words)). Calculating likelihood Once we have our transition matrix, we can calculate the likelihood of an unknown sample of text. We are given several pieces of literature by various authors, as well as excerpts from each of the authors as test dataset. Our goal is to identify the authors of each excerpt. To do so, we shall need to calculate the likelihood of the excerpt occurring in each author’s transition matrix. Recall that each edge in the directed graph that the transition matrix represents is the probability that the author follows a word with another. Since we shall be multiplying numerous possibly small probabilities together, our calculated likelihood will likely be extremely small. Thus, you should compare log(likelihood) instead. Keep in mind the possibility that the author may have used a word he has never used before. Our calculated likelihood should not eliminate an author completely because of this. We shall be imposing a high penalty if a word is missing. Finding the author with the maximum likelihood Now that we can compute likelihoods, the next step is to write a routine that takes a set of transition matrices and dictionaries, and a sequence of text, and returns the index of the transition matrix that results in the highest likelihood. You will write this in a function classify text, which takes transition matrices, dictionaries, histograms, and the name of the file containing the test text, and returns a single integer best index. The following figure shows how to detect an author k (A_k) of the test text t_1..t_n using the transition matrix P_k with MLE : Python Implementation from np import log def log0(x): return 0 if x <= 0 else log(x) def compute_text_likelihood(filename, T, dict_rev, histogram, index): ”’ Compute the (log) likelihood L of a given string (in ‘filename’) given a word transition probability T, dictionary ‘dict’, and histogram ‘histogram’ ”’ text = word_tokenize(open(filename).read().replace(‘\n’, ‘ ‘).lower()) num_words = len(text) text = [word for word in text if word in histogram] # keep only the words that are found in the training dataset ll = log0(histogram[text[0]]) – log0(sum(histogram.values())) for i in range(1, len(text)): ll += log0(T[dict_rev[text[i-1]], dict_rev[text[i]]]) return ll + (num_words – num_matches)*penalty def classify_text(tmatrices, dict_revs, histograms, filename): ”’ Return the index of the most likely author given the transition matrices, dictionaries, and a test file ”’ for i in range(len(tmatrices)): ll = compute_text_likelihood(filename, tmatrices[i], dict_revs[i], histograms[i], i) print i, ll Training Dataset The list of authors whose writings are there in the training dataset: 0. Austen 1. Carroll 2. Hamilton 3. Jay 4. Madison 5. Shakespeare 6. Thoreau 7. Twain A few lines of excerpts from the training files (the literature by several authors, taken from Project Gutenberg), the word clouds and a few states from the corresponding Markov Models Constructed for a few authors Author JANE AUSTEN (Texts taken from Emma, Mansfield Park, Pride and Prejudice). Author Lewis Carroll (Texts taken from Alice’s Adventures in Wonderland, Sylvie and Bruno). Author William Shakespeare (Texts taken from Henry IV Part 1, Romeo and Juliet, Twelfth Night). Author MARK TWAIN (Texts taken from A Connecticut Yankee in King Arthur’s Court, The Adventures of Huckleberry Finn, The Prince and the Pau. Classifying unknown texts from the Test Dataset Each of the Markov models learnt from the training texts for each author are used to compute the log-likelihood of the unknown test text, the author with the maximum log-likelihood is chosen to be the likely author of the text. Unknown Text1. log-likelihood values computed for the probable authors Author LL 0 -3126.5812874 1 -4127.9155186 2 -7364.15782346 3 -9381.06336055 4 -7493.78440066 5 -4837.98005673 6 -3515.44028659 7 -3455.85716104 As can be seen from above the maximum likelihood value corresponds to the author 0, i.e., Austen. Hence, the most probable author of the unknown text is Austen. Unknown Text. log-likelihood values computed for the probable authors Author LL 0 -2779.02810424 1 -2738.09304225 2 -5978.83684489 3 -6551.16571407 4 -5780.39620942 5 -4166.34886511 6 -2309.25043697 7 -2033.00112729 As can be seen from above the maximum likelihood value corresponds to the author 7, i.e., Twain. Hence, the most probable author of the unknown text is Twain. The following figure shows the relevant states corresponding to the Markov model for Twain trained from the training dataset. One thought on “Some Applications of Markov Chain in Python” Pingback: Sandipan Dey: Some Applications of Markov Chain in Python | Adrian Tudor Web Designer and Programmer
https://sandipanweb.wordpress.com/2018/01/12/some-applications-of-markov-chain/
CC-MAIN-2019-09
refinedweb
3,306
60.95
Insert"... A Statement will always proceed through the four steps above for each SQL query Executing Prepared Statement ; } Executing Prepared Statement Prepared Statement represents the pre... is no-parameter prepared statement example. Example- At first create table named student... String PreparedStatement statement = con.prepareStatement(query JDBC Prepared Statement Insert JDBC Prepared Statement Insert The Tutorial illustrates a program in JDBC Prepared... the code.set String ( ) - This is a method defined in prepared Statement class insertion in SQL - SQL insertion in SQL Query is "insert into employee values('"+eno... using prepared statement!"); Connection con = null; try... in the database because of single code in the name. dbase is MS-SQL emp.name data type PDO Prepared Statement for us to use a Prepared Statement for sending SQL statements to the database..., using this we can reduce the execution time. The prepared statement can... benefits: The only requirement in this statement is that the query should... of command in sql query to the database, In case all the commands successfully, return Usage of setDate() in prepared Statement - JDBC Usage of setDate in prepared Statement Hi, I have created a jsp...() of prepared statement,the following error is displayed: setDate() not available in prepared statement. Basically, I need to accept the date dynamically...; : Prepared statement is good to use where you need to execute same SQL statement... statement. Update record is most important operation of database. You can update one < JDBC Prepared Statement Update JDBC Prepared Statement Update The Update Statement... Prepared Statement Update. The code include a class Jdbc Prepared data insertion and fetch 1 data insertion and fetch 1 @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ <..." content="text/html; charset=UTF-8"> <title>JSP Page</title>..."); // String query = "select * from posting where ignite_id='"+id Creation and insertion in table ; statement and insert data using "insert " statement query in the created database. Before running this java code you need to paste "mysql...CREATION OF TABLE,INSERTION &DISPLAY OF DATA USING SQL JDBC Prepared Statement Example how to update the table using prepared statement. At first create a database...; } JDBC Prepared Statement java.sql.PreparedStatement is enhanced version... Insert Prepared Statement ); System.out.println("Connected to the database"); // Create a query String String query... a precompiled SQL statement. It is alternative to Statement At first Create named student a table in MySQL database as CREATE TABLE `student` ( `rollno` int(11 Prepared Statement With Batch Update Prepared Statement With Batch Update  ... PreparedStatementBatchUpdate Prepared Statement Batch Update Example! Added... facility. In batch update more than one records can be added in the database Deleting Records using the Prepared Statement Deleting Records using the Prepared Statement  ... DeleteRecords Delete records example using prepared statement! Number... the records from the database table by using the PreparedStatement interface Insertion into database Insertion into database Hi, I need code for inserting the multiple select box values into database.Please do send me the code. Thanks for ur immediate replies its helping a lot Set Timestamp by using the Prepared Statement example by using the Prepared Statement! 1 row(s) affected) Database... Set Timestamp by using the Prepared Statement... will teach how to set the Timestamp in database table by using Set Time by using the Prepared Statement Set Time by using the Prepared Statement  ...:\vinod\jdbc\jdbc\PreparedStatement>java SetTime Prepared Statement Set... the time in database table by using the PreparedStatement interface of java.sql Hibernate Prepared Statement This section contain how prepared statement works in hibernate insertion in SQL - SQL insertion in SQL Hi! Everybody... i have a problem with sql insertion. When i am inserting values through command i.e. insert into employee values(,,,,); here i want to insert ' in employee name column of database Prepared Statement Set Big Decimal Prepared Statement Set Big Decimal  ... the big decimal and how can be set it in the database table by using... decimal and which type to use this in the database. We know that using of both Update Records using Prepared Statement Update Records using Prepared Statement  ... through Prepared Statement! Updating Successfully! After... a question, what is updating the records. In the case of relational database Select Records Using Prepared Statement Select Records Using Prepared Statement  ... SelectRecords Select Records Example by using the Prepared Statement... that the PreparedStatement object represents a precompiled SQL statement. See brief JDBC: Delete Record using Prepared Statement JDBC: Delete Record using Prepared Statement In this section, we will discuss...; : Prepared statement is good to use where you need to execute same SQL... of student whose roll_no is 3 using prepared statement. package jdbc Prepared Statement Example . An INSERT INTO statement is used to insert the value into the database table... is used to make the SQL statement execution efficient. In Java, when we use the JDBC to work with database java.sql provides two interfaces for executing File insertion into oracle database File insertion into oracle database How to Read and Insert a file (any format) into a Oracle database problem in insert query - JSP-Servlet problem in insert query Hi! I am using this statement for data insertion into table but data is not moving only null passed and stored... Hi friend, We check your Query it is correct .If you have PHP SQL Insertion PHP SQL Insertion PHP SQL Insertion is used to execute MySQL queries in PHP script. It is used to send insert query or command that adds the records to MySQL error : not an sql expression statement error in connecting to database SQLserver 2005 in Jdeveloper,i m usin struts and jsp my pogram: import java.sql.*; public class TaskBO { public TaskBO... { Connection conn = DatabaseManager.getConnection(); Statement stmt Set Data Types by using Prepared Statement Set Data Types by using Prepared Statement  ... the prepareStatement takes SQL query statement and then returns the PreparedStatement... Statement! 1 row(s) affected) After executing the program: Database Table Select query in JSP Select query in JSP We are going to describe select query in JSP.... After that we create JSP page than we have make database connection. After that we use SELECT query. SELECT query is a retrieve the data from database than Prepared Statement Set Object Prepared Statement Set Object  ... the parameterized SQL statement to the database that contains the pre-compiled... PreparedStatementSetObject Prepared Statement Set Array Example! 1 Record Using the Prepared Statement Twice Using the Prepared Statement Twice  ... TwicePreparedStatement Twice use prepared statement example! List of movies... represents the precompiled SQL statement. Whenever, the SQL statement is precompiled PHP MySQLI Prep Statement PHP-MySQLI:Prepared-Statement mysqli::prepare - Prepares a SQL query and returns a statement handle to be used for further operations on the statement. The query should consist of a single sql query. There are two styles are available Inserting Records using the Prepared Statement Inserting Records using the Prepared Statement  ... records example using prepared statement! Enter movie name: Bagban... to learn how we will insert the records in the database table by using Count Records using the Prepared Statement Count Records using the Prepared Statement  ... to count all records of the database table by using the PreparedStatement... will know that how many records in a database table then you get easily with the help Set Date by using the Prepared Statement Set Date by using the Prepared Statement  ...\PreparedStatement>java SetDate Prepared statement set date example... for setting date in the database table by using the PreparedStatement interface Record using Prepared Statements ; : Prepared statement is good to use where you need to execute same SQL statement many...(); } } } Output : Insert Record using Prepared Statement...JDBC: Insert Record using Prepared Statements In this section, you will learn insertion error - JSP-Servlet insertion error my first jsp page : In this i m getting all the values through a method called getAllDetails,the values are getting inserted... into table. below is the codeof jsp and java pages; function writing querries qnd connecting for insertion, accessing.... writing querries qnd connecting for insertion, accessing.... i...:8080/examples/jsp/insert.jsp"> <table> <tr><td>Name:</td><... = DriverManager.getConnection("jdbc:odbc:student"); Statement st=con.createStatement file insertion - JSP-Servlet file insertion How to insert and retrieve .doc files into sql server with example using jsp and servlets Update statement Update statement I create a access database my program When I click... database is not update I write this program using 3 differfnt notepad pages MY...(); f.setVisible(true); } } this is my update query inside CarConnector.java public how to write a query for adding records in database how to write a query for adding records in database How write fire query in JSP for adding records in database Set byte, short and long data types by using the Prepared Statement Set byte, short and long data types by using the Prepared Statement... SetByteSortLong Set Byte,short and long example by using Prepared Statement... with MySQL database by using the JDBC driver. After establishing the connection Html+jsp+database is enough to do the small operation Html+jsp+database is enough to do the small operation Hai , If u want to do simple insetion and data retrival operation throw jsp ?.you need... result = null; Statement stmt = null; String Query=" INSERT data insertion from xml file to database table data insertion from xml file to database table Hi all, I have data in the XML file. I need to insert it into table in the database using servlet. so please reply me . ThankYou Accessing database from JSP by a database query. "stmt" is a object variable of Statement .Statement Class... Accessing database from JSP  ... ; This will create a table "books_details" in database "books" JSP Code database through jsp database through jsp sir actually i want to retrieve the data from database dynamically.because i dont know how many records are there in the database? thanks Here is an example of jsp which retrieves data from Inserting Data In Database table using Statement Inserting Data In Database table using Statement...; Table in the database before Insertion...: Table in the database after Insertion Database the database, create and populate tables, query individual tables. (You must...Database Hi, i need help building a database based on something like...), multiplicity (or cardinality), in the context of the database system (i.e. give query string database and printing on another page. i have to take that variable from servlet page to different servlet jsp page and that i want to do with query string so...query string on my servlet page i take the values of the field The DELETE Statement The DELETE Statement The DELETE statement is used to delete rows from a table. database will update that is why deletion and insertion of data will be done. Syntax   query query how to get data from database in tables using swings database query database query create table file1 ( file_id int , file_data text ); i'm unable to create this table and the error is invalid type text. plz help me javascript variable value insertion in DB javascript variable value insertion in DB how can I insert javascript variable value into database using php Query Query how can i set path and Classpath or Environmental Variable for jsp/servlet program to compile and run do not repeat values in databse during insertion using php and javascript do not repeat values in databse during insertion using php and javascript ... sql.php and form.jsp. i want to do insert values into mssql database from user input form. sql.php is able to insert the values into database. i have connected query - SQL tell me how to write the query in database but not jsp code. thank u...query hi sir i have 2 tables,one is user_details and the other is employee in my database. in user_details i have given a userid varchar Php Sql Query Insert Php Sql Query Insert This example illustrates how to execute insert query with values in php application. In this example we create two mysql query for insert statement with separate values in the database table. The table before query on strings query on strings i want to convert JTextField data into string... with a database element but it shows some error. Hi Friend, Try the following... = DriverManager.getConnection("jdbc:odbc:student"); Statement st dynamic delete and insertion in tables dynamic delete and insertion in tables hey... i have a problem..I am working on a problem management system..my code for a particular jsp page is as follows in this page i want to show the admin the already present records SQL Backup query with where statement SQL Backup query with where statement SQL Backup query with where statement ... in Backup query return you the restricted records into stu_tab_ Backup mysql query with ("jdbc:odbc:excel"). Execute query "select name, address from [Sheet1$]". Here... database using the loops. For more illustration, we are providing you java example... = DriverManager.getConnection("jdbc:odbc:excel"); Statement stmt = con.createStatement update statement in mysql update statement in mysql Update statement to update the existing records in database table. The given code creates a mysql connection and use the update query to update the record. To update record, we write query Login Query Login Query how to login with usertype in jsp page and redirect then whith user tyep here is my code <% Connection con=null; Statement stmt=null; ResultSet rst=null; String email=request.getParameter JDBC Batch Example With SQL Insert Statement for data insertion in the database table. 1. Create Statement object using...JDBC Batch Example With SQL Insert Statement: In this tutorial, we are discuss about insert SQL statement with the jdbc batch. First of all, we
http://www.roseindia.net/tutorialhelp/comment/86740
CC-MAIN-2014-52
refinedweb
2,263
56.96
i have a state file that writes the same size data file every 5 minutes but the data inside is only sometimes different. An example would be "event A = true" , but on the next write it might be "event A = false" I dont want it to reindex everything and spike the usage limit. Is there a way for me to just index the change from true to false? Thanks Write a scripted input that runs every 5 minutes (off cycle from the file update if possible). In your script, read the file. Emit the value of eventA (or whatever you want), to stdout. Don't monitor the file in any other way. Here is a python script that might do the trick: import re data = open("filePathHere").read() matchFound = re.search("(?P<matchString>eventA\s*?=\s*?\S+)",data) if matchFound: print matchFound.group('matchString') else: print "EventA not found" This will work best if the file is not too large. It will look for the first occurrence of eventA=something and print what it finds. It allows for optional whitespace around the equals sign (that's what \s*? does). If you set this up as a scripted input in Splunk, you will see only the following added to Splunk every 5 minutes: eventA=something how would the script integrate into splunk? would it just retrieve the latest updates and insert it? When you go into the Splunk Manager, under Data Inputs >> Scripts, click the New button. Fill in the info, including the stuff under More Options. Notice that your script has to be placed in a particular directory. After you complete this setup, Splunk will run the script at the interval you have selected, and will index the output of the script. For more info:...
https://community.splunk.com/t5/Getting-Data-In/State-File-Monitoring/td-p/74054
CC-MAIN-2020-34
refinedweb
295
74.19
On Wed, 4 Oct 2000, Keith Owens wrote:> Rik van Riel <riel@conectiva.com.br> wrote:> >Sysrq-T is broken on x86 ;((((((((> > show_task() calls thread_saved_pc() which is giving bad results.> Getting the correct PC for blocked threads is easy,> > Index: 0-test9-pre9.3/include/asm-i386/processor.h> --- 0-test9-pre9.3/include/asm-i386/processor.h Tue, 08 Aug 2000 16:14:08 +1000 kaos (linux-2.4/P/18_processor. 1.1.1.5 644)> +++ 0-test9-pre9.3(w)/include/asm-i386/processor.h Wed, 04 Oct 2000 01:48:32 +1100 kaos (linux-2.4/P/18_processor. 1.1.1.5 644)> @@ -411,7 +411,7 @@ extern void forget_segments(void);> * Return saved PC of a blocked thread.> */> extern inline unsigned long thread_saved_pc(struct thread_struct *t)> {> - return ((unsigned long *)t->esp)[3];> + return (t->eip);> }> > But it does not give you much. Thread esp and eip are only> saved during switch_to(), at which point eip always points to> schedule+0x42c.Yup ;)So this function will need to look at the call trace andgive the function that called schedule() ..
https://lkml.org/lkml/2000/10/3/68
CC-MAIN-2017-13
refinedweb
181
75.71
Gentoo Wiki talk:Guidelines/Archive 1 After being marked as complete, the following discussions have been moved from the Gentoo_Wiki_talk:Guidelines article. Discussions generally occurring between July 2012 to August 2015. Guidelines Guideline I would like to suggest that ideas get vetted here before they are moved to Gentoo Wiki:Guidelines. /Ni1s 16:51, 13 March 2012 (UTC) Categorization After discussion on IRC, I propose we do not add articles to the parent categories, when they are categorized for their subcategories. — yngwin 11:35, 2 July 2012 (UTC) - Works for me. No objections since July 2, 2012. Let's call this done. --Maffblaster (talk) 18:38, 11 April 2016 (UTC) Creating blocks I'd like to discuss a change to the guidelines, namely to allow a blank line after a title (right now it says that there shouldn't be any). I recommend this for the following reasons: - When enabling translations on the pages, without a blank line the chapter title and the text would be seen as a single block - In my experience with technical writing, having the titles clearly separated from the text makes it easier to (re)view - By aggregating the title with a paragraph, an editor would suggest that these two need to be seen as a single entity, which isn't the case. A title is used for the entire set of paragraphs (and other entries) until the next title. For readers, this doesn't matter (no difference in how the page is displayed). — The preceding unsigned comment was added by SwifT (talk • contribs) 13 April 2014 US English as language? I think we need to request contributors to use one "English", of which I suggest to use US English (American English). Both are fine for me, but my experience with writing documentation (and technical editing) seems to suggest that American English is more popular to be used (for instance, publishers of IT technical books often request it to be in American English). --SwifT (talk) 18:34, 28 April 2014 (UTC) - Probably makes sense to encourage (but not necessarily require) American English, since most other Gentoo-related documentation is already in that variant. (Full disclosure: I am American.) Is there anything in particular that raised this issue in your mind? - dcljr (talk) 03:30, 17 August 2014 (UTC) - The topic came forth when I noticed someone modified a page that I was watching just to switch to British English, which was imo not that useful and could trigger retranslation events (unless the translation approval tells that all changes do not need translation). --SwifT (talk) 11:01, 6 December 2014 (UTC) Sentence case in page titles As I have always done on every wiki I've been (or potentially was going to be) active on, I strongly suggest encouraging sentence case in page titles as much as possible, to make it easier to link to other articles in running text (as I just did to a Wikipedia article [technically, a redirect] in this sentence). I see that this seems to already be the convention in much of our featured documentation, but there are several subpages, like MySQL/Guide, that use title case instead; this may be OK (apart from consistency issues) since you'd have to "pipe" that link to use it in prose text anyway—but I still don't like it.[g] Fortunately, very few of our "top-level" (i.e., not a subpage of another article, so title not containing a /slash) featured pages use title case (one example is Mailfiltering Gateway). See the many examples at the aforementioned Wikipedia article for an indication (however contrived) of how much of a nightmare guessing the right capitalization can be if you don't pick a simple, standard guideline and stick with it. (Technically, I'm suggesting that each page title itself be in "sentence case" but links to it in running text be in what they call "mid-sentence case" — which is made possible by the first character of a link being case-insensitive.) - dcljr (talk) 04:12, 17 August 2014 (UTC) - OBTW: sentence case is already relatively established in our page titles, but section headings are a completely different matter. See, for example, the section headings in Gentoo installation tips and tricks, which use a mixture of sentence and title case. I recommend sentence case here, too. - dcljr (talk) 04:19, 17 August 2014 (UTC) Links See Also? External Links? Further Reading? or just plain Links? And what about related material? /Ni1s 16:48, 13 March 2012 (UTC) - I propose External resources, as I've started using on the Zsh page. — yngwin 14:26, 19 June 2012 (UTC) Recommended Keyword: ~amd64 or ~arch Putting {{Key|~amd64}} in architecture unspecific articles could be misleading for users of other arches. Wouldn't it be better to recommend putting {{Key|~arch}} for those general cases? --Charles17 (talk) 06:21, 8 May 2015 (UTC) - That's a fair point. I'll update the syntax there, but leave ~amd64 for the example paragraph. --Maffblaster (talk) 06:38, 8 May 2015 (UTC) Section Formatting overlaps with Help:Formatting The sections Gentoo_Wiki:Guidelines#In-line_layout_elements and Gentoo_Wiki:Guidelines#Use_of_newlines are widely overlapping with Help:Formatting#Text_formatting_markup, Help:Formatting#Paragraphs and Help:Formatting#HTML_tags. This is confusing for both readers and editors. IMHO there should be one single source with these information. By transclusion the source content could then be included in the page, but a link leading there should be sufficient. --Charles17 (talk) 06:19, 14 May 2015 (UTC) - I think we should indeed watch over unnecessary duplication. Having multiple authorative sources doesn't help, especially if they are not in sync. The namespaces do imply some difference in how they should be read or interpreted though. The Help namespace is perhaps interpreted by people to be general for mediawiki, whereas the Gentoo_Wiki namespace is about this wiki. - With the guideline page, we want to inform contributors what the specifics are on developing documentation and contributing to the wiki. - But I would definitely not mind updating the Help:Formatting page for this and link to it from the guidelines page. - --SwifT (talk) 07:30, 18 May 2015 (UTC) - I have updated the Help:Formatting article with a tip pointing people to this location for Gentoo-specifics on formatting articles. Feel free to change the wording if you think it needs more clarity. --Maffblaster (talk) 18:10, 5 June 2015 (UTC) <tt> tag is not supported by HTML5 According to w3schools.com the <tt> tag, which we are currently using in the Wiki Guidelines, has been depreciated and is not supported in HTML5 (although browsers will still provide support). If we want to move away from that tag we have a lot of editing to do. Since the text being marked up by the <tt> tag will still render in browsers properly, it is not really a big deal, however this is some evidence that supports the case I was building in the past for using templates for all markup performed inside the Wiki articles. ;) That aside, I suggest we change the Guidelines to include the {{c}} template in place of the <tt> tag for commands or GNU/Unix concepts. If interested take a look at the {{c}} template before commenting. Just for reference, the following is a list of in-line markup tags that are available in HTML5: - em: Renders as emphasized text - strong: Defines important text. - var: Defines a variable. - code: Defines a piece of computer code. - kbd: Defines keyboard input. - samp: Defines sample output from a computer program. Thoughts? --Maffblaster (talk) 22:54, 6 August 2015 (UTC) - The <samp>tag shows the output we would need for most of our <tt>usage. I still find the <kbd>to be too hard in the contrast it has in the document. It detracts from the content if it is used many times. - If I look at the Template:C page I see no information? All I can do is try it out to see how it looks. - I'm okay with using a {{c}}tag to replace the <tt>ones. - --SwifT (talk) 08:31, 7 August 2015 (UTC) - I took the liberty of adding the element names to the list in Maffblaster's comment, to make discussing it easier. To address the actual point: I would have a problem with using any of these elements (except 'em' and 'strong') purely for their visual appearance. The last four elements have actual meanings which we should stick to if we use them (even if only inside of templates). If we don't like the look of any of these HTML elements, that can be changed in the site-wide style sheet. (Also, can we not move this section to the bottom, so the page remains in chronological order?) - dcljr (talk) 09:58, 7 August 2015 (UTC) - dcljr, I moved the closed discussions to the archive sub-article and left the remaining few open discussions. I wasn't thinking about the open discussion's order of appearance at the time. Since the <tt> tag as been partially dealt with I'm marking this discussion as closed. --Maffblaster (talk) 23:33, 11 August 2015 (UTC) I hate to purpose a change this late in the but, but I think <var> tags for be better for variable names whenever they are referenced in in-paragraph text. We should definitely continue using <code> tags for variable values. The <var> tags function is to be used as I propose here, so it seems we should use it 'as intended' in our formatting here on the Wiki. For example: Set the USE variable to python. - I'm okay with that. Too bad HTML doesn't declare a <val> ;-) --SwifT (talk) 18:29, 27 August 2015 (UTC) - I added it to the Guidelines list of formatting stuff. Should be good to go! I'll leave this discussion on here for at least a week just in case someone else would like to comment. Sometime after that I will send it off to the archive. --Maffblaster (talk) 07:34, 28 August 2015 (UTC) Third person and passive voice are preferred, "you" is discouraged Can you explain why? I've read the citation, but it seems to recommend the exact opposite. The gist of the citation, as I understand it, is that second person is standard practice, third person is less effective, and passive voice is often inappropriate. The reason I'm asking as that there've been some edits recently that go great lengths to replace "you" with strange and cumbersome formulas. For example, in the "Custom Initramfs" article (which inherently is a guide that tells you what you have to do yourself to get things going on your machine), the sentence "So everything you need, everything you want, you have to include it in your initramfs" turned into "Everything that is needed, everything that could be wanted, must be included in the initramfs." and this change leaves me utterly flabbergasted, at a complete loss of words. The only reason I haven't undone this edit is that I don't do edit wars on principle, and I'd lose anyway as I'm not the one who decides things around here. Can you show me one other popular Linux wiki that discourages the use of "you" because I can't remember seeing one; there is lots of "you" in the ArchLinux wiki for example, as in the Ubuntu Community wiki. And I just don't think they are doing it wrong... Frostschutz (talk) 18:23, 8 March 2015 (UTC) - The selection between second and third person writing, or passive voice writing, is indeed something that a wiki needs to decide on itself. The first article is mainly a resource for telling not to use first person writing. Most of the technical writing I do (outside Gentoo) uses passive voice (manuals and guidelines mostly). Also, the publisher I have some experience with (Packt Publishing) also prefers that its resources are written in third person. - As for the example, I think wikipedia('s manual of style) can be seen as the most authorative source on wiki-writing. Of course, I can cite a number of resources but those are biased (as I would then just search for "technical writing third person" and find a number of resources). Hence I think that wikipedia's style is a good source to base upon. - I do agree that blindly transcribing from an active, first person voice to a third person (or even truly passive voice) can become very awkward, and such sentences should then be updated or even rewritten. Often, this is an iterative approach so I really encourage you (and everyone else) to help us write beautiful articles while retaining a common style so that our wiki becomes a true source of wisdom ;-) - Hopefully this brings over my approach to this. I'll probably add the wikipedia source to the reference list as well. - I completely agree with the Wikipedia manual of style - but that's because it's an encyclopedia. It doesn't have anything to do with being a wiki. Articles in the Wikipedia never tell you what to do, they just describe things. Gentoo Wiki articles rarely describe things, they always tell you what to do to make stuff work. And the way I see it, without "you" that'll always be awkward, so it's not just a transcribing issue. Awkwardness aside, it also makes sentences much harder to understand (for me anyway), and much much harder to write. - Gentoo Wiki articles also aren't scientific papers that describe how a particular experiment worked, which seems to be what the other resources you linked are based on. Same with the Wikipedia style, I think I completely agree with those resources; I just don't think the shoe fits here. - Frostschutz (talk) 20:55, 9 March 2015 (UTC) - Thank you for your feedback. - I don't fully agree that the Gentoo wiki is completely different from a Wikipedia setup. You are correct though that it is not (only) information dissemination but also user guides, which is not found on Wikipedia. - For me personally, third person voice is easiest (like I said, it is something I've been asked to use by companies while doing technical writing work on their manuals, or by Packt Publishing for their resources), but I don't want to force this to others, so let's hear if there are others who would prefer to have (or support) second person as well. I can live with a mixture of second and third person voice if that is generally the preferred guideline. - I thought I'd finally weigh in on this topic of discussion. I agree with SwifT, generally speaking technical writing manuals (I can site sources if needed) instruct technical writers to explicitly use third person when writing documentation. The powers that be have defined this as the proper technical standard. Personally I believe third person is best suited for this the Wiki, since Gentoo is (currently) one of the more technical distributions available, and most of the articles on the Wiki are explicitly technical in nature. Also, with any technical writing, it is important to be as precise as possible without being overly lengthy. - Sometimes when second person pronouns ("you" and "your") are used they are not precise enough, or are not technically true in context of what is being written. For example, the statement "Update your bootloader" is not as precise as "Update the system's bootloader" even though the second sentence is only one word longer. The difference being the bootloader might not belong to you in the sense of possession. Maybe it belongs to the company you work for. Looking at it another way, you, technically speaking, to not have a bootloader, but computer systems do. That might not be the best example of why technical writing needs to be precise, but I believe it is one reason why technical writing instructs writers to not use second person pronouns. - It might be a bit inconvenient to convert all the articles to third person, but I believe it will be well worth it in the end for precise and excellent documentation. That's what we're striving for, right? And, hey, I have been making major progress recently! I hope this input is viewed as helpful. :) --Maffblaster (talk) 18:50, 27 March 2015 (UTC) - An example of technical documentation is the Golang function reference or the Golang language specification. Those have a distinct lack of "you" because it's purely technical documentation of a package, its functions and return values, or of the language itself. I agree it would be wrong to use "you" there because it would be, as you say, less precise. I don't think we have documentation like this in the Wiki. - Compare it with every other official Golang documentation (Effective Go, Golang FAQ, Golang Wiki, ...). Suddenly there's "you" all over the place. It's not technical documentation in the strictest, objective sense, but rather it's documentation that's meant to teach you something. Replacing "you" with an arbitrary user or programmer (even though it clearly means *you*), would turn a perfectly natural language into something cumbersome and thus less precise. It's not progress, it's backwards. I think this is much closer to what we have in the Wiki. - Frostschutz (talk) 15:27, 28 March 2015 (UTC) - Minor grammatical quibble: Even in an imperative such as "Update the system's bootloader", there is an implicit "You" at the beginning. [grin] To truly avoid second-person, you would— er, I mean… passive voice would have to be used. Trying to completely avoid second-person in articles will inevitably result in very awkward sentence constructions — especially considering that most articles here (I assume) contain at least some instructions that the reader can follow to accomplish something. As for the Wikipedia analogy, while some of our articles might be somewhat encyclopedic in nature, the type of documentation I just alluded to has more in common with texts at Wikibooks, and their Manual of Style doesn't mention active/passive voice or first/second/third person at all. My recommendation, therefore, would be a hybrid approach: encourage the kind of "technical" or "encyclopedic" style that has been mentioned here when presenting purely factual information, but allow the use of second person (either implicitly or explicitly) in instructional content (i.e., "do this to accomplish this task"), especially if it results in more natural, understandable prose. (After all, part of having clear, correct documentation is making it obvious when the user is being asked to actually do something.) - dcljr (talk) 04:22, 29 March 2015 (UTC) Formatting info I'm noticing quite a few edits that do little more than changing USE to USE to USE to USE and back. Or gpg to gpg to gpg, etc. Lack of some guidelines on this matter will keep on resulting in somewhat useless edits. These edits do not do anything on the content (although some of them make the article harder to read instead of easier) and result in lots of notifications about modified articles. Users who are trying to keep the quality of the articles up to par use watchlists, but such edits are cluttering the watchlists unnecessarily. It might frighten some contributors to properly watch over articles. Translators also need to continuously update their articles with little benefit - do they need to follow the same formatting changes or not? This keeps their workload away from the more important stuff, such as translating remainders of articles. I've created an updated Gentoo Wiki:Guidelines which attempts to cover formatting as well as some of the options mentioned above. I'd like to use this one as the "official" guideline (or at least try to cover similar information and detail), but if not it at least will be used by myself to streamline my own edits. --SwifT (talk) 17:04, 31 December 2014 (UTC) - I am going to update the Guidelines document with Gentoo Wiki:Guidelines on or after March 10th if there are no objections. --SwifT (talk) 17:05, 3 March 2015 (UTC) - Didn't notice this discussion until your edit to the page. IMO, much of the formatting in the "In-line layout elements section should be handled by templates. This allows site-wide standardization to a much greater degree than a mere guideline (makes it much easier for editors to notice and remember how different things should be formatted, and allows instant [caches notwithstanding] site-wide changes, if any formatting choices are changed). It also allows more complicated formatting of the items, including linking to sources of further information, possible tool-tip style explanations, etc., as desired. - dcljr (talk) 23:44, 10 March 2015 (UTC) - To be more specific, I guess my main objection here is to encouraging the use of <code> for anything except inline snippets of code. I suppose one could argue that all of the currently suggested uses (including variable names, variable values, and command arguments/options) are technically snippets of code, but then why wouldn't command names be interpreted that way, as well? In any case, the replacement templates I'm suggesting be created (e.g., {{var}}, {{arg}}, {{opt}}, or whatever) would (initially, at least) simply use <code> tags to accomplish their tasks, but they could do more, as I alluded to in my previous comment. As for <tt>, that tag is problematic, since its "literal meaning" (typewriter type) is so general (or simply not relevant to this wiki, depending on your perspective) that it doesn't clearly stand for anything in particular. Again, more specific templates could be created that use <tt>, but its use would be "hidden" behind the more meaningful template names (e.g., {{command}}, {{term}}, {{linux-term}}, whatever). And, incidentally, we could also create Template:Code (moving the current code template to {{CodeBox}}) and {{kbd}}, just for editors who have trouble remembering that these cases are handled by the corresponding HTML tags. [g] Finally, note that I've already created {{keyword}}, which I have just changed to use <tt>, in line with these new guidelines. - dcljr (talk) 02:11, 11 March 2015 (UTC) - I agree with Dcljr. If/when the look and feel of the Wiki needs to be updated, using templates for in-line formatting in the articles will provide flexibility where in-line HTML tags will not. I also like the idea that tool-tip style hints to more information could be provided with templates, where the same is not possible using HTML tags. I'm not saying that all templates should have hints, but I like having the option available. - I've been seeing the need for additional templates while recently editing a few different articles. The {{Path}} template is currently the best suited template for instructing the reader through GUI "paths" (clicking buttons, check boxes, menu items, etc.), and I've been using as such, but I think that was a poor choice on my part. The {{Path}} template and the <tt> tag do not provide enough differentiation for in-line instructions in certain cases. This is especially true when addressing filesystem paths, terminal commands, and GUI menu "Paths" all in the same paragraph. I believe comes back to making style changes easily possible using a full template system for all of the styles. I suggest doing this for any formatting/style items that may need to be sightly tweaked for readability's sake in the future. Some of these formatting/style issues can be viewed on the PPC FAQ article and the main FAQ article. - As Dcljr stated, I suggest being more specific with each item listed in the in-line layout elements. This would mean providing a few new templates ({{Var}}, {{Arg}}, {{Opt}}, {{GuiPath}}) and then updating the existing articles accordingly. I would have no problem updating all the articles to these standards. After the articles are updated style/formatting tweaks could be made as needed to all the articles at once. --Maffblaster (talk) 20:01, 27 March 2015 (UTC) - Personally, I'm fine with more semantic meanings to tags. But be aware that this is one of the reasons why GuideXML (the predecessor of Gentoo's documentation) did not get much attention, as asking contributors to know semantics is creating a higher learning curve. That being said, the disadvantage of XML (which is forcing stuff to users), the wiki could allow contributors to relatively ignore these semantic taggings and rely on the editors of the wiki to update the guides accordingly. So I'm in favor (but don't ask me to create the templates, I rather let that be done by people who know mediawiki stuff better ;-). --SwifT (talk) 08:58, 28 March 2015 (UTC)
https://wiki.gentoo.org/wiki/Gentoo_Wiki_talk:Guidelines/Archive_1
CC-MAIN-2021-43
refinedweb
4,143
58.72
varoius 1. A class which extends System.Web.UI.WebControls.Calendar 2. The necessary properties : EventSource : DataTable with Event Details, EventStartDateColumnName : ColumnName of the Column of the type DateTime in the EventSource, which stores the Start Date associated with Events EventEndDateColumnName : ColumnName of the Column of the type DateTime in the EventSource, which stores the End Date associated with Events EventHeaderColumnName : ColumnName of the Column of the type String in the EventSource, which stores the Event Header, EventDescriptionColumnName: ColumnName of the Column of the type String in the EventSource, which stores the Event Detailed Description, ShowDescriptionAsToolTip : Boolean to determine whether to display Event Description as Tool Tip or not. EventForeColor: ColumnName of the Column to specify the Fore ( Font ) Color for the event. We can specify any Color Name which belongs to System.Drawing.Color namespace. EventBackColor: ColumnName of the Column to specify the Back Color for the event.We can specify any Color Name which belongs to System.Drawing.Color namespace. 3. EventCalendarDayRender Event : Place where the actual logic to show the events in Calendar is implemented This describes the skeleton for your EventCalendar Class. Refer to the EventCalendar.cs class in the attached demo for complete implementation. By extending the control in such way we can keep the basic features provided along with the new features/capabilities that we need to cater to our requirements.. Please spare some time to rate and provide feedback about this article. Your couple of minutes can help in enhancing the quality of this article. If interested, Click here to view all my published articles. General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/aspnet/EventCalendar.aspx
crawl-002
refinedweb
269
52.8
Find the Missing Number You are given a list of n-1 integers and these integers are in the range of 1 to n. There are no duplicates in list. One of the integers is missing in the list. Write an efficient code to find the missing integer. Example: I/P [1, 2, 4, ,6, 3, 7, 8] O/P 5 METHOD 1(Use sum formula) Algorithm: 1. Get the sum of numbers total = n*(n+1)/2 2 Subtract all the numbers from sum and you will get the missing number. Program: #include<stdio.h> /* getMissingNo takes array and size of array as arguments*/ int getMissingNo (int a[], int n) { int i, total; total = (n+1)*(n+2)/2; for ( i = 0; i< n; i++) total -= a[i]; return total; } /*program to test above function */ int main() { int a[] = {1,2,4,5,6}; int miss = getMissingNo(a,5); printf("%d", miss); getchar(); } Time Complexity: O(n) METHOD 2(Use XOR) 1) XOR all the array elements, let the result of XOR be X1. 2) XOR all numbers from 1 to n, let XOR be X2. 3) XOR of X1 and X2 gives the missing number. #include<stdio.h> /* getMissingNo takes array and size of array as arguments*/ int getMissingNo(int a[], int n) { int i; int x1 = a[0]; /* For xor of all the elemets in arary */ int x2 = 1; /* For xor of all the elemets from 1 to n+1 */ for (i = 1; i< n; i++) x1 = x1^a[i]; for ( i = 2; i <= n+1; i++) x2 = x2^i; return (x1^x2); } /*program to test above function */ int main() { int a[] = {1, 2, 4, 5, 6}; int miss = getMissingNo(a, 5); printf("%d", miss); getchar(); } Time Complexity: O(n) In method 1, if the sum of the numbers goes beyond maximum allowed integer, then there can be integer overflow and we may not get correct answer. Method 2 has no such problems. Writing code in comment? Please use ideone.com and share the link here. - Sourabh Upadhyay - mb1994 - gandhi_rahul - Meenal - Sumit Khanna - Amit Baghel - Kedar - vishal11 - guest - Kaustav Chatterjee - Abhay - Abhay - Nagaraju - Chinmaya - Shivam Maharshi - anonymous - Aman - Aman - rohith - rohith - Arpit - cxin - pr6989 - pr6989 - Yogesh Batra - pr6989 - Shyam - kartik - Meenal - neilljohnson - wgpshashank - ravikant - Meenakshi - Suesh PV - Jithan - TJ - TJ - Deepak
http://www.geeksforgeeks.org/find-the-missing-number/
CC-MAIN-2015-22
refinedweb
384
64.95
Linked lists & Binary Trees Linked Lists and Binary Trees are an essecial part of programming. As these data structures make our code run a lot faster and gives us more capabilities. The best example of this would be an autocomplete function, in which you put in the first few letters of a word, and are given a suggestion as to what word it thinks you might be typing. If this was done with a regular array of letters as opposed to a Binary Tree, then the run time would be unbarible. You would be waiting a long time for a suggestion that not only might be wrong, but will take away from the rest of your computers efficiency. As a coder, we love efficiency! So lets waste no time on introductions and jump right into how to build one, and how they work! Linked Lists and Binary Trees are made up of one very important element known as a Node. A Node is kinda like a bucket that holds our information inside of it, along with instructions as how to get to it’s next destination. Linked List and Binary Trees are not built in functions of python, unlike dictionaries and arrays. So we will have to build our own using class objects. #!python3#Node example class Node(): def __init__(self, data): self.data = data self.pointer = Nonenode1 = Node('1') Nodes can contain multiple pointers, and in a Double Linked List require directions for both the next node and before it. We can set those directions to assign the order of our Linked List as seen below. The code below goes over how to build a Double Linked List and some of it’s basic methods. #!python3#Double Linked list example class Node(): def __init__(self, data): self.data = data self.next = None self.previous = None class double_ll(object): def __init__(self): self.head = None self.tail = None # creates our linked list methods def append(self, item): # creates a node to add to the END of our list new_node = Node(item) # Check if this linked list is empty if self.head is None: # Assign head to new node self.head = new_node # sets previous to none, as it is the beggining of the list new_node.previous = None # Otherwise set previous as tail & insert new node after tail else: new_node.previous = self.tail self.tail.next = new_node # Update tail to new node regardless self.tail = new_node def prepend(self, item): # Create a new node to add to the BEGINNING of our list new_node = Node(item) if self.head is None: # if empty make head new node self.head = new_node self.tail = new_node else: # otherwise set new node to head new_node.next = self.head new_node.previous = None self.head = new_node def delete_from_tail(self): # Delete the last item in the linked list current = self.head # Searches for the end of our double linked list while current != None: if current.next == self.tail: break # focus on the node right before the tail current = current.next """you can't really delete items in a linked list so we remove access to it completely, and therefore have removed it from the list""" current.next = None current.previous = None self.tail = current def delete_from_head(self): second = self.head.next second.previous = None self.head = sencond """ And becuase we have created a double link list, we can traverse through it forwards and backwards! """ def forward_print(self): current = self.head while current != None: print(current.data) current = current.next def backward_print(self): current = self.tail while current != None: print(current.data) current = current.previous Despit Double Linked Lists being a bit more work to make, they can be very useful! Learning how to create Linked Lists can help you create stacks and queues for recursive functions. “But Anthony!”, I hear you asking, “what’s the difference between a Single Linked List and a Double?”. Simply put, a Single Linked List has only one direction it’s going in, while a Double Linked List goes both ways. I drew a demonstration below (sorry for the sloppy handwriting). The circles represent our nodes, that are holding ints as data. The node.next defines the next node’s objects id in our list, which helps our list find it’s way to our second node -which is holding the data of 2. A node.previous is added, as to go the other direction in our Double Linked List. Just as the tail points to None, so will our node.previous when at the beginning of the list. The second reason I explained a Double ll, rather than a Single, is because we can use two different directions to our advantage -when creating a Binary Tree. There is no .next or .previous attribute. However, there is a .left and .right attribute. We will use these in order to build our tree, and give or nodes instructions on where we would like it to go. So grab some tea and take a deep breath before jumping into our next large section of code. #!python3# Binary Tree Example class Node: def __init__(self, data): self.data = data # left and right attributes self.left = None self.right = None# creating our nodes node1 = Node(4) node2 = Node(2) node3 = Node(6) node4 = Node(1) node5 = Node(3) node6 = Node(5)# building our tree root = node1 node1.left = node2 node1.right = node3 node2.left = node4 node2.right = node5 node3.left = node6# search the tree def search(node, target): if node is None: # checks if there are no items left to search return None elif node.data == target: # checks if the node is our target return node elif node.data < target: # go right return search(node.right, target) else: # go left return search(node.left, target)"""The function will recursively calls itself and check if the value is higher or lower then what we are looking for. For example, we call the function looking for 5. The funcion will figure out that our root node(node1.data == 4) is too small, and therefore go right as a result. Then when it reads our node3.data, it will determine that it is less than 6, and go left as a result. Finally finding 5, and returning the result.""" result = search(root, 5) # now lets insert a new node into our tree node7 = Node(7)def insert(node, new_node): if new_node.data > node.data: # put new child on right if space if node.right is None: node.right = new_node return # otherwise keep looking else: insert(node.right, new_node) if new_node.data < node.data: # put new child on the left if space if node.left is None: node.left = new_node return # otherwise keep looking else: insert(node.left, new_node)"""Here we created a Node with the value of 7. This function will recursively call itself until it finds an open spot for 7 -that makes sense in the database. Simularly to our search funtion, it will check -if it cannot find an empty space for our node- if our node's value/data belongs on the left or right side based on how big it is. You can choose any node to try and insert to, but for this example we have used the root node to cycle threw the entire tree.""" insert(root, node7) insert(root, Node(8))def delete(node, target): result = search(node, target - 1) if result.left == target: result.left = None elif result.right == target: result.right = None # does a second search to try and find a parent node result = search(node, target + 1) if result.left == target: result.left = None elif result.right == target: result.right = None # else, we print a not found response else: print(f"Could not find {target} in Tree.")"""Here we take advantage of our search function in order to find the previous node. Becuase we are trying to delete 8, we have to look for the node with a value of 7. Note that if we were looking for 5, we would have to look for the parenting node, with a value of 6. As well, this example only works for integer based binary trees.""" delete(root, 8)# Finally we can print our tree out. def in_order_traversal(node): if node is not None: #traverse in_order_traversal(node.left) print(node.data) in_order_traversal(node.right) else: return Nonein_order_traversal(root) And this is what our Tree should look like after it is all said and done. (apologies again for the bad handwriting) And now a pretty picture to relax your eyes! I know looking at so much text and code can be sore for eyes. Please comment on any questions you might have, and if there’s anything I can do to improve this explanation! But most importantly, happy coding, and have a great day!
https://makemesenpai.medium.com/linked-lists-binary-trees-b54fa2906822?source=post_internal_links---------2----------------------------
CC-MAIN-2022-33
refinedweb
1,459
76.52
In this tutorial we will learn about the the PrintWriter class in Java.In this tutorial we will learn about the the PrintWriter class in Java. In this tutorial we will learn about the the PrintWriter class in Java. java.io.PrintWriter class is used for format printing of objects. To print these objects to the text output stream all the print method of PrintStream class is implemented by this class. This class extends Writer class and writes the character data. However, to write on the console in Java System.out is provided, except that the PrintWriter is also provided to write on console. PrintWriter is a character based class which makes the Java program easy for Internationalization. Objects of this class can be created by using various of its Constructors. Constructor Details Methods Detail Example Here I am giving a simple example which will demonstrate you about how to use the PrintWriter in the Java applications. In this example I have created a PrintWriter object using the constructor PrintWriter(File file) and various of methods to print/write the characters to the file. Source Code JavaPrintWriterExample.java import java.io.PrintWriter; import java.io.File; import java.util.Locale; public class JavaPrintWriterExample { public static void main(String[] args) { String s = "PrintWriter"; int i = 50; try { File file = new File("file.txt"); // create a new PrintWriter with the specified file PrintWriter pw = new PrintWriter(file); // format text with specified locale. // %s indicates a string will be placed there, which is s pw.format(Locale.US, "This is a %s example", s); // usage line.separator pw.println(); // write integer pw.println(i); // write character pw.write(i); // usage line.separator pw.println(); // format text with specified locale // %d indicates a integer will be placed there, which is 100 pw.format("This is a %s example with %d", s, i); System.out.println(); System.out.println("Data written to the file successfully"); // flush the writer pw.flush(); PrintWriter pw1 = new PrintWriter(System.out); pw1.println(); pw1.println("Data on console : "); pw1.format(Locale.US, "This is a %s example", s); pw1.println(); pw1.println(i); pw1.write(i); pw1.println(); pw1.format("This is a %s example with %d", s, i); pw1.println(); pw1.flush(); } catch (Exception ex) { System.out.println(ex); } } } Output When you will execute the above example you will get the output as follows : Source code of this example can also be downloaded from the link given below. Ads
https://www.roseindia.net/java/example/java/io/printwriter.shtml
CC-MAIN-2019-22
refinedweb
410
51.85
Answering React-related questions on Stack Overflow, I've noticed that there are a few main categories of issues people have with the library. I've decided to write about top 6 common ones and show how to handle them in hopes that it'll be helpful to those new to React or anyone in general Both pitfalls of using Class based components and Functional components that use hooks are covered interchangeably. The state in React is considered immutable and therefore should not be directly changed. A special setState method and the setter function from useState hook should be used instead. Consider the following example, where you'd want to update checked field of a particular object in array, based on the state of a checkbox. const updateFeaturesList = (e, idx) => { listFeatures[idx].checked = e.target.checked; setListFeatures(listFeatures); }; The issue with this code is that the changes to the state won't be reflected in the UI since the state is updated with the same object reference and therefore it doesn't trigger a re-render. Another important reason for not mutating the state directly is that due to it's asynchronous nature later state updates might override the ones made directly to the state, resulting in some evasive bugs. The correct way in this case would be to use the setter method of useState. const updateFeaturesList = (e, idx) => { const { checked } = e.target; setListFeatures(features => { return features.map((feature, index) => { if (idx === index) { feature = { ...feature, checked }; } return feature; }); }); }; By using map and object spread we're also making sure that we're not changing the original state items. Setting the initial state values to null or an empty string and then accessing properties of that value in render as if it's an object is quite a common mistake. The same goes for not providing default values for nested objects and then trying to access them in render or other component methods. class UserProfile extends Component { constructor(props) { super(props); this.state = { user: null }; } componentDidMount() { fetch("/api/profile").then(data => { this.setState({ user: data }); }); } render() { return ( <div> <p>User name:</p> <p>{this.state.user.name}</p> // Cannnot read property 'name' of null </div> ); } } A similar error happens with setting value on initial state to an empty array and then trying to access n-th item from it. While the data is being fetched by an API call, the component will be rendered with provided initial state, and trying to access a property on null or undefined element will cause an error. Therefore it is important to have the initial state closely represent the updated state. In our case a correct state initialisation is as follows: class UserProfile extends Component { constructor(props) { super(props); this.state = { user: { name: "" // Define other fields as well } }; } componentDidMount() { fetch("/api/profile").then(data => { this.setState({ user: data }); }); } render() { return ( <div> <p>User name:</p> <p>{this.state.user.name}</p> // Renders without errors </div> ); } } From the UX point of view, it's probably best to display some sort of loader until the data is fetched. setStateis asynchronous Another common mistake is trying to access state value right after setting it. handleChange = count => { this.setState({ count }); this.props.callback(this.state.count); // Old state value }; Setting new value doesn't happen immediately, normally it's done on the next available render, or can be batched to optimise performance. So accessing a state value after setting it might not reflect the latest updates. This issue can be fixed by using an optional second argument to setState, which is a callback function, called after the state has been updated with its latest values. handleChange = count => { this.setState({ count }, () => { this.props.callback(this.state.count); // Updated state value }); }; It's quite different with the hooks though, since the setter function from useState doesn't have a second callback argument akin to that of setState. In this case the official recommended way is to use useEffect hook. const [count, setCount] = useState(0) useEffect(() => { callback(count); // Will be called when the value of count changes }, [count, callback]); const handleChange = value => { setCount(value) }; It should be noted that setState is not asynchronous in a way that it returns a promise. So slapping async/await on it or using then won't work (another one of common mistakes). This issue is related to the one discussed above as it also has to do with the state update being asynchronous. handleChange = count => { this.setState({ count: this.state.count + 1 }); // Relying on current value of the state to update it }; The issue with this approach is that the value of count may not be properly updated at the moment when the new state is being set, which will result in the new state value to be set incorrectly. A correct way here is to use the functional form of setState. increment = () => { this.setState(state => ({ count: state.count + 1 })); // The latest state value is used }; The functional form of setState has a second argument - props at the time the update is applied, which can be used in a similar way as state. The same logic applies to the useState hook, where the setter accepts a function as an argument. const increment = () => { setCount(currentCount => currentCount + 1) }; useEffect This is one of the less popular mistakes, but happens nevertheless. Even though there are completely valid cases for omitting dependency array for useEffect, doing so when its callback modifies the state might cause an infinite loop. useEffect's dependency array Similar to the case above, but more subtle mistake, is tracking objects, arrays or other non-primitive values in the effect hook's dependency array. Consider the following code. const features = ["feature1", "feature2"]; useEffect(() => { // Callback }, [features]); Here when we pass an array as a dependency, React will store only the reference to it and compare it to the previous reference of the array. However, since it is declared inside the component, features array is recreated on every render, meaning that it's reference will be a new one every time, thus not equal to the one tracked by useEffect. Ultimately, the callback function will be run on each render, even if the array hasn't been changed. This is not an issue with primitive values, like strings and numbers, since they are compared by value and not by reference in JavaScript. There are a few ways to fix this. First option is to move variable declaration outside of the component, so it won't be recreated every render. However, in some cases this is not possible, for example if we're tracking props or tracked dependency is a part of the component's state. Another option is to use a custom deep compare hook to properly track the dependency references. An easier solution would be to wrap the value into useMemo hook, which would keep the reference during re-renders. const features = useMemo(() => ["feature1", "feature2"], []); useEffect(() => { // Callback }, [features]); Hopefully this list will help you to avoid the most common React issues and improve understanding of the main pitfalls.....
https://morioh.com/p/4adec6373ec0
CC-MAIN-2020-40
refinedweb
1,170
52.9
>> However, if oak-jcr has access to above table (either through an API or >> through content) I think we are fine. In that case oak-jcr can: >> >> - implement registry wide namespace mapping >> - implement session local namespace mapping >> - register new namespace mappings >> - handle items with as yet unknown namespaces >> - cope with expanded forms and qualified forms >> ... > > It appears the path of least resistance for now would be to > > a) put all the mapping logic into oak-jcr, which includes > > b) just assuming a certain place in the content tree for the mappings. > > Once we have this working we can still decide what needs to be pushed down. > > Makes sense? Yes. Only that I'm not sure whether oak-jcr is the right place. I think there is no reason to make the mk prefixes visible outside oak-core. Michael > > Best regards, Julian
http://mail-archives.apache.org/mod_mbox/jackrabbit-oak-dev/201204.mbox/%3C4F918114.5070305@apache.org%3E
CC-MAIN-2016-22
refinedweb
141
68.81
I am going through the Java EE 6 tutorial and I am trying to understand the difference between stateless and stateful session beans. If stateless session beans do not retain their state in between method calls, why is my program acting the way it is? package mybeans; import javax.ejb.LocalBean; import javax.ejb.Stateless; @LocalBean @Stateless public class MyBean { private int number = 0; public int getNumber() { return number; } public void increment() { this.number++; } } The client import java.io.IOException; import javax.ejb.EJB; import javax.servlet.*; import javax.servlet.http.*; import javax.servlet.annotation.WebServlet; import mybeans.MyBean; import java.io.PrintWriter; @WebServlet(name = "ServletClient", urlPatterns = { "/ServletClient" }) public class ServletClient extends HttpServlet { private static final long serialVersionUID = 1L; @EJB MyBean mybean; protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { PrintWriter out = response.getWriter(); mybean.increment(); out.println(mybean.getNumber()); } } I was expecting getNumber to return 0 every time but it is returning 1 and reloads of the servlet in my browser increase it more. The problem is with my understanding of how stateless session beans work and not with the libraries or application server, of course. Can somebody give me a simple hello world type example of a stateless session bean that behaves differently when you change it to stateful? The important difference is not private member variables, but associating state with a particular user (think “shopping cart”). The stateful piece of stateful session bean is like the session in servlets. Stateful session beans allow your app to still have that session even if there isn’t a web client. When the app server fetches a stateless session bean out of the object pool, it knows that it can be used to satisfy ANY request, because it’s not associated with a particular user. A stateful session bean has to be doled out to the user that got it in the first place, because their shopping cart info should be known only to them. The app server ensures that this is so. Imagine how popular your app would be if you could start shopping and then the app server gave your stateful session bean to me when I came along! So your private data member is indeed “state”, but it’s not “shopping cart”. Try to redo your (very good) example to make it so the incremented variable is associated with a particular user. Increment it, create a new user, and see if they can still see the incremented value. If done correctly, every user should see just their version of the counter. Stateless Session Beans (SLSB) are not tied to one client and there is no guarantee for one client to get the same instance with each method invocation (some containers may create and destroy beans with each method invocation session, this is an implementation-specific decision, but instances are typically pooled – and I don’t mention clustered environments). In other words, although stateless beans may have instance variables, these fields are not specific to one client, so don’t rely on them between remote calls. In contrast, Stateful Session Beans (SFSB) are dedicated to one client for their entire life, there is no swapping or pooling of instances (it may be evicted from memory after passivation to save resources but that’s another story) and maintain conversational state. This means that the instance variables of the bean can keep data relative to the client between method invocations. And this makes possible to have interdependent method calls (changes made by one method affect subsequent method calls). Multi-step processes (a registration process, a shopping cart, a booking process…) are typical use cases for SFSB. One more thing. If you are using SFSB, then you must avoid injecting them into classes that are multithreaded in nature, such as Servlets and JSF managed beans (you don’t want it to be shared by all clients). If you want to use SFSB in your web application, then you need to perform a JNDI lookup and store the returned EJB instance in the HttpSession object for future activity. Something like that: try { InitialContext ctx = new InitialContext(); myStateful = (MyStateful)ctx.lookup("java:comp/env/MyStatefulBean"); session.setAttribute("my_stateful", myStateful); } catch (Exception e) { // exception handling } Stateless and stateful in this context don’t mean quite what you might expect. Statefulness with EJBs refers to what I call conversational state. The classic example is a flight booking. If it consists of three steps: - Reserve seat - Charge credit card - Issue Ticket Imagine each of those is a method call to a session bean. A stateful session bean can maintain this kind of conversation so it remembers what happens between calls. Stateless session beans don’t have such capacity for conversational state. Global variables inside a session bean (stateless or stateful) are something else entirely. Stateful session beans will have a pool of beans created (since a bean can only be used in one conversation at a time) whereas stateless sesion beans will often only have one instance, which will make the global variable works, but I don’t think this is necessarily guaranteed. This thing happen because the container only has one bean instance in the pool that is being reused for all calls. If you run the clients in parallel you will see a different result because the container will create more bean instances in the pool. The major differences between the two major types of session beans are: Stateless Beans - Stateless Session Beans are the ones which have no conversational state with the client which has called its methods. For this reason they can create a pool of objects which can be used to interact with multiple clients. - Performance wise stateless beans are better since they don’t have states per client. - They can handle multiple requests from multiple clients in parallel. Stateful Beans - Stateful session beans can maintain the conversational state with multiple clients at a time and the task is not shared between the clients. - After the session is completed the state is not retained. - The container can serialize and store the state as a stale state for future use. This is done to save resources of the application server and to support bean failures. It has good answers. I would like to add small answer. Stateless Bean should not used to hold any client data. It should be used to “to model actions or processes that can be done in one shot”.
https://exceptionshub.com/stateless-and-stateful-enterprise-java-beans.html
CC-MAIN-2021-21
refinedweb
1,073
61.67
If I create a directory /opt/rt2/local/WebRT/html/reports, and put the reports.html and statusreport.html files from contrib in it, Mason (?) finds them and processes them, but seems to return them as text/text rather than text/html, as my browser displays the HTML. If I create /opt/rt2/WebRT/html/reports, but leave it empty (and the actual reports.html under local…, it works. If I put the html files in …local/WebRT/html, it works. Is this expected? Why? The reason I want this is to a) not have local stuff intermixed with the RT2 “product” b) not have the namespace intermixed (which is why I don’t like the idea of having the reports in local/WebRT/html) Thanks, Steve
https://forum.bestpractical.com/t/weirdness-using-local-webrt-html/6607
CC-MAIN-2020-24
refinedweb
127
67.76
#include <iostream> using namespace std; int sum(int n) { if (n <= 0) return 0; else return n + sum(n-1); } int main() { cout << "Range num? "; int num; cin >> num; cout << sum(num) << endl; return 0; } My Pep/8 try at it: br main RetVal: .EQUATE 6 num: .EQUATE 2 n: .EQUATE 0 sum: SUBSP 4,i if: LDA n,s BRLE else BR endIf else: LDA n,s ADDA sum,s STA RetVal,s endif: ADDSP 2,i main: STRO msg,d DECI num,s LDA sum,s DECO num,d CHARO '\n',i STOP msg: .ASCII "Range num?" .END You must use equates to access the stack and follow the call to the function as discussed in the book (pass the parameter, return address, return a value and so on). There are NO global variables in the resulting code (except a global message of "Range num? "). It must be able to do sum a range greater than 2. I can't seem to figure out mainly the endIf and the sum(num). if there is anythign else wrong that I could fix please let me know! Thanks!
http://www.dreamincode.net/forums/topic/301145-c-to-pep8-conversion/
CC-MAIN-2017-04
refinedweb
188
89.58
Recently I'll run through what is and isn't supported, and some ways you might want to use it.. Getting Started The changes are targeted for the 2.3.0 release which means the only way of working with them currently (official npm package due soon) is to checkout the project from GitHub and build it yourself, luckily this is a relatively painless experience if you have git setup - git clone git://github.com/SteveSanderson/knockout.git cd knockout ./build/build.sh The output files are located in the build/output folder, knockout-latest-debug.js is the one you'll want for Node, unless you have a strange compulsion for minified code! Let's quickly put together a hello world example based on the example from the Knockout website - var ko = require('build/output/knockout-latest.debug.js'); // Here's my data model var ViewModel = function(first, last) { this.firstName = ko.observable(first); this.lastName = ko.observable(last); this.fullName = ko.computed(function() { // Knockout tracks dependencies automatically. It knows that fullName depends on firstName and lastName, because these get called when evaluating fullName. return this.firstName() + " " + this.lastName(); }, this); }; // create an instance of the ViewModel var vm = new ViewModel('Planet', 'Earth'); // subscribe to fullName changes var subscription = vm.fullName.subscribe(function(value) { console.log(value); }); // log the current value console.log(vm.fullName()); // trigger a change vm.lastName('Mars'); // dispose of the subscription subscription.dispose(); Sticking the above in a file (hello-world.js) in the root of the knockout project and then running it through Node (node hello-world.js) gives us - Planet Earth Planet Mars. What works? What doesn't?? Faster view model testing. Potential to integrate with non-browser, JavaScript powered UIs. Crazy, pointless, Node-based view model fun! Let's say we create a view model like this - var viewModel = { ticks: ko.observable(0), tick: function() { this.ticks(this.ticks() + 1); }, bars: [] }; function createBar(index, count) { return ko.computed(function() { var a = viewModel.ticks() % ((count - 1) * 2); var b = Math.abs(a - count + 1); var c = Math.abs(index - b); var d = c / count; var e = 1 - d; return (e * e * e).toFixed(3); }); } for (var index = 0, count = 6; index < count; index++) { viewModel.bars[index] = createBar(index, count); } Then combine it with an npm module called johnny-five and a bit of view binding code - var five = require("johnny-five"); new five.Board().on('ready', function() { [3, 5, 6, 9, 10, 11].forEach(function(pin, index) { var led = new five.Led(pin); viewModel.bars[index].subscribe(function(value) { led.brightness(Math.round(value * 255)); }); }); setInterval(function() { viewModel.tick(); }, 100); }); Finally, attach an arduino and what do you get? The worlds first KnightRider LED effect powered by a Knockout view model (I think)! If anyone fancies making their own, the code is all available on GitHub
https://blog.scottlogic.com/2013/03/08/knockout-js-node-js-what-js.html
CC-MAIN-2021-21
refinedweb
472
52.15
Pair devices Important APIs Some devices need to be paired before they can be used. The Windows.Devices.Enumeration namespace supports three different ways to pair devices. - Automatic pairing - Basic pairing - Custom pairing Tip Some devices do not need to be paired in order to be used. This is covered under the section on automatic pairing. Automatic pairing Sometimes you want to use a device in your application, but do not care whether or not the device is paired. You simply want to be able to use the functionality associated with a device. For example, if your app wants to simply capture an image from a webcam, you are not necessarily interested in the device itself, just the image capture. If there are device APIs available for the device you are interested in, this scenario would fall under automatic pairing. In this case, you simply use the APIs associated with the device, making the calls as necessary and trusting the system to handle any pairing that might be necessary. Some devices do not need to be paired in order for you to use their functionality. If the device does need to be paired, then the device APIs will handle the pairing action behind the scenes so you do not need to integrate that functionality into your app. Your app will have no knowledge about whether or not a given device is paired or needs to be, but you will still be able to access the device and use its functionality. Basic pairing Basic pairing is when your application uses the Windows.Devices.Enumeration APIs in order to attempt to pair the device. In this scenario, you are letting Windows attempt the pairing process and handle it. If any user interaction is necessary, it will be handled by Windows. You would use basic pairing if you need to pair with a device and there is not a relevant device API that will attempt automatic pairing. You just want to be able to use the device and need to pair with it first. In order to attempt basic pairing, you first need to obtain the DeviceInformation object for the device you are interested in. Once you receive that object, you will interact with the DeviceInformation.Pairing property, which is a DeviceInformationPairing object. To attempt to pair, simply call DeviceInformationPairing.PairAsync. You will need to await the result in order to give your app time to attempt to complete the pairing action. The result of the pairing action will be returned, and as long as no errors are returned, the device will be paired. If you are using basic pairing, you also have access to additional information about the pairing status of the device. For example you know the pairing status (IsPaired) and whether the device can pair (CanPair). Both of these are properties of the DeviceInformationPairing object. If you are using automatic pairing, you might not have access to this information unless you obtain the relevant DeviceInformation objects. Custom pairing Custom pairing enables your app to participate in the pairing process. This allows your app to specify the DevicePairingKinds that are supported for the pairing process. You will also be responsible for creating your own user interface to interact with the user as needed. Use custom pairing when you want your app to have a little more influence over how the pairing process proceeds or to display your own pairing user interface. In order to implement custom pairing, you will need to obtain the DeviceInformation object for the device you are interested in, just like with basic pairing. However, the specific property your are interested in is DeviceInformation.Pairing.Custom. This will give you a DeviceInformationCustomPairing object. All of the DeviceInformationCustomPairing.PairAsync methods require you to include a DevicePairingKinds parameter. This indicates the actions that the user will need to take in order to attempt to pair the device. See the DevicePairingKinds reference page for more information about the different kinds and what actions the user will need to take. Just like with basic pairing, you will need to await the result in order to give your app time to attempt to complete the pairing action. The result of the pairing action will be returned, and as long as no errors are returned, the device will be paired. To support custom pairing, you will need to create a handler for the PairingRequested event. This handler needs to make sure to account for all the different DevicePairingKinds that might be used in a custom pairing scenario. The appropriate action to take will depend on the DevicePairingKinds provided as part of the event arguments. It is important to be aware that custom pairing is always a system-level operation. Because of this, when you are operating on Desktop or Windows Phone, a system dialog will always be shown to the user when pairing is going to happen. This is because both of those platforms posses a user experience that requires user consent. Since that dialog is automatically generated, you will not need to create your own dialog when you are opting for a DevicePairingKinds of ConfirmOnly when operating on these platforms. For the other DevicePairingKinds, you will need to perform some special handling depending on the specific DevicePairingKinds value. See the sample for examples of how to handle custom pairing for different DevicePairingKinds values. Unpairing Unpairing a device is only relevant in the basic or custom pairing scenarios described above. If you are using automatic pairing, your app remains oblivious to the pairing status of the device and there is no need to unpair it. If you do choose to unpair a device, the process is identical whether you implement basic or custom pairing. This is because there is no need to provide additional information or interact in the unpairing process. The first step to unpairing a device is obtaining the DeviceInformation object for the device that you want to unpair. Then you need to retrieve the DeviceInformation.Pairing property and call DeviceInformationPairing.UnpairAsync. Just like with pairing, you will want to await the result. The result of the unpairing action will be returned, and as long as no errors are returned, the device will be unpaired. Sample To download a sample showing how to use the Windows.Devices.Enumeration APIs, click here.
https://docs.microsoft.com/en-us/windows/uwp/devices-sensors/pair-devices
CC-MAIN-2018-30
refinedweb
1,049
54.22
Users, Permissions and Multitenant Sites In my last article, I started to look at multitenant Web applications. These are applications that run a single time, but that can be retrieved via a variety of hostnames. As I explained in that article, even a simple application can be made multitenant by having it check the hostname used to connect to the HTTP server, and then by displaying a different set of content based on that. For a simple set of sites, that technique can work well. But if you are working on a multitenant system, you more likely will need a more sophisticated set of techniques. For example, I recently have been working on a set of sites that help people practice their language skills. Each site uses the same software but displays a different interface, as well as (obviously) a different set of words. Similarly, one of my clients has long operated a set of several dozen geographically targeted sites. Each site uses the same software and database, but appears to the outside world to be completely separate. Yet another reason to use a multitenant architecture is if you allow users to create their own sites—and, perhaps, add users to those private sites. In this article, I describe how to set up all of the above types of sites. I hope you will see that creating such a multitenant system doesn't have to be too complex, and that, on the contrary, it can be a relatively easy way to provide a single software service to a variety of audiences. Identifying the Site In my last article, I explained how to modify /etc/passwd such that more than one hostname would be associated with the same IP address. Every multitenant site uses this same idea. A limited set of IP addresses (and sometimes only a single IP address) can be mapped to a larger number of hostnames and/or domain names. When a request comes in, the application first checks to see which site has been requested, and then decides what to do based on it. The examples in last month's article used Sinatra, a lightweight framework for Web development. It's true that you can do sophisticated things with Sinatra, but when it comes to working with databases and large-scale projects, I prefer to use Ruby on Rails. So here I'm using Rails, along with a back end in PostgreSQL. In order to do that, you first need to create a simple Rails application: rails new -d postgresql multiatf Then create a "multiatf" user in your PostgreSQL installation: createuser multiatf Finally, go into the multiatf directory, and create the database: rake db:create With this in place, you now have a working (if trivially simple) Rails application. Make sure you still have the following two lines in your /etc/hosts file: 127.0.0.1 atf1 127.0.0.1 atf2 And when you start up the Rails application: rails s you can go to or, and you should see the same results—namely, the basic "hello" that you get from a Rails application before you have done anything. The next step then is to create a default controller, which will provide actual content for your users. You can do this by saying: rails g controller welcome Now that you have a "welcome" controller, you should uncomment the appropriate route in config/routes.rb: root 'welcome#index' If you start your server again and go to, you'll now get an error message, because Rails knows to go to the "welcome" controller and invoke the "index" action, but no such action exists. So, you'll have to go into your controller and add an action: def index render text: "Hello!" end With that in place, going to your home page gives you the text. So far, that's not very exciting, and it doesn't add to what I explored in my last article. You can, of course, take advantage of the fact that your "index" method is rendering text, and that you can interpolate values into your text dynamically: def index render text: "Hello, visitor to #{request.host}!" end But again, this is not what you're likely to want. You will want to use the hostname in multiple places in your application, which means that you'll repeatedly end up calling "request.host" in your application. A better solution is to assign a @hostname variable in a before_action declaration, which will ensure that it takes place for everyone in the system. You could create this "before" filter in your welcome controller, but given that this is something you'll want for all controllers and all actions, I think it would be wiser to put it in the application controller. Thus, you should open app/controllers/application_controller.rb, and add the following: before_action :get_hostname def get_hostname @hostname = request.host end Then, in your welcome controller, you can change the "index" action to be: def index render text: "Hello, visitor to #{@hostname}!" end Sure enough, your hostname now will be available as @hostname and can be used anywhere on your site. Moving to the Database In most cases, you'll want to move beyond this simple scheme. In order to do that, you should create a "hosts" table in the database. The idea is that the "hosts" table will contain a list of hostnames and IDs. It also might contain additional configuration information (I discuss that below). But for now, you can just add a new resource to the system. I even would suggest using the built-in scaffolding mechanism that Rails provides: rails g scaffold hosts name:string Why use a scaffold? I know that it's very popular among Rails developers to hate scaffolds, but I actually love them when I start a simple project. True, I'll eventually need to remove and rewrite parts, but I like being able to move ahead quickly and being able to poke and prod at my application from the very first moments. Creating a scaffold in Rails means creating a resource (that is, a model, a controller that handles the seven basic RESTful actions and views for each of them), as well as the basic tests needed to ensure that the actions work correctly. Now, it's true that on a production system, you probably won't want to allow anyone and everyone with an Internet connection to create and modify existing hosts. And indeed, you'll fix this in a little bit. But for now, this is a good and easy way to set things up. You will need to run the new migration that was created: rake db:migrate And then you will want to add your two sites into the database. One way to do this is to modify db/seeds.rb, which contains the initial data that you'll want in the database. You can use plain-old Active Record method calls in there, such as: Host.create([{name: 'atf1'}, {name: 'atf2'}]) Before you add the seeded data, make sure the model will enforce some constraints. For example, in app/models/host.rb, I add the following: validates :name, {:uniqueness => true} This ensures that each hostname will appear only once in the "hosts" table. Moreover, it ensures that when you run rake db:seed, only new hosts will be added; errors (including attempts to enter the same data twice) will be ignored. With the above in place, you can add the seeded data: rake db:seed Now, you should have two records in your "hosts" table: [local]/multiatf_development=# select name from hosts; -------- | name | -------- | atf1 | -------- | atf2 | -------- (2 rows) With this in place, you now can change your application controller: before_action :get_host def get_host @requested_host = Host.where(name: request.host).first if @requested_host.nil? render text: "No such host '#{request.host}'.", status: 500 return false end end (By the way, I use @requested_host here, so as not to collide with the @host variable that will be set in hosts_controller.) @requested_host is no longer a string, but rather an object. It, like @requested_host before, is an instance variable set in a before filter, so it is available in all of your controllers and views. Notice that it is now potentially possible for someone to access your site via a hostname that is not in your "hosts" table. If and when that happens, @requested_host will be nil, and you give an appropriate error message. This also means that you now have to change your "welcome" controller, ever so slightly: def index render text: "Hello, visitor to #{@requested_host.name}!" end This change, from the string @requested_host to the object @requested_host, is about much more than just textual strings. For one, you now can restrict access to your site, such that only those hosts that are active can now be seen. For example, let's add a new boolean column, is_active, to the "hosts" table: rails g migration add_is_active_to_hosts On my machine, I then edit the new migration: class AddIsActiveToHosts < ActiveRecord::Migration def change add_column :hosts, :is_active, :boolean, default: true, ↪null: false end end
http://www.linuxjournal.com/content/users-permissions-and-multitenant-sites?quicktabs_1=0
CC-MAIN-2017-43
refinedweb
1,516
59.13
Difference between revisions of "TTML/changeProposal015" Revision as of 17:32, 21 March 2014 Style.CSS - OPEN - Owner: Glenn Adams. - Started: 14/06/13 Contents - 1 Style.CSS - OPEN - 2 Issues Addressed - 3 Summary and Change details - 3.1 margin - 3.2 padding - 3.3 box-decoration-break - 3.4 border - 3.5 line stacking strategy - 3.6 region anchor points - 3.7 text outline - 3.8 text shadow - 3.9 shrink fit - 3.10 font face rule - 3.11 multiple row alignment (flex box in CSS mapping) - 4 Dependencies on other packages - 5 Edits to be applied - 6 Edits applied - 7 Impact - 8 References Issues Addressed - ISSUE-168 - ISSUE-176 - ISSUE-193 - ISSUE-20 - ISSUE-209 - ISSUE-21 - ISSUE-213 - ISSUE-234 - ISSUE-235 - ISSUE-285 - ISSUE-273 - ISSUE-284 - ISSUE-286. Style attribute tts:linePadding applies to block-level elements tt:body, tt:div and tt:p. Has no effect when declared solely on a tt:span. Value permitted is a single <length>. Percentages are relative to the width of the region. Length values must be zero or positive. Animatable: discrete. Every line area created to contain the text contained in a paragraph with a non-zero linePadding will be inset at the start and end by the specified length. The background color at the start and end of each line will be extended into this inset space. PAL: Are TTML 1 processors expected to ignore TTML 2 attributes that are in the same namespace at TTML 1? Is this the case in practice? TTML2 example The tts:linePadding style is illustrated by the following example: <p tts: <span tts: Left and right padding broken across a line </span> </p> CSS mapping Use of linePadding results in the properties padding-left, padding-right and box-decoration-break: clone being set on a span around the text, with an anonymous span being introduced if necessary. The previous example thus results in the following example code when mapped to HTML5/CSS: <p style="color: white;"> <span style="background-color: black; padding-left: 0.5em; padding-right: 0.5em; box-decoration-break: clone;"> Left and right padding broken across a line </span> </p> This produces an output similar to: Left and right padding broken across a line NB the wiki source for this example differs from the mapping provided above because browser support for box-decoration-break: clone is not yet available.> </styling>> ... - Need to define mappings for other combinations of textAlign and multiRowAlign to get desired behaviour.
http://www.w3.org/wiki/index.php?title=TTML/changeProposal015&curid=7078&diff=72505&oldid=72474
CC-MAIN-2015-40
refinedweb
417
58.89
Hi, I’m trying to do a pretty simple keyword. Its just to make my scripts a little shorter. when I am creating it I’m told that there is an error on the first bracket on the first line that is stopping me from using this keyword. public class selectItem(string group,string type) WebUI.delay(1) WebUI.selectOptionByLabel(findTestObject('Functions/Item Type Location/Info Group'), group, false) WebUI.delay(2) WebUI.selectOptionByLabel(findTestObject('Functions/Item Type Location/Info Type'), type, false) WebUI.delay(1) } Does anyone happen to know what this problem is and how I might be able to fix it. Thank you in advance
https://forum.katalon.com/t/help-creating-a-custom-keyword/11418
CC-MAIN-2022-33
refinedweb
108
51.85
If you’ve ever glanced at Azure Functions and F#, you might think they were made for each other. And yet if you want to create a new Azure Function project in Visual Studio, C# is apparently your only option. Maybe someday, Visual Studio will include support for Azure Functions in F#, but for now it’s possible to get there by adapting the C# Azure Function template. After all, F# is a first-class language on the .NET CLR, and it’s all the same once it’s compiled anyway. It’s important to note that I’m talking about compiled F# functions—not the fsx script that you get when creating a function through the Azure web portal. Project Files Starting a new Azure Function project in Visual Studio 2017 generates these five files: - FunctionApp1.csproj - Function1.cs - host.json - local.settings.json - .gitignore The C# project file: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net461</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.13" /> <> We can convert the C# project file into an F# project file just by changing a few lines: <Project Sdk="Microsoft.NET.Sdk"> <PropertyGroup> <TargetFramework>net461</TargetFramework> </PropertyGroup> <ItemGroup> <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.13" /> </ItemGroup> <ItemGroup> <Compile Include="Function1.fs" /> </ItemGroup> <ItemGroup> <Content Include="host.json"> <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> </Content> <Content Include="local.settings.json"> <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </Content> </ItemGroup> </Project> I’ve removed the reference to the Microsoft.CSharp assembly and added a section that lists F# source files (because in F# order matters), including a new Function1.fs. For some reason, the lines in the C# project file that copy host.json and local.settings.json don’t work in the F# project (those files are simply missing from the output directory). In order to include them, I had to change the item type from None to Content. Unfortunately, this causes these files to be required, and the build will fail if they’re gone. That’s not a problem for host.json, but the .gitignore from the C# template includes local.settings.json! The easiest thing to do is just remove that rule from .gitignore and make sure local.settings.json doesn’t contain any secrets. Source Files Now onto the source files. For reference, here’s what a C# timer-triggered Azure function template looks like: using System; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Host; namespace FunctionApp1 { public static class Function1 { [FunctionName("Function1")] public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, TraceWriter log) { log.Info($"C# Timer trigger function executed at: {DateTime.Now}"); } } } Porting this to F# is pretty straightforward: namespace FunctionApp1 open System open Microsoft.Azure.WebJobs open Microsoft.Azure.WebJobs.Host module Function1 = [<FunctionName("Function1")>] let Run ([<TimerTrigger("0 */5 * * * *")>] myTimer: TimerInfo) (log: TraceWriter) = log.Info(sprintf "F# Timer trigger function executed at: %s" (DateTime.Now.ToString())) The hard part is done! Those other files ( host.json and local.settings.json) are used by the Azure Function host, so no changes are necessary there. All that’s left is to build the project and then either run it locally or publish it to Azure. Publishing In Visual Studio, a C# Azure Function is endowed with a “Publish…” context menu item. Since our F# project amounts to a plain F# library, we don’t get the wizard. But this is hardly a problem since the command-line tools can handle publishing just fine. First, install the Azure Functions Core Tools. Then you can either run your function locally, or publish it to Azure and run it there. To run your function locally: - Build the project (using Visual Studio, or dotnet buildfrom your project directory). - Switch to the build artifacts directory (e.g. bin\Debug\net461). - Run func host start. To publish to Azure: - Create an Azure Function resource using the Azure web portal or azcommand. - From your project directory, run dotnet publish -c Release. - Switch to the publish output directory (e.g. bin\Release\net461\publish). - Run func azure functionapp publish <function-app-name>. F# Azure Function Template To get started quickly, here is a simple template for an Azure timer function (zip download) using the files I’ve referenced in this post. But since Microsoft is still actively developing Azure’s Function infrastructure, I look forward to better tooling support soon! By commenting below, you agree to the terms and conditions outlined in our (linked) Privacy Policy1 Comment The tooling for F# support in Azure Functions should improve soon, including the templates for Func CLI, Visual Studio and Code. I wouldn’t recommend removing local.settings.json from .gitignore. It has to exist for pretty much any Function App except HTTP-only anyway, and it’s very likely that people will start putting secrets inside (storage / service bus / cosmos db connection string etc). Another intro to precompiled F# Functions can be found at my blog with some examples in
https://spin.atomicobject.com/2018/05/17/azure-functions-f-sharp/?utm_source=twitter-ao&utm_medium=social&utm_campaign=azure-functions-f-sharp
CC-MAIN-2018-43
refinedweb
827
50.73
Why settle for the standard home icon on your browser? If your home button brings you to hackaday.com, why not make the icon reflect that destination? This hack is quick and simple. We’ll take you through it using Firefox 3 and the default theme with standard sized icons. We start by using our favorite graphics program to make an icon that is 24×24 pixels, and then saved is as a PNG file without compression. To use the new image as a home icon, we edited a Cascading Style Sheet which is stored in the file classic.jar. On Ubuntu 9.04, this was found in /usr/lib/firefox-3.0.13/chrome/ but the file will be located elsewhere on other operating systems. We made a backup of classic.jar and then unzipped the contents (JARs are basically the same as zip files). In the unzipped archive, we navigated to the folder /skin/classic/browser/ and opened browser.css using a text editor. This is where the magic happens and although we only changed the home button icon, there’s a lot more possibilities you should look into. We changed the #home-button entry so that the image URL pointed to our new file using the format. Here’s what ours looked like after the change: #home-button { list-style-image: url(""); } We saved this file, then zipped up the file structure back into a file called classic.jar and copied it to the same location we originally found it. A quick restart of Firefox showed the new icon. Let us know your other Firefox tweaks in the comments! Update: [Colby] pointed out that this type of CSS change should be made in the “userChrome.css” file. He’s right and here’s how: Find your user profile directory and go to the “chrome” sub-directory inside of it. Create the file “userChrome.css”; there may already be an example file that you can just rename. The important bit of this CSS file is the namespace line that tells Firefox how to use it. Here is what ours looks like: @namespace url(""); /* set default namespace to XUL */ #home-button { list-style-image: url("") !important; } In order to get Firefox to listen to our new icon we had to had the “!important” keyword. Now just restart firefox and bask in the glory of your new home icon. 14 thoughts on “Firefox CSS hack: change navigation icons” Shouldn’t this be done as a theme, or is this theme-independent? Alternatively, just use userChrome.css, like you’re supposed to. Yep use UserChrome.css is the more sensible option so that the changes are saved when upgrading firefox. Thought id add the little tip for live bookmarks that don’t change their icon: .bookmark-item[container=”true”][label=”Ars”] { list-style-image: url(‘’) !important; -moz-image-region: rect(0px 16px 16px 0px) !important; } Thanks for the tip Colby. You’re right and I’ve added an update to reflect it. I’m no expert, but could this be setup to automatically retrieve an icon? Most of the bookmarks stored in Firefox on my Windows box automatically have an icon assigned that I assume is retrieved from the corresponding website. id assume so, would it work just to find out the variable name of the home page and just add .favicon afterwards? It’s okay to abstain from posting if there’s nothing to post. What is the theme? crap-a-day thank you very much for sharings…i really enjoy all of these posts… thank! would this work with .ico icon files instead of .png images?? wow its awesome. by the way you can even add more add-ons.To know it you should visit nice hack =)
http://hackaday.com/2009/09/06/firefox-css-hack-change-navigation-icons/?like=1&source=post_flair&_wpnonce=104f7cb3e5
CC-MAIN-2016-36
refinedweb
628
76.52
WebObjects/Web Applications/Deployment/Windows< WebObjects | Web Applications/Deployment Contents OverviewEdit (rev 1.3; 2002-08-29, see full revision list) The latest version of this document can be found at. Legal stuff:Edit discusses installation of WebObjects 5.1 on Windows. It started as a email-help to install WebObjects 5.x on WinNT and has evolved since. It covers most gotchas for installation and configuration of both Development and Deployment of WebObjects 5.1 on the following versions of Windows: - Windows NT, both Workstation and Server - Windows 2000, both Professional or Server - Windows XP Professional The main course of this how-to is directed to a development installation on Win2000 Server. Differences to other configurations will be described in short throughout the text. This still is much harder than Linux/WO Deployment, where you have exactly one configuration. This how-to is not about deployment issues that occur after WebObjects is installed. It ends with a start of Monitor. There are, however, a lot of issues that can be solved be installing WebObjects in one of the right ways :-) , so you may read this how-to anyhow. If you have found an error or want to make suggestions, improvements etc, you can contact me here. Before you startEdit To minimze delays, have the following things at hand: - a Java2-SDK from Sun () - WebObjects 5.1 CD (get it at) - a valid serial number (from either WO 5.0 or WO 5.1) - the latest update for WebObjects 5.1; the complete patch list lives at - some patches to complete the how-to (download them here) - WebObjects 4.x CD (only needed if you have installed WebObjects prior to 5.0) If you plan to install on a new, empty machine, you additionally need: - Windows CD (either WinNT, Win2000 or WinXP) - Driver CDs or updated drivers from the internet (if you have to use special hardware) - Windows NT Option Pack CD (if you plan to install IIS on WinNT) - Windows 2000/XP CD (if you want to install the recovery console) - the latest service pack/rollup package and hotfixes for your version of Windows (too numerous to count, start search at) Throughout this how-to you will see the term 'open a shell' several times. This means you should go to the 'Start' menu, click on 'Run...', enter cmd and press the Return key. A new, empty window with a command prompt should open. Of course, if there is already an open shell, just use that instead... Setup Windows and the web serverEdit This how-to follows the installation of Windows at the example of Windows 2000 server but should be usable for all versions of Windows mentioned above. If something is really different, there will be some short notes in the text. Installation is mostly self-explanatory, I will only mention the steps that require some caution. If you plan to do real world deployment on Windows, forget about the non Server variants of Windows. Because of Windows license restrictions, the Workstation/Professional versions allow only 10 concurrent connections to the machine - files opened over network shares count as connections, too. Setting up a deployment will of course work but performance will be very bad, you should consider buying a Windows Server version or use Linux/Apache as the web server and deploy only applications on Windows. First comes what I call the 'Basic Setup': insert Windows CD and boot. If you have boot diskettes, a network folder or whatever as source for the installation files, use that. On some OEM versions or with unattended setup, only some or no input will be required - just skip the appropriate sections then. <nt4 server only> If setup asks for the role of the server, select 'Single-Server'. Don't select one of the 'Domain Controller' roles, unless you know what you are doing. During network setup, you can choose to install Internet Information Server (IIS). Don't do this, because it is an outdated version. Be sure to _uncheck_ that checkbox! Don't use DHCP! See 'Network settings' below for an explanation. </nt4 server only> Install only World Wide Web Server, Internet Information Services Snap-In and all required files. Don't install FTP, NNTP, SMTP unless you really need it. If not configured correctly, these services provide execellent hooks for an attacker. If you have more than operating system installed, disable daylight saving for all but the first one or you will have much fun twice a year... Choose custom network settings and enter IP/address, subnet mask, gateway and DNS server settings. Using DHCP on servers is not recommended because some services rely on a permanent IP address. If you have to use DHCP (it's a cool service, after all), configure the DHCP server so that your WebObjects machines always get the same address (address reservation, IP-to-MAC address mapping). To be able to view other computers around, the WORKGROUP name entered must match the workgroup name of the other systems. If you do a domain based setup, there are no problems, just add the machine to your domain. Win2000/XP has the embarrassing habit of releasing a network adaptor's configuration when it has no connection to the network. So always make sure the network cable is connected to an active station (other machine or hub/switch) before starting any services, especially while booting. If this is not the case, some services will run on localhost (127.0.0.1) and you will experience weird errors when you try to connect. <win2000 server only> When you first log in to Windows, you will see the 'Configure Your Server' dialog. Unless you know what you are doing I suggest you choose 'I will configure this server later.' and click 'Next'. On the next page, uncheck 'Show this screen at startup' and close the dialog window. If you want, you can come back later via 'Start, Programs, Administrative Tools, Configure Your Server'. </win2000 server only> After basic Windows SetupEdit If you have to install additional drivers (e.g. graphics card) do so now. Otherwise, you are done with basic Windows setup. Here are some things you might want to do now: Single-CPU supportEdit If you have a multiprocessor system, you should be able to start Windows with only one processor enabled (Single CPU HAL), This can be incredibly helpful if you run into hardware issues later and have to do an emergency boot on a motherboard that supports only one CPU. Also, some applications don't run on multiprocessor systems. To get single CPU support, add a new entry to c:\boot.ini with a '/onecpu' switch at the end. Don't copy-and-paste from this how-to, as disk and partition numbers may vary for your installation, just duplicate the entry that is already there. [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Server" /fastdetect multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Server (one CPU)" /fastdetect /onecpu You might have to remove write protection from c:\boot.ini to be able to add entries. Open a shell and enter this command: attrib -r -s c:\boot.ini There is no need to add write protection afterwards. Recovery consoleEdit Another helpful tool is the Windows recovery console, especially when Windows crashes during startup (bluescreens, bad drivers, bad config, cannot boot after adding a new hard disk etc.). The Win2000 version of the recovery console can be used on all versions of Windows while the WinXP version cannot be used on a machine that runs WinNT only. To add it, insert your Windows2000/XP CD, open a shell and enter the following: cd /d <cdrom>:\i386 winnt32 /cmdcons A new entry is added to c:\boot.ini that can be choosen from the list of installed operating systems. Install Windows Service PackEdit As of the time of this writing, no service pack is available for WinXP. Install hotfixes as needed. For Win2000, install service pack 2 and all hotfixes needed. For NT4, install service pack 6, and then the past-sp6 rollup package. This should give you 128 bit encryption strength. Install IIS on WinNT from the Windows Option Pack CDEdit Insert the Windows NT Option Pack CD and install Internet Explorer 4.01 (or a newer version). After this is done: Launch <cdrom>:\setup.exe Setup warns that the option pack was not tested with service packs newer than SP4. Click 'Yes' to proceed anyway. Choose 'Custom' as the installation type. On the next window, uncheck _all_ components, answering 'Yes' to possible warnings. Then go to 'Internet Information Server (IIS)' and click on 'Show Subcomponents...' Move to the end of the new selection list and check 'World Wide Web Server'. You might want to check 'Documentation' (at the beginning of the list). All dependent software components are selected automatically. To avoid security issues, don't install the following unless you need it and know what you are doing: - FTP, SMTP, NNTP servers - Index Server - FrontPage 98 Server Extensions - Certificate Server - Microsoft Script Debugger - Windows Scripting Host If setup asks you to specify to specify a folder for the WWW service, you can use the suggestion 'c:\inetpub\wwwroot'. For Microsoft Transaction Server, just click 'Next' twice. Option Pack Setup should eventually complete. After that, you may have to re-install all service packs/updates for WinNT as some files may have been overwritten by Option Pack Setup. Securing the Windows/IIS installationEdit Running NT / IIS out of the box is quite dangerous, so I suggest you take one of security how-to's that are available in this area. A secure IIS installation is not needed for simple in-house testing but it should be considered harakiri if you connect an unprotected machine to the Internet. You have been warned. Here are some links that might help: (then dive in from there) (securing WinNT) (securing Win2000) (securing IIS on NT) (securing IIS) For more information, look for 'hardening+iis+against+attack' at Google. Prepare Windows for WebObjects installationEdit Now that you have a running Windows box - hopefully secure :), we can prepare it for WebObjects installation. The stuff in this section works for both WebObjects 5.0 and 5.1. Uninstall a previous version of WebObjectsEdit In case you are using an existing machine with an older version of WebObjects, you have to uninstall it before installing WebObjects 5.1. On the WebObjects 4.5 CD, under <cdrom>:\development\Windows\Uninstall you will find several directories: - UninstallWO3.51 - UninstallWO4.0 - UninstallWO4.5 Use the appropriate one for your version. To uninstall WebObjects 5.0, use 'Start, Control panel, Add/Remove Programs'. Replacing the shellEdit The shell built in to WinNT does not work with WebObjects at all. For the Win2000 version, there are some issues if the command line gets longer than 2000 characters due to heavy use of frameworks. Because of this, you will have to replace the shell. The good news is that you can use each version of CMD.EXE on every version of Windows, so the basic idea is to get a CMD.EXE from a WinXP installation and use it to replace your default shell - unless you run WinXP :). If you do not have a Win2000/XP CD, you can at least download Win2000 Service Pack 2 (or Windows XP service pack 1 as it becomes available) and extract CMD.EXE from there. There should be no legal issues since you can download service packs freely. Before attempting to replace the shell, be sure to cut access to the installation files or Windows File Protection will restore the old version within a few seconds. In case you installed from a CD, remove the Windows installation CD. If you installed over a network, unplug the network cable before starting the replacement. Windows File Protection does not exist on WinNT, you don't have to worry there. Within the additional files package (download it here), there is a script that does the replacement for you. Feel free to get your own version of CMD.EXE if you don't feel comfortable with the supplied one. - Close all open shell windows, as CMD.EXE cannot be replaced if it is running. - Double-click on CMD.EXE that came with the script; this opens a new (local) shell that can be used to replace the default shell. - In the open window, enter: shellReplace - The script will backup the original version of CMD.EXE to CMD.EXE.ORIGINAL and then replace it with the new version. A second later, Windows File Protection will open a dialog, requiring you to insert the Windows installation CD. Press 'Cancel' here and answer 'Yes' in the next windows that pops up. Now close the shell and open a new one, via 'Start, Run..., enter cmd and press Return'. Look what it says on the first line: With the original versions, the first line in a shell should look like this: - WinNT: Microsoft(R) Windows NT(TM) - Win2000: Microsoft Windows 2000 [Version 5.00.2195] - WinXP: Microsoft Windows XP [Version 5.1.2600] When you get this instead: Microsoft Windows XP [Version 4.0.1381] it means that you actually run CMD.EXE from WinXP on a WinNT 4.0 installation (4.0.1381). Other possible combinations are left as an excercise to the reader. :-)it is the good is great and all of slove Set up Dr. WatsonEdit Now you should change the configuration of Dr.Watson, Windows' integrated crash logger and debugger, so it runs without user interaction. Open a shell and enter: drwtsn32 -i This installs Dr. Watson as the default application debugger, which can be useful if you have installed Visual Studio Debugger and this beast shows its ugly head after each application crash. After this, enter: drwtsn32 and a dialog window pops up. Here you should use the follwing settings: Enable (check): - 'Dump Symbol Table' - 'Dump All Thread Contexts' - 'Append To Existing Log File' Disable (uncheck): - 'Create Crash Dump File' - 'Visual Notification' - 'Sound Notification' These settings instruct Dr. Watson to collect as much information as possible and to finish without user interaction. Otherwise, a crashed application will not terminate until the user clicks 'OK'. - 'Number of Instructions': 20 - 'Number of Errors To Save': 50 This saves the last 50 errors with the last 20 instructions for each error. - 'Log File Path': Set this to something short and easy to remember. Maybe you already have a folder for log files; I always use the TEMP folder for this. SDK installationEdit Next thing is to install a Java-SDK. The version of the SDK must be greater or equal to 1.3.1, SDK 1.3.0 will not work. Also, you have to install a SDK, installing only a runtime environment (JRE) will not work. I would suggest using SDK 1.4.0 or higher unless you experience problems with it (someone mentioned issues with Timestamp's) . Especially on multiprocessor systems, 1.4.0's concurrent garbage collection is a big plus. If you do cross-platform development (OSX/Windows) be sure to download the full international SDK from Sun (the biggest package), as only this version contains all the font encodings from the Mac. Else, you will get exceptions stating that NSMacOSRomanStringEncoding could not be found. Run the SDK installer. Use the path suggested (i.e. c:\j2sdk1.4.0). You can deselect 'sources' and 'examples', they are not needed for WO deployment. Install the Java plugin if you need it. After SDK setup has finished, open a shell and enter: java -version You should see an output like this: java version "1.4.0" Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.0-b92) Java HotSpot(TM) Client VM (build 1.4.0-b92, mixed mode) Good job! You are now ready to install WebObjects. Install WebObjectsEdit With everything set up and running properly, you can now install WebObjects. Be sure to log in as user with Administrator privileges. This section also works for both WebObjects 5.0 and 5.1. Copy CD contents to local hard diskEdit Create a new folder on your hard disk and copy the contents of the WebObjects installation CD there. Don't use a shared folder on the network, as Setup needs to restart when it is half way done and fails if it cannot find the contents of the CD; bad karma arises and you will be left with an unusable WO installation ... an empty folder like c:\cdrom would be just fine. Choose the right version of NMServer.regEdit The supporting files package contains serveral versions of NMServer.reg. Choose the one you need: - NMServer.reg.deployment - use for all version of Windows if you install WO deployment (the original one has a bug) - NMServer.reg.nt4-devel - use for WO development on NT4 (the original one only runs on Win2000/XP) - NMServer.reg.win2k-devel - use for WO development on Win2000/XP (the one included with WO by default) Rename the appropriate file to NMServer.reg and copy it to - c:\cdrom\Deployment\Windows\Install\NMServer.reg (for deployment) or - c:\cdrom\Developer\Windows\Install\NMServer.reg (for development). Depending on the type of setup, run - for Development install: c:\cdrom\Developer\Windows\Setup.exe - for Deployment install: c:\cdrom\Deployment\Windows\Setup.exe To complete all steps in this how-to (like compiling the adaptor), development setup is necessary. As someone put it nicely: Development = Deployment + Developer Tools If you want to install Deployment only, just skip the steps you don't need. Notice that you will not be able to compile the ISAPI adaptor (which has an ugly bug) if you install Deployment only. Be sure to set up a Development installation (maybe on another machine), too. Accept the license agreement. Enter User Name, Company Name and the appropriate serial number for your setup (development, in this case). No JRE found during setupEdit I have seen this only on WinNT but it may happen on other versions as well: Although you have installed Java2-SDK 1.4.0, Setup complains that there is no JRE and that you should install Sun Java 1.3 or greater. In this case, quit Setup and import the file jresettings.reg from the supporting files package (just double-click it). You may have to open it first with a text editor (notepad would be fine) and change the path settings in there. By default, it points to c:\j2skd1.4.0. If you have installed the SDK there, no changes to jresettings.reg are necessary. Choose 'Custom Setup'. Select C:\Apple as the destination folder. You may specify another drive letter but don't put the Apple folder anywhere else but into the root folder. Great frustration with long paths will manifest itself if you do. Also, under no circumstances put spaces in the path! On the 'Select Components' don't change anything; simply click 'Next'. <winnt only> After this dialog, an 'ComponentAddItemError' dialog box may pop up. Simply click 'Ok' there and select 'Other' from the web server list; then click 'Next'. </winnt only> 'CGI-BIN Directory': Specify the path to the IIS scripts folder, which is typically located at c:\inetpub\scripts. You can use the 'Browse...' button to avoid typos. 'Document Root Directory': The path to the IIS wwwroot folder, typically c:\inetpub\wwwroot. Click 'Next' on all further dialog windows. Setup should start copying all files needed to the right locations. Then it asks you to reboot. Do so and log in again using the same user name as before. Setup will continue with the installation and eventually finish. Don't interrupt it before, this will render the installation useless! After setup has finished, install the latest update for your version of WebObjects. Reboot when asked to do so. Congratulations! WO Setup is complete. Post WebObjects installation stepsEdit Having all the WebObjects files on you disk does not mean sunshine immediately. Here comes what I call the 'post WebObjects setup'. WebObjects base installationEdit Open 'Services': - on WinNT: 'Start, Control Panel, Services' - on Win2000/XP: Open a shell and enter: %systemroot%\system32\services.msc /s You will find the following WebObjects related services. The name of the service is written into brackets - you can use that name to start/stop these services from the command line. These two are installed in both Development and Deployment: - Apple WebObjects Task Daemon 5 (wotaskd5) This service manages all running application instances. It should be set to run automatically at startup on both Development and Deployment. - Apple WebObjects Monitor 5 (womonitor5) This service provides a user interface to create new and manage existing application instances, sort of a frontend to wotaskd. It is not needed to start and manage application instances and should (for security reasons) be set to run manually. To start it, open a shell and enter: net start womonitor5 After the time it takes to start (usually only some seconds), you can open a browser and point to http://<myhost>:56789 to see the first page. To improve security, go to the 'Preferences' tab and set a password for accessing Monitor. Then use it like you normally would. To stop it after use, enter: net stop womonitor5 In WebObjects 5.0, wotaskd and Monitor are not installed as services by default. To change this, you will need the instsrv and srvany tools from the supporting files package. Here comes a quick step-by-step guide to install wotaskd and Monitor as services, for more information see srvany.doc that comes with srvany.exe - Right-click on the Start menu button and choose 'Explore all users'. In the Explorer window that opens, go to 'Programs, Startup'. Move the 'Start Task Daemon' shortcut out of the Startup folder to stop wotaskd from starting when you log in. I always put it into the 'WebObjects' folder but you can delete it as well. - Copy srvany.exe to an easy to remember place like C:\Apple (or C:\WinNT). Open a shell and enter instsrv wotaskd c:\apple\srvany.exe - Open RegEdit and go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\wotaskd Change the DisplayName property to 'Apple WebObjects Task Daemon' Change the Start property to '2' if it is not already 2. This means that the service should start automatically. - Under the wotaskd key, add a new subkey and name it Parameters. Under this subkey, create a new Application property (type string) with the following value: C:\Apple\Library\WebObjects\JavaApplications\wotaskd.woa\StartWOTaskD.exe Do not use %NEXT_ROOT% here - this may not yet be available when the service is about to be started resulting in a failure. Use the absolute path name to your WebObjects root folder instead. - Create a second string value property, AppDirectory, with the following value C:\Apple\Library\WebObjects\JavaApplications\wotaskd.woa - Reboot your system After reboot, you should be able to view http:\\<myhost>:1085 from another machine without logging in to the Windows box. If this works you have done everything correctly. If you want to install Monitor as a service, too, do the following. - Open a shell and enter: instsrv womonitor c:\apple\srvany.exe - Open RegEdit and go to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\womonitor Change DisplayName to 'Apple WebObjects Monitor' Change Start to '3' which means 'manual startup'. If you do not have to care about security, you can set it to '2' (automatic startup). - Under the womonitor key, add a new subkey and name it Parameters. Under this subkey, create a new Application property (type string) with the following value: C:\Apple\Library\WebObjects\JavaApplications\JavaMonitor.woa\JavaMonitor.cmd - Create a second string property, AppDirectory, with the following value C:\Apple\Library\WebObjects\JavaApplications\JavaMonitor.woa - To start Monitor on port 56789 every time, add a third string property, AppParameters, with the following value -WOPort 56789 You can specify more command line arguments here if you like. Now, try to start Monitor. Open a shell and enter net start womonitor After a few seconds, open your browser and go to http://<myhost>:56789. You should see Monitor's starting page. If you have WO Development installed, there are some more servicesEdit - Apple Mach Daemon (Apple_Mach_Daemon) and Apple Netname Server (Apple_Netname_Server) These two services - together with Pasteboard Server and Window Server - provide the base to run YellowBox applications (most of the WebObjects Developer Tools) on Windows. You don't have to care about them, just set them to start automatically. - openexec (openexec): This service starts the Openbase database engine and provides several sample databases. If you don't want to use OpenBase, you can save memory by setting the startup type to 'Manual' which prevents this service from starting automatically. To get rid of the Openbase processes that are running at the moment, you can either reboot or kill all relevant processes using the 'kill' tool (included in the supporting files package). Open a shell and enter: kill OpenBase.exe kill pgroup.exe kill databackup.exe kill openinfo.exe Important notice for all servicesEdit Never set the startup type of an entry to 'Disabled' unless you are dead sure what you are doing! 'Manual' services can be started from the system as needed while 'disabled' services cannot be started at all (unless you select a different startup type). Disabling the wrong services may prevent the system from booting and is not considered to be a good idea. Besides all those services, for a Development installation there are two shortcuts with the following settings in the 'All Users' Startup folder: - Pasteboard Server Target: %NEXT_ROOT%\Library\Frameworks\AppKit.framework\Resources\pbs.exe Start in: %NEXT_ROOT%\Library\Frameworks\AppKit.framework\Resources Run: Minimized - Window Server Target: %NEXT_ROOT%\Library\System\WindowServer.exe Start in: %NEXT_ROOT%\Library\System Run: Minimized WinXP Fast User SwitchingEdit 'Pasteboard Server' and 'Window Server' are started everytime a new user logs on. With Fast User Switching, more than one user can be logged in at the same time, causing Pasteboard Server and Window Server to be started more than once. Both programs don't like it and stop working which in turn causes the developer tools (ProjectBuilder, EOModeler, WOBuilder) to stop working, too. ;-( Workaround: If only one user on the machine is developing with WebObjects, move the shortcuts for Pasteboard Server and Window Server from the 'All Users' Startup folder to the personal Startup folder of the áppropriate user. If more than one user on the machine is developing with WebObjects, make sure they are not logged in at the same time. If nothing else helps, you can disable Fast User Switching. WebObjects Update programEdit There are at least two bugs when installing WebObjects Updates: - Files that you have changed yourself are not updated, the updater just silently skips them. Because of this, if you want to be sure that a given file is updated, rename it - for example, to <filename.extension.OLD> - and then run the update again. - Everytime the update is run, the WebObjects relevant folders are added to the PATH variable. Depending on how often updates are installed, this can result in an extremely long path. After the update has finished, clean the PATH variable manually. Changes to environment variablesEdit Open 'System' in Control Panel. On the 'Advanced' tab, click 'Environment Variables', then click the name of the user variable or system variable you want to change, as follows: Click New/Edit/Delete to add/change/remove a new variable name and value. You may have to close and reopen running programs for the new settings to take effect. To update all running services with the new values, it may be easier (and less error prone) if you just reboot the machine once more. The following environment variables should exist after WO setup: - LIB: one contains the path to several libraries used be the WebObjects developer tools and should point to C:\Apple\Developer\Libraries. You usually don't have to change anything here, it is just mentioned for completeness. - NEXT_ROOT: New: Don't change the slash in NEXT_ROOT into a backslash! Just leave it as it is. The compilation of the adaptor sources will fail, if you do this. It's all my fault, that I suggested otherwise in a previous version. - PATH: The following three entries are the only ones needed by WebObjects but they are usually duplicated by running the WebObjects Update. This does not harm except that it makes the PATH variable unnecessary long. Be sure to include the following only once (this assumes that you have installed WebObjects to C:\Apple): C:\Apple\Library\Executables; C:\Apple\Library\JDK\bin; C:\Apple\bin; - TEMP: For the user accounts used to develop and run WebObjects applications, remove both TMP and TEMP from the user variables section and add them to the all users section. Change TEMP and TMP to a path without spaces, like c:\temp. You may have to create c:\temp if it does not exist yet. IMHO, that makes finding launch script zombies, log files and other WebObjects temporary stuff much easier. To clean the zombies, you alternatively can use any application you have written and start it in a shell with the 'clean' parameter: cd myapp.woa myapp.cmd clean - WEBOBJECTS_JAVA_EXTENSIONS This contains the path to the extensions folder for additional JAR files. A typical path is C:/PROGRA~1/Java/J2RE14~1.0/lib/ext. Be sure not to put spaces in here, either move your JRE to another folder or use short names (8dot3 notation like old MSDOS). - WEBOBJECTS_JAVA_HOME This specifies the path to the SDK used by WebObjects applications you write. A typical path is C:/J2SDK1~1.0. Also like above, don't use spaces in the path name. As opposed to the SDK you specify here, the WebObjects developer tools always use JDK 1.1.8 in %NEXT_ROOT%\Library\JDK - you don't have to care about that. Upgrade the licenseEdit If you have installed WebObjects Development but want to fully use its Deployment capabilites (like multiple instances, unlimited requests etc.) you have to upgrade your license to the Deployment version. There are two ways to do this. First one is to use the License Upgrader. Open a shell and enter: %NEXT_ROOT%\Demos\JavaWebObjectsLicenseUpgrader.app\JavaWebObjectsLicenseUpgrader.exe Alternatively, you can open it via 'Start, Programs, WebObjects, WebObjects 5 License Upgrader'. The second way (if the License Upgrader does not start) is to edit the key file with a text editor (Notepad would be okay). Just start the editor and open %NEXT_ROOT%\Library\Frameworks\JavaWebObjects.framework\Resources\License.key Delete the old license key and insert the new one. You have to restart wotaskd and (if it is running) Monitor for the changes to take effect: net stop wotaskd5 net stop womonitor5 net start wotaskd5 net start womonitor5 Mess around with the adaptorEdit There is a ugly bug in the IIS adaptor that extremely degrades performance when it is put under heavy load. You will reach about 3 to 5 requests per second and memory will be eaten at extreme speed, resulting in OutOfMemoryExceptions after just a few sessions have been created. This happens whenever you use the ISAPI adaptor, even when your applications run on another machine with a different OS. Luckily, Karl Hsu found a fix to this which I am extremely grateful for. Thanks Karl! Karl states that you should incorporate this fix only if you are seeing massive performance problems, so do some test before patching. The Apache Benchmark tool ab (included in support files package) can be used for this. Both files, request.h and request.c can be found at %NEXT_ROOT%\Developer\Examples\WebObjects\Source\Adaptors\Adaptor Patched versions of request.h and request.c that include the code below are also included in the support files package. If you did not make changes to these two files yourself, you can copy them to %NEXT_ROOT%\Developer\Examples\WebObjects\Source\Adaptors\Adaptor and start compiling right away. Else, in request.h, search for #include "WOURLCUtilities.h" and add the following line right below: #include "wastring.h" In request.c, search for the beginning of the method 'int req_sendRequest' (around line 217) and add the following code block _before_: #ifdef WIN32 static void req_appendHeader(const char *key, const char *val, String *headers) { int valLength = strlen(val); while (val[valLength - 1] == '\r' || val[valLength - 1] == '\n') { valLength--; } str_append(headers, key); str_appendLiteral(headers, ": "); str_appendLength(headers, val, valLength); str_appendLiteral(headers, "\r\n"); } int req_sendRequest(HTTPRequest *req, net_fd socket) { struct iovec *buffers; int bufferCount, result; String *headersString; buffers = WOMALLOC(3 * sizeof(struct iovec)); headersString = str_create(req->request_str, 0); if (headersString) { st_perform(req->headers, (st_perform_callback)req_appendHeader, headersString); } buffers[0].iov_base = headersString->text; buffers[0].iov_len = headersString->length; buffers[1].iov_base = "\r\n"; buffers[1].iov_len = 2; bufferCount = 2; if (req->content_length > 0) { bufferCount++; buffers[2].iov_base = req->content; buffers[2].iov_len = req->content_length; } result = transport->sendBuffers(socket, buffers, bufferCount); str_free(headersString); WOFREE(buffers); if (result == 0) result = transport->flush_connection(socket); else WOLog(WO_ERR, "error sending request"); return result; } #else Now, right after this #else comes the original req_sendRequest method. Go to the end of this method and insert a #endif in the next line. When you have successfully patched the source file, it is time to... Compile the adaptorEdit Next bug: The makefile expects the OS environment variable to be either empty or contain the string 'WINDOWS'. Unfortunately, the value of OS is WINDOWS_NT instead of WINDOWS so the make process will fail to compile the ISAPI adaptor. You could change the makefile so it works with WINDOWS_NT but you will have to do _a lot_ of changing then. Instead, let's just change the OS variable. However, don't change this permanently via control panel, as I do not know if it is needed elsewhere. Just setting it to the proper value before compile is enough. Open a shell and enter: cd /d %NEXT_ROOT%\Developer\Examples\WebObjects\Source\Adaptors set OS=WINDOWS make clean make If compilation was successful (no error messages), we now should have the WebObjects adaptor as EXE (CGI) and DLL (ISAPI) file. These two should be copied to the proper location (change c:\inetpub to match your system). net stop w3svc copy /y cgi\webobjects.exe c:\inetpub\scripts copy /y iis\webobjects.dll c:\inetpub\scripts net start w3svc Stopping IIS before copying prevents possible sharing violations for the ISAPI adaptor as you cannot overwrite the file if it has already been loaded by IIS. ISAPI Adaptor configuration settingsEdit Now that you have compiled the ISAPI adaptor, you have to configure it. For a single server deployment (IIS and wotaskd on the same machine), no further configuration is necessary. However, if you have wotaskd running on another machine, or want to use more than one application server, you have to tell the adaptor where to look for them. The way that always worked for me is via host list in the registry. Open Regedit (open a shell and enter regedit) and go to: HKEY_LOCAL_MACHINE\SOFTWARE\Apple\WebObjects\ If it does not exist yet, create a new subkey and name it 'Configuration'. Then go to: HKEY_LOCAL_MACHINE\SOFTWARE\Apple\WebObjects\Configuration and add the following two keys (type REG_SZ): CONF_INTERVAL 10 This means that every 10 seconds, the ISAPI adaptor will talk to the wotaskd's and re-read the configuration. CONF_URL This is a comma-separated list of all hosts, on which a wotaskd is running. 1085 is the default port. Just add all application servers and be sure not to put spaces after the commata - WO does not like spaces. If you have fixed IP-addresses, you can specify them, otherwise use hostnames and make sure the IIS machine can resolve hostnames to valid addresses. This should be no problem if all machines run Windows, otherwise you have to use a DNS server or a host file. Testing the installationEdit Now that everything should be set up properly, it is time for a small test. First, let's look if wotaskd is working. Open a browser and go to http://<myhost>:1085 If it works, you should see the host's configuration displayed in the browser. Now, open a shell and start Monitor if it is not already running: net start womonitor5 Wait a few seconds, then point the browser to http://<myhost>:56789 You should see Monitor's main window. Now you can start writing and deploying applications. Have fun! TroubleshootingEdit Q: I keep getting the following exception when running the WO applications: java.io.UnsupportedEncodingException: NSMacOSRomanStringEncoding A: Make sure the charsets.jar library is in your Java runtime path. This is installed in the JDK but not always in the JRE. Revision listEdit 1.3: 2002-08-29 - removed suggestion to change the slash in NEXT_ROOT into a backslash, because this breaks compilation of the adaptor sources 1.2: 2002-07-26 - added steps to install wotaskd and Monitor as services in WebObjects 5.0 1.1: 2002-07-10 - added two workarounds for errors during WebObjects setup on WinNT 1.0: 2002-07-09 - first public release 0.9: 2002-06-28 - work in progress
https://en.m.wikibooks.org/wiki/WebObjects/Web_Applications/Deployment/Windows
CC-MAIN-2016-40
refinedweb
6,197
56.76
ALBINO ST BERNARD . Duke, a c-section in mumbai- i quality pups. planet venus photos Does albino butlavender albino garcia twin springs stsome other. Muneca small albino listed as albinos but technically this website. Black big paws dog pet sometimes even st bernard. Keep her as a special needs a half-mask orit. Meets st months. Above is essentially large lab and i got an held at. Cute babies from in. za albino jades, currently the i suppose to. a beautiful albino pekingese. Comalbino zebra beautiful albino husky mixed with. Miniature saint sheffield, away online english dictionary is. Get a tammany parish jan ive recently had. st single from bernardhusky mix teehee shar-pei pound male. Places she will well worth your. Suppose to jun puppie for bogue falaya park. Hamsters for every albino catfish ever caught by. Held at absolutely free online classifieds. The and terms, cause ive. Apple slice ratings off on the last week. From the last days agodo. Filesize. kb home that are referred to buy. Comrepin like comment black sheep larosa referee shawn clark- feb. Square following his electionhi im arjun and golden marmosets, golden marmosets golden. Cubensis albino penis envy. Him to show her mother amythats when. Zebra beautiful albino barrera will damned st bernard need. Mumbai- animals- feb. albino penis envy za albino skin andruby is albinoif. Took him more in yadavi kennels golden retriever dog since. yrsfind albino stsome other pets only. Apan albino now ive relinquished my first giant african land. Corgi mix dozing with cute babies from the same cage albino mumbai. Albinus, bernard doggies have assembled. Rare albino ball python, hamstersalbino doberman pinscher. Red oscar jan albinos, kigoma provide best price with apple slice. Thinking of cm albino. Teehee shar-pei view full size and lyme. Institute, white pages public transcripts for bogue falaya park. Lolled outhi we piebald albino smooth coat. Inches details more mar babies from though. Rescuedsaint bernard puppies for a picture of her. depath naya Go in adorable price with a sissy loves. Jrwhite albino pekingese, saint bernards are referred to santalbino. Clarivaux is it dedicated respectively to go in ratings. There is patron saint thee more in association with microphthalmia. Pages rust, and ive relinquished my pupps. ihatestacy pics import quality pups sale at dogpsilocybe cubensis albino penis. Repin oct contactis there is essentially large head is come from. Patrick, albino, up to blue iris coloboma, and said. Needs a wide range. Lake calcasieu, louisiana, usa clark. Petersburg giant albino garcias biography, profile, la rosa. Loves big paws dog pet propagators interesting free- animals. Mar import quality pups. Com jun those big dog. The snow, maggie andst bernard. mar yadavi kennels golden retriever. Can call me to this dog. Him more social just got. Hi im caitlin i got. Food, st siberian husky aug importmastiff. Bernardtheres an all of those big dog. owl in desert Grew to buy of kennel offers albino. Need to blue iris albino. Pug chow saint malachy quality saint bernard, st tallest man of her. Hassaint bernard pricemongo is rat with landowners from the lenten mission, ellen hunter- pound male female albino catfish. Park i just months ago doggies have a pit bull. Petersburg giant and came back healthy, can call. It extremely gentle giant checked out. kb computer desktop encyclopedia, albertus magnus, saint bernards, samoyeds. october year. New jersey njgill st commentrepin like comment dogs. Saint-bernard-de-lacolle hotels in santalbino rss feed. Barrera will be a thissaint bernard x st an oct. What the money for sale albino ball python, hamstersalbino doberman pinschers. Rhino x st high school, dog, st dog- he was. Comrepin like complete with cute face here above. Jul parish jan plazita institute. Rescuedinformation about to this dog for a picture. Angle dysgenesis, iris albino angle. Enjoying an searches siberian husky aug ferret. njgill st dubai- i all white pages. Locanda del at around. Skier lost st barrera will keep her as an inch. Breed of clarivaux is ckc, she will tegaderm foam adhesive Burmese python pages rust, and ive seen. St bernard pups are talking about. Comghy- animals profile, la rosa luis de la locanda. Albertus magnus, saint importmastiff miniature saint bernard. Italian giovanni paolo i, born albino will keep her lodrigue. views mar faridabad new township. al and winry akash khanna akihito yagi was born airfrance klm logo ahmeenah young agency organizational chart african royalty clothing aeon bandaraya melaka adriana sidi adrian the name vw fleet acumuladores lth acetyl coa production ace stuff You need Flash Player 8 to view this site. You can install it by clicking here .
http://architecturalstudio.com/admin/albino-st-bernard
CC-MAIN-2014-52
refinedweb
767
71.61
CVSROOT: /cvs/cluster Module name: cluster Changes by: rpeterso sourceware org 2007-01-23 16:53:29 Modified files: fence/fence_tool: fence_tool.c Log message: Resolves: bz 222933: regression: fence_tool no longer times out after 300 seconds Patches: --- cluster/fence/fence_tool/fence_tool.c 2006/10/13 14:57:55 1.23 +++ cluster/fence/fence_tool/fence_tool.c 2007/01/23 16:53:28 1.24 @@ -2,7 +2,7 @@ ******************************************************************************* ** ** Copyright (C) Sistina Software, Inc. 1997-2003 All rights reserved. -** Copyright (C) 2004 Red Hat, Inc. All rights reserved. +** Copyright (C) 2004-2007 Red Hat, Inc. All rights reserved. ** ** This copyrighted material is made available to anyone wishing to use, ** modify, copy, or redistribute it subject to the terms and conditions @@ -29,6 +29,7 @@ #include "ccs.h" #include "copyright.cf" +#include "libcman.h" #include "libgroup.h" #ifndef TRUE @@ -36,7 +37,7 @@ #define FALSE 0 #endif -#define OPTION_STRING ("Vhcj:f:t:w") +#define OPTION_STRING ("Vhcj:f:t:wQ") #define FENCED_SOCK_PATH "fenced_socket" #define MAXLINE 256 @@ -57,7 +58,10 @@ char *prog_name; int operation; int child_wait = FALSE; +int quorum_wait = TRUE; int fenced_start_timeout = 300; /* five minutes */ +int signalled = 0; +cman_handle_t ch; static int get_int_arg(char argopt, char *arg) { @@ -97,6 +101,11 @@ return 0; } +static void sigalarm_handler(int sig) +{ + signalled = 1; +} + int fenced_connect(void) { struct sockaddr_un sun; @@ -135,6 +144,50 @@ return gdata.member; } +/* + * We wait for the cluster to be quorate in this program because it's easy to + * kill this program if we want to quit waiting. If we just started fenced + * without waiting for quorum, fenced's join would then wait for quorum in SM + * but we can't kill/cancel it at that point -- we have to wait for it to + * complete. + * + * A second reason to wait for quorum is that the unfencing step involves + * cluster.conf lookups through ccs, but ccsd may wait for the cluster to be + * quorate before responding to the lookups. There wouldn't be a problem + * blocking there per se, but it's cleaner I think to just wait here first. + * + * In the case where we're leaving, we want to wait for quorum because if we go + * ahead and shut down fenced, the fence domain leave will block in SM where it + * will wait for quorum before the leave can be processed. We can't + * kill/cancel the leave at that point, but we can if we're waiting here. + * + * Waiting here doesn't guarantee we won't end up blocking in SM on the join or + * leave, but it avoids it in some common cases which can be helpful. (Quorum + * could easily be lost between the time we wait for it here and then begin the + * join/leave process.) + */ + +static int check_quorum(void) +{ + int rv = 0, i = 0; + + while (!signalled) { + rv = cman_is_quorate(ch); + if (rv) + return TRUE; + else if (!quorum_wait) + return FALSE; + + sleep(1); + + if (!signalled && ++i > 9 && !(i % 10)) + printf("%s: waiting for cluster quorum\n", prog_name); + } + + errno = ETIMEDOUT; + return FALSE; +} + static int do_wait(int joining) { int i; @@ -156,6 +209,22 @@ int i, fd, rv; char buf[MAXLINE]; + ch = cman_init(NULL); + + if (fenced_start_timeout) { + signal(SIGALRM, sigalarm_handler); + alarm(fenced_start_timeout); + } + + if (!check_quorum()) { + if (errno == ETIMEDOUT) + printf("%s: Timed out waiting for cluster " + "quorum to form.\n", prog_name); + cman_finish(ch); + return EXIT_FAILURE; + } + cman_finish(ch); + i = 0; do { sleep(1); @@ -253,6 +322,7 @@ printf(" -V Print program version information, then exit\n"); printf(" -h Print this help, then exit\n"); printf(" -t Maximum time in seconds to wait\n"); + printf(" -Q Fail if cluster is not quorate, don't wait\n"); printf("\n"); printf("Fenced options:\n"); printf(" these are passed on to fenced when it's started\n"); @@ -284,6 +354,10 @@ exit(EXIT_SUCCESS); break; + case 'Q': + quorum_wait = FALSE; + break; + case 'w': child_wait = TRUE; break;
https://www.redhat.com/archives/cluster-devel/2007-January/msg00224.html
CC-MAIN-2014-10
refinedweb
622
71.24