Hello!

I've been having a tough time trying to use the CGI monad transformer (CGIT). Hopefully someone can show me my misstep(s).

I have a CGI script that's evolving. Early on, it needed a single database access. Now it's doing two accesses (and it looks like I'll be adding more). Rather than making a connection for each access, the script needs to connect once. To do this, I want to combine the CGI monad with the Reader monad. This version compiles cleanly:

> module AppMonad (App (..), runApp)
>     where
>
> import Control.Exception (bracket)
> import Control.Monad.Reader
> import Network.CGI.Monad
> import Network.CGI.Protocol
> import System.IO (stdin, stdout)
> import Database.HSQL.PostgreSQL
>
> newtype App a = App (ReaderT Connection (CGIT IO) a)
>     deriving (Monad, MonadIO, MonadReader Connection)
>
> runApp :: App CGIResult -> IO ()
> runApp (App a) =
>     bracket (connect "host" "dbname" "user" "password")
>             disconnect
>             (\c -> do { env <- getCGIVars
>                       ; hRunCGI env stdin stdout (runCGIT (runReaderT a c))
>                       ; return () } )

Unfortunately, when another module tries to actually use the monad, I get warnings about "No instance for (MonadCGI App)". I tried making an instance:

> instance MonadCGI App where
>     cgiAddHeader = ?
>     cgiGet = ?

But I don't know how to define these functions. I tried various 'lift'ing combinations, but couldn't come up with a solution that would compile.

I'm also disappointed that I had to break apart 'runCGI' (by cut-and-pasting its source) because I couldn't make it believe my monad looked enough like MonadCGI. My previous experiment with monad transformers was successful. It didn't use CGIT, however, so the 'run*' functions were simpler.

Does anyone have an example of using CGIT (I didn't find any from Google)? Shouldn't I be able to use 'runCGI' with my monad? CGIT users shouldn't be required to re-implement 'runCGI', right?

Any help or ideas are appreciated!
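[Editor's note: a hedged sketch of the missing instance, not from the original post. Assuming the two-method MonadCGI class exposed by Network.CGI.Monad, the usual pattern is to unwrap App and lift each method through the ReaderT layer:]

```haskell
-- Sketch only (untested): each MonadCGI method is lifted through the
-- ReaderT Connection layer into the underlying CGIT IO monad.
instance MonadCGI App where
    cgiAddHeader n v = App (lift (cgiAddHeader n v))
    cgiGet f         = App (lift (cgiGet f))
```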
--
Rich
JID: rich at neswold.homeunix.net
AIM: rnezzy

http://www.haskell.org/pipermail/haskell-cafe/2007-August/030887.html
Joe Mundy said:
> I don't like solution 2). Maybe we could more broadly share the work
> of a release according to solution 1) in some way. I don't have a good
> understanding of what is involved. How did the process go last time?
> Joe
Well, the process is basically a pain in the proverbial, IMO. I made a
few mistakes along the way (hence the reason we have a 1.1.0.1, and not
just a 1.1). But basically I just followed the instruction list passed
on by Ian Scott - I'm not a CVS expert, so most of it was voodoo to me,
but it seemed to do the trick. The most effort, if I remember correctly,
is the downloading and compiling, which needs to be done a few times on
at least two platforms - the scariest bit is getting the tags right; I
think I had to undo what I'd done once or twice...
I presume you don't like 2) because of the apparent difficulty of entry
for newcomers? But I think at the moment the difficulty of entry is even
greater because newcomers download an old version of the library, spend
some effort trying to compile it, then ask one of the lists how to fix
it only to be told that they should get the CVS version...
My suggestions were (in increasing order of personal preference):
1) maintain the status quo, but make some effort to produce releases on a
semi-regular basis (perhaps every 12 months)
2) remove the releases from public view, and tell everyone to get the
latest CVS version as that is stable
3) automatically tar up a new release on a regular time frame (say once
a month, once a week, every day - if it's automatic, who cares?), and
just make that the current release (possibly changing only a minor
version number, e.g. 1.1.1, 1.1.2, ...).
Instructions (From Ian):
--------------
1. Download everything first onto at least two platforms and test the
current CVS HEAD. The dashboards can be wrong (they never noticed that
the root configure files had been removed!). This includes updating
CHANGES.txt.
* Need to commit changes here before making the branch point etc.
-------------
--
Cheers,
Brendan.
----------------------------------------------------------------------------
Brendan McCane Email: mccane@...
Department of Computer Science Phone: +64 3 479 8588/8578.
University of Otago Fax: +64 3 479 8529
Box 56, Dunedin, New Zealand.
Hi everyone,
I've been using VIL for a while now and had a question/comment for you all.
Unless I'm missing something, it appears as if VIL is unable to handle image
files larger than 2^31-1 bytes (i.e., the largest number that can be
represented by vil_streampos, which is typedef'd to be a long int). The
reason I noticed this in the first place was because I ran across a NITF
file that exceeded these bounds. The NITF spec allows for files as big as
10^12-1 (999,999,999,999) bytes.
So, it seems as if this limitation is caused by the combination of two
things:
1) vil_streampos is defined to be long int -- long int is 32 bits on my
platform (Windows).
and
2) vil_stream has no API for iterative seeking (i.e., seeking from the
current position as opposed to the start of the stream).
It seems like we could overcome this limitation by remedying either #1 or
#2. I took a crack at #1 in my working directory by changing the typedef
(of vil_streampos) to this:
#if VXL_HAS_INT_64
typedef vxl_int_64 vil_streampos;
#else //VXL_HAS_INT_64
typedef long int vil_streampos;
#endif //VXL_HAS_INT_64
That caused lots of compiler warnings elsewhere in the VIL code base where int,
long int and vil_streampos are occasionally used interchangeably. Some of
these issues were easy to resolve and others would (I believe) require API
changes. Also, vil_stream_fstream::seek() (which uses std::fstream under
the hood) would have to be modified. My plan for that was to have it do
multiple seeks (from the current position) for cases where the desired seek
offset exceeded the size of istream::streampos. I never got this far.
The other option would be to attack #2 by adding a parameter similar to
ios_base::seekdir to vil_stream::seek(). Then client code could achieve a
larger seek by iteratively seeking. This would solve the NITF case
somewhat nicely (files up to 10^12 bytes) but would be infeasible for REALLY
large files where you may have to seek a ridiculous number of times (worst
case: 2^32) to get to the end of the file. Of course, who cares about files
that are that big, if they even exist. This solution requires a (small)
change to the core API.
I suppose a third option would be to add a virtual seek64() function that
did the right thing for all subclasses. I haven't really thought this idea
through too thoroughly.
At any rate, does anyone have any opinions on this? Is there another
(easier) option I'm missing? If I were to pursue a solution, would you want
it incorporated into VIL? Thank you all for your consideration.
Rob Radtke

https://sourceforge.net/p/vxl/mailman/vxl-maintainers/?viewmonth=200506&viewday=23&style=flat
Xamarin Studio 6.0
Xamarin Studio 6.0.2
- New: Android Http Client Handler build option allows you to choose between different HttpClient implementations.
- Updated: NuGet updated to 2.12.
- Fixed: 41752: Console.ReadLine() ran but produced no reaction after updating to version 6.0.
- Fixed: 41604: File-not-found exception logged every second into the IDE log when the project is not compiled.
- Fixed: 40344: [Cycle7] When the user creates a project with the same name, XS creates two projects.
- Fixed: 40099: [Toolbar] Wrong behavior when showing close button in full screen.
- Fixed: 41245: Attribute code completion not showing all constructors and showing too many things.
- Fixed: 42011: Code Template Failing.
- Fixed: 41658: Cannot compile old projects. Starting a new solution from scratch gives the same problem.
- Fixed: 41774: [Regression] Project not recognized as Android project after update to Xamarin Studio 6.
- Fixed: 41795: Generated documentation includes "returns" tag for void methods.
- Fixed: 41358: Xamarin Studio crashed when adding a GTK# widget signal handler.
- Fixed: 40915: Assembly reference aliases are not supported.
- Fixed: 41580: Icon for file links is not clear.
- Fixed: 37577: [Search/Shell] Closing the "Find in Files" window also closes the search panel.
- Fixed: 41236: New file not saved correctly.
- Fixed: 40538: Code completion isn't filtering properly.
- Fixed: 41351: No arguments code completion for methods called via ?. operator.
- Fixed: 40898: Unreadable text in solution pad selection.
- Fixed: 41320: Run Unit Tests from Run Menu isn't working. Shows NRE in log.
- Fixed: 41790: Xamarin Studio fatal error when running project build custom commands.
- Fixed: 41722: Build failed with Custom Commands.
- Fixed: 41499: Searching for a NuGet package takes five to ten minutes to populate the results.
- Fixed: 41694: Roslyn doesn't "load" in projects that XS doesn't support.
- Fixed: 41388: Code completion is incorrect for array types.
- Fixed: 39328: [Regression] Copy/paste line no longer works in XS in Windows.
- Fixed: 40888: Extension methods shown on types.
- Fixed: 42575: Type string=String doesn't syntax highlight correctly.
- Fixed: 41541: App Config is not read in Unit Tests.
- Fixed: 41113: On clicking the '+' button of the ToolBox window, XS hangs for a few seconds.
- Fixed: 42284: CTRL+Space doesn't bring up code completion.
- Fixed: 42239: Cannot use single-character DU case sub-label names.
- Fixed: 42051: Error when editing control name in storyboard, outlets/actions not regenerated.
- Fixed: 41808: Setting UILabel Text type to "Attributed", then back to "Plain" causes compilation error in Storyboard.
- Fixed: 41887: Unable to remove duplicate events generated for widgets.
- Fixed: 41907: XS crashes on changing the properties of Widgets on both Main.storyboard and Interface.storyboard.
- Fixed: 41909: The new "On-Demand Resources" step of the Ad Hoc "Sign and Distribute" workflow for iOS projects is always displayed even if the app doesn't contain any on-demand resources.
- Fixed: 41615: XS shows many errors in subclasses of Forms ContentPage.
- Fixed: 41572: An error "The call is ambiguous between the following methods or properties 'ClassName.InitializeComponent()' and 'ClassName.InitializeComponent()'" is displayed in the Source Code Editor when editing .xaml.cs code-behind files.
- Fixed: 40714: Old XS icon (blue) appears next to the 'About' option under the Help menu in Windows XS.
- Fixed: 41862: Unable to open Android Designer in XS on certain projects.
- Fixed: 41894: Xamarin Studio crashes when trying to open "Options" on an Android Bindings project.
- Fixed: 41317: Bar Button Item disappears when placing a Fixed/Flexible Space Bar Button Item between them.
- Fixed: 42795: The "View | Focus Document" command is not functioning.
Known Issues
- 41727: Document Outline Pad - When document outline pad is floating and an item is clicked the caret is not moved to item on text editor.
Xamarin Studio 6.0.1
- Fixed: 41565: "Could not load solution:" and "Load operation failed." when attempting to open certain projects that use Components. This fix automatically deletes the general Android library or iOS library project type GUIDs from projects that also include more specific library subtype GUIDs such as Android bindings libraries or iOS extension projects. This is necessary because the new Project Model disallows certain redundant combinations of ProjectTypeGuids that were accepted in XS 5.x.
- Fixed: 41661: Subversion support is not available in the IDE on Mac if libsvn_client cannot be located. Xcode does not install this library by default. This fix provides a dialog that runs xcode-select --install to install the missing library.
Known Issues
- 41572: An error "The call is ambiguous between the following methods or properties 'ClassName.InitializeComponent()' and 'ClassName.InitializeComponent()'" is displayed in the Source Code Editor when editing .xaml.cs code-behind files.
Xamarin Studio 6.0.0
Content
- Roslyn integration
- New Project Model
- New visual style and Dark theme
- F# Enhancements
- 64bit build on Mac
- NuGet
- Android
- Android Designer
- iOS
- iOS Designer
- tvOS
- Mac
- Insights
- ASP.NET
- Text Editor
- Debugger
- Version Control
- Profiler
- Other Improvements and Bug Fixes
- Changes Since Last Preview
Roslyn integration
Xamarin Studio’s type system is now based on Roslyn, Microsoft’s open source .NET compiler platform. Even though this is an internal change, it has several practical benefits:
- The behavior of code completion is more accurate and will work much better when a file contains syntax or semantic errors.
- Refactoring operations cover more cases and are more reliable.
- C# 6.0 is now fully supported for code completion and refactoring operations.
- The formatting engine has been replaced and now defaults to the Visual Studio format. Indenting with spaces is easier and feels like tab-based indenting.
- Typing flow has been improved and is now more fluid.
- Import-symbol support in code completion is greatly improved - it no longer interferes with the typing flow and has no performance impact.
- The new formatting engine is now compatible with Visual Studio, but does not support custom formatting schemes from prior versions of Xamarin Studio because of the changed engine & option model.
Related Changes
The Quick Fix keyboard shortcut now requires Xamarin Studio > Preferences > Text Editor > Source Analysis > Enable source analysis of open files to be switched on:
The Resolve and Refactor context menu items for undefined symbols:
Have been combined into a single Fix item:
This new submenu is only available when Xamarin Studio > Preferences > Text Editor > Source Analysis > Enable source analysis of open files is switched on.
New Project Model
The Project Model is the service in charge of loading and building projects, and provides an API which is used by the rest of the IDE to manage information about projects. This release introduces a completely revamped project model which has a deeper integration with MSBuild, and which can handle projects that take advantage of advanced MSBuild features. Here are some features that are now supported:
- Conditional file and reference inclusion are now properly handled.
- Project files can import other projects, and properties will be properly loaded.
- Project files are evaluated before loading, so properties can be used in item definitions, and will be properly replaced.
- The extension model has changed, and will make it easier to implement add-ins that need to directly manipulate MSBuild files.
- Projects are built in parallel when possible. This should speed up the build time of solutions with many projects.
- Xamarin Studio is now much better at handling projects generated in Visual Studio, and will not make unnecessary changes to the project files when saving.
- Added support for different architectures of Mono so user can pick 32bit or 64bit version to start debugging.
- Support for MSBuild tools v14.
Known issues
41565: "Could not load solution:" and "Load operation failed." when attempting to open certain projects that use Components in Xamarin Studio 6.0. This issue also produces the error "System.InvalidOperationException: Already bound to project" in the Xamarin Studio log files. These errors occur because certain combinations of ProjectTypeGuids that were accepted in Xamarin Studio 5.x are no longer allowed by the new Project Model. Recommended fix: Open the problematic .csproj file in a text editor and remove the conflicting item from the <ProjectTypeGuids> element. For iOS extension projects, the most common fix is to remove the {FEACFBD2-3405-455C-9665-78FE426C6842}; item from the list.
Bug fixes
- Fixed: Project references don’t handle Condition attributes.
- Fixed: If a solution has two projects with the same name, project references may not work correctly.
- Fixed: Configurations defined in project imports are ignored.
- More fixes to avoid unnecessary changes when saving a project.
- Fixed: Project loading issue when the SolutionDir MSBuild property is used in an import.
- Fixed: Diagnostic build output sometimes omits some of the Task and Target ‘performance summary’ lines.
- Fixed: Code completion and File Search don’t take into account files added by Import declaration in project file.
- Fixed: Xamarin Studio does not recognise base class assignment in Forms’ XAML files.
- Fixed: Environment variables being removed from custom commands.
New visual style and Dark theme
In this release we are presenting the new look of Xamarin Studio. It has a more modern style, a dark variant and many visual tweaks to make Xamarin Studio more pleasant to use. Here are the highlights:
- 5727 new icons! (including light, dark and selection variants),
- new Windows toolbar and improved menu appearance,
- new dark IDE theme,
- new dark highlighting editor scheme,
- fine-tuned light IDE theme,
- appearance fixes to Code Completion and Code Hint popovers,
- fixes in New Project Dialog and Welcome page,
- the drop down button in the solution pad is gone – the context menu can be used instead.
The theme can be changed in the global preferences, Visual Style section.
Known issues: the new UI is still work in progress and we are aware of some issues that we plan to fix:
- Appearance of some UI elements is not finished yet.
F# Enhancements
Portable Class Libraries
We now support F# portable class library projects, and include a template to support them.
This was one of the most requested features to allow developers to write shared logic in F# and consume this from either F# or C# applications across multiple platforms.
Shared Projects
F# joins the world of shared projects. We are pioneering this support in Xamarin Studio as the equivalent functionality has not been implemented in Visual Studio yet.
These let you write common code that is referenced by a number of different application projects. The code is compiled as part of each referencing project and can include compiler directives to help incorporate platform-specific functionality into the shared code base.
Unlike Portable Class Library, shared projects do not have strict requirements of being a standalone library, nor force you to pick an API profile to be compiled against. This is just a convenient way of reusing and recompiling the same code in various projects at the same time.
Xamarin.Forms Templates
We also include a new Xamarin Forms template for F# projects. This solution includes a core project containing common code, an iOS project and an Android project. A test project is also included that can run tests against both platforms.
New Features
- Global F# symbol search, you can now search for F# symbols across all open solutions from the search bar at the top right.
- FAKE integration (preview). You can access FAKE build scripts by typing "FAKE" into the search bar. Any FAKE tasks from the build.fsx in the project root folder will be displayed in the search results and can be filtered further and run. FAKE output appears in a new pad.
- Document outline pad now available for F# files.
- Highlighting improvements – Mutable variables are now highlighted red to make them stand out. The color can be changed in the theme or disabled completely. The F# interactive pad also includes syntax highlighting.
- Code navigation improvements: Goto base type, Find Overloads, and Find derived types.
- Support for F# 4.
Enhancements
- Completion lists are now sorted and grouped by object hierarchy. The more relevant completions come to the top. This is similar to what we have been doing for C# already.
- Improved auto indenter – the auto-indenter now allows copy and pasting of code between different indentation levels.
- Tooltip improvements:
- Generic types display type constraints.
- Tooltips for keywords, active patterns and double ticked functions are now functional.
- All right-hand-side '>' symbols now align.
- Containing type (or module) and assembly for members is now displayed.
- Improved parse performance – semantic highlighting appears faster when a solution is first loaded and completions are now faster.
- Go to source code from the NUnit test runner.
- Go to declaration can now jump to C# code from F# code and vice versa.
- Find references shows both C# and F# references.
- Better syntax highlighting so that source code looks good before semantic information becomes available.
- Cmd (or ctrl) click working for go to definition. Identifiers that can be jumped to are underlined as you hover.
- Generated projects are compatible with more versions of Visual Studio.
- Aggressive intellisense so that there is no need to press ctrl+space. Can be switched off.
Bug fixes
- Fixed: Project folders in solution pad are collapsed after reordering files.
- Fixed: With the "Enable Source Analysis" option switched on, the margin always displayed 1 error.
- #commands fixed inside the F# Interactive panel.
- Prevent completion and tooltips working inside comments and docstrings.
- Remove display of empty tooltips on Android projects.
- NUnit test runner – fix tests that contain spaces in the name.
64bit build on Mac
This release of Xamarin Studio for Mac is a 64bit application, so you’ll be able to take advantage of all the memory of your system when working with large solutions.
NuGet
Improved support for pre-release NuGet packages
Updating to a later pre-release NuGet package is now supported in the Solution window. An individual pre-release NuGet package can be updated by right clicking and selecting Update. When all packages in a project or solution are updated then pre-release NuGet packages will be updated to later pre-release NuGet packages if they are available.
The "Show pre-release Packages" setting in Add Packages dialog will now be remembered on a per solution basis.
The "Check for package updates" setting will now check for updates to pre-release NuGet packages as well as stable packages.
Support for watchOS and tvOS
Xamarin.TVOS and Xamarin.WatchOS target frameworks are now supported in NuGet packages.
Package License Acceptance
A license acceptance dialog will now be displayed if a NuGet package requires a license to be accepted before it is installed. The NuGet package license can be viewed from the dialog by clicking View License hyperlink. If the license is declined then the NuGet package will not be installed.
Other bug fixes and improvements
- Support NuGet packages that use icons from local files.
- Fixed: Incorrect update count displayed after updating NuGet packages.
- Error dialog is now displayed if the NuGet.Config file cannot be read.
- Fixed: NuGet restore and update support for workspaces.
Android
Improved selection of Google Play Services
The option to add Google Play Services to a new project has been removed from the New Project Dialog and replaced with a new dialog that lists all the known Google Play Services NuGets that we have bindings for. This dialog is accessible from the Packages node in the Solution Pad and from the Project menu.
Support for Visual Studio Emulators
We have added support for enumerating and launching Visual Studio Android Emulators on Windows. Access to the various emulator managers (Google, XAP, VS) has been improved and it is now more consistent.
Bug Fixes
- Fixed: Xamarin Studio would hang with a specific version of the Xamarin Android Player.
- Improved: Minor updates to the project templates.
- Enhancement: The Android ResGen pad is now reused.
- Fixed: Starter Edition could not build a new project.
- Fixed: The name of a project would be incorrect for new Xamarin.Forms apps.
- Fixed: The designer would display an incorrect value after the project was created.
- Fixed: Xamarin Studio would not stop a deployment when the user pressed the stop button.
- Fixed: ProGuard and Dex options were not persisted.
- Fixed: "MultiDexMainDexList" was not included in the available list of build actions.
- Improved: When you attach a phone the device target will change to the phone if the current device target is a non-running emulator.
- Fixed: Issues with the SDK Tools preview that prevented Xamarin Studio from obtaining the AVD name.
- Improved: Build actions are updated when moving files in and out of asset and resource folders.
- Improved: Inform users that in order to publish an app to the Google Play Store they first need to manually upload an Ad Hoc build.
- Improved: Error messages when Archiving for Distribution.
- Fixed: Migration from Novell to Xamarin build targets would put the new import in the wrong place in the project file.
- Android project templates now target the latest Android framework that is installed on the machine.
- UITest project templates now use Xamarin.UITest 1.1.1 and Xamarin.TestCloudAgent 0.16.2.
- Fixed: Opening AndroidManifest.xml after creating a Forms app led to the manifest being marked as dirty.
- Fixed: Project properties removed on loading Android project.
Android Designer
Improvements
- Better error recovery with invalid XML.
- Improve user themes handling.
- Improve toolbox performance.
- Code folding and document outline support for XML layout editor.
Bugfixes
- Fix resource creation with embedded text editor.
- Fix autocompletion of some elements (e.g. RelativeLayout) and attributes.
- Fix layout outline appearance.
- Fix in-layout scrolling.
- Fix launching SDK with right permissions on Windows.
- Fix preferred theme not being re-used between solution opening.
iOS
Asset Editor
The assets folder hierarchy is no longer shown in the solution pad. Double-clicking on the asset folder opens the asset editor with a left-hand-side list of your different assets. The asset editor has several improvements:
- The asset editor now works better with smaller displays.
- Updated asset catalog file templates.
- Added “On Demand Resources Tags” text box to Image and Data assets allowing developers to tag their resources.
- Supports watchOS 2 and tvOS assets.
- Every asset type gets its own icon in the left hand side panel.
- The currently selected asset type is now shown on the top left of the editor.
Plist Editor
The plist editor received a global set of updates with the introduction of the dark theme. The controls look more native, sections have been updated to show a discrete line and all elements have been centered.
We removed the old UI for App Icons and Launch Images. If you were setting your assets in the plist editor you will see a migration button, offering you to move to Asset Catalogs. All those assets will then be automatically transferred and you will have the opportunity to add the ones required by recent versions of iOS.
New HttpClient stack selector
This controls which HttpClient implementation to use. The default continues to be an HttpClient that is powered by HttpWebRequest, while we can now optionally switch to an implementation that uses iOS’s native transports (NSUrlSession or CFNetwork depending on the OS). The upside is smaller binaries, and faster downloads, the downside is that it requires the event loop to be running for async operations to be executed.
New SSL/TLS implementation build option
This controls the SSL/TLS backend used by Xamarin.iOS, and switches between Apple’s TLS stack (default) present in Mac and iOS and Mono’s own TLS stack.
Note: the underlying value (Legacy) used when the Mono (TLS 1.0) option is selected isn't supported by older versions of Xamarin Studio (prior to 6.0). A csproj error will appear if you try to open a solution with a project that has this legacy value. In such a case, open your .csproj and delete <MtouchTlsProvider>Legacy</MtouchTlsProvider>.
On-Demand Resources
On-Demand Resources allow your application to download resources from the App Store as they are needed rather than downloading them as part of the initial install of your application.
In order to take advantage of this, you will need to tag your assets and/or bundle resources and then define which tags should be included in the initial install and the pre-fetch ordering for all remaining tags in your Project Options under the new tab, iOS On-Demand Resources. Enter the appropriate tags in each of the text fields separated by a comma (,).
To tag an asset, simply edit your asset catalogs as normal. Each asset will have an On-Demand Resource Tags text field allowing you to specify a list of tags to assign to the resource (once again, enter tags separated by a comma).
To tag a bundle resource, right-click on the bundle resource in the Solution tree and select Properties. In the Properties panel, locate the On-Demand Resource Tags text field and enter the tags separated by a comma.
For more information, please see Apple's On-Demand Resources Guide.
Native Frameworks
It is now possible to reference native iOS Framework directories in the “Native References” folder of an iOS project and have them be included in the compiled app bundle.
Known issues
Other improvements and bug fixes
- Only show iPhone devices that are paired with watches when a WatchKit App is set as startup project.
- The mtouch verbosity is now based on the msbuild verbosity level (Preferences > Projects > Build > Log Verbosity).
- [Publishing workflow] *.ipa file now contains On Demand Resources assets.
- [Binding Project] Renamed StructsAndEnums.cs to Structs.cs.
- Unit Test App is now compatible with larger iPhone screen sizes.
- We now prevent user from enabling iCloud if it’s not supported by the provisioning profile.
- Fixed: Perpetual ‘The app icon set "AppIcons" has an unassigned child’ warning.
- Fixed: Entitlements.plist Personal VPN key.
- Fixed: Sign and Distribute dialog needs columns auto-sized in width.
- Fixed: 4GB option is missing in data set view (new with Xcode 7.2).
- Fixed: Metal 3v1 option is missing in data set view (new with Xcode 7.2).
- Fixed: Action Extension template.
- Fixed: Sprite Atlas context menu options are sometimes not clickable.
- Fixed: Xcode integration generates "System.Single" instead of “float”.
- Fixed: iTunesArtwork is not copied to the IPA in an AdHoc build.
- Fixed: xcassets added via Interface Builder are incorrectly imported by Xamarin Studio.
- UITest project templates now use Xamarin.UITest 1.1.1 and Xamarin.TestCloudAgent 0.16.2.
iOS Designer
We’ve been profiling hard to help improve performance and reduce lag when interacting with the designer. We’ve also fixed many small issues in the property panel and design surface.
Bug Fixes
- Fixed several places where constraints would not render correctly, typically when they had a value of ‘0’ they would have their horizontal instead of vertical, or vice versa.
- The bottom layout guide bounds are displayed correctly in all cases now.
- Improved property panel reloading performance.
- Panning/scrolling in the design surface should feel smoother and more natural.
- When an item is deleted all related size class variations should be cleaned up too.
- Support for OTF fonts has been added.
- Fixed an issue which could cause ViewControllers to vanish from the surface when one of their properties changed. This typically happened when specific combinations of segues were set.
- Fixed a typo in the XML generated for OutletCollections.
tvOS
Introducing support for tvOS
When coupled with the Xamarin.iOS tvOS previews, it is now possible to create tvOS applications. We have included new project types, file templates and designer support for it. New in this release is support for binding projects for tvOS, which allow your projects to easily consume native libraries in tvOS, in the same way that you do for iOS, as well as support for the tvOS GameCenter features.
You can run your applications on both the tvOS simulator as well as a physical tvOS device.
Bitcode support for tvOS apps
Bitcode allows the App Store to re-optimize your app binary. Therefore you will not need to re-submit your app to the store to take advantage of new processor capabilities.
The project build option menu now provides a sub option to the LLVM optimizing compiler: the "Enable Bitcode" option. Bitcode is exclusively available when you enable LLVM and only for the release mode on device.
It is required to publish your tvOS app on the App Store.
Mac
New binding project support
Binding project support is now available for previews of Xamarin.Mac. This feature has been available for iOS projects for a long time, and we have finally brought it to the Mac.
Mac Entitlements
The Entitlements.plist editor has been completely rewritten to match the iOS Entitlements.plist editor. The Info.plist editor no longer contains the Mac Entitlements. New projects will include an Entitlements.plist by default.
The iCloud Entitlements options have been updated for CloudKit (instead of the obsolete iCloud entitlement keys).
In order to enable Entitlements in an existing project without an Entitlements.plist, the developer will need to create an Entitlements.plist and add it to the project. It is recommended that developers use the “Property List” file template located in File -> New -> File... dialog under the Mac section.
Other improvements and bug fixes
- New Metal project template.
- Fixed: Adding app icons to project does not reflect when running app.
Insights
- Improved: Better error handling from the API server.
- Improved: The Insights NuGet is added to the PCL project in a new Xamarin.Forms app.
- Fixed: The initialization call was missing from the Android project in a new Xamarin.Forms app.
- Fixed: Projects that did not have Insights enabled would still try to upload symbols when publishing the app.
ASP.NET
ASP.NET support has been greatly improved. There is a new project wizard for creating Web Forms and MVC projects and all templates have been reviewed and updated.
Text Editor
Navigational link support
If you hold the Command (on Mac) or Control (on Windows) key while moving the mouse over a symbol, the symbol is converted to a link that can be clicked to jump to its declaration.
Regular expression support improvements
Working with regular expressions is easier than ever in this release:
- To make regular expressions more readable, XS now uses different colors to highlight different parts of a regular expression.
- There is code completion support for "\" in regexes, which shows the regular expression escape sequences.
- Groups are inserted into the completion list.
- There is a new code action inside regex patterns: "Validate regular expression" that opens the regex inside the regex toolkit.
Other improvements and bug fixes
- The "Standard Header" panel in the options lists all the available variables.
- Fixed: Pasting sometimes fails to paste current clipboard contents.
- XML code formatting settings now work for .xaml files.
- Omit XML declaration setting now affects xml formatting.
- Fixed being unable to open .m and .h files in the source code editor.
This release does not include the VI mode.
NUnit 3
This release includes support for NUnit 3. There are no project templates for this NUnit version yet. If you want to use NUnit 3, just create a regular NUnit project and update the NuGet package to 3.0.
Debugger
- Fixed: The embedded exception display does not handle large content.
- Fixed: No locals and evaluation possible in statically constructed fields.
- Fixed: Debugger does not hit breakpoints on Windows.
- Fixed: Visualizer for strings removes underscore.
- Fixed: User is not able to inspect variables while debugging iOS application.
- Fixed: The "Locals" pad appends new items each time an item in the “Call Stack” is double-clicked.
- Fixed several issues with function breakpoints:
- Setting the breakpoint now works when parameter type has full type name with dots.
- It is now possible to set breakpoints on methods without debug information (the breakpoint is placed at ILOffset=0).
- Now setting on all overloads also works for unloaded types.
- Fixed issue when evaluating enum values.
- Fixed: Breakpoints inside anonymous method not working on iOS Device.
- Fixed: Highlighting of the current execution location now highlights the statement instead of the whole line (when ending line and column are available).
Version Control
Known Issues
- 41661: Subversion support is not available in the IDE on Mac if libsvn_client cannot be located. Xcode does not install this library by default. Recommended fix: Run xcode-select --install in a Terminal.app command prompt to ensure that the libsvn_client library is installed into the expected location.
Enhancements
- Blame view now has syntax highlighting and supports jumping to the parent revision of the selected revision.
- The Diff View no longer reports a line change when line endings differ.
- The Last selected path in Checkout/Select repository is now remembered.
- Git: Merging from a branch no longer tries to do a fast forward merge.
- Status View now automatically refreshes the status of files modified while the view was open.
- Fixed: Log command from the Blame view now focuses the commit in the log view.
- When cloning a repository, a solution file is now recursively searched for.
Fixes
- Fixed: Git credentials dialog no longer pops up multiple times.
- Fixed: Git Blame view would not account for local changes.
- Fixed: Blame view menus sometimes would not work.
- Fixed: Network operations would sometime lockup in Git.
- Fixed: URL parsing is more lax now for Git operations.
- Fixed: Probing for a Subversion repository could hang the IDE.
- Fixed: Subversion no longer blocks the editor from typing whenever an exception occurs.
- Fixed: The log of a file now includes the initial commit for Git.
- Fixed: Subversion now shows a proper error dialog for the native VC++ redist dependency.
Profiler
- Added: Support for profiling tvOS apps.
- Fixed: Some edge-case for iOS apps caused the Start Profiling command to be hidden.
Other Improvements and Bug Fixes
- High DPI displays on Windows are now much better supported.
- Fixed: Context menu options not working properly.
- Fixed: Copying build errors un-selects them.
- Fixed: "Stop" button doesn’t change into “Play” button after stop the application in release mode.
- Fixed: "Stop" button doesn’t change into “Play” when executing long running build.
- Fixed: Editing user tasks in tasks pad when sorted modifies wrong task.
- Show active configuration in project options dialog.
- Fixed several memory leaks.
- Fixed: Directory separator characters not handled when renaming files and folders.
- Fixed: Prevent language keyword being used as new project name.
- Fixed: Status bar message changing too often on installing NuGet packages.
- Fixed: Prevent emoji being used when creating a new project.
- Fixed: Allow # and % characters in file names when creating a new project.
- Fixed: Do not show search results from closed solution in global search results.
- Fixed: Generate events for a web reference.
- Fixed: Crash in global search popup window due to badly behaved third party add-ins.
- Fixed: MVC Views Configuration file template not being displayed.
- Fixed: Exception when using parameterized T4 templates.
- Fixed: Sorting by completed column in Task Pad.
- Fixed: Copy and paste removing original file from project.
- Fixed: Cut and paste in solution pad not updating project file.
- Fixed: Open assemblies in assembly browser by default.
- Fixed: Prompt to save files before opening New Project dialog.
- Fixed: Use same simulator for UITest and debugger if simulators are duplicated.
- Fixed: Forms .xaml files not being migrated from MSBuild:Compile to MSBuild:UpdateDesignTimeXaml.
- Enhancement: Improved responsiveness of the global search bar.
- Fixed: Property Pad now displays the right target’s information when using auto-hide.
- Fixed: Issues with downgrading to Starter after a trial has expired.
- Improved: The Next, Previous and Cancel buttons in the Publishing wizard are focusable on Windows.
- Fixed: Importing .mdpolicy files would sometimes fail.
- Fixed: Downloaded link is disappearing while coming back to Stable channel at ‘Check For Updates’ window.
- Fixed: Searchbar popup correctly positioned when using multiple monitors.
- Fixed: Updater no longer occasionally fails if user’s language isn’t English.
- Fixed: Menus in OS X are no longer completely disabled after closing dialogs.
- Added: New file template for adding an app.manifest file so Windows Desktop applications can be manifested for Windows 8.1+.
- Improved Korean translation.
- Fixed: UI glitch when re-sizing Archives Pad.
- Fixed: Execution targets now get correctly remembered per project.
- Fixed: A few focus issues with the Windows toolbar.
- Fixed: Annoying behaviour when focusing searchbar would select whole text when issuing Go to File.
- Fixed: Small delay on opening the Document Switcher the first time.
- Fixed: Searching in files will no longer open multiple search results pad when not needed.
- Fixed being unable to add an automated UI test project to a Multiplatform Single View App
- Fixed migration from MonoMac to XamMac not updating project type guids.
- Fixed being unable to select a folder with the open file dialog on Windows.
- Fixed being unable to remove a Xamarin component.
- Fixed component store window not being displayed.
- Fixed: Tree view tooltip is shown and then immediately hidden.
- Changed: Removed obsolete Edit - Word Count menu item.
- Fixed: Unable to read warning message - too long for screen.
- Fixed: Shared project, adding file and changing configuration results in invalid file list for iOS/Android project.
- Fixed: Updates to Xamarin Studio corrupt the .app on Mac.
- Fixed: Updater stuck on "Attaching disk image...".
- Fixed: [Error Pad] Inverted click when errors/warnings are sorted descending.
- Fixed: Submit to Test Cloud fails when app name contains a space.
Changes Since Last Preview
This is a list of fixes and improvements since the last preview
6.0.0.5166 (Release Candidate 3)
- Fixed: Xamarin iOS designer crashes when two buttons are in the right bar.
- Fixed: Hang while activating the Document Outline pad with iOS designer open.
- Fixed: Properties window goes blank when changing the Build Action from the Properties window.
- Fixed: Code completion: Shift+Enter on imported type doesn't insert full type name.
6.0.0.5156 (Release Candidate 2)
- Fixed: Rebuild Solution only cleans and does not actually rebuild the solution.
- Fixed: Unable to save selected ABIs in Release configuration for an Android project.
- Fixed: Project failed to load with error "Requested value AppleTLS wasn't found".
- Fixed: Duplication of Compiler Define Symbols.
- Fixed: Go to definition failure with Xamarin.Mac.
- Fixed: iOS Page-Based App template has warnings.
- Fixed: Cannot create new FSharp.fsx files in a blank Workspace in Xamarin Studio 6.0.
- Fixed: Getting error on building the Xamarin.Forms template projects when Standard Header is set in XS-> Preferences.
- Fixed: Unable to create Xamarin.Forms Templates project when only Xamarin.Android is installed on the system.
- Fixed: UITest templates need Xamarin.UITest updated.
- Fixed: Problem running xamarin-component.exe.
- Fixed: Search Result links don't display properly if code is in a collapsed region.
- Fixed: Unable to launch iOS sample on device in DEBUG mode; the application is uploaded to the device but the debugger gets disconnected.
- Fixed: On Demand Resource changes don't get reflected after changing the Build action.
- Fixed: Crash closing a window in XS.
- Fixed: NullReferenceException when loading android designer.
- Fixed: "and" keyword not syntax highlighted in F# projects.
- Fixed: The fix action "Using 'some namespace import'" sometimes fails with a null reference exception.
- Fixed: Contents of popup window of XS is not correct for long expired license.
- Fixed: Xamarin Android not installed for Xamarin Studio.
- Fixed: Renaming an AXML file did not update the csproj.
- Fixed: Error deploying/debug when using enterprise Certificates.
- Fixed: Can't enter comment on blank last line of file.
- Fixed: On creating Generic project, it is showing error in Solution pad.
- Fixed: Once in a while the text editor stops working.
- Fixed: Unable to exit in "Full Screen" mode on Windows XS.
- Fixed: Context menu on selected solution explorer node does not represent selected node.
- Fixed: Opening a new solution doesn't get Syntax Highlighting or Code completion.
- Fixed: [mdtool] System.AggregateException with build/archive commands.
- Fixed: Opening a solution crashed XS.
- Fixed: Incorrect number of method overloads shown in intellisense.
- Fixed: Unable to upload iOS project's .ipa to Test Cloud.
- Fixed: Selected User Interface Language doesn't take effect on restarting XS.
- Fixed: XS crashes when user clicks on Disconnect button under iOS device log.
- Fixed: Search list is not closing after removing the search text in search box.
- Fixed: Status bar shows updates available when there are none after updating.
- Fixed: User is not able to Search dll in Assembly Browser.
- Fixed: NullReferenceException when profiling Android projects.
- Fixed: User should not be able to change the Active Configuration of a running application in XS.
- Fixed: MSBuild Errors: Unknown MSBuild failure.
- Fixed: Unable to run tests for a project, NUnit3Runner crashes.
- Fixed: Holding down Ctrl key when opening recent solution closes existing solution in workspace on Windows.
- Fixed: Linker behavior should change to "Link Framework SDKs only" when "iPhone" is selected as the platform in XS.
- Fixed: Expired Android Trial blocks usage of non-expired iOS License with "Thank you for evaluating Xamarin Studio Enterprise".
- Fixed: Upon changing "Build Action" property grid gets reset.
- Fixed: Environment.CurrentDirectory not set correctly when running unit tests.
- Fixed: F# Addin Breaks Find References in C#.
- Fixed: Restoring NuGet packages does not update the type system.
- Fixed: Widget does not move correctly on increasing and decreasing value of X and widget disappears.
- Fixed: After changing Origin property for any widget when we again select that widget, its coordinates values continue changes and widget disappears.
Changes since 6.0.0.4968
- High DPI displays on Windows are now much better supported.
- Fixed: Shared project, adding file and changing configuration results in invalid file list for iOS/Android project.
- Fixed: Full screen mode turns toolbar white with white text.
- Fixed: Updates to Xamarin Studio corrupt the .app on Mac.
- Fixed: Solution Pad: Selected row should always have white text.
- Fixed: Updater stuck on "Attaching disk image...".
- Fixed: [Error Pad] Inverted click when errors/warnings are sorted descending.
- Fixed: Adding multiple Google Play Service packages leads to many of the packages not being installed even after License Acceptance.
- Fixed: Refactoring classes can break Code file.
- Fixed: Submit to Test Cloud fails when app name contains a space.
- Fixed: New F# Xamarin.Forms project using Shared library does not create the shared library.
- Fixed: Rename refactoring a method parameter causes subsequent document elements to be removed.
- Fixed: Rename refactoring is giving the wrong name.
- Fixed: The fix action "Using 'some namespace import'" sometimes fails with a null reference exception.
- Performance and memory use improvements.
Alpha 4
- Licensing
- Licensing now matches the licensing terms that were announced at Build 2016.
- Shell
- Fixed: Focus issue with the Toolbar when selecting build configurations.
- Fixed: Recent projects would be removed if they were not found.
- Fixed: Maximising and Minimising the Preferences dialog should not be possible.
- Improved: The Preferences panel remembers which panel was last used.
- Fixed: Error pad icons were sometimes duplicated.
- Fixed: When saving a file the encoding option was not visible.
- Fixed: Moving a folder between projects fails to add files to the new project.
- Fixed: Navigating to a type in the assembly browser could sometimes fail.
- Fixed: The assembly browser would not remember the member visibility setting.
- Improved: "Open with" is enabled for solution files.
- Fixed: Meta+W would not save and close a file.
- Fixed: Empty tags in project files would be reformatted when saving the project.
- Text Editor
- Fixed: The tool tip for an attributes constructor did not show the correct namespace.
- Fixed: No code completion in assembly attribute arguments.
- Fixed: Nested classes should not be offered "Rename to match class" refactoring.
- Fixed: Extract method no longer triggers an inline rename of the method.
- Fixed: The refactoring preview window showed empty content.
- Fixed: Incorrect code completion for properties.
- Fixed: An issue where XS could crash whilst editing a file.
- Fixed: Matching braces highlighting could not be disabled.
- Fixed: Operator methods are not shown correctly in the breadcrumb.
- Fixed: Rendering issue with breakpoint and warning icons.
- Fixed: Autocomplete shows different list before and after typing "(".
- Fixed: Code Foldings can hang Xamarin Studio.
- Fixed: Code completion would sometimes error when completing statements.
- Debugger
- Fixed: Live updating icon was not synchronised with Breakpoint pad.
- NuGet
- Fixed: Issues when adding NuGet packages with .props files.
- Fixed: Issue that would cause assemblies to not be located immediately after restoring a package.
- Fixed: Uninstalling and reinstalling did not add back imports.
- Fixed: Licenses could not always be accepted when installing multiple Google Play Service NuGets.
- Fixed: It was not possible to add certain Google Play Service NuGets to a project.
- iOS
- Fixed: Crash when editing Attributed text.
- Fixed: Tap Gesture Recognizer was missing an Action in the designer.
- Fixed: Image slicing info removed from Contents.json when editing and saving an Image Set that had slicing info from XCode.
- Fixed: An issue where storyboards could not be opened.
- Fixed: On Demand Resource tags could be set for images with a build action of None.
- Fixed: "Application already installed." when uploading a rebuilt app to AppleTV.
- Mac
- Updated: F# template now uses Storyboards.
- Fixed: The template for mac applications was missing a Register attribute.
- Fixed: Error when creating a new Mac app project.
- Android
- Fixed: Sometimes the wrong app name was written to the AndroidManifest when creating a new project.
- Fixed: Deploying an app to device would fail if the package name contained a hyphen.
- Fixed: Third party tooltip providers would not work.
- Fixed: Rendering issue in Add Google Play Service dialog for packages that were already added to the project.
- UITest
- Fixed: An issue that would cause UITest to fail to connect.
- Other
- Fixed: Column size of provisioning profiles when publishing an app is too small.
- Fixed: Dark theme makes the publishing wizard unusable.
- Fixed: "Create a project within the solution directory." checkbox in the New Project dialog did the opposite.
- Fixed: Rendering issues with the Error pad focus icon.
- Fixed: Xamarin Studio would throw an exception on a targets file with a particular form of Condition.
- Fixed: A project might display no files when files exist in the project.
- Fixed: Xamarin Studio could crash on startup.
Alpha 3
- Projects
- Fixed: XS saves annoying WarningLevel 0 tags in project files.
- NuGet
- New dialog for accepting NuGet licences.
- Editor
- Fixed: Event handler completion list sorting broken.
- Fixed: Refactor Initialize Field from Parameter is generating wrong code.
- Fixed: Code diagnostic suppressions doesn't work on compiler warnings.
- Fixed: Search and replace not always working.
- Fixed: Unable to read warning message - too long for screen.
- Fixed some caret line marker rendering issues.
- Fixed: Code completion should be case sensitive.
- Removed use of italics on editor themes.
- Shell
- Fixed: Global search can't be stopped.
- iOS
- Fixed: Add / Add Native Reference does nothing.
- Fixed: Device specific builds should use last known device.
- Fixed: Device list does not show names of simulators as they are named in XCode.
Alpha 2 (6.0.0.4761)
- NuGet
- Error dialog is now displayed if the NuGet.Config file cannot be read.
- Fixed: NuGet restore and update support for workspaces.
- NUnit
- Fixed: NUnit3 runner does not respect ExplicitAttribute.
- Fixed: [NUnit 3] When a test ends up being in an inconclusive state, it is still shown as not run in the Test Pad.
- Projects
- Fixed: Switching XM target framework from Mobile to 4.5 with a PCL reference leaves project as unloadable.
- Fixed: Unable to open project when ProjectTypeGuids is not specified in the first property group.
- Shell
- Fixed: Tree view tooltip is shown and then immediately hidden.
- Fixed: Translation project not available in New Project dialog due to missing icon.
- Fixed: Color for modified value in Watch pad is too light.
- Fixed: Toolbox being empty when the GTK# designer is open.
- Removed obsolete Edit - Word Count menu item.
- Fixed: Environment variables being removed from custom commands.
- Fixed: Warning dialog shown on opening unsupported projects such as Windows Phone.
- Fixed: Rendering glitches in toolbar when using dark theme.
- Many other icon and visual tweaks.
- Android
- Fixed: XS is writing some empty properties to debug and release property groups in android projects.
- Fixed: Android App template with Maximum Compatibility has build warnings.
- iOS
- Fixed: Invalid Xcode version leads to an error when trying to open the plist editor.
- Fixed: XS attempts to migrate Launch images resources that are not part of the project, and that fails.
- Fixed error installing on iPad and iPad mini.
- Fixed error in signing app, and icloud entitlement for production/app store.
- Fixed: [On demand resources] Incorrect app bundle structure on embedding resources to app bundle.
- Fixed: For Universal apps, deployment info in the Info.plist file does not get updated for iPad.
- Mac
- Added support for Mac Entitlements
- F#
- New: Go to source code from the NUnit test runner.
- New: Go to declaration can now jump to C# code from F# code and vice versa.
- New: Find references shows both C# and F# references.
- Improved: Better syntax highlighting so that source code looks good before semantic information becomes available.
- New: Cmd (or ctrl) click working for go to definition. Identifiers that can be jumped to are underlined as you hover.
- Fixed: Unable to create F# forms project on Windows.
- Improved: Generated projects are compatible with more versions of Visual Studio.
- Aggressive intellisense so there is no need to press ctrl+space. Can be switched off.
- Version Control
- Status View now automatically refreshes file status.
- When cloning a repository, a solution file is now recursively searched for.
- Fixed: The log of a file now includes the initial commit for Git.
- Fixed: Subversion now shows a proper error dialog for the native VC++ redist dependency.
- Xml Editor
- Auto insertion of matching quotes added an extra quote when it should not have.
- Code completion insertion duplicated existing text when a dot or colon was in the suggestion.
- Other
- watchOS 2 support has been removed since it is not yet fully supported by Xamarin.iOS.
Alpha 1 (6.0.0.4520)
- Fixed: XS 6 Preview does not build project when there are changes
- Fixed: Code Analysis does not work
- Fixed: When a project is reloaded due to changes in the underlying .csproj file, the project can't be executed anymore.
- Fixed: web references in a project can't be updated.
- Fixed: internal error when closing a solution just after opening it.
Preview 4 (6.0.0.3668)
- Text Editor
- Fixed: Typing "<" after IEnumerable deletes the text in front of it.
- Fixed: Missing semicolons are not indicated as error in the editor.
- Fixed: Writing a comment triggers code completion.
- Fixed: Pressing ‘undo’ results in corrupt code.
- Fixed: Go to Definition uses wrong file path in some cases (code-behind file).
- Fixed: Toggle line comment does not comment selected XML.
- Fixed: Format document command disabled for xml files.
- Fixed: Conflict when the same type is defined by a Shared Project and an iOS Extension.
- Fixed several type resolution issues when using shared projects.
- Projects
- Added support for C# 6 on Windows.
- Fixed: "Enable Code Analysis on Build" does not work.
- Fixed: Fixed references not being saved to project file when updating a NuGet package.
- Fixed: Unable to load or create translation projects.
- Fixed: Project and solution policies not being saved.
- Fixed: New PCL project added to solution not having any references.
- Fixed: User preferences not loaded for workspace.
- Fixed: Exception when adding a project to an empty solution.
- Fixed: New files cannot be added to unsupported projects.
- Fixed: Error building generic project.
- Fixed: Hang when adding new extension to an iOS project.
- Fixed: Unable to use Run with Mono Heapshot.
- Fixed: Interface definitions being removed when migration projects from iOS Classic to Unified.
- Fixed: References and .xib files being removed when migrating projects to Xamarin.Mac.
- Fixed: Migration dialog not shown on opening MonoMac project.
- Fixed: Add / remove references may fail in some cases.
- Fixed: App does not deploy after cleaning project.
- Fixed: IDE hangs when building an application after adding code to Razor template.
- Shell
- Fixed crash in Find in Files when the solution has disabled projects.
- Fixed: After fatal error occurs, the red icon in the status bar opens as a giant Alert box.
- Fixed: No progress information in status bar when loading large solution.
Preview 3 (6.0.0.1752)
- Fixed: Quick Fix no longer available.
- Fixed: Auto insert matching brace doesn’t work.
- Fixed: Matching brace highlighted when cursor is not next to a brace.
- Fixed: String formatting tooltip shown when it shouldn’t.
- Fixed: Code templates are not expanded if there isn’t a whitespace before the cursor.
- Fixed: Duplicate Find References results.
- Fixed: Closing a solution before it has fully loaded throws a bunch of exceptions.
- Fixed: Can’t goto declaration on items in code.
- Fixed: NuGet package license warning not shown in status bar.
- Fixed: Export command does not export project.
- Fixed: Items with multiple files in the include are split and duplicated (incorrectly) when saving project.
- Fixed: XS makes unnecessary changes to project.
- Fixed: Error while getting referenced assemblies – Collection was modified.
- Fixed: Can’t load certain project types that worked previously.
- Fixed: F# project will not open with message "UNC PAths should be in the form \Server\share".
- Fixed: Git doesn’t appear to be working (blame, changes, log, etc).
- Fixed: Warnings and Errors are not cleared from the status bar when the solution / workspace is closed.
- Fixed: Global search entry not focused when pressing CMD+dot.
- Fixed: Error logged when opening file from shared project.
- Fixed: Error opening assembly as solution.
- Fixed: Project folder is not created under solution while creating Generic Project.
- Fixed: Invalid warnings when archiving source files.
- Fixed: Error column appears off-by-one.
- Fixed: When entering string content as argument, one in a row, matching bracket is shown on next time.
- Fixed: Tooltip says "Parser error" for type resolution errors and warnings.
- Fixed: Assembly browser doesn’t show any names in TreeView.
- Fixed: Fold markers are invisible in assembly browser.
- Fixed: Unhandled text editor error when closing a very large file.
- Fixed: Holding down Command+Z Batches Undo.
- Fixed: Source tooltips do not always get removed.
- Fixed: Document Path / Jump list rendering issue.
- Fixed: Opening a workspace with multiple sln’s in it results in no intellisense, syntax highlighting etc.
- Fixed: Issues indicators get removed from side bar when showing namespace references.
- Fixed: Prevented the F# compiler from parsing multiple times.
- Fixed: F# compiler issue where parse requests were not getting cancelled properly.
- Fixed: F# debugger expressions now evaluate properties on bindings.
- Fixed: F# Parameter tooltips now use semantic highlighting.
Preview 2 (6.0.0.1449)
- Fixed: Checkout from Version Control causes XS to crash.
- Fixed: Cannot unload project after unloading and reloading solution on XS 6.0.
- Fixed: Resx designer file not created when new resx file added to project.
- Fixed: Build errors after adding new resx file to project.
- Fixed: Error popup is displayed while Create "Packaging Project" from Other >> Miscellaneous section.
- Fixed: MonoMac project produces an incomplete app.
- Fixed: Renaming a file in the solution pad doesn’t modify the project file.
- [Subversion] Fixed regression with top menu commands using single file.
- Fixed: [Git] Commits reordered when rebasing.
- Fixed: Auto complete / resolve / types completely hosed on new XM due to corrupted cache.
- Fixed: Parameters tooltip not showing all overloads.
- Fixed: Confusing search-in-selection behavior.
- Fixed: Can not modify the built in "#region" Code Template permanently.
- Fixed: Default ordering of search results is incorrect.
- Fixed: Control+Alt+Space doesn’t resolve types.
- Fixed: document navigator does not let me choose the types when there are multiple types.
- Fixed: Smart backspace should only work for C# files.
- Fixed: Unable to open Android .axml file in text editor.
- Fixed: Getting build error on Xam.Mac classic template.
- F# compiler parsing speed improvements.
The dictionary is one of Python's fundamental built-in data types.
Creating Dictionaries
A dictionary is a collection of key:value pairs. In Python, these key:value pairs are enclosed inside of curly braces with commas between each pair. Creating dictionaries in Python is really easy. Here are three ways to create a dictionary:
>>> my_dict = {}
>>> my_other_dict = dict()
>>> my_other_other_dict = {1: 'one', 2: 'two', 3: 'three'}
In the first example, we create an empty dictionary by just assigning our variable to a pair of empty curly braces. You can also create a dictionary object by calling Python’s built-in dict() keyword. I have seen some people mention that calling dict() is slightly slower than just doing the assignment operator. The last example shows how to create a dictionary with some predefined key:value pairs. You can have dictionaries that contain mappings of various types, including mapping to functions or objects. You can also nest dictionaries and lists inside your dictionaries!
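The nesting mentioned above is worth a quick sketch (the names here are purely illustrative): a dictionary value can be another dictionary, a list, or even a function.

```python
# Values can be any object: nested dicts, lists, even functions.
nested = {
    'numbers': {1: 'one', 2: 'two'},   # a dict nested inside a dict
    'primes': [2, 3, 5, 7],            # a list as a value
    'shout': str.upper,                # a function as a value
}

print(nested['numbers'][2])      # two
print(nested['primes'][0])       # 2
print(nested['shout']('hello'))  # HELLO
```

Indexing simply chains: the first set of brackets looks up the value in the outer dictionary, and any further brackets (or a call) operate on that value.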
Accessing Dictionary Values
Accessing the values held in a dictionary is quite simple. All you need to do is pass a key to your dict inside of square brackets. Let's take a look at an example:
>>> my_other_other_dict = {1: 'one', 2: 'two', 3: 'three'}
>>> my_other_other_dict[1]
'one'
What happens if you ask the dictionary for a key that doesn’t exist? You will receive a KeyError, like so:
>>> my_other_other_dict[4]
Traceback (most recent call last):
  Python Shell, prompt 5, line 1
KeyError: 4
This error is telling us that there is no key called “4” in the dictionary. If you want to avoid this error you can use the dict’s get() method:
>>> my_other_other_dict.get(4, None)
None
The get() method will ask the dictionary if it contains the specified key (i.e. 4) and if it doesn’t, you can specify what value to return. In this example, we return a None type if the key does not exist.
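The default does not have to be None; any value works. Here is a small sketch (the inventory dict is hypothetical) that returns 0 for missing keys, which is handy for counting:

```python
inventory = {'apples': 3, 'pears': 0}

# The second argument to get() is returned when the key is missing.
print(inventory.get('apples', 0))  # 3
print(inventory.get('plums', 0))   # 0 -- no KeyError raised
```

Note that get() never modifies the dictionary: asking for 'plums' does not add it as a key.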
You can also use Python’s in operator to check if a dictionary contains a key as well:
>>> key = 4
>>> if key in my_other_other_dict:
...     print('Key ({}) found'.format(key))
... else:
...     print('Key ({}) NOT found!'.format(key))
Key (4) NOT found!
This will check if the key, 4, is in the dictionary and print the appropriate response. In Python 2, the dictionary also had a has_key() method that you could use in addition to using the in operator. However, has_key() was removed in Python 3.
Updating Keys
As you’ve probably already guessed, updating the value that a key is pointing to is extremely easy. Here’s how:
>>> my_dict = {} >>> my_dict[1] = 'one' >>> my_dict[1] 'one' >>> my_dict[1] = 'something else' >>> my_dict[1] 'something else'
Here we create an empty dictionary instance and then add one element to the dictionary. Then we point that key, which is the integer 1 (one) in this case, to another string value.
Removing Keys
There are two ways to remove key:value pairs from a dictionary. The first that we will cover is the dictionary’s pop() method. Pop will check if the key is in the dictionary and remove it if it is there. If the key is not in there, you will receive a KeyError. You can actually suppress the KeyError by passing in a second argument, which is the default return value.
Let’s take a look at a couple of examples:
>>> my_dict = {} >>> my_dict[1] = 'something else' >>> my_dict.pop(1, None) 'something else' >>> my_dict.pop(2) Traceback (most recent call last): Python Shell, prompt 15, line 1 KeyError: 2
Here we create a dictionary and add an entry. Then we remove that same entry using the pop() method. You will note that we also set the default to None so that if the key did not exist, the pop method would return None. In the first case, the key did exist, so it returned the value of the item it removed or popped.
The second example demonstrates what happens when you attempt to call pop() on a key that is not in the dictionary.
The other way to remove items from dictionaries is to use Python’s built-in del:
>>> my_dict = {1: 'one', 2: 'two', 3: 'three'} >>> del my_dict[1] >>> my_dict >>> {2: 'two', 3: 'three'}
This will delete the specified key:value pair from the dictionary. If the key isn’t in the dictionary, you will receive a KeyError. This is why I actually recommend the pop() method since you don’t need a try/except wrapping pop() as long as you supply a default.
Iterating
The Python dictionary allows the programmer to iterate over its keys using a simple for loop. Let’s take a look:
>>> my_dict = {1: 'one', 2: 'two', 3: 'three'} >>> for key in my_dict: print(key) 1 2 3
Just a quick reminder: Python dictionaries are unordered, so you might not get the same result when you run this code. One thing I think needs mentioning at this point is that Python 3 changed things up a bit when it comes to dictionaries. In Python 2, you could call the dictionary’s keys() and values() methods to return Python lists of keys and values respectively:
# Python 2 >>> my_dict = {1: 'one', 2: 'two', 3: 'three'} >>> my_dict.keys() [1, 2, 3] >>> my_dict.values() ['one', 'two', 'three'] >>> my_dict.items() [(1, 'one'), (2, 'two'), (3, 'three')]
But in Python 3, you will get views returned:
# Python 3 >>> my_dict = {1: 'one', 2: 'two', 3: 'three'} >>> my_dict.keys() >>> dict_keys([1, 2, 3]) >>> my_dict.values() >>> dict_values(['one', 'two', 'three']) >>> my_dict.items() dict_items([(1, 'one'), (2, 'two'), (3, 'three')])
In either version of Python, you can still iterate over the result:
for item in my_dict.values(): print(item) one two three
The reason is that both lists and views are iterable. Just remember that views are not indexable, so you won’t be able to do something like this in Python 3:
>>> my_dict.values()[1]
This will raise a TypeError.
Python has a lovely library called collections that contains some neat subclasses of the dictionary. We will be looking at the defaultdict and the OrderDict in the next two sections.
Default Dictionaries
There is a really handy library called collections that has a defaultdict module in it. The defaultdict will accept a type as its first argument or default to None. The argument we pass in becomes a factory and is used to create the values of the dictionary. Let’s take a look at a simple example:
from collections import defaultdict sentence = "The red for jumped over the fence and ran to the zoo" words = sentence.split(' ') d = defaultdict(int) for word in words: d[word] += 1 print(d)
In this code, we pass the defaultdict an int. This allows us to count the words of a sentence in this case. Here’s the output of the code above:
defaultdict(<type 'int'>, {'and': 1, 'fence': 1, 'for': 1, 'ran': 1, 'jumped': 1, 'over': 1, 'zoo': 1, 'to': 1, 'The': 1, 'the': 2, 'red': 1})
As you can see, each word was found only once except for the string “the.” You will note that it is case-sensitive as “The” was only found once. We could probably make this code a bit better if we had changed the case of the strings to lower case.
Ordered Dictionaries
The collections library also lets you create dictionaries that remember their order of insertion. This is known as the OrderedDict. Let’s take a look at an example from one of my previous articles:
>>> from collections import OrderedDict >>> d = {'banana': 3, 'apple':4, 'pear': 1, 'orange': 2} >>> new_d = OrderedDict(sorted(d.items())) >>> new_d OrderedDict([('apple', 4), ('banana', 3), ('orange', 2), ('pear', 1)]) >>> for key in new_d: ... print (key, new_d[key]) ... apple 4 banana 3 orange 2 pear 1
Here we create a regular dict, sort it and pass that to our OrderedDict. Then we iterate over our OrderedDict and print it out. You will note that it prints out in alphabetical order because that is how we inserted the data. This is something you likely wouldn’t see if you just iterated over the original dictionary.
There is one other dictionary subclass in the collections module called the Counter that we won’t be covering here. I encourage you to check that out on your own.
Wrapping Up
We’ve covered a lot of ground in this article. You should now know basically all you need to know about using dictionaries in Python. You have learned several methods of creating dictionaries, adding to them, updating their values, removing keys, and even some of the alternate subclasses of the dictionary. I hope you’ve found this useful and that you will find many great uses for dictionaries in your own code soon!
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/python-101-all-about-dictionaries?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev | CC-MAIN-2017-51 | refinedweb | 1,451 | 71.04 |
This action might not be possible to undo. Are you sure you want to continue?
James Turner, Kevin Bedell
Adapted from: Struts Kick Start
2003-09 1
Agenda
What is Struts? A Brief Review of MVC A Look at a Simple Struts Example The Struts Tag Library Advanced Features of Struts
2003-09
2
Who Are We? - James
Director of Software Development, Benefit Systems Inc. 23 years of development experience. Author of JSP & MySQL Web Applications, Struts Kick Start, JSF Kick Start (Forthcoming, Fall 2003) Committer status on Apache Commons and Apache Struts.
2003-09
3
Who Are We? - Kevin
E-Business Architect, Sun Life Financial. BS Engineering, MBA. Microsoft MCSE Sun Certified Java Programmer Co-Author, "Struts Kick Start", SAMS Publishing Co-Author, "Axis: The Definitive Guide", O'Reilly Contributing Editor, Linux Business & Technology
4
2003-09
What is Struts?
Struts implements the Model-ViewController design pattern for JSP. Pioneered by Craig McClanahan of Sun. Maintained as part of the Apache Jakarta project. Currently in final betas for a 1.1 release.
2003-09
5
The Components of Struts
Struts consists of two main components:
1.
2.
The servlet processing portion, which is responsible for maintaining control of user sessions and data, and managing workflow. The tag libraries, which are meant to reduce or eliminate the use of Java scriptlets on the JSP page.
2003-09
6
The Power of Struts
Of the two, the tag libraries are the most visible part, but also the least important part of Struts (at least in the long run.) Many of the tag libraries are already obsolete if you can take advantage of JSTL (i.e., you have a Servlet 2.3 container) Most of the rest will be obsoleted by JSF.
2003-09
7
The Power of Struts (MVC)
The real power of Struts comes from the MVC design pattern that is implemented by the request processor. To understand why this is such a powerful tool, you first need to be familiar with MVC
2003-09
8
The MVC Pattern
The MVC (or model 2) design pattern is intended to separate out the everchanging presentation layer (the JSP page) from the web application control flow (what page leads to what) and the underlying business logic.
2003-09
9
The Three Pieces of MVC
The Model – The actual business data and logic, such as an class representing users in the database and routines to read and write those users. The View – A separate class which represents data as submitted or presented to the user on the JSP page. The Controller – The ringmaster who decides what the next place to take the user is, based on the results of processing the current request.
10
2003-09
MVC and Struts
The best way to learn both MVC and Struts is to see it in operation in a simple application. Let’s look at a basic user registration form implemented using Struts. This walk-through skips all the basic configuration steps and focuses on features.
See Struts Kick Start for all the gory details of setting up Struts.
2003-09
11
The Model
The model isn’t part of Struts per se, it’s the actual back-end business objects and logic that Struts exposes to the user. In this case, we’ll write a very simple bean that implements a User object. Note: All example code omits imports for brevity.
2003-09
12
package strutsdemo; public class User { private String username = null; private String password = null; public String getUsername () { return username; } public void setUsername(String name) { username = name; } public String getPassword () { return password; } public void setPassword(String pw) { password = pw; } public static User findUser(String username) { User u = new User(); u.setUsername(username); u.setPassword(“test”); // dummy implementation return u; } }
2003-09
13
The Model (cont)
No rocket science so far, the Model (in this case) is a simple bean with a dummied-up load method that just creates a test user. The key point of MVC is that the model doesn’t get directly exposed to the view (the JSP page in Struts.) So how is data passed back and forth to the user?
2003-09
14
Enter the View
The view is an placeholder object used in association with the JSP pages, and which holds a temporary copy of the model. Why do this? Well, suppose you are editing a model object, and the form submission fails during validation. If the model = the view, the model is now in an inconsistent state and the original values are lost.
2003-09
15
Defining the View
With Struts 1.1, there are now two different ways to define a view object (known as an ActionForm)
You can create a bean manually, with manual form validation. Or you can use DynaBeans in combination with the Validator Framework.
2003-09
16
A Manual ActionForm
package strutsdemo.struts.forms; public class UserForm extends ActionForm { private String username = null; private String password = null; public String getUsername () { return username; } public void setUsername(String name) { username = name; } public String getPassword () { return password; } public void setPassword(String pw) { password = pw; }
2003-09
17
A Manual ActionForm (cont)
public ActionErrors validate(ActionMapping map, HttpServletRequest req) { ActionErrors errors = new ActionErrors(); if ((username == null) || (username.size() == 0)) { errors.add(“username”, new ActionError(“login.user.required”)); } if ((password == null) || (password.size() == 0)) { errors.add(“password”, new ActionError(“login.passwd.required”)); } return errors; } }
2003-09
18
Key Points for ActionForms
In general, all fields should be Strings. This preserves the contents of the field if it doesn’t match against the required type
so if you type in “1OO” (rather than 100) for a field that needs to be a number, the field contents will be preserved when it returns to the input form.
2003-09
19
More on ActionForms
If an ActionForm doesn’t implement validate, the default validate (which does nothing) is run. If an ActionForm overrides validate, it controls whether a form passes validation or not. If the ActionErrors object returned has size > 0, control is returned to the input page.
2003-09
20
A Common ActionForm Gotcha
Let’s say you have a boolean property called isJavaGeek tied to a checkbox on a page. You submit the form with the checkbox checked. Then you hit the back button (or it returns to the page because of a validation error), and you uncheck the box.
2003-09
21
A Common ActionForm Gotcha
The problem: Because by the HTML standard, unchecked checkboxes don’t get placed on the request, the form object will not get the new value because the reflection will never occur to change the value of isJavaGeek The solution: Implement the reset() method on the ActionForm.
2003-09
22
Using Struts on the JSP Page
Let’s take a look at the input form that supplies our newly created ActionForm with values Struts uses the struts-html taglib to make interacting with the view easy.
2003-09
23
login.jsp
<%@ page language=“java” %> <%@ taglib uri=“/WEB-INF/struts-html.tld” prefix=“html” %> <head><title>Log in Please</title></head> <h1>Log In Please</h1> <html:form action=“/login”> <html:errors property=“username”/><BR> Username: <html:text property=“username”/><BR> <html:errors property=“password”/><BR> Username: <html:password property=“password”/> <html:submit/> </html:form>
2003-09
24
Controlling Flow with Actions
The actual processing of forms occurs in Actions. The action is the link between the view and the backend business logic in the model. The Action is also responsible for determining the next step in the pageflow.
2003-09
25
A Simple Action Class
package strutsdemo.struts.actions; public class LoginUserAction extends Action { public ActionForward execute(ActionMapping mapping, ActionForm form, HttpServletRequest req, HttpServletResponse resp) { UserForm uf = (UserForm) form; User u = User.findUser(uf.getUsername()); ActionErrors errors = new ActionErrors(); if (u == null) { errors.add(“username”, new ActionError(“login.username.notfound”)); } else if (!uf.getPassword().equals(u.getPassword()) { errors.add(“password”, new ActionError(“login.password.invalid”)); }
2003-09
26
A Simple Action Class (cont)
if (errors.size() > 0) { saveErrors(request, errors); return mapping.getInputForward(); } request.getSession().setAttribute(“currentUser”, u); return mapping.findForward(“success”); } }
2003-09
27
Fun Things to do with Actions
If the Action returns null, no further processing occurs afterwards. This means you can use an Action to implement pure servlet technology, like generating a CSV file or a JPG. You can chain together Actions using the configuration file, useful for instantiating multiple ActionForms.
2003-09
28
A Note About Validation
Because the ActionForm shouldn’t contain business logic, the Action may need to do some validations (such as username/password checking) Since the Action isn’t called until the ActionForm validates correctly, you can end up getting new errors at the end of the process.
2003-09
29
Tying it All Together
So far, you’ve seen all the components that come together to form a Struts request cycle, except… The piece that ties all the disparate pieces together. In Struts, this is the struts-config.xml file.
2003-09
30
A Simple Example of the Config
<struts-config> <form-beans> <form-bean name=“userForm” type=“strutsdemo.struts.forms.UserForm”/> </form-beans> <action-mappings> <action path=“/login” name=“userForm” scope=“request” validate=“true” type=“strutsdemo.struts.actions.LoginUserAction” input=“/login.jsp”> <forward name=“success” path=“/mainMenu.jsp”/> </action> </action-mappings> </struts-config>
2003-09
31
Things to Notice in the Config
For space reasons, the XML header was ommitted. Form-beans define a name that Struts uses to access an ActionForm. Actions define:
What URL path the action is associated with. What JSP page provides the input. What JSP pages can serve as targets to the Action. What Action is used to process the request. Whether the form should be validated.
32
2003-09
Built-in Security
Because all JSP pages are reached via calls to Actions, they end up with URLs like “/login.do” The end-user never sees the actual URL of the underlying JSP page. You can place access control in your Actions, avoiding having to put checks on all your JSP pages. You can also use container-based security to control access via roles directly in the config.
2003-09
33
The Struts Tag Libraries
With the exception of the HTML and TILES libraries, they have all been superceded by JSTL. However, if you can’t move to a Servlet 2.3 container, they offer a lot of the power of JSTL.
2003-09
34
Examples of Struts Tags vs JSTL
Struts:
<logic:iterate id=“person” name=“people”> <logic:empty name=“person” property=“height”> <bean:write name=“person” property=“name”/> has no height<BR> </logic:empty> </logic:iterate>
JSTL:
<c:forEach var=“person” items=“${people}”> <c:if test=“${empty person.height}”> <c:out value=“${person.name}”/> has no height<BR> </c:if> </c:forEach>
2003-09
35
The Struts Tag Libraries
Logic – Conditional Display, Iteration Bean – Data Instantiation and Access Html – Forms and Links Nested – Access to Properties of Beans Tiles – Structured Layout of Pages
2003-09
36
Advanced Tricks with Struts
DynaForms allow you to avoid writing ActionForms alltogther.
2003-09
37
<form-bean name=“userForm” type=“org.apache.struts.actions.DynaAction Form”> <form-property name=“username” type=“java.lang.String”/> <form-property name=“password” type=“java.lang.String”/> </form-bean> -------------------------------DynaActionForm uf = (DynaActionForm) form; String userName = (String)uf.get(“username”);
2003-09
38
How to Validate DynaForms
Since you don’t define DynaForms as explicit classes, how do you do validation? Answer 1: Extend the DynaActionForm class and write validate() methods. Answer 2: Using the Struts Validator Framework.
Based on the Commons Validator package. Uses an XML file to describe validations to be applied to form fields.
2003-09
39
The Validator Framework
Predefined validations include:
Valid number: float, int Is a Credit Card number Length Checks Blank/NotBlank Regular Expression Matches Plus the all-purpose cross-field dependency: requiredif
2003-09
40
<form name=“medicalHistoryForm"> <field property=“lastCheckup" depends=“required"> <arg0 key=" medicalHistoryForm.checkup.label"/> </field> <field property=“weight" depends=“required,float"> <arg0 key=" medicalHistoryForm.weight.label"/> </field> </form>
2003-09
41
Validwhen appears in Struts 1.2
<form name=“medicalHistoryForm"> <field property=“lastMamogram" depends="validwhen"> <arg0 key="dependentlistForm.firstName.label"/> <var> <var-name>test</var-name> <var-value>((gender=“M”) OR (*this* != null)) </var-value> </var> </field> </form>
2003-09
42
Summary
Struts 1.1 is about to be released. Supported by all major IDEs (Eclipse, IdeaJ, Jbuilder, etc) Widely accepted and integrated into most J2EE platforms. Want to learn more?
“We’ve seen no better resource for learning Struts than Struts Kick Start. “ -- barnesandnoble.com
2003-09
43
This action might not be possible to undo. Are you sure you want to continue?
We've moved you to where you read on your other device.
Get the full title to continue reading from where you left off, or restart the preview. | https://www.scribd.com/document/957935/Struts-MVC-Meets-JSP | CC-MAIN-2016-40 | refinedweb | 2,140 | 56.25 |
Hi all!
I have a problem accessing an API provided by iTunes. The API returns an RSS feed and/or searches for items in iTunes. To access it, I need to pass a query through a URL and I expect the response in JSON format.
When the URL is fetched in Preview mode, the fetch is carried out successfully and I get the response in JSON format.
However, if I fetch it on the published site using wix-fetch, I get the error shown below:
"Failed to load: The 'Access-Control- Allow-Origin' header has a value '- code.com' that is not equal to the supplied origin. Origin '' is therefore not allowed access. Have the server send the header with a valid value, or, if an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled."
/**** here is my code ****/
import {fetch} from 'wix-fetch';

fetch("", {
    method: "get",
    headers: {
        "Access-Control-Allow-Origin": "*",
        "Content-Type": "application/json"
    }
})
.then((httpResponse) => {
    let url = httpResponse.url;
    let statusCode = httpResponse.status;
    let statusText = httpResponse.statusText;
    let headers = httpResponse.headers;
    let bodyUsed = httpResponse.bodyUsed;
    if (httpResponse.ok) {
        return httpResponse.json();
    } else {
        return Promise.reject("Fetch did not succeed");
    }
})
.then((json) => {
    /******* in preview mode, below line gets executed *******/
    console.log(JSON.stringify(json));
})
.catch((err) => {
    console.log(err);
});
Can anybody shed some light.
Thank you in advance!
"Access-Control-Allow-Origin": " " ?
Hi Ethan,
Thanks for your comment but I am not sure what you are trying to say.
My guess is that:
USE "Access-Control-Allow-Origin": " "
INSTEAD OF "Access-Control-Allow-Origin": "*"
I tried that but I still get the same error.
If your site is not published, fetch is calling from the dev URL.
Also, fetch requests that require CORS will only work in backend code.
The error does not come up in Preview mode.
It only triggers in Published Mode.
How do I make it work in published mode?
Is there anyone who could help me with this?
You are not giving it your origin, so it is using a different one instead of your URL. I had this same issue a while ago.
I used the code below as you advised. But I still get the same error.
I finally solved it! To anyone interested, I put the fetch function in the backend and call it from the frontend.
Code in the Back-end:
Code in the Front-end:
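(The code screenshots from the original post are not included above, so here is a rough sketch of that backend/frontend split. The file name `itunes.jsw`, the function name, and the feed URL are my own illustrations, not taken from the original post. The point of the pattern is that backend code runs on Wix's servers, so the browser's CORS check never applies and no Access-Control-Allow-Origin header is needed.)

```javascript
// Backend web module, e.g. backend/itunes.jsw (hypothetical name).
import { fetch } from 'wix-fetch';

// Fetch an iTunes RSS feed server-side and return the parsed JSON.
export function getTopSongs(limit) {
    const url = `https://itunes.apple.com/us/rss/topsongs/limit=${limit}/json`;
    return fetch(url, { method: 'get' })
        .then((httpResponse) => {
            if (httpResponse.ok) {
                return httpResponse.json();
            }
            return Promise.reject('Fetch did not succeed');
        });
}
```

```javascript
// Page (frontend) code: import the backend function and call it.
import { getTopSongs } from 'backend/itunes';

$w.onReady(() => {
    getTopSongs(10)
        .then((json) => console.log(JSON.stringify(json)))
        .catch((err) => console.log(err));
});
```

With this approach, preview and published modes behave the same, because the request originates from the server rather than from the visitor's browser.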
Good luck! | https://www.wix.com/corvid/forum/community-discussion/problem-with-access-control-allow-origin | CC-MAIN-2020-05 | refinedweb | 407 | 68.97 |
Introduction: Bluetooth/Gyroscope/Accelerometer Controlled Lightball (with Individual Adjustable Leds in Each Side)
Finally (nearly topically) i made my personal lightball.
(A.k.a.: Another day in the FabLab Aachen)
Each side is enlighted individually with an intelligent LED (WS2812b, on a small breadboard) behind. A micro-controller (MSP430G2553, PDIP.) can use a gyroscope/accelerometer (MPU6050, GY-521 module) to change the colors, the which can itself be controlled via bluetooth (HC-06). There is space for a compass module (HMC5883L, GY-271 module) too.
Tools required:
Laser-cutter (cutting by hand is no fun)
Hot glue gun (and a bunch of sticks for it)
Soldering iron
PCB mill (optional, a stripboard version works as well)
MSP430 launchpad (as programmer and for the micro-controller)
Material:
ca. 600*300mm acrylic for led backlight (0M200 SC)
(white) carton (seperating each side )
cables, 2.54mm connectors (female: 8, 6, 5, 3(*3), 2(*2) male: 3(*3), 2(*2)), resistor (4.7k(*2), 10k), capacitor (100nF(*7))
3* mignon cell holder for three batteries
32* WS2812B on a small board
HC-06 bluetooth module
GY-521 accelerometer/gyroscope module
(optional, unused) GY-271 compass module
Step 1: Cutting, Soldering
First the shapes for the sides of the ball (Football_Outside.svg) are cut out of the acrylic. Gladly i could use an lasercutter (epliog zing) in the fablab, doing that manually would be too much work.
The layout of the sides is done by another program (CutCad)
The parts for seperating each side in the inside are cut out of white carton (Football_Inside.svg) and glued together to pyramide structures (black lines are marked (folding edges), red lines are cutted).
12 pyramides with 5-edges and 20 pyramids with 6-edges are needed.
The Leds are soldered together with 3 wires between each other, connecting the voltage lines (5V and ground) and each output from one Led with the input of the next one. Mostly 5cm wire length is enough, but for first one (and maybe even the next five) longer wires are better for opening the housing for changing the batteries.
A longer wire, which leads to the first Led ends in a female 3-pin header to conenct to the microcontroller and battery.
Step 2: Construction
Now the construction can begin: Starting with a 5-edge acrylic, the next sides are connected in cycles. First fixate the sides provisoric with tape on the outside, and glue them together with a little bit hot glue. Then position the corresponding pyramid on the center of the side and later fill the gap between them with hot glue.
Ate the beginning i used instant adhesive and special acrylic glue, but the hot glue works out best.
The Leds are inserted in the small square on top of the pyramid and also fixated with hot glue. Coating the whole top layer with it will further stabilize the pyramids and isolate the solder pads. The led on the bottom should be the last led in the line (with the unconnected data output).
Working in cycles results in an simple mapping of the led line on the sides of the ball.
Don't forget to test the ball on the side, errors are easier to correct in each step now. Later everything is coated in glue and inaccessible.
Step 3: Finish Construction
The top 5 6-edges and the last 5-edge will remain loose - the batterie might not last forever (almost 2A maximal current) :-).
Here, the pyramids are just glued on top of each side, and again the led on top of it.
Again, don't forget testing - but don't connect a 9V batterie to the 3V programming line, that relieved the magic smoke from my first controller version.
Step 4: Smoothing the Edges
Now "sandpaper" (i used a proxxon and dremel with a corundom tool first, sandpaper later for finer work) the edges to smooth them. That's was the less funniest part of the whole building process. If i would do that again, i will try to make a holder for the sides, where i can cut each bevelled side with the laser cutter: There is a certain angle between 5-edge and six-edge and also another certain angle between 6-edges, which correspond to the angle with which the edge should "sandpapered" to get a smooth connection between two tiles.
Step 5: Electronics
The electronis uses 3 mignon cells in a row to provide 4.5V (and 3* parallel for more power), directly used as power source for the leds (instead of 5V), and 3.3V via 1117 fixed voltage regulator for the logic, buffered by the capacitors (only 100nF capacitors are used).
A MSP430G2553 is connected with the bluetooth module via RX/TX serial and to the gyroscope/accelerometer module via I2C (4.7k pullup resistor on each line) (and maybe later to a compas module). Don't forget a 10k pullup resistor on the micro-controllers reset input.
The logic input of the first Led is connected with pin 2.4 of the micro-controller, powered directly from the batterie.
Two wires can route the positive voltage to connector for a jumper one corner of the ball, to connect/disconnect the batterie power from the outside.
Attachments
Step 6: Programming
Upload the Bluetooth.ino sketch on the MSP430G2553: I use a MSP430 launchpad as a programmer and the Energia-IDE. Either insert the micro-controller in the launchpad and program it there, or connect 3V, GND and the test and reset line (Spy-By-Wire-Interface) of the controller with the launchpad.
You need the WS2811Driver in the library folder - with a minor modification: The I2C connection uses pin 1.7, therefore we need to change in the ws2811.h file
#define WS2811_BITMASK BIT7
#define WS2811_PORTDIR P1DIR
#define WS2811_PORTOUT P1OUT
to
#define WS2811_BITMASK BIT4
#define WS2811_PORTDIR P2DIR
#define WS2811_PORTOUT P2OUT
Which allows to use pin 2.4.
The micro-controller expects commands like:
"I##CRGBT": Sets the led number ## (decimal value) to color red value R, green value G, blue value B (0...9)
"MRANDOMT": Random values for each led
"MACCELLT": Color change dependent on orientation (gravity vector)
"MROTATET": Color changes by gyroscope values (rotation)
"SrgbRGBT": 5edges and 6edges with different colors (rgb and RGB, each value from 0...9)
"POWER#TT": # (from 0...9) should be maximal allowed power consumation (0A...2A )
... replacing the 'T' at the end with a 'F' turns the mode of
These commands can be send e.g. from the serial window Arduino-IDE by connecting with the bluetooth module. Opening the LightballController with Processing allows to switch between the modes with a GUI.
(It searchs for serial port named "/dev/cu.HC-06-DevB" on my mac, for windows or linux other strings might be correct. Change line 38 to the correct string (all available ports are listed in the serial window below)).
Booth programs are just quick-and-dirty versions, but work well for now.
Step 7: Enjoy
Which means: Power up the ball and have Fun!
Participated in the
Home Technology Contest
Participated in the
Remote Control Contest
Participated in the
Epilog Challenge VI
Be the First to Share
Recommendations
18 Discussions
4 years ago
Anyone know where I can just buy one?
4 years ago
Awesome project with the MSP430! May I know where you got the following files:
#include "I2Cdev.h"
#include "MPU6050.h"
for the MSP430G2553?
I'm trying to build and program a quadcopter from scratch using the same microcontroller. I already made my own electronic speed controllers so I just need the MSP430 libraries for the GY-521.
THANK YOU SO MUCH
Reply 4 years ago
Both are great libraries from Jeff Rowberg:......
5 years ago on Introduction
Reply 5 years ago on Introduction
wow
Reply 5 years ago on Introduction
lol
Reply 5 years ago on Introduction
XXDDDDDDD
Reply 5 years ago on Introduction
hi tony!
Reply 5 years ago on Introduction
(Insert Reply Here)
5 years ago on Step 7
Very nice ! Excellent work and amazing result.
Thanks for sharing!
Build_it_Bob
Reply 5 years ago on Introduction
Thanks!
6 years ago on Introduction
What a cool night light!
Reply 6 years ago on Introduction
Nice idea, hadn't thought about that... i could even control a alarm clock via bluetooth (turn the light as a snooze button) ^^
6 years ago on Introduction
Awesome!
Reply 6 years ago on Introduction
Thanks!
6 years ago on Introduction
great work!!! We have seen it live at dorkbot Aachen tonight.
6 years ago on Introduction
So close to topical!
This is awesome enough that it needn't be cup-themed. Thanks for sharing (and especially for sharing the files and schematics.)
Reply 6 years ago on Introduction
Thanks. It was more or less coincidal in time, the cup gave only the last kick to finish it :-) | https://www.instructables.com/Lightball/ | CC-MAIN-2020-45 | refinedweb | 1,478 | 63.09 |
-1
I'm a toal noob to programming. i just bought a C++ book and im trying to develop some very simple programs. I'm using dev C++. I'm stuck on this one, its having an issue linking. here is my code:
#include <iostream> #include <math.h> #include <stdlib.h> #include <time.h> using namespace std; int rand_1toN(int sides); int main() { int sides, times, x, r; srand(time(NULL)); while(1){ cout<< endl << "Hello! This program will generate a random number between 1"; cout<< endl << "and any number you specify, however many times you want."; cout<< endl << "Essentially rolling any sided dice you want as many times as you want."; cout<< endl << "Enter 0 to exit."; cout<< endl << "Enter the amount of sides the dice has:"; cin>> sides; cout<< endl << "Enter the amount of dice want to roll:"; cin>> times; if (sides == 0) break; else for (x = 1; x <= times; x++) { r = rand_1toN(sides); cout << r << " "; } return 0; } }
here is the error message:
[Linker error] undefined reference to `rand_1toN(int)'
ld returned 1 exit status
I haventeven been able to test it so i dont even know if it will output what i want it to yet. a lot of what i have been doing is just trial and error on things that are probably obvious for a more experienced prgrammer. any help would be extremely appreciated!
thanks | https://www.daniweb.com/programming/software-development/threads/192399/help | CC-MAIN-2018-39 | refinedweb | 230 | 81.02 |
Case Study Analysis Cost Of Capital At Ameritrade
Capital Asset Pricing Model is a model that describes the relationship between risk and expected return and that is used in the pricing of risky securities.
Description: Capital Asset Pricing Model (CAPM)).%))”.
CAPM has a lot of important consequences. For one thing it turns finding the efficient frontier into a doable task, because you only have to calculate the co-variances.
“Cap-M” looks at risk and rates of return and compares them to the overall stock market. If you use CAPM you have to assume that most investors want to avoid risk, (risk averse), and those who do take risks, expect to be rewarded. It also assumes that investors are “price takers” who can’t influence the price of assets or markets. With CAPM you assume that there are no transactional costs or taxation and assets and securities are divisible into small little packets. CAPM assumes that investors are not limited in their borrowing and lending under the risk free rate of interest.
How to Calculate the Cost of Equity CAPM
The cost of equity is the amount of compensation an investor requires to invest in an equity investment. The cost of equity is estimable is several ways, including the capital asset pricing model (CAPM). The formula for calculating the cost of equity using CAPM is the risk-free rate plus beta times the market risk premium. Beta compares the risk of the asset to the market, so it is a risk that, even with diversification, will not go away. As an example, a company has a beta of 0.9, the risk-free rate is 1 percent and the expected return on the equity investment is 4 percent.
Instructions
Determine the market risk premium. The market risk premium equals the expected return minus the risk-free rate. The risk-free rate of return is usually the United States three-month Treasury bill rate. In our example, 4 percent minus 1 percent equals 3.
.
Combining the risk-free asset and the market portfolio gives the portfolio frontier.
The risk of an individual asset is characterized by its co-variability with the market portfolio.
The part of the risk that is correlated with the market portfolio, the systematic risk, cannot be diversified away.
Bearing systematic risk needs to be rewarded.
The part of an asset’s risk that is not correlated with the market portfolio, the non-systematic risk, can be diversified away by holding a frontier portfolio.
Bearing non-systematic risk need not be rewarded.
For any asset i:
where
We thus have an asset pricing model – the CAPM.
Example. Suppose that CAPM holds. The expected market return is 14% and T-bill rate is 5%.
What should be the expected return on a stock with β = 0?
Answer: Same as the risk-free rate, 5%.
• The stock may have significant uncertainty in its return.
• This uncertainty is uncorrelated with the market return.
What should be the expected return on a stock with β = 1?
Answer: The same as the market return, 14%.
What should be the expected return on a portfolio made up of 50% T-bills and 50% market portfolio?
Answer: the expected return should be
¯r = (0.5)(0.05)+(0.5)(0.14) = 9.5%.
Multifactor CAPM
In CAPM, investors care about returns on their investments over the next short
horizon – they follow myopic investment strategies.
In practice, however:
• Investors do invest over long horizons
• Investment opportunities do change over time.
In equilibrium, an asset’s premium is given by a multi-factor CAPM :
Limitations of CAPM
Based on highly restrictive assumptions i.e. no tax, transaction costs etc
Serious doubts about its testability.
Market factor is not the sole factor influencing stock returns.
Summary of CAPM
CAPM is attractive:
1. It is simple and sensible:
is built on modern portfolio theory
distinguishes systematic risk and non-systematic risk
provides a simple pricing model.
2. It is relatively easy to implement.
CAPM is controversial:
1. It is difficult to test:
difficult to identify the market portfolio
difficult to estimate returns and betas.
2. Empirical evidence is mixed.
3. Alternative pricing models might do better.
Multi-factor CAPM.
Consumption CAPM (C-CAPM).
APT.
Other Methods for calculating cost of equity
There are 3 methods which are mainly used for calculating Cost of equity other than CAPM
Arbitrage Pricing theory
3 factor method
Dividend Growth Method
Arbitrage Pricing Theory
APT assumes that returns on securities are generated by number of industry-wide and market-wide factors. Correlation between a pair of securities occurs when these securities are affected by the SAME factor or factors.
Return on any stock traded in a financial market consists of two parts.
R = Re + U
Where, R = return on any stock
Re = Expected or Normal return (depends on all of information shareholders have on the stock for next month.)
U = Uncertain or Risky return (this comes from information revealed in the month)
U = m + ¥€
Where,
m = Systematic risk or market risk (it influences all assets of market)
¥€ € €½ Unsystematic risk (it affects single asset or small group interrelated of assets, it is specific to company)
The capital asset pricing theory begins with an analysis of how investors construct efficient portfolios. But in real life scenarios, it isn’t necessary that every time portfolios will be efficient.
It is developed by Stephen Ross.
Moreover, the return is assumed to obey the following simple relationship:
Where b1, b2 and b3 are sensitivities associated with factor 1, factor 2 and factor 3 which can be interest rate or other price factors.
Noise = ¥ is event unique to the company.
APT states that the expected risk premium on a stock should depend on the expected risk premium associated with each factor and the stock’s sensitivity to each of the factors. Thus, formula modifies to:
Where, rf = risk free rate is subtracted from each return to give risk premium associated from each factor.
Analysis of the formula:
If we put value for b = 0, the expected risk premium will be zero. It will create a diversified portfolio which has zero sensitivity to macroeconomic factor which offers risk free rate of interest. Portfolio offered a higher return, investors could make a risk-free (or “arbitrage”) profit by borrowing to buy the portfolio. If it offered a lower return, you could make an arbitrage profit by running the strategy in reverse; in other words, you would sell the portfolio and invest the proceeds in U.S. Treasury bills.
Consider portfolio A and B are sensitive to factor 1, A is twice sensitive to factor1 as then portfolio Therefore, if you divided your money equally between U.S. Treasury bills and portfolio A, combined portfolio would have exactly the same sensitivity to factor 1 as portfolio B and would offer the same risk premium.
Steps of Arbitrage Pricing Theory
The various steps during Arbitrage Pricing Theory can be stated as:
Identify the macroeconomic factors: APT doesn’t indicate which factors are to be considered. But there are 6 principle factors which are:
Yield spread
interest rate,
exchange rate,
GNP
inflation
portion of the market return
Estimate the risk premium of each factor
Estimate the factor sensitivity
Net Return = risk free interest rate + expected risk premium
3 factor model
It is a special case of APT
It considers 3 major factors called as
market factor
size factor
book to market factor.
There is also evidence that these factors are related to company profitability and therefore may be picking up risk factors that are left out of the simple CAPM.
The practical application of this model is to estimate the betas for the three factors and then use them to predict where returns should fall, much like the CAPM.
It was researched by Fama and French.
Dividend Growth Method
Dividend Discount Model.
It is useful when the growth rate of dividend is forecasted constantly.
The present value of stocks is given as
Where,
r = discount rate,
g = rate of growth,
DIV = annual cash payment,
This formula can be used when growth rate g < rate of return r.
When growth rate = rate of return, the present value becomes infinite.
For perpetual growth, r > g.
Growing perpetuity formula,
Where,P0 in terms of next year’s expected dividend DIV
g = the projected growth trend
r = expected rate of return on other securities of comparable risk.
We can estimate cost of equity from this formula by re-arranging.
Let’s understand by an example:
Suppose that your company is expected to pay a dividend of $1.50 per share next year. There has been a steady growth in dividends of 5.1% per year and the market expects that to continue. The current price is $25. Then cost of equity r is given as:
When the growth rate isn’t constant but varies from year to year, then average can be calculated. Growth rate for current year is calculated using the formula:
For example,
Year
Dividend (in Rs. Million)
Percent change (g)
2000
1.23
–
2001
1.30
(1.30 – 1.23) / 1.23 = 5.7%
2002
1.36
(1.36 – 1.30) / 1.30 = 4.6%
2003
1.43
(1.43 – 1.36) / 1.36 = 5.1%
2004
1.50
(1.50 – 1.43) / 1.43 = 4.9%
Growth rate is average of all percent changes and equals
This model serves the major advantage of being easy to understand and use but has a major drawback total dependence on dividend and it cannot be used where company isn’t paying any dividend. Also, it doesnot consider any risk and is highly sensitive to the change in growth rate.
Estimating Beta
Beta is an important term in Capital Asset Pricing Method. Beta is the non-diversified risk of holding a single stock. But it turns out that companies in similar markets have similar risks.
Interpretation of beta
Beta = 1,it matches market portfolio
Beta > 1, higher risk.
Beta < 1, less risky and returns are highly predictable.
Methods for calculation of beta
It is calculated as:
beta_{i} = frac {mathrm{Cov}(R_i,R_m)}{mathrm{Var}(R_m)}
Where, Ri = rate of return of asset and Rm is rate of return of market. Thus, beta is dependent on regression analysis. Beta is found by statistical analysis of individual, daily share price returns, in comparison with the market’s daily returns over precisely the same period. We need to gather a lengthy time-series of observations for the market return and the individual asset return. Then required co-variances and variances can be calculated. If coefficient of correlation P is known then
The alternative method of calculating beta is (by rearranging terms from CAPM equation):
In practice, an additional constant alpha is also added in the above equation which tells how much better (or worse) the funds did than what the CAPM predicted. Alpha is a risk-adjusted measure of the so-called active return on an investment.
Here, E(Ri) – Rf is estimated return on asset portfolios and E(Rm) – Rf is estimated return on market index.
In order to check that there are no serious violations of the linear regression model assumptions. The slope of the fitted line from the linear least-squares calculation is the estimated Beta. The vertical intercept of this curve is called as the alpha.
For a portfolio of assets, we have the relation:
Given that beta is a linear risk measure, the beta of a portfolio of assets as simply the weighted average of all the individual betas that comprise the portfolio.
HANU
Estimate of Risk Premium
We don’t have reliable estimate where stock market will move in future. So we are using long term historical spreadsheets for estimate & large stock than small stocks because they are more closer to proper estimate of market
We are considering all values after Second World War because after that laws became stable in U.S.
Risk premium = Rm – Rf
U.S. government securities rate = 6.69% (20 years bond, Exhibit 3)
Average annual return for Large company stocks = 14 % (Exhibit 3)
So Risk premium for Ameritrade
= 14 % – 6.69 %
=7.31 %
Cite This Work
To export a reference to this article please select a referencing stye below: | https://www.ukessays.com/essays/finance/case-study-analysis-cost-of-capital-at-ameritrade-finance-essay.php | CC-MAIN-2019-04 | refinedweb | 2,045 | 54.22 |
Retreiving the duration of a WMV in C#
This week, I needed to get read the duration of a few hundred Windows
Media Video files for a project. Since I started my development code in
ASP and then ASP.NET, I pretty much only know managed code, so I wanted
to use a .NET language. I searched google.com for "wmv duration" and
"wmv duration C#" and came up with nothing. I found the 10 different
Windows Media SDKs, 9 different DirectX SDKs, and a several forum posts
asking the question "How can I read the duration of a WMV/WMA in C#",
but no answers.
I finally found this post on windows.public.windowsmedia.sdk. It wasn't quite right, but it got me where I needed to go.
Here's the final code:
using WMPLib; // this file is called Interop.WMPLib.dll WindowsMediaPlayerClass wmp = new WindowsMediaPlayerClass(); IWMPMedia mediaInfo = wmp.newMedia("myfile.wmv"); // write duration Console.WriteLine("Duration = " + mediaInfo.duration); // write named attributes for (int i=0; i<mediaInfo.attributeCount; i++) { Console.WriteLine(mediaInfo.getAttributeName(i) + " = " + mediaInfo.getItemInfo(mediaInfo.getAttributeName(i)) ); }
That's it.
Thanks John… helped me a lot. I’m new to the MS-programming-world but "imports" – isn’t that VB-code? Shouldn’t it read "using"?
Wow, you found a 3-year old post! Yes, you are correct. I fixed the code sample, so now it has "using" instead of "imports".
Thanks for this code snippet! It is just what I needed!!! Luckly Skooter found this 3 year old post since that probably boosted its relevance enough for me to find this in Google.
For some reason though, this fails with a "Unable to cast COM object of type…" when I try to call media.duration. The attributeCount, getAttributeName, and getItemInfo methods all work fine though.
IWMPMedia media = player.newMedia(filepath);
duration = media.duration;
Equally strange is that I can get the duration with:
duration = player.newMedia(filepath).duration;
It works with me
thanks
Also, have a look at:
This code works fine in local machine,but when i upload to the server it shows duration=00:00.
why??
I am not getting..
please help me
This code works fine in local machine,but when i upload to the server it shows duration=00:00.
here is the code
WindowsMediaPlayer wmp = new WindowsMediaPlayer();
IWMPMedia mediaInfo = wmp.newMedia(FileUpload1.PostedFile.FileName);
Response.Write("Duration = " + mediaInfo.durationString
why it is not calculating the duration in the server??
Thanks for this code John! It is just what I needed!!!
"muito obrigado pela ajuda!!"
VALEUU!!
Thank you man, you saved me some time! | http://johndyer.name/retreiving-the-duration-of-a-wmv-in-c/ | CC-MAIN-2016-07 | refinedweb | 434 | 52.66 |
Up to [OSS] / xfs-cmds / attr / man / man5
Request diff between arbitrary revisions
Keyword substitution: kv
Default branch: MAIN
Fix attr man pages Merge of master-melb:xfs-cmds:29936a by kenmcd. Fix attr man pages
Update Andreas' email address everywhere.
Update attr.5 man page to include details of security namespace.
Extended attribute updates mainly from Andreas Gruenbacher.
Documentation updates regarding trusted EAs from AndreasG.
Updates from AndreasG.
man page updates from Andreas, describing recent changes in user attr handling.
man page and test script updates from Andreas. fix syscall numbering a/ on sparc (fremovexattr was wrong) and b/ if arch doesn't have numbers defined yet, handle it cleanly (errno.h missing).
Merge of xfs-cmds-2.4.18:slinx:112273a by nathans. sync up with patch from AndreasG, mainly creates libattr.rpm/deb.
Merge of xfs-cmds-2.4.18:slinx:111138a by nathans. bump to version 2.0.0 for extended attribute and other interface changes. incorporate new code, docs, etc from ext2/ext3 project. | http://oss.sgi.com/cgi-bin/cvsweb.cgi/xfs-cmds/attr/man/man5/attr.5 | crawl-002 | refinedweb | 169 | 54.39 |
Got a problem... I cna't compile for Teensy 3.5...
C:\Program Files (x86)\Arduino\libraries\Wire\utility\twi.c:27:24: fatal error: compat/twi.h: No such file or directory
#include <compat/twi.h>
But the file is there
Got a problem... I cna't compile for Teensy 3.5...
C:\Program Files (x86)\Arduino\libraries\Wire\utility\twi.c:27:24: fatal error: compat/twi.h: No such file or directory
#include <compat/twi.h>
But the file is there
No 'compat' directory - File is :: I:\arduino-1.8.1\hardware\teensy\avr\libraries\Wire\utility\t wi.h ???
Paul thank you! I know that the card is suported . i used the 3.6 in my motion simulator
I was asking @GhostPrototype if built in sd is supportd in his Marlin firmwareI was asking @GhostPrototype if built in sd is supportd in his Marlin firmware
))
These 3.6 are amazing!
Not currently, but it would make sense to implement it.
Edit: Looking at the card reader in Marlin and the Teensy SD example, it could be nothing more than defining the correct SDSS pin. For the onboard on T3.5 and T3.6 this would be "BUILTIN_SDCARD".
Last edited by GhostPrototype; 05-14-2017 at 11:28 AM.
This is way out of my reach. I don't know where to begin to edit the SD library for Builtin support
The Marlin SD h and cpp files are very simmilar but still different. Maybe PaulStoffregen could thake a quick look on what to do. If he is not too busyThe Marlin SD h and cpp files are very simmilar but still different. Maybe PaulStoffregen could thake a quick look on what to do. If he is not too busy
OK it seems something works
I copied KinetisSDHC.c , Sd2Card.h from Pauls Teensy libraryI copied KinetisSDHC.c , Sd2Card.h from Pauls Teensy library
Changed -pins_TEENSY35_36.h - SDSS to BUILTIN_SDCARD
-Sd2Card.h - #include "src/HAL/HAL_TEENSY35_36/spi_pins.h"
Commented out Sd2Card.h lines: -uint8_t const SPI_FULL_SPEED = 0;
-uint8_t const SPI_FULL_SPEED = 1;
-uint8_t const SPI_FULL_SPEED = 2;
-uint8_t const SD_CHIP_SELECT_PIN = SS_PIN;
-uint8_t const SPI_MOSI_PIN = MOSI_PIN;
-uint8_t const SPI_MISO_PIN = MISO_PIN;
-uint8_t const SPI_SCK_PIN = SCK_PIN;
Hope I'm not forgeting anything (I tried multiple variants).
I don't know if this is the right thing to do or if I broke any parts of the code. But with these modifications the code compiles and I can init the card and read contents of the card and see the filenames
Update: it seems to work. The files are beeing read, in repetier you can upload files, mounting and dismounting works
Last edited by Tady; 05-29-2017 at 05:49 PM.
Good to know it works with so few modifications. I'll make sure it's more seamless with the next PR.
Also, the Teensy HAL has been officially merged into the official repo!
Now what's needed is hardware support to plug the Teensy board into something, along with people actually trying it out it and reporting any bugs.
Yes something like RADDS
maybe RTDSmaybe RTDS
))
I am currently building a reprap from scratch and Im planning to use teensy 3.6.
My printer will be simple so I modified you firmware to support 128x64 LCD with K0108 driver. I had a few laying around so I didn't want to buy those with I2C support and I had no problems sacrificing 13 I/Os
))))
I found a problem with 3.6 an probably 3.5. The firmware does not ronun standalone. The LCD is lack and frozen at boot until you open a serial terminal. The same is even if you fo not connect the USB and you power from an external power supply. You always need to connect the USB and open the serial port. Then it runs normally
Not to be rude but this code is very big (15+ .h files). The code is on github that was posted by GhostPrototype. So posting code on the forum is rather hard
There's a USB while loop in setup(), try changing that to delay(1000); and see if it helps.
I am adding a comment here as I have a delta 3D printer that would benefit greatly from a more powerful processor than the 2580. I have a dozen or so 3.6, another dozen 3.5, and countless LC and 3.2 Teensy (Teensies?) here. I cloned Paul's repo from 4 years ago, but I think I need to search this topic for a better version.
At any rate, I am interested in adding to this and using the power of the Teensy to handle these kinematic calculations.
I also seem to like to order them late at night when I think I need 2 or 3 more for some random project that seems like a great idea at 2am.
They are fantastic though and I simply love working with them. Haven't melted one yet either! | https://forum.pjrc.com/threads/26015-3D-Printer-Software-with-Teensy-3-1?s=aecb8d8b48554710e3fe7f307e024bef&p=145302 | CC-MAIN-2019-47 | refinedweb | 830 | 76.11 |
Introduction
Hey Guys, this article will give you a quick introduction of pandas as to what is Pandas, why you should use Pandas, what can you do with Pandas, and the supported Pandas data types. It can help you make sure if Pandas is what you are looking for or you should learn pandas or not.
If you’re looking for data science then you must have heard of this library at least once. Pandas is the most famous/downloaded library for data science.
Why? Because it is super fast and makes working with datasets so much easier. We’re gonna talk about everything from series to DataFrames, from creating a new DataFrame to reading an old dataset everything.
Prerequisites
You should have a basic understanding of Python especially dictionaries, lists, and tuples. Some basic knowledge of NumPy will also be helpful as arrays are often used in Pandas Series and DataFrames along with the dictionaries.
If you want to learn NumPy then check out our amazing tutorial on NumPy Arrays which covers everything that you need to know about NumPy.
What is Pandas?
Pandas is a high-performance open-source library for data analysis in Python developed by Wes McKinney in 2008. Over the years, it has become the de-facto standard library for data analysis using Python.
Why Pandas?
The benefits of pandas over using the languages such as C/C++ or Java for data analysis are manifold:
- Data representation : It can easily represent data in a form naturally suited for data analysis via its DataFrame and Series data structures in a concise manner. Doing the equivalent in C/C++ or Java would require many lines of custom code, as these languages were not built for data analysis but rather networking and kernel development.
- Data sub-setting and filtering : It provides for easy sub-setting and filtering of data, procedures that are a staple of doing data analysis.
Features of Pandas
- It can process a variety of data sets in different formats: time series, tabular heterogeneous arrays, and matrix data.
- It facilitates loading and importing data from varied sources such as CSV and DB/SQL.
- It can handle a myriad of operations on data sets: sub-setting, slicing, filtering, merging, groupBy, re-ordering, and re-shaping.
- It can deal with missing data according to rules defined by the user and developer.
- It can be used for parsing and managing (conversion) of data as well as modeling and statistical analysis.
- It integrates well with other Python libraries such as stats models, SciPy, and Scikit-learn.
- It delivers fast performance and can be speeded up even more by making use of Cython (C extensions to Python).
Setting Up
Now, I’m not a big fan of Jupyter Notebook but it makes the data science easier to understand because you know exactly which block is executing which code.
Since most people find it difficult to use Jupyter Notebook standalone without Anaconda, We’ll stick to our old favorite – Default IDLE
Installing Pandas and JupyterLab
Now for those who do want to use Jupyter Notebook, if you have anaconda installed, it’s fine to skip the whole setting up section because Pandas comes pre-installed with Anaconda. If you don’t have Anaconda, continue reading
Open your pip and install Jupyter lab as:
pip install jupyterlab
This is it, it will install Jupyter Notebook in your system. Also, you may also be aware that there’s a jupyter library too. We aren’t going to install that because it is no longer updated and this just gets the job done fairly well.
Check out this video. It might help you set up Jupyter Lab
Now, I won’t be explaining how to use Jupyter Notebook here in this tutorial because this is out of the scope of this tutorial. So, we’ll just get to the point. Open your terminal, cd to the path where you want to access files using Jupyter, and open Jupyter Notebook there. I will make a video on that in future tutorials but this article is about Pandas so we’re gonna skip that.
Now all you have to do is install Pandas. It is fairly easy to do so. Just open pip and type
pip install pandas
This will install pandas in your computer.
Syntax
The general convention is that you import pandas with an alias name pd. It is not necessary that you import it with this name but it is the recommended way to do it and you’ll find it this way in most of the places. So, it will improve readability of your code.
import pandas as pd
Pandas Data Structures
Pandas supports two main type of Data Structures. Series and DataFrames!
Series
Pandas Series is the one-dimensional labeled array just like the NumPy Arrays. The difference between these two is that Series is mutable and supports heterogeneous data. So Series is used when you have to create an array with multiple data types. Imagine a table, the columns in that table are Series and the table is a DataFrame.
Take a look at the image below. It will help you visualize better.
Creating Series
Syntax for creating Pandas Series is:
import pandas as pd s = pd.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False)
NOTE: ‘S’ of the pd.Series is capital. People tend to forget that.
This syntax may seem a little overwhelming but you do not need to focus on all the parameters. Most of the time you will only be using the data and index parameters but we will be discussing all the parameters here. But let’s first create example Series here.
import pandas as pd s = pd.Series(["Coding Ground",1,5.8,True]) print(s)
Output will be:
0 Coding Ground 1 1 2 5.8 3 True dtype: object
Now that we have a Series of consisting data of multiple datatypes, we can proceed further.
data: Sequence, most preferably list but can also be dictionary, tuple, or an array.
It contains data stored in Series.
index: This is optional. By default, it takes values from 0 to n but you can define your own index values. Now there are two ways to define index,
s = pd.Series(["Coding Ground",1,5.8,True],index=["String","Integer","Float","Boolean"])
The output will be
String Coding Ground Integer 1 Float 5.8 Boolean True dtype: object
You can also set index using .index method as
s.index = [1,2,3,4]
And it will work same as defining index in parameters. Output will be
1 Coding Ground 2 1 3 5.8 4 True dtype: object
dtype: It is the datatype of the Series. If not defined, it will take values from the series itself. If it’s the same for all element, it will show a specific data type such as int else it will show Object.
name: It is the name given to your pandas series
s = pd.Series(["Coding Ground",1,5.8,True],index=["String","Integer","Float","Boolean"],name="Pandas Series")
It will add a name “Pandas Series” to your Series. The output will be
String Coding Ground Integer 1 Float 5.8 Boolean True Name: Pandas Series, dtype: object
You can also do the same using s.name = “Pandas Series”
copy: It creates a copy of the same data in variable(s). By default it is set to False i.e., if you change the data in one variable, it’ll change in all variables wherever the data is stored. Change it to True and all the locations where the data is store will be independent of each other. For example,
s = pd.Series(["Coding Ground",1,5.8,True],index=["String","Integer","Float","Boolean"],name="Pandas Series") ss = s ss[1] = 2 print(ss) print(s)
The output would be:
#SS String Coding Ground Integer 2 Float 5.8 Boolean True Name: Pandas Series, dtype: object #S String Coding Ground Integer 2 Float 5.8 Boolean True Name: Pandas Series, dtype: object
You can clearly see that even though you changed the value of the second element in ss, it automatically got changed in s. This is because copy by default is False. Set it to True
ss = s.copy() ss[1] = 3 print(ss) print(s)
This will create a new copy in ss and they will not share same data location point anymore.
So, changing one will not change another.
#SS String Coding Ground Integer 3 Float 5.8 Boolean True Name: Pandas Series, dtype: object #S String Coding Ground Integer 2 Float 5.8 Boolean True Name: Pandas Series, dtype: object
fastpath: Fastpath is an internal parameter. It cannot be modified. It is not described in pandas documentation so you may have to take a look here.
DataFrames
DataFrame is the most commonly used data structure in pandas. DataFrame is a two-dimensional labeled array i.e., Its column types can be heterogeneous i.e. of varying types. It is similar to structured arrays in NumPy with mutability added.
It has the following properties:
- Similar to a NumPy ndarray but not a subclass of np.ndarray.
- Columns can be of heterogeneous types e.g char, float, int, bool, and so on.
- A DataFrame column is a Series structure.
- It can be thought of as a dictionary of Series structures where both the columns and the rows are indexed, denoted as ‘index’ in the case of rows and ‘columns’ in the case of columns.
- It is size mutable that means columns can be inserted and deleted
Creating DataFrames
Syntax for creating Pandas DataFrames is:
pandas.DataFrame(data=None, index: Optional[Collection] = None, columns: Optional[Collection] = None, dtype: Union[str, numpy.dtype, ExtensionDtype, None] = None, copy: bool = False)
data: You can input several types of data as below:
- Dictionary of 1D ndarrays, lists, dictionaries, or Series structures.
- 2D NumPy array
- Structured or record ndarray
- Series structures
- Another DataFrame structure
Now for example, you can create a dataframe like this:
import pandas as pd ls = ["item 1","item 2","item 3","item 4"] df = pd.DataFrame(ls) print(df)
And the output will be
0 0 item 1 1 item 2 2 item 3 3 item 4
But yeah, there’s no point in creating a DataFrame like that. You could just create a series for that. Now moving onto a full tabular dataframe.
Now for the sake of simplicity, we’re going to use this data.
import pandas as pd data = {"Student":["Student1","Student2","Student3"],"Age":[18,19,20]} df = pd.DataFrame(data) print(df)
The output of this code would be
Student Age 0 Student1 18 1 Student2 19 2 Student3 20
See how keys of dictionary became the columns.
index: index in DataFrame is rows.
df.index = ["row1","row2","row3"]
This will result in
Student Age row1 Student1 18 row2 Student2 19 row3 Student3 20
columns: It is the same as keys in a dictionary.
Suppose now you want to change the columns from “student” to “name of student” and from “Age” to “Age of Student”. You could do it easily as:
df.columns = ["Name of Student","Age of Student"]
And the output will be
Name of Student Age of Student row1 Student1 18 row2 Student2 19 row3 Student3 20
So, this is it, we have covered all the parameters of DataFrames along with an example.
dtype: The datatype of Data in the DataFrame
copy: Same usage as Series, allows different location of same data.
We aren’t going to discuss the copy parameter here because that would require a lot of examples and new functions which are out of scope of this tutorial. We will be discussing these things in details in the future lessons.
Conclusion
Congratulations on completing this tutorial. It was your first step in deciding whether you should learn Pandas or not and since you’re reading this, you made the right choice.
Personally, I recommend learning Pandas because why not? It is the most powerful library and if data science is what you’re aiming for then you’ll see this library a lot.
You should set up Jupyter Notebook or, preferably, Anaconda. Use IDLE only if you can't get Jupyter Notebook or Anaconda; IDLE will be a little slower when processing data compared to Anaconda, but that's fine. You don't strictly need either, but they will help you understand better. Also, one more thing: remember that the S of pd.Series and the D of pd.DataFrame are capitalized. People make that mistake a lot and their programs fail.
So, now I want to ask you, can you make a DataFrame using Series?
If yes, how?
Comment down the answer below!
So, guys, this is it for this tutorial, we’ll be looking at more advanced topics in the future.
Happy Coding!
/^111\.255\.(([0-5])|([0-5]{2}))\.(([0-5])|([0-5]{2}))$/
my @parts = split /\./, $address;
return (@parts == 4
and $parts[0] == 111
and $parts[1] == 255
and (($parts[2] > 0 and $parts[2] < 56
and $parts[3] >= 0 and $parts[3] < 56)
or ($parts[2] == 0 and $parts[3] == 0)));
First off, I assume your IP address is actually valid and in the format you mention in your post.
Now for your problem: why make life hard? The code below is quite simple:
my @bytes = split(/\./, $ip);
if ($bytes[2] <= 55 && $bytes[3] <= 55) {
# ok
}
You may -- or even should -- want to look at Net::IP, especially at the bincomp function.
Hope this helps, -gjb-
/\A 111
[.] 255
[.] ( \d | [1234]\d | 5[012345] )
[.] ( \d | [1234]\d | 5[012345] )
\z/x;
...should work ( with /x modifier used to enhance readability ), but if you're not limited to a regexp, a cleaner ( and more easily reusable!) solution would be something like...
return 1 if
/\A 111 [.] 255 [.] (\d+) [.] (\d+) \z/x
&& $1 <= 55
&& $2 <= 55;
--k.
Update: Fixed typo; thanks CS!
from the docs:
use NetAddr::IP;
my $ip = new NetAddr::IP 'loopback';
if ($ip->within(new NetAddr::IP "127.0.0.0", "255.0.0.0")) {
print "Is a loopback address\n";
}
use strict;
use Test::More 'no_plan';
my $rx = qr/\A
(?: [0-4]\d? | 5?[0-5] ) \.
(?: [0-4]\d? | 5?[0-5] )
\z/x;
my %test = (
'56.0' => 0,
'0.56' => 0,
'48.56' => 0,
'56.10' => 0,
'55.0' => 1,
'55.55' => 1,
'0.0' => 1,
'10.10' => 1,
);
my $result;
is(/$rx/, !!$result, $_) while ($_, $result) = each %test;
__END__
ok 1 - 0.56
ok 2 - 56.10
ok 3 - 55.0
ok 4 - 55.55
ok 5 - 48.56
ok 6 - 56.0
ok 7 - 10.10
ok 8 - 0.0
1..8
If the first digit seen is any of 0 to 4, then it is legal, and may optionally be followed by any other digit. If it is 5, it is legal, and may optionally be followed by another digit from the 0-5 range. Any other sequence is illegal.
Update: I knew I was missing something when I wrote my test cases... sigh. And it is so obvious. So there's a third case if the first digit is 6-9: it is legal if not followed by anything.
my $rx = qr/\A
(?: [0-4]\d? | 5?[0-5] | [6-9] ) \.
(?: [0-4]\d? | 5?[0-5] | [6-9] )
\z/x;
'62.6' => 0,
'9.97' => 0,
'55.9' => 1,
'1.49' => 1,
'6.7' => 1,
'06.7' => 1,
Makeshifts last the longest.
Abigail
Anyway, here's one that treats the digits as a number. I don't think this is a good solution in this case, but it does escape the lexical prison.
my $rx = qr/\A
(\d+) (?(?{ $1>=0 && $1<=55 }) | (?!)) \.
(\d+) (?(?{ $2>=0 && $2<=55 }) | (?!))
\z/x;
I'm supposed to be taking the averages of the students' grades and computing an overall average, but it's not working.
import java.io.*;
import java.util.*;

public class Grade
{
    public static void main(String[] args) throws IOException
    {
        String[] name = new String[50];
        int num1, num2, num3;
        Scanner inFile = new Scanner(new FileReader("Average.txt"));
        int[] average = new int[50];
        double sum;
        int c = 0;

        while (inFile.hasNext())
        {
            name[c] = inFile.next();
            num1 = inFile.nextInt();
            num2 = inFile.nextInt();
            num3 = inFile.nextInt();
            average[c] = ((num1 + num2 + num3) / 3);
            c++;
        }

        for (int i = 0; i < c; i++)
        {
            System.out.println(name[i] + " " + average[i]);
        }

        for (int i = 0; i < average.length; i++)
        {
            sum += average[i] * (c));
        }
    }
}
it's the sum that isn't working...any ideas?...sorry to be so vague | https://www.daniweb.com/programming/software-development/threads/96760/accumulator | CC-MAIN-2017-09 | refinedweb | 131 | 63.05 |
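In case it helps, here is a sketch of how the accumulation could be written so it compiles and produces an overall average (the class and method names here are made up for illustration, not part of the original assignment). Three things go wrong above: sum is never initialized, the loop multiplies by c instead of just adding, and it runs over all 50 array slots instead of only the c filled ones.

```java
public class GradeAverage {
    // Sum the per-student averages, then divide once by how many there are.
    public static double overallAverage(int[] averages, int count) {
        double sum = 0;                   // must be initialized before use
        for (int i = 0; i < count; i++) { // only the filled slots, not the whole array
            sum += averages[i];           // accumulate; no multiplication here
        }
        return sum / count;               // divide once, after the loop
    }

    public static void main(String[] args) {
        int[] averages = {80, 90, 70};
        System.out.println("Overall average: " + overallAverage(averages, 3));
    }
}
```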
We demonstrate creating a peer-to-peer processing platform where multiple players function together for a common purpose: getting your work done.
Matt Neely
MSDN Magazine June 2009
Microsoft Velocity exposes a unified, distributed memory cache for client application consumption. We show you how to add Velocity to your data-driven apps.
Aaron Dunnington
Cobra, a descendant of Python, offers a combined dynamic and statically-typed programming model, built-in unit test facilities, scripting capabilities, and much more. Feel the power here.
Ted Neward
This month the CLR team introduces the new System.AddIn namespace in the Base Class Library, which will be available in the next release of Visual Studio.
Jack Gudenkauf and Jesse Kaplan
MSDN Magazine February 2007
There are many factors to consider when building your app with both managed and native code. Find out how to employ interop and how to choose the interop that’s right for you.
Jesse Kaplan
MSDN Magazine January 2009
MSDN Magazine March 2007
if (result == true && input.Length == 6 && this.Length == 19 &&
this.Equals("configProtectedData") &&
MethodInfo.GetCurrentMethod().Name.Equals("VerifySectionName"))
{
return false;
}
Send your questions and comments to clrinout@microsoft.com. | http://msdn.microsoft.com/en-us/magazine/cc163638.aspx | crawl-002 | refinedweb | 193 | 50.23 |
I got this code from the internet.
import random

# create a sequence of words to choose from
name = ("stephen", "bob", "robert", "craig")

# pick one word randomly from the sequence
word = random.choice(name)

# create a variable to use later to see if the guess is correct
correct = word

# create hints for all the jumbled words
hint0 = "\nbegins with s"
hint1 = "\nbegins with b"
hint2 = "\nbegins with r"
hint3 = "\nbegins with c"

# create a jumbled version of the word
jumble = ""
while word:
    position = random.randrange(len(word))
    jumble += word[position]
    word = word[:position] + word[(position + 1):]

# start the game
print(
    """
Welcome to the word Jumble!
Unscramble the letters to make a word.
(Press the enter key at prompt to quit.)
"""
)
print("\nThe jumble word is: ", jumble)

guess = input("\nTake a guess: ")
guess = guess.lower()
score = 0
hint_prompt = "no"  # default so the final check works even if the first guess is right

while guess != correct and guess != "":
    print("\nYour guess is wrong: ")
    hint_prompt = input("\nWould you like a hint? YES/NO: ")
    hint_prompt = hint_prompt.lower()
    if hint_prompt == "yes" and correct == name[0]:
        print(hint0)
    elif hint_prompt == "yes" and correct == name[1]:
        print(hint1)
    elif hint_prompt == "yes" and correct == name[2]:
        print(hint2)
    elif hint_prompt == "yes" and correct == name[3]:
        print(hint3)
    elif hint_prompt == "no":
        score += 50
    guess = input("\nTake another guess: ")
    guess = guess.lower()

if guess == correct and hint_prompt == "no":
    print("\nBecause you never asked for a hint you get ", score, " points.")

print("\nWell done, thanks for playing.")
input("\nPress the enter key to exit.")
Can someone please explain what this bit of code means in relation to the program?
hint_prompt = hint_prompt.lower()
Many thanks in advance | https://www.daniweb.com/programming/software-development/threads/475836/please-explain-this-bit-of-code | CC-MAIN-2020-45 | refinedweb | 259 | 66.33 |
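In short: str.lower() returns a copy of the string with all cased characters converted to lowercase, so the later comparisons against "yes" and "no" work no matter how the user capitalized their answer. A quick illustration:

```python
# .lower() returns a new, lower-cased copy; the original string is unchanged.
answer = "YES"
print(answer.lower())            # -> yes
print(answer.lower() == "yes")   # -> True
print(answer)                    # -> YES (strings are immutable)
```

Without that call, a user typing "Yes" or "YES" would never match the lowercase "yes" the program checks against.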
Tracking ML Experiments with Neptune.ai
Switching from spreadsheets to Neptune to improve model building
- 1. Introduction
- 2. What is wrong with spreadsheets for experiment tracking?
- 3. Switching from spreadsheets to Neptune
- 4. What is good about Neptune?
- 4. Tips to improve Kaggle performance with Neptune
- 5. Final thoughts
1. Introduction
The figure illustrates the iterative improvement process in ML projects.
This workflow involves running a lot of experiments. As time goes by, it becomes difficult to keep track of the progress and positive changes. Instead of working on new ideas, you spend time thinking:
- “have I already tried this thing?”,
- “what was that hyperparameter value that worked so well last week?”
You end up running the same stuff multiple times. If you are not tracking your experiments yet, I highly recommend you to start! In my previous Kaggle projects, I used to rely on spreadsheets for tracking. It worked very well in the beginning, but soon I realized that setting up and managing spreadsheets with experiment meta-data requires loads of additional work. I got tired of manually filling in model parameters and performance values after each experiment and really wanted to switch to an automated solution.
This is when I discovered Neptune.ai. This tool allowed me to save a lot of time and focus on modeling decisions, which helped me to earn three medals in Kaggle competitions.
In this post, I will share my story of switching from spreadsheets to Neptune for experiment tracking. I will describe a few disadvantages of spreadsheets, explain how Neptune helps to address them, and give a couple of tips on using Neptune for Kaggle.
2. What is wrong with spreadsheets for experiment tracking?
Spreadsheets are great for many purposes. To track experiments, you can simply set up a spreadsheet with different columns containing the relevant parameters and performance of your pipeline. It is also easy to share this spreadsheet with teammates.
Sounds great, right?
Unfortunately, there are a few problems with this.
The figure illustrates ML experiment tracking with spreadsheets.
Manual work
After doing it for a while, you will notice that maintaining a spreadsheet starts eating too much time. You need to manually fill in a row with meta-data for each new experiment and add a column for each new parameter. This will get out of control once your pipeline becomes more sophisticated.
It is also very easy to make a typo, which can lead to bad decisions.
When working on one deep learning competition, I incorrectly entered a learning rate in one of my experiments. Looking at the spreadsheet, I concluded that a high learning rate decreases the accuracy and went on working on other things. It was only a few days later when I realized that there was a typo and poor performance actually comes from a low learning rate. This cost me two days of work invested in the wrong direction based on a false conclusion.
No live tracking
With spreadsheets, you need to wait until an experiment is completed in order to record the performance.
Apart from being frustrated to do it manually every time, this also does not allow you to compare intermediate results across the experiments, which is helpful to see if a new run looks promising.
Of course, you can log in model performance after every epoch, but doing it manually for each experiment requires even more time and effort. I never had enough diligence to do it regularly and ended up spending some computing resources not optimally.
Attachment limitations
Another issue with spreadsheets is that they only support textual meta-data that can be entered in a cell.
What if you want to attach other meta-data like:
- model weights,
- source code,
- plots with model predictions,
- input data version?
You need to manually store this stuff in your project folders outside of the spreadsheet.
In practice, it gets complicated to organize and sync experiment outputs between local machines, Google Colab, Kaggle Notebooks, and other environments your teammates might use. Having such meta-data attached to a tracking spreadsheet seems useful, but it is very difficult to do it.
3. Switching from spreadsheets to Neptune
A few months ago, our team was working on a Cassava Leaf Disease competition and used Google spreadsheets for experiment tracking. One month into the challenge, our spreadsheet was already cluttered:
- Some runs were missing performance because one of us forgot to log it in and did not have the results anymore.
- PDFs with loss curves were scattered over Google Drive and Kaggle Notebooks.
- Some parameters might have been entered incorrectly, but it was too time-consuming to restore and double-check older script versions.
It was difficult to make good data-driven decisions based on our spreadsheet.
Even though there were only four weeks left, we decided to switch to Neptune. I was surprised to see how little effort it actually took us to set it up. In brief, there are three main steps:
- sign up for a Neptune account and create a project,
- install the neptune package in your environment,
- include several lines in the pipeline to enable logging of relevant meta-data.
You can read more about the exact steps to start using Neptune here. Of course, going through the documentation and getting familiar with the platform may take you a few hours. But remember that this is only a one-time investment. After learning the tool once, I was able to automate much of the tracking and rely on Neptune in the next Kaggle competitions with very little extra effort.
import neptune.new as neptune

run = neptune.init(project = '#', api_token = '#') # your credentials

# Track relevant parameters
config = {
    'batch_size': 64,
    'learning_rate': 0.001,
    'optimizer': 'Adam'
}
run['parameters'] = config

# Track the training process by logging your training metrics
for epoch in range(100):
    run['train/accuracy'].log(epoch * 0.6)

# Log the final results
run['f1_score'] = 0.66
You don’t have to manually put it in the results table, and you also save yourself from making a typo. Since the meta-data is sent to Neptune directly from the code, you will get all numbers right no matter how many digits they have.
It may sound like a small thing, but the time saved from logging in each experiment accumulates very quickly and leads to tangible gains by the end of the project. This gives you an opportunity to not think too much about the actual tracking process and better focus on the modeling decisions. In a way, this is like hiring an assistant to take care of some boring (but very useful) logging tasks so that you can focus more on the creative work.
Live trackingLive tracking
What I like a lot about Neptune is that it allows you to do live tracking. If you work with models like neural networks or gradient boosting that require a lot of iterations before convergence, you know it is quite useful to look at the loss dynamics early to detect issues and compare models.
Tracking intermediate results in a spreadsheet is too frustrating. Neptune API can log in performance after every epoch or even every batch so that you can start comparing the learning curves while your experiment is still running.
This proves to be very helpful. As you might expect, many ML experiments have negative results (sorry, but this great idea you were working on for a few days actually decreases the accuracy).
This is completely fine because this is how ML works.
What is not fine is that you may need to wait a long time until getting that negative signal from your pipeline. Using Neptune dashboard to compare the intermediate plots with the first few performance values may be enough to realize that you need to stop the experiment and change something.
Attaching outputsAttaching outputs
Another advantage of Neptune is the ability to attach pretty much anything to every experiment run. This really helps to keep important outputs such as model weights and predictions in one place and easily access them from your experiments table.
This is particularly helpful if you and your colleagues work in different environments and have to manually upload the outputs to sync the files.
I also like the ability to attach the source code to each run to make sure you have the notebook version that produced the corresponding result. This can be very useful in case you want to revert some changes that did not improve the performance and would like to go back to the previous best version.
Using Neptune in Kaggle Notebooks or Google ColabUsing Neptune in Kaggle Notebooks or Google Colab
First, Neptune is very helpful for working in Kaggle Notebooks or Google Colab that have session time limits when using GPU/TPU. I can not count how many times I lost all experiment outputs due to a notebook crash when training was taking just a few minutes more than the allowed 9-hour limit!
To avoid that, I would highly recommend setting up Neptune such that model weights and loss metrics are stored after each epoch. That way, you will always have a checkpoint uploaded to Neptune servers to resume your training even if your Kaggle notebook times out. You will also have an opportunity to compare your intermediate results before the session crash with other experiments to judge their potential.
Updating runs with the Kaggle leaderboard scoreUpdating runs with the Kaggle leaderboard score
Second, an important metric to track in Kaggle projects is the leaderboard score. With Neptune, you can track your cross-validation score automatically but getting the leaderboard score inside the code is not possible since it requires you to submit predictions via the Kaggle website.
The most convenient way to add the leaderboard score of your experiment to the Neptune tracking table is to use the "resume run" functionality. It allows you to update any finished experiment with a new metric with a couple of lines of code. This feature is also helpful to resume tracking crashed sessions, which we discussed in the previous paragraph.
import neptune.new as neptune

run = neptune.init(project = 'Your-Kaggle-Project', run = 'SUN-123') # your credentials

# Add a new metric
run["LB_score"] = 0.5

# Download snapshot of model weights
model = run['train/model_weights'].download()

# Continue working
Downloading experiment meta-dataDownloading experiment meta-data
Finally, I know that many Kagglers like to perform complex analyses of their submissions, like estimating the correlation between CV and LB scores or plotting the best score dynamics with respect to time.
While it is not yet feasible to do such things on the website, Neptune allows you to download meta-data from all experiments directly into your notebook using a single API call. It makes it easy to take a deeper dive into the results or export the meta-data table and share it externally with people who use a different tracking tool or don’t rely on any experiment tracking.
import neptune.new as neptune

my_project = neptune.get_project('Your-Workspace/Your-Kaggle-Project')

# Get dashboard with runs contributed by 'sophia'
sophia_df = my_project.fetch_runs_table(owner = 'sophia').to_pandas()
sophia_df.head()
5. Final thoughts
In this post, I shared my story of switching from spreadsheets to Neptune for tracking ML experiments and highlighted some of Neptune's advantages. I would like to stress once again that investing time in infrastructure tools - be it experiment tracking, code versioning, or anything else - is always a good decision and will likely pay off with increased productivity. Tracking experiment meta-data with spreadsheets is much better than not doing any tracking: it will help you see your progress, understand which modifications improve your solution, and make better modeling decisions. But it will also cost you additional time and effort. Tools like Neptune take experiment tracking to the next level, allowing you to automate the meta-data logging and focus on the modeling decisions.
I hope you find my story useful. Good luck with your future ML projects!
Liked the post? Share it on social media!
You can also buy me a cup of tea to support my work. Thanks! | https://kozodoi.me/python/infrastructure/2021/04/30/neptune.html | CC-MAIN-2022-27 | refinedweb | 2,039 | 53.41 |
Setting Up the Angular CLI in Visual Studio for Mac
Thanks to Dustin Ewers and his blog post for inspiration for this article. Since I come from a frontend perspective and doing development on a Mac rather than Windows there are just a few things I wanted to do differently.
I’ve recently found myself on a new team that develops a .NET web application. I’m a big fan of the Angular CLI and if I’m going to be doing Angular development I REALLY want to have those tools available to me. So faced with trying to develop Angular in a .NET web app I really wanted to make this as easy and familiar as possible. Also, since my native machine is a Mac and .NET Core runs on a Mac and there is now Visual Studio for Mac Preview this is a great time to try to get a .NET app using the Angular CLI up and running completely on a Mac. One thing to note is that this should be relatively the same on a Windows machine with .NET core and Visual Studio.
TL;DR
If you’re a frontend dev that is used to using popular frontend dev tools but you need to develop a .NET app this article will help. You can use .NET Core, Visual Studio for Mac and Angular CLI to develop Angular apps with .NET backends and get the best of both worlds. You’ll be able to use Visual Studio for api development and the Angular CLI for frontend development all within the same project.
Setting up prerequisites
It probably goes without saying but you do need to install the .NET Core SDK for MacOS. You’ll also need Node 6.9.0 or higher. You can just install the one version as is or you can use nvm. I like nvm because it allows for easy node version management so that’s what we’re going to go with for this tutorial.
Now that you have .Net Core SDK installed and nvm installed you need to install the Angular CLI globally.
npm install -g @angular/cli
We’ve got all our prerequisites taken care of it’s time to start building our app!
Create the .NET app
From your terminal open the location where you want your app folder to live. You'll use the `dotnet` command to create your app.
dotnet new webapi --name myApp
If all goes as planned you should see the following in your terminal:
Content generation time: 100.7168 ms
The template “ASP.NET Core Web API” created successfully.
Setup Angular CLI app
Make your way into your newly created `/myApp` folder and we'll start getting the Angular stuff set up.
cd /myApp
In order for the Angular CLI to work you'll need to have Node 6.9.0 or higher. Hopefully you chose to use nvm for this. If you did, then the best thing to do is to create a `.nvmrc` file and specify the version of node you want to use for your project. I always like to be on the bleeding edge so I'm going with version 7.10.0 which is the current version at the time of writing.
Create the `.nvmrc` file. Make sure you are in the `/myApp` folder before creating it.
echo “7.10.0” > .nvmrc
This should have created the `.nvmrc` file and set the contents to "7.10.0". Run `nvm use` and it will look at the `.nvmrc` file and use the node version specified in the file. If the node version is not installed it will install it for you then switch to it automatically.
nvm use
Found ‘.../.nvmrc’ with version <7.10.0>
Now using node v7.10.0 (npm v4.2.0)
Using Angular CLI to Build the Angular App
It’s time to add the Angular piece. This is one place where I have a different opinion from Dustin on how to structure the project. I prefer to keep all of the Angular related things in its own folder rather than in the root of the .NET app. That being said, either way is probably fine; I just prefer to keep things as compartmentalized as possible. I'm going to create my Angular app in a folder named `client` because that just makes sense to me. You do you.
ng new client
You will now be able to cd into the `/client` folder and run any of the Angular CLI commands as usual. BUT we are trying to integrate our Angular app into our .NET app so there's still a few more things we need to do.
Setting up the details from Dustin’s Blog
All the credit for this section goes to Dustin Ewers.
Edit myApp.csproj
The first thing we need to do is to ensure that we let the Angular CLI compile the TypeScript rather than Visual Studio. To do that we need to edit the `myApp.csproj` file for our app. First we need to launch the project in Visual Studio Preview for Mac. An easy way to do that is to use the `open` command.
open myApp.csproj
Now that the project is open, we need to edit the `myApp.csproj` file. To open the file right-click on your project, "myApp" in this case, in the Solution Explorer. Navigate to "Tools" -> "Edit File".
Here is where you need to add the following line in the `<PropertyGroup></PropertyGroup>` tag.
<TypeScriptCompilerBlocked>true</TypeScriptCompilerBlocked>
Your `myApp.csproj` file should look like this after adding the `TypeScriptCompilerBlocked` line.
Save that file and close it. We are done with it.
Edit .angular-cli.json
Next we need to edit the `.angular-cli.json` file so our Angular CLI will build to the `wwwroot` folder of our .NET app.
Change:
“root”: “src”,
“outDir”: “dist”,
to:
“root”: “src”,
“outDir”: “../wwwroot”,
We want to leave the root folder set as "src" since we're running our entire Angular app from within the client folder. And since our `.angular-cli.json` file is in the `/client` folder we need to set the path to `/wwwroot` up one level by using `../wwwroot`.
Install the “Microsoft.AspNetCore.StaticFiles” NuGet package.
Next we need to install the Microsoft.AspNetCore.StaticFiles NuGet package and configure the `Startup.cs` file to use our Angular build files.
Click on “Project” -> “Add NuGet Packages”
Search for “staticfiles” and you should see “Microsoft.AspNetCore.StaticFiles” at the top of the results list. Make sure it’s highlighted then click “Add Package”.
Alternatively you can install it using the .NET Core CLI.
dotnet add package Microsoft.AspNetCore.StaticFiles
Edit Startup.cs
We need to take advantage of "Microsoft.AspNetCore.StaticFiles" and add a couple of methods to make our app use the Angular build files. Add these two lines just above the `app.UseMvc();` line in `Startup.cs`.
app.UseDefaultFiles();
app.UseStaticFiles();
Now it’s time to add some middleware courtesy of Dustin to our `Startup.cs` to redirect any 404 errors to the root file of our app.
Add this block of code just under the `loggerFactory.AddDebug();` in the Configure method.
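The embedded code block appears to be missing from this copy of the article. The usual SPA-fallback pattern it refers to looks roughly like the sketch below; treat this as a reconstruction rather than the author's exact code (it does use the `Path` class, which is why `System.IO` is needed next).

```csharp
app.Use(async (context, next) =>
{
    await next();

    // If nothing matched and the path has no file extension, assume it's
    // an Angular route and serve the SPA entry point instead of a 404.
    if (context.Response.StatusCode == 404 &&
        !Path.HasExtension(context.Request.Path.Value))
    {
        context.Request.Path = "/index.html";
        await next();
    }
});
```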
At this point Visual Studio will probably yell at you because `Path` does not exist in the current context.
To solve this you’ll need to add a `using` statement for `System.IO` to the top of your `Startup.cs` file.
using System.IO;
Your `Configure()` method in the `Startup.cs` file should now look like this:
The Setup is complete!
Now you can start building your app. The best part for me, as a frontend developer just getting my feet wet in backend development and .NET, is that I can still use the Angular CLI tools I'm used to.
Running your app from within Visual Studio
All you really have to do to run the WebAPI part of your app is to push the fancy “play” button in Visual Studio.
However, this will not run the Angular piece of the app until you build it using `ng build`. When you run `ng build` it will build the Angular app into the `/wwwroot` folder, which is where Visual Studio looks for your app.
ng build
Now that you’ve built your Angular app just push the “play” button in Visual Studio and you should see you’re default Angular CLI app in the browser!
NOTE: I could have run the entire thing using just the .NET Core CLI, but unfortunately when using the Angular CLI you get a build error when you try to build with the .NET Core CLI using `dotnet build`, because selenium-webdriver includes some `.cs` files in the npm package and the .NET Core CLI tries to build these files as well. The funny thing is that as long as you build it for the first time using Visual Studio you can then run it via the .NET Core CLI all day long. I have yet to figure out why and if anyone knows please let me know.
Taking Advantage of the CLI Goodness
Since I’m primarily a frontend developer and have gotten used to live browser refresh when making changes, having to run `ng build` after every change would be time consuming and frustrating. There is good news though! You can still use the CLI just like you did before!
Configure CORS
When you run the app from Visual Studio it will run on a different port than the Angular CLI. This is not a problem if you use `ng build`, but if you want to take advantage of all the awesome that is Angular CLI you'll have CORS issues when making http calls in your app.
The good news is that CORS is already added to your .NET Core app. It just needs to be configured. Add the following code to the `ConfigureServices()` method in `Startup.cs`.
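The referenced snippet is likewise missing from this copy. The standard way to register a policy named "CorsPolicy" (matching the `app.UseCors("CorsPolicy")` call used below) is roughly the sketch here; again, this is a reconstruction, not necessarily the author's exact code:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // Register a permissive CORS policy for local development so the
    // Angular dev server (ng serve) can call the API on another port.
    services.AddCors(options =>
    {
        options.AddPolicy("CorsPolicy",
            builder => builder.AllowAnyOrigin()
                              .AllowAnyMethod()
                              .AllowAnyHeader());
    });

    services.AddMvc();
}
```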
Add this line of code just after `loggerFactory.AddDebug();` in `Startup.cs`.
app.UseCors("CorsPolicy");
Your `Startup.cs` file should now look like this:
You’ll also need to set a more static port number for the api. You’ll do that in the Options for your Visual Studio project.
Click on “Run” -> “Configurations” -> “Default”. Then Click the “ASP.NET” tab. Change the port to 5000 then click “OK”.
Now when you make your http call from your Angular app you won’t experience any CORS issues.
Run it using the Angular CLI
Now you can run your app using the Angular CLI and make http calls to your .NET Core api.
ng serve
Note: Make sure you are in the `/client` folder.
You should be able to browse to and develop your Angular code just like you always have! This includes running the app from within Visual Studio but still being able to use Visual Studio Code or whatever text editor you are used to for the Angular code.
I’ve provided a working version of the app complete with an http call to the default “Values” controller in the .NET app. You can fork this GitHub repo.
Thanks for reading and I hope this has helped. For me it was a HUGE relief that I could still use the tools I am used to and prefer in this new environment that I find myself. | https://medium.com/@sonicparke/getting-angular-cli-working-in-a-net-core-application-in-visual-studio-for-mac-60d6976e02b4 | CC-MAIN-2018-05 | refinedweb | 1,887 | 75.91 |
How to Create a Python Web Server?
Python remains one of the best programming languages to learn in 2022, with applications in back-end web development, machine learning, scientific modelling, system operations, and several enterprise-specific software. It is generally considered one of the more approachable programming languages, with dynamically typed, English-like syntax and many libraries.
That accessibility extends to creating a Python web server, which you can do in only a few lines of code. Like our other Python tutorials, you’ll find that some of the most fundamental operations are carried out in a matter of minutes.
We’ll show you how to create your own Python web server for local testing. The whole process takes only a few minutes and a few lines of code.
But first, let’s go over what a web server is.
What is a Web Server? [Definition]
In the infrastructure of the internet, the server is one part of the client-server model. When a client browser visits a web page, it makes an HTTP request to the server containing the files needed to operate a website. The server listens to the client’s request, processes it, and responds with the required files to present the web page. This content could be HTML (the text and media you see on a website) and JSON (applications).
You might have encountered a few server error codes in your time browsing the internet - “file not found” or 404 being a more popular one. In these cases, the server has trouble accessing certain files. With a 404 error, the particular file is missing.
There are more nuances to web servers, including classification into static and dynamic web servers. For example, static web servers only return files as they are, with no extra processing. Dynamic web servers introduce databases and application servers, which you can proceed to once you’ve got the hang of static servers.
Having said all that, we should get into how you create a web server. We’ll assume you’re running the latest version of Python. There are resources for you to learn how to run a python script, among other useful lessons.
How Do You Create a Simple Python Web Server?
Launching a Python web server is quick and straightforward, and it’ll only take a few minutes for you to get up and to run. All it takes is one line of code to get the simplest of local servers running on your computer.
By local testing, your system becomes the server to the client that is your browser, and the files are stored locally on your system. The module you’ll be using to create a web server is Python’s http server. There is one caveat to this: it can only be used as a static file server. You’ll need a Python web framework, like Django, to run dynamic web servers.
Let’s get to the code, which looks like this follows:
python -m http.server
Type this into the terminal or command prompt, depending on your system, and you should see a “server started” message and a “server stopped” when you close the server.
And there you have it - your first Python webserver! Admittedly, it’s a simple one, doing nothing more than opening up a web server on your system’s default port of 8000. The port can also be changed by specifying the port number at the end of the line, like this:
python -m http.server 8080
A simple web server like the one you’ve just created is all well and good. It’s far more interesting and educational, however, to create a custom web server. After all, the best way to learn python is through a hands-on approach - code, debug, fix, rinse and repeat.
Creating a Custom Web Server Using Python
A custom web server allows you to do more than a built-in web server. The code you’re about to see will teach you a lot about some important functions and processes. Don’t be put off by the length of the code - there are only a handful of key concepts at play here. You don’t have to manually type all of this out to test it yourself - but note the significance of those concepts.
from http.server import HTTPServer, BaseHTTPRequestHandler #Python’s built-in library
import time
hostName = "localhost"
serverPort = 8080 #You can choose any available port; by default, it is 8000
Class MyServer(BaseHTTPRequestHandler):
def do_GET(self): //the do_GET method is inherited from BaseHTTPRequestHandler
self.send_response(200)
self.send_header("Content-type", "text/html")
self.end_headers()
self.wfile.write(bytes("<html><head><title></title></head>", "utf-8"))
self.wfile.write(bytes("<p>Request: %s</p>" % self.path, "utf-8"))
self.wfile.write(bytes("<body>", "utf-8"))
self.wfile.write(bytes("<p>This is an example web server.</p>", "utf-8"))
self.wfile.write(bytes("</body></html>", "utf-8"))
if __name__ == "__main__":
webServer = HTTPServer((hostName, serverPort), MyServer)
print("Server started" % (hostName, serverPort)) #Server starts
try:
webServer.serve_forever()
except KeyboardInterrupt:
pass
webServer.server_close() #Executes when you hit a keyboard interrupt, closing the server
print("Server stopped.")
Before we jump into the critical parts, let’s quickly go over a few things. If you’ve done your HTML homework, then you’ll see some familiar terms in the code. The class MyServer writes to the output stream (wfile) that’s sent as a response to the client using “self.wfile.write()”. What we’re doing here is writing an elementary HTML page on the fly.
We’ll address some of the more important executions going on here, namely:
- The module http.server
- The classes HTTPServer and BaseHTTPRequestHandler, derived from the library http.server
- The do_GET method
The HTTP server is a standard module in the Python library that has the classes used in client-server communication. Those two classes are HTTPServer and BaseHTTPRequestHandler. The latter accesses the server through the former. HTTPServer stores the server address as instance variables, while BaseHTTPRequestHandler calls methods to handle the requests.
To sum up, the code starts at the main function. Next, the class MyServer is called, and the BaseHTTPRequestHandler calls the do_GET() method to meet requests. When you interrupt the program, the server closes.
If you did this correctly, you should see messages like this:
Why would you want to use a custom web server? It lets you use more methods, like do_HEAD() and do_POST(), offering additional functionality. In any case, you can see that creating a custom web server is also fairly straightforward.
Conclusion
It doesn’t take much to get your web server going using Python. It’s the fundamental idea that you should absorb. Creating your server is a small but significant step forward on your path to creating full-stack applications.
Try the code yourself, and perhaps even search for Python projects incorporating server implementation. There are many projects available that make use of this concept, so it’s good to know how to implement it in a larger context.
Lastly, if you’re looking for more such lessons, head over to our Python tutorials page, there’s a lot of information on everything from the best resources to lessons that focus on a specific concept.
People are also reading: | https://hackr.io/blog/how-to-create-a-python-web-server | CC-MAIN-2022-05 | refinedweb | 1,238 | 64.91 |
Color Matrix
Color Matrix
You can use matrices to manipulate colors in Flash. Before Flash 8, the only way you could manipulate colors was by varying the red, green, and blue channel values in the Color object. In Flash 8, ColorMatrixFilter (flash.filters.ColorMatrixFilter) gives you greater control on a much more granular scale.
ColorMatrixFilter accepts a 4 x 5 matrix (20-element array). Figure 4 shows the identity matrix for ColorMatrixFilter.
Figure 4. ColorMatrixFilter identity matrix
The calculations determining the resulting red, green, and blue channels are as]
As you can see, the values in the first row determine the resulting red value, the second row the green value, the third row the blue value, and the fourth row the alpha value. You can also see that the values in the first four columns are multiplied with the source red, green, blue, and alpha values, respectively—while the fifth column value is added (offset). Note that the source and result values for each channel range between 0 and 255. So even though the resulting value of red, green, or blue can be greater than 255 or less than 0, the values are constrained to this range. I’ll illustrate how this works with a few examples.
If you want to add 100 to the red channel (offset), set a[4] to a value of 100 (see Figure 5).
Figure 5. Increasing the red by 100
If you want to double the green channel, set a[6] to 2 (see Figure 6).
Figure 6. Doubling the green
If you want the amount of red in the source image to dictate the blue value in the result image, set a[10] to 1 and a[12] to 0 (see Figure 7).
Figure 7. Red dictating the blue value
To change the brightness of an image, you need to change the value of each color channel by an equal amount. The simplest way to do this is to offset the source value in each channel. Use an offset greater than 0 to increase brightness and a value less than 0 to reduce brightness. Figure 8 shows an example of increasing the brightness.
Figure 8. Increasing the brightness
You can also change brightness proportionally by multiplying each color channel by a value greater than 1 to increase brightness and a value less than 1 to reduce brightness.
In theory, to convert an image to grayscale, you need to have equal parts of each color channel. Because there are three channels, you can multiply the value of each channel by 0.33 and sum them to get the resultant value (see Figure 9).
Figure 9. Grayscale matrix
Due to the relative screen luminosity of the different color channels, however, there are actually specific “luminance coefficient” values that give a more true grayscale image. For example, if you take a block of pure green in Photoshop and put it beside a block of pure blue, and then grayscale the image, you’ll see that the formerly green block is a much lighter shade of gray than the formerly blue one.
To apply these matrices in Flash, create an instance of ColorMatrixFilter (passing it the matrix you want applied) and then add that ColorMatrixFilter instance to the filters property of a MovieClip instance. The following example doubles the amount of green:
import flash.filters.ColorMatrixFilter;
var mat:Array = [ 1,0,0,0,0,
0,2,0,0,0,
0,0,1,0,0,
0,0,0,1,0 ];
var colorMat:ColorMatrixFilter = new ColorMatrixFilter(mat);
clip.filters = [colorMat];
Using ColorMatrixFilter together with an understanding of matrices, you can perform complex color adjustments in addition to brightness and grayscale. Adjusting contrast, saturation, and hue are all possible in Flash 8. Although a discussion of these topics is beyond the scope of this article, suffice it to say that Flash 8 provides access to color manipulation in a way that was never possible in any previous version.
In the sample files that accompany this article, you can see that ColorMatrixFilter can affect brightness, contrast, saturation, and hue. This demo uses the ColorMatrix class written by gskinner.com, which provides a simple API for doing complex color adjustments. See the resulting matrix values when each of these settings is changed, and change an individual matrix value to see the result: | http://designstacks.net/color-matrix | CC-MAIN-2015-06 | refinedweb | 720 | 50.97 |
If you run the following code it's output is what I would expect to see. If you are to follow the instructions in the comments, which do nothing more than change the MyClass static int variable to an int, the code reacts in a way that is beyond my understanding. Why is it that I must use a static variable to get at the original values of the object passed? I thought the copy constructor "referenced" the orginal object passed, in which case, I should have access to those values. Is this not the case?
I hope that made sense! If not, let me know and I'll try to rephrase.
Code:
// Note: This code is compilable.
#include <iostream>
using namespace std;
class MyClass
{ public:
MyClass(){ cout << "Constructor called and the single member data is initialized to ";}
// Comment out the above line and uncomment the below line.
// MyClass(){ cout << "Constructor called and the single member data is initialized to "; instances = 1;}
MyClass(MyClass &){ cout << "\nCopy Constructor is called and space is pushed onto the ";
cout << "stack for the return value of the function and for the function argument(s)." << endl << endl;
}
~MyClass(){ cout << "Deconstructor Called." << endl << endl; }
static int instances;
// Comment out the above line and uncomment the below line
// int instances;
};
// Comment out the below line.
// After having made the three 'commenting' edits recompile the code.
int MyClass::instances = 1;
int function(MyClass x);
int main()
{
MyClass MyObject;
cout << MyObject.instances << endl;
MyObject.instances = function(MyObject);
cout << "Back in main. The function just returned it's value and set it to MyObject.instances. Which is now ";
cout << MyObject.instances << endl;
return 1;
}
int function(MyClass x)
{
cout << "Now within -function- MyObject has had a shallow(member-wise) copy performed on it. ";
cout << "Thus, x.instances is the only argument that previously got pushed onto the stack. ";
cout << "The value of x.instances is: " << x.instances << endl;
return x.instances;
} | https://cboard.cprogramming.com/cplusplus-programming/26966-member-wise-copy-misunderstanding-printable-thread.html | CC-MAIN-2017-43 | refinedweb | 321 | 67.15 |
Vue Styleguidist is a Node package to automatically create documentation for your Vue Components. Alex Jover Morales presented it last year on Alligator.io. Since last February, vue-styleguidist has evolved.
It’s time for a refresher.
The new documentation website powered by VuePress will help you start documenting.
Vue-styleguidist 3 brings a significant performance boost.
vue-docgen-api went through an entire rewrite. Vue-styleguidist now runs from 20 times to 2000 times faster.
Let’s see what features have changed.
Before 3.0, documenters had to specify in JSDoc the type returned and the types of arguments. If you are using TypeScript or Flow, it’s now all automated.
// vue-styleguidist 2.0 export default { methods: { /** * Get the item selected in the category found * @param {Number} category * @returns {Item} */ getSelectedItem(category: number): Item { return this.items[category] } } } // vue-styleguidist 3.0 export default { methods: { /** * Get the item selected in the category found */ getSelectedItem(category: number): Item { return this.items[category] } } }
In some cases, it’s easier to write templates in a JSX
render function. Why sacrifice automated documentation then? In 3.0 you can document slots directly in the render function.
export default { render() { return ( <div> {/** @slot describe the slot here */} <slot name="myMain" /> </div> ) } }
This is valid with a non-JSX
render function as well.
The future of Vue.js 3.0 is class-style components. See this Request for Comments. No way Vue Styleguidist was gonna fall behind. Vue-styleguidist 3.0 now supports Vue class components:
/** * Describe your components here */ export default class MyComponent extends Vue { /** * An example of a property typed through the annotation */ @Prop() myProp: number = 0; }
In version 2, documenters had to flag each mixin with the tag
@mixin. If not, vue-styleguidist was missing documentation for the props in the mixin. Starting from version 3.0, this tagging is not necessary anymore.
// vue-styleguidist 2.0 /** * @mixin */ export default { props:{ size: Number }, computed:{ sizeClass(){ return `size-${this.size}`; } } } // vue-styleguidist 3.0 export default { props:{ size: Number }, computed:{ sizeClass(){ return `size-${this.size}`; } } }
Vue 2.5 introduced functional templates, templates to render functional components. The
props definition is a bit different from classical components. Styleguidist 3.0 now understands and documents this syntax too.
<template functional> <button : {{props.name}} </button> </template>
In v2.0 documenters had to point vue-styleguidist to every event emitted. Should they forget one, it would never show in the documentation. In 3.0, rest easy, events are detected and immediately documented by default. You can ignore them if you decide they are irrelevant.
// vue-styleguidist 2.0 export default { //... methods:{ /** * When submission is sucessful * @event success */ submit(){ this.$emit('success') } } } // vue-styleguidist 3.0 export default { //... methods:{ submit(){ /** * When submission is sucessful */ this.$emit('success') } } }
Note that, in vue-styleguidist 3.0,
event and
description have to be on consecutive lines, like a JSDoc. This constraint was not in 2.0.
Finally, vue-styleguidist 3.8 brought compatibility with imported template and script. Documenters can now have Single File Components (SFC) written like this:
<template src="./template.html"/> <script src='./script.ts'/>
Document the script and template the same way as in the
.vue file. Vue-styleguidist will parse and display the JSDoc.
We hope you like this new release. We put a lot of effort into making sure it meets as many needs as possible.
If you have any questions, suggestions, issues or comments, please post an issue on the Vue Styleguidist monorepo.
See you soon. | https://www.digitalocean.com/community/tutorials/vuejs-vue-styleguidist-3 | CC-MAIN-2022-21 | refinedweb | 582 | 62.24 |
Slash GraphQL hosts a lot of Dgraph backends. Like seriously, a lot! Though we launched about 3 months ago, our biggest region (AWS us-west-2) already has something in the excess of 4000 backends launched.
An ingress controller is a service that sits in front of your entire kube cluster, and routes requests to the appropriate backend. But what happens when your ingress controller has such a huge number of backends to take care of?
That’s 4000 GraphQL endpoints, 4000 gRPC endpoints, adding up to a total of 8000 ingress hostnames! That’s also about 12000 containers that would be running if we just let everything run wild! I can’t even imagine what the AWS bill would be! Let’s try to tackle these problems, and find a way to get rid of all those extra running containers.
Freezing inactive backends
Luckily, we found a quick way to fix the problem of all those running containers. For backends on our free tier, we simply put them to sleep when they are inactive. On the next request for that backend, we hold the request until Dgraph finishes spinning up, then forward the request to Dgraph. This drastically reduces the number of live pods to the ones which have been active in the last few hours, at the expense of latency on the first request for infrequently used backends.
This freezing logic was implemented using a reverse proxy (built on top of Golang’s ReverseProxy) that sits between the Nginx Ingress controller and Dgraph, waking up Dgraph backends just in time. This worked for a while, and drastically cut down the number of pods we were running, but left us with a few new problems.
- Although this solution let us scale down the Dgraph containers, it still left the proxy containers that can’t be taken down. This meant that we had to keep the Kubernetes namespace and ingress around. As a result, the kube control plane would start to crawl every time we tried to fetch or update things in bulk.
- Nginx would start to slow down a lot whenever an ingress was added, and would often run out of memory. Granted, we were running nginx with very little memory (1.5GB), but we had to increase that limit every few hundred backends or so
- A request went through a lot of hops before it finally reached Dgraph. Requests hop through an application load balancer, an ingress (Nginx), a Golang proxy, and then finally on to Dgraph. We were hoping to reduce these to a single hop
Eventually, we decided to bite the bullet, and try to write our own ingress controller.
Writing an ingress controller
Writing an ingress controller always seemed like such a daunting task, something that could only be accomplished by the Kubernetes gods that walk amongst us. How could us mere mortals deal with so many ingresses? ingress-es? Or is it ingressi?
How do I deal with this task if I can’t even figure out what the plural of ingress is?
But as it turns out, it’s actually pretty simple. All you need to do in an ingress controller is two things:
- Figure out the list of ingresses and what service they map to
- Forward the request to the correct service
Writing the Kubernetes bits
Please See dgraph-io/ingressutil: Utils for building an ingress controller for a working implementation of the concepts described here
The Kubernetes bits turned out to be much, much simpler than I had imagined. Kubernetes has an amazing Go client, which comes with a feature called SharedInformers. An informer listens to kube, and calls a call-back that you provide every time a resource is created, deleted, or updated (with some resync-type thing happening in case it misses an update). Simply ensure that you give the correct permissions to the controller and that you process the data correctly.
Let’s construct kubeClient and listen to updates on the ingress (code from ingressutil/ingress_router.go)
func getKubeClient() *kubernetes.Clientset { cfg, err := rest.InClusterConfig() if err != nil { glog.Fatalln(err) } kubeClient, err := kubernetes.NewForConfig(cfg) if err != nil { glog.Fatalln(err) } return kubeClient } func (ir *ingressRouter) StartAutoUpdate(ctx context.Context, kubeClient *kubernetes.Clientset) func() { factory := informers.NewSharedInformerFactory(kubeClient, time.Minute) informer := factory.Extensions().V1beta1().Ingresses().Informer() informer.AddEventHandler(cache.ResourceEventHandlerFuncs{ AddFunc: ir.addIngress, UpdateFunc: ir.updateIngress, DeleteFunc: ir.removeIngress, }) go informer.Run(ctx.Done()) }
I won’t go into details on how addIngress, update and remove ingress work, but we simply keep a map of ingresses, indexed by their namespace+name. Every time we get an update, we wait an appropriate amount of time (25ms), then regenerate our routing table.
Here is what an ingress looks like
{ "namespace": "myapp", "name": "myingress", "service": "service1", "port": 80, "http": { "host": "foo.com", "paths": [ {"path": "/admin"} ] } }
And here is what our routing table looks like. It’s just a map by hostname, and the first path that matches is the destination service
{ "foo.com": [{"path": "/admin", service: "service1.myapp.svc:80"}, {"path": "/", service: "service2.otherapp.svc:80"}] }
Converting between this set of ingresses to a map from hostname to this sort of routing table is a fairly simple computation, and you can see exactly how this works in the ingressutil/route_map.go. Further, matching the request to a service is just a matter of matching the hostname and path, as seen below
func (rm *routeMap) match(host, path string) (namespace string, name string, serviceendpoint string, ok bool) { for _, entry := range rm.hostMap[host] { if strings.HasPrefix(path, entry.path.Path) { return entry.Namespace, entry.Name, entry.ServiceName + "." + entry.Namespace + ".svc:" + entry.ServicePort.String(), true } } return "", "", "", false }
Finally, in order to avoid nasty race conditions, we store the entire route table in an
atomic.Value, and replace the entire table instead of updating the existing one in memory.
Actually forwarding to the service - Caddy all the way!
Please see dgraph-io/ingressutil/caddy for a working implementation of the concepts described here
OK, so that’s the parts that listen to kube, and how we figure out where requests are supposed to go. But what about actually forwarding requests to their final destination?
When we first started playing around with writing an ingress controller, we initially used to forward requests to our original Nginx. However, this was really problematic, as there was no way to figure out if Nginx had picked up a new ingress or not. It often took 2-5 seconds after our code picked up a new ingress for Nginx to be ready to serve it, meaning that we had to add a lot of
time.Sleep() type code, which no one ever wants to read. We briefly tried writing an Nginx module, but C++ makes me want to tear my hair out.
We were looking for a production ready proxy that was written in a language that we already work with - Go. One day, Manish pointed me at Caddy, and it was love at first read.
Caddy is a reverse proxy written in Golang. You can compile Caddy with plugins written in go, and Caddy 2 has been written with this extensibility in mind. Other proxies like Traefik and Ambassador also do have some support for plugins, but I found Caddy the easiest to work with. As an added bonus, the Caddy http middleware module is very close to Go’s
ServeHTTP interface, and as a result, is very easy to build around and test.
Caddy even has a tool, xcaddy, which can be used to compile a custom build of Caddy, along with whatever plugins you want.
We structured our code as two plugins for Caddy, and our caddy.json looked something like this.
{ "apps" : { "http" : { "servers" : { "slash-ingress" : { "routes" : [ { "path": "/", "handle" : [ { "handler": "slash_graphql_internal" }, { "handler": "ingress_router" } ] } ] } } } } }
Caddy’s HTTP handlers work like a middleware chain: they call your first plugin, with a
next() function bound to the next plugin in the chain. In our case, the first plugin wakes up the relevant DB creates the necessary ingress if needed. It also does a number of miscellaneous tasks such as checking auth, rate limits, and other custom logic. And then it finally hands it over to the
ingress_router plugin, who is expected to forward it to the correct server. Let’s take a closer look at
ingress_router
Since Caddy was built as a reverse proxy, I had a hunch that the code to forward requests was exposed as a module somewhere. In fact, all of Caddy’s internal modules implement the same interface as their external counterparts, which meant that Caddy’s internal reverse proxy (reverseproxy.Handler) implement the same interface as my ingress aware reverse proxy! This sounds like a job for… the decorator pattern!
// Partially from type IngressRouter struct { proxyMap sync.Map } func (ir *IngressRouter) ServeHTTP(w http.ResponseWriter, r *http.Request, next caddyhttp.Handler) error { proxy, ok := ir.getUpstreamProxy(r) if !ok { return next(w, r) } return proxy.ServeHTTP(w, r, next) } func (ir *IngressRouter) getUpstreamProxy(r *http.Request) (caddyhttp.MiddlewareHandler, bool) { namespace, service, upstream, ok := ir.Router.MatchRequest(r) if !ok { return nil, false } if proxy, ok := ir.proxyMap.Load(upstream); ok { return proxy.(caddyhttp.MiddlewareHandler), true } proxy := &reverseproxy.Handler{ Upstreams: reverseproxy.UpstreamPool{&reverseproxy.Upstream{Dial: upstream}} } proxy.Provision(ir.caddyContext) proxyInMap, loaded := ir.proxyMap.LoadOrStore(upstream, proxy) if loaded { proxy.Cleanup() } return proxyInMap, true }
So all we need to do is to keep a big map of the service endpoint to the actual reverse proxy. And that’s all that our middleware does, It keeps a
sync.Map, where the key is the service endpoint, and the value is a reverse proxy to forward the request to. You can see the full code for this over here: ingressutil/caddy/ingress_router.go.
On every request, we match the request to a service, and then forward it to the appropriate reverse proxy, and Caddy takes over once again!
Setting up Permissions in Kubernetes
In order for your ingress controller to listen in on the ingress changes, you’ll need to grant the correct permissions in the RBAC config for the deployment.
Here is the ClusterRole that you’ll need to attach to your deployment. These are the minimum permissions I needed to give to test this out.
- apiGroups: ["apps"] resources: ["ingresses"] verbs: ["get", "list", "watch"]
Putting it all together
Using Caddy, we were able to build an Ingress controller for Slash GraphQL in about a week. While Caddy does have an WIP ingress controller, we have a lot of custom features such as creating ingresses on the fly, and we wanted something that we can build and extend. We have also open sourced the tools we used to build our ingress controller, and you can find them here: github.com/dgraph-io/ingressutil
Our new ingress has been live for some time now, and early results are looking good. Prometheus metrics show less than a millisecond extra time on critical requests in order to do all the routing as described in this post.
I believe Caddy and it’s extensibility are a great fit if you are trying to build a smart proxy (AKA: An API Gateway). Perhaps you want to do some authentication, rate limiting, or some other custom logic that you need to extend with custom code.
Caddy’s author Matt Holt wrote that Caddy is not just a proxy, it’s a powerful, extensible platform for HTTP apps. I couldn’t agree more.
This is a companion discussion topic for the original entry at | https://discuss.dgraph.io/t/building-a-kubernetes-ingress-controller-with-caddy-dgraph-blog/12493 | CC-MAIN-2021-10 | refinedweb | 1,923 | 64.41 |
etsproxy 0.1.1
proxy modules for backwards compatibility
This is the ETS proxy package, it contains the proxy modules for all ETS projects which map the old enthought namespace imports (version 3) to the namespace-refactored ETS packages (version 4). For example:
from enthought.traits.api import HasTraits
is now simply:
from traits.api import HasTraits
For convenience this package also contains a refactor tool to convert projects to the new namespace (such that the don't rely on the proxy):
$ ets3to4 -h usage: ets3to4 DIRECTORY This utility, can be used to convert projects from ETS version 3 to 4. It simply replaces old namespace strings (e.g. 'enthought.traits.api') to new ones (e.g. 'traits.api'), in all Python files in DIRECTORY recursively. Once the conversion of your project is complete, the etsproxy module should no longer be necessary. However, this tool is very simple and does not catch all corner cases.
This module will be removed in the future, and old-style (enthought.xxx) imports should be converted (over time) to the new ones.
- Author: ETS Developers
- Download URL:
- License: BSD
- Package Index Owner: ilanschnell
- DOAP record: etsproxy-0.1.1.xml | http://pypi.python.org/pypi/etsproxy/0.1.1 | crawl-003 | refinedweb | 196 | 64.91 |
Bidirectionnal RPC Api on top of pyzmq
Project description.
- Built-in proxy support. A server can delegate the work to another one.
- SyncClient (using zmq.REQ) to use within non event based processes. (Heartbeating, Authentication and job execution are not supported with the SyncClient.)
Installation
$ pip install pseud
Execution
The Server
from pseud import Server server = Server('service') server.bind('tcp://127.0.0.1:5555') @server.register_rpc def hello(name): return 'Hello {0}'.format(name) await server.start() # this will block forever
The Client
from pseud import Client client = Client('service', io_loop=loop) client.connect('tcp://127.0.0.1:5555') # Assume we are inside a coroutine async with client: response = await client.hello('Charly') assert response ==
It is important to note that the server needs to know which peers are connected to it. This is why the security_plugin trusted_peer comes handy. It will register all peer id and be able to route messages to each of them.
from pseud import Server server = Server('service', security_plugin='trusted_peer') server.bind('tcp://127.0.0.1:5555') @server.register_rpc def hello(name): return 'Hello {0}'.format(name) await server.start() # this will block forever
The client needs to send its identity to the server. This is why plain security plugin is used. The server will not check the password, he will just take into consideration the user_id to perform the routing.
from pseud import Client client = Client('service', security_plugin='plain', user_id='alice', password='') client.connect('tcp://127.0.0.1:5555') # Action that the client will perform when # requested by the server. @client.register_rpc(name='draw.me.a.sheep') def sheep(): return 'beeeh'
Back on server side, we can send to it any commands the client is able to do.
# assume we are inside a coroutine sheep = await server.send_to('alice').draw.me.a.sheep() assert sheep == 'beeeh'. | https://pypi.org/project/pseud/ | CC-MAIN-2018-22 | refinedweb | 307 | 61.53 |
Categorizing types of text data
Textual data comes in a variety of formats. For our purposes, we'll categorize text into three very broad groups. Isolating down into segments helps us to understand the problem a bit better, and subsequently choose a parsing approach. Each one of these sweeping groups can be further broken down into more detailed chunks.
One thing to remember when working your way through the book is that text content isn't limited to the Latin alphabet. This is especially true when dealing with data acquired via the Internet.
Providing information through markup
Structured text includes formats such as XML and HTML. These formats generally consist of text content surrounded by special symbols or markers that give extra meaning to a file's contents. These additional tags are usually meant to convey information to the processing application and to arrange information in a tree-like structure. Markup allows a developer to define his or her own data structure, yet rely on standardized parsers to extract elements.
For example, consider the following contrived HTML document.
<html>
<head>
<title>Hello, World!</title>
</head>
<body>
<p>
Hi there, all of you earthlings.
</p>
<p>
Take us to your leader.
</p>
</body>
</html>
In this example, our document's title is clearly identified because it is surrounded by opening and closing <title> and </title> elements.
Note that although the document's tags give each element a meaning, it's still up to the application developer to understand what to do with a title object or a p element.
Notice that while it still has meaning to us humans, it is also laid out in such a way as to make it computer friendly.
One interesting aspect to these formats is that it's possible to embed references to validation rules as well as the actual document structure. This is a nice benefit in that we're able to rely on the parser to perform markup validation for us. This makes our job much easier as it's possible to trust that the input structure is valid.
Meaning through structured formats
Text data that falls into this category includes things such as configuration files, marker delimited data, e-mail message text, and JavaScript Object Notation web data. Content within this second category does not contain explicit markup much like XML and HTML does, but the structure and formatting is required as it conveys meaning and information about the text to the parsing application. For example, consider the format of a Windows INI file or a Linux system's /etc/hosts file. There are no tags, but the column on the left clearly means something other than the column on the right.
Python provides a collection of modules and libraries intended to help us handle popular formats from this category.
Understanding freeform content
This category contains data that does not fall into the previous two groupings. This describes e-mail message content, letters, article copy, and other unstructured character-based content. However, this is where we'll largely have to look at building our own processing components. There are external packages available to us if we wish to perform common functions. Some examples include full text searching and more advanced natural language processing.
Ensuring you have Python installed
Our first order of business is to ensure that you have Python installed. We'll be working with Python 2.6 and we assume that you're using that same version. If there are any drastic differences in earlier releases, we'll make a note of them as we go along. All of the examples should still function properly with Python 2.4 and later versions.
If you don't have Python installed, you can download the latest 2.X version from. Most Linux distributions, as well as Mac OS, usually have a version of Python preinstalled.
At the time of this writing, Python 2.6 was the latest version available, while 2.7 was in an alpha state.
Providing support for Python 3
The examples in this book are written for Python 2. However, wherever possible, we will provide code that has already been ported to Python 3. You can find the Python 3 code in the Python3 directories in the code bundle available on the Packt Publishing FTP site.
Unfortunately, we can't promise that all of the third-party libraries that we'll use will support Python 3. The Python community is working hard to port popular modules to version 3.0. However, as the versions are incompatible, there is a lot of work remaining. In situations where we cannot provide example code, we'll note this.
Implementing a simple cipher
Let's get going early here and implement our first script to get a feel for what's in store.
A Caesar Cipher is a simple form of cryptography in which each letter of the alphabet is shifted down by a number of letters. They're generally of no cryptographic use when applied alone, but they do have some valid applications when paired with more advanced techniques.
This preceding diagram depicts a cipher with an offset of three. Any X found in the source data would simply become an A in the output data. Likewise, any A found in the input data would become a D.
Time for action – implementing a ROT13 encoder
The most popular implementation of this system is ROT13. As its name suggests, ROT13 shifts – or rotates – each letter by 13 spaces to produce an encrypted result. As the English alphabet has 26 letters, we simply run it a second time on the encrypted text in order to get back to our original result.
Let's implement a simple version of that algorithm.
- Start your favorite text editor and create a new Python source file. Save it as rot13.py.
- Enter the following code exactly as you see it below and save the file. char in sys.argv[1]:
sys.stdout.write(rotate13_letter(char))
sys.stdout.write('\n')
- Now, from a command line, execute the script as follows. If you've entered all of the code correctly, you should see the same output.
$ python rot13.py 'We are the knights who say, nee!'
- Run the script a second time, using the output of the first run as the new input string. If everything was entered correctly, the original text should be printed to the console.
$ python rot13.py 'Dv ziv gsv pmrtsgh dsl hzb, mvv!'
What just happened?
We implemented a simple text-oriented cipher using a collection of Python's string handling features. We were able to see it put to use for both encoding and decoding source text. We saw a lot of stuff in this little example, so you should have a good feel for what can be accomplished using the standard Python string object.
Following our initial module imports, we defined a dictionary named CHAR_MAP, which gives us a nice and simple way to shift our letters by the required 13 places. The value of a dictionary key is the target letter! We also took advantage of string slicing here.
In our translation function rotate13_letter, we checked whether our input character was uppercase or lowercase and then saved that as a Boolean attribute. We then forced our input to lowercase for the translation work. As ROT13 operates on letters alone, we only performed a rotation if our input character was a letter of the Latin alphabet. We allowed other values to simply pass through. We could have just as easily forced our string to a pure uppercased value.
The last thing we do in our function is restore the letter to its proper case, if necessary. This should familiarize you with upper- and lowercasing of Python ASCII strings.
We're able to change the case of an entire string using this same method; it's not limited to single characters.
>>>>> name.upper()
'RYAN MILLER'
>>> "PLEASE DO NOT SHOUT".lower()
'please do not shout'
>>>
It's worth pointing out here that a single character string is still a string. There is not a char type, which you may be familiar with if you're coming from a different language such as C or C++. However, it is possible to translate between character ASCII codes and back using the ord and chr built-in methods and a string with a length of one.
Notice how we were able to loop through a string directly using the Python for syntax. A string object is a standard Python iterable, and we can walk through them detailed as follows. In practice, however, this isn't something you'll normally do. In most cases, it makes sense to rely on existing libraries.
$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> for char in "Foo":
... print char
...
F
o
o
>>>
Finally, you should note that we ended our script with an if statement such as the following:
Python modules all contain an internal __name__ variable that corresponds to the name of the module. If a module is executed directly from the command line, as is this script, whose name value is set to __main__, this code only runs if we've executed this script directly. It will not run if we import this code from a different script. You can import the code directly from the command line and see for yourself.
>>> if__name__ == '__main__'
Notice how we were able to import our module and see all of the methods and attributes inside of it, but the driver code did not execute.
Have a go hero – more translation work
Each Python string instance contains a collection of methods that operate on one or more characters. You can easily display all of the available methods and attributes by using the dir method. For example, enter the following command into a Python window. Python responds by printing a list of all methods on a string object.
$ python
Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import rot13
>>> dir(rot13)
['CHAR_MAP', '__builtins__', '__doc__', '__file__', '__name__', '__
package__', 'rotate13_letter', 'string', 'sys']
>>>
Much like the isupper and islower methods discussed previously, we also have an isspace method. Using this method, in combination with your newfound knowledge of Python strings, update the method we defined previously to translate spaces to underscores and underscores to spaces.
Processing structured markup with a filter
Our ROT13 application works great for simple one-line strings that we can fit on the command line. However, it wouldn't work very well if we wanted to encode an entire file, such as the HTML document we took a look at earlier. In order to support larger text documents, we'll need to change the way we accept input. We'll redesign our application to work as a filter.
A filter is an application that reads data from its standard input file descriptor and writes to its standard output file descriptor. This allows users to create command pipelines that allow multiple utilities to be strung together. If you've ever typed a command such as cat /etc/hosts | grep mydomain.com, you've set up a pipeline
In many circumstances, data is fed into the pipeline via the keyboard and completes its journey when a processed result is displayed on the screen.
Time for action – processing as a filter
Let's make the changes required to allow our simple ROT13 processor to work as a command-line filter. This will allow us to process larger files.
- Create a new source file and enter the following code. When complete, save the file as rot13-b.py.
>>> dir("content")
['_']
>>>
- Enter the following HTML data into a new text file and save it as sample_page.html. We'll use this as example input to our updated rot13.py. line in sys.stdin:
for char in line:
sys.stdout.write(rotate13_letter(char))
- Now, run our rot13.py example and provide our HTML document as standard input data. The exact method used will vary with your operating system. If you've entered the code successfully, you should simply see a new prompt.
<html>
<head>
<title>Hello, World!</title>
</head>
<body>
<p>
Hi there, all of you earthlings.
</p>
<p>
Take us to your leader.
</p>
</body>
</html>
- The contents of rot13.html should be as follows. If that's not the case, double back and make sure everything is correct.
$ cat sample_page.html | python rot13-b.py > rot13.html
$
- Open the translated HTML file using your web browser.
What just happened?
We updated our rot13.py script to read standard input data rather than rely on a command-line option. Doing this provides optimal configurability going forward and lets us feed input of varying length from a collection of different sources. We did this by looping on each line available on the sys.stdin file stream and calling our translation function. We wrote each character returned by that function to the sys.stdout stream.
Next, we ran our updated script via the command line, using sample_page.html as input. As expected, the encoded version was printed on our terminal.
As you can see, there is a major problem with our output. We should have a proper page title and our content should be broken down into different paragraphs.
Remember, structured markup text is sprinkled with tag elements that define its structure and organization.
In this example, we not only translated the text content, we also translated the markup tags, rendering them meaningless. A web browser would not be able to display this data properly. We'll need to update our processor code to ignore the tags. We'll do just that in the next section.
(For more resources on this subject, see here.)
Time for action – skipping over markup tags
In order to preserve the proper, structured HTML that tags provide, we need to ensure we don't include them in our rotation. To do this, we'll keep track of whether or not our input stream is currently within a tag. If it is, we won't translate our letters.
- Once again, create a new Python source file and enter the following code. When you're finished, save the file as rot13-c.py.
<ugzy>
<urnq>
<gvgyr>Uryyb, Jbeyq!</gvgyr>
</urnq>
<obql>
<c>
Uv gurer, nyy bs lbh rneguyvatf.
</c>
<c>
Gnxr hf gb lbhe yrnqre.
</c>
</obql>
</ugzy>
- Run the same example.html file that we created for the last example through the new processor. This time, be sure to pass a -t command-line option.
import sys
from optparse import OptionParser
import string
CHAR_MAP = dict(zip(
string.ascii_lowercase,
string.ascii_lowercase[13:26] + string.ascii_lowercase[0:13]
)
)
class RotateStream(object):
"""
General purpose ROT13 Translator
A ROT13 translator smart enough to skip
Markup tags if that's what we want.
"""
MARKUP_START = '<'
MARKUP_END = '>'
def __init__(self, skip_tags):
self.skip_tags = skip_tags
def rotate13_letter(self, letter):
"""
Return the 13-char rotation of a letter.
"""
do_upper = False
if letter.isupper():
do_upper = True
letter = letter.lower()
if letter not in CHAR_MAP:
return letter
else:
letter = CHAR_MAP[letter]
if do_upper:
letter = letter.upper()
return letter
def rotate_from_file(self, handle):
"""
Rotate from a file handle.
Takes a file-like object and translates
text from it into ROT13 text.
"""
state_markup = False
for line in handle:
for char in line:
if self.skip_tags:
if state_markup:
# here we're looking for a closing
# '>'
if char == self.MARKUP_END:
state_markup = False
else:
# Not in a markup state, rotate
# unless we're starting a new
# tag
if char == self.MARKUP_START:
state_markup = True
else:
char = self.rotate13_letter(char)
else:
char = self.rotate13_letter(char)
#Make this a generator
yield char
if __name__ == '__main__':
parser = OptionParser()
parser.add_option('-t', '--tags', dest="tags",
help="Ignore Markup Tags", default=False,
action="store_true")
options, args = parser.parse_args()
rotator = RotateStream(options.tags)
for letter in rotator.rotate_from_file(sys.stdin):
sys.stdout.write(letter)
- If everything was entered correctly, the contents of rot13.html should be exactly as follows.
$ cat sample_page.html | python rot13-c.py -t > rot13.html
$
- Open the translated file in your web browser.
What just happened?
That was a pretty complex example, so let's step through it. We did quite a bit. First, we moved away from a simple rotate13_letter function and wrapped almost all of our functionality in a Python class named RotateStream. Doing this helps us ensure that our code will be reusable down the road.
We define a __init__ method within the class that accepts a single parameter named skip_tags. The value of this parameter is assigned to the self parameter so we can access it later from within other methods. If this is a True value, then our parser class will know that it's not supposed to translate markup tags.
Next, you'll see our familiar rotate13_letter method (it's a method now as it's defined within a class). The only real difference here is that in addition to the letter parameter, we're also requiring the standard self parameter.
Finally, we have our rotate_from_file method. This is where the bulk of our new functionality was added. Like before, we're iterating through all of the characters available on a file stream. This time, however, the file stream is passed in as a handle parameter. This means that we could have just as easily passed in an open file handle rather than the standard in file handle.
Inside the method, we implement a simple state machine, with two possible states. Our current state is saved in the state_markup Boolean attribute. We only rely on it if the value of self.skip_tags set in the __init__ method is True.
- If state_markup is True, then we're currently within the context of a markup tag and we're looking for the > character. When it's found, we'll change state_markup to False. As we're inside a tag, we'll never ask our class to perform a ROT13 operation.
- If state_markup is False, then we're parsing standard text. If we come across the < character, then we're entering a new markup tag. We set the value of state_markup to True. Finally, if we're not in tag, we'll call rotate13_letter to perform our ROT13 operation.
You should also notice some unfamiliar code at the end of the source listing. We've taken advantage of the OptionParser class, which is part of the standard library. We've added a single option that will allow us to selectively enable our markup bypass functionality. The value of this option is passed into RotateStream's __init__ method.
The final two lines of the listing show how we pass the sys.stdin file handle to rotate_from_file and iterate over the results. The rotate_from_file method has been defined as a generator function. A generator function returns values as it processes rather than waiting until completion. This method avoids storing all of the result in memory and lowers overall application memory consumption.
State machines
A state machine is an algorithm that keeps track of an application's internal state. Each state has a set of available transitions and functionality associated with it. In this example, we were either inside or outside of a tag. Application behavior changed depending on our current state. For example, if we were inside then we could transition to outside. The opposite also holds true.
The state machine concept is advanced and won't be covered in detail. However, it is a major method used when implementing text-processing machinery. For example, regular expression engines are generally built on variations of this model. For more information on state machine implementation, see the Wikipedia article available at.
Pop Quiz – ROT 13 processing
- We define MARKUP_START and MARKUP_END class constants within our RotateStream class. How might our state machine be affected if these values were swapped?
- Is it possible to use ROT13 on a string containing characters found outside of the English alphabet?
- What would happen if we embedded > or < signs within our text content or tag values?
- In our example, we read our input a line at a time. Can you think of a way to make this more efficient?
Have a go hero – support multiple input channels
We've briefly covered reading data via standard in as well as processing simple command-line options. Your job is to integrate the two so that your application will simply translate a command-line value if one is present before defaulting to standard input.
If you're able to implement this, try extending the option handling code so that your input string can be passed in to the rotation application using a command-line option.
<html>
<head>
<title>Uryyb, Jbeyq!</title>
</head>
<body>
<p>
Uv gurer, nyy bs lbh rneguyvatf.
</p>
<p>
Gnxr hf gb lbhe yrnqre.
</p>
</body>
</html>
Supporting third-party modules
Now that we've got our first example out of the way, we're going to take a little bit of a detour and learn how to obtain and install third-party modules.
The Python community maintains a centralized package repository, termed the Python Package Index (or PyPI). It is available on the web at. From there, it is possible to download packages as compressed source distributions, or in some cases, pre-packaged Python components. PyPI is also a rich source of information. It's a great place to learn about available third-party applications. Links are provided to individual package documentation if it's not included directly into the package's PyPI page.
Packaging in a nutshell
There are at least two different popular methods of packaging and deploying Python packages. The distutils package is part of the standard distribution and provides a mechanism for building and installing Python software. Packages that take advantage of the distutils system are downloaded as a source distribution and built and installed by a local user. They are installed by simply creating an additional directory structure within the system Python directory that matches the package name.
In an effort to make packages more accessible and self-contained, the concept of the Python Egg was introduced. An egg file is simply a ZIP archive of a package. When an egg is installed, the ZIP file itself is placed on the Python path, rather than a subdirectory.
Time for action – installing SetupTools
Egg files have largely become the de facto standard in Python packaging. In order to install, develop, and build egg files, it is necessary to install a third-party tool kit. The most popular is SetupTools. The installation process is fairly easy to complete and is rather self-contained. Installing SetupTools gives us access to the easy_install command, which automates the download and installation of packages that have been registered with PyPI.
- Download the installation script, which is available at. This same script will be used for all versions of Python.
- As an administrative user, run the ez_setup.py script from the command line. The SetupTools installation process will complete. If you've executed the script with the proper rights, you should see output similar as follows:
$python rot13-c.py -s 'myinputstring'
zlvachgfgevat
$
- As an administrative user, run the ez_setup.py script from the command line. The SetupTools installation process will complete. If you've executed the script with the proper rights, you should see output similar as follows:
# python ez_setup.py
Downloading
setuptools-0.6c11-py2.6.egg
Processing setuptools-0.6c11-py2.6.egg
creating /usr/lib/python2.6/site-packages/setuptools-0.6c11-
py2.6.egg
Extracting setuptools-0.6c11-py2.6.egg to /usr/lib/python2.6/sitepackages
Adding setuptools 0.6c11 to easy-install.pth file
Installing easy_install script to /usr/bin
Installing easy_install-2.6 script to /usr/bin
Installed /usr/lib/python2.6/site-packages/setuptools-0.6c11-
py2.6.egg
Processing dependencies for setuptools==0.6c11
Finished processing dependencies for setuptools==0.6c11
#
What just happened?
We downloaded the SetupTools installation script and executed it as an administrative user. By doing so, our system Python environment was configured so that we can install egg files in the future via the SetupTools easy_install system.
SetupTools does not currently work with Python 3.0. There is, however, an alternative available via the Distribute project. Distribute is intended to be a drop-in replacement for SetupTools and will work with either major Python version. For more information, or to download the installer, visit.
Running a virtual environment
Now that we have SetupTools installed, we can install third-party packages by simply running the easy_install command. This is nice because package dependencies will automatically be downloaded and installed so we no longer have to do this manually. However, there's still one piece missing. Even though we can install these packages easily, we still need to retain administrative privileges to do so. Additionally, all of the packages that we chose to install will be placed in the system's Python library directory, which has the potential to cause inconsistencies and problems down the road. As you've probably guessed, there's a utility to address that.
Python 2.6 introduces the concept of a local user package directory. This is simply an additional location found within your user home directory that Python searches for installed packages. It is possible to install eggs into this location via easy_install with a –user command-line switch. For more information, see.
Configuring virtualenv
The virtualenv package, distributed as a Python egg, allows us to create an isolated Python environment anywhere we wish. The environment comes complete with a bin directory containing a Python binary, its own installation of SetupTools, and an instance-specific library directory. In short, it creates a location for us to install and configure Python without interfering with the system installation.
Time for action – configuring a virtual environment
Here, we'll enable the virtualenv package, which will illustrate how to install packages from the PyPI site.
- As a user with administrative privileges, install virtualenv from the system command line by running easy_install virtualenv. If you have the correct permissions, your output should be similar to the following.
Searching for virtualenv
Reading
Reading
Best match: virtualenv 1.4.5
Downloading
virtualenv-1.4.5.tar.gz#md5=d3c621dd9797789fef78442e336df63e
Processing virtualenv-1.4.5.tar.gz
Running virtualenv-1.4.5/setup.py -q bdist_egg --dist-dir /tmp/
easy_install-rJXhVC/virtualenv-1.4.5/egg-dist-tmp-AvWcd1
warning: no previously-included files matching '*.*' found under
directory 'docs/_templates'
Adding virtualenv 1.4.5 to easy-install.pth file
Installing virtualenv script to /usr/bin
Installed /usr/lib/python2.6/site-packages/virtualenv-1.4.5-
py2.6.egg
Processing dependencies for virtualenv
Finished processing dependencies for virtualenv
- Drop administrative privileges as we won't need them any longer. Ensure that you're within your home directory and create a new virtual instance by running:
$ virtualenv --no-site-packages text_processing
- Step into the newly created text_processing directory and activate the virtual environment. Windows users will do this by simply running the Scripts\activate application, while Linux users must instead source the script using the shell's dot operator.
$ . bin/activate
- If you've done this correctly, you should now see your command-line prompt change to include the string (text_processing). This serves as a visual cue to remind you that you're operating within a specific virtual environment.
(text_processing)$ pwd
/home/jmcneil/text_processing
(text_processing)$ which python
/home/jmcneil/text_processing/bin/python
(text_processing)$
- Finally, deactivate the environment by running the deactivate command. This will return your shell environment to default. Note that once you've done this, you're once again working with the system's Python install.
(text_processing)$ deactivate
$ which python
/usr/bin/python
$
If you're running Windows, by default python.exe and easy_install.exe are not placed on your system %PATH%. You'll need to manually configure your %PATH% variable to include C:\Python2.6\ and C:\Python2.6\Scripts. Additional scripts added by easy_install will also be placed in this directory, so it's worth setting up your %PATH% variable.
What just happened?
We installed the virtualenv package using the easy_install command directly off of the Python Package index. This is the method we'll use for installing any third-party packages going forward. You should now be familiar with the easy_install process. Additional packages are installed using this same technique from within the confines of our environment.
After the install process was completed, we configured and activated our first virtual environment. You saw how to create a new instance via the virtualenv command and you also learned how to subsequently activate it using the bin/activate script. Finally, we showed you how to deactivate your environment and return to your system's default state.
(For more resources on this subject, see here.)
Have a go hero – install your own environment
Now that you know how to set up your own isolated Python environment, you're encouraged to create a second one and install a collection of third-party utilities in order to get the hang of the installation process.
- Create a new environment and name it as of your own choice.
- Point your browser to and select one or more packages that you find interesting. Install them via the easy_install command within your new virtual environment.
Note that you should not require administrative privileges to do this. If you receive an error about permissions, make certain you've remembered to activate your new environment. Deactivate when complete. Some of the packages available for install may require a correctly configured C-language compiler.
Where to get help?
The Python community is a friendly bunch of people. There is a wide range of online resources you can take advantage of if you find yourself stuck. Let's take a quick look at what's out there.
- Home site: The Python website, available at. Specifically, the documentation section. The standard library reference is a wonderful asset and should be something you keep at your fingertips. This site also contains a wonderful tutorial as well as a complete language specification.
- Member groups: The comp.lang.python newsgroup. Available via Google groups as well as an e-mail gateway, this provides a general-purpose location to ask Python-related questions. A very smart bunch of developers patrol this group; you're certain to get a quality answer.
- Forums: Stack Overflow, available at. Stack overflow is a website dedicated to developers. You're welcome to ask your questions, as well as answer others' inquires, if you're up to it!
- Mailing list: If you have a beginner-level question, there is a Python tutor mailing list available off of the Python.org site. This is a great place to ask your beginner questions no matter how basic they might be!
- Centralized package repository: The Python Package Index at. Chances are someone has already had to do exactly what it is you're doing.
If all else fails, you're more than welcome to contact the author via e-mail to questions@packtpub.com. Every effort will be made to answer your question, or point you to a freely available resource where you can find your resolution.
Summary
This article introduced you to the different categories of text and provided you with a little bit of information as to how we'll manage our packaging going forward.
We performed a few low-level text translations by implementing a ROT13 encoder and highlighted the differences between freeform and structured markup. We'll examine these categories in much greater detail as we move on. The goal of that exercise was to learn some byte-level transformation techniques.
Further resources on this subject:
- Advanced Output Formats in Python 2.6 Text Processing[article]
- Testing Tools and Techniques in Python[article]
- Python 3: Object-Oriented Design[article]
- Getting Started with Spring Python [article]
- wxPython: Design Approaches and Techniques[article]
- Creating Skeleton Apps with Coily in Spring Python[article]
- Python Text Processing with NLTK 2: Transforming Chunks and Trees[article] | https://www.packtpub.com/books/content/getting-started-python-26-text-processing | CC-MAIN-2015-32 | refinedweb | 5,377 | 58.18 |
IRC log of swd on 2007-01-22
Timestamps are in UTC.
13:49:04 [RRSAgent]
RRSAgent has joined #swd
13:49:04 [RRSAgent]
logging to
13:49:10 [RalphS]
Meeting: SWD Boston F2F
13:49:13 [RalphS]
Chair: Guus
13:49:25 [RalphS]
Agenda:
13:51:30 [RalphS]
rrsagent, please make record public
13:51:39 [Zakim]
SW_SWD(f2f)8:30AM has now started
13:51:46 [Zakim]
+ +1.617.253.aaaa
13:55:18 [RalphS]
zakim, aaaa is MeetingRoom
13:55:18 [Zakim]
+MeetingRoom; got it
13:55:33 [RalphS]
zakim, MeetingRoom is really MIT-Kiva
13:55:33 [Zakim]
+MIT-Kiva; got it
13:55:49 [RalphS]
zakim, MIT-Kiva is MeetingRoom
13:55:49 [Zakim]
+MeetingRoom; got it
14:02:52 [TomB]
TomB has joined #swd
14:03:01 [TomB]
RalphS, hi!
14:03:29 [RalphS]
zakim, meetingroom has Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, TimBL, Ralph
14:03:29 [Zakim]
+Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, TimBL, Ralph; got it
14:03:31 [ivan]
ivan has joined #swd
14:03:40 [berrueta]
berrueta has joined #swd
14:04:45 [berrueta]
scribenick: berrueta
14:05:27 [Guus]
Guus has joined #swd
14:05:38 [aliman]
aliman has joined #swd
14:05:43 [Zakim]
+??P5
14:05:56 [RalphS]
zakim, ??p5 is Tom
14:05:56 [Zakim]
+Tom; got it
14:06:49 [RalphS]
zakim, Fabien just arrived in meetingroom
14:06:49 [Zakim]
+Fabien; got it
14:07:12 [Antoine]
Antoine has joined #swd
14:08:04 [RalphS]
zakim, Ben just arrived in meetingroom
14:08:04 [Zakim]
+Ben; got it
14:10:18 [berrueta]
tbl: meeting of a group of people interested in SW in the Cambridge area
14:10:28 [benadida]
benadida has joined #SWD
14:10:51 [berrueta]
tbl: 15 people interested
14:11:01 [RalphS]
zakim, meetingroom also has Jon
14:11:01 [Zakim]
+Jon; got it
14:11:11 [berrueta]
tbl: no particular agenda
14:11:34 [berrueta]
... mainly a social thing
14:11:55 [berrueta]
... discussion, brainstorming
14:12:43 [berrueta]
tbl: in this room (Kiva)
14:13:55 [berrueta]
Guus: short round of introductions
14:13:57 [Jon_Phipps]
Jon_Phipps has joined #swd
14:13:58 [RalphS]
Topic: Introductions
14:14:19 [RalphS]
scribenick: berrueta
14:16:06 [RalphS]
Alistair: we have several implementations of SKOS now
14:17:30 [Elisa]
Elisa has joined #swd
14:19:13 [FabienG]
FabienG has joined #swd
14:19:27 [RalphS]
Jon: picked up on SKOS at Dublin Core Madrid workshop
14:20:04 [Zakim]
+Elisa_Kendall
14:20:51 [RalphS]
Bernard: U. Manchester is adding SKOS to COHSE
14:20:59 [Guus]
zakim, who is here?
14:20:59 [Zakim]
On the phone I see MeetingRoom, TomB (muted), Elisa_Kendall
14:21:01 [Zakim]
MeetingRoom has Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, TimBL, Ralph, Fabien, Ben, Jon
14:21:05 [Zakim]
On IRC I see FabienG, Elisa, Jon_Phipps, benadida, Antoine, aliman, Guus, berrueta, ivan, TomB, RRSAgent, Zakim, RalphS
14:22:25 [RalphS]
Guus: interoperability of vocabularies is core to our work at Vrije University
14:23:45 [RalphS]
Diego: my research work is on semantic search
14:24:22 [RalphS]
Fabien: my research group is interested in graph-based reasoning on SemWeb
14:25:45 [RalphS]
Ivan: every day I take a tram that goes by Guus' office
14:25:56 [berrueta]
tbl: i'm not here as W3C director
14:26:46 [berrueta]
tbl: interested in the discussions on RDFa and recipes ("slash")
14:27:25 [RalphS]
Tim: I taught a 1-week course [2 weeks ago] and one of the biggest problems was how to configure apache
14:27:44 [RalphS]
... if it came out-of-the-box with application/rdf+xml support things would be a *lot* easier
14:28:30 [berrueta]
RalphS: the activity of this group is very important, great impact
14:29:27 [berrueta]
... very busy, reduced dedication to this group
14:30:00 [RalphS]
Ralph: but hope to increase my time in SWD again
14:31:06 [RalphS]
Tom: project looking at model-based metadata; includes Dublin Core and eventually SKOS
14:31:20 [RalphS]
Tom: calling from Berlin
14:31:30 [RalphS]
Elisa: calling from Los Altos, California
14:31:47 [RRSAgent]
See
14:32:11 [RalphS]
[Tom calling from Berlin, Elisa calling from Los Altos]
14:32:54 [RalphS]
Elisa: working with organizations who are keenly interested in metadata about their ontologies
14:33:02 [RalphS]
... core business of SandPiper is ontology development
14:34:19 [berrueta]
Guus: three objectives of this meeting
14:34:43 [berrueta]
... 1) skos use cases: discuss them
14:35:12 [berrueta]
... 2) as a result, obtain a list of requirements for SKOS
14:35:49 [berrueta]
... 3) review issue list, prioritize them, select critical ones
14:36:31 [berrueta]
topic: SKOS use cases
14:37:29 [Antoine]
14:37:45 [berrueta]
Antoine: 12 use cases in the document
14:38:17 [berrueta]
... there are more than 20 contributions
14:38:37 [berrueta]
... some are not edited (yet), but available at the wiki
14:39:02 [Antoine]
14:39:20 [timbl]
timbl has joined #swd
14:40:14 [berrueta]
... thinks the response of the community was good
14:40:52 [berrueta]
... will summarize each one of the UC next
14:42:49 [berrueta]
... UC1: description incomplete
14:42:51 [berrueta]
Guus: two different hierarchies for the same thesaurus, is this a requirement?
14:43:29 [berrueta]
aliman: multi-hierarchy is an important requirement
14:45:23 [berrueta]
Guus: shows an example of the getty vocab
14:45:55 [RalphS]
Guus: google for 'tgn getty', then enter 'boston'
14:46:19 [berrueta]
Guus: two record types: administrative and geographical, non exclusive
14:47:04 [RalphS]
... (shows Utrecht next)
14:47:31 [berrueta]
Guus: looks up an example of a record with two types
14:48:22 [berrueta]
Antoine: complex lexical info in the context of this application
14:50:57 [berrueta]
RalphS searches 'Boston' on the TGN
14:51:17 [berrueta]
Guus: Boston has several alternative names
14:51:30 [RalphS]
Guus: Getty shows English, Vernacular, and Historical names
14:52:43 [RalphS]
... e.g. Tokyo has 'Edo' historical name
14:54:22 [berrueta]
Aliman: multilingual labels are already solved
14:55:29 [berrueta]
Aliman: but language is not enough in some cases (see the Boston and Tokyo examples)
14:57:18 [berrueta]
... the issue is there are different scripts for some language, but only a language tag
14:57:58 [RalphS]
Alistair: potential issues with cardinality constraints and preferredLabel properties if there are multiple scripts in which the label might be written
14:57:59 [berrueta]
Guus: this is probably out of scope of this WG
14:59:38 [berrueta]
ivan: this WG should not worry about this issue. Maybe forward the issue to the RDF core WG
15:01:18 [berrueta]
RalphS: asks for clarification of the multi-hierarchy issue
15:01:27 [RalphS]
Alistair: a conceptual node may have more than one parent
15:02:34 [berrueta]
guus: back to the issue of making a statement about a label
15:03:04 [berrueta]
aliman: we should provide a framework to allow that
15:03:49 [Zakim]
-TomB
15:04:09 [berrueta]
guus illustrates his point with an example in the whiteboard
15:04:46 [RalphS]
Guus: how would we say the label "Edo" is valid only between 1600 and 1800 AD ?
15:04:52 [RalphS]
Alistair: annotation properties
15:04:52 [Zakim]
+??P5
15:05:05 [berrueta]
aliman: when you use an annotation property, you are not limited to a literal value
15:05:15 [RalphS]
zakim, ??p5 is probably Tom
15:05:17 [Zakim]
I already had ??P5 as TomB, RalphS
15:06:02 [berrueta]
aliman: you can use a resource as a value of the annotation property
15:06:05 [RalphS]
Alistair: model annotation as an n-ary relation
15:09:13 [berrueta]
ivan: is this a possible use of reification?
15:10:03 [berrueta]
guus: seems to be 2 options: to reify, or to lose information
15:11:00 [berrueta]
timbl: another option is to put the statement in another document
15:13:23 [RalphS]
->
SKOS annotated label whiteboard discussion
15:13:50 [TomB]
:-)
15:14:24 [RalphS]
->
meeting room right side
15:14:38 [RalphS]
->
meeting room left side
15:16:51 [aliman]
15:16:54 [Guus]
Guus has joined #swd
15:16:58 [aliman]
15:17:23 [RalphS]
Alistair: the old issue notes a place-holder item for this
15:17:43 [RalphS]
... "SKOS does not provide support for ... any type of annotation associated with a non-descriptor"
15:17:54 [timbl]
are a few slides about options for modeling things which vary with time
15:18:01 [RalphS]
Guus: not sure this is the same thing
15:18:59 [RalphS]
ACTION: Alan write up the preferredLabel modelling issue
15:19:10 [RalphS]
zakim, AlanR has arrived in meetingroom
15:19:10 [Zakim]
+AlanR; got it
15:19:58 [RalphS]
AlanR: just joined the WG representing Science Commons ... active on HCLS IG
15:21:01 [RalphS]
Antoine: UC #2: Iconclass
15:21:22 [RalphS]
... has descriptor concepts and non-descriptor concepts
15:22:08 [RalphS]
Guus: this case helps define SKOS scope
15:22:31 [RalphS]
... Iconclass is a grammar
15:23:04 [RalphS]
... permits adding things to parts of the vocabulary
15:23:18 [RalphS]
... I'd like to make this feature out of scope for SKOS
15:23:32 [RalphS]
... e.g. KEY; it's not pre-defined where in the vocabulary this is used
15:24:41 [berrueta]
berrueta has joined #swd
15:25:29 [RalphS]
Antoine: finding modifiers while browsing a vocabulary -- "post coordination"
15:26:50 [berrueta]
aliman: this mechanism allows new concepts to be created by combining existing concepts
15:28:17 [berrueta]
Guus: shows an example of the vocabulary ("Animal")
15:30:06 [berrueta]
aliman: this is related to the "qualifiers" of the MESH medical vocabulary
15:31:33 [RalphS]
Alistair: terms in MESH have flags that indicate they can be used with an additional qualifiers vocabulary
15:31:41 [TomB]
15:31:47 [Jon_Phipps]
Jon_Phipps has joined #swd
15:31:50 [aliman]
example of coordination
15:31:56 [RalphS]
Alistair: ... e.g. 'aspirin' combined with 'sideEffects' means 'sideEffectsOfAspirin'
15:32:44 [RalphS]
Alistair: BLISS classification scheme has similar aspects
15:33:57 [berrueta]
Antoine: we lost the possibility to attach qualifiers
15:34:07 [berrueta]
... cannot represent hierarchies of qualifiers
15:35:33 [Guus]
q?
15:36:15 [berrueta]
aliman: ambiguity can arise from the use of qualifiers
15:36:47 [RalphS]
Alistair: in my master's thesis I conclude that it is an application-specific decision whether order of coordination is significant
15:36:54 [berrueta]
... if you don't have a mechanism to attach the qualifier to particular individuals
15:37:44 [RalphS]
Guus: Iconclass also has a notion of 'opposite', or counter-example, done by doubling the letter; e.g. 25FF
15:37:58 [aliman]
Examples of using bliss classification
15:38:35 [RalphS]
Guus: this feature is also used in Iconclass to do male-female distinction
15:39:50 [berrueta]
Antoine: 13,000 concepts
15:40:15 [berrueta]
... in this vocabulary
15:40:55 [berrueta]
... qualifiers allow to reduce the number of concepts
15:41:20 [berrueta]
... indexing can use multiple concepts
15:42:43 [berrueta]
ACTION: Antoine to provide more use cases of uses of qualifiers
15:43:38 [RalphS]
Alistair: library world talks about "synthetic" and "enumerative" classification schemes; "synthetic" scheme is meant to be used in combinations to synthesize categories
15:43:53 [berrueta]
Guus: we will continue with use cases after coffee break
15:44:10 [RalphS]
[15 minute coffee break]
15:59:24 [Zakim]
-TomB
15:59:50 [Zakim]
+??P8
16:07:02 [timbl_]
timbl_ has joined #swd
16:07:17 [berrueta]
berrueta has joined #swd
16:14:06 [RalphS]
[restarting]
16:14:06 [ivan]
scribenick: ivan
16:14:53 [aliman]
Core requirements for automation of analytico-synthetic classifications
16:15:09 [aliman]
(I just found this paper, looks highly relevant to preceding discussion.)
16:15:56 [ivan]
antoine: we have to decide at some point what goes into the document
16:16:10 [ivan]
guus: we should keep the overview and the details of examples
16:16:37 [ivan]
ralph: features in the use cases that are important for skos have to be brought out from the examples (for those who do not know the details)
16:17:17 [ivan]
alistair: we could move the examples from the vocabulary to the example, but what ralph said made me think again...
16:17:28 [RalphS]
zakim, meetingroom no longer has alan
16:17:28 [Zakim]
alan was not listed in MeetingRoom, RalphS
16:17:29 [Zakim]
-alan; got it
16:17:35 [ivan]
guus: it is good to have that on our list...
16:17:39 [RalphS]
zakim, who's in meetingroom?
16:17:39 [Zakim]
MeetingRoom has Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, TimBL, Ralph, Fabien, Ben, Jon, AlanR
16:17:45 [RalphS]
zakim, meetingroom no longer has alanr
16:17:45 [Zakim]
-AlanR; got it
16:18:22 [ivan]
antoine: next use case: an integrated view of medieval manuscripts
16:18:35 [ivan]
... there are collections and bridges among these
16:18:53 [ivan]
... we always have info on which vocabularies are used
16:19:02 [ivan]
... an issue of alignment of vocabularies
16:19:13 [ivan]
... it uses the iconclass vocabulary
16:19:23 [ivan]
... and another one that comes from the French national library
16:19:42 [ivan]
... the latter is 15000 subject, simple labels (simple and alternate)
16:20:01 [ivan]
... it is probably a flat list, and they introduce a set of classes for browsing purposes
16:20:22 [ivan]
... you get 15000 descriptors, and each is linked to a class that is more general
16:20:41 [ivan]
alistair: is it essentially a three-level hierarchy, but you can use the descriptors at the bottom only
16:20:44 [ivan]
antoine: yes, only the leaves of the tree can be used as descriptors
16:20:54 [ivan]
guus: this is a feature I have not distilled yet
16:21:07 [benadida]
benadida has joined #SWD
16:21:19 [ivan]
... this problem of representing mandragore
16:21:56 [ivan]
... there are 2 issues coming out: (1) requirement for mapping, you need equivalence
16:22:05 [ivan]
... (2) you have the notion of abstract classes
16:22:11 [ivan]
... things that are not for indexing
16:22:40 [RalphS]
Guus: abstract classes appear in AAT also
16:23:21 [ivan]
alistair: i think it is a use case that has some basic requirements for mapping vocabularies among themselves
16:23:36 [ivan]
... there is also a requirement to map between combination of concepts
16:24:00 [RalphS]
Alistair: "11U4 Mary and John the Baptist ..."
16:24:00 [ivan]
... 11U4 in the description
16:24:35 [ivan]
... i think that will be a common requirement
16:25:03 [ivan]
antoine: the mapping points out that there could be a link between the non-descriptor items
16:25:24 [ivan]
... a descriptor on the one side and a qualifier on the other side; the latter is never found as a descriptor
16:25:49 [ivan]
guus: is it fair to say we have a mapping requirement and two basic requirements?
16:26:11 [ivan]
... with respect to the conjunction type of thing, that is an issue (or a requirement)
16:26:23 [ivan]
alistair: it comes up often in my experience
16:26:42 [ivan]
... there is a british standard wg rewriting the thesaurus standard
16:26:57 [ivan]
... working on how to represent mapping between thesauri
16:27:07 [ivan]
... i would think that they will come up with something how to model it
16:27:34 [ivan]
bernard: is there a requirement to map the iconclass to mandragore to identify the ??
16:27:56 [ivan]
... it seems that mandragore is a different type of mapping
16:28:05 [RalphS]
Alistair cited ISO 2788 parts 3 and 4 (under development) work on mapping
16:28:38 [ivan]
guus: rephrase the question: do we need something more specific than broad and narrow, ie, an owl or rdfs vocabulary
16:28:53 [ivan]
bernard: yes, this is what I am asking
16:29:02 [ivan]
... what is the broader term of XX
16:29:22 [ivan]
alistair: there is a browser for mandragore, can we see what this looks like?
16:29:55 [ivan]
antoine showing the mandragore browser example
16:31:30 [ivan]
antoine shows the iconclass vocabulary, one can see the vocabulary and the specialization of the concept
16:32:05 [ivan]
... on the right are the images from the collection (from the BNF) which have not been indexed against iconclass
16:32:39 [ivan]
... you browse your vocabulary, then you have access to the images
16:32:56 [ivan]
alistair: can you browse against the mandragore images only?
16:33:57 [RalphS]
->
Project STITCH : Semantic Interoperability to access Cultural Heritage
16:34:14 [ivan]
(scribe was a bit lost:-(
16:34:57 [ivan]
alistair: when you do a mapping to mandragore, do you use a second level only?
16:35:21 [RalphS]
Antoine: there are 15,000 alignment relations in the mapping
16:35:28 [ivan]
guus: I try to summarize, three things
16:35:35 [ivan]
... (1) need for an equivalence mapping
16:35:53 [ivan]
... (2) a less or more specific mapping, should it be more specific than broad/narrow
16:36:10 [ivan]
... (3) links between compositionals
16:36:36 [ivan]
... we recently linked a nist vocabulary for video tracking
16:36:43 [ivan]
... we got into a similar situation
16:36:59 [ivan]
... we got both the conjunctive and disjunctive form
16:37:18 [ivan]
... maybe it should be a requirement, or maybe we can handle it outside skos
16:37:30 [ivan]
ralph: there is a reference to optional rejective forms
16:37:39 [ivan]
... is that from iconclass?
16:37:46 [ivan]
antoine: this comes from the french vocabulary
16:37:59 [ivan]
ralph: guus showed the double letter example, is it similar
16:38:24 [ivan]
antoine: they are more similar concepts, synonyms
16:38:48 [ivan]
guus: it is quite similar to preferred and non-preferred label
16:39:08 [RalphS]
"optional rejected form" means "synonym but deprecated"
16:39:19 [ivan]
guus: move on?
16:39:51 [ivan]
alistair: when it comes to the mapping requirement, we need to keep in mind the functionality it is used for and focus on that
16:40:06 [ivan]
... that might help us in passing by other representations
16:41:22 [ivan]
guus: in this particular domain mappings are the only thing that adds something to the existing functionalities of museums
16:41:37 [ivan]
... if you open up the collections to browsing via other vocabularies, you get new things
16:41:45 [ivan]
... mapping is 100% crucial,
16:41:52 [ivan]
... the only added value, and a big one
16:42:12 [ivan]
... in medicine it may be different
16:42:27 [ivan]
antoine: 4th example bio-zen
16:43:05 [ivan]
... wait for AlanR to come back on that one
16:43:38 [ivan]
antoine: the 5th use case: semantic search across multilingual thesauri (agricultural domain)
16:44:02 [ivan]
... these are mostly multilingual, and the aim is to provide open access to these vocabularies
16:44:21 [ivan]
... it is an interesting use case for multilingual vocabularies
16:44:34 [ivan]
... there are 12 languages, with other terms, related terms, etc
16:44:45 [ivan]
... illustrates some typical usage like skos notes
16:44:54 [ivan]
... use some more complex links
16:45:12 [ivan]
... you can also use more specialized versions, subclasses,
16:45:25 [ivan]
... you find again the links between terms
16:45:37 [ivan]
... representing the terms of, eg, translations
16:45:58 [ivan]
... there is also a representation for mapping links, at the end of the use case
16:46:22 [ivan]
... they are using equivalence links, links between a concept and a combination of concepts
16:46:36 [ivan]
... conjunctions and disjunctions
16:46:46 [ivan]
alistair: that may be like a union
16:47:31 [ivan]
... the last example in the use case they use the mapping vocabulary as it is right now in skos
16:47:45 [ivan]
... it also has 'and or not'
16:47:56 [ivan]
... the second example is exactly an 'or'
16:48:21 [ivan]
guus: ie, they also have the 'and or not' in their usage?
16:48:42 [ivan]
bernard: the more these vocabularies are meshed together, the more they have similar relations like narrower and broader
16:48:54 [ivan]
alistair: these can be ambigous...
16:49:16 [ivan]
guus: we already have this on the list of the issues (whether we need to represent a specific semantics to broad and narrow)
16:49:58 [ivan]
ACTION guus: to check that this issue of more specialization than broad/narrow is on the issues' list
16:51:21 [ivan]
guus: you can say we build into the skos vocabularies that we define, eg, two subclasses
16:51:47 [ivan]
... or we can say that we leave that to the vocabulary; the author has the guideline to present this as a subproperty of broad/narrow
16:51:53 [ivan]
... the issue is to resolve this
16:52:13 [ivan]
alistair: ie, if people want to be more specific, how would they do it?
16:52:26 [ivan]
guus: yes, and whether this is part of the skos vocabulary or not
16:54:21 [RalphS]
Ivan: were problems with representing multilingual scripts found?
16:54:31 [RalphS]
... is there enough in RDF to represent this?
16:54:49 [RalphS]
Alistair: there were some interesting language problems in the Chinese mapping
16:55:20 [RalphS]
Antoine: but I think they succeeded in representing everything they wanted to represent in RDF, though they needed more than SKOS
16:55:40 [aliman]
a document about mapping between agrovoc and chinese agricultural thesaurus
16:56:07 [ivan]
guus: term-to-term relationship?
16:56:46 [ivan]
antoine: the problem of having several labels for the same concept comes up; they want to be able to line up the literal translations with one another
16:57:00 [ivan]
guus: why not use preferred and alternative labels for each?
16:57:19 [ivan]
alistair: the preferred label in chinese may be the third alternative label in english
16:57:55 [ivan]
timbl: cat translates to 'chat' in French, you have to label in french
16:58:15 [ivan]
alistair: you are making a link between translations and labels
16:59:12 [ivan]
antoine: a concept in one vocabulary has a latin name as the pref label, and the common name as an alternative label
16:59:17 [ivan]
... the same in the french versions
16:59:35 [ivan]
... and you want to point to the fact that the translation holds between the two alternative labels
16:59:48 [ivan]
guus: then the latin is a lingua franca
17:00:23 [ivan]
alistair: another thing they wanted is to say that the label in French has been derived from an alternative label in English
17:00:41 [ivan]
guus: we may have an issue of relationships between linguistic labels
17:00:54 [ivan]
... not clear to me what to do with this
17:01:12 [ivan]
alistair: with a use case like this we have to be careful about what exactly they do with this information
17:01:23 [ivan]
... why do they use it
17:01:53 [JonP]
JonP has joined #swd
17:02:10 [ivan]
ACTION antoine: capture the issue on capturing relationships between labels
17:03:02 [RalphS]
Antoine: e.g. acronym link
17:03:16 [RalphS]
... an example of a semantic relationship between labels
17:03:37 [ivan]
antoine: use cases 6 and 7 are similar in their features
17:04:15 [ivan]
... representing quite simple vocabularies, one is on tactical situation objects
17:04:21 [ivan]
... a list of unstructured terms
17:04:30 [ivan]
... each term has some label and a note
17:04:40 [ivan]
... when it should be used
17:04:51 [JonP]
JonP has joined #swd
17:04:57 [ivan]
... the support life cycle is similar
17:05:15 [ivan]
ralph: in #6 it was difficult to see what it says about skos
17:05:18 [ivan]
alistair: me too
17:05:33 [ivan]
... this is not the sort of use case i am familiar with
17:05:51 [ivan]
antoine: i tried to interpret it, but apart from simple labelling i did not find anything
17:06:12 [ivan]
alistair: we could ask them what they want to do
17:06:20 [ivan]
guus: this is what they have...
17:06:43 [ivan]
ralph: maybe we want to ask submitters to point the wg at areas where they want additional things
17:07:01 [ivan]
alistair: use case 7 actually adds a question mark on skos (or owl)
17:07:14 [ivan]
... it is not clear why they want to use skos
17:07:35 [ivan]
guus: i could think of reasons
17:07:47 [ivan]
antoine: they were in search of standard ways
17:08:10 [ivan]
guus: the problem with use case #7 is that it is out of scope
17:08:30 [ivan]
... or am i misunderstanding
17:08:45 [ivan]
ralph: it would be interesting question to ask them what they want skos for
17:08:57 [ivan]
bernard: may be a marketing issue
17:10:08 [ivan]
action antoine: to contact the submitters of #7 to see what they want to use skos for (as opposed to, say, owl)
17:10:50 [ivan]
alistair: it seems that they have a requirement to capture lots of things, that may need to extend skos
17:11:04 [ivan]
antoine: no, they really need only flat things...
17:11:24 [ivan]
... they need a structure to represent a natural language representation without a reasoner
17:12:56 [ivan]
antoine: use case #8: gtaa web browser, accessing the thesaurus
17:13:15 [ivan]
... want to provide the user with a sophisticated vocabulary
17:13:27 [ivan]
guus: there is an archive for tv and radio programs
17:13:46 [ivan]
... they do annotation inside the content but also coming from broadcasting companies
17:13:57 [ivan]
... on the top level there are 8 different facets
17:14:11 [ivan]
... and several of the sub hierarchies have separate classifications
17:14:28 [ivan]
... and that is the whole thing
17:14:41 [ivan]
... they are specific for a facet
17:15:19 [ivan]
alistair: there is a thematic and a named hierarchy, and they are orthogonal
17:15:45 [ivan]
guus: we can get test cases out of it
17:16:18 [ivan]
ralph: 'only keyword and genres can also have broader/narrower relation', is that a restriction?
17:16:31 [ivan]
guus: this is a very flat structure, this is not really a restriction
17:17:00 [ivan]
antoine: use case #9, another use of the same vocabulary as use case #8,
17:17:13 [ivan]
... using a special algorithm that provides the user an indexer
17:17:28 [ivan]
... the idea is to explore the different links in the thesaurus to rank the concepts
17:17:58 [ivan]
... if you have to index a document with a set of candidate terms, and the thesaurus includes these terms, then that hierarchy is also presented
17:18:15 [ivan]
guus: I would have personally merged #8 and #9
17:18:31 [ivan]
antoine: #9 provided in a functional view
17:18:40 [ivan]
... adding a representation to an application is nice
17:18:57 [ivan]
guus: people in computer science like automatic things
17:19:07 [ivan]
... but these people like to manually check
17:19:57 [ivan]
ralph: even though it does not add anything technically, it adds a new aspect, good for 'marketing' reasons
17:20:38 [ivan]
alistair: if you look at a traditional model, you manually build a vocabulary and index
17:21:08 [ivan]
... in this case the vocabulary is done manually, but an automatic indexing is good
17:21:57 [ivan]
... a use case document should have a business model section to show how different scenarios are used
17:22:22 [ivan]
guus: summary: #9 does not add anything to the requirements, but is an interesting use case scenario to keep
17:23:04 [ivan]
alistair: applications might want the integrity of their data, and expressing the constraints is a requirement
17:23:22 [ivan]
guus: there is already an issue on the level of semantics that skos has
17:24:23 [ivan]
action alistair: summarize the aspects of semantics of the skos data model
17:24:31 [RalphS]
zakim, Jonathan_Rees just arrived in meetingroom
17:24:31 [Zakim]
+Jonathan_Rees; got it
17:24:44 [RalphS]
zakim, AlanR just arrived in meetingroom
17:24:44 [Zakim]
+AlanR; got it
17:25:17 [RalphS]
JonathanRees: I'm part of Science Commons
17:26:34 [ivan]
alistair: a question on #8, relationships on terms between facets were computed
17:27:17 [ivan]
... question is how were these computed?
17:27:26 [ivan]
guus making faces:-)
17:27:54 [ivan]
guus: the general problem was that there was a lack of relationships
17:28:03 [ivan]
... but I do not think there were much semantics
17:28:09 [RalphS]
->
Biozen use case as submitted
17:28:34 [ivan]
alistair: it also says the precomputed terms were not part of the iso standards
17:28:43 [ivan]
guus: good question, I do not know
17:28:55 [RalphS]
->
GTAA use case as submitted
17:28:57 [ivan]
action guus: check with veronique on the terms being outside the iso standard
17:29:44 [ivan]
antoine: use case #4, bio-zen ontology framework
17:30:42 [ivan]
... the main point is to represent these medical vocabularies, keeping all the info that is useful for applications
17:30:47 [ivan]
... the application was not really detailed
17:31:04 [ivan]
... gene ontology and MeSH are the two examples of applications
17:31:16 [ivan]
... it has an example of representation of a term
17:31:35 [ivan]
... the main point is the fact that in the representation they mix all kinds of different metadata vocabularies
17:31:55 [ivan]
... they created some sort of metamodel using owl, and use pieces of other vocabularies
17:32:13 [ivan]
... they use all these meta models to represent the medical vocabularies
17:32:24 [ivan]
... they use, eg, dublin core plus skos terms together
17:32:39 [ivan]
... they created an owl specification to mix these metamodel features
17:33:15 [ivan]
guus: why, within the definition, is there a representation of the part-of relationship?
17:34:00 [ivan]
...does the mesh have its own hierarchy
17:34:38 [ivan]
alan: 'is a' is not 'part of', careful about that
17:35:03 [ivan]
guus: in skos we use the broader and narrower terms which are less defined
17:35:18 [ivan]
alan: obo originates in the gene ontology
17:35:30 [ivan]
... the latter has is a and part of relationships in it
17:35:40 [ivan]
... there has been a number of threads using this
17:35:56 [ivan]
... one thread is to translate obo to other formats, people used, eg, skos
17:36:16 [ivan]
... they have to decide where broader, etc, are used
17:36:33 [ivan]
... these actually threw away information but they are part of skos
17:36:46 [ivan]
... from my understanding at the time at least
17:37:20 [ivan]
... there is an effort to translate this into owl
17:37:48 [ivan]
... second thread of discussion is the 'quality' of the whole thing
17:38:07 [Elisa]
There is a recently released related portal - Daniel Rubin and his group have created this and are working to develop it as a part of their NCOR work:
17:38:11 [ivan]
... what can be related to what, what are the description of that, more philosophical stuff
17:38:39 [ivan]
guus: some people make subproperties from, say, skos broader
17:38:51 [ivan]
... then you do not throw away things
17:39:19 [ivan]
alan: i had the issue on putting it with owl-dl
17:39:33 [ivan]
guus: that is a separate issue on the agenda (relationship to owl-dl)
17:40:17 [ivan]
(scribe got distracted, sorry)
17:41:13 [RalphS]
Alan: Matthias is asking that as we fiddle with SKOS, we try to keep it OWL-DL compatible
17:41:19 [RalphS]
Alistair: it's already not OWL-DL
17:41:47 [ivan]
alistair: if you go into library sciences, you will find papers on classification
17:42:00 [ivan]
... people there define fundamental facets, time, space, etc
17:42:10 [ivan]
... there are discussions on what these fundamental facets are
17:42:33 [ivan]
... that might come to the skos spec
17:42:54 [ivan]
... but if you want to do that, this should be done as an extension of skos (in my view)
17:43:27 [ivan]
guus: b.t.w., the relationship to owl-dl should be part of our issues list, not a requirement
17:45:11 [ivan]
... maybe if we define a set of constraints, that might lead to skos-dl...
17:45:28 [ivan]
... but this is a topic for discussion
17:46:47 [ivan]
alistair: it is tricky; extension by requirements is one of the major ways of extending skos, and all of those are annotation properties, and that leads to problems
17:47:15 [ivan]
action alistair: rephrase the old issue of skos/owl-dl coexistence and semantics
17:47:24 [Zakim]
-TomB
17:47:43 [ivan]
bernard: it was good in the owl days to have implementations submitted, too
17:47:58 [ivan]
guus: for the moment it is good to collect the information, it is good to use them as test cases
17:48:14 [ivan]
.... but this group is much smaller than the old owl group, and we have a resource problem
17:48:46 [alanr]
alanr has joined #swd
17:48:48 [ivan]
alistair: there are two wiki pages, and the shiny new skos web site
17:48:54 [alanr]
17:49:06 [ivan]
... the idea is that anyone who has an implementation should be able to add it
17:49:27 [RalphS]
->
SKOS home page
17:50:07 [ivan]
antoine: use case #10 birnlex, a lexicon for neurosciences
17:50:17 [ivan]
... aims at providing several vocabularies
17:50:28 [ivan]
... they are the same as the bio-zen use case
17:50:47 [ivan]
... there is a mixture of different metadata models, skos, dc, foaf, etc
17:51:27 [ivan]
alistair: all they want is some of the properties like pref label, alt label, not the structure of labels
17:51:51 [ivan]
... if skos has good annotating support, people may just want to use that
17:52:10 [ivan]
guus: i interpreted this as having a lot of need for various types of relations
17:52:14 [Zakim]
+Tom_Baker
17:53:17 [ivan]
... there are many things in the examples: term relations with other semantics
17:54:19 [ivan]
alan: the argument is that there is a desire, on the part of users, for the types of relationships that we may need in general
17:54:26 [Zakim]
+??P5
17:54:28 [Zakim]
-TomB
17:54:35 [ivan]
... ie, people insert tags into the rdf labels,
17:54:57 [ivan]
... shows the importance of this issue
17:54:58 [RalphS]
Alan: the BIRNlex use case may bring in issues for our vocabulary management work
17:55:29 [ivan]
alistair: this is a bit of annotating just about everything
17:55:51 [alanr]
17:55:51 [ivan]
... they do not want skos broader and narrower
17:56:09 [ivan]
... it is more that they want all type of documentation/annotation support
17:56:54 [ivan]
guus: the issue here is that you have your concepts
17:57:04 [ivan]
... how to document/annotate various concepts
17:57:09 [ivan]
... and what skos give you on that
17:58:38 [ivan]
action alan: write down the general documentation requirements, in particular to those that are related to literal values, and how to represent that in skos
17:59:10 [ivan]
antoine: use case #11 quite similar
17:59:18 [ivan]
... I have not read it in much details
17:59:47 [ivan]
... it is once again to represent all these various vocabularies and linking/importing skos concepts to an 'own' ontology
17:59:58 [ivan]
... and extending skos relations
18:00:21 [ivan]
guus: my proposal: there are still use cases coming in
18:00:38 [ivan]
... we have to include facilities to evaluate use cases
18:01:04 [ivan]
... we should go through the list of the requirements and see if we can refine this
18:01:11 [ivan]
... and go through the issues' list
18:02:55 [ivan]
---- lunch break ----
18:03:13 [RalphS]
[one hour lunch break]
18:03:16 [Zakim]
-TomB
18:03:34 [Elisa]
See you in a bit -- elisa
18:03:52 [Zakim]
-Elisa_Kendall
18:17:26 [timbl]
timbl has joined #swd
18:59:51 [berrueta]
berrueta has joined #swd
19:00:49 [Zakim]
+ +1.617.475.aabb
19:02:13 [TomB_]
TomB_ has joined #swd
19:03:06 [Zakim]
-TomB
19:04:47 [RalphS]
zakim, Stephen_Williams has arrived in meetingroom
19:04:47 [Zakim]
+Stephen_Williams; got it
19:04:48 [Zakim]
+??P5
19:04:54 [RalphS]
zakim, timbl has left meetingroom
19:04:54 [Zakim]
-TimBL; got it
19:05:00 [aliman]
aliman has joined #swd
19:05:09 [RalphS]
scribenick: RalphS
19:05:47 [RalphS]
-> SKOS Requirements sandbox
19:08:17 [RalphS]
-- R0. Information accessible in distributed setting
19:08:33 [RalphS]
Guus: is this a requirement on SKOS?
19:09:11 [RalphS]
Antoine: doesn't seem to change anything about SKOS or what it represents
19:09:56 [RalphS]
Guus: seems to be a general Web requirement
19:10:03 [RalphS]
Ralph: comes with RDF and the Semantic Web
19:10:37 [RalphS]
RESOLVED: drop R0. Information accessible in distributed setting as not SKOS-specific
19:10:45 [RalphS]
-- R1. Representation and access to relationship between concepts
19:10:57 [RalphS]
Guus: s/relationship/relationships/
19:12:14 [RalphS]
Bernard: "displaying or searching concepts" might give the impression of constraining our scope
19:12:26 [RalphS]
... e.g. excluding annotation
19:12:57 [RalphS]
Guus: how about "representing relationships between concepts"
19:13:16 [RalphS]
... the ability to represent hierarchical and non-hierarchical relationships between concepts
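[scribe note: the rephrased requirement could be sketched as a tiny SKOS fragment; illustrative only, the ex: URIs are hypothetical and not from the minutes]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/vocab/> .

# hierarchical relationships between concepts
ex:screens   skos:broader  ex:furniture .
ex:furniture skos:narrower ex:screens .

# a non-hierarchical (associative) relationship
ex:screens   skos:related  ex:roomDividers .
```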
19:13:39 [RalphS]
-- R2. Representation and access to basic lexical values (labels) associated to concepts
19:14:30 [RalphS]
Antoine: "basic" as in "simple" as compared to more sophisticated scope notes
19:15:09 [RalphS]
Guus: basic lexical _information_ or do you really mean to restrict to _labels_ ?
19:15:39 [TomB_]
q+ to ask whether we are dropping "access to" in R1 and R2?
19:15:52 [RalphS]
Guus: "access" not needed
19:16:06 [Zakim]
TomB_, you wanted to ask whether we are dropping "access to" in R1 and R2?
19:16:34 [RalphS]
Tom: support dropping "access to"
19:16:52 [RalphS]
-- R3. Representation of links between labels associated to concepts
19:17:08 [RalphS]
Guus: we have an issue related to this
19:17:28 [RalphS]
... this requirement may change after resolving the issue
19:18:29 [RalphS]
Alistair: we could suggest a point of view without making a hard requirement
19:18:50 [RalphS]
... while reviewing all these requirements today
19:19:08 [RalphS]
Guus: I suggest that any requirement with a related issue be marked as "soft"
19:19:37 [Zakim]
+Elisa_Kendall
19:19:45 [RalphS]
-- R4. Representation of glosses and notes attached to vocabulary concepts
19:20:18 [RalphS]
Guus: "notes" means "scope notes"?
19:20:20 [RalphS]
Antoine: yes
19:20:48 [RalphS]
Guus: so use the well-known term "scope notes"
19:20:58 [RalphS]
Antoine: should we include administrative notes?
19:22:03 [TomB_]
q+ to suggest "Representation of lexical information in multiple languages"
19:22:16 [RalphS]
Jon: suggest "glossaries" instead of "glosses"
19:22:33 [Zakim]
TomB_, you wanted to suggest "Representation of lexical information in multiple languages"
19:23:29 [RalphS]
Guus: I thought there is a distinction between a glossary and a scope note
19:23:48 [RalphS]
Alistair: what's the difference between 'gloss' and 'definition', then?
19:23:56 [RalphS]
... SKOS hasn't used the term 'gloss' previously
19:24:38 [RalphS]
Guus: "representation of textual descriptions", with text mentioning definitions, scope notes, ...
19:24:48 [RalphS]
-- R{6,5}. Multilinguality
19:25:11 [RalphS]
Tom: suggest "Representation of lexical information in multiple languages"
19:25:21 [TomB_]
yes
19:25:26 [RalphS]
Bernard: multiple _natural_ languages?
19:25:37 [RalphS]
Guus: yes, good addition
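[scribe note: this requirement maps naturally onto RDF language-tagged literals; a sketch with a hypothetical concept URI]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/vocab/> .

# one concept, lexical information in multiple natural languages
ex:furniture skos:prefLabel "furniture"@en , "meubles"@fr ;
             skos:altLabel  "furnishings"@en .
```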
19:25:44 [RalphS]
-- R6. Descriptor concepts and non-descriptor ones
19:26:19 [RalphS]
Guus: distinction between concepts intended to be used for indexing and other concepts?
19:26:21 [RalphS]
Antoine: yes
19:27:01 [RalphS]
... what I had in mind was the existing skos:subject
19:27:17 [RalphS]
... some concepts cannot be used as subject relationships
19:27:40 [RalphS]
Guus: qualifiers are still relevant to indexing
19:28:55 [RalphS]
... e.g. AAT vocabulary
19:29:29 [RalphS]
... Furnishings ... furniture ... <furniture by form or function> ... screens
19:29:40 [RalphS]
... the terms in <...> are not meant for indexing
19:29:56 [RalphS]
Alistair: many folk would not consider the <...> to be concepts; they call them "node labels"
19:30:13 [RalphS]
... they are labels for a grouping of concepts, the groupings are called 'arrays'
19:30:23 [RalphS]
... they say the node label does not represent a 'concept'
19:31:07 [RalphS]
... in the British standard it is quite clear that the node labels are only used in a certain way
19:31:29 [RalphS]
... but AAT adds things to the thesaurus beyond the British standard
19:31:53 [RalphS]
... it's just a matter of us wording this requirement correctly
19:33:06 [RalphS]
... consider Mandragore; you're not supposed to use things from levels 1 and 2
19:33:31 [RalphS]
... but the British standard demonstrates a requirement to be able to label groupings
19:34:11 [RalphS]
Guus: propose to rephrase as "the ability to distinguish between concepts to be used for indexing and for non-indexing"
19:34:20 [RalphS]
Bernard: is this really a requirement or just an issue?
19:34:34 [RalphS]
Guus: is this in the ISO standard?
19:34:48 [RalphS]
Alistair: no, in ISO thesaurus any concept can be used for indexing
19:35:32 [RalphS]
... there's no a-priori reason why something not intended for indexing in one context would be inappropriate for use in another context
19:35:45 [RalphS]
Guus: suggest R6 is a soft requirement
19:36:26 [RalphS]
... and add a new requirement having to do with grouping
19:36:28 [RalphS]
...
19:37:41 [RalphS]
... "the ability to include grouping constructs in concept hierarchies" -- as a soft requirement
19:38:11 [RalphS]
Alistair: hierarchies are not the only place where node labels can be used
19:38:20 [RalphS]
... node labels are also used in related terms
19:38:28 [aliman]
see z39.19
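[scribe note: one candidate reading of the grouping requirement is the SKOS collection mechanism, under which an AAT-style node label like <furniture by form or function> labels an 'array' of concepts without itself being a concept; a sketch, URIs hypothetical]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/aat/> .

# the node label names a grouping ('array'), not a concept,
# so it never takes part in broader/narrower chains
ex:furnitureByFormOrFunction a skos:Collection ;
    skos:prefLabel "<furniture by form or function>"@en ;
    skos:member ex:screens , ex:tables .
```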
19:38:35 [RalphS]
-- R7. Composition of concepts
19:39:07 [RalphS]
Guus: is this like conjunction and disjunction?
19:39:25 [RalphS]
Alistair: the terms 'conjunction' and 'disjunction' don't really make sense as we're not talking about sets of things
19:39:46 [RalphS]
... the classical way of talking about this is to talk about 'coordination', and 'coordination of things'
19:40:04 [RalphS]
... I'm afraid to use set-theoretic language, as this would be jumping the gun
19:40:28 [RalphS]
... we're not talking about True and False or sets, rather we're talking about concepts
19:40:44 [RalphS]
... 'compound concepts' is a term used in the thesaurus world
19:41:37 [RalphS]
... 'post-coordination' usually means that things are coordinated at search time but it typically really just means queries with more than one thing
19:41:56 [RalphS]
... I don't recommend referring to pre- or post-coordination
19:42:18 [RalphS]
Guus: I recommend linking 'coordination' to an explanation
19:42:31 [RalphS]
Alistair: I'd be happy using 'composition' rather than 'coordination'
19:43:17 [RalphS]
Guus: let's categorize into 'candidate requirements' and 'accepted requirements' (rather than 'hard' and 'soft')
19:43:25 [RalphS]
-- R8. Vocabulary interoperability
19:44:05 [RalphS]
Guus: mapping at the level of equivalence, more specific, less specific
19:44:12 [RalphS]
... further things under discussion
19:45:28 [RalphS]
... suggest dropping this, as we need to be able to test
19:46:01 [RalphS]
Ralph: is R8 the general case and R12 a specific case?
19:47:03 [RalphS]
Jon: I have another use case; our system supports the expression of relationships between terms in vocabularies we own and terms in vocabularies we don't own
19:47:26 [RalphS]
... the reciprocal relationship would need to be endorsed by the owner of the other vocabulary
19:48:14 [RalphS]
Guus: I can make equivalence statements in my own ontology and others can choose to use mine or not use mine
19:48:40 [RalphS]
... valid to have different statements about mapping and determine to which you commit
19:49:00 [RalphS]
Jon: imagine two indexing systems but a single retrieval system
19:49:59 [RalphS]
Alan: a search for A should include B but not vice-versa?
19:50:03 [RalphS]
Jon: yes
19:50:21 [RalphS]
Fabien: is this specific to equivalence or is it a filter on the source?
19:51:12 [RalphS]
Guus: back when we did OWL, I had to spend a long time defending owl:imports
19:51:41 [RalphS]
... this may be outside the SKOS language, at a different level of the SemWeb stack
19:51:59 [RalphS]
Jon: this is not about trust but about representing the intent of the thesaurus writer
19:52:21 [RalphS]
Guus: but it's at a reasoning level
19:55:08 [RalphS]
Alistair: we refer to 'SKOS concepts' and 'SKOS concept schemes'; perhaps we can also talk about 'mapping schemes'
19:55:17 [RalphS]
Guus: like provenance?
19:55:36 [RalphS]
Bernard: why isn't a concept scheme the same as a mapping scheme
19:56:01 [RalphS]
Alistair: they're handled differently by applications
19:56:11 [RalphS]
... an application wouldn't display a mapping scheme as a hierarchy
19:56:30 [RalphS]
Bernard: but if you dereference all the concepts in a mapping scheme wouldn't you end up with a concept scheme?
19:56:58 [RalphS]
Alistair: there's currently a loose recommendation that two concepts in a single concept scheme do not share a label
19:57:12 [RalphS]
... this might be expressed as a logical constraint on a concept scheme
19:57:23 [RalphS]
... but this constraint would be inappropriate for a mapping scheme
19:58:23 [RalphS]
... if someone wants to capture in their RDF graph that there exists a set of mappings that he authored ....
19:59:49 [RalphS]
... a concept scheme has a notion of 'containment'
20:00:10 [RalphS]
... different integrity constraints if you're just collecting some mappings
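[scribe note: the 'loose recommendation' Alistair mentions is the kind of integrity constraint a tool could check mechanically; a toy sketch in plain Python, with triples modelled as tuples and entirely hypothetical data]

```python
# Check the loose recommendation that, within one concept scheme,
# no two concepts share a preferred label. Triples are modelled as
# simple (subject, predicate, object) tuples; all data is hypothetical.

PREF_LABEL = "skos:prefLabel"

def duplicate_pref_labels(triples):
    """Return the labels used as skos:prefLabel by more than one concept."""
    seen = {}  # label -> set of subjects carrying it
    for s, p, o in triples:
        if p == PREF_LABEL:
            seen.setdefault(o, set()).add(s)
    return {label for label, subjects in seen.items() if len(subjects) > 1}

scheme = [
    ("ex:bank1",   PREF_LABEL, "bank"),
    ("ex:bank2",   PREF_LABEL, "bank"),     # clash: two concepts, one label
    ("ex:screens", PREF_LABEL, "screens"),
]

print(duplicate_pref_labels(scheme))  # -> {'bank'}
```

A mapping scheme, by contrast, would simply skip this check.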
20:01:07 [RalphS]
Alan: if you use owl:sameAs, you're making a bi-directional assertion
20:01:23 [RalphS]
... but the author of a vocabulary might want only a one-way assertion
20:02:01 [RalphS]
Antoine: this is related to the issue Sean raised about containment
20:02:21 [RalphS]
zakim, Jonathan has left meetingroom
20:02:21 [Zakim]
RalphS, I was not aware that Jonathan was in meetingroom
20:02:38 [RalphS]
zakim, who's here?
20:02:38 [Zakim]
On the phone I see MeetingRoom, TomB_ (muted), Elisa_Kendall
20:02:39 [Zakim]
MeetingRoom has Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, Ralph, Fabien, Ben, Jon, Jonathan_Rees, AlanR, Stephen_Williams
20:02:41 [Zakim]
On IRC I see aliman, TomB_, berrueta, timbl, alanr, JonP, benadida, Guus, FabienG, Elisa, Antoine, ivan, TomB, RRSAgent, Zakim, RalphS
20:02:48 [RalphS]
zakim, Jonathan_Rees has left meetingroom
20:02:48 [Zakim]
-Jonathan_Rees; got it
20:03:16 [RalphS]
Bernard: is there something about R8 that is not included in R12?
20:05:43 [RalphS]
Bernard: "it shall be possible to record provenance information on mappings between concepts in vocabularies"
20:05:55 [RalphS]
s/vocabularies/different vocabularies/
20:06:07 [TomB_]
q+ to raise a point
20:06:12 [RalphS]
RESOLUTION: R8 reworded to "it shall be possible to record provenance information on mappings between concepts in different vocabularies"
20:06:19 [Zakim]
TomB_, you wanted to raise a point
20:07:18 [RalphS]
Tom: we use "vocabulary", 'concept scheme', and 'SKOS model'; let's stick to one term
20:07:40 [RalphS]
Guus: I propose we drop 'concept scheme' and use only 'vocabulary'
20:08:46 [RalphS]
Alistair: the ISO standard does not distinguish between term-oriented and concept-oriented; I made up this distinction
20:09:18 [RalphS]
... you _can_ talk about whether the data model is term-oriented or concept-oriented, but not the vocabulary itself
20:09:44 [RalphS]
Guus: consider a 'bank' vs. 'financial institution' example
20:10:08 [RalphS]
... 'bank' implicitly defines a concept, implicitly it's a lexical label
20:10:21 [RalphS]
... consequence for a thesaurus is that the term 'bank' cannot be used anywhere else
20:11:00 [RalphS]
... in practice this distinction is useful, which is why I'd prefer to not use the term 'concept scheme'
20:11:14 [RalphS]
... 'vocabulary' is more general and makes less commitments
20:12:12 [RalphS]
Tom: R10 is really talking about the extension of the SKOS vocabulary, not the SKOS model
20:12:28 [RalphS]
... could be confusing if we use the term 'vocabulary' generically
20:13:24 [RalphS]
Alistair: the 'concept scheme' idea came from DCMI
20:14:25 [RalphS]
Ralph: it's probably easier to refer to "the SKOS vocabulary" and "a SKOS concept scheme" to differentiate between the SKOS terms and a thesaurus written using the SKOS terms
20:14:44 [RalphS]
Bernard: let's define terms near the start of the document
20:15:31 [RalphS]
Antoine: I tried to consistently use 'vocabulary' for applications of SKOS and 'model' for SKOS itself
20:16:05 [RalphS]
Alistair: there are implicit integrity constraints currently expressed only in prose
20:16:37 [RalphS]
Tom: this is a big question that deserves more thought, let's not decide now
20:17:13 [RalphS]
... SKOS model is like a DCMI application model
20:17:26 [RalphS]
... R10 talks about extending both the SKOS model and the vocabulary of properties
20:18:24 [RalphS]
Guus: for the time being, let's distinguish between the terms in the SKOS vocabulary and the application terms that use SKOS
20:18:29 [TomB_]
+1
20:18:44 [RalphS]
... for now, let's use "SKOS vocabulary" and "concept scheme", respectively, for these two
20:18:50 [TomB_]
+1 on "SKOS vocabulary" versus "SKOS concept scheme"
20:19:17 [RalphS]
-- R9. Extension of vocabularies
20:19:25 [RalphS]
now "R9. Extension of concept schemes"
20:20:12 [RalphS]
Alistair: how do I express that I want to import another concept scheme into my own, or import only a part of another concept scheme?
20:20:21 [RalphS]
Bernard: why import, just reference?
20:20:36 [RalphS]
Alistair: how does a browser know the boundary of a new concept scheme?
20:20:58 [RalphS]
Bernard: related to Protege issue of how to represent externally-defined items
20:21:23 [RalphS]
Guus: do we include maintenance properties, revision information, etc.?
20:21:49 [RalphS]
... I suggest we add a requirement related to versioning information
20:22:03 [RalphS]
Bernard: R9 is about tools
20:23:39 [RalphS]
Ralph: I suggest we keep the vocabulary management work, including versioning, as a separate task and not mix it into SKOS right now
20:24:42 [RalphS]
Jon: example; replacing a single term with two terms
20:25:52 [RalphS]
... not necessarily establishing a broader/narrower relationship but dropping the old term
20:26:03 [RalphS]
Guus: we handle this in OWL by deprecating old terms
20:26:23 [RalphS]
Alistair: my approach is to worry first about how to represent a static model
20:27:26 [RalphS]
Guus: suggest deferring versioning questions to separate vocabulary management work and later evaluate whether any SKOS-specific properties are needed
20:28:03 [RalphS]
Alistair: we have requests to be able to define concept schemes as 'we use everything in that scheme with the following additions'
20:29:49 [RalphS]
... I need to find a better use case to motivate this
20:29:54 [RalphS]
-- R10. Extendability of SKOS model
20:30:03 [RalphS]
now "R10. Extendability of SKOS vocabulary"
20:30:32 [RalphS]
Guus: means "local specialization of SKOS vocabulary"
20:30:51 [RalphS]
... propose to rename this to "local specialization of SKOS vocabulary"
20:31:01 [RalphS]
... get this for free
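[scribe note: 'for free' presumably refers to ordinary RDFS specialization, e.g. the following sketch; my: is a hypothetical namespace]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix my:   <http://example.org/myschema/> .

# a local specialization of a SKOS property; tools that understand
# skos:altLabel can still interpret my:tradeName via the subproperty link
my:tradeName rdfs:subPropertyOf skos:altLabel .
```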
20:31:22 [RalphS]
-- R11. Attaching resources to concepts
20:31:53 [RalphS]
Antoine: this is skos:subject; annotating resource
20:32:05 [RalphS]
Fabien: inverse of dc:subject?
20:32:30 [RalphS]
Alistair: skos:subject is dc:subject with a range constraint
20:33:12 [RalphS]
Guus: propose to rename this to "Ability to represent the indexing relationship between a resource and a concept that indexes it"
20:33:26 [RalphS]
... I suggest this is a candidate requirement
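[scribe note: the renamed requirement, as a one-triple sketch; URIs hypothetical, skos:subject as in the SKOS Core drafts of the time]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://example.org/> .

# the indexing relationship between a resource and the concept that indexes it
ex:doc42 skos:subject ex:screens .
```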
20:33:55 [RalphS]
-- R12. Correspondence/Mapping links between concepts from different vocabularies
20:34:22 [RalphS]
now "... different concept schemes"
20:34:58 [RalphS]
Bernard: related to mapping between labels in different concept schemes; that can be a separate requirement
20:35:25 [RalphS]
Guus: at a minimum, equivalent, less/more specific, and related
20:35:36 [RalphS]
Alistair: also composition
20:36:00 [RalphS]
Alan: is 'related' a superproperty of 'broader' or 'narrower'?
20:36:03 [RalphS]
Alistair: no
20:36:15 [RalphS]
Alan: please document this explicitly in the spec
20:38:18 [RalphS]
Guus: propose a new candidate requirement: Correspondence mapping links between concepts in different concept schemes
20:38:30 [RalphS]
s/between/between lexical labels of/
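[scribe note: a sketch of such correspondence links between two concept schemes, using the mapping property names under discussion at the time; namespaces and URIs illustrative]

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix a:    <http://example.org/schemeA/> .
@prefix b:    <http://example.org/schemeB/> .

# equivalence, and a less-specific-than link, across schemes
a:bank    skos:exactMatch b:financialInstitution .
a:screens skos:broadMatch b:furniture .
```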
20:38:44 [RalphS]
-- R13. Compatibility between SKOS and other metadata models and ontologies
20:39:14 [RalphS]
Antoine: may not bring any additional requirements on representational features
20:40:08 [RalphS]
Guus: what other models do we want to be compatible with?
20:40:19 [RalphS]
Alistair: Dublin Core
20:40:32 [RalphS]
... note that changes have been made to Dublin Core specifically to align it with SKOS
20:40:33 [Elisa]
Another metadata standard we should consider here is ISO11179
20:41:11 [RalphS]
Elisa: ISO 11179 is another related standard, on which Daniel Rubin and I have spent time recently
20:41:27 [RalphS]
... Daniel is interested in 11179 because many biomedical ontologies use it
20:41:43 [RalphS]
... by mapping 11179 to SKOS we bring a lot of those into the RDF world
20:42:14 [RalphS]
Alistair: 2788 is a thesaurus standard and is very different from 11179, which is a metadata model
20:43:01 [RalphS]
... there is a particular part of 11179 that is intended to talk about classification schemes
20:43:09 [RalphS]
... it's obvious how SKOS and that part of 11179 relate
20:43:52 [RalphS]
Alan: what does "compatible with" mean?
20:44:45 [RalphS]
Bernard: does "compatible with" mean "does not violate the [Dublin Core] abstract model"?
20:45:12 [RalphS]
Ralph: what sort of test cases could we construct to decide "is compatible" or "is not compatible"?
20:45:53 [RalphS]
Alistair: could we translate a data instance using 11179 to a data instance in SKOS? how much data loss? how much data loss in transforming back?
20:46:15 [RalphS]
Alan: the scope of 11179 is much larger than that of SKOS
20:47:01 [RalphS]
Guus: it would be good to identify specific other models
20:47:56 [RalphS]
... e.g. 2788, 11179 [part 3]
20:48:10 [RalphS]
Alistair: 5964 (multilingual)
20:48:43 [RalphS]
... I'd put 2788 as a stronger requirement than 5964
20:48:48 [RalphS]
... interpretation of 5964 is harder
20:49:49 [RalphS]
Alan: is SKOS a "metadata model"?
20:50:58 [RalphS]
Guus: propose omitting the general requirementsR13 and R15 and adding specific requirements for 2788, 11179.3
20:51:13 [RalphS]
Alan: is all of 11179.3 relevant to SKOS? there's a lot of stuff in there
20:51:47 [RalphS]
Elisa: I am happy to help narrow the scope
20:53:00 [RalphS]
... and the US contingent in the 11179 group are physically close to me
20:53:13 [RalphS]
s/sR13/s R13/
20:53:35 [RalphS]
-- R.14 OWL-DL compatibility
20:54:04 [RalphS]
Guus: we can talk about a SKOS representation that is OWL-DL compliant
20:54:35 [RalphS]
Alan: make it formal that annotation [sub]properties are allowed?
20:54:58 [RalphS]
Guus: we have to be sure that we can complete our deliverables without requiring another WG to be rechartered
20:55:09 [RalphS]
... we can make comments to the OWL comment list about annotation properties
20:55:28 [RalphS]
Alan: there's a partial workaround available to SKOS
20:56:08 [RalphS]
Ivan: what does DL compatibility mean when you have a processing model that includes some rules into SKOS?
20:56:24 [RalphS]
... regardless of annotation properties, SKOS is already out of DL
20:56:36 [RalphS]
Alistair: the annotation properties don't have to be used
20:56:52 [RalphS]
-- R.16 Checking the consistency of a vocabulary
20:57:03 [aliman]
s/annotation properties/rules/
20:57:04 [RalphS]
Guus: issue raised earlier about semantics
20:57:10 [TomB_]
q+ to suggest "consistency of a concept scheme"
20:57:35 [RalphS]
[I think Tom's suggestion is agreed implicitly]
20:57:37 [TomB_]
q-
20:58:43 [RalphS]
Jon: I've been updating the sandbox wiki in realtime
20:58:44 [JonP]
21:03:40 [RalphS]
[break]
21:05:07 [RalphS]
zakim, Kjetil_Kjernsmo just arrived in meetingroom
21:05:07 [Zakim]
+Kjetil_Kjernsmo; got it
21:11:26 [Zakim]
-TomB_
21:11:50 [Zakim]
+??P5
21:12:16 [kjetilk]
kjetilk has joined #swd
21:13:51 [TomB_]
my
21:19:47 [aliman]
TOPIC: Best Practise Recipes
21:19:54 [aliman]
scribenick: aliman
21:20:42 [aliman]
guus: finish by 5:45
21:21:24 [aliman]
steve: Steve Williams, hyperforms technologies, participating in W3C 2-3 in binary XML WGs
21:21:25 [JonP]
Current issues list:
21:21:26 [timbl]
timbl has joined #swd
21:21:57 [aliman]
... interest in semweb, related technologies, interested in AI.
21:22:14 [RalphS]
zakim, TimBL has arrived in meetingroom
21:22:14 [Zakim]
+TimBL; got it
21:22:19 [aliman]
guus: three main topics here, one is on SKOS, now going on to recipes for publishing RDF vocabs, tomorrow move on to RDFa
21:23:09 [timbl]
timbl has changed the topic to: SWD f2f
21:23:34 [aliman]
kjetil: opera software, interested in semweb since 98, bumped into danbri, then graduate student of astrophysics, then hired by opera, mostly programming, chaals getting me into more on semantic web.
21:23:49 [aliman]
... responsible for my opera foaf stuff
21:24:22 [aliman]
guus: moving on to discussion of recipes for publishing RDF, have as input recipes document from SWBPD, incomplete document, Jon action to generate issues list, now on wiki
21:24:36 [JonP]
21:25:16 [aliman]
... suggest briefly review this list, pick out critical issues, spend time discussing critical issues, diego can play a role because has proposed resolution for one of these issues (we can discuss and decide on)
21:25:58 [aliman]
jon: first four issues left over from previous working group, diego's been working on first issue.
21:26:49 [ivan]
21:27:08 [RalphS]
^Diego's proposed resolution to COOKBOOK-I1.1
21:27:22 [aliman]
diego: [issue 1.1] already on mailing list, there was a TODO tag, issue regards configuration of apache to serve vocabularies, apache uses configuration files with directives, one of these directives is the overrides ...
21:27:46 [JonP]
Diego's verification email:
21:28:00 [aliman]
... in original doc there was a TODO tag next to overrides, to verify this is correct, I checked this and discovered that the line was correct, no additional overrides required, both overrides are required.
21:28:08 [aliman]
... proposed to remove this TODO tag.
21:28:34 [RalphS]
-> @@TODO from Recipes WD
21:28:58 [aliman]
jon: is that resolution acceptable? how do we handle? seems to be fine.
21:29:15 [aliman]
... vote as a group?
21:29:21 [aliman]
guus: can we write a test case?
21:29:27 [aliman]
diego: I have test case.
21:29:50 [aliman]
jon: diego sent around email, describing test cases and results.
21:29:55 [berrueta]
-> test cases
21:30:02 [aliman]
guus: further discussion?
21:30:09 [aliman]
ralph: looks good to me.
21:30:15 [aliman]
aliman: me too.
21:30:52 [aliman]
PROPOSED to resolve issue 1.1 as per email of
21:31:12 [aliman]
ralph seconds
21:31:19 [aliman]
no objections
21:31:21 [aliman]
RESOLVED
21:32:01 [aliman]
ACTION: jon to update issue list as per resolution of issue 1.1
21:32:18 [aliman]
guus: next issue?
21:32:45 [aliman]
jon: skip over second issue, because the TODO is that it references 6 which doesn't exist
21:32:58 [aliman]
... issue 1.2 and 1.4 are essentially the same.
21:33:09 [aliman]
guus: PROPOSED to drop issue 1.2
21:33:17 [aliman]
diego seconds
21:33:22 [aliman]
no objections
21:33:28 [aliman]
RESOLVED to drop issue 1.2
21:33:41 [aliman]
ACTION: jon to update issue list as per dropping of 1.2
21:34:08 [aliman]
jon: issue 1.3 - why performing content negotiation on the basis of the "user agent" header.
21:34:21 [aliman]
... is not considered good practice.
21:34:48 [aliman]
bernard: whole section should be in an appendix, why in main body of text?
21:35:48 [aliman]
aliman: karl suggested move whole content negotiation to appendix
21:36:25 [aliman]
timbl: if you add features based on the user agent, it stunts the deployment of new browsers, e.g. for folks at opera; it resulted in user agents lying about who they are
21:36:50 [aliman]
... e.g. some browsers ship with lying user agent fields, unsatisfactory, better to look at the mime types
21:37:18 [aliman]
... sometimes in practice necessary to look at user agent field to pick up bugs, where you know there are specific bugs, particular trap for particular browser.
21:37:47 [aliman]
jon: potential resolution is to explain the problem with using user agent, as per stunting development?
21:37:51 [aliman]
timbl: yes
21:38:06 [aliman]
diego: existing doc to explain this?
21:40:56 [aliman]
ralph: we didn't want to break semantic web applications which don't include accept header, so set RDF as default response
21:43:33 [aliman]
timbl: two cases, one is you're serving data, but if you are trying to do the trick of serving either an rdf or html version, but only if you are content negotiating ... TAG says something about identity of resource
21:43:44 [aliman]
aliman: but use 303 so don't have to have same info content
21:43:48 [aliman]
timbl: yes
21:44:02 [aliman]
bernard: uncomfortable with hack
21:44:15 [aliman]
jon: real world, applies to IE7?
21:44:16 [timbl]
The TAG I think says the same URI may conneg go to different representations ... but they should convey the same information
21:44:53 [aliman]
guus: someone take action to look at IE7
21:45:03 [aliman]
jon: regardless of IE7, should still leave hack in.
21:45:15 [aliman]
aliman: I agree
21:45:47 [aliman]
jon: issue 1.3 is actually to explain why the hack is slightly bad
21:45:56 [aliman]
ralph: new issue would be to look again at the hack
21:46:01 [aliman]
jon: two separate issues
21:47:23 [aliman]
ralph: if IE7 does the wrong thing, leave it, if IE7 does the right thing then drop the hack (except if you have to support a specific community)
21:48:06 [aliman]
bernard: I will raise this issue
21:48:35 [aliman]
ACTION: bernard to raise new issue re IE6 hack
21:48:53 [aliman]
ACTION: diego to look at IE7 accept headers
21:49:20 [aliman]
ralph: test cases?
21:49:28 [aliman]
aliman: just what's in the document already.
21:49:50 [aliman]
jon: 1.3 issue has been raised.
21:50:33 [aliman]
ralph: I move we consider 1.3 open - there is a TODO that needs to be done; timbl, how likely is it that the TAG will write something about use of the user agent header?
21:50:48 [aliman]
timbl: may be something already, otherwise need to send email to TAG
21:50:56 [aliman]
ralph: I will own this issue
21:51:11 [aliman]
ACTION: ralph propose resolution to issue 1.3
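[scribe note: the Accept-header half of the recipe under discussion looks roughly like this in Apache mod_rewrite; a sketch with illustrative paths, not the published recipe text]

```apache
RewriteEngine On

# clients explicitly asking for RDF/XML are 303-redirected to the RDF file
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^vocab$ /vocab/vocab.rdf [R=303,L]

# everyone else (including browsers sending no useful Accept header)
# gets the HTML documentation
RewriteRule ^vocab$ /vocab/index.html [R=303,L]
```

[the 'hack' debated above would add a User-Agent test in front of these rules for known-broken browsers]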
21:51:53 [aliman]
jon: move on to issue 1.4 ... recipe 6 is not there
21:52:01 [aliman]
bernard: do we need a recipe 6?
21:52:25 [RalphS]
scribenick: ralph
21:52:55 [RalphS]
Alistair: one of the reasons people like slash namespaces is because the response to a GET is specific to the requested resource and you can incrementally learn more with additional GETs
21:53:54 [RalphS]
... in recipe 5, if you request RDF you get redirected to a namespace document that describes everything
21:54:19 [RalphS]
... recipe 6 was intended to permit serving just a relevant chunk of RDF data
21:54:30 [RalphS]
TimBL: d2rdf does this
21:55:05 [RalphS]
... a SPARQL query can navigate a graph by recursively pulling in documents
21:55:39 [RalphS]
... it's good to remind people to include backlinks; dereferencing a student should give you a pointer back to the class
21:56:04 [aliman]
timbl: minimum spanning graph, RDF molecule
21:56:45 [aliman]
... patrick stickled CBD only arcs out
21:56:47 [timbl]
see:
21:56:55 [aliman]
... this is an important recipe to include
21:56:56 [RalphS]
s/stickled/stickler/
21:57:12 [aliman]
... proposed workshop for web conference about linked data, didn't have space.
21:57:25 [timbl]
The D2R Server generates linked data automatically.
21:57:25 [RalphS]
-> Concise Bounded Description [Stickler, 2005]
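[scribe note: a hypothetical recipe-6-style service could compute the 'relevant chunk' with a query; this SPARQL sketch also returns the backlinks Tim mentions, URI illustrative]

```sparql
CONSTRUCT {
  <http://example.org/vocab/screens> ?p ?o .
  ?s ?q <http://example.org/vocab/screens> .
}
WHERE {
  { <http://example.org/vocab/screens> ?p ?o }   # arcs out
  UNION
  { ?s ?q <http://example.org/vocab/screens> }   # arcs in (backlinks)
}
```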
21:57:26 [aliman]
jon: this should be opened, this recipe should be written.
21:57:40 [aliman]
guus: open means we work on it now.
22:00:15 [aliman]
22:00:37 [aliman]
jon: useful to say that as part of the recipe, best handled by a service that will handle it on the fly.
22:01:09 [aliman]
timbl: important to separate, e.g. FOAF files do it by hand, in some cases there is a lot of hand written stuff; the fact that something is generated automatically may apply to other recipes also.
22:01:34 [aliman]
ralph: we are a deployment group, rewriting nice URIs to query URIs, good to show this to get more deployment.
22:01:50 [aliman]
guus: we are in a position to write a resolution to this section, who can do?
22:01:59 [aliman]
ralph: examples of sparql services we can use?
22:02:29 [aliman]
timbl: geonames? d2r server. one does 303 redirect to URI encoded SPARQL query.
22:02:46 [aliman]
ralph: sounds like some code existed.
22:03:26 [aliman]
jon: two parts to this, first part is data, second part is server configuration. We're looking for a document fragment example and server config.
22:03:46 [aliman]
ralph: wordnet is an obvious choice, but the W3C needs to commit to a D2R service.
22:04:20 [aliman]
diego: I will own the issue
22:04:49 [aliman]
guus: maybe diego can talk with Ralph about wordnet, nice use case, widely used.
22:05:02 [aliman]
ralph: need to get web servers to support the service, but plausible.
22:05:16 [RalphS]
s/get web/get W3C web/
22:06:00 [aliman]
jon: issue 2.1 (QA comments)
22:06:50 [aliman]
... karl raised wordsmithing and structural comments, lots, something for each section, I couldn't break out individual issues, I'd like to propose we simply open this, I'll take ownership, I'll implement most of his suggestions and propose as modification to the document.
22:07:12 [aliman]
guus: comments from QA people, we owe them a response. Need to go through each response, say what we did.
22:07:45 [Zakim]
-TomB_
22:09:37 [aliman]
jon: Issue 2.1 ... raised by me (wiki lies) recipes are specific to apache server, may be applicable in non-apache environments, do we want to provide general template that describes recipes in general, or say that recipes can be implemented by a script.
22:09:43 [aliman]
bernard: general template is possible?
22:10:35 [aliman]
ralph: is there is one web master would recognise then can look at it. cookbook is very practical, make it simple for server admin to do it, if someone wants to submit recipes for other environments then good.
22:10:47 [Zakim]
+??P1
22:11:08 [aliman]
jon: common principles e.g. redirect based on content negotiation, I don't konw enough about other environments to say how.
22:11:12 [Guus]
[hi Tom]
22:11:18 [aliman]
ralph: like to encourage others to contribute recipes
22:11:34 [aliman]
jon: suggest we provide a place for people to submit new recipes for other environments
22:11:47 [aliman]
ralph: happy with mailing list for proposed translations.
22:12:00 [aliman]
ivan: wiki?
22:12:07 [aliman]
... esw wiki?
22:12:42 [RalphS]
alistair: the diagrams were intended to provide a schematic overview of the behavior we were trying to implement
22:12:56 [RalphS]
... hopefully this would give people enough information to implement in other environments
22:13:00 [RalphS]
scribenick: aliman
22:13:03 [aliman]
jon: leave this in a raised state? open it?
22:13:12 [aliman]
ivan: resolution to open wiki page.
22:13:29 [aliman]
ralph: willing to own issue, proposed resolution to create wiki page.
22:14:12 [aliman]
guus: publication schedule, can say we don't think it's a high priority if we think resources are limited.
22:14:32 [aliman]
ralph: should be ok, but may get not good configs
22:15:15 [aliman]
jon: issue 3.1 (raised by me) discussion about differentiating between versions, one reason we use redirects to supply most recent snapshot of a vocabulary
22:15:22 [aliman]
... actualy document.
22:16:24 [timbl]
q+
22:16:28 [RalphS]
Alistair: consider Dublin Core; it has a fixed URI that is redirected to the current version of the vocabulary
22:17:06 [RalphS]
... an application may be able to deal with versioned URIs, and access older snapshots of the vocabulary
22:17:30 [RalphS]
q+
22:17:48 [aliman]
jon: why not use mod_rewrite instead of redirects, can use redirects to make version ???
22:18:08 [aliman]
... proposing a complete suggestion for a naming convention for handling this sort of thing in a recipe ...
22:18:35 [timbl]
q+ to say 1) the redirect does not convey that semantics and so the semantics need sto be onveyedelsewhere and (b) the redirect is an overhead
22:18:41 [aliman]
... link is in extended requirements section of original doc
22:18:59 [RalphS]
->
extended requirements
22:19:06 [timbl]
... and (c) metatdat in URI is converd by the TAG in a new finding and is in gneral bad
22:20:30 [Zakim]
timbl, you wanted to say 1) the redirect does not convey that semantics and so the semantics need sto be onveyedelsewhere and (b) the redirect is an overhead
22:23:17 [aliman]
22:23:34 [aliman]
... TAG is aware of this but hasn't tackled from RDF point of view, I wrote an ontology ...
22:23:37 [aliman]
q+ to foo
22:23:43 [timbl]
22:24:46 [aliman]
22:25:10 [aliman]
... that would be ideal, ideal pattern which nobody does at the moment - don't know if I should ask this group or another to look into this.
22:25:17 [aliman]
... suggest this group push this out.
22:26:51 [aliman]
22:27:06 [aliman]
... consider this as candidate requirement for other VM work, not to try and solve for this doc (recipes).
22:28:09 [aliman].
22:28:19 [aliman]
... so we should reword this doc?
22:28:41 [aliman]
ralph: shows up on our deliverables page ... "principles for management..." we can point to this document.
22:29:10 [aliman]
guus: propose to leave this issue as raised, go on with recipes without resolving it, indicate that this issue is intended to be resolved by another doc.
22:29:22 [RalphS]
Ralph: specifically, our deliverable 3. Principles for Managing an RDF Vocabulary
22:29:26 [Zakim]
aliman, you wanted to foo
22:29:54 [aliman]
22:30:06 [RalphS]
Alistair: in anticipation of needing to use RDF to describe the relationship between a dynamic thing and static snapshots of that thing, I've published net/d4
22:30:21 [RalphS]
... d4 may provide a basis for discussion
22:31:03 [RalphS]
... also GRDDL seems to be in a similar space; making assertions about the relationship between various documents you might be able to access from a namespace
22:31:17 [aliman]
guus: 10 mins left, suggest we cover last two issues, 5 mins for each.
22:32:15 [aliman]
jon: issue 3.2 (testing) diego has written some unit tests, useful if we could provide a service for developers who wanted to utilise cookbook recipes, provide as a server validation service officially
22:32:37 [aliman]
... this would allow you to specify you wanted to test a particular server against a particular recipe
22:32:50 [aliman]
guus: not an issue with the current doc,
22:33:06 [aliman]
ralph: intermediate step to publish the test cases?
22:33:39 [aliman]
jon: thinking more like RDF validation service, you point service at URL and say which recipe.
22:34:01 [aliman]
... diego has already written the code, we just don't have the service.
22:34:07 [aliman]
ralph: you have test service?
22:35:13 [aliman]
timbl: great idea, presentation suggest that you may get more people who validation than go to the doc, so service could point people out to the doc, start from existing situation and lead people buy the hand to appropriate recipes.
22:36:06 [timbl]
(Service could be implemented in Javascript within the document ;-) not.
22:36:07 [aliman]
guus: question of timing, has to be synchronised.. From pragmatic view, suggest take an action to look at possibilities and report back, time frames and synchronisation.
22:36:15 [aliman]
ralph: can you commit to hosting?
22:36:21 [aliman]
diego: I can write the code.. .
22:36:42 [aliman]
timbl: can it run in a browser?
22:36:47 [aliman]
diego: runs on server side.
22:37:02 [aliman]
guus: before resolving, do some suggestion on the list about how to realise this.
22:37:19 [aliman]
ralph: I could put this in category of vocabulary mangement validator, then falls into big validator project we have.
22:37:56 [aliman]
timbl: rethink about how to support validators, logically if you're going to validate, you can go so many ways ... top of an iceberg.
22:38:10 [aliman]
guus: open issue, diego is owner, first to propose a timescale.
22:38:35 [aliman]
jon: issue 3.3 raised by diego, mod_rewrite is required for all recipes, but we don't say so.
22:38:43 [aliman]
ralph: and apparently not there by default.
22:38:55 [aliman]
guus: jon is to be issue owner.
22:39:26 [aliman]
ralph: probably worth saying here's what apache config file to go to to cause it to be loaded.
22:39:41 [aliman]
guus: close this dicussion on the recipes, thanks to all, moved to state where can see progress.
22:43:11 [aliman]
....
22:43:44 [aliman]
l... to keep to schedule, we need to have test suite stage by the summer.
22:44:01 [aliman]
... propose we adjourn for the day!
22:45:10 [benadida]
benadida has left #SWD
22:46:03 [Zakim]
-Elisa_Kendall
22:47:00 [TomB_]
RalphS, my phone number tomorrow...
22:47:35 [TomB_]
...will probably be xxxx
22:49:59 [Antoine]
rrsagent, pointer?
22:49:59 [RRSAgent]
See
22:50:29 [Zakim]
-MeetingRoom
22:58:35 [RalphS]
zakim, drop tom
22:58:35 [Zakim]
TomB_ is being disconnected
22:58:37 [Zakim]
SW_SWD(f2f)8:30AM has ended
22:58:39 [Zakim]
Attendees were +1.617.253.aaaa, Alistair, Antoine, Bernard, Guus, Diego, IvanHerman, TimBL, Ralph, Fabien, Ben, Jon, TomB, Elisa_Kendall, AlanR, Jonathan_Rees, +1.617.475.aabb,
22:58:41 [Zakim]
... Stephen_Williams, TomB_, Kjetil_Kjernsmo
22:59:05 [RalphS]
Tom are you comfortable with your number staying in the public record or do you want me to edit it out of the irc log?
22:59:34 [RalphS]
rrsagent, please draft minutes
22:59:34 [RRSAgent]
I have made the request to generate
RalphS
23:00:24 [RalphS]
rrsagent, bye
23:00:24 [RRSAgent]
I see 14 open action items saved in
:
23:00:24 [RRSAgent]
ACTION: Alan write up the preferredLabel modelling issue [1]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: Antoine to provide more use cases of uses of qualifiers [2]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: guus to to check that this broad/narrow is on the issues' list [3]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: antoine to capture the issue on capturing relationships between labels [4]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: antoine to to contact the submittors of #7 to see what they want to use skos for (as opposed to, say, owl) [5]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: alistair to summarizes the aspects of semantics of the skos data model [6]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: guus to check with veronique on the terms being outside the iso standard [7]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: alistair to rephrase the old issue of skos/owl-dl coexistence and semantics [8]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: alan to write down the general documentation requirements, in particular to those that are related to literal values, and how to represent that in skos [9]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: jon to update issue list as per resolution of issue 1.1 [10]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: jon to update issue list as per dropping of 1.2 [11]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: bernard to raise new issue re IE6 hack [12]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: diego to look at IE7 accept headers [13]
23:00:24 [RRSAgent]
recorded in
23:00:24 [RRSAgent]
ACTION: ralph propose resolutition to issue 1.3 [14]
23:00:24 [RRSAgent]
recorded in | http://www.w3.org/2007/01/22-swd-irc | CC-MAIN-2016-30 | refinedweb | 13,483 | 56.39 |
After we selected OCaml as the new language for 0install, I’ve been steadily converting the old Python code across. We now have more than 10,000 lines of OCaml, so I thought it’s time to share what I’ve learnt.
OCaml is actually a pretty small language. Once you’ve read the short tutorials you know most of the language. However, I did skip one interesting feature during my first attempts:
There are also “polymorphic variants” which allow the same field name to be used in different structures, but I haven’t tried using them.
I’ve since found a good use for these for handling command-line options…
The problem
The
0install command has many subcommands (
0install run,
0install download, etc), which accept different, but overlapping, sets of options. Running a command happens in two phases: first we parse the options, then we pass them to a handler function. We split the parsing and handling because the tab-completion and help system also need to know which options go with which command.
Using plain (non-polymorphic) variants I originally implemented it a bit like this (simplified). I had a single type which listed all the possible options:
Each command handler takes a list of options and processes them:
Each handler function has the same type:
zi_option list -> unit (they take a list of options and return nothing).
Finally, there is a table of sub-commands, giving the parser and handler for each one:
But those
assert false lines are worrying. An
assert false means the programmer believes the code can’t be executed, but didn’t manage to convince the compiler. If we declare that a subcommand accepts a flag, but forget to implement it, the program will crash at runtime (this isn’t as unlikely as it sounds, because we declare options in groups, so adding an option to a group affects several subcommands).
Polymorphic variants
Polymorphic variants are written with a back-tick/grave before them, and you don’t need to declare them before use. For example, we can declare
handle_run like this:
OCaml will automatically infer the type of this function as:
[< `Refresh | `Wrapper of string ] list -> unit
That is,
handle_run takes of list of options, where the options are a subset of
Refresh and
Wrapper. Notice that the
assert is gone.
Now you can call
handle_run (parse_run argv), and it’s a compile-time error if
handle_run doesn’t handle every option that
parse_run may produce.
There is, however, a problem when we try to put these functions in the
subcommands list. OCaml wants every list item to have the same type, and so wants every subcommand to handle every option. The compile then fails because they don’t.
My first thought to fix this was to declare an existential type. e.g.
I’m trying to say that each subcommand has a parser and a handler and, while we don’t know what subset of the options they process, the subsets are the same. Sadly, OCaml doesn’t have existential types.
However, we can get the same effect by declaring a class or closure:
This works because the
subcommand function has a for-all type (for all types
a, it accepts an
a parser and an
a handler and produces an object that doesn’t expose the type
a in its interface:
parse_and_run just has the type
string list -> unit.
However, if we want to expose the parser on its own (e.g. for the tab-completion) we have to cast it first. Here, the
parse method simply returns a
zi_option list, losing the information about exactly which subset of the options it might return (which is fine for the completion code). This allows all subcommand objects to expose the same interface:
So, I think this is rather nice:
- Every option displayed in the help for a command is accepted by that command.
- We don’t need any asserts in the handlers (indeed, adding the assert destroys the safety, since the handler will then accept any option).
One final trick: when matching variants you can use the
#type syntax to match a set of options. e.g. the real
handle_run looks more like this:
That is, it processes the run-specific options itself, while delegating common options (
--offline, etc) and storing selection options (
--version, etc) in a separate list to be passed to the selection code. The
select_opts list gets the correct sub-type (
select_option list). | http://roscidus.com/blog/blog/2013/08/31/option-handling-with-ocaml-polymorphic-variants/ | CC-MAIN-2015-35 | refinedweb | 745 | 65.76 |
Integration Process:
1. To start the integartion process, you need test merchant account and test credit card credentials to have the experience of overall transaction flow.
NOTE: Here, you need to make the transaction request to the test-server and not on the production-server. Once you are ready and understood the entire payment flow, you can move to the production server.
2. To initialise the transaction, you need to generate a post request to the below urls with the parameters mentioned below
3. Add the following information in the setting file using the details from your PayU account:
PAYU_MERCHANT_KEY = "Your MerchantID", PAYU_MERCHANT_SALT = "Your MerchantSALT", # Change the PAYU_MODE to 'LIVE' for production. PAYU_MODE = "TEST"
4. When the user click on the checkout button in your template, generate the mandatory parameter named "hash" using the get_hash() method.
from payu import get_hash from uuid import uuid4 data = { 'txnid':uuid4().hex, 'amount':10.00, 'productinfo': 'Sample Product', 'firstname': 'test', 'email': 'test@example.com', 'udf1': 'Userdefined field', } hash_value = get_hash(data)
5. Then, send a post request to the PayU server using the HTML form filled with the data submitted by the buyer containing all the fields mentioned above.
6. When the transaction "post request" hits the PayU server, a new transaction is created in the PayU Database. For every new transaction in the PayU Database, a unique identifier is created every time at PayU’s end. This identifier is known as the PayU ID (or MihPayID).
7. Then the customer would be re-directed to PayU’s payment page. After the entire payment process, PayU provides the transaction response string to the merchant through a "post response". In this response, you would receive the status of the transaction and the hash.
8. Similar to Step 4, you need verify this hash value at your end and then only you should accept/reject the invoice order. You can verfy this hash using check_hash() method.
from django.http import HttpResponse from payu import check_hash from uuid import uuid4 def success_response(request): if check_hash(request.POST): return HttpResponse("Transaction has been Successful.")
In this "django-payu" package, there are many other functions to Capture, Refund, Cancel etc., For the detailed documentation, see... | https://micropyramid.com/blog/django-payu-payment-gateway-integration/ | CC-MAIN-2018-34 | refinedweb | 365 | 56.86 |
- Super Bowl VII
Infobox SuperBowl
sb_name = VII
visitor =
Miami Dolphins
Washington Redskins
visitor_abbr = MIA
home_abbr = WAS
visitor_conf = AFC
home_conf = NFC
visitor_total = 14
home_total = 7
visitor_qtr1 = 7
visitor_qtr2 = 7
visitor_qtr3 = 0
visitor_qtr4 = 0
home_qtr1 = 0
home_qtr2 = 0
home_qtr3 = 0
home_qtr4 = 7
date =
January 14, 1973
stadium = Los Angeles Memorial Coliseum
city = Los Angeles, California
attendance = 90,182
odds = Redskins by 1
MVP = Jake Scott, Safety
anthem = Little Angels of Holy Angels Church,
Chicago
coin_toss = Tom Bell
referee = Tom Bell
halftime =
Woody Hermanand the Michigan Marching Band
network =
NBC
announcers =
Curt Gowdyand Al DeRogatis
rating = 42.7
share = 72
commercial = $88,000
last = VI
next = VIII
Super Bowl VII was an
American footballgame played on January 14, 1973at the Los Angeles Memorial Coliseumin Los Angeles, Californiato decide the National Football League(NFL) champion following the 1972 regular season. The American Football Conference(AFC) champion Miami Dolphins(17–0) defeated the National Football Conference(NFC) champion Washington Redskins(13-4), 14–7, and became the first, and presently the only team in the NFL to complete a perfect, undefeated season.
As the lowest scoring
Super Bowltopicked up a blocked field goal, batted it in the air, and Redskins' cornerback Mike Basscaught it and returned it 49 yards for a touchdown. Indeed, it was the longest period in a Super Bowl to datein Super Bowl V) to earn a Super Bowl MVP.
Background
Miami Dolphins
The Dolphins went undefeated during the season, despite losing their starting quarterback. In the fifth game of the regular season, starter
Bob Griesesuffered a fractured right leg and dislocated ankle. In his place, 38-year-old Earl Morrallled Miami to victory in their nine remaining regular season games, and was the 1972 NFL Comeback Player of the Year. Morrall had previously played for Dolphins head coach Don Shulawhen they were both with the Baltimore Colts, where Morrall backed up quarterback Johnny Unitasand started in Super Bowl III.
But Miami also had the same core group of young players who. The more-experienced Kiick, however,once again provided the run-based Dolphins with an effective deep threat option, catching 29 passes for 606 yards, an average of 20.9 yards per catch. Miami's offensive line, led by future hall of famers Jim Langerand Larry Littlewas also a key factor for the Dolphins' offensive production. And Miami's "No-Name Defense" (a nickname inspired by Dallas Cowboyshead coach Tom Landrywhen he could not recall the names of any Dolphins defenders just before Super Bowl VI), led by future hall of fame linebacker Nick Buoniconti, allowed the fewest points in the league during the regular season (171). Safety Jake Scott recorded five interceptions. Because of injuries to defensive linemen (at the beginning of the season the Dolphins were down to four healthy defensive linemen) defensive coordinator Bill Arnspargercreated what he called the "53" defense, in which."Nick Buoniconti, "Super Bowl VII," "Super Bowl: The Game of Their Lives," Danny Peary, editor. Macmillan, 1997. ISBN 0-02-860841-0]
The Dolphins' undefeated, untied regular season was the third in NFL history, and the first of the post-Merger era. The previous two teams to do it, the
1934and 1942 Chicago Bears, both lost those years' NFL Championship Games. The Cleveland Brownscompleted a perfect season in 1948, including a Championship victory, when they were part of the All-America Football Conference.
Washington. [Dave Hyde, "Still Perfect! The Untold Story of the 1972 Miami Dolphins," p239. Dolphins/Curtis Publishing, 2002 ISBN 0-9702677-1-1] four by 38-year-old Sonny Jurgensen, then replaced Jurgensen when he was lost for the season with an Achilles tendon injury. Their powerful rushing attack featured two running backs. Larry Brown gained 1,216 yards (first in the NFC and second in the NFL) in 285 carries during the regular season, caught 32 passes for 473 yards, and scored 12 touchdowns, earning him both the NFL Most Valuable Player Awardand the NFL Offensive Player of the Year Award. Running back Charley Harrawayhad 567 yards in 148 carries. Future hall of fame wide receiver Charley Taylorand wide receiver Roy Jeffersonprovided the team with a solid deep threat, combining for 84 receptions, 1,223 receiving yards, and 10 touchdowns.
Washington also had a solid defense led by linebacker
Chris Hanburger(four interceptions, 98 return yards, one touchdown), and cornerbacks Pat Fischer(four interceptions, 61 return yards) and Mike Bass(three interceptions, 53 return yards) Sieple.
Meanwhile, the Redskins advanced to the Super Bowl without allowing a touchdown in either their 16-3 playoff win over the
Green Bay Packersor their 26-3 NFC Championship Game victory over the Cowboys.
uperin Super Bowl VI. Wrote Nick Buoniconti, "There was no way we were going to lose the Super Bowl; there was no way.". [Dave Hyde, "Still Perfect!," p248.]
Still, many favored the Redskins to win the game because of their group of "Over the Hill Gang" veterans, and because Miami had what some considered an easy schedule (only two Dolphin opponents, Kansas City and the
New York Giantsposted winning records, and both of those teams were 8-6) and had struggled in the playoffs.
Allen had a reputation for spying on opponents. A school overlooked the Rams facility that the NFL designated the Dolphins practice field, so the Dolphins found a more secure field at a local community college. Dolphins employees inspected the trees every day for spies. [Dave Hyde, "Still Perfect!," p239.]
Miami cornerback
Tim Foley, a future broadcaster who was injured and would not play in Super Bowl VII, was writing daily stories for a Miami newspaper and interviewed George Allen and Redskin players, provoking charges from Allen that Foley was actually spying for Shula.Shelby Strother, "The Perfect Season," "NFL Top 40". Viking, 1988. ISBN 0-670-82490-9]. [Dave Hyde, "Still Perfect!", p247.]
During practice the day before Super Bowl VII, the Dolphins' five foot seven, 150 pound kicker,
Garo Yepremian, relaxed by throwing 30-yard passes to David Shula, Don Shula's son. During the pre-game warmups, he consistently kicked low line drives and couldn't figure out why. [Dave Hyde, "Still Perfect!," p264.]
Television and entertainment
The game was broadcast in the
United Statesby NBCwith play-by-play announcer Curt Gowdyand color commentator Al DeRogatis.
This was the first Super Bowl to be televised live in the city in which it was being played. Despite unconditional blackout rules in the NFL that normally would have prohibited the live telecast from being shown locally, the NFL allowed the game to be telecast in the Los Angeles areaand the final one of Project Apollo. The show featured the crew of Apollo 17 and the Michigan Marching Band.
Later,
singer Andy Williamsaccompanied by the Little Angels of Chicago's Angels Church from Chicagoperformed the national anthem.
The halftime show, featuring
Woody Herman. This strategy proved successful. Washington's offensive line also had trouble handling Dolphins' defensive tackle/nose tackle Manny Fernandez, who was very quick. "He beat their center
Len Hausslike a drum," wrote Buoniconti. Miami's defenders had also drilled in maintaining precise pursuit angles on sweeps to prevent the cut-back running that Duane Thomashad used to destroy the Dolphins in Super Bowl VI.
Washington's priority on defense was to disrupt Miami's ball-control offense by stopping Larry Csonka. [Shelby Strother, "Playing to Perfection," "The Super Bowl: Celebrating a Quarter-Century of America's Greatest Game". Simon and Schuster, 1990 ISBN 0-671-72798-2] They also intended to shut down Paul Warfield by double-covering him. [Dave Hyde, "Still Perfect!," p256.]
As they had in Super Bowl VI, Miami won the toss and elected to receive. Most of the first quarter was a defensive battle with each team punting on their first two possessions. Then Miami got the ball on their own 37-yard line with 2:55 left in the first quarter. Running back
Jim Kiickstarted out the drive with two carries for eleven yards. Then quarterback Bob Griesecompleted an 18-yard pass to wide receiver Paul Warfieldto reach the Washington 34-yard line. After two more running plays, on third and four Griese threw a 28-yard touchdown pass to receiver Howard Twilley(his only catch of the game). Twilley fooled). [Dave Hyde, "Still Perfect!," p264.]
On the third play of the Redskins' ensuing drive, Miami safety Jake Scott intercepted quarterback
Billy Kilmer's pass down the middle intended for Taylor and returned it eight yards to the Washington 47-yard line. However a 15-yard illegal man downfield penalty on left guard Bob Kuechenbergnullified a 20-yard pass completion to tight end Marv Flemingon the first play after the turnover, and the Dolphins were forced to punt after three more plays.
After the Redskins were forced to punt again, Miami reached the 47-yard line with a 13-yard run by
Larry Csonkaand an 8-yard run by Kiick. But on the next play, Griese's 47-yard touchdown pass to Warfield was nullified by an illegal procedure penalty on receiver Marlin Briscoe(Briscoe's first, and only, play of the game). Then on third down, Redskins defensive tackle Diron Talbertsackedintercepted Kilmer's pass to Brown Charlie Taylor, who was open at the 2-yard line, but Taylor stumbled right before the ball arrived and the ball glanced off his fingertips. After a second-down screen pass to Harraway fell incomplete, left. Later in the period, the Dolphins drove 78 yards to Washington's 5-yard line, featuring a 49-yard run by Csonka, the second-longest run in Super Bowl history at the time. However, Redskins defensive back Brig Owensintercepted.
After Miami moved the ball to the 34-yard line on their ensuing drive, kicker
Garo Yepremianattempted, [Larry Csonka and Jim Kiick, with Dave Anderson, "Always on the Run," p.218. Random House, 1973 OCLC|632348] who blocked on field goals. Unfortunately for Miami, the ball slipped out of his hands and went straight up in the air. Yepremian attempted to bat the ball out of bounds, [Dave Hyde, "Still Perfect!," p264.]
Bill Stanfill's 9-yard sack on fourth down as time expired ended the game.
Griese finished the game having completed 8 out of 11 pass completions then Griese, but finished the game with just 16 more total passing yards and was intercepted three times. Said Kilmer, "I wasn't sharp at all. Good as their defense is, I still should have thrown better." Washington's Larry Brown rushed for 72 yards on 22 carries and also had five receptions for 26 yards. Redskins receiver
Roy Jeffersonwas the top receiver of the game, with five catches for 50 yards. Washington amassed almost as many total yards (228) as Miami (253), and actually more first downs (16 to Miami's 12).
coring summary
*MIA - TD: Howard Twilley 28 yard pass from Bob Griese (Garo Yepremian kick) 7-0 MIA
*MIA - TD: Jim Kiick 1 yard run (Garo Yepremian kick) 14-0 MIA
*WAS - TD: Mike Bass 49 yard fumble return (Curt Knight kick) 14-7 MIA
uper Bowl postgame news and notes
As Shula was being carried off the field after the end of the game, a kid who shook his hand stripped off his watch. Shula got down, chased after the kid, and retrieved his watch. [Dave Hyde, "Still Perfect!" p.268.]
Manny Fernandez was a strong contender for MVP. Wrote Nick Buoniconti, ." Larry Csonka also said he thought Fernandez should have been the MVP. [Larry Csonka and Jim Kiick, with Dave Anderson, "Always on the Run," p.220.] The MVP was selected by
Dick Schaap, the editor of SPORT magazine. Schaap admitted later that he had been out late the previous night, struggled to watch the defense-dominated game, and was not aware that Fernandez had 17 tackles. [Dave Hyde, "Still Perfect!," pp.260-261.]." [Dave Hyde, "Still Perfect!," p264.] Yepremian was so traumatized by his botched field goal. [Dave Hyde, "Still Perfect!" p.283.] Nevertheless, "Garo's Gaffe" made Yepremian famous and led to a lucrative windfall of speaking engagements and endorsements. "It's been a blessing," says Yepremian. [Dave Hyde, "Still Perfect!" p.268.]
tarting lineups
Officials
*Referee: Tom Bell
*Umpire:
Lou Palazzi
*Head Linesman: Tony Veteri
*Line Judge: Bruce Alford
*Field Judge: Tony Skover
*Back Judge: Tom Kelleher
"Note: A seven-official system was not used until 1978"
Weather conditions
*84 degrees, sunny, hazy
ee also
*
1972 NFL season
*
NFL playoffs, 1972-73
*The "
Perfect Season"
References
* [ Official NFL Encyclopedia Pro Football | publisher=NAL Books | id=ISBN 0-453-00431-8
VII — 1 2 3 4 Gesamt Miami Dolphins 7 … Deutsch Wikipedia
Super Bowl VII — score de la rencontre 1 2 3 4 Total Dolphins 7 7 0 0 14 Redskins 0 0 0 7 7 Données clés Date … Wikipédia en Français
Super Bowl XLII — 1 2 3 4 Gesamt New York Giants 3 … Deutsch Wikipedia
Super Bowl XXXV — score de la rencontre 1 2 3 4 Total Ravens 7 … Wikipédia en Français
Super Bowl VI — Partido de Campeonato de la NFL Ubicación … Wikipedia Español
Super-Bowl — Sunday — VI — Infobox SuperBowl sb name = VI visitor = Dallas Cowboys home = Miami Dolphins visitor abbr = DAL home abbr = MIA visitor conf = NFC home conf = AFC visitor total = 24 home total = 3 visitor qtr1 = 3 visitor qtr2 = 7 visitor qtr3 = 7 visitor qtr4 … Wikipedia
Super Bowl XVII — Infobox SuperBowl sb name = XVII visitor = Miami Dolphins home = Washington Redskins visitor abbr = MIA home abbr = WAS visitor conf = AFC home conf = NFC visitor total = 17 home total = 27 visitor qtr1 = 7 visitor qtr2 = 10 visitor qtr3 = | https://en.academic.ru/dic.nsf/enwiki/17922 | CC-MAIN-2020-05 | refinedweb | 2,258 | 58.21 |
Looking for maintainers!!
react-chartjs-2
React wrapper for Chart.js 2. Open for PRs and contributions!
UPDATE to 2.x
As of 2.x we have made chart.js a peer dependency for greater flexibility. Please add chart.js as a dependency to your project to use 2.x. Currently, 2.5.x is the recommended version of chart.js to use.
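In practice this means chart.js must appear in your own dependencies alongside this package. A sketch of the relevant package.json fragment (the version ranges here are illustrative, not prescriptive):

```json
{
  "dependencies": {
    "chart.js": "^2.5.0",
    "react-chartjs-2": "^2.8.0"
  }
}
```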
Demo & Examples
Live demo: jerairrest.github.io/react-chartjs-2
To build the examples locally, run:
npm install
npm start
Then open
localhost:8000 in a browser.
Demo & Examples via React Storybook
We have to build the package, then you can run storybook.
npm run build
npm run storybook
Then open
localhost:6006 in a browser.
Installation via NPM
npm install --save react-chartjs-2 chart.js
Usage
Check example/src/components/* for usage.
import { Doughnut } from 'react-chartjs-2';

<Doughnut data={...} />
Properties
- data: (PropTypes.object | PropTypes.func).isRequired,
- width: PropTypes.number,
- height: PropTypes.number,
- id: PropTypes.string,
- legend: PropTypes.object,
- options: PropTypes.object,
- redraw: PropTypes.bool,
- getDatasetAtEvent: PropTypes.func,
- getElementAtEvent: PropTypes.func,
- getElementsAtEvent: PropTypes.func
- onElementsClick: PropTypes.func, // alias for getElementsAtEvent (backward compatibility)
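For reference, the `data` prop follows the standard Chart.js 2 dataset format: an array of axis labels plus one or more datasets. A minimal sketch (the labels, values, and color below are made up for illustration):

```javascript
// Minimal `data` prop in the standard Chart.js 2 format:
// a list of axis labels plus one or more datasets.
const data = {
  labels: ['Jan', 'Feb', 'Mar', 'Apr'],
  datasets: [
    {
      label: 'Sales',           // shown in the legend and tooltips
      data: [12, 19, 3, 5],     // one value per label
      backgroundColor: 'rgba(75, 192, 192, 0.4)',
    },
  ],
};

console.log(data.datasets[0].data.length); // → 4
```

The same object can be passed to any of the chart components (`Line`, `Bar`, `Doughnut`, …); dataset-level options beyond `label`, `data`, and `backgroundColor` are chart-type specific and documented by Chart.js itself.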
Custom size
In order for Chart.js to obey the custom size you need to set maintainAspectRatio to false, for example:

<Bar
  data={data}
  width={100}
  height={50}
  options={{ maintainAspectRatio: false }}
/>
Chart.js instance
Chart.js instance can be accessed by placing a ref to the element as:
chartReference = {};

componentDidMount() {
  console.log(this.chartReference); // returns a Chart.js instance reference
}

render() {
  return (<Doughnut ref={(reference) => this.chartReference = reference} data={data} />)
}
Getting context for data generation
The canvas node, and hence the context that can be used to create a CanvasGradient background, is passed as an argument to data if it is given as a function:
This approach is useful when you want to keep your components pure.
render() {
  const data = (canvas) => {
    const ctx = canvas.getContext("2d")
    const gradient = ctx.createLinearGradient(0, 0, 100, 0);
    ...
    return {
      ...
      backgroundColor: gradient
      ...
    }
  }

  return (<Line data={data} />)
}
Chart.js Defaults
Chart.js defaults can be set by importing the defaults object:

import { defaults } from 'react-chartjs-2';

// Disable animating charts by default.
defaults.global.animation = false;
If you want to bulk set properties, try using the lodash.merge function. This function will do a deep recursive merge preserving previously set values that you don't want to update.
import { defaults } from 'react-chartjs-2';
import merge from 'lodash.merge';
// or
// import { merge } from 'lodash';

merge(defaults, {
  global: {
    animation: false,
    line: {
      borderColor: '#F85F73',
    },
  },
});
Chart.js object
You can access the internal Chart.js object to register plugins or extend charts like this:
import { Chart } from 'react-chartjs-2';

componentWillMount() {
  Chart.pluginService.register({
    afterDraw: function (chart, easing) {
      // Plugin code.
    }
  });
}
Scatter Charts
If you're using Chart.js 2.6 and below, add the showLines: false property to your chart options.

onElementsClick && getElementsAtEvent (function)

{
  onElementsClick: (elems) => {},
  getElementsAtEvent: (elems) => {},
  // `elems` is an array of chartElements
}
getElementAtEvent (function)
Calling getElementAtEvent(event) on your Chart instance, passing an event (or jQuery event) as the argument, will return the single element at the event position. If there are multiple items within range, only the first is returned.

{
  getElementAtEvent: (elems) => {},
  // => returns the first element at the event point.
}
getDatasetAtEvent (function)
Looks for the element under the event point, then returns all elements from that dataset. This is used internally for 'dataset' mode highlighting.

{
  getDatasetAtEvent: (dataset) => {}
  // `dataset` is an array of chartElements
}
Working with Multiple Datasets
You will find that any event which causes the chart to re-render, such as hover tooltips, etc., will cause the first dataset to be copied over to other datasets, causing your lines and bars to merge together. This is because, to track changes in the dataset series, the library needs a key to be specified - if none is found, it can't tell the difference between the datasets while updating. To get around this issue, you can take these two approaches:

- Add a label property on each dataset. By default, this library uses the label property as the key to distinguish datasets.
- Specify a different property to be used as a key by passing a datasetKeyProvider prop to your chart component, which would return a unique string value for each dataset.
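As a minimal sketch of the second approach: a datasetKeyProvider is just a function that receives one dataset object and returns a unique, stable string for it. The id field used here is an invented example property, not something react-chartjs-2 requires.

```javascript
// A datasetKeyProvider receives a dataset object and must return a
// unique, stable string. Here we assume each dataset carries a custom
// `id` field, so datasets sharing the same `label` no longer collide.
function datasetKeyProvider(dataset) {
  return dataset.id;
}

// Two datasets that would collide under the default behavior,
// since both use the same label.
const datasets = [
  { id: 'sales-2018', label: 'Sales', data: [1, 2, 3] },
  { id: 'sales-2019', label: 'Sales', data: [4, 5, 6] },
];

// The derived keys are unique even though the labels are not.
console.log(datasets.map((d) => datasetKeyProvider(d))); // ['sales-2018', 'sales-2019']
```

In JSX this would then be passed as something like <Line data={data} datasetKeyProvider={datasetKeyProvider} />.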
Development (src, lib and the build process)

NOTE: The source code for the component is in src. A transpiled CommonJS version (generated with Babel) is available in lib for use with node.js, browserify and webpack. A UMD bundle is also built to dist, which can be included without the need for any build system.
To build, watch and serve the examples (which will also watch the component source), run npm start. If you just want to watch changes to src and rebuild lib, run npm run watch (this is useful if you are working with npm link).
License
MIT Licensed. Copyright (c) 2017 Jeremy Ayerst
When creating a new Schema in Tridion there is an entry field called 'Root Element Name'. The default is content, but I am not sure what this does.
This is the name of the first XML element in your content.
If a component is represented like this:
<Content>
  <Name>Hello</Name>
  <Description>I am a component</Description>
</Content>
Then Content is the Root Element name. If you would modify your schema to have a Root Element Name of Article, then your component's XML would be:
<Article>
  <Name>Hello</Name>
  <Description>I am a component</Description>
</Article>
PS - for simplification, I am omitting XML namespaces from the examples.
EDIT
Some other answers mention it - and it is correct - so I'm adding it here for future reference.
This is an important setting to change on Embedded Schemas, if you ever need to use 2 different embedded schemas as part of the same "main" schema, they must have different Root Element Names or your XML will not be valid:
<Article>
  <Name>Hello</Name>
  <Description>I am a component</Description>
  <Content>
    <Field>And I am an embedded Field in an embedded schema</Field>
  </Content>
  <Content>
    <Description>And I am an embedded Field in another embedded schema</Description>
  </Content>
</Article>
As the example above shows, if both embedded schemas had a Root Element Name of Content, then you'd have a hard time figuring out which is which, especially if one of the embedded schemas' fields allowed multiple values.
When adding an Embedded Schema Field, Tridion will validate that all Embedded Schemas have a different Root Element Name, and if that's not the case then it won't let you save it - you'll have to go find that schema and modify the root element name - meaning you may now have content that is out of sync with its schema (if it was used already).
In short: always rename the Root Element Name of your embedded schemas. Rename the Root Element Name of your Content schemas if it makes sense to you. Do not rename it after you already have content based on that schema, that's asking for trouble.
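As an illustration of that advice: if the two embedded schemas from the earlier example were given distinct Root Element Names, the component XML becomes unambiguous. The names Intro and Body below are invented for this sketch - any unique names would do (namespaces again omitted for simplicity).

```xml
<Article>
  <Name>Hello</Name>
  <Description>I am a component</Description>
  <Intro>
    <Field>And I am an embedded Field in an embedded schema</Field>
  </Intro>
  <Body>
    <Description>And I am an embedded Field in another embedded schema</Description>
  </Body>
</Article>
```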
In our implementation, we keep the combination of root element name and namespace of every schema unique. These two elements are more technical and not available to the content editors.
If the template code needs to check for a specific schema, this combination can be validated instead of validating the schema name. Schema name may have to be changed if content editors recommend a better name.
The answer has already been given: "It is the name of the first node in your XML." I do want to stress that in the case of embeddable Schemas, the XSD gives an error when you have two sorts of embeddable schemas with the same root element name.

So be sure to keep it unique. The root element name can be changed, but you want to prevent that.
I would like to add that controlling the 'Root element name' and namespaces is extremely important, especially if you use XSLT as a templating solution.
The names we pick matter most for embeddable schema, where we'll get a namespace conflict if using embeddable schema with the same root element name. A good practice would be to give unique names for at least embeddable schema. – Alvin Reyes

...and even a better practice would be to give unique names to all Root Element Names in all Schemas that allow you to specify one. – Mihai Cădariu
Generation and processing of per-vertex and per-face colors according to various strategy. More...
#include <vcg/complex/algorithms/update/color.h>
Generation and processing of per-vertex and per-face colors according to various strategy.
This class is used to compute per face or per vertex color with respect to a number of algorithms. There is a wide range of algorithms for processing vertex color in a photoshop-like mode (changing for example contrast, white balance, gamma) Basic Tools for mapping quality into a color according to standard color ramps are here.
Definition at line 52 of file color.h.
Color the vertexes of the mesh that are on the border.
It uses the information in the Vertex flags, and not necessarily any topology. So it just require that you have correctly computed the flags; one way could be the following one:
Definition at line 371 of file color.h.
Apply Brightness and Contrast filter to the mesh, with the given contrast factor and brightness amount.
Performs contrast and brightness operations on color, i.e NewValue = (OldValue - 128) * contrast + 128 + amount The result is clamped just one time after all computations; this get a more accurate result.
The formula used here is the one of GIMP.
Definition at line 624 of file color.h.
Desaturates the mesh according the a chosen desaturation method.
There are three possibilities
M_LIGHTNESSwhere lightness = 0.5*(Max(R,G,B)+Min(R,G,B))
M_LUMINOSITYwhere luminosity = 0.21*R+0.71*G+0.7*B
M_AVERAGEPlain Average
Definition at line 830 of file color.h.
Adjusts color levels of the mesh.
Adjusts color levels of the mesh. Filter can be applied to all RGB channels or to each channel separately. in_min, gamma and in_max are respectively the black point, the gray point and the white point. out_min and out_max are the output level for black and white respectively.
Definition at line 734 of file color.h. | http://vcglib.net/classvcg_1_1tri_1_1UpdateColor.html | CC-MAIN-2020-29 | refinedweb | 322 | 50.23 |
I learned a new word a few weeks ago: exegesis (waves in thanks to EW); I have a respectable vocabulary, so not something that happens every day. It dovetails with something that I've been meaning to write about for a while though...follow your nose. You can think of exegesis as an extreme form of following your nose, one that borders on religious fervor and potentially gets hung up on precision in prose that might or might not actually exist.
Follow your nose, in standards circles like the W3C, is a term of art that answers the question: "how do I know X is true?" Since "you indirectly commit to specs when you go online" (Tim Berners-Lee, 2002), it's a polite way of saying "read the specifications in the references sections"...recursively . Don't let the people who refer to RFCs (specs in the IETF) by number scare you, either. Everyone there started knowing none of them at some point.
Back in October 2012 (yes, "been meaning to" for a while now, ahem), I was working on the OSLC Performance Monitoring spec that ITM 6.3 eventually implemented. One of the developers asked: Where is the dbpedia Percentage class defined in human readable format? I had suggested re-using terms she was unfamiliar with, a common enough occurrence given that vocabulary re-use is a Best Practice in Linked Data. When I responded, I laid out the process step by step because you can do this, in general, for any Linked Data. It's particularly easy for the case of linked data because most if not all URIs are HTTP URIs that serve a useful representation.
Six months later, it was I'm wondering if we could use the CS property "hostid" to store the unique system ID? Do you think that is a valid approach or should we better define a custom property?, in the context of a services engagement. I also worked on the OSLC Reconciliation spec, so it went like this:
In this specific case, since the identifier was assigned by the client, we decided to define a new term in the client's namespace, to avoid ambiguity. Topic for another day.
FYN turns into exegesis when you start hanging vastly different interpretations on a turn of phrase that's not visibly different than the surrounding text. As a spec editor and reader, I have a gift (curse?) for seeing how a single set of words can be interpreted several ways. For example, reading ought to as a potential requirement for compliance.
I started observing a cultural distinction between IETF and W3C specs recently, while working on LDP. The IETF specs I was spending a fair amount of time referring to were at times maddeningly vague on issues like extension; if you look at the HTTP Accept syntax, it permits extension parameters (A Good Thing, generally speaking). Ditto media type registrations' generic syntax. When looking at application/json's registration however, which defines no parameters, are other specs allowed to define them? I could not find a definitive answer, even FYNing. In W3C specs I observed many cases where "remaining silent" was eschewed, in favor of an explicit MAY clause ("other specs MAY define blah", etc.). When I contacted some RFC authors to clarify the intent, (some) others were aghast that I should even consider their answers relevant - for them, The Written Word was all that mattered, even if TWW was obviously incomplete.
In some cases, I found myself asking questions like "RFC blah says this new header is defined for successful requests. I could really use that behavior on a redirect; is that considered a 'successful' request?" Coming from an IETF-simulated mindset, I'd guess they'd answer "no, it's not limiting; success doesn't only mean 2xx status codes"; I think my W3C friends would be more split on that.
I've also seen (and perpetrated in some cases) absolutely unreadable linguistic convolutions in normative (MUST, SHOULD, etc) text in order to specify only what's needed, and no more. When TimBL talked to LDP about his comments on the LDP Last Call spec a few months ago, this was on my mind. He said (paraphrased from memory) I prefer a writing style that just tells the client/server what to do. Which I think is meaningfully different from the attitude If it's convoluted to make it Just Right, better that than leaving anything open to interpretation. So I find I'm drifting somewhat back toward what I think of, for now, as the "looser IETF style"..
Web Architecture has a few things to say about URIs, which are used to link to things:
There's a lot more, of course - worth a read or 3 if you actually program on the Web.
Practical example: Jazz for Service Management Registry Services. Products in the Cloud and Smarter Infrastructure Bob's your uncle.
If you look at the PR pubs, the well-known default value is, and the next sentence says: Allow the customer to change the default value.
They think "change the default" means being able to change oslc-registry only... the hostname and port portion of the URI. They realize from experience that hostnames and port values are configured and often under someone else's control anyway, but surely the /oslc/pr/collection portion would never change - that's fixed in code!
Well, no. In a loosely coupled system you want the entire URL to be completely opaque to clients. In other words, only two parties should ever be looking at the contents of a URI: its owner (the software that serves it), and humans generic URI syntax, but that doesn't give you enough to achieve an end in itself.
Why? For one thing, it's only a subset of /oslc/pr/collection is actually managed by PR as a Web application; that first path segment, oslc , is a deployment choice. oslc might be the default, but admins of the Web container (WebSphere , in PR's case) can change that. The HTTP delegation of authority concept, which is really a social contract, does not stop at the hostname level..
When.
One of my day jobs is to contribute to the development of standards. At the moment, I'm co-editing a specification in the World Wide Web Consortium (known as W3C in many circles) ... the people who bring you HTML, XML, and other interesting kettles of fish.... called the Linked Data Platform.
If you already know what REST and Linked Data are (extra credit: OSLC too), you can jump right in (details on the Service Management Connect blog). What LDP focuses on that's new is the write operations; the common view of linked data to date is largely read-only.
If you need more background, there's an earlier entry with a reasonably concise version of that.
I'm a raving lunatic perfectionist when it comes to writing documents, so I won't even pretend that the Last Call draft is where I'd really like it to be. Suffice to say, before I left for my summer walkabout I made sure that the normative parts (stuff you can't just fix later) hung together. There's certainly room for more/better examples, and no doubt places where mere mortals' parsers will throw an exception - and that's exactly what the review process is there to help deal with, in addition to the normative content that people will depend on when it comes to asserting compliance.
Read with a skeptical eye.
A few weeks ago someone approached me with questions on something (I think it was OSLC Automation, but the pattern holds widely). Naturally (for me), I asked what he'd already read so I'd understand what he (in theory) already knew and what I'd have to explain in simpler terms. He started answer with: "Specifications are too dry to read." Now I realize not everyone's a spec reader, or even learns best through reading anything. Some people are kinesthetic learners, some verbal, some visual. C'est la vie. But the comment stuck with me.
In my spare time, I occasionally carve out a time slice for pleasure reading; what I call pleasure reading scares many people I talk to (A Brief History of Time and so on). At some point since his comment I read the "Figure versus Ground" topic in Gödel, Escher, Bach.
Fast forward a bit, and I've been editing the W3C Linked Data Platform specification as we get ready to issue a Last Call working draft (see the Jazz for Service Management blog for why I think LDP is a Good Thing for the Web and for open standards). As I was drafting some sections, my brain kept coming back to his comment like a song you can't get out of your head.
A bit of background: Escher is famous for his drawings; one set of them pokes at figure vs ground (aka positive vs negative space) directly, whereas others use it to construct higher order effects (optical illusions/contradictions). He's not the only one to make use of figure vs ground though ... most people are probably familiar with the Rubin vase, which demonstrates the concept nicely: is it one vase or two faces? It depends how you look at it.
The (ha! like it's just one!) problem in writing specifications is you're simultaneously writing for different audiences. On one hand, you're writing for implementers (servers, in Web specs); they want to know what behaviors they have to code in each single interaction, what they can skip, and they want as little else (chaff, distraction, etc.) as possible. On the other hand, you're writing for adopters/users (Web clients), who are more interested in stringing together a sequence of interactions to accomplish some purpose of interest to them; they want examples, background, informative text that shows them what some of those useful interaction sequences look like. Satisfy the adopters/users, and you've got a spec big enough to scare off some implementers without even reading it - too big and scary looking (parceling it up into multiple documents helps to a degree). Satisfy the implementers, and no one knows how to use it. Argh, I hate it when that happens. And yet, I could argue it's just a symptom.
I think the underlying problem is that we have to use prose to describe a picture (the old "a picture is worth a thousand words" maxim, demonstrated ad nauseam). What specs attempt to do is to divide the universe into two sets (one or more sets of two, more often): they define some conformance or compliance criteria, and they're trying to divide the universe into "compliant Xs" and "non-compliant Xs" ... in other words, they're trying to outline the vase - with words instead of a visual line. Imagine trying to describe the Rubin vase above using only words, no picture. You can try short-cuts like "take the starting rectangle, and subtract a strip at top and bottom whose height is xyz% of the total height." They're not very satisfying; describing what something is by stating what it isn't is cognitively harder. When it comes to that curve under the lip, good luck using prose (without reference to an image by analogy) for that.
What compounds the problem is the figure versus ground issue. Absent any way to draw the outline of that vase with words, we end up with not only positive and negative space, but "line space" in between. By the way, in open specifications we're not describing just one vase either, we're describing a class ("compliant vases"). Ow my head.
Back to the emergence of line space... the visual analog of the problem is to draw the outline of [the class of compliant] vase[s] using a 3-inch (8 cm)-wide paintbrush instead of a fine-line pen. The line itself is big enough that it now constitutes another space unto itself; not all of which falls within [compliant] vase[s], so you add more words (dab the line space while holding the brush sideways). It's like learning integration (the mathematical kind: using progressively thinner rectangles to measure/estimate the area under a curve) all over again.
I always liked calculus; maybe that's why I'm still editing specs instead of giving up | https://www.ibm.com/developerworks/community/blogs/looselycoupled/?lang=en | CC-MAIN-2014-10 | refinedweb | 2,083 | 59.64 |
Hi, I have recently started to read some C++ books, and i came across this in the "C++ for dummies"(obviously apt for myself) the following example on Mixed Mode Expressions.
So i wrote a little output to test, and i cannot seem to get anything other than 1 as my output. Is this correct?So i wrote a little output to test, and i cannot seem to get anything other than 1 as my output. Is this correct?Code:
// in the following expression the value of nValue1
// is converted into a double before performing the
// assignment
int nValue1 = 1;
nValue1 + 1.0;
I would of expected from the comments from the book excerpt above suggest that the integer is converted to a float either as, 1.0 or perhaps if i squint hard it should of returned an integer of 2.
So can anyone explain what I've interpreted or coded incorrect and shed some light on the subject.?
Code:
#include <cstdio>
#include <cstdlib>
#include <iostream>
using namespace std;
int main (int nNumberofArgs, char* pszArgs [])
{
int nValue = 1;
nValue + 1.0;
cout << nValue << endl;
system ("PAUSE");
return 0; | http://cboard.cprogramming.com/cplusplus-programming/112476-cplusplus-newbie-question-printable-thread.html | CC-MAIN-2015-06 | refinedweb | 189 | 63.19 |
I am to create a program that works as follows:
1. Write a java class (Employee.java) that encapsulates an employees first name, last name, SSN. Implement the apporpriate get and set methods along...
I am to create a program that works as follows:
1. Write a java class (Employee.java) that encapsulates an employees first name, last name, SSN. Implement the apporpriate get and set methods along...
I know how to use classes and objects but I am not good with the arrays. I am not sure how to assign the suit for the first dimension because I also had to create a single dimensional array for the...
My two-dimensional array is:
String[][] cards
The first dimension is to represent the suit and the second dimension will represent the type of card(from ace to King)
I want the constructor to...
Pass is what I believe is being added to the ArrayList, which is defined in the default constructor. So it is not in the same scope but I do not know how to bring them out of that ArrayList to read...
I am trying to take a text file and read it into the ArrayList I created. It should store the passengers name and the passenger's service class.
The text file has the following format:
James ...
For the following problem, I have created the passenger class and have created the ArrayList in the Train Class. I do not know how to read the passengers from a text file into the ArrayList. And I...
This is what I compiled
public double[][] get_less_tax()
{
double leastTax = rates[0][0];
double[][] array1 = new double[rates.length][];
for(int x = 0; x <...
I want to use a for loop to check if the number stored in each part of the array is less than .001
for(int x = 0; x < rates.length; x++)
{
for (int i = 0; i <...
I have a program that creates a double array for 50 states storing the last 10 tax rates for each year. The tax rate is less than .06 in all of them and is randomly configured to make the program run...
I added the print out statement after the while statement and it did print what I input into choice onto the display. It just won't run the program again.
My code all works fine and I have a user defined class called Circle. When I type "y" or "Y" it will not re run the program.
import java.util.Scanner;
public class CircleTest
{
public...
Thank you! I finally got it! You can tell I'm definitely new to this
I added an array to my code and thought I did it correct but I get an error saying " cannot find symbol -- myIntArray"
import java.util.Scanner;
public class EnglishCalculator
{
...
I have the whole code better written now and the only thing I cannot figure out is how to change the num1 and num2 to strings that actually print out the digits in words.
Output should be "nine"...
import java.util.*;
public class StudentInformation
{
public static void main (String [] args)
{
Scanner scan = new Scanner (System.in);
...
import java.util.*;
public class StudentInformation
{
public static void main (String [] args)
{
Scanner scan = new Scanner (System.in);
...
I am trying to figure out how to convert digits from 0 to 9 to words in my program.
What is the proper code to do this?
I have found examples online but none of them are working with my code. ...
so completely take out the string = x; and delete the switch (x) but replace it with just int?
public class StudentInformation
{
public static void main (String [] args)
{
String x;
for (int n = 12; n >= 1; n--)
{
System.out.println ("On the "...
So the switch statement without a break will print 3 when i call it and then it will also print 2 and 1 after it or do I need to print them backwards so 12 is the first case and 1 is the last case...
i did look at that but I don't understand how I can print the first line, then the second line but changing it to "2nd day" and reprinting the first line.
This is the problem I have to do for a lab. I am not asking anyone to do the work for me but if they could just start it because I have no idea and my textbook doesn't help me very much with this...
import java.io.File;
import java.io.IOException;
import java.util.Scanner;
import java.text.DecimalFormat;
public class EchoFileData
{
public static void main(String [] args) throws...
Thank You so much! I finally figured it out!
I'm not sure how to do that. It is a download file that I downloaded for an assignment. That is where the text file is located. I tried moving it to my computer, but I'm not exactly sure where to... | http://www.javaprogrammingforums.com/search.php?s=607a3700b476148cfdb4a16f97b3bafb&searchid=1627325 | CC-MAIN-2015-27 | refinedweb | 823 | 74.59 |
Wolfgang Rohdewald wrote: >. > OK, just as long as you updated the firmware from within a few weeks ago then you should have the new one. The version inside the "new" firmware did not change from 26d (so it may be a little confusing from just looking at it). > >>The recommended change to transfer.c for the new firmware is attached. > > > The net result seems to be the same for me: use 288k instead of 576. > I don't have the 4MB extension. So this cannot help. > If you notice that patch I sent in the previous mail, it undefines "FW_NEEDS_BUFFER_RESERVE_FOR_AC3", so, the line you are changing would not even be used. It seems that the "new" version of 26d does not need preset buffer reserves. --SNIP-- #ifdef FW_NEEDS_BUFFER_RESERVE_FOR_AC3 bool GotBufferReserve = false; - int RequiredBufferReserve = KILOBYTE(DvbCardWith4MBofSDRAM ? 288 : 576); + int RequiredBufferReserve = KILOBYTE(DvbCardWith4MBofSDRAM ? 576 : 288); #endif --SNIP-- > But I could get rid of the error messages by inserting an msleep(30) > somewhere in the driver (2.6.12-rc5 unmodified). They still happen but > much more seldom. See my post on the linux-dvb mailing list. > 2.6.11.11 with latest CVS of dvb-kernel would be my recommendation. Regards, | https://www.linuxtv.org/pipermail/vdr/2005-June/002806.html | CC-MAIN-2018-26 | refinedweb | 199 | 68.16 |
Skip> The main stumbling block was that pesky "from module import *" Skip> statement. It could push an unknown quantity of stuff onto the Skip> stack Greg> Are you *sure* about that? I'm pretty certain it can't be true, Greg> since the compiler has to know at all times how much is on the Greg> stack, so it can decide how much stack space is needed. Thomas> I think Skip meant it does an arbitrary number of Thomas> load-onto-stack Thomas> store-into-namespace Thomas> operations. Skip, you'll be glad to know that's no longer true Thomas> :) Since 2.0 (or when was it that we introduced 'import as' ?) Thomas> import-* is not a special case of 'IMPORT_FROM', but rather a Thomas> separate opcode that doesn't touch the stack. I'm not sure what I meant any more. (They say eye witness testimony in a courtroom is quite unreliable.) I'm pretty sure Greg's analysis is at least partly correct (in that that couldn't have been why I failed to implement a converter for IMPORT_FROM). I went back and looked briefly at my old code last night (which was broken when I put it aside - don't *ever* do that!) and could find nothing that would indicate why I didn't like "from-import-*". The instruction set converter would refuse to try converting any code that contained these opcdes: {LOAD,STORE,DELETE}_NAME, SETUP_{FINALLY,EXCEPT}, or IMPORT_FROM. At this point in time I'm not sure which of those six opcodes were just ones I hadn't gotten around to writing converters for and which were showstoppers. wish-i-had-more-time-for-this-ly y'rs, Skip | https://mail.python.org/pipermail/python-dev/2001-July/016482.html | CC-MAIN-2022-27 | refinedweb | 286 | 71.24 |
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video.
url Tag10:35 with Kenneth Love
We need to use links in our web app, but what if our URLs change? Django gives us a handy way around that.
{% url 'path.to.view' %} to link to a view who's URL doesn't have a name.
Note: This has been removed in Django 1.10 and beyond. If you want to use this feature, be sure to install Django 1.9 or below. You can do that with
pip install django<1.10. Better yet, name all of your URLs as shown below.
url(r'pattern', views.view, name='list') to name an URL
{% url 'list' %} to link to a named URL
include('courses.urls', namespace='course') to namespace a group of URLs
{% url 'courses:list' %} to link to a named and namespaced URL
- 0:00
Since we control our URL's through a Python module, sometimes URL's change.
- 0:05
You should try to keep your URLs the same once your project has a public launch, but
- 0:09
while you're developing it, feel free to change them around to your heart's desire.
- 0:13
Also, if you're distributing a reusable app, you can't control the path you'll
- 0:16
give your URL's, so between custom URLs for other developers and you changing
- 0:21
URLs while you develop How do you know what URL to type into your anchor tags?
- 0:26
Jango handily gives us a template tag just for this.
- 0:29
Before we can use our URL template tag, we should give our URLs names.
- 0:35
Now we can use the path to view file and
- 0:38
the view function just like we would do in flask's Jenga two templates.
- 0:42
But this isn't as common of a practice and most of the time
- 0:46
Jenga developers name their URLs to make it easier to refer to them.
- 0:50
So let's add a URL tag for a non-named URL and
- 0:54
then see about adding names to all of our URLs.
- 0:57
So let's actually go to our
- 1:02
templates layout.html.
- 1:06
And, I want to have like a header bar that specifies all of our links to stuff,
- 1:15
so, let's do a nav, and then inside here we're going to do, a href =,
- 1:20
and then this is our URL tag, and we're going to say views.hello world.
- 1:29
And then we're gonna say Home, and that.
- 1:32
All right, so
- 1:35
this is just linking to the views module, and the hello_world thing.
- 1:42
So let's go, let's see if that works.
- 1:46
There we go, we've got this home thing here.
- 1:50
Let's actually put this inside of here.
- 2:00
There you go, that makes a bit more sense.
- 2:05
Save that.
- 2:09
Refresh that page.
- 2:10
There we go.
- 2:11
We've got a big Home button there.
- 2:13
It's a really big Home button.
- 2:16
Okay, so this is using it without having a URL name.
- 2:22
So, it works and it's not the worst thing ever.
- 2:25
We could do a similar thing.
- 2:27
Let's add another tag here.
- 2:31
And here we'll do
- 2:35
courses.views.courseList.
- 2:42
And we'll just make this, say, courses.
- 2:48
And Refresh.
- 2:50
Now we've got a link to Home.
- 2:52
And we've got a link to Courses which gives us all of our courses.
- 2:56
So that's cool and it works.
- 2:59
It's not always great when we have to use URL arguments though,
- 3:04
like the PK or the course PK, step PK kinda thing.
- 3:07
So yeah, it could be better and, also, what if your view has a really,
- 3:11
really long name, when the module does?
- 3:13
Then you have this really long URL tag,
- 3:15
and that might be hard to remember that whole path to it, so it's not great.
- 3:19
And, if you're distributing this,
- 3:21
you don't wanna require the developers who use your package to have to remember
- 3:25
whatever long, complicated path that you have set up.
- 3:29
Let's see about giving our URLs names so we can reference them more easily.
- 3:33
So in URLs.py inside of our courses app, we have our three URLs here.
- 3:42
Let's give each of them a name.
- 3:43
We do that by specifying name equals and then something.
- 3:47
So let's say name = list and, let's say name = step.
- 3:55
And, you know what? That's a little long so
- 3:56
let's put that on it's own line.
- 3:59
And, let's say name = detail.
- 4:03
Just some nice simple names.
- 4:05
Hopefully right now you're thinking those names are gonna conflict eventually.
- 4:09
You're right they would.
- 4:10
We would have a conflict with some other app somewhere.
- 4:13
But we're gonna take an extra step to prevent that here in a little bit.
- 4:17
For now though let's just set the rest of our URLs.
- 4:20
So let's come over here to this courseList.html.
- 4:24
And first of all let's add in our extends "layout.html",
- 4:32
block title Available courses, endblock
- 4:40
block content, endblock and
- 4:45
then inside here let's do div class = cards.
- 5:02
And then we're gonna have for card in cars and
- 5:07
let's do.
- 5:10
And then we're gonna do a header.
- 5:19
All right, div class
- 5:25
=card copy in that
- 5:30
div and in that div.
- 5:35
Cool, okay, so sorry about that little bit of HTML to add in.
- 5:39
And now we want to add our anchor tag.
- 5:43
And I think we should anchor this, the title.
- 5:47
So we'll say a href =, and we'll use our url tag, and then we're gonna say
- 5:53
detail and then our pk is the course.pk.
- 5:58
And then we're gonna come here
- 6:03
to the end, and put in our /a.
- 6:07
Okay.
- 6:10
So now, let's go see what this looks like.
- 6:19
Refresh that we need to go to our courses.
- 6:21
And look, there's each of our courses.
- 6:24
And we have a link at the top that goes to the course.
- 6:28
That's kinda cool, kinda nice, we can jump around.
- 6:33
We can do that in our detail template as well.
- 6:37
Let's go look at course detail, and
- 6:42
where we have this link to where we had the steps.
- 6:48
Let's add in a link to each step instead.
- 6:52
So we have our h3 and let's put in our
- 6:57
a href=url and we're gonna say step.
- 7:02
And the course_pk=step.course.pk and the step_pk=step.pk,
- 7:15
And we'll do step.title and
- 7:19
we'll come delete this step.title.
- 7:25
You know what, I'm gonna put this on a separate line so
- 7:27
it's just a little bit easier to see.
- 7:33
And that would be [INAUDIBLE].
- 7:36
There we go, cool, that's readable enough.
- 7:40
So we've got that and then, let's do one more.
- 7:43
Down here in step detail.
- 7:45
Where we have this course title.
- 7:47
Let's link back to the course.
- 7:50
So that you can hop back.
- 7:52
So, a href=url.
- 7:55
And, this would be detail, of course.
- 7:58
And the pk would be step.course.pk [SOUND] and
- 8:05
then we close our anchor tag.
- 8:09
Now let's try these out.
- 8:12
So if I go to Python Basics here are my two steps using the Shell and
- 8:16
what's the deal with strings?
- 8:18
And if I click Using the Shell, then I get brought to the Shell here and
- 8:23
I can click this to come back to all of those.
- 8:26
So that's pretty cool.
- 8:28
That's pretty good, pretty simple.
- 8:31
So one last thing that we want to do though is we want to namespace our URLs.
- 8:35
That way we don't get those conflicts that I brought up.
- 8:38
Surely, we're eventually gonna have another view that's called list, and
- 8:41
we don't wanna deal with that.
- 8:42
So, back over here in our site wide URLs,
- 8:51
Where we do this include, and we say courses.urls,
- 8:55
we're gonna add one new argument to this, which is namespace, and
- 9:00
we're gonna call it courses as the namespace.
- 9:04
So, now all of our URLs live inside this 'courses' name space.
- 9:08
We need to go adjust our URL tags one last time.
- 9:13
So, let's do these real quick.
- 9:16
In our layout, instead of that one,
- 9:20
we're gonna say courses :list.
- 9:26
Courses is the namespace, list, is the name of the view, or the name of the url.
- 9:32
Sorry, the name of the route.
- 9:34
So courses : list says go look in the courses namespace and find the list route.
- 9:41
All right.
- 9:42
Course detail, we want courses:step.
- 9:47
Step detail,
- 9:50
we want courses:detail.
- 9:55
And in courses list, course list, sorry, we want courses:detail.
- 10:02
That should be all of our views.
- 10:05
We go Home, we go to Courses,
- 10:07
we go to a course, and we can look at a course and then go back to the detail.
- 10:13
Great, so that's all of our wonderful URLs.
- 10:17
If you're thinking, oh, I'll never use those namespaces,
- 10:20
I'd just like to remind you about the Xeno Python and its final line.
- 10:23
Namespaces are one honking good idea.
- 10:25
Let's do more of those.
- 10:27
Okay one final bit to cover, and we'll have a really solid app
- 10:30
that we can submit as a prototype to a client or our boss.
- 10:33
Congratulations on getting so far. | https://teamtreehouse.com/library/url-tag | CC-MAIN-2018-47 | refinedweb | 1,874 | 92.53 |
C - Quick Guide
C -.
C - Environment Setup
Before you start doing programming using C programming language, you need the following two softwares available on your computer, (a) Text Editor and (b) The C", to turn into machine language so that your cpu can actually execute the program as per instructions given.
This C programming language compiler will be used to compile your source code into final executable program. I assume you have basic knowledge about a programming language compiler.
Most frequently used and free available compiler is GNU C/C++ compiler, otherwise you can have compilers either from HP or Solaris if you have respective Operating Systems.
Following section guides you on how to install GNU C/C++ compiler on various OS. I'm mentioning C/C++ together because GNU gcc compiler works for both C and C++ programming languages.
Installation on UNIX/Linux
If you are using Linux or Unix then check whether GCC is installed on your system by entering the following command from the command line:
$ gcc -v
If you have GNU compiler installed on your machine, then it should print a message something as follows: Cent OS flavour MinW.
C -.
C - Basic Syntax
You have seen a basic structure:
printf("Hello, World! \n");
The individual tokens are:
printf ( "Hello, World! \n" ) ;
Semicolons ;
In C program, the semicolon is a statement terminator. That is, each individual statement must be ended with a semicolon. It indicates the end of one logical
Keywords
The following list shows the reserved words in C. These reserved words may not be used as constant or variable.
C - Data Types
In the C programming language, data types refer to an extensive system used for declaring variables or functions of different types. The type of a variable determines how much space it occupies in storage and how the bit pattern stored is interpreted.
The types in expressions sizeof(type) yields the storage size of the object or type in bytes.
Floating-Point Types
Following table gives you details about standard floating-point types with storage sizes and value ranges and their precision:
The header file float.h defines macros that allow you to use these values and other details about the binary representation of real numbers in your programs.
The void Type
The void type specifies that no value is available. It is used in three kinds of situations:
The void type may not be understood to you at this point, so let us proceed and we will cover these concepts in the upcoming chapters.
C - Variables
A previous chapter, there will be the following basic variable types:
C programming language also allows to define various other types of variables, which we will cover in subsequent chapters like Enumeration, Pointer, Array, Structure, Union, etc. For this chapter, let us study only basic variable types.
Variable Definition in C program but it can be defined only once in a file, a function or a block of code.
Example
Try the following example, where a variable has been declared at the top, but it has been defined inside the main function:
:
value of c : 30 value of f : 23.333334;
C - Constants and Literals type const Keyword
You can use const prefix to declare constants with a specific type as follows:
const type variable = value;
C - Storage Classes
A storage class defines the scope (visibility) and life time of variables and/or functions within a C Program. These specifiers precede the type that they modify. There.
C - Operators
This tutorial will explain the arithmetic, relational, logical, bitwise, assignment and other operators one by one.
Arithmetic Operators
Following table shows all the arithmetic operators supported by C language. Assume variable A holds 10 and variable B holds 20 then:
Relational Operators
Following table shows all the relational operators supported by C language. Assume variable A holds 10 and variable B holds 20, then:
Logical Operators
Following table shows all the logical operators supported by C language. Assume variable A holds 1 and variable B holds 0, then:
Bitwise Operators
The Bitwise operators supported by C language are listed in the following table. Assume variable A holds 60 and variable B holds 13 then:
Assignment Operators
There are following assignment operators supported by C language:
Misc Operators ↦ sizeof & ternary
There are few other important operators including sizeof and ? : supported by C Language.
Operators Precedence in C
Here operators with the highest precedence appear at the top of the table, those with the lowest appear at the bottom. Within an expression, higher precedence operators will be evaluated first.<<
C programming language assumes any non-zero and non-null values as true and if it is either zero or null then it is assumed as false value.
C programming language provides following types of decision making statements. Click the following links to check their detail._1<<
C programming language provides following types of loop to handle looping requirements. Click the following links to check their detail.
Loop Control Statements:
Loop control statements change execution from its normal sequence. When execution leaves a scope, all automatic objects that were created in that scope are destroyed.
C supports the following control statements. Click the following links to check their detail.
C - Functions
A.
The C standard library provides numerous built-in functions that your program can call. For example, function strcat() to concatenate two strings, function memcpy() to copy one memory location to another location and many more functions.
A function is known with various names like a method or a sub-routine or a procedure, etc.
Defining a Function:
The general form of a function definition in C programming language is as follows:
return_type function_name( parameter list ) { body of the function }
A function definition in C programming language.
Example:
Following is the example of implementing a function the C programming language:
; }
I kept max() function along with main() function and compiled the source code. While running final executable, it would produce the following result:
Max value is : 200
C - Scope Rules
A.
Let us explain. Following is the example using local variables.
Global Variables.
Formal Parameters
Function parameters, formal parameters, are treated as local variables within that function and they will take preference over the global variables.
C - a variable array which is sufficient to hold up to 10 double numbers.
Initializing Arrays
You can initialize array in ie. last element because all arrays have 0 as the index of their first element which is also called base index. Following is the pictorial representaion result something as follows:
Address of var variable: bffd8b3c Address stored in ip variable: bffd8b3c Value of *ip variable: 20
C - Strings
The string in C programming language is actually a one-dimensional array of characters which is terminated by a null character ' greeting[6] = {'H', 'e', 'l', 'l', 'o', '\0'};
If you follow the rule of array initialization then you can write the above statement as follows:
char greeting[] = "Hello";
Following is the memory presentation of above-defined string in C/C++:
Actually, you do not place the null character at the end of a string constant. The C compiler automatically places the '\0' at the end of the string when it initializes the array. Let us try to print above mentioned string:
#include <stdio.h> int main () { char greeting[6] = {'H', 'e', 'l', 'l', 'o', '\0'}; printf("Greeting message: %s\n", greeting ); return 0; }
When the above code is compiled and executed, it produces result something as follows:
Greeting message: Hello
C supports a wide range of functions that manipulate null-terminated strings:
C - Structures
C arrays allow you to define type of variables that can hold several data items of the same kind but structure is another user defined data type available in { char title[50]; char author[50]; char subject[100];:
; }
C - Unions.
Defining a Union
To define a union, you must use the union statement in very similar was as you did while defining structure. The union statement defines a new data type, with more than one member for your program. The format of the union statement is as follows:
union [union tag] { member definition; member definition; ... member definition; } [one or more union variables];:
union Data { int i; float f; char str[20]; } data;.
Accessing Union Members
To access any member of a union, we use the member access operator (.) in similar way as you access structure members. The member access operator is coded as a period between the union variable name and the union member that we wish to access. You would use union keyword to define variables of union type.
C - Bit Fields
Suppose your C program contains a number of TRUE/FALSE variables grouped in a structure called status, as follows: upto 32 variables each one with a width of 1 bit , then also status structure will use 4 bytes, but as soon as you will have 33 variables then it will allocate next slot of the memory and it will start using 8 bytes.
C - typedef
The C programming language provides a keyword called typedef, which you can use to give a type a new name. Following is an example to define a term BYTE for one-byte numbers:
typedef unsigned char BYTE;
After this type definitions,.
C - Input & Output
When we are saying Input that means to feed some data into program. This can be given in the form of file or from command line. C programming language provides a set of built-in functions to read given input and feed it to the program as per requirement.
When we are saying Output that means to display some data on screen, printer or in any file. C programming language provides a set of built-in functions to output the data on the computer screen as well as you can save that data in text or binary files.
The Standard Files
C programming language treats all the devices as files. So devices such as the display are addressed in the same way as files and following three file are automatically opened when a program executes to provide access to the keyboard and screen.
The file points are the means to access the file for reading and writing purpose. This section will explain you how to read values from the screen and how to print the result on the screen.
The getchar() & putchar() functions
The int putchar(int c) function puts the passed character on the screen and returns the same character. This function puts only single character at a time. You can use this method in the loop in case you want to display more than one character on the screen.
The gets() & puts() functions
The char *gets(char *s) function reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF.
The int puts(const char *s) function writes the string s and a trailing newline to stdout.
The scanf() and printf() functions
The int scanf(const char *format, ...) function reads input from the standard input stream stdin and scans that input according to format provided.
The int printf(const char *format, ...) function writes output to the standard output stream stdout and produces output according to a format provided.
The format can be a simple constant string, but you can specify %s, %d, %c, %f etc to print or read strings, integer, character or float respectively. There are many other formatting options available which can be used based on requirements. For a complete detail you can refer to a man page for these function.
C - File I/O", "ab+", "a+b", "wb+", "w+b", "ab+", "a+b"
Closing a File
To close a file, use the fclose( ) function. The prototype of this function is: provide by C standard library to read and write a file character by character or in the form of a fixed length string. Let us see few of the in the next section.
Writing a File
Following is the simplest function to write individual characters to a stream:.: new line character. You can also use int fscanf(FILE *fp, const char *format, ...) function to read strings from a file but it stops reading after the first space character encounters.
Binary I/O Functions
There are following two functions which can be used for binary input and output:.
C - Preprocessors symbol (#). It must be the first nonblank character, and for readability, a preprocessor directive should begin in first column. Following section lists down all important preprocessor directives:
Predefined Macros
ANSI C defines a number of macros. Although each one is available for your use in programming, the predefined macros should not be directly modified.
C - Header Files
A header file is a file with extension .h which contains C function declarations and macro definitions and to be shared between several source files. There are two types of header files: the files that the programmer writes and the files that come with your compiler.
You request the use:
peforming actual addition operation.
Usual Arithmetic Conversion
The usual arithmetic conversions are implicitly performed to cast their values in a common type. Compiler first performs integer promotion, if operands still have different types then they are converted to the type that appears highest in the following hierarchy:
>>IMAGE
C -
C - Variable Arguments
Sometimes, usage:
first argument represents the total number of variable arguments being passed. Only ellipses will be used to pass variable number of arguments.
Average of 2, 3, 4, 5 = 3.500000 Average of 5, 10, 15 = 10.000000
C - Memory Management
The C programming language provides several functions for memory allocation and management. These functions can be found in the <stdlib.h> header file.
C - Command Line Arguments
It is possible to pass some values from the command line to your C programs when they are executed. These values are called command line arguments and many times they are important for your program specially a single argument, it produces the following result.
$./a.out testing The argument supplied is testing | http://www.tutorialspoint.com/cprogramming/c_quick_guide.htm | CC-MAIN-2013-48 | refinedweb | 2,334 | 60.75 |
IntroductionVideo of this articleStep 1:- Create your WCF service project.Step 2:- Make changes to the web.config filesStep 3:- Decorate your methods with ‘WebInvoke’ attribute.Step 4:- Run your program.
WCF Rest Services are nothing but simple WCF Services with added functionality of consuming WCF service in a Restful manner. For instance if you had a order web service that you built in WCF, instead of consuming them through heavy SOAP implementation and ASMX, you can use the WCF Rest functionality to consume the service by using simple HTTP verbs like post and get.
So rather than passing complicated SOAP message to the below link to get a order detail We can now use REST style to get order as shown in the below URL In this article we demonstrate the 4 important steps to enable your WCF service as a REST service.
In case you are lazy like me you can download the video of this article in MP4 format from.
The first step is to create your WCF service. So click on new website and select WCF service template.
The next step is to make changes in the Web.config files to ensure that its uses the HTTP verbs like post and get. So open the web.config file of your WCF project and change the binding to ‘webHttpBinding’ as shown in the below code snippet.
<endpoint address="" binding="webHttpBinding" contract="IService" behaviorConfiguration="WebBehavior1">
We also need to insert the ‘endpointBehaviors’ tag with ‘webHttp’ tag in the behavior tag as shown in the below code snippet.
<behaviors>
<endpointBehaviors>
<behavior name="WebBehavior1">
<webHttp/>
</behavior>
</endpointBehaviors>
</behaviors>
The next step is to decorate the function / method with ‘WebInvoke’ attribute as shown below. So decorate your function which you want to expose through REST using the ‘WebInvoke’ and specify the HTTP method which this function will accept, currently its ‘GET’.We have also specified what kind of URI template will be used with the function. Currently we have specified the format as ‘Getdata/{value}’. In other words we can pass data as ‘GetData/1’, GetData/2’ etc.
using System.ServiceModel.Web;
[OperationContract]
[WebInvoke(Method = "GET",ResponseFormat = WebMessageFormat.Xml,
BodyStyle = WebMessageBodyStyle.Bare,
UriTemplate = "GetData/{value}")]
string GetData(string value);
Now run your program with URL as ‘’ as shown in the below figure and it will display the data in the XML format as shown in the below figure.
Latest Articles
Latest Articles from Questpond
Login to post response | http://www.dotnetfunda.com/articles/show/779/simple-5-steps-to-expose-wcf-services-using-rest-style | CC-MAIN-2016-30 | refinedweb | 407 | 55.13 |
7.3 Distributed Heterogeneous graph training¶
DGL v0.6.0 provides an experimental support for distributed training on heterogeneous graphs. In DGL, a node or edge in a heterogeneous graph has a unique ID in its own node type or edge type. DGL identifies a node or edge with a tuple: node/edge type and type-wise ID. In distributed training, a node or edge can be identified by a homogeneous ID, in addition to the tuple of node/edge type and type-wise ID. The homogeneous ID is unique regardless of the node type and edge type. DGL arranges nodes and edges so that all nodes of the same type have contiguous homogeneous IDs.
Below is an example adjancency matrix of a heterogeneous graph showing the homogeneous ID assignment. Here, the graph has two types of nodes (T0 and T1 ), and four types of edges (R0, R1, R2, R3 ). There are a total of 400 nodes in the graph and each type has 200 nodes. Nodes of T0 have IDs in [0,200), while nodes of T1 have IDs in [200, 400). In this example, if we use a tuple to identify the nodes, nodes of T0 are identified as (T0, type-wise ID), where type-wise ID falls in [0, 200); nodes of T1 are identified as (T1, type-wise ID), where type-wise ID also falls in [0, 200).
7.3.1 Access distributed graph data¶
For distributed training,
DistGraph supports the heterogeneous graph API
in
DGLGraph. Below shows an example of getting node data of T0 on some nodes
by using type-wise node IDs. When accessing data in
DistGraph, a user
needs to use type-wise IDs and corresponding node types or edge types.
import dgl g = dgl.distributed.DistGraph('graph_name', part_config='data/graph_name.json') feat = g.nodes['T0'].data['feat'][type_wise_ids]
A user can create distributed tensors and distributed embeddings for a particular node type or
edge type. Distributed tensors and embeddings are split and stored in multiple machines. To create
one, a user needs to specify how it is partitioned with
PartitionPolicy.
By default, DGL chooses the right partition policy based on the size of the first dimension.
However, if multiple node types or edge types have the same number of nodes or edges, DGL cannot
determine the partition policy automatically. A user needs to explicitly specify the partition policy.
Below shows an example of creating a distributed tensor for node type T0 by using the partition policy
for T0 and store it as node data of T0.
g.nodes['T0'].data['feat1'] = dgl.distributed.DistTensor((g.number_of_nodes('T0'), 1), th.float32, 'feat1', part_policy=g.get_node_partition_policy('T0'))
The partition policies used for creating distributed tensors and embeddings are initialized when a heterogeneous graph is loaded into the graph server. A user cannot create a new partition policy at runtime. Therefore, a user can only create distributed tensors or embeddings for a node type or edge type. Accessing distributed tensors and embeddings also requires type-wise IDs.
7.3.2 Distributed sampling¶
DGL v0.6 uses homogeneous IDs in distributed sampling. Note: this may change in the future release. DGL provides four APIs to convert node IDs and edge IDs between the homogeneous IDs and type-wise IDs:
map_to_per_ntype(): convert a homogeneous node ID to type-wise ID and node type ID.
map_to_per_etype(): convert a homogeneous edge ID to type-wise ID and edge type ID.
map_to_homo_nid(): convert type-wise ID and node type to a homogeneous node ID.
map_to_homo_eid(): convert type-wise ID and edge type to a homogeneous edge ID.
Below shows an example of sampling a subgraph with
sample_neighbors() from a heterogeneous graph
with a node type called paper. It first converts type-wise node IDs to homogeneous node IDs. After sampling a subgraph
from the seed nodes, it converts homogeneous node IDs and edge IDs to type-wise IDs and also stores type IDs as node data
and edge data.
gpb = g.get_partition_book() # We need to map the type-wise node IDs to homogeneous IDs. cur = gpb.map_to_homo_nid(seeds, 'paper') # For a heterogeneous input graph, the returned frontier is stored in # the homogeneous graph format. frontier = dgl.distributed.sample_neighbors(g, cur, fanout, replace=False) block = dgl.to_block(frontier, cur) cur = block.srcdata[dgl.NID] block.edata[dgl.EID] = frontier.edata[dgl.EID] # Map the homogeneous edge Ids to their edge type. block.edata[dgl.ETYPE], block.edata[dgl.EID] = gpb.map_to_per_etype(block.edata[dgl.EID]) # Map the homogeneous node Ids to their node types and per-type Ids. block.srcdata[dgl.NTYPE], block.srcdata[dgl.NID] = gpb.map_to_per_ntype(block.srcdata[dgl.NID]) block.dstdata[dgl.NTYPE], block.dstdata[dgl.NID] = gpb.map_to_per_ntype(block.dstdata[dgl.NID])
From node/edge type IDs, a user can retrieve node/edge types. For example, g.ntypes[node_type_id]. With node/edge types and type-wise IDs, a user can retrieve node/edge data from DistGraph for mini-batch computation. | https://docs.dgl.ai/en/latest/guide/distributed-hetero.html | CC-MAIN-2021-49 | refinedweb | 824 | 56.76 |
It is most likely that you will have to practice more for the Java 8 specific topics, since they are relatively new topics in the exam. The exam may test you with different combinations of syntax and ask you to choose the correct answers.
Here are 9 questions for the OCAJP 8 exam that will be useful for your OCAJP Java certification preparation. If you have any questions, please write them in the comments section. If you are interested in practicing more questions, please consider buying one of the popular OCAJP practice exam simulators available in the market. They offer questions that are very relevant to the real exam.
OCAJP 8 Exam Objective
Here is the exam objective to keep in mind while preparing for the OCAJP exam:

- The OCAJP 8 exam expects you to recognize valid and invalid lambda expressions. It does not ask you to write lambda expressions.
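Since the exam focuses on recognizing valid versus invalid syntax, it may help to review a few lambda forms side by side before attempting the questions. The sketch below is illustrative only; the class and variable names are mine, not part of the exam questions:

```java
import java.util.function.Supplier;

public class LambdaSyntaxDemo {
    public static void main(String[] args) {
        // Valid: empty parameter list with an expression body
        Runnable r1 = () -> System.out.println("expression body");
        // Valid: empty parameter list with a block body
        Runnable r2 = () -> { System.out.println("block body"); };
        // Valid: an expression body yields its value implicitly
        Supplier<String> s = () -> "hello";

        // Invalid forms -- each fails to compile if uncommented:
        // Runnable bad1 = -> System.out.println("x");        // parameter list is required
        // Runnable bad2 = (void) -> System.out.println("x"); // 'void' is not a valid parameter
        // Supplier<String> bad3 = () -> { "hello" };         // a block body needs 'return'

        r1.run();
        r2.run();
        System.out.println(s.get()); // prints hello
    }
}
```

Keep these forms in mind when evaluating the answer options below, several of which reuse these exact invalid shapes.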
What is a Lambda Expression?

Here is an overview and definition of lambda expressions in Java 8, in case you are not already aware of this concept. Before you read the mock questions, please make sure you understand lambda expressions in Java 8.
A lambda expression is an anonymous method with a more compact syntax that also allows the omission of modifiers, return type, and in some cases parameter types as well. Before lambda expressions, anonymous methods were written inside anonymous classes, which take many lines of code compared to a single-line lambda expression.
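To make this concrete, here is a small sketch comparing an anonymous class with the equivalent lambda, using the standard java.util.function.Predicate interface (the class name AnonymousVsLambda is just for illustration):

```java
import java.util.function.Predicate;

public class AnonymousVsLambda {
    public static void main(String[] args) {
        // Pre-Java 8: an anonymous inner class spanning several lines
        Predicate<String> anon = new Predicate<String>() {
            @Override
            public boolean test(String s) {
                return s.isEmpty();
            }
        };

        // Java 8: the same behavior as a one-line lambda expression.
        // Modifiers, return type, and the parameter type are all omitted.
        Predicate<String> lambda = s -> s.isEmpty();

        System.out.println(anon.test(""));    // prints true
        System.out.println(lambda.test("x")); // prints false
    }
}
```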
1) Which are true about a functional interface?
- A. It has exactly one method and it must be abstract.
- B. It has exactly one method and it may or may not be abstract.
- C. It must have exactly one abstract method and may have any number of default or static methods.
- D. It must have exactly one default method and may have any number of abstract or static methods.
- E. It must have exactly one static method and may have any number of default or abstract methods.
2) Given
interface Test { public void print( ); }
Which are valid lambda expressions (select 2 options) ?
- A. ->System.out.println(“Hello world”);
- B. void -> System.out.println(“Hello world”);
- C. ( ) -> System.out.println(“Hello world”);
- D. ( ) ->{ System.out.println(“Hello world”); return; }
- E. (void ) -> System.out.println(“Hello world”);
3) Which lambda can replace the MyTest class to return the same value? (Choose all that apply)
interface Sample {
    String change(int d);
}

class MyTest implements Sample {
    public String change(int s) {
        return "Hello";
    }
}
- A. change((e) -> “Hello” )
- B. change((e) -> {“Hello” })
- C. change((e) -> { String e = “”; “Hello” });
- D. change((e) -> { String e = “”; return “Hello”; });
- E. change((e) -> { String e = “”; return “Hello” });
- F. change((e) -> { String f = “”; return “Hello”; });
4) What is the result?
1: import java.util.function.*;
2:
3: public class Student {
4:     int age;
5:     public static void main(String[] args) {
6:         Student p1 = new Student();
7:         p1.age = 1;
8:         check(p1, p -> p.age < 5);
9:     }
10:     private static void check(Student s, Predicate<Student> pred) {
11:         String result = pred.test(s) ? "match" : "not match";
12:         System.out.print(result);
13:     }
14: }
- A. match
- B. not match
- C. Compiler error on line 8.
- D. Compiler error on line 10.
- E. Compiler error on line 11.
- F. A runtime exception is thrown.
5) What is the output ?
1: interface Jump { 2: boolean isToLong(int length, int limit); 3: } 4: 5: public class Climber { 6: public static void main(String[] args) { 7: check((h, l) -> h.append(l).isEmpty(), 5); 8: } 9: private static void check(Jump j, int length) { 10: if (j.isTooLong(length, 10)) 11: System.out.println("too high"); 12: else 13: System.out.println("ok"); 14: } 15: }
- A. ok
- B. too high
- C. Compiler error on line 7.
- D. Compiler error on line 10.
- E. Compiler error on a different line.
- F. A runtime exception is thrown.
6) What can be inserted in the code below so that it will true when run ?
class Test { public static boolean check( List l , Predicate<List> p ) { return p.test(l) ; } Public static void main(String[] args) { boolean b = // write code here ; System.out.println(b); } }
Select 2 options
- A. check(new ArrayList( ), al -> al.isEmpty( ) );
- B. check(new ArrayList( ), ArrayList al -> al.isEmpty( ) );
- C. check(new ArrayList( ), al -> return al.size( ) == 0 );
- D. check(new ArrayList( ), al -> al.add(“hello”));
7. Given
class Test { int a ; Test( int a ) { This.a = a; } } And the following code fragment public void filter (ArrayList<Test> al,Predicate<Test> p) { iterator<Test> i = al.iterator( ); while(i.hasNext( ) ) { if(p.test(i.next( ) ) { i.remove( ); } } ---- ArrayList<Test> l = new ArrayList<Test>( ); Test t = new Test(5); l.add(t); t= new Test(6); l.add(t); t=new Test(7); l.add(t); //Insert method call here System.out.println(l);
Which of the following options print [5 7] ?
- A. filter(al,t->t.a%2==0 ) ;
- B. filter(al, (Test y)->y.a%2==0);
- C. filter(al, (Test y)->y.a%2);
- D. filter(al, y-> return y.a%2==0);
8. Which are true about java.util.function.Predicate ?
- A. It is an interface that has one method with declaration like-
public void test(T t)
- B. It is an interface that has one method with declaration like-
public boolean test(T t)
- C. It is an interface that has one method with declaration like-
public boolean test(T t)
- D. It is an abstract class that has one method with declaration like-
public abstract boolean test(T t)
- E. It is an abstract class that has one method with declaration like-
public abstract void test(T t)
9. Given
class Test { int a ; Test( int a ) { This.a = a; } } And the following code fragment public void filter (ArrayList<Test> al,Predicate<Test> p) { for(Test t : al) { if(p.test(t)) System.out.println(t.a) } } --- ArrayList<Test> l = new ArrayList<Test>( ); Test t = new Test(5); l.add(t); t= new Test(6); l.add(t); t=new Test(7); l.add(t); //Insert method call here
Which of the following options print 7 ?
- A. filter(al, (Test y) -> { return y.a>6 ; });
- B. filter(al, (Test y) -> { return y.a>6 });
- C. filter(al, ( d) -> return d.a>6) ;
- D. filter(al, d -> d.a>6) ;
Answers
1) Correct option : C
Functional interface must have exactly one abstract method and may have any number of default or static methods.
2) Correct options : C,D
Method doesn’t take any parameters , lambda expression should contain parenthesis in the parameter list of lambda expression. Method doesn’t return anything, so body part should not return anything.
3) Correct options : A, F.
Option B is incorrect because it does not use the return keyword. Options C, D and E are incorrect because the variable e is already in use from the lambda and cannot be redefined. Additionally, option C is missing the return keyword and option E is missing the semicolon.
4) Correct option : A.
This code is correct. Line 8 creates a lambda expression that checks if the age is less than 5. Since there is only one parameter and it does not specify a type, the parentheses around the type parameter are optional. Line 10 uses the Predicate interface, which declares a test() method
5) Correct option : C.
The interface takes two int parameters. The code on line 7 attempts to use them as if one is a StringBuilder. It is tricky to use types in a lambda when they are implicitly specified. Remember to check the interface for the real type.
6) Correct options : A,D
B is incorrect because parenthesis are missing for parameter and ArrayList is incorrect data type for parameter. C is incorrect because curly braces are mandatory to return keyword.
7) Correct option : B
Option A is syntactically correct but it gives compile time error because t variable within the same scope and can’t be declared two times. C is incorrect because it returns int, but Predicate method will return boolean. D is incorrect because curly braces are mandatory when return is being used in lambda expression.
8) Correct option : B
To answer this question you need to remember Predicate method declaration
Follow this link to know more about Predicate:
9) Correct options : A,D
B is incorrect because semicolon is missing after return statement. C is incorrect curly braces are mandatory when return is being used.
I hope this questions would be useful for preparing OCAJP 8 exam. If you have any questions in preparing for OCAJP exam, please write it in the comments section. We are happy to help you in passing the exam. | https://javabeat.net/ocajp-lambda-practice-questions/ | CC-MAIN-2017-47 | refinedweb | 1,430 | 68.77 |
Walkthrough: Writing Queries in C# (LINQ)
This walkthrough demonstrates the C# language features that are used to write LINQ query expressions. After completing this walkthrough you will be ready to move on to the samples and documentation for the specific LINQ provider you are interested in, such as LINQ to SQL, LINQ to DataSets, or LINQ to XML.
This walkthrough requires features that are introduced in Visual Studio 2008.
For a video version of this topic, see Video How to: Writing Queries in C# (LINQ).
To create a project
Start Visual Studio.
On the menu bar, choose File, New, Project.
The New Project dialog box opens.
Expand Installed, expand Templates, expand Visual C#, and then choose Console Application.
In the Name text box, enter a different name or accept the default name, and then choose the OK button.
The new project appears in Solution Explorer.
Notice that your project has a reference to System.Core.dll and a using directive for the System.Linq namespace.
The data source for the queries is a simple list of Student objects. Each Student record has a first name, last name, and an array of integers that represents their test scores in the class. Copy this code into your project. Note the following characteristics:
The Student class consists of auto-implemented properties.
Each student in the list is initialized with an object initializer.
The list itself is initialized with a collection initializer.
This whole data structure will be initialized and instantiated without explicit calls to any constructor or explicit member access. For more information about these new features, see Auto-Implemented Properties (C# Programming Guide) and Object and Collection Initializers (C# Programming Guide).
To add the data source
Add the Student class and the initialized list of students to the Program class in your project.
public class Student { public string First { get; set; } public string Last { get; set; } public int ID { get; set; } public List<int> Scores; } // Create a data source by using a collection initializer. static}}, new Student {First="Cesar", Last="Garcia", ID=114, Scores= new List<int> {97, 89, 85, 82}}, new Student {First="Debra", Last="Garcia", ID=115, Scores= new List<int> {35, 72, 91, 70}}, new Student {First="Fadi", Last="Fakhouri", ID=116, Scores= new List<int> {99, 86, 90, 94}}, new Student {First="Hanying", Last="Feng", ID=117, Scores= new List<int> {93, 92, 80, 87}}, new Student {First="Hugo", Last="Garcia", ID=118, Scores= new List<int> {92, 90, 83, 78}}, new Student {First="Lance", Last="Tucker", ID=119, Scores= new List<int> {68, 79, 88, 92}}, new Student {First="Terry", Last="Adams", ID=120, Scores= new List<int> {99, 82, 81, 79}}, new Student {First="Eugene", Last="Zabokritski", ID=121, Scores= new List<int> {96, 85, 91, 60}}, new Student {First="Michael", Last="Tucker", ID=122, Scores= new List<int> {94, 92, 91, 91} } };
To add a new Student to the Students list
Add a new Student to the Students list and use a name and test scores of your choice. Try typing all the new student information in order to better learn the syntax for the object initializer.
To create a simple query
In the application's Main method, create a simple query that, when it is executed, will produce a list of all students whose score on the first test was greater than 90. Note that because the whole Student object is selected, the type of the query is IEnumerable<Student>. Although the code could also use implicit typing by using the var keyword, explicit typing is used to clearly illustrate results. (For more information about var, see Implicitly Typed Local Variables (C# Programming Guide).)
Note also that the query's range variable, student, serves as a reference to each Student in the source, providing member access for each object.
To execute the query
Now write the foreach loop that will cause the query to execute. Note the following about the code:
Each element in the returned sequence is accessed through the iteration variable in the foreach loop.
The type of this variable is Student, and the type of the query variable is compatible, IEnumerable<Student>.
After you have added this code, build and run the application by pressing Ctrl + F5 to see the results in the Console window.
// Execute the query. // var could be used here also. foreach (Student student in studentQuery) { Console.WriteLine("{0}, {1}", student.Last, student.First); } // Output: // Omelchenko, Svetlana // Garcia, Cesar // Fakhouri, Fadi // Feng, Hanying // Garcia, Hugo // Adams, Terry // Zabokritski, Eugene // Tucker, Michael
To add another filter condition
You can combine multiple Boolean conditions in the where clause in order to further refine a query. The following code adds a condition so that the query returns those students whose first score was over 90 and whose last score was less than 80. The where clause should resemble the following code.
For more information, see where clause (C# Reference).
To order the results
It will be easier to scan the results if they are in some kind of order. You can order the returned sequence by any accessible field in the source elements. For example, the following orderby clause orders the results in alphabetical order from A to Z according to the last name of each student. Add the following orderby clause to your query, right after the where statement and before the select statement:
Now change the orderby clause so that it orders the results in reverse order according to the score on the first test, from the highest score to the lowest score.
Change the WriteLine format string so that you can see the scores:
For more information, see orderby clause (C# Reference).
To group the results
Grouping is a powerful capability in query expressions. A query with a group clause produces a sequence of groups, and each group itself contains a Key and a sequence that consists of all the members of that group. The following new query groups the students by using the first letter of their last name as the key.
Note that the type of the query has now changed. It now produces a sequence of groups that have a char type as a key, and a sequence of Student objects. Because the type of the query has changed, the following code changes the foreach execution loop also:
// studentGroup is a IGrouping<char, Student> foreach (var studentGroup in studentQuery2) { Console.WriteLine(studentGroup.Key); foreach (Student student in studentGroup) {
Press Ctrl + F5 to run the application and view the results in the Console window.
For more information, see group clause (C# Reference).
To make the variables implicitly typed
Explicitly coding IEnumerables of IGroupings can quickly become tedious. You can write the same query and foreach loop much more conveniently by using var. The var keyword does not change the types of your objects; it just instructs the compiler to infer the types. Change the type of studentQuery and the iteration variable group to var and rerun the query. Note that in the inner foreach loop, the iteration variable is still typed as Student, and the query works just as before. Change the s iteration variable to var and run the query again. You see that you get exactly the same results.
var studentQuery3 = from student in students group student by student.Last[0]; foreach (var groupOfStudents in studentQuery3) { Console.WriteLine(groupOfStudents.Key); foreach (var student in groupOfStudents) {
For more information about var, see Implicitly Typed Local Variables (C# Programming Guide).
To order the groups by their key value
When you run the previous query, you notice that the groups are not in alphabetical order. To change this, you must provide an orderby clause after the group clause. But to use an orderby clause, you first need an identifier that serves as a reference to the groups created by the group clause. You provide the identifier by using the into keyword, as follows:
var studentQuery4 = from student in students group student by student.Last[0] into studentGroup orderby studentGroup.Key select studentGroup; foreach (var groupOfStudents in studentQuery4) { Console.WriteLine(groupOfStudents.Key); foreach (var student in groupOfStudents) { Console.WriteLine(" {0}, {1}", student.Last, student.First); } } // Output: //A // Adams, Terry //F // Fakhouri, Fadi // Feng, Hanying //G // Garcia, Cesar // Garcia, Debra // Garcia, Hugo //M // Mortensen, Sven //O // Omelchenko, Svetlana // O'Donnell, Claire //T // Tucker, Lance // Tucker, Michael //Z // Zabokritski, Eugene
When you run this query, you will see the groups are now sorted in alphabetical order.
To introduce an identifier by using let
You can use the let keyword to introduce an identifier for any expression result in the query expression. This identifier can be a convenience, as in the following example, or it can enhance performance by storing the results of an expression so that it does not have to be calculated multiple times.
// studentQuery5 is an IEnumerable<string> // This query returns those students whose // first test score was higher than their // average score. var studentQuery5 = from student in students let totalScore = student.Scores[0] + student.Scores[1] + student.Scores[2] + student.Scores[3] where totalScore / 4 < student.Scores[0] select student.Last + " " + student.First; foreach (string s in studentQuery5) { Console.WriteLine(s); } // Output: // Omelchenko Svetlana // O'Donnell Claire // Mortensen Sven // Garcia Cesar // Fakhouri Fadi // Feng Hanying // Garcia Hugo // Adams Terry // Zabokritski Eugene // Tucker Michael
For more information, see let clause (C# Reference).
To use method syntax in a query expression
As described in Query Syntax and Method Syntax in LINQ (C#), some query operations can only be expressed by using method syntax. The following code calculates the total score for each Student in the source sequence, and then calls the Average() method on the results of that query to calculate the average score of the class. Note the placement of parentheses around the query expression.
var studentQuery6 = from student in students let totalScore = student.Scores[0] + student.Scores[1] + student.Scores[2] + student.Scores[3] select totalScore; double averageScore = studentQuery6.Average(); Console.WriteLine("Class average score = {0}", averageScore); // Output: // Class average score = 334.166666666667
To transform or project in the select clause
It is very common for a query to produce a sequence whose elements differ from the elements in the source sequences. Delete or comment out your previous query and execution loop, and replace it with the following code. Note that the query returns a sequence of strings (not Students), and this fact is reflected in the foreach loop.
IEnumerable<string> studentQuery7 = from student in students where student.Last == "Garcia" select student.First; Console.WriteLine("The Garcias in the class are:"); foreach (string s in studentQuery7) { Console.WriteLine(s); } // Output: // The Garcias in the class are: // Cesar // Debra // Hugo
Code earlier in this walkthrough indicated that the average class score is approximately 334. To produce a sequence of Students whose total score is greater than the class average, together with their Student ID, you can use an anonymous type in the select statement:
var studentQuery8 = from student in students let x = student.Scores[0] + student.Scores[1] + student.Scores[2] + student.Scores[3] where x > averageScore select new { id = student.ID, score = x }; foreach (var item in studentQuery8) { Console.WriteLine("Student ID: {0}, Score: {1}", item.id, item.score); } // Output: // Student ID: 113, Score: 338 // Student ID: 114, Score: 353 // Student ID: 116, Score: 369 // Student ID: 117, Score: 352 // Student ID: 118, Score: 343 // Student ID: 120, Score: 341 // Student ID: 122, Score: 368
After you are familiar with the basic aspects of working with queries in C#, you are ready to read the documentation and samples for the specific type of LINQ provider you are interested in:
LINQ to SQL [LINQ to SQL]
LINQ to XML [from BPUEDev11] | http://msdn.microsoft.com/en-us/library/bb397900(v=vs.120).aspx | CC-MAIN-2014-23 | refinedweb | 1,943 | 61.26 |
And the real problem is...
That date parsing is awful. Yes, I can just try strptime() with different formats until one works, but look at this:
from time import * strptime(strftime('%Z',localtime()),'%Z') Traceback (most recent call last): File "stdin", line 1, in ? ValueError: unconverted data remains: 'ART'
Now, what that is supposed to do, as far as I understand, is print my timezone (that is the strftime call, it prints ART), and then parse it back (that is the strptime call, and it fails).
Maybe %Z is new... no, it isnt. It is in the reference guide for 2.2.2 (what I use). | http://ralsina.me/weblog/posts/P97.html | CC-MAIN-2020-24 | refinedweb | 106 | 84.17 |
#include <curses.h>
The addchstr() function copies the chtype character string to the stdscr window at the current cursor position. The mvaddchstr()
and mvwaddchstr() functions copy the character string to the starting position indicated by the x (column) and y (row) parameters
(the former to the stdscr window; the latter to window win). The waddchstr() is identical to addchstr(), but writes
to the window specified by win.
The addchnstr(), waddchnstr(), mvaddchnstr(), and mvwaddchnstr() functions write n characters
to the window, or as many as will fit on the line. If n is less than 0, the entire string is written, or as much of it as fits on the line. The former two functions place the
string at the current cursor position; the latter two commands use the position specified by the x and y parameters.
These functions differ from the addstr(3XCURSES) set of functions in two important respects.
First, these functions do not advance the cursor after writing the string to the window. Second, the current window rendition is not combined with the character; only the attributes
that are already part of the chtype character are used.
On success, these functions return OK. Otherwise, they return ERR.
None.
addch(3XCURSES), addnstr(3XCURSES), attroff(3XCURSES) | http://www.shrubbery.net/solaris9ab/SUNWaman/hman3xcurses/addchstr.3xcurses.html | CC-MAIN-2018-05 | refinedweb | 206 | 54.42 |
Adding full i18n to Quasar
UPDATE 6th of June 2019: This article has been updated to fit v1.0 of Quasar.
If you are wanting to translate your own phrases and terms for your components and / or you are wondering why Quasar’s own
$q.lang (formerly $q.i18n)isn’t getting you to where you want to go, that’s because Quasar’s internal translation system is built only for Quasar’s internal components. Once you start building out your own components, Quasar’s i18n system won’t extend to help you.
No biggie! This tutorial should get you to your own translation system.
IMPORTANT NOTE: If you had selected i18n during the initialization of your project with Quasar’s CLI, you can skip to step 5 below. You are basically already set up to do translations. 😄
Steps to follow.
- First, you need to install vue-i18n into your project. Note, we always recommend using Yarn for project dependency management.
$ yarn add vue-i18nor $ npm install vue-i18n --save
If all went well, you should see something like this:
2. Next, you need to create the boot file for Quasar. For this, we can use Quasar’s CLI
new command. Enter this into your console:
$ quasar new boot i18n
3. Now go into the
/boot directory and open up the newly created
i18n.js file. Delete the contents and replace them with the following code:
import VueI18n from 'vue-i18n'
import messages from 'src/i18n'
export default ({ app, Vue }) => {
Vue.use(VueI18n)
app.i18n = new VueI18n({
locale: 'en',
fallbackLocale: 'en',
messages
})
}
4. Now we need to tell Quasar the boot file is available. Go to the
quasar.conf.js file and open it. Add
'i18n' as an array item to the boot array. Something like this:
5. You can put your English phrases into the
en-us/index.js file in the directory called
i18n directly under
/src in your project.
6. If you’d like to make separate language translations of your own phrases, create a second directory under
/i18n in the same format as
en-us directory. Below is an example for German translations (
de).
7. Add your translations as needed to your new file.
export default {
failed: 'Aktion fehlgeschlagen',
success: 'Aktion erfolgreich',
}
8. Make sure to add the new file to the imports and exports of
/i18n/index.js.
That’s it! You can now access your translation phrases in your components with:
{{ $t('translation.path') }} // in your template codeor this.$t('translation.path') // in your script code
Example Code
Visit this Codesandbox to see it all working (including the first Pro Tip below, along with a phrase added to the menu “Essential Links”) !!
Pro Tip! — Change the app language dynamically!
If you want to change the language dynamically, first create a select component for the language selection.
<q-select
label="Select Language"
v-model="lang"
map-options
:
Then create the options array for the select component in the data function. The example below adds German. Also set the initial value of
lang for the v-model of the select, so we can get the binding going.
return {
langs: [
{
label: 'German',
value: 'de'
},
{
label: 'US English',
value: 'en-us'
}
],
lang: this.$i18n.locale
Then create a watcher for the
lang data property. Here you’ll want to set both the
$q.lang (Quasar’s translations) and the
$i18n (your app’s translations) objects for the newly selected language.
watch: {
lang(lang) {
this.$i18n.locale = lang.value
// set quasar's language too!!
import(`quasar/lang/${lang.value}`).then(language =>
{this.$q.lang.set(language.default)
})
}
}
You can reference the
/i18n folder in Quasar to see full list of the available languages.
IMPORTANT NOTE: You must follow the language file names as the option values in your language selection. You’ll also probably want to make this “language selector” it’s own component, so you can insert it anywhere you’d like in your application.
If you’d like another language added, please make a PR to the Quasar repository.
Pro Tip #2! — Translations outside of components
If you need access to your app’s translations outside of your components, you can also use this code for the boot file:
import VueI18n from 'vue-i18n'
import { messages } from 'src/i18n'let i18nexport default ({ app, Vue }) => {
Vue.use(VueI18n)
app.i18n = new VueI18n({
locale: 'en-us',
fallbackLocale: 'en-us',
messages
})
i18n = app.i18n
}export { i18n }
You should be able to access the translations in non-component files with:
import i18n from 'boot/i18n.js'i18n.t('translation.path')
For further information on how to work with vue-i18n, please refer to the vue-i18n documentation.
And have fun with Quasar! 😃
If you need more information about Quasar itself, here are a few links for your consideration:
COMPONENTS:
CLI:
THE DOCS:
DISCORD:
FORUM:
TWITTER:
STEEM: | https://medium.com/quasar-framework/adding-full-i18n-to-quasar-150da2d5bba4 | CC-MAIN-2020-40 | refinedweb | 805 | 66.94 |
Greetings everyone!
I am building a hex tile game where hex is a piece of terrain and can be of different type, e.g. "Sand", "Dirt", "Water". Therefore each hex can have different sprite and probably different properties, say Water is not walkable and Dirt gives -1 to speed.
I am a bit confused of correct storing the information about terrain types. Below I will show what I've come up with at the moment.
Tile game object
Has Sprite Renderer component attached
Has "Tile.cs" script attached with public property "Terrain Type"
Tile.cs calls helper class to set correct terrain sprite to Sprite Renderer's "sprite" property
Terrain.cs helper class
Has public enum TerrainType defined => {Sand, Water, Dirt}
Has public class TerrainDefinition for simple mapping of TerrainType and its Sprite
Class Terrain contains List which is filled in constructor
The code is below.
Terrain.cs
// Possible terrain types
public enum TerrainType
{
Sand,
Water,
Dirt
}
// Structure to map terrain type with its corresponding sprite
public class TerrainDefinition
{
public TerrainType type;
public Sprite sprite;
public TerrainDefinition(TerrainType t, Sprite s)
{
type = t;
sprite = s;
}
}
// Implementation of the type => sprite mapping
// Logic to retrieve a sprite by type and other stuff like getting terrain properties...
public class Terrain
{
private List<TerrainDefinition> terrainList = new List<TerrainDefinition>();
public Terrain()
{
terrainList.Add(new TerrainDefinition(
TerrainType.Sand,
Resources.Load<Sprite>("Sprites/Sand")
)
);
terrainList.Add(new TerrainDefinition(
TerrainType.Dirt,
Resources.Load<Sprite>("Sprites/Dirt")
)
);
}
public Sprite GetSpriteByType(TerrainType t)
{
for (int i=0; i < terrainList.Count; i++)
{
if (terrainList[i].type == t)
{
return terrainList[i].sprite;
}
}
return null;
}
}
Tile.cs
public class Tile : MonoBehaviour
{
public TerrainType terrainType;
public void Start()
{
SetTerrainType(terrainType);
}
public void SetTerrainType(TerrainType t)
{
Terrain trn = new Terrain();
GetComponent<SpriteRenderer>().sprite = trn.GetSpriteByType(t);
terrainType = t;
}
}
What I am not happy with is that Terrain data is moved outside the Tile.cs. I think that each Tile should be aware of its possible types and its sprite representation without having to call other classes. The same for handling additional logic like "whether the tile is walkable", "what speed this tile can be passed" and bla-bla-bla.
The reason I moved the Terrain data outside the Tile.cs is to have only one place where sprites are physically loaded into memory, assuming that I will probably have a lot of tile and each Tile object will have duplicates of sprite images loaded, which is not very good.
My questions are about corret data structure in this case:
1) Should Tile class contain all info about its possible properties and images associated with these properties? Or properties info and logic should be moved outside the Tile class?
2) Storing sprites inside Tile class will lead to duplicating images in the memory for each object? Or it will work like "references" to single memory space?
3) I use Resources.Load to load images and map the with "terrain type", is this OK? I read that using Unity IDE to set resource references by drag and drop is better way, should I avoid using Resource.Load?
Thanks for answers and happy conding!
First of all: Having seperate GameObjects for each tile leads to performance problems very quickly for all but the smallest of maps. Generally, tiles are made from mech chunks or texture chunks which are generated on runtime.
Anyway, if each of your terrain tiles can only be of a certain pre-defined type (grass, road etc.), then you could just give each one a simple ID or name and use that to look up a dictionary or similar collection where t he stats for that terrain type are stored. On the other hand, if each tile can change any stat at any time, then you only need the tile catalogue to set up tile stats initially, but each tile still has to be a full instance of the terrain type class (the data would be copied from the catalogue to the tile).
As for Resources.Load, I wonder what yu mean by "better way"? Performancy shouldn'T be an issue because dragging / dropping is only done in edit mode anyway.
@Cherno, thanks for the reply!
leads to performance problems very quickly
leads to performance problems very quickly
Unity can't handle thousands of GameObjects in the same scene, no matter how powerful the hardware is. If you have a map that is 50x50 tiles big, that's already 2,500 GOs, and 125,000 if you go into the third dimension. Each Tile Object would have it's own Update function that is called, even if it's empty, and colliders etc. makes it even more of a resource hog.
A dictionary is a collection, like an array or list but each element is a value that belongs to a key, so you pass the key and get the value in return.
Here is a nice overview/guide for using collections:
Choosing the right collection type
Note that you need to be "using System.Collection.Generic" to access collections like Lists and Dictionaries.
Another overview, a bit more technical in nature:
C#/.NET Fundamentals: Choosing the Right Collection Class
As for procedurally generating meshes, this has been done (and written down) a hundred times before so you should find plenty of learning material with a simple Google search.
Here is a YT tutorial to get you started:
Unity 3d: TileMaps - Part 1 - Theory + The "Wrong" Way
Same goes for 2d tilemaps.
@Cherno
Unity can't handle thousands of GameObjects in the same scene
Unity can't handle thousands of GameObjects in the same scene
You can test it yourself by just adding GOs until the framerate starts to drop.
Note that open-world games like Fallout 3 etc. never have all the objects like items, trees, buildings and characters present at all times; things are loaded into and out of memory constantly when the player moves through the world, a process also known as streaming. So you see, even AAA games built for high-end hardware have to deal with certain limitations.
Answer by ShadyProductions
·
Jan 29, 2016 at 09:35 PM
Here's an interesting alternative to making a tile sort of game.
By using a mesh to make voxels.
Answer by J0hn4n
·
Oct 18, 2017 at 07:51 PM
i dont know but my own preference its store maps like ints or enums arrays fields, where those values represent what you want. like 0 to none, 1 wall, 2 floor , 3 door etc.... i think its a lot easier that way so just parse that data to your map. also you can map minimaps quickly using these build up the class structure for an object
1
Answer
Making a class publicly available
3
Answers
idea for a script of a gameobject with different types/roles
2
Answers
Can I save class instances in a prefab-like way?
1
Answer
calling a class named by a string
-1
Answers | https://answers.unity.com/questions/1134736/how-to-correctly-store-tile-and-terrain-type-data.html | CC-MAIN-2018-30 | refinedweb | 1,157 | 61.77 |
In this section, we will look at various aspects of the iostream output class (ostream).
Note: All of the I/O functionality in this lesson lives in the std namespace. That means all I/O objects and functions either have to be prefixed with “std::”, or the “using namespace std;” statement has to be used.
The insertion operator
The insertion operator (<<) is used to put information into an output stream. C++ has predefined insertion operations for all of the built-in data types, and you've already seen how you can overload the insertion operator for your own classes.
In the lesson on streams, you saw that both istream and ostream were derived from a class called ios. One of the jobs of ios (and ios_base) is to control the formatting options for output.
Formatting
There are two ways to change the formatting options: flags, and manipulators. You can think of flags as boolean variables that can be turned on and off. Manipulators are objects placed in a stream that affect the way things are input and output.
To switch a flag on, use the setf() function, with the appropriate flag as a parameter. For example, by default, C++ does not print a + sign in front of positive numbers. However, by using the std::ios::showpos flag, we can change this behavior:
This results in the following output:
+27
It is possible to turn on multiple ios flags at once using the OR (|) operator:
To turn a flag off, use the unsetf() function:
+27
28
There’s one other bit of trickiness when using setf() that needs to be mentioned. Many flags belong to groups, called format groups. A format group is a group of flags that perform similar (sometimes mutually exclusive) formatting options. For example, a format group named “basefield” contains the flags “oct”, “dec”, and “hex”, which controls the base of integral values. By default, the “dec” flag is set. Consequently, if we do this:
We get the following output:
27
It didn’t work! The reason why is because setf() only turns flags on -- it isn’t smart enough to turn mutually exclusive flags off. Consequently, when we turned std::hex on, std::ios::dec was still on, and std::ios::dec apparently takes precedence. There are two ways to get around this problem.
First, we can turn off std::ios::dec so that only std::hex is set:
Now we get output as expected:
1b
The second way is to use a different form of setf() that takes two parameters: the first parameter is the flag to set, and the second is the formatting group it belongs to. When using this form of setf(), all of the flags belonging to the group are turned off, and only the flag passed in is turned on. For example:
This also produces the expected output:
Using setf() and unsetf() tends to be awkward, so C++ provides a second way to change the formatting options: manipulators. The nice thing about manipulators is that they are smart enough to turn on and off the appropriate flags. Here is an example of using some manipulators to change the base:
This program produces the output:
1b
1c
29
In general, using manipulators is much easier than setting and unsetting flags. Many options are available via both flags and manipulators (such as changing the base), however, other options are only available via flags or via manipulators, so it’s important to know how to use both.
Useful formatters
Here is a list of some of the more useful flags, manipulators, and member functions. Flags live in the std::ios class, manipulators live in the std namespace, and the member functions live in the std::ostream class.
Example:
Result:
1 0
true false
1 0
true false
5
+5
5
+5
1.23457e+007
1.23457E+007
1.23457e+007
1.23457E+007
27
27
33
1b
27
33
1b
By now, you should be able to see the relationship between setting formatting via flag and via manipulators. In future examples, we will use manipulators unless they are not available.
Precision, notation, and decimal points
Using manipulators (or flags), it is possible to change the precision and format with which floating point numbers are displayed. There are several formatting options that combine in somewhat complex ways, so we will take a closer look at this.
If fixed or scientific notation is used, precision determines how many decimal places in the fraction is displayed. Note that if the precision is less than the number of significant digits, the number will be rounded.
Produces the result:
123.456
123.4560
123.45600
123.456000
123.4560000
1.235e+002
1.2346e+002
1.23456e+002
1.234560e+002
1.2345600e+002
If neither fixed nor scientific are being used, precision determines how many significant digits should be displayed. Again, if the precision is less than the number of significant digits, the number will be rounded.
Produces the following result:
123
123.5
123.46
123.456
123.456
Using the showpoint manipulator or flag, you can make the stream write a decimal point and trailing zeros.
123.
123.5
123.46
123.456
123.4560
Here’s a summary table with some more examples:
Width, fill characters, and justification
Typically when you print numbers, the numbers are printed without any regard to the space around them. However, it is possible to left or right justify the printing of numbers. In order to do this, we have to first define a field width, which defines the number of output spaces a value will have. If the actual number printed is smaller than the field width, it will be left or right justified (as specified). If the actual number is larger than the field width, it will not be truncated -- it will overflow the field.
In order to use any of these formatters, we first have to set a field width. This can be done via the width(int) member function, or the setw() manipulator. Note that right justification is the default.
This produces the result:
-12345
-12345
-12345
-12345
- 12345
One thing to note is that setw() and width() only affect the next output statement. They are not persistent like some other flags/manipulators.
Now, let’s set a fill character and do the same example:
This produces the output:
-12345
****-12345
-12345****
****-12345
-****12345
Note that all the blank spaces in the field have been filled up with the fill character.
The ostream class and iostream library contain other output functions, flags, and manipulators that may be useful, depending on what you need to do. As with the istream class, those topics are really more suited for a tutorial or book focusing on the standard library (such as the excellent book “The C++ Standard Template Library” by Nicolai M. Josuttis).
[codestd::iosshowpos[/code] There's a missing scope resolution here.
Thanks, fixed.
Also, under the "Useful formatters" heading, in the section on "boolalpha", Line 3 states
but I believe it should be
right? Also, the next one has the showpos flag in the table as "std::ios::iosshowpos". I think that's a typo with the extraneous "ios" in front of "showpos", as in the code example you simply use "std::ios::showpos" for the flag.
Correct on both counts. Fixed. Thank you much for pointing these out.
Hi Alex,
I'm getting below error when i use "std::cout.setf(std::hex);".
error: no matching function for call to ‘std::basic_ostream<char>::unsetf(std::ios_base& (&)(std::ios_base&))’
std::cout.unsetf(std::hex);
and it gets resolved after i change it to "std::cout.setf(std::ios::hex);"
Yes, it should be as you suggest. I updated the lesson to include the correct ::ios prefixes.
Alex,
On Mac (macOS 10.12), many of the examples do not work as written here, so I'm including code that does work for all of the examples.
When using std::cout.setf (), values are not in the std:: namespace, but rather in the std::ios:: namespace:
boolalpha and noboolalpha are in the std:: namespace, and using std::setf () requires the std::ios:: namespace:
Again, the use of std:: vs. std::ios:: namespaces:
Again:
This one didn't require modifications within the code to work, but, it does require inclusion of the iomanip.h header. Also, I thought the first output of an extra '\n' isn't needed, so I took that out:
Again, the iomanip.h header is required, but the rest of this code was good to go, as is:
Thanks, Lamont. I'm on Windows 10 and was also having this issue.
Thanks for this.
Hello Alex, just a quick question regarding manipulators. Why would we type this:
using namespace std;
cout << hex << 27 << '\n'; //1b
cout << 28 << '\n'; //still in hexadecimal - 1c
cout << dec << 29 << '\n'; //back to dec - 29
as appose to this:
using namespace std;
cout << ios::hex << 27 << '\n'; //1b
cout << 28 << '\n'; //still in hexadecimal - 1c
cout << ios::dec << 29 << '\n'; //back to dec - 29
Including ios:: more or less breaks the code, but why does this occur?
Thanks
Because the article was wrong, and it should be std::hex and std::dec. I've updated the article. My apologies for the mistake.
Alright, cheers
shouldn't true be 1 and false be 0?
Why in the boolalpha, 0 is true?
Or was it a typo?
Typo. I've fixed it now.
Under "Useful formatters", second sentence, you wrote:
" manipulators lives in the std namespace".
I think "lives" should be singular.
In the section "Usefull Formaters", the "Group" column is empty on each flag table that has only one row. Is it normal?
Yup, those don't have a group.
For fun I had a go at writing a function that can format a double type to a specified number of significant figures returning the value as a string. It can display numbers in scientific format if that number is either less than some predefined value or greater than some other predefined value. Also you can set a "threshold-to-zero" value such that if the number is less than or equal to this threshold the string representation of the number is set to zero.
FormatOutput.h
FormatOutput.cpp
Note that should a rounded value overflow to the next decimal unit an extra significant figure is given. For example, using this function to round 999.95 to 4 s.f. results in a string containing "1000.0", whereas 999.94 results in the string "999.9", and 1000.04 gives "1000".
Hi Alex,
I think you made a mistake here by assigning showpoint flag to the floatfield group, since i,ve found () that it belongs to the so called independent format flags. That's i think why this code doesn't perform properly:
#include <iostream>
#include <iomanip>
using namespace std;
int main ()
{
cout.setf(ios_base::showpoint,ios::floatfield);
cout << setprecision(4) << 831.0 << ' ' << 8e4;
}
It prints:831 8e+04. But as soon as I remove ios::floatfield from parameter list it performs as intended:831.0 8.000e+04 .
"uppercase - Uses upper case letters"
Who uses uppercase letters when this flag/manipulator is applied on streams. The program itself ?
std::cout does. When the program sends the uppercase manipulator to std::cout, std::cout will print upper case letters for the floating point exponent or hexadecimal values instead of lower case ones.
Hi Alex,
I think the uppercase examples are not correct. They are actually examples for setprecisions
No, it's correct. Using the uppercase manipulator makes std::cout print an uppercase E for the exponent of floating point numbers, as well as uppercase letters when printing hexadecimal values.
exactly it should be like:
0 1 –> should be 1 0
true false
0 1 –> should be 1 0
true false
Excellent Tutorial.............
Great tutorials. I've read them all so far.
However, at the boolalpha example on this page I noticed 0 represents true and 1 represents false?
0 1 --> should be 1 0
true false
0 1 --> should be 1 0
true false
I ran the example code to be sure (my world would collapse if true was 0 :P).
correct
Looking forward to see sections on vector, map, iterators, etc... There are many available sources online but I know when Alex write it it will be the best
Excellent Site. I am setting a book mark. I tried your examples and everything works fine. I would also like to reset the floating point precision back to its default. I am writing a library routine. I don't know what the user will have for their default precision but it would be great to capture it, change the precision to my own, then reset it back.
Thanks,
Vinny
So, how do you go about adjusting the size of the exponent from 3 digits to 2? I have a floating point number that I want outputted as 0.000E+00 instead of 0.000E+000.
Any ideas?
Thanks
I'm not sure you can adjust that in scientific notation format. If there is a way I'm not aware how.
I was stuck with the same problem. From the info i've gathered, the number of digits displayed in the exponent in scientific notation is implementation dependent, meaning it depends on the machine you are using.
It seems to me that the outputting of numbers with iostream is the worst part of C++ (and it's not even part of C++!) The thing that really bugs me is the persistance of things like floatfield and the non-persistance of the width settings. It means that for any given output statement you've no idea what will be displayed unless you explicitly set everything. For example, setting output to scientific somewhere in your program means that all subsequent output will be scientific unless you explicitly say otherwise - the only way I can see to get back to the default output is to use
std::cout.unsetf(std::ios::scientific|std::ios::showpoint|std::ios::fixed);
Bring on C# (or use stdio) I say.
Anyway enough of that, there are a few typos on this page:
In the description of the floatfield group, "floatfield" is missing for showpoint.
In the floating point output examples table, for "fixed" precision 3 there are two decimal points, and for "scientific" precisions 4-6 there are no decimal points.
VS 2005 seems to have a bug regarding showpoint manipulator. The following code:
double x = 1.2345;
cout << showpoint << x << endl;
cout << scientific << x << endl;
cout << showpoint << x << endl;
cout.setf(std::ios::showpoint,std::ios::floatfield);
cout << x << endl;
produces:
1.23450
1.234500e+000
1.234500e+000
1.23450
so you need to use setf to switch from scientific to showpoint (not that I can imagine that will worry too many people!)
I am personally not a fan of the C++ I/O operators for outputting formatted numbers. In those cases, I usually fall back to the old C-style stdio printf() function way of doing things. Thanks for noting the errors.
Name (required)
Website
Save my name, email, and website in this browser for the next time I comment. | https://www.learncpp.com/cpp-tutorial/183-output-with-ostream-and-ios/ | CC-MAIN-2020-29 | refinedweb | 2,542 | 63.29 |
I am getting pymodbus port 502 connection refused.
- Nagarjuna Reddy
@Lazar-Demin
Can you please help me? I am getting a pymodbus port 502 "connection refused" error. What could be the issue?
from pymodbus.client.sync import ModbusTcpClient
import time
import logging
logging.basicConfig()
log = logging.getLogger()
log.setLevel(logging.DEBUG)
host = '192.168.0.200'
port = 502
client = ModbusTcpClient(host, port)
client.connect()
while True:
rr = client.read_holding_registers(0x0001,2,unit=2)
print(rr)
#assert(rr.function_code < 0x80) # test that we are not an error
#print rr.registers
time.sleep(5)
error:
root@Omega-6834:~# python modbus_ip.py
ERROR:pymodbus.client.sync:Connection to (192.168.0.200, 502) failed: timed out
ERROR:pymodbus.client.sync:Connection to (192.168.0.200, 502) failed: timed out
Traceback (most recent call last):
File "modbus_ip.py", line 13, in <module>
rr = client.read_holding_registers(0x0001,2,unit=2)
File "/usr/lib/python2.7/site-packages/pymodbus/client/common.py", line 114, in read_holding_registers
return self.execute(request)
File "/usr/lib/python2.7/site-packages/pymodbus/client/sync.py", line 107, in execute
raise ConnectionException("Failed to connect[%s]" % (self.str()))
pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] Failed to connect[ModbusTcpClient(192.168.0.200:502)]
Note: i am able to pinging 192.168.0.200 this IPaddress.
@Nagarjuna-Reddy This suggests that port 502 is not open on IP 192.168.0.200. Is the service running on that device on that IP and port? Is there a firewall in place?
A good test is to try to telnet to that IP and port. You'll need to provide more specifics of your setup if you need more help, but check the above first.
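If telnet isn't handy, the same reachability check can be sketched with Python's standard socket module (this sketch is mine, not from the thread; the host and port are just the values from the original post):

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection() performs the full TCP handshake, so success
        # means something is actually listening on that port
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # covers ConnectionRefusedError, timeouts, and unreachable hosts
        return False

print(port_is_open('192.168.0.200', 502, timeout=1.0))
```

If this prints False while ping works, the device is up but nothing is accepting TCP connections on port 502, which matches the pymodbus error above.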
- Nagarjuna Reddy
@crispyoz Sir, it was working previously; the issue appeared suddenly while the script was running. I am able to ping the device IP, but telnet to the port fails with a forbidden error. I restarted the Onion and the port still was not open; only after restarting the Modbus device was the port open again. Is there any way to clear that stuck port 502 connection from the terminal side?
@Nagarjuna-Reddy I suspect the issue is that when you connect to the Modbus host and then disconnect, the port on the host is not closed, and the device can only accept a single connection. So you need to make sure that you close the port on the client side each time, or subsequent connections will fail.
I'm making some assumptions here because I don't have details of the host device, but the logic is sound. | https://community.onion.io/topic/4411/i-am-getting-pymodbus-502-connection-refused/4 | CC-MAIN-2021-10 | refinedweb | 427 | 62.24 |
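One way to apply that advice to the script in the question is to open the connection once, reuse it for every poll, and guarantee it is released on exit. This is only a sketch against the pymodbus 2.x synchronous API used in the original post (the function name and the `cycles` parameter are mine; `isError()` may not exist on very old pymodbus releases):

```python
import time

def poll_registers(host='192.168.0.200', port=502, unit=2, cycles=None, delay=5):
    """Open one Modbus/TCP connection, reuse it for every read, and always
    close it so the device's single connection slot is freed afterwards."""
    # imported here so the sketch only needs pymodbus when actually run
    from pymodbus.client.sync import ModbusTcpClient

    client = ModbusTcpClient(host, port)
    if not client.connect():
        raise RuntimeError('could not connect to %s:%d' % (host, port))
    try:
        count = 0
        while cycles is None or count < cycles:
            rr = client.read_holding_registers(0x0001, 2, unit=unit)
            if rr.isError():
                print('read failed:', rr)
            else:
                print(rr.registers)
            count += 1
            time.sleep(delay)
    finally:
        # the key point: release the port even if the loop raises,
        # so the next connection attempt is not refused
        client.close()
```

Calling `poll_registers(cycles=10)` polls ten times and then disconnects cleanly; even an exception mid-loop passes through the `finally` block, so the device is left able to accept the next connection.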
Ideas for branching out and making money with...
- Laundromats
- Detailing
- Pet Washes
- Quick Lube
BIG BANG THEORIES
ALSO INSIDE: Another Amazing SS Super Woman, Making Educated Business Decisions, Getting Ahead of Drought Legislation, and of course, more dumb criminals!!!
2 • SPRING 2016 •
CONTENTS
Letters to the Editor ...................... 4
Interview with Dave Dugoff .......... 8
Innovations ..................................... 19
Association News ......................... 20
Tricks of the Trade ...................... 30
Decisions, Decisions ................... 36
Industry Dirt .................................. 50
Extra! Extra! ................................... 53
Interview with Gloria Winterhalt ... 56
Big Bang Theories ....................... 62
Darwin at the Carwash ............... 84

VOL. 43, NO. 2, SPRING 2016
Publisher: Jackson Vahaly
Editor: Kate Carr
Design: Katy Barret-Alley
Editor Emeritus: Jarret J. Jakubowski
Editor Emeritus: Joseph J. Campbell
Editor Posthumous: Julia E. Campbell

CARR'S CORNER: LETTER FROM THE EDITOR
To lobby or not to lobby, that is the question. The Puget Sound Car Wash Association recently hosted Western Carwash Association Executive Director Kristy Babb at its March meeting to ask her one important question: Should we be lobbying?

It's the question heard round the industry this year, as operators in several states square off with legislators on topics like taxation, environmental regulations and employment practices. As Babb pointed out in her presentation, "If you aren't at the table, you're on the menu." This summarizes the importance of lobbying by our regional and national associations quite well.

The WCA, of course, has been politically active for a while now, especially in California, where it has recently faced government interference on two matters integral to a carwash's success: water and labor. "I also love reminding people that what happens in California doesn't always stay in California; that is why the WCA has year-round legislative monitoring in all 12 of our member states," Babb added.

And what good comes from these efforts? Well, the WCA (working with a coalition of business entities) has been successful in stopping several attempts at increasing the minimum wage in California, Babb pointed out. The WCA also works with legislators and the labor community to reduce the negative impacts of several requirements of California's Car Wash Registration law. "This year, thanks to tireless work by the WCA Legislative Committee members and Political Solutions, Governor Brown has suggested increasing the number of enforcement officers for the carwash industry," Babb stated in her presentation. "We hope that this increased focus on
enforcement will help to shut down illegal operators and help level the playing field for legally operating carwashes."

Political Solutions is the lobbying firm the WCA hired to augment its voice and reach in Sacramento. At the start of each legislative session, Political Solutions reads all of the introduced bills (about 3,000) and flags those issues which would impact the carwash industry, Babb explained. After input from the WCA Legislative Committee, Political Solutions advocates either for or against bills on which the WCA has decided to take a position. Issues range from general business topics, such as minimum wage, to carwash-specific issues related to enforcement and registration. Political Solutions has also created a matrix which matches WCA members to their legislative representatives. With this list, Political Solutions is able to encourage WCA members to reach out to their respective legislators on issues of high importance.

I hear from a lot of our readers at trade shows who are facing regulatory or taxation challenges in their states. They ask me, "What can be done?" Well, I think Babb would tell you: quite a lot, if you work together and within the system. I encourage those of you facing such struggles to reach out to your regional associations and the International Carwash Association to get the ball rolling -- and don't forget to keep SSCWN in the loop. We love to report on your hard work and successes.
Happy washing!
Kate
LETTERS
Reader Input & Feedback

Hello Kate,
I have just spent most of the day reading back issues of SSCWN from the time you took over as editor. I must tell you I was very pleasantly surprised. First, I was glad to see that you were not reluctant to express your thoughts on the state of the ICA and to give in-depth coverage of what goes on at the shows, positively or negatively. I must say that in all of the years I have been reading car wash magazines (cover to cover) I have not seen or read opinions expressed the way you have. Most of the magazines are afraid of offending an advertiser. Obviously you are not. I particularly enjoyed your coverage of the 2014 ICA show in Chicago and comments from the vendors meeting that was held. Unfortunately I did not attend that show. Just prior to reading that issue I came across Paul Fazio's letter to SSCWN regarding events that occurred at the show, as well as other information regarding the ICA and the transition it has gone through. I had been out of the industry for a while and had lost track of some of the association's movements. These past issues of SSCWN brought me back up to speed.
The second thing I noticed was the content. Your use of color pictures really grabbed my attention, as did the articles. In the past when I would get issues of SSCWN I would mostly skim through the newspaper. Now I found the articles to be intriguing and definitely worth reading. The publication really held my interest. Not so in the past. I will be exhibiting with my new company, Auto Glanz Solutions, in Nashville. Certainly would hope to catch up with you there. Best of luck to SSCWN and keep up the great writing.
Regards,
Stuart Levy
President, Auto Glanz Solutions
Stuart,
Thanks for taking the time to pass along such kind sentiments; they are very much appreciated! Of course, since you flatter me and our publication by praising our “tell it like it is” style of journalism (hear! hear!) -- I must continue in that spirit.
I have to absolutely disagree with your comments in regards to the SSCWN of the "past." It is entirely because of the integrity of the editors who built this publication that I have had the courage to report so... well, as one advertiser described it to me recently, "bluntly." Our mission is merely to continue the work begun by Joe and Julia Campbell and faithfully carried out by J.J. Jakubowski over the decades. Without that foundation, there wouldn't be the SSCWN you hold in your hands today. I find myself referencing my huge stack of archived magazines (thanks, JJJ!) daily, and you'll find at least one archived piece in each issue we put out these days (see p. 36 for an excellent example). I consider myself a huge success if the "new" SSCWN is even half as good as the "old." So thank you again for the praise, and I'll continue to do my best to keep building on the revolutionary publication they created.
With gratitude,
Kate

Kate,
The Darwin awards always amaze me. I have been meaning to send this one in for some time, and here is another issue to remind me. The award nominees are not always criminals; sometimes they are customers. We have all heard this: "I put a dollar in your vacuum and it does not work." The picture says it all. The date on the pic is not correct. This guy lives 4 blocks from the carwash, a long-term resident. I have been dispensing and accepting gold dollar coins since 2000. Thought you would get a laugh out of this one.
Jim Moran
Alanson Car Wash

Jim,
Your letter has convinced me we need a new category for the Darwin Awards for customers who show laudable stupidity in their quest to get a car wash. What do you think, readers? It needs its own catchy name... It also reminds me of a friend who faithfully filled her tires with winter air from the time we were 16 until we were 22, when a new boyfriend finally told her the truth. Interestingly enough -- her married name is now Moran. So maybe "Moran's Morons" for our new section? I don't know... just spit ballin' here. For more Darwin antics, turn to page 84.
Cheers,
Kate
Hi Kate,
I just finished reading “Your Resolutions for the New Year” in the Winter 2016 issue. This is JUST the advice I’ve been looking for!! Although I consider myself a success at running my car wash, I often wonder if I’m on-track and what more I can be doing. This article gave me kudos for what I’m already doing (website, facebook account, gratitude, updated business plan, always looking for ways to improve my carwash, being involved, attending WCA conventions and regional meetings, customer appreciation, being frugal, supporter of Grace for Vets, voting), it gave me some fresh ideas and encouragement to do more. So after reading your article, I enthusiastically invited my boyfriend out for breakfast so we could discuss your article in depth, brainstorm,
and make a plan! Maybe it was the caffeine in the coffee, my love of your article, or the excitement of a delicious breakfast, but it was a productive meeting! I came away with six items. As I'm sitting here now to document our 2016 resolutions, I thought you might appreciate how I plan to incorporate some of your suggestions at my carwash. You are my witness, and this will hold me more accountable...
• Resolve to get involved - Although I already have a charity carwash plan in place, the insurance I have with WCA Insurance (Oregon Mutual) requires that the charity organization have its own liability insurance for the charity event. I have been told that this may cost as much as $500-$1,000 per day. I am calling WCA this week to find out if I can pay this for
the charity organization. Requiring an organization to pay for its own event liability insurance has prevented me, thus far, from holding a charity carwash.
• Resolve to reach out - I will research and contact local real estate agents and 'welcome wagon' services about distributing free $1 wash tokens to new home buyers.
• Resolve to be frugal - I will contact a competing garbage service and get an updated estimate for a 4-yard dumpster. The contract with my local company expires in June 2016. I will contact Kleen-Rite about their dry soap flakes and see if their product matches the quality and chemical alkalinity of my current soap flakes.
• Resolve to give back - I will start earlier to prepare
for the Grace for Vets event in November. I will contact my local town newspaper to advertise my participation in this event, and solicit more involvement from the community. This week, our city is holding a veteran's "Stand Down" event for homeless veterans, and I'm providing $100 in McDonald's gift cards.
• Resolve to be heard - At first my boyfriend suggested adding a Christmas tree on the car wash roof, but as we discussed the effort it would take to haul it onto the roof, dragging it up a 16' ladder, we FORTUNATELY changed our minds. We don't need any ladder accidents! We also thought it might offend or alienate some customers. Then we discussed the "look" we were going for - adding colorful festive holiday cheer! LED Christmas lights are an easy way to add this. I will research if we have electrical outlets already on our roof. I know how much I appreciate festive holiday lights around my town, when the days are short and the nights are dark. Last month, we swapped out metal halide and fluorescent lights for LED lights, and at night the car wash just POPS with a clean, bright light. My electrical bill is already down 50%! And adding LED holiday lights in the winter would be SO MUCH fun!! A few years ago, I came across this quote: "If I'm having FUN, then I'm doing it right!" This became my mantra in running my carwash, and helped guide me in making business decisions. That has been my secret to success! And, I must be doing something right, as sales are up 25%. Your article was a great reminder of this, and will help me make it an even BETTER - FUNNER year! THANK YOU!
Warmest regards,
Kimberly Berg
Citrus Heights Car Wash
Kim, Huzzah! I think if anyone can actually carry out the nearly-impossible, Sisyphus-like task of completing a New Year’s Resolution it has got to be YOU. Thanks for writing in and you may now consider your plan properly witnessed. Perhaps I’ll get it notarized tomorrow -- right after I finish my busy day of living out my New Year’s Resolutions for working out, drinking enough water, eating healthy, reading some non-fiction, organizing my closets, and being more present with my children… Cheers, Kate
Dear Kate:
I sent a note to the Carwash Association of Pennsylvania suggesting a speaker on energy costs at our next meeting. I would also suggest the SSCWN consider an article providing information from an independent source on the cost of energy, both electricity and natural gas, over the next 1-2 years. I have received numerous calls over the last three months trying to get me to lock into an electric rate contract. Some claim electricity in Pennsylvania is going up; others claim it will go down. Some want contracts with cancellation fees based on the price of electricity when you cancel, and on reselling your existing contract. I don't understand this type of cancellation fee, and whom do I believe on the direction of electricity costs?
Thanks,
Joe Wolfinger
Solar Shine Carwash
Joe,
This is a fantastic topic, but one that requires more time than I had to get a full-length article in the magazine. (And I couldn't find anyone with knowledge about those cancellation fees, but I would love to track down an answer for you.) While that remains a priority for this coming year, I thought it might be helpful to share with you some of the information I gleaned while researching and hunting down potential resources for this piece:

I point you first and foremost to the U.S. Energy Information Administration at. EIA posts a monthly review of electricity costs and usage throughout the country, including the charts below, which include data from January 2016 and were published as this issue was going to press. By the time you receive this issue, the EIA should have some updated information and statistics. According to the EIA, growth in residential electricity prices was its highest in six years last year, but is expected to start slowing down. The average retail price per kilowatthour is 12.01 cents, while the commercial average is 9.98 cents. (Pennsylvania is at 13.87 and 9.43, respectively.) EIA forecasts the U.S. average retail price of electricity to the residential sector in April will be 12.6 cents per kilowatthour (kWh). The U.S. residential electricity price averaged 12.7 cents/kWh in 2015 and is expected to average 12.6 cents/kWh and 12.9 cents/kWh in 2016 and 2017, respectively.

All this information doesn't necessarily explain *why* energy rates have been rising for the past six years, especially since demand has not been rising at that pace, and the price of every energy commodity included in the energy index of the Consumer Price Index -- except electricity -- declined in the double digits. So, I'll keep hunting down some answers, and in the meantime, please check out the immense amount of data on the EIA site. Oh -- and if you're interested in tips on becoming more energy efficient (as Mr. Solar Panels himself just might be), the Small Business Association has some generic -- but probably pretty effective -- ideas on their website.
Cheers,
Kate
Don’t Be Put Out to Dry! EDITOR’S NOTE: This interview has been transcribed from its original form as a podcast, directed and hosted by Perry Powell for Wash Ideas. You can find the original version under the “Ideas and Issues” section at.
Perry Powell’s 2014 interview with Dave Dugoff on WashIdeas.com provides a plethora of ideas for getting ahead of legislators and municipalities on drought and water regulations. Perry Powell: Well, good morning and welcome to another edition of Wash Ideas. I’m here with an old friend, David Dugoff, who is the owner of College Park Car Wash in College Park, Maryland. David, welcome to the show. David Dugoff: Thank you. PP: I think I first met you at a show when you were president of the Mid-Atlantic Car Wash Association, where I had come to present. Or at least, that’s where we really got to know each other. DD: I think that’s right. It was at one of the major trade shows. PP: You’ve done quite a bit of service with the associations, though, haven’t you? DD: I have. I’ve been president a number of times and cycling through different roles on and off the board since about 2000. PP: I’ve known you probably a dozen of those 15 years, then. When did you first come into the industry? DD: Well, that goes back to the days of black and white television. PP: You’re not that old. DD: Well, I started that young. My father started putting these white refrigerator boxes in the service bays of gas stations that we had in the mid-1960s. I was a kid -- this was even before I could drive -- but on Saturdays, I was supposed to tend to those machines; clean up the bays and tend to the equipment, as well as wait on customers; pump gas and wipe windshields. But that was how the full service gas stations worked in those days; gas was a lot less than a buck a gallon in those days. It really goes back a long ways. So we had four self serve car washes among our 10 gas stations that my father built. And as my father was building and remodeling, we would put in two bays of self serve -- or in some cases four bays -- and then I tore that station down in 1990 and built another four-bay self serve when we remodeled the entire gas station. It was a raze and rebuild.
And that was really an eye-opening experience. When we re-opened that site in 1990, we were losing about $5,000 a month selling gas and diesel because the margins were so bad, and we were making $5,000 a month on this little car wash. And we had never done anything like that before. It was kind of just like having a big vending machine -- car washing had been good income, but it had never been anything significant. But when we opened that site, it opened our eyes. When we were doing that, my father said, “Don’t get that cheap equipment we’ve been getting for years. Go to a show, learn what’s out there, and get the best that you can find. We know this is a good site, let’s put the best we can find in there.” So that’s what we did. We put in good equipment -- in those days, that meant adding foam brush. It had been around for a while, but it was really new for us. And the carwash did so well -- I mean, we had lines around the property and it was a big property. And we’re thinking, “Wait, we just spent a fortune re-building the gas station and we’re losing money. And then we’ve got this nice little compact car wash that’s fairly simple, and it’s doing great! People love it. What’s wrong with this picture?” So we started asking, “Who needs the gas?” And gas was an environmental disaster back then -- it still is, I guess -- and we spent a lot of money -- I won’t say how much, but hundreds and hundreds of thousands of dollars cleaning up gas station sites -- and for what? And here we had this nice little simple car wash that was going gangbusters. So we started looking at other sites that we had to remodel and rebuild and bring into compliance with the underground storage laws from the mid-80s. We looked at College Park, and because of the very high traffic flow on Route 1 in College Park, it really had become a poor gas station site. Lines just kept kind of dwindling and dwindling. It was on a downhill tilt. And he and I kind of looked at it, and said, let’s just put as many bays on here as we can fit. We can lay it out nice and build a nice looking building, with a nice structure. Make it look solid and inviting. And not put any gas there. This was really revolutionary for us -- we had been selling gas since 1929. My grandfather had started it. So this was a major paradigm shift. And we also reached the point of saying being in the gas industry is not really as much fun as it used to be. We would generally make money, but we never knew why. We couldn’t control it. Nothing we did was really helpful. We were just buffeted by the markets and the margins -- sometimes we had a margin, sometimes we didn’t. And we said, “I don’t want to do this forever.” He was ready to start taking it easy, and I just didn’t want to do that for the rest of my life.
I was in my 40s then, I guess. So we put the gas stations up for sale. It took five years to consummate the sale; to find a buyer who would give us the price we were looking for -- because we had no debt and we were in no hurry. But that was the decision we made. And during that time, I continued to remodel the gas stations and I initiated the planning stages for what would become College Park Car Wash. And that was two years of fighting a zoning battle to build a car wash where we already had one. We had a gas station and a two-bay carwash that we built in 1968 and they fought us on that. They being the city. They were very obnoxious. And it was painful. I can’t tell you how many public hearings it was -- but they did not want a carwash there. They did not want it. But we did. And we persevered. So, then when we sold the gas station business, we
wrapped up the affairs of the oil company and then went right into construction of the carwash that next Spring. We opened in February of ‘97. So that’s how I got to be a single site operator. And that’s really what I wanted. I had spent my whole life running around ten gas stations trying to keep my hands under the sieve to keep the sand from leaking out -- which you can’t do. It’s just an impossible task. I envy the guys who are able to do that; who are able to run multiple locations of any type of business at a high level. I just couldn’t do it. But I thought if I could run one site and devote all my efforts to running one site at a high level, then I bet I could make a decent living at that. And I did. And having my attention focused, I think, makes a lot of difference. It’s what I do every day. That’s where I go to work. That’s my main gig. PP: You mentioned the foaming brush -- in an interview with Tom Brister I asked him what he thought the most significant advancement in carwashing had been in his lifetime. And he said the foaming brush. DD: Oh, yeah. I think so. PP: And he told a story about the foaming brushes -- sort of comparing it to the volcano and lava arches that we have today. He said, you’d go to the first show and there were maybe three there, and then you’d go to the next show and there’s 50 people selling foaming brushes. But until then, all you were selling was soap and water -- you weren’t selling time. And that really made a significant difference. Somewhere in all of this, you found time to become an attorney. I read somewhere that you’re a “recovering attorney.” So I guess my question is, is there a 12 step program for that? (Laughter.) DD: For the recovery part? I don’t know. Haha. My father tricked me into going to law school. I had utterly no interest. I won’t tell that story, but suffice it to say, he tricked me.
I went to George Washington Law School at night while I was still running the gas stations and eventually I did pass the bar and I practiced outside for a year and then came back into the business. That just made a lot more sense for me. So I have done things as a lawyer, but it’s not what I do every day. I don’t think I was particularly good at it, necessarily. I think the education has tremendous value for being able to analyze issues and problems and to think about things not only from one’s own perspective -- but from the other guy’s perspective. I think you can make a much more effective argument if you understand that the other guy’s point of view has some value to you also. I mean, at least to him it does. There’s a lot of ranting and raving going on these days, particularly in Washington politics, where I think there’s a lack of ability to give credence to the fact that the other guy’s position has value, that it might be logical. That it’s not just there to be an irritant. And I think that the training from law school has been
very valuable. PP: It seems that a lot of people today have sort of a scorched earth policy. That it’s winner take all or nothing. DD: Yeah, and it’s so stupid. In my opinion. It never works out that way. PP: You know, you’re in Maryland and I remember doing a variance in Maryland where literally I was in front of the board for three and a half hours -- it was the most grueling time I’ve ever been through -- and I countered on each sign; there were 13 signs going up on the carwash property and they wanted to sort of eliminate a whole bunch of them. At the end of the deal, we had effectively gotten them to agree that we needed all 13 signs. But one of the board members looked at me and said, “You have to give me something.” So we took lights, and we took lights out of two of the signs so they were non-illuminated. The owner felt like we had won the war -- and I did, too. But I think if you feel like you have to win on every point, then you won’t be winning any wars.
need to be in business, but that everyone needs to live and to have water to brush your teeth. We can be very shortsighted -- you know, “I don’t want to pay those taxes!” -- well, okay, you don’t have to, but someday the bill is going to come due. And that’s exactly what they saw in Charlottesville. PP: Now you served on both Maryland and Virginia’s Governor’s Drought Commissions.
PP: Well maybe come to Texas then. (Laughter) DD: Really?
PP: What was the most eye-opening thing that you learned in that exercise?
PP: I think it could work. We have Code Enforcement that drives through our neighborhood from time to time.
DD: The most interesting thing is that when you’re talking with regulators, when the crisis is not upon you, they can be very reasonable and understanding -- and they “get it.” And when you wait until you’re in a crisis, then they have knee-jerk reactions just like everybody else. And they’re under political pressure, you know, the sort of, “How could you let those water wasting carwashes stay open when we have to take two-minute showers?!”
DD: Well, that’s interesting. I heard a story from an operator in Maryland -- a great operator -- and across the street from him is a fast food restaurant. And the fast food restaurant hosts a charity wash every weekend. And the effluent from the charity wash is going out the driveway, into the street, down the storm drain, directly into the river. And he’s sort of at his wit’s end. He’s tried to talk to the owner to explain this isn’t good for the environment. He can’t really get anywhere. I thought what he should do is invite the Boy Scouts to set up a grill at his car wash and sell some hamburgers and hot dogs for $1 as a charity event to compete with the guy across the street selling hot dogs and hamburgers -- but there was a restriction in his lease that he couldn’t do that. Otherwise, what’s good for the goose should be good for the gander. And there was a Code Enforcement officer that paid him a visit not too long ago and he said, “You know, Officer, you’ve got to be kidding. Do you know what I do for the Chesapeake Bay Foundation and the Wash the Bay fundraiser?” He’s involved in a number of programs -- ICA WaterSavers and the charity events he does during the year. And the guy across the street is shooting effluent down the storm drain. And the Code Enforcement Officer says, “Yeah, but that’s for the Boy Scouts.” And he’s asking okay, well, if I wash for the Boy Scouts can I send all my water down the storm drain too? And the Code Enforcement Officer was a little flustered by that, and I think, left, but …
PP: It’s panic.
PP: So, you’ve been very instrumental in your area -- not only have you been a very successful operator and been very involved with the Association and supporting it with your efforts -- but you’ve also done a lot of work in the water area in terms of drought restrictions and conservation, as well.
DD: Yes, panic. They feel that they’re in that panic. And it’s very hard for them -- they’re under the political pressure from the governor for solutions. They have to have public relations solutions to show that they’re dealing with this problem. And they don’t really care who they hurt when they’re throwing their weight around. But when it’s calm, you can talk to them. And you can explain what you do and what you can’t do. And one of the interesting things that we talked about with the regulators was that during the drought we encouraged operators to reduce their tip sizes on their nozzles and in the arches -- use a little less water. And we found that we could do that and we could still get a clean wash and we could use a little less water, but then when the drought is over, there’s no reason to go back to using a bigger tip. So when you have another event two years or five years later, you can’t ratchet it down below that. I mean, you can make the hamburger a little smaller -- but you can’t take it off the bun and still call it a hamburger.
DD: I think there is; we have to seize that special time to explain a lot of stuff and it’s something that we do try to do. But you have to realize: You’re never going to get a municipality to send out a police force to write out tickets for someone who’s washing in the driveway. It’s just not going to happen.
DD: Yes.
DD: Exactly.
DD: We have. MCA has been very active. At first we were the victim -- we were sort of shot in the back when we had a drought in 2000. The governor closed the carwashes precipitously and then relented and opened them again -- if we closed on Tuesdays. So we survived that, but it created the need for the MCA to be active politically and to make ourselves known. I don’t think we’re big contributors politically, necessarily, but our faces are known in Annapolis among the lawmakers that make these decisions. We had quite a time. And then two years later there was a very bad drought in Virginia, and one particular county -- it includes Charlottesville, the home of Jefferson -- shut down all the carwashes on one day’s notice and it was a total surprise to those guys. I think it was 13 car washes. A number of them got through that summer by going out and hauling in water on trucks and just leaving the trailer on the property and feeding in the water that way. They just wanted to be able to pay their employees and their mortgages. So, we bought time on the radio and we went through everything and the water authority -- it finally dawned on them, after a month or so -- that: A.) We don’t use much water to start with, B.) Closing us down wasn’t having any positive effect on the drought and C.) All they were doing was hurting legitimate businesses that had a right to be in business. But they couldn’t do anything because they had painted themselves into a corner, so to speak. So as soon as it rained in September, the first rain, they reopened the car washes. And we haven’t had much trouble there since. Charlottesville is a really good example of very poor long term planning. The reservoirs were built in the 1960s for the population that was there and predicted to be there -- but the population grew exponentially faster than anyone expected and there was no water capacity. So you have any kind of dip in rainfall and their reservoir is dry.
They’ve been dredging out the reservoir over the last year or so and working on it -- which they had not done; had not spent any money in 40 years! So it’s people not taking care of the infrastructure that not just we
sweep a new class of customers into our facilities?
PP: And the presumption there is that everybody went back to their prior usage before the drought restrictions. DD: Yes, but I didn’t. And I found, “Oh, these work fine.” And I’m still using the economy nozzle set that my supplier worked out for my in-bay automatics. And I have to ask for it specially. “Oh, we have to look that up.” (Laughter) PP: And so it looks like maybe you found some cost savings in the middle of the drought. DD: Well, maybe. I mean, it’s not very significant. But, a little. PP: You wrote an article for Professional Carwashing & Detailing back in 2010 and I’m going to post a link to that article for any listeners who want to read that -- it was very interesting. There were a couple points that you raised in that article that I found interesting. The first: The issue of driveway washing. And I’m just curious, having read that and sort of triggered this thought in my head -- is there an opportunity to use a situation like this through media or what not, to talk directly to the consumer and talk about the usage differential between what happens in the driveway and what happens in a professional carwash at a time when homeowners are aware that they’re going to be in trouble, when they’re restricted already, isn’t there an opportunity to jump in there and use the current environmental conditions to
PP: I don’t know if you know Vic Odermat, but we interviewed him a while back. He’s in the Seattle area with Brown Bear Car Washes, and he had a really large problem with this very early in his career and he formed a group and they actually were very successful in getting restrictions passed over the storm drain issue. They hit it two-pronged: education -- they went into the schools and continue to do that -- and then legislation. I think carwashers, I think sometimes we are just not paying attention to things until they are on our plate. DD: Yes, that’s true. The Brown Bear Study is a terrific piece of work. I take it everywhere I go, to various groups. And we explain to them that effluent will kill fish and this is not good. There’s a very interesting political situation that’s evolved in the last five years starting with the EPA imposing effluent restrictions on the storm sewer. So every municipality that runs a wastewater treatment plant has to decrease the amount of phosphorus and nitrogen that comes out as an end product of that plant and they also have to reduce the nitrogen and phosphorus that’s coming out of the storm sewer. That, in and of itself, has created a tremendous awareness. In Maryland we’re spending billions of dollars -- and we’re a small state with 23 counties, maybe 30 or so wastewater treatment plants -- and we’re updating all of these facilities to the tune of billions of dollars. And that’s being passed on in your water bill to consumers.
That’s an enormous amount of money, so there is an attention being paid now to water quality in the storm sewer as a dollar issue for the municipality that there was not before. Now, it’s been in the Clean Water Act since the mid-80s. It’s just never been enforced. And now it’s being enforced and the money is being spent; plants are being updated. I think that creates an opportunity. We need to talk to them where we live -- in your state and your local water boards. PP: You brought up in that article a few other things that were really interesting to me. At one point you quote a statistic that carwashes use “one-half of one percent of a municipality’s water demand” and at another point you said that carwashers have to distinguish themselves -- or have the burden of -- and I’m putting words in your mouth now -- but they have the burden of proving themselves to be an essential service. You make the point -- you didn’t use the word -- but you made the point that what counts as essential is a subjective thing. For instance, restaurants, which use ten times the amount of water as a carwash, are considered essential because people are putting food in their mouth and that’s essential. How have you been able in your experience to deal with that term “essential services” and sweep carwashes under that bar? DD: Well, not terribly successfully actually. We make that point that a white linen restaurant -- if you think about it, for just a moment. There are only two or three essential services for a community: Grocery store, pharmacy, and fuel. Then after that, what can you get along without if it were closed for a few days? You don’t have
to go to a fine dining restaurant. That’s not essential to you. You can get food at a grocery store and cook it. And yet, when the mayor of the town owns the white linen restaurant (and we had that happen in Fredericksburg, Maryland, at the time of the drought) -- she thinks that is essential and the car wash is not. It’s hard to get legislators to realize that that distinction is just not true. It may be efficient at the time so that they can label a bunch of businesses and say, “We’re going to close down non-essential businesses.” But it’s not true. Now, McDonald’s and the fast food restaurants employ a lot of people. Well, full serve carwashes also employ a lot of people. And when you close down a carwash, people lose their income. There are the same effects. The guy who owns the McDonald’s has a mortgage to pay. And the guy who owns the carwash also has a mortgage to pay. And utilities. And so on, and all these bills that just don’t stop because there’s a drought on. So creating that awareness, we say it -- but I’ve never been able to get a legislator to say it back to me. There’s a nodding acknowledgement, but I have not had success in selling that. Doesn’t mean that we don’t say it -- and it’s certainly part of the things that we say. I still think there’s a legal argument to be made that under the Equal Protection Clause that carwashes need to be treated the same as any other business and at a time of scarcity a municipality may have to restrict and close things, and if they treat all businesses the same, well then, okay. To me, that’s fair. What’s unfair is to single out one or two industries -- landscaping, golf courses, whatever -- and car washes because they’re *perceived* as water
wasters and only closing them, or closing them first. I think the pitch we have made is: Treat us like any other commercial business. Any other commercial customer of the water authority. PP: And certainly we’ve had these experiences in parts of Texas this year, and the reactions and the arguments from the politicians always seem to be the same. One of the things you mentioned in the article that was actually borne out in a case in Texas, was that unifying together as a group was more important. You used the example of a bottling plant that employed the same number of people as the 17 carwashes in the area, and they wanted to protect those jobs -- and how important it is that you be seen as a block of employers in that area and not just a standalone guy who’s trying to save his own skin. DD: Yes, that’s right. PP: As you began to unify with other operators -- you talk about the need for a spokesman and the need to be rational. Well, carwash operators who are fighting to pay the mortgage seem to be...well, less rational. How did you find it was working with people who were ostensibly competitors and getting a common objective going? DD: Well, when they’re really threatened they can do it. I didn’t find it was that we were competitors that was the problem, but there were definitely some operators who could perceive the greater good of working together, and then some operators who would not under any circumstances -- they’d rather die than get a glass of water from the guy across the street. I mean, that’s the truth of human nature, I guess.
But if you get enough of the operators who are upstanding citizens and are known to politicians -- I remember, we went into a meeting in Fredericksburg with four operators; good operators, good citizens. And outside, before we went in, they said, “We have to protect ourselves. We have to do this, we have to do that.” And they immediately folded. They were immediately ready to give the mayor anything that she wanted. They were ready to comply: “We can do anything you want.” I said, “That’s fine. But on behalf of the Association and the total membership, I’m not going to agree to that. This isn’t fair, this isn’t fair, this isn’t fair and this isn’t fair. And you really shouldn’t do this. This isn’t going to be effective in helping your drought situation, first of all.” And then I laid out all of the points that we had already made again. So she looked at me and said, “Who are you and what are you doing here?” She was not happy to have me in the room. But then she had a staff person -- not sure if he was an adviser or a utility staff person or whatever -- but he looked at her and said, “He’s right.” And she was angry, “What do you mean?” But the points I made were true: We don’t use a lot of water. We never have. So they did not close the carwashes in the city of Fredericksburg. They were on the verge of it, and if those guys had gone in without someone from the outside -- someone who was not accountable to that mayor -- it would have been a different story. They would have been closed. PP: I find the same thing in dealing with signs because the local sign guy who’s standing before a board has to ultimately come back to that same authority time and time again to earn a living in that particular city. So, you know,
when I come in -- I’m coming in with the attitude that I might never be there again. You’re able to say things that the guys who are beholden to that body might not be able to say. You’re right about that. Switching gears: If you could give a piece of advice to a guy who is not facing any sort of perceivable water restriction at the moment about how he should or can be prepared for what may come by surprise at any time and he suddenly finds himself in this situation -- how can he prepare up front? What sort of advice would you give him? DD: One thing he can do is invite the mayor or the city council members over and show them your reclaim system, show them your spot free system, show them how small these tips are -- and tell them, this only uses three gallons a minute at high pressure, but most of the time we’re using low pressure functions that are in pints per minute or quarts per minute. And show them that what might look like a lot of water, isn’t really. You can convey that much more effectively when there’s no panic. I show them my second reclaim system. Self serve equipment -- that’s a mature industry. They know what they’re doing. But reclaim; that’s constantly evolving. I’m actually on my third reclaim system. You can show what you’re doing -- make yourself known as a good corporate citizen in their community. If you don’t live in that particular jurisdiction -- as I don’t in College Park -- and you’re not a voter, well then, they don’t need you. They don’t care. Most members of the town council -- especially in smaller areas -- are not in business, have never been in business, and don’t understand what it’s like to
be in business. They may have political aspirations and they may be smart -- but their goals are very different than ours and their world view is very different than ours. So, have them over when it’s not intense. Educate them then. The mayor of College Park is a young fella in his 30s. He’s an environmental activist. Very smart. Very, very smart. And I had him over to walk around. He wouldn’t wash his car -- and it really needed it -- but he doesn’t care about that. I showed him the reclaim system and I explained how I use it efficiently and effectively at the car wash. Now, it hasn’t gotten me anything other than we’re on a first name basis now. But I think if they had a problem with me, they’d give me a phone call. PP: Yes, that kind of thing might just keep your name off the table if the time comes. I think there’s a critical moment that you and everybody I’ve talked to who’s been through this has gone through -- this one critical moment where, as you described, they have a knee jerk reaction and say, “Car wash. Cut ‘em off.” But if you can sort of plant all of the seeds up front that you’re not a water waster, then you might be winning the war. Well, this has been very interesting -- and we haven’t even taken this conversation as far as it can go -- but maybe we can count on you to come back and continue it? DD: Yes, yes. Sure. Of course. PP: Well, Dave, thank you so much for being here and for what you’ve done for the industry.
INNOVATIONS
BRIGHT NEW IDEAS, PRODUCTS & SERVICES FOR SELF SERVE CARWASHES
From GinSan: GS-407 Large LED Display Timer Ideal replacement for the wall mounted GS-31 display timer for more in-bay visibility. • Input Power: 24v • Large scrolling 2” x 9” LED Screen. • Premium product inputs. (adjusts time when service is selected). • Programmable greeting. • Total Revenue accumulation report (displayed on timer). • Four separate revenue inputs reports (displayed on timer).
• Timer Cycle: 0-99 minutes. • Pulse Accumulation Maximum: 255 • Quick Restart Time. • Start Up Delay. • Event Cycles. • Free Wash Cycles. • Bonus Time Features. • Language: English, German, Spanish, French. • 2 year warranty.
From Schaedler Yesco: 71W LED Spaulding Wallpack • Rugged cast aluminum construction. • Flat glass is tempered, impact resistant and allows no up-light. • 12.4w LED unit produces 802 lumens • Five standard polyester powder finishes, protects housing and provides lasting appearance. • CSA certified to UL1598 for use in wet locations.
From American Changer: AC8000 Paystation w/ CoinCo Validator Need to replace your old entry unit? Does your machine have boards that can’t be replaced? American Changer’s AC8000 with CoinCo Validator and optional CryptoPay upgrade* is the most economical 24 hour car wash entry system for an automatic car wash! Are you considering upgrading or adding a credit card system to an aging Hamilton ACW3 or 4? Keep in mind that the main electronic control boards in those units may not be able to be repaired or replaced if they malfunction! Replacing a Hamilton ACW 3 or 4 that is securely “bricked in” can result in some expensive masonry work! The American Changer Paystation fits easily inside of the Hamilton shell, which eliminates removing the old cabinet! The credit card processing (CryptoPay optional) and Token/Code marketing system are included in the Paystation! An optional “beauty ring” finishes the installation! The mounting and electrical access openings inside the American Changer are in the same relative position, which also simplifies the job!
From KIC Team: Credit Card Reader Cleaner
From Simoniz: All Season Tire Shine 5 Gallon Simoniz introduces the first of its kind to the Car Wash Industry: ALL SEASON TIRE SHINE, a safe, non-solvent, water-based anti-freeze dressing. This innovative dressing provides the best of both worlds. For the Consumer: a non-greasy, excellent high-shine dressing with that natural-like, new, clean, slick finish for all tires. All Season Tire Shine revitalizes tires and highlights a freshly cleaned vehicle. For the Car Wash Owner: a non-freezing product for your SS bays that will stay operating and produce profits throughout the winter months. It’s safe on all rubber and plastics and will not harm painted surfaces, which means no damage claims. Easily rinsed and cleaned with just water, preventing slippery or oily floors. Another first by SIMONIZ. Perfect for Simoniz Self Serve Tire Shine Systems.
From Crown Equipment: Tire Shine/Air Machine Combo • Coin-operated • For Self Serve/Full Serve/Touchless/Convenience Stores • Two (2) revenue producers: sell “Tire Shine” and “Air” • Designed to hold a 5 gallon pail • Can also be used to dispense Wheel Cleaner at the entrance to a Touchless • Sensortron/lighted dome standard • Digital display optional • Holds 5 gal pail of tire shine.
Why use Cleaning Cards? • Prolong equipment life and reduce capital expenditures • Reduce maintenance fees • Reduce equipment downtime • Reduce fees from erroneous transactions • Increase customer loyalty with fast efficient transactions • SPRING 2016 •
19
Association News FROM THE PUGET SOUND CAR WASH ASSOCIATION: Friends old and new gathered for the Car Wash Solutions Conference held by the PSCWA on March 29 at the Muckleshoot Casino. Longtime members, PSCWA leadership, and our loyal vendor members enjoyed a day of information sharing and socializing. We were joined by representatives of the Western Carwash Association and International Carwash Association - Kristy Babb and Claire Moore, respectively. Operators new to our association also traveled to Auburn to partake of the conference. We got to know representatives from Bush Car Washes in the Tri-Cities, The Wash Station, LLC in Wenatchee, and GTO Carwash in Yakima. The Car Wash Solutions Conference was made possible through the support of our conference sponsors - the Western Carwash Association, Diamond Shine, Inc., McNeil & Co., Northwest Wash Systems, LLC, Zep Vehicle Care, and DRB Systems, as well as Lustra Professional Car Care Products and Charter Industrial Supply LLC. But,
all of our vendor members turned out. There were many displays of products and services on hand, and a lot of networking got done during the conference. Roundtable topics were wide-ranging and covered useful information. A low-cost yet effective motion-activated audio system was described by security expert Mike Canaan of Trident Investigative Service, Inc. Brooke Anderson of Bank of America and Rich Hays of DRB Systems fielded operators’ questions about the choice between increased liability and purchasing equipment with greater security for credit card transactions. Tips for recruiting the right employees from People Values President Grant Robinson included offering existing employees an incentive to refer potential employees and approaching school coaches and teachers, versus guidance counselors, for referrals, since they’re in a better position to know their students. Glenn Potter of Inland Insurance discussed managing claims, Tammie Hetrick of Retail Association Services made a presentation regarding Workers Comp, and Heath Pomerantz of Diamond Shine led lively self-serve roundtable discussions.
After an excellent buffet of barbecue served by the Casino staff, Kristy Babb talked about the government affairs efforts of the Western Carwash Association and how such techniques might be used to influence legislation in Washington State. The question was asked: Would PSCWA members support becoming involved with government affairs? Perhaps revisiting the possibility of working towards elimination of the sales tax on self-serve car wash revenues? The PSCWA leadership made a commitment to discuss this and make a proposal to the membership about influencing issues, beginning with the 2017 legislative session. The conference ended with drawings for some great door prizes: gift certificates, a basket with a couple of bottles of excellent wine, and a much sought-after drill, all provided by our conference sponsors. The grand prize, provided by the International Carwash Association, was an all-access pass for their convention in Nashville on May 9-11. It was claimed by Bryant Souriyavongxay of The Wave Carwash, LLC. Thanks to everyone that worked on and attend{continued }
ETOWAH VALLEY EQUIPMENT - NEW PRODUCTS!
PUSHBUTTON METER BOX: 11 Buttons, QP Flex Timer, X-20 Coin Mech, Custom Decal. $1795. New for 2015.
TIRE SHINE: Starting at $395 per Bay. 2015.
REPLACEMENT DOORS: For your existing meter shells. FREE Custom Decal with Every Door. Most doors shipped in 1 week or less! UPGRADE YOUR WASH NOW. New for 2015.
FOAMY TIRE BRUSH.
GENESIS III: Updated Controller for American Changers.
888 920 2646 • Etowah Valley Equipment, 47 Etowah Center Dr, Etowah, NC 28729 • sales@etowahmfg.com
888 920 2646 - CREDIT CARD SYSTEMS - Compare before you buy!

Feature               | QuickPay         | Brand XYZ
EMV Ready             | YES              | ?
NFC Capable           | YES              | ?
Marketing Features    | Yes (Multiple)   | ?
Set by Remote         | YES              | NO
Voice Available       | Yes (PRO Series) | NO
MDB Capable           | YES              | NO
Accept Credit & Debit | YES              | ?
Bonus Features        | YES              | NO
Upgradeable           | YES (Chips)      | ?
Value Priced          | YES              | NO
Coin & Bill Counters  | YES              | NO
Discount Cards        | YES              | NO
Established Company   | YES (30 Years)   | ?
Free Factory Support  | YES              | ?
Multiple Models       | YES (4)          | NO

Why buy “just” a credit card acceptance system? THE CHOICE IS CLEAR! QuickPay Marketing System: WE designed it, WE manufacture it and WE service it! Give your customers the options they want!
ed the 2016 Car Wash Solutions Conference!
FROM THE AUSTRALIAN CAR WASH ASSOCIATION PRESIDENT DARREN BROWN: We have had our share of both drought and rain this summer: bushfire in the West Coast World Heritage Area, where we would normally expect rain, and torrential downpours on our sunny East Coast. Yes, I am writing of Tasmania, but it seems these occurrences were mirrored on the Big Island also. The season has turned down here in Hobart, 8 degrees this morning with a distinct chill in the air, and we live in hope that the rain will come -- for without good rains in the highlands it could be lights out this winter. If you haven’t heard our news, the Basslink undersea power cable has failed and our power generating dams are at dangerously low levels. What has all this got to do with the carwash industry? Everything. When we are faced with extreme conditions, such as drought or (in Tasmania’s case) overzealous public servants trading a precious resource for profit (the powers that be ran the turbines flat out before Prime Minister Tony Abbott repealed the carbon tax -- selling this power to the national grid {continued }
REMOVABLE PLUG LOCKS
REMOVABLE PLUG LOCK & PUCK LOCK KITS
PUCK LOCKS & DISK LOCKS
T-HANDLES & T-HANDLE LOCKS
RETRO FIT SECURITY KITS
All the Security You’ll Ever Need!
ASK ABOUT OUR FULL RANGE OF LOCKS AND LATCHES FOR AMUSEMENT & VENDING EQUIPMENT, GAS DISPENSERS & SELF STORAGE.
800-422-2866
9168 Stellar Court, Corona, CA 92883 • LAIGROUP.COM • SALES@LAIGROUP.COM
PH: 951-277-5180 • FAX: 951-277-5170
Everything for the Car Care Industry!
Patented and proven formulas in a spray or wipes: cleans and protects surfaces to look like new, from the car care name you trust.
We carry a full line of ArmorAll products. Distributor of car wash parts, supplies and equipment for over 45 years. 6800 Foxridge Drive, Mission, KS 66202 • Phone: (800) 443-0676 • Fax: (913) 789-9393
for an inflated return) the carwash industry is at the mercy of the public servants who make the decisions that affect our businesses and lives. This is why we need education, understanding and advocacy at every level of government. Without these efforts, decisions will be made with little thought of the consequences to the carwash owner. In addition to the myriad of individual benefits that come with membership, this (in my opinion) is why we need a respected, well-resourced organization to represent our industry. We can only achieve this through strong membership. I look forward to catching up with as many of you as I can in Melbourne at the ACWA Event in August.
FROM THE SCWA: 2016 was a record-breaking year for the SCWA Convention & Car Wash EXPO. A record number of car wash owners and vendors recently gathered at the Arlington Convention Center for the annual Southwest Car Wash Association event - the First Big Car Wash Show of the Year. BETTER - STRONGER - TOGETHER, the 2016 edition of the Southwest Car Wash Association Convention & EXPO, included three strong days highlighting premier education, industry innovations, business solutions, operational strategy
and unparalleled hospitality. The event included more than 60,000 square feet, in two EXPO areas, displaying car wash “state of the art”. “The EXPO, the networking and the wonderful energy with all the successful car wash operators is really a great way to start the year,” according to Ian Heritch, Mr. Sparkle Car Wash, San Antonio, Texas. The Convention was highlighted by Captain Richard Phillips’ keynote address. Phillips recounted his experiences of being held captive by Somali pirates and how he realized we all have great inner strengths during times of stress. Good lessons for all business owners. The popular CEO Forum featured nationally recognized Billy Riggs, who highlighted how car wash owners can be more competitive by creating unique customer experiences. Incoming SCWA President David Swenson, owner of the Arbor Car Washes in Austin, Texas, said, “As we brought the car wash, lube and detailing community together from around the Southwest, the SCWA 2016 Convention & EXPO focused on how we can be ‘better - stronger - together’ in 2016. This was the biggest and best SCWA Convention & EXPO - drawing the largest attendance ever. We will continue to build on the successes and continue to raise the bar even higher. As a member-driv{continued }
2016 Association Calendar of Events. Submissions can be made to Editor Kate Carr at katec@sscwn.com
MAY 9-11
The Car Wash Show 2016, International Carwash Association Nashville, TN
JUNE 14-16 UNITI expo
Stuttgart, Germany
JULY 12-13 WCA Road Show Boise, Idaho
AUGUST 9
ACWA Table Top Show Bulleen, Victoria, Australia
SEPTEMBER 13-17 Automechanika Frankfurt, Germany
SEPTEMBER 19-21
Northeast Regional Carwash Convention Atlantic City, NJ
NOVEMBER 16 Learn More, Earn More by Kleen-Rite Corp. Columbia, PA
Revitalize your Wash to Increase Profits and Improve Customer Satisfaction!
From the FasTrak Touch-Free In Bay Automatic, U WASH IT Compact Self Serve, to the In Bay Tunnel, Coleman Hanna offers the best solutions for transforming your wash into a profit-making machine. 5842 W 34th St • Houston, TX 77092 1.800.999.9878 1.713.683.9878 Find us on Facebook: /ColemanHannaCarwash
en organization, SCWA wants to be on the leading edge of what is happening in the car wash industry and to provide our members with the most relevant information and resources available. “I am really excited to be a part of this dynamic organization.” Swenson was elected 2016-2017 SCWA President during convention proceedings and joined more than 1600 car wash owners and exhibitors attending the SCWA 2016 Convention & EXPO. The Annual Awards presentations honored the 2016 SCWA Car Wash of the Year, Cypress Station Car Wash & Lube, Houston, Texas. Cypress Station is owned by Ahmed Jafferally. The Southwest Car Wash Association includes
more than 1300 members and over 6000 locations throughout the Southwest committed to supporting car wash owners and promoting the professionalism of the car wash industry. For more information and pictures of the 2016 Convention & EXPO visit.
FROM THE WCA: Over 100 people flocked to San Diego for WCA’s San Diego Roadshow, Education Series, and CarWash College presented by Sonny’s. Attendees of CarWash College came from across the state and even outside of it for the two day management seminar. Our education series consisted of an engaging keynote from a Yelp executive, and a greatly informative presentation from the CEO of the International Carwash Association, Eric Wulf. The Roadshow took guests across the San Diego region for a tour of local car washes, and a lunch at a fish house on the harbor. Thank you to all of our attendees, tour stops, and sponsors for making this event possible. Save the date for our next Roadshow in Boise, July 12-13!
Etowah Valley Equipment
Arimitsu 516: Upgrade From All Brands. Self Serve and Single Gun Prep.
PN-61002 Left (as pictured) • PN-61003 Right
TRICKS OF THE TRADE
Presenting some of the best discussions of the self serve industry’s headaches and solutions from ACF. You can find more discussions like these on AutoCareForum.com.
Hand held Dryers
Earl Weiss: Sound off please. Experience with installing hand held dryers: the Good, Bad and Ugly. Mfgrs, features, ROI, maintenance, other plusses/minuses.
Chaz: I was on the fence on the air dryers. What sold me was the 30 day money back offer from GinSan. I installed one unit with an “hour” meter so I could see firsthand the usage. A reverse vac that I don’t have to clean out and makes great profit! I offer C/C with count up but also see cash customers buying more time. As long as the meter is running I don’t care how long someone stays in the bay.
CarWashBoy: All Good, No Ugly.. If you don’t have them, that is why you need them.. Ours are used constantly.. Attic mounted.. Wall mounts work well also, but are noisier. Doug
BBE: We added them about 2 years ago. Attic mount. They are very low maintenance. I have replaced 2 nozzles I believe. But one nozzle a year isn’t bad for how much they get used. Definitely would be the next thing I would add if a wash already had cc acceptance in the bays.
cfcw: Just bought a new car that I am babying and suddenly realized blowers would be great. Any specific brand recommendations?
DiamondWash: I just used an Air Shammee at one location and a J.E. Adams Turbo Towel at another location. I like the gun on the Turbo Towel versus the plastic gun on the Air Shammee.
Robert2181: I have 5 bays. Looking to upgrade. Only take coins. Do you upgrade to cc, blowers and/or bill changers? I would have to also put in new (flush mount) bay panels. Thanks
Earl Weiss: Demographics will affect relative success of certain things. FWIW, instead of committing to do the entire place and spend $ without knowing the return, I first added bill acceptors to half the bays (some people experience vandalism which makes this a no go). The return/experience justified me doing the other half. Same with CC acceptors and dualers. If I do the hand held dryers I may try a couple of bays then do the rest.
CarWashBoy: Hi.. Just my opinion..
I would step it up and go Credit/Debit and Bills in bays.. NO COINS.. Trash your quarter changer for a Bill Breaker.. and the same with the Vacuums: Bills and Cards, NO COINS. You gotta spend it to make it.. I have done what I just mentioned and it works.. I just remodeled a 6 Bay and 8 Vac Wash, with EVERYTHING Top of the Line.. Air Dryers etc, and it did 188,000 last year after remodel.. this year heading for 230,000+.. I know that is not great revenue, but it’s working so far, and simple to manage.... I thought about adding an AutoMatic, but I have had them in the past, and they can be a Pain in the you know what.. Just my 3 cents. Happy washing, Doug.
MEP001: I’d recommend that as well. I remember someone who was about to pull the plug on an IBA install because the city wanted him to install a new 400 amp panel (at a cost of over $10,000) since the auto and the dryers would have overloaded the old one. When I pointed out to the city engineer that both would never be on together, he said it was okay as it was.
slash007: I put in six Air Shammees a couple of years ago and it was the best addition I made. People love them and they get used often. Once I added count up credit card a few months later, usage went up much further. People are way more inclined to dry the car when they don’t have to add more $ first.
rph9168: I have seen people spend 15 minutes in a bay drying a large SUV. It has taken a while for these to catch on but I think they have become a good addition for a self service wash. If you are unsure I would do what Earl suggests: put them in a couple of bays to see how it goes for you.
mel(NC): Doug, what percentage increase in sales did you see after the remodel versus before? What do you attribute the 26% increase this year to?
Earl Weiss: So far: Turbo Towel, Air Shammee and GinSan reports. No feedback on Blasto Dry? Do you guys set up a separate breaker panel, one breaker for each unit for these dryers?
BBE: Not sure how this pans out if you have the 110v, but we have the 208v Air Shammees, and we use the same breaker for each Air Shammee as the HP pump for that bay, since there is no chance of both being on at the same time. I believe this is what most do.
mjwalsh: Earl ... not sure on how your existing in bay electrical is & whether this could apply to your setup ...
but when we updated from our vintage 1987, lower-power, 3-smaller-motor Specialty-Doyle Blo Drys, we realized that amperage was going to be an issue and tried to keep the havoc to our existing electrical panels & new aluminum conduit & new boxes to the bays to a minimum. What saved us was that we tapped into some of our existing 110VAC in the bays that was no longer needed because of the new low voltage G&G 24VDC LED lighting. In our case we used Square D (2 in the space of 1) breakers but still had to run one additional aluminum box with only one short additional run of aluminum conduit out to & within our closest bay. We were advised that we could also use time delay relays but we went with the double breakers to replace the existing single breakers & a bit more wiring instead. A time delay relay would make sure both of the higher amp single phase motors don’t start at the same moment causing a code violation ... unless 10 gauge wire is run with 30 amp breakers etc. As many of us know from experience, a good electrical person will balance or tweak the loads in the panels to match the utility drop’s legs of power as close as possible whenever significant changes are made.
mjwalsh: It might be just me ... but I see at least a minor trade-off with sharing breakers in terms of not being able to replace wiring &/or a motor or whatever as quickly, & knowing that the breaker could be needed by another piece of equipment. There are other disconnect-for-safety workarounds; but it seems like it is something for Earl & his maintenance staff to consider.
MEP001: When you use the same breaker for the bay pump and the dryer, you’d have to shut the bay down to service either one. No rocket science there, just institute a basic safety protocol.
mjwalsh: At an unattended self service bay ... it seems like anything in the hand held blow dry circuit that causes the breaker to trip ... would cause all of the other options to automatically & mercilessly be down on the busiest day imaginable ... something for Earl & others to consider. I have memories of times where the Blo Dry breaker tripped but we found out about the problem later, which had a lesser consequence ... because at least the rest of the options did not automatically shut down the bay during a busy day. We also find that during certain types of weather the blo dry is much less likely to be chosen by the customer, meaning that at minimum the bay &/or high pressure would also automatically be unavailable on a blo dry breaker trip.
slash007: I share the breaker with my SS pump as well.
MEP001: Then supply each blower motor with a 10A fuse. I’ve never had a vac fused that way trip the breaker. I think you’re worrying too much over a problem that’s easily made not a problem.
Earl Weiss: If the motors are rated at 13.5 amps is a 10 amp fuse enough?
mjwalsh: 15 amp fuse, assuming you followed the shared breaker approach & also assuming an original 20 amp breaker circuit. A good electrician &/or electrical engineer might suggest a better approach with a 2 in the space of 1 breaker, which could mean the extra fuses are not needed as much.
MEP001: Probably not, I was going by what I use in vacs, which run about 8.5 amps per motor. I forgot most of the dryers use motors at higher amps. Regardless, you get the point - fuse the motors separately at a much lower load and don’t worry about them tripping the breaker.
Robert2181: OK, which dryer overall does everybody favor? {continued }
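The shared-breaker fuse numbers traded in this thread (10 A fuses for ~8.5 A vac motors, a 15 A fuse for a 13.5 A dryer motor on a 20 A circuit) amount to picking the next standard fuse size at or above the motor's running amps while staying under the breaker rating. A toy sketch of that selection rule in Python -- the size list and function names are illustrative only, not an NEC calculation and not electrical advice:

```python
# Illustrative fuse-picking arithmetic from the thread, NOT electrical advice:
# choose the smallest common fuse size at or above the motor's running amps,
# and reject it if it doesn't land below the shared breaker's rating.
STANDARD_FUSE_AMPS = [1, 3, 6, 10, 15, 20, 25, 30]

def pick_fuse(motor_amps, breaker_amps=20):
    """Return the smallest standard fuse >= motor_amps that is still below
    the breaker rating, or None if no such fuse exists."""
    for size in STANDARD_FUSE_AMPS:
        if size >= motor_amps:
            return size if size < breaker_amps else None
    return None

print(pick_fuse(8.5))    # the vac motors MEP001 mentions -> 10
print(pick_fuse(13.5))   # the dryer motors Earl asks about -> 15
```

This reproduces both answers given in the thread: a 10 A fuse for the 8.5 A vac motors, and a 15 A fuse (not 10 A) for the 13.5 A dryer motors on a 20 A circuit.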
A LEADING MANUFACTURER OF CONTROLS, INCLUDING DIGITAL TIMERS FOR THE CAR WASH INDUSTRY.
CONTACT YOUR DISTRIBUTOR
TODAY! 1-888-234-9667 FOR A DISTRIBUTOR IN YOUR AREA.
CryptoPay Credit Card System
The Road Ahead
The CryptoPay Credit Card System is ‘The Road Ahead’ with a tested and proven track record bringing cost-savings, convenience, and secure credit card acceptance to your facility. Designed with unparalleled security, innovation, affordability, and ease of installation. The CryptoPay Credit Card system is your secure solution today and into the future.
CryptoPay Credit Card Encryption Wireless Card Readers Transaction Consolidation Affordable and Easy to Install Cloud-Based Analytics and Receipts Low Start Up Cost No Minimum Card Reader Purchase Live Technical Support
And much, much more!
Jump in the driver’s seat and call CryptoPay to learn about the exciting Road Ahead.
719-277-7400
What to do: Damage caused by taped wand trigger... coincarwash.ca: The gun in my self serve bay was taped with the trigger in the pulled position, and when the customer inserted money it flew out of the holder and damaged the customer’s car.... What should I do.... He is saying it’s my fault; however, he did not have the gun in his hand prior to inserting money. I have not responded to his message yet.... Opinions please.
I.B. Washincars: I don’t think it is your fault, but it’s not his either. I don’t see where you have any choice but to pay for the damage. If you figure out who taped the trigger, you could pursue them. Long story short, your wash damaged his car.
Earl Weiss: Legal liability is based upon whether a party was negligent - “Negligence. Conduct that falls below the standards of behavior established by law for the protection of others against unreasonable risk of harm. A person has acted negligently if he or she has departed from the conduct expected of a reasonably prudent person acting under similar circumstances.” If I were the judge I would rule: It is not reasonable to expect a car wash owner to inspect all equipment after each use to see if someone misused it in a way which might damage someone else’s property. The sole proximate cause of the damage was the user who taped the gun. NOT LIABLE. Sadly I am not the judge. Perhaps you have cameras that might reveal the culprit?
robert roman: I’m not a judge, but I have been involved in enough legal matters to suggest paying for the damage and moving on. A gun with a taped trigger is not how the device was designed to be stored in the holster or operated, nor is taping the trigger closed an industry best practice. So the store owner would be responsible regardless of whether the taping was done by a prior customer or an attendant (employee) for that matter. The analogy is ice: the owner has responsibility to ensure ice is not a slip and fall risk for customers and employees.
PaulLovesJamie: My gut agrees with IB, my head agrees with Earl. btw, nicely worded Earl.
Earl Weiss: Analogy fails. Ice is a foreseeable and normal condition not caused by the negligent/reckless act of a third party. Even slippery conditions may not render the wash owner liable due to exceptions which vary, vis a vis open and obvious risk, natural accumulation, assumption of risk, etc. I have had a person not put their car in neutral and bump another car in the tunnel. Sometimes when they exit the offender drives off.
The bumped car tries to get me to fix their damage. I give them the license # of the offending vehicle if it can be seen on video. (In one case the cops went and got the offender.) Sometimes the driver that got bumped believes I am responsible because it happened on my property. I asked them if they thought the grocery store was responsible if someone bumped them in the parking lot. (I now ask if the city is responsible if someone bumps them on the street.) One person said “Yes.” I told them to go to whoever gave them their legal education and ask for their money back because they were taught wrong. Once had a lawyer send me a letter (amazing how little many lawyers know of how a car wash conveyor operates) with all sorts of
BS theories as to why I was liable. I invited him to file suit and said I would request that fees be awarded to me for bad faith filing. (I also guessed he must have had some relationship with the customer, because his office was quite some distance away - a different county from where the wash and customer were located.)
rph9168: I have to agree. I would not pay. Sometimes it seems convenient to just pay the bill, but I think that people want to take advantage of an operator for their own negligence. I would say the vast majority of customers have the wand in their hand before attempting to use it. I also wonder if there is a video to see who taped the wand. Could be the customer did it themselves. In any case I would not pay for someone else’s negligence.
MEP001: If you have any wording on the signage that says “Hold wand in hand before inserting coins” or the like, you’ve fulfilled your obligation as far as diligence. I’d have to agree that paying for the damage in good faith might be the best course for a happy customer, but what if he did it? What if he spreads the word that you’re an easy mark for a damage claim? I’ve had that happen with refunds years ago when I was an attendant and was instructed to give them money back no questions asked. In a month it went from one every few weeks to 5 or 6 a day, and almost all of them were BS. RPH -- Maybe in your neck of the woods, but I’ve watched them here and they don’t. Maybe one in 50 does, and it’s stated to do so on the menu sign. In fact it’s only on the sign as a remnant of triggerless guns. I was just about to have some custom ones made and thought about leaving that part off, but now I think I’ll leave it on for exactly this reason.
rph9168: I had the same thought that the guy may be the same one that did the taping, but without a video that would be hard to prove. Just about all the customers in our area hold the wand before engaging it, and in many other areas of the country I have been to they do as well.
Most bay signs I have seen or designed always included that information. I never will understand why someone would turn on a high pressure wand without holding it in their hand, but it takes all kinds. I can somewhat understand it with a foam brush because there is very little pressure, but not a wand. Guess common sense is in short supply in your area.
ghetto wash: I’m on Earl’s side also. My attendant has a daily check list that he goes down and signs each day. On that list is “wipe down meter boxes and guns.” With this daily inspection documented, I think I’d be in the clear if this happened to me.
mjwalsh: Like others have said, it would be helpful to have video footage. The struggle ... especially
with our closed doors during the winter, is to keep the lens with a clear path, without surface water droplets and/or fogginess created by temperature differences in the air. We are tempted to resurrect the windshield wipers somehow on our original Panasonic camera housings & maybe come up with a remote way of wiping the glass on the housings. For camera footage to identify the exact person &/or license plate, including the actual act of a possibly extremely quick taping process, seems easier said than done. Maybe some other operators have found a better way for their bays that need to have their doors closed because of too cold of weather.
MEP001: Tape a dollar coin over the camera lens, that will keep the fog off them.
I.B. Washincars: Now MEP, behave. Mike has definitely gotten better about not wedging his dollar coin crusade into every unrelated conversation. I’m sure it’s tough for him. I gotta admit, I got a pretty good chuckle out of it though.
mjwalsh: Just to add on water droplets causing a not-clear-enough image (from the in bay camera) for those of us where the overhead doors are a must: maybe a sensor could automatically trigger the small windshield wiper, along with full remote capability. It sure would be nice to hold the rascals who would tape our trigger guns to properly placed liability ... but without a clear enough image the odds are not in our favor.
Waxman: How much damage was caused? $250? $500? $1000? I’d start there. All car washes should have a good working relationship with a body shop or two.
HCW: Pay for it and move on. Invest in a surveillance camera system and you’ll be in good shape. Just yesterday I paid for a mirror that broke in the IBA.
pgrzes: I have watched this thread for a while and my thought was: How high is the pressure set on your guns? I run at about 1100 and the gun won’t fly out and hit anything.
They will slide across the floor but not with enough power to cause any damage aside from soaking people. Anyhow, I just put in a new 1080p HD Lorex system with 1 camera at each end of every bay. I can get tags and all events in every bay always.
Randy: A few years ago we got some defective guns that would stay full on after the trigger was pulled. They would fly out of the gun holder at 1200 psi. We had a couple of complaints of flying guns {continued }
Hand held Dryers {continued } CarWashBoy: Hi Mel. Sorry for the delayed response.. The increase this year is based on satisfied customers from their last visit. Higher % of satisfied customers. They see we have Full Time Attendant, CC, Bill Acceptors, Air Dryers, Powerful Vacs, Great LED lighting and so on, and a consistent operation, and CLEAN. LOL. We use the same breaker that we use for the Hi-Pressure pump motor. When the Air Dryer is running, the pump motor is not... no need to add additional breakers.. Hope that helps. Oh... J.E. Adams has a HOT AIR dryer available... sounds interesting. I like. I’m going to order one to try.. then I can advertise Hot Air Dryer.
soonermajic: Dude, if you have 6 SS bays & are pushing $250k, that is freakin unreal!!! WTH would dare say that a quarter of a million dollars is not “GREAT REVENUE”?!!!
sparkey: I started with Blasto Dry and converted a bay to Air Shammee. I get more compliments from the Air Shammee. It has more air volume (3 motors, versus 2 with Blasto Dry). The Air Shammee is also better for motorcycles because they can direct the air better where they want it. The Blasto Dry is better for floor mats because you can knife the water
off. Most people don’t use the Blasto Dry very long after they try it. You have to get the angle and everything right to dry the car very well.
CarWashBoy: Thanks.. we just provide EVERYTHING the customer wants.. Modern Premises, LED Lighting, Music, Shiny Powerful Vacs, Full Time Attendant, Air Dryers and so on.. We charge what the product and service is worth.. If our competition is lower priced, so what? The customer wants quality and so much more.. IT’S NOT JUST PRICE.
What to do... {continued }

hitting cars. We'd give them a handful of tokens and call it good. Naturally the supplier denied that there was a problem with the guns.

mjwalsh: I still say that even with our cameras at the end of the bay... in areas where overhead doors are a must... water droplets could be a problem showing the finer detail of a rascal quickly taping the wand trigger.

rph9168: Since you have no camera to see what happened, I wonder if the gun was really in the holster when this happened. Did you try to see if the gun can come out of the holster as the customer claims?
I.B. Washincars: Yeah, I bought about a dozen GIANT guns a few years ago. What a dangerous POS. Luckily, most of them were junk straight out of the box and I got them swapped out before anyone/anything got hurt.

MEP001: Someone I know got part of a bad batch of Adams guns, and Adams actually paid for damage to a car because it stuck wide open.

CarWashBoy: Pay for it. and move on.. Simple..

Earl Weiss: Yep, next thing you know your lines will be longer than they have ever been. Except that they won't be people wanting to use your place, just people wanting you to pay numb nuts claims.

rph9168: It may be more convenient in the short term to pay out nuisance claims, but as Earl suggests, in the long run you will just multiply claims like this in the future. If your wash is responsible for the damage then you should settle, but in cases where customer negligence is the cause, the responsibility is theirs. Don't be a sucker for false or questionable claims.

Stuart: I have had some claims where we were not negligent but we did offer to pay half of the bill. They had to get the repairs done first and bring us the bill, or we have also sent our half directly to the body shop.
Editor’s Note: This article originally appeared in the Spring 2006 issue of SSCWN and has been an oft-requested one. We’re looking forward to your feedback: Does Paul’s formula still work out 10 years later?
FROM THE ARCHIVES:
Decisions, Decisions
PAUL'S NOTE FROM 2016: I'm sure that enquiring minds want to know – 10 years later, do I actually use Net Present Value calculations? I'm not a professional financial analyst, and I don't want to be one. But yes, I have incorporated some basic financial thinking into my view of business, and yes, I use several financial analysis tools regularly. Specifically on the use of the Net Present Value calculation: NPV is primarily a ranking tool, so the simplest place to implement it is on my "Project List." I'm sure most of you have a similar list of projects. Mine happens to be in a spreadsheet, and I use NPV to rank all projects by profitability. The spreadsheet itself is rather simple, only a dozen columns or so: Project, Cost, Income/Savings, Notes, Category, etc. And yes, 2-year and 5-year NPV. From a theoretical perspective, the highest NPV value is the project I should be working on, and it sorts right to the top of my spreadsheet. So, yes, the NPV calculation has become part of my everyday vocabulary.
(NOTE: Although my spreadsheet is simple, the devil is in the details. The supporting data for calculating cost & income can be as simple or as complex as each project demands.) (NOTE: The calculation for NPV is built into Excel; you can add it as a formula to any cell.) As far as the article itself goes, would I make any changes? Well, although I could do a lot of rewriting, I think the basic message was and is solid. In a nutshell, one can make decisions based on experience and common sense, but calculations and thoughtful analysis usually produce better decisions. I still stand behind that statement. And all of us – myself included – could raise our professional game a notch by learning and making use of some very standard analysis tools like NPV.
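Paul's project-ranking spreadsheet is easy to mimic in a few lines of code. This is a minimal sketch, not his actual spreadsheet: each project is reduced to an up-front cost and a flat annual cash flow, and the project names and dollar figures here are hypothetical, purely for illustration.

```python
# Minimal sketch of a project list ranked by 5-year NPV at a 10% desired
# rate of return. Each project: (name, up-front cost, flat annual cash flow).
# Names and figures are made up for illustration.

def npv(rate, initial_cost, annual_cash_flow, years):
    """NPV of a flat annual cash flow, minus the up-front cost."""
    discounted = sum(annual_cash_flow / (1 + rate) ** y
                     for y in range(1, years + 1))
    return discounted - initial_cost

projects = [
    ("Debit card system", 7_500, 4_900),
    ("New vacuum motors", 3_000, 1_200),
    ("LED lighting",      5_000, 1_500),
]

rate, years = 0.10, 5
ranked = sorted(projects, key=lambda p: npv(rate, p[1], p[2], years),
                reverse=True)
for name, cost, flow in ranked:
    print(f"{name:20s} 5-yr NPV: ${npv(rate, cost, flow, years):,.0f}")
```

The highest-NPV project sorts to the top, exactly as Paul describes his spreadsheet doing.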
STRESS. Does making a decision to spend $15,000 to $20,000 on equipment for your car wash cause you stress? How about $200,000 to have an IBA installed? Do you worry about whether or not you are making a good decision? About whether you will make more profit, or waste money that you can't afford to lose? How do you go about making such decisions? One of our responsibilities as car wash executives (if you own a car wash, you are an executive, whether you like it or not) is to make good investment decisions. Please notice that I did not say "to make decisions," I said "to make good decisions." It is the executive's responsibility to ensure that the business makes a profit, because without profit the business ceases to exist. Good decisions are necessary to maximize profit. No offense intended, but I think that the vast majority of us do not approach these decisions properly. I believe that many of us operate on "business savvy" and simple ROI calculations to make these decisions. Unfortunately, I think we are also influenced a bit too much by sales people, marketing, our peers, and emotion. Many of our decisions may have resulted in making money ... but does that mean they were good decisions? As expenses increase, competition increases, and margins go down, making the best decision is becoming even more critical to staying in business. Think about this – if your $1 million car wash had a 10% profit margin, how comfortable would you be with your ability to make good investment decisions? Would you put your entire income on the line? A lien on your house? I wouldn't. I fear that many (if not most) of us do not have the knowledge and experience in decision making to succeed when the going gets tough. Please don't misunderstand me – by no means
am I saying that any of you are lousy decision makers, or that your success is based on luck! No! One of the reasons I love this industry is because of the admiration & respect I hold for so many of you. All I'm trying to say is that I don't believe we've approached decision making professionally. We (myself included) can do better! This might sound boring at first – don't give up on me! Go get a cup of coffee & invest 30 minutes of your time reading & thinking. I am certain that you will count it as time very well spent.

DECISION EXAMPLE – SHOULD I BUY A DEBIT CARD SYSTEM?
Here is a real life problem I am dealing with. As part of an effort to increase revenues, I'd like to be able to have a local organization sell carwashes for me. I would do this as a fund-raiser because I like the organization and their goals, so I would donate a percentage of the sales to them. With my knowledge of my market, I think this would be one of the best ways for me to increase profits, while doing some good in the community at the same time. It's a classic win-win scenario. Since my wash is self service, I would have to either provide the organization with tokens or debit cards to sell. Instinct tells me that the tokens probably wouldn't sell nearly as well as gift cards would; I think I should buy a debit card system. I've heard that a lot of other washes have installed one and are doing well, and the sales people sure do sound convincing ... So here's the decision I'm faced with: Should I buy a debit card system? Then there is a follow-up question that I need answered: If I do buy a debit card system, how do I determine what percentage of the sales I should donate to the fund-raising organization? As business owners, all of us face questions such as this on a regular basis. Should I buy this carwash? Should I add an IBA? Should I buy this new equipment? I decided that I needed to check in with the pros to find out how they would approach making this type of decision. So I contacted an expert financial analyst and asked for help. What I learned is that there is an entire field of study (financial analysis) and careers devoted to making this type of business investment decision, and over time the pros have become rather good at it. I'm just beginning to scratch the surface of this field, but so far I've learned that there are a few things that I can do to dramatically improve my decision making abilities:
• do some clear thinking
• understand a few fundamental financial analysis concepts
• use some basic spreadsheet functions

SETTING EXPECTATIONS
Let me make one thing perfectly clear – I am NOT going to give you a magic bullet that predicts the future. That is not possible. If you spend the time to read this article, what I can give you is three things:
1) Awareness that there may be a significant gap in a very important part of your business knowledge.
2) A basic understanding of a simple financial analysis technique that will enable you to make much more informed decisions, which should result in higher profits.
3) I sincerely hope that reading this gives you the desire to become a more informed business owner and executive, and leads you to greater success and profitability.
What I am NOT going to do is give you a long boring classroom lecture in accounting. I am also not going to list all the excruciating details that need to be considered, or any arcane mathematical formulas for figuring some of this stuff out. Many people devote their entire career to making business decisions – financial analysts, executives, CPAs, etc. There is no way I can provide a comprehensive education in a magazine article; this will be an introduction. For the purpose of this article, I will be making some simplifying assumptions in my analysis, so that the key points don't get obscured by the details. I'll identify these assumptions where necessary. I think it is also worthwhile to point out that you might disagree with the actual numbers that I come up with for some of my forecasts. That's fine, I expect that you will. The steps and the calculations should be the same, but the results and conclusions may be different for each of us.

DECISION MAKING – CLEAR THINKING BEGINS WITH A DEFINITION
Let's begin some of that "clear thinking" by understanding one thing right away: stating my decision dilemma as "Should I buy a debit card system" is not an accurate way to state the question. Since there are always alternatives that can be considered, a decision is a choice between alternatives. In this case, I could buy a debit card system. Or I could not buy a debit card system – buy nothing. Or I could buy a credit card system. Or a vending machine. Or a 5 year CD at the bank. Or ... anything else. Making a decision is really a matter of choosing the alternative that I think has the best likelihood of helping me attain a specific objective/goal. Yes, I snuck the word goal in there. Before choosing an alternative, I need to understand why I am making a choice to begin with, what I am trying to achieve. Because if the goal is not clear, I'm not very likely to achieve it. And if I don't have a goal that I'm trying to achieve, then it doesn't matter what I choose!

THE DECISION MAKING PROCESS
OK, let's get down to it.
Here is a list of the steps involved in making a capital investment decision. Simplifying Assumption: This is not an exhaustive list; I am just going to cover the basics needed to take us up one level in decision making ability.
• Goal Identification
• Identify Alternatives for attaining the goal(s)
• Identify Factors affecting the success of each alternative
• Forecasts and Assumptions
• Financial Calculations
• Evaluate results
• Other considerations
• Decide
Now let's add details to our outline – let's go through my example of purchasing a debit card system.

GOALS
If you don't have a goal, then it doesn't matter what you choose to do. Without a goal, you have nothing against which to measure success or failure. My goal in this case is to raise revenues. More specifically, to increase net profit. I am considering installing a debit card system as a means to achieve this goal. Now, if I do choose to install a debit card system, there are other "secondary" goals that I can attain. Namely, to support a worthwhile community
organization. A few other goals that could merit consideration:
• image improvement
• competitive advantage
• increase market share
• reduce expenses
There are many, many more potential goals; all are probably valid. Just be sure to recognize which is your primary goal, because trying to hit multiple targets at once is harder to do. And don't lose sight of the goal.

IDENTIFY ALTERNATIVES
So my question is now more clearly defined as whether choosing to install a debit card system is my best alternative to increase my profits. The obvious next question is ... what are the alternatives? Simplifying Assumption: Ideally, I would list and evaluate dozens of alternatives – in fact, all of the projects that I could do. I'll just pick a few. Capital improvements should provide a higher rate of return than simply putting your money into a CD at the bank, because improvements to your business carry a higher level of risk. So let's consider a bank CD at 5% interest as an alternative. Credit card acceptance is a hot item these days; I'm hearing about some very significant increases in revenue due to credit card usage. I'll consider that as my third option. Nothing. Doing nothing is always an alternative; in fact the old "mattress savings account" has always been popular. (Any of you have some cash stashed away somewhere?)

A FUNDAMENTAL FINANCIAL ANALYSIS TOOL: NET PRESENT VALUE
"Would you rather get paid $90 today, or $100 at the end of the year?" It's a classic question. And there is a simple answer that is easy to calculate by using "time value of money" concepts and the "Net Present Value" (NPV) formula. The "time value of money" concept simply says that $100 received today is worth more than $100 received next year, because you can put today's $100 to work for the year – for example, you could buy a bank CD and earn 5% interest for a year. In other words, the $100 received today has a "future value" of $105 in 1 year.
The reverse would also be true: $100 received in 1 year has a "present value" of about $95. So in this case, the answer is take the $100 at the end of the year, because it is equivalent to receiving $95 today. The time value of money is a simple concept. The calculation is easy too; many of you probably either already knew the answer or figured it out instantly. What you probably don't realize is that you were intuitively doing a Net Present Value calculation in your head! Net Present Value is simply a formula that will take those 3 numbers ($100, 5%, 1 year) and calculate the answer of $95. Piece of cake, right? Good. Now do this one: Would you rather receive $100 in year 1, $350 in year 2 and $217 in year 3, or $625 today? Interest rates are 4.87%. That's a little tougher to do in your head! I had to pull out my calculator to figure out that the present value of $100 + $350 + $217 at 4.87% interest would be about $600. Take the $625. I hope you can see where I am going with this ...
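Both of those mental exercises are one-line calculations. A quick sketch, using the article's own numbers (note the article rounds: $100 next year at 5% is really worth $95.24 today, and the three-year stream comes to about $601.75 rather than a flat $600):

```python
# Present value of a single future payment: PV = FV / (1 + rate) ** years.

def present_value(future_value, rate, years):
    return future_value / (1 + rate) ** years

# "$90 today or $100 at the end of the year?" at 5%:
pv_one_year = present_value(100, 0.05, 1)
print(round(pv_one_year, 2))   # ~95.24 -- more than $90, so take the $100

# "$100, $350, $217 over three years, or $625 today?" at 4.87%:
flows = [(100, 1), (350, 2), (217, 3)]
total = sum(present_value(fv, year, ) if False else present_value(fv, 0.0487, year)
            for fv, year in flows)
total = sum(present_value(fv, 0.0487, year) for fv, year in flows)
print(round(total, 2))         # ~601.75 -- less than $625, so take the $625
```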
THINGS YOU MIGHT LIKE TO KNOW ABOUT YOUR BUSINESS
Sample Questions That Can Be Answered By Professional Financial Analysis Techniques & Tools:
• Is the price I pay for an IBA the most significant factor in future profitability?
• Which equipment upgrade will bring me the most profit?
• Which one of my assumptions is the most critical to get right?
• Which of my estimates is most sensitive to variation?
• How is a price increase likely to affect car wash volume and profits?
• If I were being paid an hourly wage for the work I do at my wash, what would my rate be?
• What effect has inflation had on my revenues and profits in the past?
• What effect is inflation likely to have on my revenues and profits in the future?
• If I install an IBA, what is the minimum number of cars I need to wash to break even?
• How many units do I have to sell to break even?
straight to my "should I buy a debit card system" question. I want to run a Net Present Value calculation on the purchase of a debit card system, because the NPV calculation is the basis for answering a LOT of questions!

I can predict the future! No I can't. Neither can you. I need to make sure that you understand something very clearly: The techniques I am describing here can not predict the future. They cannot tell you exactly how much profit you will make, or how your customers will act. What these techniques can do is give you a LOT of very valuable information which will allow you to make a MUCH more informed decision. They help eliminate the by-gosh-and-by-golly guesstimating that we all tend to do. Compared to not using these techniques, this can be a silver bullet. But no, they cannot predict the future.

INPUT DATA
OK, I want to run some NPV calculations. What are the inputs, i.e. what information is needed? While performing the data gathering part of your analysis, keep in mind that the more thorough and accurate you are now, the more accurate your forecasts will be. Simplifying Assumption: I'm just going to deal with enough issues and details to make sure that you understand the concepts, and that the conclusions are reasonable.

COST TO INSTALL THE SYSTEM
Obviously, we need to know how much the new system will cost. Don't just take the sticker price; be sure to add in all installation costs. For the debit card system, I need to include:
• The debit card hardware & software
• Installation costs (parts & labor)
• Upgraded timers
• Signage and other up-front marketing costs

OPERATIONAL COSTS
How much will it cost me to operate the system annually? This needs a small clarification: using your "profit margin" as a ballpark figure for estimating will not work. I am looking for the amount by which my operating costs will increase. For example, adding a debit card system will not increase my liability insurance costs or increase my school taxes, so I should not include them. Here's a short list:
• The amount of money that I donate to the fund raising organization
• Variable costs of washing additional cars
• State sales tax (do not include federal income tax)
• Cannibalism
Cannibalism?!? I figure that some percentage of the debit cards that get sold by the fund raising organization will be to customers who would have washed their cars anyway. They will buy their washes at the fundraiser instead of using my bill changers. So I'm really losing a little bit of profit here due to donating a percentage to the fund raising organization. I count that as an operational cost because it's a cost I incur as a result of using the debit card system. Simplifying Assumption: I am going to pay cash for the equipment I buy. Borrowing money complicates the calculations a little bit; I'll leave that detail out for today.

REVENUES
A biggie: how much additional revenue do I think will come in? Here is the basic set of forecasts that I made. Simplifying Assumption: I could go into a lot of detail about how I came up with my revenue forecasts, but they are really only relevant to my specific project, targeted toward my market. In other words, the details of how I came up with them are not worth much to anybody, so I'll skip most of the details here. Also, I do expect to sell more cards in subsequent years, so the cash flows will vary from year to year. I won't deal with that complication today either. Given what I know about the fund raising organization that I will be working with, I need to make a number of forecasts. I believe that they will sell around 500 debit cards per year. I'll set the face value on each card to $20, and give a 20% donation ($4) for each card sold.
A few other factors to consider:
• How many new customers will buy cards?
• How many existing customers will buy cards instead of using my bill changers? ("Cannibalism")
• How much more (or less) frequently will customers wash their cars as a result of the fund raising activities, i.e. will they wash more often?
• What will the walkoff percentage be for the cards?
Remember, the more serious thought you put into your forecasts, the more accurate they will be.

DESIRED RATE OF RETURN
Simplifying Assumption: Determining what number to put in this field can get complicated. If you are inclined to do some further reading on financial decision making, this would be a good area to research. Look for terms like cost of capital, hurdle rate, and beta. "Desired rate of return" is the rate of return (adjusted for risk) for a specific project/investment. The objective is to come up with a rate that reflects the risks involved in the project, and that corresponds to the rate investors would expect. For today, let's just say that I expect to get a return of at least 10% on any upgrades I make at my wash.
Until and unless you gain more knowledge on the subject of “desired rate of return”, keep your values between 10 and 15%. Please trust me on this – it is not as simple as saying “I want to make 200%!” That doesn’t really help you make an informed decision. Desired Rate of Return could be described as the minimum rate of return at which I can recover all of the investments that go into a project, including the cost of borrowed money, labor, business risk, etc. As I said, this is an area that requires further study. Stick between 10 and 15% for now. TIME FRAME What period of time should I use for the analysis? This is important for two reasons. 1) Different time frames may lead you to different conclusions. 2) It puts a box around the project so that you can evaluate the project as “completely separate” from your existing business. Some possible answers: a) The life of the equipment. In the case of a debit card system that would be what, about 10 years? b) The time frame in which you expect to reap financial gain, perhaps 5 years. c) Your own personal standard for viewing the future, e.g. “I’m 70 years old, there ain’t no such thing as long term! 3 years!” One of the nice things about NPV analysis is that you can do all of them. You can simply change the input to look at the results after 3 years, then 5 years, then 10, then whatever. My own personal standard is that for mid-size investments in my wash ($5,000 to $20,000 range), I expect a payback within 1 to 3 years, but my standard for determining whether the profitability is worth the effort is 5 years. So for the purposes of this article, I’m looking at a 5 year time frame. TAX BRACKET What is your federal tax rate? Not what tax bracket are you in, but after deductions and all that stuff are taken into account, what percentage of your income actually ends up in Washington? I’ll be using 20%. This is an area where the math can get complicated, because the project may change your tax situation. 
INFLATION RATE
Simplifying Assumption: I won't be dealing with inflation today, because my objective today is education, not to try to answer all possible questions! When you came up with a number for operational costs, did you consider inflation? Inflation can have a huge effect on a project! I find that inflation is best factored in as a separate variable so that I can play with it and see how variations in future inflation will affect the profitability of a project. But as I said, we'll ignore inflation for today.

SALVAGE VALUE
At the end of the time frame that you have chosen, the equipment that you bought still has value. What is the net financial impact of getting rid of the equipment so that you can continue to run your business without it? This usually means "how much is the equipment worth if I sell it used." But
there could be expenses involved too. For example, suppose I have to cut holes in the bay walls to install the debit card acceptors. When I remove the debit card system, I will need to fill those holes, and replace the FRP on the wall so that the bay is returned to a functional condition. We need this number to help us "put a box around the project."

INPUT DATA SUMMARY
After all of my thinking and arithmetic, I ended up with a set of forecasts for my debit card project. The final results are shown below:

PURCHASE AND INSTALLATION EXPENSES: $7,500
REVENUES
• Number of gift cards sold: 500
• Face value of gift card: $20
• Annual value of new customers: $500
• Increased sales to existing customers: $1,000
• Total new revenue: $11,500
OPERATIONAL EXPENSES
• Variable costs of washing: 20%
• Donation percentage: 20%
• State sales tax: 6%
• Cannibalism costs: 1%
• Total new operational expense: $5,405
Salvage Value: $100
Desired Rate of Return: 10%
Time Frame (years): 5
Federal Tax Rate: 20%

NPV CALCULATION – THE FORMULA
The Net Present Value calculation itself is very simple. Since your calculator or spreadsheet can do it for you, I won't do much explaining here, but for those of you who are mathematically inclined, here it is:
NPV = FV / (1 + RoR)^Y
Net Present Value = Future Value / (1 + Rate of Return)^(Number of Years)
A very simplistic example: If I were to receive a cash flow of $10,000 in 3 years, what would that be worth today if I can expect a rate of return of 5%?
Present Value = $10,000 / (1 + 0.05)^3
Present Value = $8,638
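The same example, as a one-line expression checking the $8,638 figure:

```python
# The formula above, applied to the article's example:
# PV = FV / (1 + rate) ** years
pv = 10_000 / (1 + 0.05) ** 3
print(f"Present value: ${pv:,.0f}")  # $8,638, matching the figure above
```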
Warning: When you include variable revenue flows over multiple years, financing, depreciation, etc., the formulas get more complex. The actual NPV calculation remains the same, but you need to be a lot more careful about what numbers go where.

NPV FOR MY DEBIT CARD SYSTEM
At this point I have forecasts for how much revenue a debit card system would generate and what it would cost me to operate. It's time to take these future cash flows and run them through the NPV calculation to see if I will really be making any money. And the answer is ... yes. The NPV for the cash flows I've forecast is $12,533. The simplest way to understand that figure is to think of it this way: "If I install a debit card system and all of my forecasts/assumptions are correct, then in 5 years I will have made an amount of money equal to having invested $12,533 today at an interest rate of 10%." I'll explain how to interpret that figure in a moment.

NPV FOR ALTERNATIVES
At this point in the decision making process, I have determined that the NPV forecast for a debit card machine at my carwash is $12,533. Does that mean I am done, I have my answer? No. Remember, a decision is a choice among alternatives, not a simple yes or no. Simplifying Assumption: The calculations for each alternative are the same, only the values for each variable are different. Therefore I will skip the calculations for the alternatives, and only show the results. If you recall, the alternatives I am considering are the debit card system, a credit card system, a bank CD at 5%, or simply leaving the cash under the mattress. Table 3 lists the Net Present Value results for all 4 alternatives.

NPV FOR ALL ALTERNATIVES (TABLE 3)
Paul's "Mattress Bank" Savings Account: -$3,791
5% Bank CD: -$1,515
Debit Card System: $12,533
Credit Card System: -$3,372
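The mechanics behind a comparison like Table 3 can be sketched by discounting each alternative's cash flows at the desired 10% rate. The cash-flow assumptions below are simplified stand-ins, not Paul's actual forecasts: interestingly, $10,000 stuffed in the mattress for 5 years does come out to -$3,791, matching the table (which suggests a $10,000 stake), but the CD figure depends on interest timing and tax details the article doesn't spell out, so it won't match to the dollar.

```python
# Discount each alternative's cash flows at the desired 10% rate, then rank.
# Cash flows are simplified, hypothetical stand-ins for illustration only.

def npv(rate, outlay, yearly_flows):
    """NPV of an up-front outlay followed by end-of-year cash flows."""
    return sum(cf / (1 + rate) ** y
               for y, cf in enumerate(yearly_flows, start=1)) - outlay

DESIRED_RATE = 0.10
alternatives = {
    "Mattress": (10_000, [0, 0, 0, 0, 10_000]),            # cash sits idle 5 years
    "5% CD":    (10_000, [0, 0, 0, 0, 10_000 * 1.05**5]),  # compounds, paid at maturity
}
for name, (outlay, flows) in alternatives.items():
    print(f"{name:10s} NPV at 10%: {npv(DESIRED_RATE, outlay, flows):,.0f}")
```

Both come out negative at a 10% desired rate, which is exactly the "losing money safely" point the interpretation section makes.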
NPV INTERPRETATION
These next 3 paragraphs are important, please pay attention ... Interpreting the results of the Net Present Value calculation is not difficult. In a nutshell, it tells you how investing in your project compares with your desired rate of return, and it does factor in the time value of money. If your project will produce more money than the desired rate of return, you will see a positive NPV, which means you will have more money than if you invested your cash at that rate. Many people look at NPV as a go/no-go indicator -- if the NPV for a specific project is greater than zero, it means you will beat the desired rate of return. If the NPV is negative, you are losing money. Oops, that's not technically correct – if the net present value is less than 0, you might still be making money; your P&L might still show a profit, but you are not achieving your desired rate of return! In other words, you earn dollars, but not enough to cover business risks etc. This is one of the reasons that NPV can be so valuable – it can identify projects that look like they put money in your wallet, but are really not good projects when you look at the big picture. So if NPV is greater than 0, do it. In fact, there are companies whose policy is to implement all projects which have a positive NPV. (These companies also consider the IRR and several other factors, but that's a whole 'nother subject.) I hope you noticed in Table 3 that since I expect a 10% return on my money, my 5% CD investment comes up negative, as does shoving the cash in my mattress. I know, I know – a 5% CD is making money ... right? Well, yes ... but it is losing money relative to the rate of return that my business should be earning. This is what is known as "losing money safely."

NPV AS A RANKING TOOL
However, NPV is most effectively used as a ranking tool, which means it is used not only to determine which projects should be done, but which should be
done first. So what the NPV values in Table 3 tell me is that the debit card system is a better alternative than the bank CD. But according to my forecasts, I'd be better off putting my money into a CD than buying a credit card system! (FYI, I used a rather high installation cost of $20,000 for a credit card system, and at my wash I expect it to have a very minimal impact on revenues. Reducing the cost of a credit card system to $15,000 makes the credit card system a profitable project, but just barely.) It is important to keep the "Desired Rate of Return" in mind when interpreting NPV results. Remember, this is a desired rate of return, not an actual or an expected rate of return. For example, if the level of profit I'm shooting for is 10% and I invest my money in a 5% CD, I will fall 5% short of my goal. This will give me a negative NPV. Does this negative NPV mean I am losing money? Well, it depends how you look at it. On the one hand you made 5%, so you have more money in the bank. But on the other hand you could have invested in a money market which is returning 8%. So relative to the money market alternative, yes, you lost money! Just to make sure you understand this very clearly, there are two very important assumptions built into the proper interpretation of an NPV figure: your forecasts for the discount rate & projected cash flows. Be sure to think hard about both. Remember in the "Identify Alternatives" section when I said that in an ideal world I would list and evaluate dozens of alternatives? Now that you see how to compare 2 or 3 alternatives, I hope you can see that these techniques can actually be used to rank all of the projects that could be done, from most to least profitable. In an ideal world, I would run forecasts on EVERY project that could possibly be done at my wash, and use NPV to help rank them.

OTHER FACTORS
Simplifying Assumption: I'm not going to discuss the wide range of possibilities, just list the types of things to consider.
The important conclusion is that NPV is just one part of the decision. Now you have the results of NPV, and you know the basics about how to interpret the results. But that does not mean you have a decision. Other factors must be added to the mix.
Outside factors – These are things that are not specifically related to the project being considered. For example, perhaps you bought a new house this year and therefore can't come up with $20,000 for a project unless the payback is less than 6 months.
Soft factors – These are changes that occur as a result of your project that could result in additional income. A good example would be additional capacity, which although it will not be used in the short term, could be utilized to produce more revenue.
Intangible factors – These are changes that occur as a result of your project which do not have a quantifiable/measurable impact on profitability. Examples would be customer perceptions, your reputation within the community, etc.
Assumptions – The new revenues a project will generate are important. But there are some assumptions underlying these projections that are important to consider as well. Namely, how will you achieve those projections? Don't just decide that you can sell 200 gift cards; determine how you will do it. Don't just assume you will do it.
Baked-in assumptions – You are making some assumptions without even thinking about it; these could also affect your final decision. Examples: You are probably assuming (without stating it) that your health will remain good, enabling you to continue to work as you do today. You are probably assuming the economy won't enter a depression, and that a competitor will not build across the street, thrusting you into a price war. Etc., etc. Some assumptions you should think about; others you can probably leave in the background and ignore. But they are assumptions that could affect your success.

DONATING TO CHARITY
"For crying out loud, it's a charity fundraiser, you're not supposed to make money! Don't be a cheap SOB!" Yep, charitable contributions are a sensitive topic; a room full of people could have quite a discussion over it. (Please invite me!) So feel free to disagree with me on this, but my take is that, by definition, business decisions need to be primarily focused on profit. Charitable contributions are a personal decision that has nothing to do with your business; charity (and/or fund-raising) is simply one way you can choose to use your disposable income. My point is that if my goal is to support a charity, then I just write a check or donate time. If the car wash's goal is to increase my company's profit, the car wash considers various projects/equipment. If I want an investment project at the carwash to achieve both goals, then I need to be careful to keep them separate in my mind. Otherwise my personal goals may endanger the viability of the business, which will in turn endanger my personal goals.

CAN I AFFORD TO DONATE TO THE FUND RAISING ORGANIZATION?
Now let's take this up a notch, and get to some of the really cool stuff. Remember 7th grade algebra, where you would solve an equation for a certain variable? Same thing here. We have a bunch of variables, and so far we have solved for the "Net Present Value" variable. Using the same data we have already gathered, we can change our equation around and solve for any variable we choose! Let me give you an example. Rather than solve for the NPV value, I would like to know: if all of my revenue and cost forecasts are correct, what is the maximum percentage I can donate and still make a profit in year 1? Stated in more mathematical terms, I want to solve for the "donation percentage" variable when the Net Present Value equals $1 and the Time Frame variable equals 1 year. I'm lazy, so I let my PC do the 7th grade algebra for me. The answer: 7%.

"WHAT IF ..."
I think this is starting to get really interesting. I have to tell you, I got excited when I calculated that 7% donation percentage, and I started solving for just about every variable I had. A few examples of other
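The "let my PC do the algebra" step is just root-finding. Here is a hedged sketch: the profit model below is a deliberately crude stand-in (invented card counts, card value, and installation cost, not my actual spreadsheet), but the bisection routine is the same idea a spreadsheet goal-seek uses.

```python
def first_year_profit(donation_pct, cards_sold=500, card_value=50.0,
                      install_cost=2_000.0):
    # Toy model: my share of card revenue in year 1, minus installation.
    # Every number here is a placeholder, not a real forecast.
    return cards_sold * card_value * (1 - donation_pct) - install_cost

def solve_for(f, target, lo, hi, tol=1e-9):
    # Bisection: find x in [lo, hi] where f(x) == target, assuming f is
    # monotonic on the interval.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(mid) - target) * (f(lo) - target) <= 0:
            hi = mid          # sign change in [lo, mid]: the root is there
        else:
            lo = mid          # otherwise the root is in [mid, hi]
    return (lo + hi) / 2

# Largest donation percentage that still leaves ~$1 of year-1 profit:
max_pct = solve_for(first_year_profit, 1.0, 0.0, 1.0)
print(f"{max_pct:.1%}")
```

With real revenue, cost, tax, and discount-rate inputs in place of the toy model, the same routine answers "what is the maximum percentage I can donate?" directly.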
• SPRING 2016 •
FROM THE ARCHIVES:
Decisions, Decisions
things I solved for:
• What happens to the net present value if the cost to install the system is $2,000 higher than I expected? (In other words, a competitor's version of the debit card system is $2,000 more...)
• How many debit cards do I have to sell in year 1 to break even?
• If I only sell 200 cards per year, what is my break-even period?
• What happens to my profits if I have to buy the debit cards for $3 each instead of $1 each? What if I can get them for free?
• What if my cannibalism costs are twice as high as I expect? Ten times as high?
• Do the answers to any of these questions cause me to change my decisions?
I hope you can see the value in this. You can solve for any variable you want to. What other kinds of questions might you want to ask? Which questions might return an answer that causes you concern? For which ones can you control the outcome, perhaps through marketing, a structured maintenance schedule, labor, or some other solution? Which factors are influenced by varying your forecasts, and which are not? It is the answers to questions such as these that allow you to know where your risks lie, and what you need to control to ensure that your project successfully reaches your goals.
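Each of the "what if" questions above is just a re-run of the same NPV calculation with one input changed. A sketch, again with invented numbers (a $20,000 install and $6,000/year of net new cash for 5 years, discounted at 10%):

```python
def npv(rate, cashflows):
    # Discount each year's cash flow back to today and sum them.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

BASE_INSTALL = 20_000.0
YEARLY_NET = [6_000.0] * 5     # placeholder net cash inflow per year

# "What if installation costs $2,000 (or $5,000) more than I expected?"
for extra in (0, 1_000, 2_000, 5_000):
    flows = [-(BASE_INSTALL + extra)] + YEARLY_NET
    print(f"install +${extra:>5,}: NPV = ${npv(0.10, flows):>9,.2f}")
```

Sweeping the card price, the sales volume, or the cannibalism cost works the same way: vary one input, hold the rest, and watch where the NPV flips sign.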
SO WHAT % SHOULD I DONATE TO THE FUND RAISING ORGANIZATION?
This is really good: I've determined that I can donate 7% to the fund raising organization and still be profitable in year 1. That's great information ... but my original question was "how do I determine what percentage of the sales I should donate to the fund-raising organization?" Minimizing my donation isn't necessarily my goal, nor is maximizing it. I want to know the optimum percentage I should donate. I'm going to define the optimum as the point at which both my profits and the total amount donated to the fund raising organization are maximized. In other words, I want the win-win case.
Yes, we can answer that question. It simply requires a bit more clear thinking, and some work to get it right. Clear thinking first, starting with cause and effect. If I change the fundraising parameters, I would expect that to affect my customers' behavior, right? For example, if I lower the donation percentage to zero, I would expect the fund raising organization to make less of an effort to sell my debit cards, and I would expect customers to be less likely to buy them. (That describes my behavior anyway; when the neighbor's kids sell things for the soccer club or the boy scouts, I always ask how much they get. I tend to buy more if they get a higher percentage.) So conversely, as I raise the donation percentage, customers are more likely to buy cards. In fact, if I raise it high enough, some customers probably will buy them and consider it a donation, i.e. walkoff will increase!
So this is really a 2-part question, more clearly stated as "how do changes in the donation percentage affect my customers' behavior, and how does that affect profitability?" This is a little more complex, because we need to make some more forecasts. In the revenue numbers I used, I estimated that in the first year the fund raising organization would sell 500 cards with a donation of 20%. Now, if I lower the donation to 15%, how many cards do I think will sell? How about with a 10% donation? 30%? Basically, I need to forecast how many cards will sell at each possible level of donation.

SOME DETAILS OF MY FORECAST:
In making this forecast, I basically think that lowering the donation percentage will reduce the incentive to sell and reduce the incentive to buy, and vice versa. In other words, as the donation percentage goes up from 0% to 100%, card purchases should go up. (Notice that I said card purchases, not profits. We'll get to that in a minute.) But I don't think card sales will go up in a straight line (linearly). Here is my thought process:
• At 10%, I think very little effort will be put into selling, and therefore there will be few buyers.
• Based on percentages donated by other fund raisers, I think 15% is the point at which an effort will be made to sell, and customers will be motivated to buy. Probably very few new customers, minimal cannibalism.
• At 20%, I think I will start to attract new customers, more existing customers will buy, and existing customers will start to buy more than one card. A small amount of cannibalism, but I should also be reaching new customers.
• I don't think going from 25% to 40% will make much difference in customers' motivation to buy; I think we'll only sell a few more cards than at 20%. But I do think it will cause customers (new and existing) to spend more time washing, and to wash more frequently.
• At 50%, I think people will begin to "buy their car washes" from the fund raising organization instead of from my bill changers. I would expect a dramatic increase. I also think this will cause customers (new and existing) to spend more time washing, and to wash more frequently.
• At 75%, I think people will begin to view the card sales as a charitable donation, and they get free car washes. I would expect a dramatic increase at this level, coupled with a high level of cannibalism. Wash frequency and time spent should remain at about the same as the 50% level.
• From 75% to 85% donation, I don't think I'll see any big increases in purchasing motivation.
• 90% to 100%. Would I really consider this, knowing that I would be losing money? Instinct says no; logic says there's no pain in running the calculation. OK, I think people would definitely view this as a charitable contribution that gives them free car washes. I think sales would go up somewhat significantly. Time spent washing should go way up. Walkoff should definitely be at its highest level. (The debit card isn't where the value is in this sale; it's all about the charitable contribution for the customer. I know this from experience with a charity organization I support.)

WHAT EFFECT ON PROFITS?
Now, what effect do these numbers have on profits? Well, what I have is a series of values that I can assign to the "donation percentage" and "resulting change in revenue" variables, and I can calculate the Net Present Value variable for each pair. In addition, I will be able to see the total amount donated to the fund raising organization. (Yes, I'll also have to tweak some of the other variables such as walkoff percentage, additional visits, etc.) In this case I think a graph communicates the data better than a table of numbers. Graph 1, "Effect of Changes to Donation Percentage," shows the results. The X axis shows the donation percentage (0% to 50%), and the Y axis is dollars (-$10,000 to $45,000). There are two lines on the graph: 1) the net present value of my revenue and expense forecasts (i.e. what I get out of the deal), and 2) the total dollar amount of donations to the fund raising organization over the same 5-year period.

LOOKING FOR THE WIN-WIN CASE
In looking at this chart, let's keep in mind what my goal is: to find the win-win case where I maximize both my profits and the amount I can donate to the fund raising organization. Let's look at the NPV line first. Below approximately a 10% donation, NPV is negative. That means I'm not meeting my goal of returning 10% on my money. Since I'm in business to make money, 10% and below is not a good choice. When the donation jumps from 15% to 20%, I see a rather sharp increase in NPV, but then it levels off until I get up to 35%. When I go above about 35%, NPV starts to drop off significantly, so that is probably not a good choice either. So based purely on the NPV line, I want to be somewhere between 15% and 35%. 25% appears to be the highest level of profit for me.
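The two curves in Graph 1 come straight out of one model: for each candidate donation percentage, compute my NPV and the 5-year donation total. A miniature version, with an invented sales-response curve standing in for my actual forecasts:

```python
def npv(rate, cashflows):
    # Discount each year's cash flow back to today and sum them.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invented response curve: cards sold per year at each donation level.
CARDS_AT = {0.10: 150, 0.15: 300, 0.20: 450, 0.25: 500,
            0.30: 520, 0.35: 540, 0.40: 550}
CARD_VALUE, INSTALL, YEARS, RATE = 50.0, 20_000.0, 5, 0.10

for pct, cards in CARDS_AT.items():
    my_yearly = cards * CARD_VALUE * (1 - pct)     # my share of card revenue
    my_npv = npv(RATE, [-INSTALL] + [my_yearly] * YEARS)
    donated = cards * CARD_VALUE * pct * YEARS     # their share over 5 years
    print(f"{pct:.0%}: NPV = ${my_npv:>10,.2f}, donated = ${donated:>9,.2f}")
```

The win-win search then amounts to scanning the two printed columns for the region where both stay high, which is the same exercise as reading the crossing points off the full-size graph.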
Now let's take a look at the amount of cash that gets donated to the fund raising organization. In a nutshell, as I raise the percentage, the dollars they get go up. That's obvious; you didn't need me to tell you that. But look more closely – notice how it goes up sharply when the donation goes from 15% to 20%, then the rate of increase slows down again. This indicates to me that somewhere between 15% and 20% may be a "sweet spot." There are big jumps at 50% donation and above as well. What I find most interesting is where the 2 lines cross – at 15% and again at 25%. This indicates to me that this is where I should be looking. Why? Again, look at my goal – to maximize the benefit to both parties. Sure, I could be heroically generous and donate 50%. I mean, just look how much more the fund raisers would get! But stop and remember the goals. If I'm not making more money, I can't donate more money ... this needs to be mutually beneficial. If my goal is simply to donate money, then I'll write a check. No need to take on a lot of extra work and risk.

ANY SKEPTICS?
Right about now, some of you skeptics out there should be saying "Oh come on, that chart isn't accurate, you guesstimated all the data!" Yes, you would be right – actually, half right. I did guesstimate some of the data ... but the chart might be accurate; you certainly don't know that it isn't. So skeptics, I ask you this. If I were to perform a study in which I actually ran this fund raiser, and varied ONLY the donation percentage, then at the end of the year we could determine what the correct and accurate values were for that year. Right? My point here is that there is a correct set of data; we just don't know exactly what it is yet. So yes, I admit that I don't know exactly how accurate my chart is. At the moment, I've put some thought into it, and it is as accurate as I can get.
Let me explain some of the things I've done to improve its accuracy, because as we all know, "Garbage In, Garbage Out."

WHAT CAN I DO TO IMPROVE ITS ACCURACY?
I can get exact price quotes on equipment and installation. These will not be estimates; they will be exact figures.
I can research precise salvage value figures for the equipment I am evaluating.
I can extrapolate from my own historical data to make some of my "estimates" extremely accurate. For example, I have 10 years of excruciatingly detailed data on the costs involved in operating a self service car wash. I don't have to forecast my variable expenses; I can get an exact figure, down to the penny if I want to. Same thing for taxes. Same thing for inflation rates. Etc.
I can talk with somebody who has already done this project, and ask what actually happened. The downside of this approach is that I'm not likely to find somebody who has done exactly what I want to do, in a market that is similar to mine. But I can probably find some data that will help.
As I mentioned, I can run the fund raiser for 1 year at the percentage that I believe will work best, and gather as much data from that process as I can. THOSE will be one set of real numbers which will allow me to adjust all of my forecasts. I can then re-run my analysis, and adjust the donation percentage for the following year in order to more accurately predict and control subsequent outcomes.
In other words, quite a few of the numbers that go into the calculations are not forecasts! There is really only one number for my project that is truly difficult to forecast: the number of debit cards that will be sold. But since the organization I will be working with already sells a number of other products (including gift cards!) and has many years of experience with their "sales force," I am fairly sure they can give me a good estimate of how many cards they think they will sell. So I'm confident that I can come close.
Important note: I am not trying to impress you here with how accurate I think my estimates are. I am trying to emphasize that if you put some thought and effort into this, you can get very accurate estimates. And the more accurate your estimates are, the closer this really gets to a true prediction!
Your numbers will probably be very different! This is important – your numbers, charts and graphs will probably look different than mine. Your market, customers, and competition are all different. Please keep in mind that the exact numbers that I'm showing you are not nearly as valuable as the techniques.

PERSPECTIVE!
I suspect that you might be thinking, "Come on, all this work over an investment that's only a few thousand dollars? This is small potatoes! Wow, he is WAY over-analyzing this!" True enough ... if this is the only decision I ever make. Let me ask you this. When you teach a youngster how to pitch a baseball or shoot a foul shot, do you tell him to "just chuck it up there, heck, it's only one pitch/foul shot"? Or do you teach proper technique, and have him practice the proper technique in order to establish lifelong habits? Me, I'm more inclined to practice with the small potatoes so that when I've reached the level of success to consider a "big potatoes" project, I already have the knowledge and experience to do it right. Make a mistake with a small project and my feelings are hurt, but I learn. Making a mistake with a big one can cost me the farm. So yes, all this work over a small project.

DECIDE!
So now that you have a little more knowledge –
maybe just enough to be dangerous – what would you do? Take into consideration the hard/financial numbers, soft factors, assumptions, and intangibles, and decide. As the CEO of your car wash, it's your job. Decide.

A FEW THINGS TO THINK ABOUT
I'd like to leave you with a few other things that are worth thinking about – topics that will help you in your quest to become a better CEO, to become more profitable by making better decisions.

ANALYZING PAST INVESTMENTS
I have a suggestion for a homework assignment. Pull out your ROI and payback estimates for an investment you made a few years ago. Then go into your accounting files and pull out the actual data. Run the actual historical data – what your revenues and expenses really were – through an NPV analysis, and see how the actual performance of your investment compares with your estimates. Would you make the same decisions? For the same reasons?

ARE YOU WORKING FOR FREE?
I should think that by the title of this paragraph you already know what my opinion is – your labor is not free, and if you are counting it that way then you are fooling yourself. No, wait, let me say that a different way ... if your labor is free, then get yourself down here to my place; you're hired. When analyzing an investment, you need to include the "going rate" of labor expenses for any work that is to be done. Because if you are going to do the work yourself, that is basically the rate that you are paying yourself for that labor, whether you include it in your calculations or not. Keeping labor rates out of your calculations only serves to inflate the apparent profitability in the results. Be careful – I am not saying you should not do any work yourself. (Come on guys, you know I am a do-it-yourselfer.) What I am saying is that the CEO of your wash earns $100,000/year, and the cleanup guy earns $20,000. Sure, if you do both jobs your company can save a few bucks on labor; so your take-home pay is now maybe $115,000.
But that is only because you worked 2 jobs – CEO and Cleaner-Upper. The cleanup guy's salary is still an expense to the company; the only difference is whose pocket it went into. Clear thinking. The cleaning job still got done, and you got paid $3 per hour to do it.

"PAYBACK PERIOD" – USE CAUTION
In the past, I've always calculated a "break-even period" on any business investment I considered. I've always thought of it as a valuable tool, a good rule-of-thumb measure I could do in my head. Bad news: I wasn't able to factor in the time value of money, taxes, inflation, etc. in my head. So my estimates were always on the rosy side, to put it politely. The break-even period is a good tool – but factor in the time value of money when you use it.
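The caution above can be made concrete: a simple payback estimate ignores discounting, while a discounted payback applies the time value of money and pushes the break-even point later. The project figures below are invented for illustration ($20,000 installed, $6,000/year of net new cash).

```python
def payback_years(cost, yearly_inflow, rate=0.0):
    # Years until cumulative inflows (discounted at `rate`) repay `cost`.
    # rate=0.0 gives the naive back-of-the-envelope payback period.
    total, year = 0.0, 0
    while total < cost:
        year += 1
        if year > 100:
            return None               # never pays back at this rate
        total += yearly_inflow / (1 + rate) ** year
    return year

# Invented project: $20,000 installed, $6,000/year of net new cash.
print(payback_years(20_000, 6_000))             # naive estimate
print(payback_years(20_000, 6_000, rate=0.10))  # discounted: breaks even later
```

The gap between the two results is exactly why a payback period done "in my head" always lands on the rosy side.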
Happenings In & Around Self Serve Carwashing
INDUSTRY DIRT Mark VII Equipment Inc., the North American subsidiary of WashTec AG of Germany, the world’s largest manufacturer of vehicle cleaning systems, today announced that Rob Raskell has joined the company as Director of Distribution. Raskell has 18 years of experience in the carwash industry, including operations management of retail carwash chains, sales for a carwash distributor, and distribution sales for a “top tier” carwash chemicals manufacturer, according to a press release from the company. He will be based in the Spokane, Washington area. “I want to welcome Rob to the Mark VII team,” said Ryan Beaty, Mark VII’s EVP Sales. “His deep experience on both the supplier and operator sides of the carwash industry will make him a great resource for helping Mark VII distributors grow their businesses.”
SONNY’S CarWash College is now offering classes aimed at managing multiple car wash sites. The new Multi-Site class offers training to better understand the latest in car wash technology, manage employees at remote locations and remain in firm control of daily operations and procedures at several locations. The first class was conducted in March and other courses will be offered throughout the year. SONNY’S The CarWash Factory is the largest manufacturer of conveyorized car wash equipment, parts, and supplies in the world.
Autobell® Car Wash's Charity Car Wash Program raised $634,398 in 2015 for charities, schools, and other nonprofits in North and South Carolina, Virginia, and Georgia. The Charlotte-based company's program began in 1998 and has assisted nonprofits in raising over $7.6 million. "Nonprofits continue to find this program a simple and effective way to reach their fundraising goals," stated Autobell President and CEO Chuck Howard. "It eliminates the need for organizers to plan and execute a parking lot car wash or other large-scale fundraiser, worry about rain dates, or incur any upfront cost."

Jordan Allen, a Delta Sonic employee and Hilton High School student in Rochester, NY, has been selected as this year's Larry Harrell Scholarship recipient by the International Carwash Association. Allen will receive a $1,000 scholarship to be used toward her continued education. She was selected due to her high school academic performance, school honors, community involvement, and strong essay on the importance of hard work and kindness. Allen plans to attend either St. John Fisher College or the University of Tampa in the fall, where she will pursue a Pre-Med major. The Larry Harrell Scholarship, created in honor of car wash operator Larry Harrell by his peers, has been recognizing young adults working in the car wash industry since 2000.
You know what they say: Mo' workers, mo' problems. Or at least that's the case for full service carwashes in New York and California, which have made headlines in the last few months for their treatment of employees. We start with SLS Car Wash in Bushwick, NY, where about 40 workers have voted in a 35-5 decision to join the Retail, Wholesale and Department Store Union. They are the 11th carwash to join the union and, to date, the largest carwash to unionize in the nation. "Before we organized a union we worked under a lot of stress," SLS worker Cheik Umat Balde said in a release. "The managers will always yell at us to work faster. Sometimes they will call us stupid. We had to deal with unknown chemicals with no protections." A manager at SLS declined comment.
Problems abound for other full-service carwashes in the city. At C&P Car Wash in the Bronx, several groups and carwash worker sympathizers are calling for an investigation into the death of carwash worker William (El Toro) Gomez to see if there are any links between his job and his death. The Car Wash Campaign, which has been fighting for workers' rights and improvements in health and job safety conditions at car washes, as well as the New York Communities for Change, have organized a vigil for Gomez and are suggesting his death may have been caused by working with "unlabeled chemicals." Chio Valerio, deputy labor director at New York Communities for Change, noted there isn't any proven direct connection, but car wash workers have long complained about health ailments they believe are tied to constantly inhaling the unidentified chemicals. "We are trying to find out what's happening," Valerio said. The owner of the carwash, Frank Roman, released a statement in response to the planned vigil. "I am saddened by the death of one of my valued employees," Roman said in the statement. "And my heartfelt condolences go out to his family and friends. I assure my workers and the community that my employees are trained, my establishment is safe, run in compliance with all applicable laws and regulations, and that all the cleaning solutions used in my establishment are OSHA certified," he said. But he slammed the planned vigil. "It is a disgrace and wholly insensitive to the deceased's family," he said, arguing the union was trying to "politicize the death." "This is their desperate attempt to use the death of a person to leverage the industry and me. It is disgusting, plain and simple," he said.

Moving on to full service carwash woes in California: a Yorba Linda, CA, wash must pay $68,656 in back wages and damages to 16 workers. Majd Aboul Hosn, co-owner of the business, said his workers were paid a legal wage but that some chose to arrive before their scheduled shifts because of transportation constraints. He denied [...] The car wash industry has been the subject of some high profile investigations by the California Division of Labor Standards Enforcement, which has issued more than 1,400 citations between 2009 and 2014 and claims the industry has some of the highest levels of workplace violations in the state. In Los Angeles, some 40 car washes have unionized in the past four years under the Clean Car Wash Campaign, funded by the AFL-CIO and the United Steelworkers.

EXTRA! EXTRA!
Interesting operator news and tidbits from around the industry water cooler. Read all about it ...

Oh, happy day! San Diego mayor Kevin L. Faulconer officially proclaimed April 21, 2016, as Soapy Joe's Car Wash Day in recognition of the wash's efforts to be environmentally friendly. The proclamation was celebrated in connection with Earth Day, annually observed the next day. "Sustainability has always been a top priority for us," said Lorens Attisha, CEO of Soapy Joe's Car Wash. The family owned and operated car wash business has eight locations throughout San Diego County offering express car wash services and oil changes. According to a press release about the proclamation, the company focuses its efforts on one of the area's most important natural resources – water. The company gave free top-of-the-line car washes at all eight locations that day via an online coupon available at SoapyJoesCarWash.com.

And now your competition is coming from the App Store. Washé, "the app that cleans your car," has officially launched in South Florida. The mobile application connects car owners with mobile carwashes via their smartphones. Users can download the Washé app free on the App Store and Google Play, set up a user profile for payment and location services, and then tap to order a car wash. Tiered pricing packages are offered to allow customers to personalize their service requests. According to a press release from the company, the app is the first of its kind in South Florida and gained close to 3,000 users during beta testing earlier this year. The company has its own network of "highly-skilled and experienced car wash workers that we've hand selected." There's no need to give directions to the washer — the Washé app uses GPS location services and keeps the consumer updated through every step of the process. When the job is complete, customers receive a picture of their clean car, and secure payment happens automatically. The company plans to expand to Miami within the next few months.

A short guide to free PR: Follow the example of David Chess, manager of Scrubbin' Bubbles in Wallingford, CT, and make yourself available for an interview with the local television station. WTNH's story was about "how to save money at the car wash" after an earlier story on the channel pointed out the necessity of washing your vehicle often during the wintertime to avoid damage from road salts. Chess says there are lots of ways to avoid paying full price. "We have a punch card; you get 10 punches, you get a free wash. We have coupons all over, at Big Y, on the back of receipts; they come in the mail," he said. And sometimes it just pays to be a loyal customer. "You know, if they've been here a while we'll give them the better wash or a free vacuum," Chess added. If the local news station isn't exactly knocking down your door, consider putting together a short press release announcing a new budget-friendly wash package or pointing out the importance of washing your vehicle during different seasonal issues -- like pollen or lovebugs.
Or take a page from Autobell in Hendersonville, NC, which gave the local TV station there a breakdown of how its water reclaim system works, along with the wash process to eliminate risk from corrosive road salts. The company was also able to emphasize its commitment to being environmentally friendly in the story: After explaining that the wash only uses reclaimed water during the wash process and not during the final rinse, and that the soaps and chemicals used render the salty water harmless, the company added that they're proud to be water recyclers and to be doing their part to preserve that limited resource.
Finally, consider imitating OTTO Car Wash in Muskogee, OK, which announced its new equipment and business renovations to the public via a story in the Muskogee Phoenix. The story may have been short and sweet -- a mere 40 words -- but the full-size photo taken by newspaper staff was worth the other thousand.
Carwashes of the nation, hold onto your hats! We mean that quite literally after Speedy Pete's Car Wash lost its roof during an intense storm in Alexandria, Louisiana. "I looked up and saw the roof start to detach a little bit. And a second or two later it was way up in the sky; it was maybe 100 meters high," Will Shepard, a witness, told KALB News. The roof was carried off to a nearby Pizza Hut, where people eating inside "had to run for cover as high winds carried the debris, wreaking havoc to everything in its path." The building, as well as multiple vehicles in the parking lot, suffered significant damage, and two employees at the Pizza Hut suffered minor injuries as well. "I never saw anything like this. I mean, there's some crazy weather here. But you know, nothing like that," said Shepard. The local news station reported that Pete's Car Wash would be back in operation after a month or two, but there was no word yet from Pizza Hut.
Speaking of disasters at the carwash, here's a collection of three mishaps which should make you glad you operate a self serve rather than a conveyor wash: A 2012 Jeep Grand Cherokee (surprise, surprise) took off as a carwash worker tried to drive it out of the Landis Wash and Lube in Lititz, PA. "[It] could have been a lot worse," Lititz police Sgt. Kerry Nye said in a story on Lancaster Online. "He said he just hopped in it like normal, put it in drive and it just took off on him," Nye said. "He was really scared to death after it happened." The Jeep crashed into a utility pole, causing damage to the front end and surprising onlookers, but didn't cause any other damage or injuries. The local report quoted a May 2015 Philadelphia Inquirer article which pointed out numerous incidents of runaway Cherokees at carwashes since the early 2000s. "The International Car Wash Association once advised its members to handle Jeep Grand Cherokees with extreme caution," the report added. A woman going through a carwash in Taranaki, New Zealand, was caught "dangling in mid-air"
54 • SPRING 2016 •
when the wash’s brushes got stuck on a bike rack she had installed on the back of the vehicle. “All she could do was honk her horn in panic as she felt the car lift up,” according to the Taranaki Daily News. The article continued by sharing the woman’s Facebook post, which she wrote after the incident to let other car owners know about the risks of having accessories on their vehicles. She posted: “What’s on my mind, you ask? Well, just like to put a warning out there. Don’t leave your bike rack on the back of your car when going through the car wash. The big drum’s chamois strips got tangled up in the rack so badly it lifted up the back of the car. Sat on the horn, I did, to raise the alarm. Fortunately it came loose. Owner was very kind. ‘You’d have to say it was a pretty stupid thing to do,’ says he. One good thing. Must have a good ticker. It had a workout I can tell you.”
An employee was hurt at Ernie’s Car Wash in West Boylston, MA, after he was struck and dragged by a customer’s car which jumped the track and then struck another vehicle inside the tunnel. A report by the area CBS station said the worker was taken to a local hospital, although there were no details on the extent of his injuries.
Well, it wouldn’t be the Extra! Extra! section if we didn’t end with a frustrating story of government stupidity. In this case, the operators of a carwash newly constructed in San Luis Obispo, CA, would like to use groundwater, but city leaders are hemming and hawing over the idea of signing away their water rights as they transfer ownership of an on-site well that they haven’t used in 20 years. Hamish Marshall, vice president of property owner Westpac Investments, said he expects the new car wash to open around Feb. 20 at 1460 Calle Joaquin, near the intersection of Los Osos Valley Road. A grand opening would follow around the first week of March. “It doesn’t keep us from opening; it just costs us more money,” Marshall said in a story from The Tribune. “It’s simply that city water is very expensive. It’s potable water -- to be using potable water on cars when we have the ability to use nonpotable water on cars seems a little silly.” According to the report: [City] staff hasn’t found a record of an agreement allowing the city to install the well on that property, City Attorney Christine Dietrick said earlier this week. The city has no ownership interest in the well or property, according to a Jan. 21 letter from attorney
Roy E. Ogden of Ogden & Fricks, which is representing the property owners, and Quiky intends to start using the well water with or without the city’s cooperation. Board members of local nonprofit Central Coast Grown have raised concerns that groundwater pumping could jeopardize the water table in that area and note that the well was forced to shut down more than 20 years ago because of groundwater contamination. The San Luis Obispo Planning Commission approved a use permit for the Calle Joaquin car wash in November 2014, the same year that Westpac bought the property, Marshall said. The site had previously been occupied by Denny’s and Zaki’s Waffle House. Westpac also owns the Quiky Car Wash, which uses potable water, on Broad Street. The water is recirculated, said Marshall, estimating that about 80 percent is recycled and used a second time. The amount of water that the car wash would use, however, is a fraction of the amount the city pulled out of the ground 26 years ago. City staff said the estimated water use is about 63,568 gallons per month, or about the same amount as 11 single-family homes in San Luis Obispo, based on the average amount of water used by five comparable car washes in the city.
Gloria from the North
How this SS Superwoman gets things done in the mud, muck, dust, and freezing cold of her native Canada.
by Kate Carr
I was so pleased to finally get a chance to chat with Gloria Winterhalt, co-owner of Splish Splash Auto Wash in North Battleford, Saskatchewan, Canada. Gloria had been suggested to me as a “Super Woman” for our “Super Women in Self Serve Carwashing” cover story, but that interview didn’t happen in time for last issue’s cover. As luck would have it, delaying the interview worked in our favor, because it allowed Gloria and me to cover a wide range of topics that were a little outside the bounds of our “Super Women” interviews....
SSCWN: Tell us a little about yourself. How did you get into the business?
Gloria Winterhalt: As a family, we’ve been in the carwash industry for about 20 years -- it was just a very small business, though. One bay. We’re coming up on our fourth anniversary of being open on this huge scale, though. For me? I’ve only been in it for about three and a half years. It’s been an incredibly new experience for me. I have a passion for economic development / tourism and a background in social work.
SSCWN: How interesting!
GW: Yes, so coming into a manual job is ...very, very different for me. I’ve worked with youth and with people who have various disabilities or addiction and other challenges and now I’m getting them as staff.
SSCWN: Do you think that experience gives you an advantage?
GW: Well, I don’t know. My husband sometimes has to tell me to quit working. It’s like my old job keeps interfering. But on the other hand, I think it’s helped us to truly develop what my brother had for a vision for the wash. I was just supposed to be a small part of it -- I’m on the retirement end of life. But now I’m totally out of retirement. Working even longer hours because we went into a new venture, but that’s fine. My grandkids get a little disappointed that Grandma’s at the carwash, but we’ve created ways to include them -- I’ve got a 12-inch squeegee for them. And that’s part of -- I think for me, coming from a different background than the rest of my family, having all these years of experience, that I’ve always had a strong connection with tourism and economic development, and then having a caseload of all kinds of other issues from addiction to intellectual disabilities to all those kinds of things, I think all of those experiences bring amazing strengths so that we’re creating not just a carwash, but a real experience.
SSCWN: Tell me a little bit about the carwash: How many bays, the type of equipment you have and whatnot.
GW: We have seven large self serve bays for automobiles and motorcycles, and then we have a 100-foot, two-station RV or Super Bay wash, so we can take in semis. Our vision around our 100-foot bay was -- we have semi washes all around us -- but we’re missing something for the contractors who have a truck and a cargo trailer -- where do they get to wash? And where does the truck with a fifth wheel go when the driver doesn’t want to go to a truck wash and deal with crude oil and muck and cow droppings? Those kinds of things. That was to offer another place to wash to a population that never had anything like that for their trucks and trailers. Now we still have all of that -- we still have the cow droppings and the muck -- but we keep it clean. We still ended up getting all of them anyway. They like to come here. We’re very, very clean. We clean up as fast as we can after each customer.
SSCWN: Are you double doored or enclosed?
GW: We’re barn style.
SSCWN: I love those barn style washes. We don’t really have any of them down here in the States.
GW: It’s interesting. Even when we go to the shows in the States and we’re talking to other carwash operators -- even across Canada, actually -- we can go to the West, to Edmonton, and some of their carwashes don’t have doors. We can go all the way to Toronto and they don’t have doors. But we’re stuck in the middle in Saskatchewan and we probably have the most severe, contrasting weather twelve months of the year. Everything from rain, snow, extreme cold -- like 40 or 50 below -- to extreme hot and dry in the summer. We get it all. Trying to maintain a wash in that presents many challenges. Salt to bugs to extreme cold to vehicles freezing up. Right now, we’re in the mud season. We are dealing with vehicles coming in where we have to clean the tips sometimes every day or every other day.
And then the mud changes as the ground thaws -- and the type of mud that comes in. In the summer they put calcium down on the roads to help keep the dust down because we become so dry. Customers ask how to get that film off their vehicles, and it gets slimy. In Saskatchewan, we deal with many, many extremes.
SSCWN: What a double-edged sword: It probably keeps you very busy, huh?
GW: It does. We’re in an extreme climate and it makes our jobs a bit more physical. Our mission is that every customer who comes in gets a clean bay. So, to keep that up, we’re basically using shovels and scrapers non-stop. When we are lined up for long periods of the day, you are exhausted.
SSCWN: What kind of staff does that take?
GW: We have three full-time staff on all day, from 7 a.m. until 5 or 6 or 7 p.m. And then most days, from 7 until 10:15 p.m., around when we close, we probably have another two to three, depending again on the season and what’s happening at the wash. We’re not just a car wash; we have other services that we offer here, so that increases the dynamics of the interaction for our staff and for the consumer. We already offer spot-free, so that gives us the opportunity to sell water. People can bring in the five-gallon jugs that they put in their water coolers and they can refill those here. We have a three-bay water refill station for customers. We can do 300 jugs a day, so 150 people who might be here just for that. Then we have a dog wash -- a two-station dog wash, which we have to have indoors because of the weather -- so you’re cleaning that, too. And we have a retail end, and also a year ago, we added U-Haul as a subsidiary of Splish Splash. So, there’s a full-time staff that just deals with U-Haul. The most important part of our business is probably the charity component, and that’s my job. We have a venue for our charities to be able to come and raise amazing funds. Our WashCard system is set up so that these charities can make money.
SSCWN: How is your program structured?
GW: We’ve done them many different ways. For example, we have charities that sell our loyalty cards. We sell the cards to the charity for $10 a card, they sell them for $20, and we load them with $25. At the very end of all of it, the consumer is the one who really gets an amazing deal and our charities raise an amazing amount of money.
That works towards both our charity and our marketing goals. SSCWN: What kind of groups are you partnering with? GW: Oh, everything from gymnastics and hockey to Christian motorcycle clubs to student missionaries who are raising money for trips to Peru or Costa Rica. We’ve had local churches and animal rescue groups. Right now, we just finished a live “on location” fundraiser to raise money for a rescue group that has a puppy that had legs amputated and they were dealing with high medical and equipment costs. SSCWN: How long have you been doing the charity partnerships?
GW: Since we opened. It was part of our vision when we started the carwash. We have other businesses and we wanted to have something that would be community focused and be built on community contribution. We are so very involved in the community and charity is a big part of that. We spent a whole year as a sponsor raising money for a local day program that caters to adults with intellectual disabilities and wanted to construct a new building. That was close to $3 million. We’re part of our community; that’s our mission. We’re not just a car wash. We’re part of the community. Without the community, we don’t have a business.
SSCWN: Tell me about the demographics and competition in your area.
GW: Well, we have about 18,000 people here. We’re not very big, but we have a trading area of about 40,000. So, you know, about a 100-mile radius that we draw from. We have many different demographics; there are a lot of immigrants in the community and 14 reserves around us. We’re not a manufacturing community at all; we’re farming based. We’re in the middle of the prairies. Oil crashed here about a year ago, so now there is a lot of oil development. We’re changing a little bit. But we have everything from low-income to high-income; quite a variety of people.
SSCWN: To that end -- the high income -- I noticed you have a “we wash it for you” program?
GW: Yes, we have a “we wash it for you” service, as well as full service detailing. So, we can do a quick wash and vacuum while you’re here on site, or we can do the full interior and your windows, and the dash, and the cupholders, and all that -- we do that right up to as close to showroom quality as we can get it. We have a full-time detailer here.
SSCWN: Do you know what kind of split there is, percentage-wise -- how many customers choose WIFM versus the self serve?
GW: I don’t know the exact numbers. It’s a big part of our business, but it’s also seasonal. Our demographics probably affect it, too.
We don’t have quite as many high-end customers, but we do have a lot of seniors who use that service. Or customers who have had surgery or have a bad back or they’re coming in dress clothes for a business meeting and don’t want to show up in a filthy vehicle. We reach out to all kinds of consumers and meet a need for them. We start at a base point and customize everything that comes through the door. We may have somebody coming in that’s got mud caked on their wheels and they’re wobbling on the highway and they’re not dressed to be dealing with mud, so we go in there and we scrub their wheels. We can adjust our pricing according to what service we’re giving. Our detailing is amazing, and we have that option, too. We don’t do our full service detailing at this location because we’re too busy. We have another location with two bays and our detailers are at work there.
And then we have outside vacuums available all year round. So if the consumer in the bay wants to clean up the garbage and vacuum, they can do that outside and free up the bay for someone who wants to wash, which speeds things up. We’re always considering: How can we help the consumer get what they need? It’s not about how we want it to go -- but how can we help them get the services they want and get them through the wash?
SSCWN: You do a lot with your loyalty card program, don’t you?
INTERVIEW
GW: Yes, our WashCard is a starter from day one. We go against the norm, for sure. We don’t like to be the same as everyone else. So, instead of giving the usual 10 percent discount, we give you an extra 10 percent value. So, if you load that WashCard with $20, we’ll add another $2 onto it. We have a lot of consumers using our loyalty cards who are loading $50, $60 and even $100, $200, $300 on their cards, so that loyalty percentage can really add up. You could be getting an extra $30 on that card. And our WashCard is really becoming a sort of multipurpose card within our business since we have so many different services. I mean, you can use it as a gift card. As a thank you for the guy that cuts your grass or as a stocking stuffer or whatever. You can use it for car washing or for detailing. We have a lot of businesses that use it as their petty cash when they’re coming in to fill their water bottles. And we have consumers that use it just to wash their dog. You can register your card online, and every time you use that card at our business you are entered into all these different promos we have. So, you could win $25 on your card, you could win a detailing or an iPad. Also, if you lose your card -- well, we’re in a crunch for dollars now, so if you lose a card with $20 on it, that could become very important. So, if you’ve registered your card and it’s lost, then we can cancel the whole card and put that money onto a new one for you. We have a lot of people who come in and say they’ve lost their card, but they haven’t registered it. That’s tough because we can’t help them then.
SSCWN: I noticed some comments on your Facebook page that suggested you surprised your customers by putting extra cash on their loyalty cards this New Year’s.
GW: Yes, actually every New Year’s we want all our registered cardholders to know that we appreciate their business, so we say thank you by loading up those cards with $5 on New Year’s as a way to say, “Thank you for your business this past year; we look forward to your business in the coming year.”
SSCWN: That seems like a fantastic way to promote the loyalty cards and the online registration program -- and I noticed the comments on Facebook were very pleasantly surprised and appreciative of the gesture.
GW: Yes, and we’ll even have comments two or three months after New Year’s; people will come in and again say, “Thank you very much! That was awesome.” I think when you do it unexpectedly, that just makes the customer feel even more appreciated. We’re doing a promo right now for some firefighters who are in our community -- about 300 of them -- for a convention. And so we’re offering them a 25 percent discount while they’re here as a way to support the service they do around the provinces. They’re here from all over: Manitoba, Alberta, Saskatchewan. Hopefully they can have a clean vehicle while they’re here. We’ve also had a lot of fire trucks come through to get washed before the convention, too. We have an emergency services discount, so if they’re using their WashCard, they’ll have an additional discount as a way to say thank you -- and that extends to their personal vehicles, too. Because we really appreciate what they’re doing and how much they’re giving up in their personal lives to serve us and our community. We also love to participate in Grace for Vets. We work with the Legions here. Again, it’s part of that mission to be a part of the community.
When I was researching Splish Splash for this article, I came across this “Letter to the Editor,” printed in local newspaper The Battlefords News-Optimist this past November after the carwash participated in Grace for Vets (as you all know, my most favorite carwash cause). I thought it was worth sharing again and giving Gloria another thumbs up.
“A large bouquet to Splish Splash Car Wash on the occasion of their free car wash for veterans. My wife and I drove into the building, not knowing what to expect. A friendly guy employee motioned us into a stall. I got out of the car and asked, ‘How do I get started to wash my vehicle?’ He politely told me, ‘Go to the office, sit down and have a coffee, I’m going to do your car. Gloria will escort you in and get your coffee.’ As we were walking to the office, Gloria glanced at my wife and informed us, ‘I know you both, you used to coach us with the Legion Track and Field at the Civic Centre; up and down the steps we’d run for an hour.’ She remembered Jill and Eddie Martin, Valerie Carbert, Rick and Rueben Mayes. ‘You drove to track meets in Saskatoon and other places that we attended.’ That was many years ago, what a small world. Once again a big thank you to Splish Splash Car Wash staff for honouring veterans. Tony and Susan Francescone”
SSCWN: Turning the conversation back a bit, I wanted to take a minute to recognize how well the carwash is doing with Facebook and its social media interactions. You’re posting often, you’re keeping customers notified of all the different promotions going on and including photos. And I’ve noticed you have a lot of customers who interact with the page, too. Do you have someone responsible for the Facebook page or is that another one of your duties?
GW: My niece, actually, you know -- it’s a younger generation and they can certainly do that much faster than me. She takes care of our Facebook and the postings. So if there’s a charity event or a unique event going on, we try to remember to snap a picture and send it to her and have her post it. You do need to have somebody who is going to do that medium of marketing and be focused on it. She does a very good job with it. We coordinate a lot of those efforts -- from our newspaper ads, and radio spots, and the Facebook -- so that we’re using multiple tools and they’re all sort of mirroring each other.
Facebook Pros! Not only that, Splish Splash has done so well with building a strong Facebook presence that their customers are now doing the promoting for them. Check out Splish Splash’s Facebook page (facebook.com/splishsplashautoandpetwash) for an example of how to really maximize social media for the self serve business.
SSCWN: Speaking of your niece, you mentioned this is a family wash. Who all is involved?
GW: It’s five of us siblings that own the carwash. We’ve all worked here, but myself and my brother, David, are the ones running it. I’m the one that’s here full-time. We have a few other businesses. We’ve had nieces and nephews and my own kids who have all worked here at some point in time between their jobs or finding jobs. There’s always employment here. My husband works here, too. He manages the U-Haul business for us. And then there’s a brother who owns a septic truck and comes and pumps all our pits. So, we try to emphasize that we are a family business. We grew up in a family business; our dad has always been an entrepreneur. We had a taxi company for almost 40 years. And our mom has always run these various home-based businesses. So we were brought up in a household with strong entrepreneurship experiences. My brothers worked more in the cab company, whereas I switched off and did a few other things -- but it’s always been a part of my life. We’ve always had businesses together. We’ve had rental property and what not, and we’ve always tried to connect somehow. This carwash was supposed to be for the next generation, but they’re not quite ready to go there yet. They’re all off playing and building their own careers, and that’s great, too. Actually, about half of our kids have got their own businesses now. My son just opened a convenience store and my nephew has got a website-building business. They’ve all branched off, but that entrepreneurship is still there. So where all that goes -- and where the carwash goes -- we have no idea.
SSCWN: So speaking of all those brothers -- you’re also in a male-dominated industry now. What has that experience been like?
GW: Well, it’s had its challenges. Most sales reps or even the customer will walk right on by you and find the male staff. It doesn’t matter where you go, it’s a male-dominated world. Sometimes I have to push it -- that I’m the boss. I’m at a moment where we have a mostly male staff -- I’ve had lots of girls work at the carwash, but at this time, it’s mostly males. And then I’m working with my brothers and my husband. So, there are challenges. But my background has given me some experience there -- you know, I’ve worked with troubled youth and whatnot. And we’re at the low end of the pay scale here, so of
course we’re finding those challenges.
SSCWN: Speaking of that background in social work -- a lot of self serve operators talk about the difficulty of finding, hiring and managing good staff and attendants for the wash. What sort of methods are you using to deal with those hurdles?
GW: I think the most important thing to stress is that we’re a team. We may be writing your checks, but there isn’t any job at this building that I haven’t done or that I don’t do. I clean the bathrooms right along with staff on the weekends. I shovel out pits right along with staff that shovel out pits. So no matter what level they are, even if they’re young students, I think it’s about respect. It’s about being part of the team. I can’t do this without staff. I mean -- we did for the first two years. I did it by myself. It’s not fun. We also encourage our students to put school first. School is your first job. We work around exams. And we also work with our school’s functionally integrated program to create a safe environment for students with a learning disability or a physical disability so that they can get some work experience and the social experience of working, too. And all of our staff are involved with that -- it’s about building respect on both sides. We’ve created a real bond there. We also work with an employment program that focuses on helping people who might have difficulty getting a job for whatever reason -- addiction, disabilities. We help create and build those first skills, like making eye contact or saying “good morning” to a customer. Building that self-confidence up so that they can move on. That’s part of the process. One thing we stress to our staff is that their next job might come from here -- you know, a future employer might walk in to have their car washed. Or their next girlfriend or boyfriend. Or their future mother-in-law. You could meet them here, so we encourage them to take pride in who they are, what they do, and to build a good work ethic and respect on both sides.
SSCWN: That’s an interesting spin on it -- that they might meet a future girlfriend at the wash or that their future mother-in-law might drive into the bay. To that end, what have you done to make the carwash more female-friendly?
GW: I think it’s the basics: Lights outside, having an attendant on staff at all times, hiring female staff. We have curtains to divide the bays in the barn style wash, and they’re clear at the top so you can see through them and a female will know she’s visible. We want all of our customers to feel safe when they’re here. Another thing we do is encourage our staff to step in and educate the customer. To observe them, and if they see someone who is moving around like they’re a little unsure, then offer some help and show them around the bay and how to use the different services. Don’t hesitate to go up and say, “Good day, is there anything I can help you with today?
Can I show you how the equipment works?” We’re always encouraging them to have an interaction with their customer. The worst thing as a customer is to struggle with a wand or a vacuum hose that’s all tangled up. I mean, I’m only 5 feet tall and our vacuum hoses are up high. So that’s another thing our staff is looking out for -- watching for a little old lady or somebody in dress clothes or whoever might be fighting with a vacuum hose. (My interview with Gloria continued after this phone chat and into another one, so please stay tuned for the rest of our conversation in the Summer 2016 issue!)
BIG BANG THEORIES
Bazinga! Make more money when you diversify your services.
You wanna make the big bucks? You gotta have big ideas. Over the years, we’ve noticed that self serve operators that invest their time in branching out beyond their core business and branding their car wash with complementary profit centers have been able to attract more customers and build bigger profits than the average 4/1 SS/IBA. The saying goes: Activity breeds activity. A busy wash attracts more
customers, and the busiest washes have a little something for everyone. To that end, SSCWN has researched a few of the more popular and successful multi-profit center ideas for self serve carwashes. Read on for your next “Big Bang Theory…”
BIG BANG THEORY #1: Pet Washes
AT A GLANCE
Cost to get started: $$
Expected monthly revenues: $800-$1,000
Industry resources and Associations
Pet washes are an excellent opportunity for washes that have one-too-many bays or a small footprint to maximize the available space and attract a new type of customer. They work well in rural, urban and suburban environments -- but particularly in rural areas where there aren’t many options for pet grooming and care.
Mike Dickey, Richie’s Car Wash, Erlanger, KY
Tell me how you got started with the pet wash.
We bought an existing car wash that had been run down. We bought it at a Sheriff’s sale, actually. The owner had invested a lot of money in it over the
years, but I think he got into financial problems in the last five or six years, and he had let it go to nothing. So we bought it and we renovated the whole thing; new equipment in the self serve bays and everything. I had been seeing some of the dog wash stuff in the magazines and at the shows as something new that other people were doing. So we decided to take one of the self service bays and put two dog wash units in that bay. My feeling was that we could do something different that nobody else had here.
Do you have much competition there?
There are really no other carwashes in the area that have pet washes. But -- what I have seen is that now some of the pet supply places are having pet washes at those businesses. But I don’t think they have private rooms like we do.
So describe your set-up for me.
So, we had to pour a whole new floor in the bay and put in new drains for each pet wash stall, so that if people got water on the floor it would go to that drainage. So we made a little hallway and two doors going into two private rooms. We have a window on the outer wall so they don’t feel trapped in there or anything like that. And the doors going into the rooms have windows on them also. We wanted to give it an open feeling -- we didn’t want them to think they were stuck in some little room in the back. We wanted women to feel comfortable with it.
How long did the project take?
That particular project took a while since we had to put the new floor in and then we had to put new concrete walls up, windows, doors, a dropped ceiling. We did it first class. You could have done it a lot cheaper and probably faster, but we wanted to make it nice.
How much was the final price tag?
I think we spent roughly $40,000 or so. That includes two pet wash units from All Paws, and I felt like their units were really top-of-the-line. Just the way they’re made and how it looks -- I think we could have bought a cheaper, stainless steel looking thing, but it might not have had the sophistication of the All Paws unit.
When did you open?
We opened in October 2014.
What’s the ROI been like?
We’re still building that business. You know, wintertime is usually the busiest time for the carwash -- you know, January, February, March -- because of the snow and salt. But what we’re seeing is that it’s totally opposite for the dog wash stuff. When the warmer weather hits, you’re busier with the dog wash stuff than with the car wash. I don’t think people like bringing them in during the winter -- wet dog in the cold weather. But it’s been much busier in the spring, summer and fall.
Anything you know now that you wish you’d known before you started or that you wish you’d done differently?
Not really. I think everything has turned out pretty much just how I wanted it to. We totally re-did the whole car wash; we spent probably a quarter million dollars renovating it. We put all new blacktop in, we painted the building, we put in new self serve equipment. There are two separate buildings: a building out front with two in-bay automatic machines, and then in the back we have the self service unit. We took all the plumbing out, all the electrical out and totally re-did those self-service bays. It was a big ordeal.
As far as the daily upkeep -- are you finding that to be more of a hassle or easier than you imagined it to be before starting?
It’s about what I thought it would be. As far as a self service car wash, we have two locations. And so at both of those locations -- and definitely on the weekends -- we usually have someone there all day long.
We’re fairly busy, so they’re making sure customers are happy and doing the cleaning -- so as far as taking care of the pet washes, it’s just another part of the daily routine. We check it just like the self service bays, every morning, and then you make sure everything’s good and try to clean up any messes that are in there, dumping the garbage cans, taking care of the vacuums; it’s just another daily task during the week. We’re probably handling it twice a day checking things out. How long have you been washing cars? 15 years. Have you been using the All Paws website/marketing tools? We used some of their logo stuff. And in the beginning we did some advertising the first year -- we did the radio stations pretty big. We had a live broadcast that we did down here. Our commercial that we
were running had a line about bringing your furry friends in also.
Keith Caldwell, Vice President, All Paws Pet Wash (pet wash manufacturer)
What’s going on with the pet care industry in 2016? Well, from a business standpoint, we have seen more growth this year in the first four or five months of the year than we did for almost all of last year. I think it stems from growth in the carwash industry and in the pet industry. The pet industry has grown to an anticipated $63 billion in gross revenues for all of the different markets (estimated for 2016, source: www.americanpetproducts.org); the grooming industry in particular represents about $5.5 billion of that number (source:). One of the biggest trends we’ve seen in the carwash world is growth from the markets where pet washes have opened. I mean, it’s kind of like you have to open one so that people can see that it’s worth investing in, and then other carwashes will jump on board. I’ll put one in a town and then all of a sudden I’ll have ten more leads around that site. And the domino effect continues from there. Carwashes have been around for the last hundred years or so, and so the consumer is very familiar with them. Pretty much every little town in the country or around the world has some sort of carwash -- it doesn’t matter if it’s a tunnel or if it’s self-serve or an automatic; a carwash is a carwash is a carwash to the consumer. And so pet washes are starting to pop up and now everyone’s starting to get on board. We have hundreds of sites around the country now, and I hope that soon I’ll be able to say thousands. The growth has been incredible. So what sort of equipment will you be displaying at the Car Wash Show? We’ll have three different models at the show. We’ll have a modular unit, which is a self-contained unit that sits outside the car wash. So if you have room outside one of your bays or your tunnel or even in the middle of your parking lot, you can put this up. It sits by itself.
You run utilities to it; it comes pre-assembled in one piece, and it’s heated and air conditioned and ready to be hooked up to utilities and ready to go. Then we have a model that’s equipment designed to go inside a structure. So, if you’re building a new car wash or if you’re renovating a bay, this would work for those instances. It comes pre-assembled from the factory, ready to be hooked up to utilities that are inside your pre-existing building. The third unit is specifically for a self-serve bay at a self-serve car wash; it’s our flip tub model. It’s installed inside one of the walls of the self-serve car wash so that it can be used without taking up your bay the whole time. You don’t have to convert the whole bay, now you can put the flip tub model in the bay -- it’s only 12 inches out when the table is up. When the customer wants to use it, they flip it down.
The table becomes a big enough area where they can put the animal up and wash them in the tub. This is a good option for carwashes that still have a semi-productive or profitable bay. This way you can keep some of the bay revenues as a car wash and also get pet wash customers. The flip tub does need a heavier marketing push -- and definitely customer education so that they know the pet wash is there and available to them. Because the unit is so small and compact, it’s sometimes lost in translation. It needs to be a bit more than just some signage in the bay. Is the flip tub as profitable as a complete conversion? How would the operator determine which option works best for his or her location? I think as long as you do the advertising correctly then a flip tub can be just as profitable as a fully devoted bay. Now, if the bay wasn’t performing that well to begin with, then it may not even be the best place to put a pet wash in anyways. If a self-serve bay is really underperforming you have to consider if it’s the car wash -- if it’s the location, if it’s the market area. What’s nice about these units is that they’re all modular. So, if your car wash or pet wash is underperforming, you can actually take the pet wash out and put it into a new location. What do successful pet wash sites have in common? I think an understanding of how to market, brand and advertise the pet wash. A lot of our savvy operators know that the best advertising is usually free advertising, so they go to the humane societies and partner up with them. For example, they might participate in some of the marketing programs we offer, like the token program we have. These tokens aren’t really a selling tool at the carwash, but they give the user a free wash. So they go to the humane societies, adoption facilities, or even police departments with K-9 units, and give out some tokens for free use of the wash. Once they use them, they’re hooked.
The humane society can give that token out to someone who adopts a new pet, and then their next stop is the pet wash. Our pet washes are fully customizable -- the colors, the logos. It doesn’t have to be an All Paws Pet Wash; it could be Kate’s Pet Wash or whatever you
choose. We have a design team here that works with the car wash owner to come up with that vision that fits their existing wash before the manufacturing of the pet wash unit even begins. Also, we operate FindAPetWash.com. That website locates all the pet washes throughout North America. You can click on the site that’s nearest you and it will tell you about the location, what kind of unit it is, tell you the pricing, the hours, and it can tell you about the existing business it’s attached to. So, if it was a carwash, you could have a few lines about the carwash -- how many bays, what kind of automatic wash it might have or whatever. Are there any industry norms as far as average monthly revenues for the pet wash? As far as converting a bay, it’s going to be a little bit less than if you had a single, modular building outside the wash. It also depends on how well they market and advertise the wash. Customers have found success when utilizing our free website program, FindAPetWash.com, and making use of our marketing materials and suggestions. How do weather patterns affect these pet washes? Are there any regions of the United States that are more successful than others due to climate? Choosing a pet wash design that fits your specific climate is the most important thing when pondering weather patterns. For instance, our ADA 13 units are available with heating and cooling systems. This option makes them available for year-round usage. These types of units are particularly useful in areas of the country where inclement weather is likely for many months of the year. However, our APW units are a great option for most areas of the country. Pre-fabricated awnings are an add-on option for operators who place their units outdoors, next to the outside of the building. It allows them to keep the self-serve bay open and have a pet wash without taking up too much space on the lot.
This type of unit can be winterized and used solely as a seasonal application. Are there any demographics or market area minimums that are needed to support a pet wash? Some communities have information regarding pet owner statistics available specific to their communities. It is safe to say that with the $62.75 billion (source:) estimated to be spent this year in the pet industry, every community has its fair share of pets. Compared to self-serve equipment maintenance and repairs, how technically savvy does a pet wash operator need to be? More recently we have adapted all of our pet wash units to include circuit boards. This type of system allows for comprehensive diagnostic checks at the push of a button. Pet wash maintenance tends to align itself well with pre-established car wash maintenance checklists. Assuming that operating systems are maintained and clear of clogs, pet washes tend to wear well.
#2
BIG BANG THEORY
Coin Laundries AT A GLANCE
Cost to get started: $$$ Approximate number of laundromats in the United States: 29,500 Expected monthly revenues: $1,250-$10,000 Industry resources and associations: Coin Laundry Association; Planet Laundry Magazine
Perhaps the most common business pairing, laundromats and car washes go together like peanut butter and jelly. This investment is typically best achieved at the very beginning of the investment and new construction process, although acquisition opportunities exist, and for some carwashes with extra large lots there may be the ability to construct an add-on building. Operationally, laundromats are most similar to self serve businesses that are minimally attended and rely on the operator’s mechanical experience for repairs and maintenance.
The Coin Laundry Association (CLA) has generously provided this information for self serve carwash owners interested in opportunities in the coin laundry industry.
SSCWN thanks the CLA for sharing such thoroughly researched and well written ideas! There are approximately 35,000 coin laundries in the United States, generating nearly $5 billion in gross revenue annually. Clean clothes, like food and shelter, are considered a necessity of life, and coin laundries provide a basic health service for millions of Americans. Business Cycle: The public will always need this basic health service -- people always need to wash clothes! Trends: According to the 2010 U.S. Census, 34.5 percent of the nation’s 116 million households were renter occupied. The number of coin laundry stores built over the past 70 years has grown steadily as the population has increased and shifted to more concentrated areas. The end result has been a mature, stabilized industry with predictable rates of turnover and values of existing coin laundries, development of new turn-key facilities, and equipment expansion and replacement. Market Value: Coin laundries normally sell for a multiple of their net earnings. The multiple may vary between three and five times the net cash flow for most transactions, depending on several valuation factors. The following primary factors establish market value: • The net earnings before debt service, after adjustments for depreciation and any other nonstandard items, including owner salary or payroll costs in services • The terms and conditions of the real estate interest (lease), particularly length; frequency and amount of increases; expense provisions; and overall ratio of rent to gross income • The age, condition and utilization of the equipment and leasehold improvements; the physical
attributes of the real property in which the coin laundry is located, particularly entrances/exits, street visibility and parking • Existing conditions, including vend price structure in the local marketplace • The demographic profile in the general area or region • Replacement cost and land usage issues This resale market standard assumes an owner/operator scenario, with no allocation for outside management fees. Marketing time for store sales averages 60 to 90 days, depending on price, financing terms and the quality and quantity of stores available at the time of sale. Coin laundry listings are generally offered by business brokers who charge a sales commission of 8 percent to 10 percent. Many coin laundry distributors also act as brokers. The accepted standard of useful life for commercial coin laundry equipment is as follows: • Topload Washers (12 lbs. to 14 lbs.): 5-8 years • Frontload Washers (18 lbs. to 50 lbs.): 10-15 years • Dryers (30 lbs. to 60 lbs.): 10-15 years • Heating Systems: 10-15 years • Coin Changers: 10-15 years This schedule will vary with usage, sales volume and maintenance. Useful life may differ for accounting or tax purposes. Operations and Performance: Coin laundry operations consist of four basic areas: • Janitorial • Maintenance • Collections • Employee management Bookkeeping, administration and banking are typically off-site management areas. A standard profit and loss statement for a coin laundry typically includes the following line items: • Income, consisting of wash and dry • Other income, which would include vending, drycleaning and/or wash-dry-fold service Expenses: Each category will have a percentage that varies from store to store and region to region. Interest charges, depreciation and other nonstandard items, such as owner salary, generally appear on tax returns, but are excluded from the standard profit and loss statement for purposes of valuation and determination of cash flow. Typical Categories: • Accounting
• Advertising • Insurance • Legal Costs • Licenses • Maintenance (Includes Parts and Labor) • Payroll (Usually Limited to On-Site Work -- i.e., Janitorial or Employees) • Personal Property Tax • Rent • Common Area Maintenance (CAM) Charges (Also Known as Net Charges, Including: Real Estate Taxes, Maintenance, Insurance and Other Charges) • Utilities (Gas, Water, Electric and Sewer) • Vending Expenses • Miscellaneous Costs (Including: Wholesale Drycleaning Costs, Fluff-n-Fold Supplies and Labor) Store performance is commonly measured in turns per day (TPD), which can range from a low of three TPD to as high as eight TPD or more. The primary factors affecting TPD include: population demographics, such as density and percentage of renters; capacity and quantity of the washers; the vend prices charged; prevailing market vend prices; and the quality and quantity of competition. Dryer income typically runs between forty and sixty percent of total washer income. Income and expense percentages may vary significantly for stores offering additional services such as drycleaning and wash-dry-fold. Summary: Today’s coin laundry industry is a strong and vibrant one. Even more appealing is the fact that it is a dependable business. The New Investor: Store Planning and Layout. Selecting a location is one of the most important aspects of going into business. Many distributors and laundry brokers can assist with this step. However, it is good to know the basics of what to look for in a location, such as: • Utilities. A location should have the capability to provide all the necessary utilities … water, sewer, gas and electricity. Be aware that there may be initial water and sewer hook-up fees, which are also called impact fees. They could cost several thousands of dollars for an entire store and should be carefully evaluated. • Visibility. Another consideration when selecting the location is a good, well-lighted building that is not too far from the street … preferably at or above road grade level. Good visibility from the outside through large windows is an important customer safety factor as well. • Accessibility. Avoid a location that is in a highly congested area where it may be difficult to get into and out of the parking lot. Anything that is a vision hindrance, such as shrubbery or another building that blocks the view of the store, should be avoided. Also, neighboring businesses that may not be compatible should be avoided. • Free Standing or Cluster. Decide whether the store should be free standing or in a cluster (such as in a shopping center). There are advantages and disadvantages to both. Free-standing buildings offer more choice in layout, but strip center laundries have the advantage of late-night activity and ample parking. There are a lot of benefits for a potential store owner in a strip center … primarily a long-term lease, which is the security most landlords are seeking. Designing for Success: Before breaking ground for a new store, signing a lease or gutting an old store “for extensive remodeling,” thoroughly analyze the interior area.
Analyze it as to its business potential for the maximum number of pieces of equipment that can fit into the building and still provide the number of turns per day per unit needed to provide the desired return on investment.
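The valuation and performance figures above lend themselves to a quick back-of-the-envelope model. The Python sketch below ties them together; only the three-to-five-times net-cash-flow multiple, the 3-8 TPD range, and the 40-60 percent dryer-income share come from the text -- the store size, vend price and expense ratio are purely hypothetical assumptions for illustration.

```python
# Back-of-the-envelope coin laundry model (illustrative numbers only).
washers = 30           # machines in the store (assumed)
avg_vend_price = 3.50  # average vend price per wash cycle, dollars (assumed)
tpd = 4.0              # turns per day per washer (article range: 3-8)
dryer_share = 0.50     # dryer income as a share of washer income (article: 40-60%)
expense_ratio = 0.65   # total expenses as a fraction of gross income (assumed)

washer_income = washers * tpd * avg_vend_price * 365   # annual wash income
dryer_income = washer_income * dryer_share             # annual dry income
gross_income = washer_income + dryer_income
net_cash_flow = gross_income * (1 - expense_ratio)     # before debt service

# Market value: three to five times net cash flow, per the guide above.
low_value, high_value = 3 * net_cash_flow, 5 * net_cash_flow
print(f"Gross income:    ${gross_income:,.0f}/yr")
print(f"Net cash flow:   ${net_cash_flow:,.0f}/yr")
print(f"Indicated value: ${low_value:,.0f} to ${high_value:,.0f}")
```

Changing any input shifts the result proportionally, which is exactly why the valuation factors above (lease terms, equipment age, demographics) matter: they move the multiple within the three-to-five range.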
Any store must be designed for maximum profit per square foot of floor space. Once established, set about providing those customer convenience benefits that complement the layout and equipment configuration. Consider the following: 1. Doors. This first principle has to do with getting people into the store … the entryway. If customer convenience is paramount, consider installing automatic doors. Customers who are carrying bundles of dirty laundry into the store will appreciate automatic sliding doors. Other operators who have installed automatic doors report that they receive more compliments for this convenience than for anything else. 2. Immediate Observation Area. Inside each entryway, allow approximately an eight-foot-by-eight-foot square of space before a customer encounters the first piece of equipment. This “breathing space” provides customers with the opportunity to become oriented upon entering the store. It also gives them a chance to stop, look and decide which direction they are going to take without bumping into someone else who is trying to do the same thing. It also allows them an opportunity to pause to greet or say goodbye to friends on the way in or out of the store without blocking the door itself. 3. Aisle Ways. Ensure that there are no aisles in the store that are narrower than five-and-a-half feet. Be equally cautious not to make aisles too wide, because that will be a waste of valuable floor space. In observing patrons, one can almost see aggravation levels rise if two approaching customers who are pushing carts in opposite directions are unable to pass by each other unhindered. Designing five-and-a-half-foot aisles should avoid those unpleasant head-on collisions. 4. Workflow. Try to establish a smooth workflow. This is probably the most important guideline of all — smooth workflow from washers to dryers to folding tables.
One good way to accomplish this is by using multiple washer islands installed perpendicular to the dryer line. According to surveys taken over the years, 50 percent of customers would rather push dry clothes in a cart the shortest distance; 20 percent would prefer to cart wet clothes the shortest distance; and 30 percent don’t care one way or the other. If these percentages hold true for a particular location, it would appear that folding tables should be located closer to the dryers than to the washers. In planning for table space, a good rule of thumb would be about 15 square feet of table per three dryers. 5. Large Capacity Washers. Locate large capacity machines as close to the front of the store as possible. In fact, try to position these big machines very close to the doors. 6. Finishing Touches. Finishing touches can be defined as those little things that provide a store with the personality, uniqueness and atmosphere all its own. One little touch is to use a lighting consultant. Most local electric companies employ an individual with this type of expertise. They engineer the building lighting for proper human comfort levels in relation to the task that is to be performed. This is important because too little or too much light in the wrong place can become very discomforting to customers. Subconsciously they will appreciate the fact that they have the proper type and amount of lighting. The lighting will also have a big effect on how clean their clothes appear when they are washed and dried in the store. Also, if space is available, try to provide a specific area that can comfortably accommodate soap products and snack-type vending equipment (e.g., candy, coffee, soda, popcorn machines). Vending machines are a profitable as well as a desirable convenience that customers seek. Remember to provide the customers with plenty of laundry carts. About half of the carts should have hanging racks on them. The number of carts will depend on customer usage, but a minimum of one per four to five washers would be advisable. When decorating, don’t be afraid to use bright colors, specific themes or even wild décor. Generally, customers will like it, and the creative decorating will provide the laundry with its own personal identity. Once established in the customer’s mind, this store image or identity can be used to advantage in advertising and promotional endeavors. Additional Tips • If possible, for safety, design the laundry allowing unobstructed visibility from the front to the back. • Install a ceramic or other type of slip resistant tile floor that will help minimize slips and falls and will look better and last longer. • Work with an expert to help design the store — find a CLA distributor member. Who Should Design the Space? • Is the person knowledgeable and dedicated to the industry? A fully knowledgeable design person will be an expert on local or national trends in laundry design. • Is this person a proven winner? Has he or she worked successfully with design concepts before? • Is the person forthright and does he or she have a good reputation?
When selecting someone to help with space management, be sure that person will be direct about what is needed. Make sure the information is valid and that he/she is not saying something just to sell more equipment. Selecting the Equipment Provide the maximum number of machines to provide the number of turns per day needed for the desired return on investment. Once the equipment layout is determined, analyze how to provide convenience for the customer, which complements the equipment layout. Designing for maximum space utilization involves the size requirements and mix of the equipment. From this data, determine the number of toploaders, frontloaders, large capacity washers, dryers, extractors, bill and coin changers, soap and bag machines, carts, folding tables, water heaters and storage tanks needed. These are the significant items that
will occupy the space in the store. Where they go will depend upon the design of the store itself. Of course, the space design will depend upon the design of the building … the walls, window locations, door locations, bathroom plumbing, existing plumbing, gas and electrical, height of ceilings and room shapes and sizes.
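The rules of thumb scattered through this guide (about 15 square feet of folding table per three dryers; at least one laundry cart per four to five washers, with roughly half carrying hanging racks) can be applied mechanically once the equipment mix is chosen. A small sketch, with assumed machine counts rather than figures from the guide:

```python
import math

# Assumed equipment mix, for illustration only.
washers = 30
dryers = 20

table_sqft = 15 * (dryers / 3)               # ~15 sq ft of table per 3 dryers
min_carts = math.ceil(washers / 5)           # at least 1 cart per 4-5 washers
carts_with_racks = math.ceil(min_carts / 2)  # about half with hanging racks

print(f"Folding table space: about {table_sqft:.0f} sq ft")
print(f"Laundry carts: at least {min_carts}, {carts_with_racks} with hanging racks")
```

Running the same arithmetic against the actual floor plan quickly shows whether the building can hold the equipment mix the market demands, before anything is ordered.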
Essential Equipment and Chemicals for Start-Ups (suggestions provided by the International Detailing Association (IDA), a leading industry association for professional detailing operators, suppliers and consultants to the industry)
Equipment: • Pressure Washer • Hot Water Extractor • Dry Vapor Steamer • Rotary Polisher • D/A Polishers of various sizes • Wet/Dry Vacuum Cleaners
Chemicals: • APC • Car Wash Soap • Wheel Cleaner • Tire Dressing • Polishes & Compounds for paint correction • Waxes, Sealants & Coatings for paint protection • Leather cleaner and conditioner • Glass cleaner • Interior cleaner
Training Resources List. php?51768-Detailing-Class-Information
The Finishing Touches Now that the parts of the puzzle are complete, try to put them all together. The number of items desired, the actual placement of those items and the spacing between them will comprise the utilization of space. Development, innovation and changes in equipment can provide for new approaches to store design and space relationships. Here are some recent advances in store layout: • The size of laundries has for years been referred to in amount of square feet or number of washers and dryers, but another common size measurement is the number of pounds the store is capable of washing per day or per hour. Of course, large capacity washers allow more volume to be washed in less space. Likewise, stacked dryers provide more revenue in less space. • The greatest bottlenecks in stores occur near the dryers. Most stores have enough washers. Because drying takes longer than washing, a bottleneck (or backup) is often created by the dryers. If adding washers, especially large capacity washers, consider expanding the dryer capacity of the store as well. • Just as the flow in aisles is important for customer convenience, so is the overall workflow that is created by the store’s layout. Be sure there is a good flow from washers to dryers to folding tables. Newer folding tables take advantage of space utilization techniques by providing shelves, which can increase folding capacity by 30–40 percent. Each store will have different layout and design requirements that are based on equipment mix, the services provided and the customer makeup. The actual size and shape of the store should be determined only after all the data has been collected that helps determine the pounds of wash the market demands. Fully analyze the situation before going to the drawing board. When ready, make full use of the space by employing the techniques discussed.
Given the choice of a free market, cleaner, safer and better-maintained stores, a good layout and design can make the all-important difference between a customer choosing and using the store over a competitor’s store. The Importance of a Distributor: Choosing a distributor is one of the most important decisions a laundry owner will make. Over the years, the distributor has evolved into an integral part of the success not only of the manufacturer (by selling equipment) but also of the store owner (by selling knowledge, support, financing, service, design, demographics, marketing and equipment). Laundry equipment is not a product that can be taken out of a box, plugged in and used. The equipment needs to be taken off a truck (some products weigh thousands of pounds), rigged into the store, set in place, bolted down and hooked up correctly. Doing this properly requires a professional. Most equipment failures and problems stem from an improper installation. Machines improperly installed typically have premature bearing failures, vibration problems and drain problems. The money saved by buying direct is typically surpassed by costs in repairs and perhaps premature replacement. Before purchasing a product, here are a few services the distributor should provide: 1. Examine the store’s specific needs and recommend the right products to achieve the owner’s goals. 2. Check on new construction regularly to ensure that all stages of the build-out go as planned. 3. Check installation parameters. Will the equipment fit through the door? Is the right flooring in place to support the equipment? Does the store have the right utilities? Does the equipment fit into the space properly? 4. Check delivery times from the manufacturer. 5. Recommend the right manufacturer for the store’s needs. 6. Answer all questions about the products being purchased, including why a specific product is recommended and how it compares with the competition. Another reason to consult a professional is to gain information from his/her experiences. This experience with products makes the distributor’s professional opinion a valuable component of the decision-making process. Most people have made a purchase they later regretted. Another professional’s opinion can help mitigate that feeling of buyer’s remorse. Once the order is placed with the distributor, the work really begins. Here are some of the distributor’s responsibilities: 1. Ordering the proper equipment with the right voltage and specifications from the manufacturer. 2. Coordinating the sale with a finance company, if financing is desired. 3. Tracking the order with the manufacturer. 4. Accepting the product either at the warehouse or at the job site; meeting the truck and taking the product off the truck; rigging it into the store and setting it in place. 5. If contracted, bringing the proper utilities to the machine; bolting it down, if necessary, and checking the start-up. 6. Training owners on proper use of the product and any preventative maintenance procedures they might have to perform; providing a maintenance schedule. 7.
Providing service when needed, including warranty service as negotiated when the product was purchased. 8. Providing warranty parts through the manufacturer. What to Look for in a Distributor: When selecting a new supplier, use prudence, especially if the store owner is new to the business. Check references with the Better Business Bureau, banks, credit bureaus or Dun & Bradstreet. Distributors who are members of the Coin Laundry Association (CLA) are among the most reputable and best qualified to provide assistance. When selecting a distributor, look for the following qualities, found in many credible businesses (inside the industry and out), which can help you make the decisions that will benefit the business most. The distributor should be able to demonstrate the following criteria:
• Offers honesty, integrity and credibility in addition to prompt and professional service • Attends industry activities and absorbs as much information as possible • Seeks ways to deliver high-quality service to customers, including the utilization of excellent and experienced installation crews • Offers reasonable financial assistance programs • Helps make decisions that are good for the business • Treats owner’s investment as if it were his/her own • Provides regular service training programs on equipment • Does not speak negatively about competitors or their products • Maintains an ample supply of parts with rapid delivery • Furnishes floor plans, equipment costs, pro formas and holds in-person meetings with construction people as part of the full service • Has superior knowledge of the industry and the products they represent • Possesses demographic knowledge and interpretive capabilities. Local distributors are a valuable link to the industry. They attend local and national trade shows to learn more about the industry. They network with other distributors and pass along the information to their customers. They participate in trade associations like the Coin Laundry Association to continue their education and to give back to the industry. The relationship with the distributor should be a working partnership for the future. Choosing the right distributor can impact the store’s financial future; determine how much that investment is worth. No matter what the purchase, price is only one part of value. Research distributors to find the one who illustrates the “value” in what he/she offers. Not all distributors are of the same quality. Interview them all in the trading area to find one who fits the descriptions above — and it becomes evident that price is not everything. Developing a relationship with the local distributor should result in a long-term collaboration that will enable the store to be even more successful. 
Financing

There are many options out there for veterans and newcomers to the industry. Financing for a brand-new laundry is considered by banks as venture capital, a market in which they are not typically involved. Being regulated by the government, most banks are precluded from using bank funds for start-up businesses unless they have 100 percent collateral outside of that business. For owners who are building an additional store, such as a second or third laundry, the loan would no longer be considered a start-up. It would often be referred to as an expansion, and banks have more leeway in this case. To facilitate the sale of laundries for their customers, many manufacturers offer in-house finance programs. Just as automobile dealers do, the manufacturers of laundry equipment provide financing as a means to sell equipment. Having control of their own funds allows this "captive" finance company to look beyond the start-up nature of the financing and be able to finance qualified individuals for the right location. There are also several independent finance companies that specialize in the laundry business and can provide financing for start-up businesses.

Financing for a new laundry can be in the form of traditional financing or equipment leasing. As there is a substantial amount of equipment in a laundry, it lends itself to leasing companies that can get around the start-up nature of the business by using the equipment as collateral. However, be wary of traditional leasing, similar to leasing a car, where a large amount of money is due at the end of the lease to complete the purchase, or in some cases the amount is left open for the leasing company to determine. This is too risky. Consider a "finance lease," where the purchase option at the end of the lease is minimal or "one dollar" and it is in writing upfront. Although it does not always seem to be, financing for start-up laundries (or any other business) is typically more expensive than traditional bank financing. As secondary sources of money, leasing companies and captive finance companies purchase their money from banks and other sources and turn around and lend it to consumers at a higher rate. Because of the risky nature of start-up businesses, they also have to build in a "reserve" for possible bad debts.

The monthly payment is the key factor in the cash flow of any business. In the laundry business it is paramount. A good location can succeed or fail over two items: rent (or mortgage) and the note payment. The vend price charged is pretty much determined by the local market conditions (or what the market will bear) and the expenses of a laundry are mostly fixed; the only variables become the rent and the note.

• SPRING 2016 •
When examining a new laundry location, always put the equipment mix at the real vend prices and the actual expenses and rent into a spreadsheet including the note payment. Then there is the entire subject of fixed-rate financing vs. floating. As in home mortgages, floating rates are always lower upfront, but can rise with prime rate fluctuations. Most accountants will arrange for borrowing short term at floating, but long term at fixed; however, this is your choice. Fixed-rate financing is a guaranteed monthly payment for the entire length of the loan, and with this type of financing there can be no surprises no matter what happens to the economy. Most lenders prefer floating as it protects them, not the consumer. Banks typically borrow their money floating, so they charge more for fixed as they do not want to take a gamble. Ask any lender what their floating and fixed rates are; the fixed will always be higher. Assuming that the location qualifies with the lender, the owner will need to provide information and be willing to invest money of his or her own. Typically the bank or lending institution would like to see an investment of 20 to 30 percent of the entire project. The project will include equipment,
installation and leasehold improvements. Plus, the owner should have additional funds for start-up costs such as utility deposits, initial advertising campaigns and a reserve, which will carry the owner until the break-even point is reached. That means the bank or lending institution is investing up to 80 percent of the cost of the business and has more money invested than the owner does. To process an application, the lending source will usually need the following items:
• Credit application signed to authorize a credit investigation
• Bank verification forms as proof of funds to be invested
• Personal financial statement
• Last two years' personal tax returns
• Details of anything that might show up on the credit report
• Location analysis including a demographic study
• Cash flow forecast or pro forma
• Signed sales agreement
• Any business financial statements for full or part ownership

The bank will need all of the above, plus other information that is required by their bank policy. Then they will need some time to verify the information and go to their credit committee. Typically this could take five to seven business days, but it varies from lender to lender. What usually takes the most time is getting back the bank verification forms from the bank. To speed up the process, take the forms into the bank and wait while the loan officer fills them out. For financing of replacement business, the process is much easier and faster. More lenders will finance expansions or replacement equipment at lower rates. Some do not understand the nature of a cash-based business and will require a business financial statement and proof of cash flow that will be adequate to repay the loan. Shopping for financing is the same as shopping for a distributor. The owner must feel comfortable enough with the lender so that they will enjoy a
long-term relationship. Over the course of five to seven years many things can happen. Will the lender be a good partner or “fit” for the business? Does the lender understand the long-term goals and can the lender grow with the business? Does the lender understand the business and can it be an asset to the future plans? Prospective owners should interview
the lender as the lender interviews them. All of the above information was provided by the Coin Laundry Association. Many thanks to the CLA for providing it to SSCWN!
#3
BiG BANG THEORY
Quick Lube AT A GLANCE
Cost to get started: $$$$
Expected monthly revenues: $50,000
Estimated number of quick oil change stations in the United States: 10,000
Quick lubes are the polar opposite of the previously covered laundromats: They’re management intensive, require lots of employee juggling, and need a very “hands on” approach. Of interest to many self serve operators may be the franchise opportunity that is heavily present in the quick lube industry. The quick lube presents high potential revenues but requires the most work and investment.
Bryan White, Executive Director, AOCA
Tell me a little bit about your annual expo, iFlex, which is going on in Nashville, May 9-11. This year we're co-locating the iFlex show with the ICA's show. Really, the impetus for that was that they're so complementary to one another; there are a lot of car wash owners who have quick lubes and vice versa. A lot of crossover membership. So it made sense for us from an attendee standpoint and also an exhibitor one to co-locate these events. It's a lot of the same vendors and operators. Even operators who are not in carwash -- or even carwashers who aren't into quick lube -- well, they may want to get into that at some point in time because they're so complementary. So it made sense to have them all together.
What's the size of the AOCA's involvement there? Altogether, the whole show this year will have its numbers combined. So, the Western Carwash Association, the International Carwash Association and now also the AOCA. We're anticipating 6,500 attendees, and about 370+ exhibitors. The quick lube side of that is probably about a third.

So how does the quick lube industry look in 2016? How did it handle the ups and downs of the last ten years? Obviously, a lot of the quick lubes struggled during the recession. During many of those years, a lot of the quick lubes had to change their business model. They had to diversify and offer a lot of add-on services. It was a little departure from the "quick" side of quick lube. But at the end of the day, you had to do some of the ancillary services -- the margins were higher. The oil change itself might get them in the door, but during the recession they had to go to a more full service model. The downside was that for some customers, the value wasn't there. Or at least the value they had come to expect and that they were looking for. Many of those same operators who wonder why the customer car counts are down are probably not realizing how many customers were after that "quick" oil change service. But I would say that the operators who remained true to the fast oil change service, well, they've probably grown in volumes and profits since 2007 and at a pretty steady pace, I think. At the end of the day, customers are still willing to pay for convenience. It gives them back the time they need in their busy day. So, for stores that kept their focus on a fast service and maintained their car counts, well, their revenues have been up over the years. I would say that the net physical store growth has slowed a bit over the past couple years. Part of that is due to consolidation; many of the large owner/operators became consolidators. Those owners that remain successful are now buying up their one-time competitors and updating the business model.

Did more customers start doing their own oil changes? Obviously, some people started doing oil changes at home, although I don't have any raw data on that.
But most of it was probably the extended drain intervals. These oils are advancing, and now you don't necessarily have to come in every 3,000 miles. A lot of customers are going 5,000 or even 6,000 miles. Those extended drain intervals have really affected the marketplace. The technology gets better and now they're bringing their cars in less often, which obviously affects the car counts. There are new stores being built in the marketplace, of course, but there are just so many acquisition opportunities out there, so the growth dynamic has changed a little bit. That may change in the next couple years -- it came from that 2008 period when business got tough, and the operators that fought their way through it and came out on top by maintaining their car counts were able to acquire those smaller shops that couldn't make it. So now we're in that consolidation period, but at some point in time, that will cycle through and we'll be in a new growth pattern. It just ebbs and flows and goes up and down with time.

So what sets those operators apart --
the ones who are able to fight through a recession? I would say the big thing for a successful quick lube really comes down to customer service. You make your money out of high ticket averages, controlling expenses and providing a high level of customer service that attracts loyal customers. Quick lubes don't necessarily have the attraction rate of a car wash. The typical quick lube capture rate -- I think the stat is like .12 percent of highway traffic. An oil change is more of a planned purchase, while a carwash is typically an impulse. So enhancing that customer value and working the marketing plan, too, are crucial to their success.

Has the quick lube industry embraced social media over the last few years? We're starting to; it's maybe a little slow as compared to other industries. But they realize the importance of it now and most of them are doing something. The AOCA actually just partnered with a company: Piston Marketing. They're a strategic partner of ours now, and they provide marketing and consulting services -- like social media management, search engine optimization, couponing -- anything marketing related. And if you're an AOCA member you'll get a discount on any of their services. We as an association have recognized the importance of social media and we're trying to give our members tools and discounts to help them improve their businesses.

What are some of the bare minimums for location building, equipment, staff, etc.? I would actually refer to National Oil & Lube News magazine. They run a survey every year that's pretty helpful, the Fast Lube Operator Survey. (Chart below.) (Editor's Note: The responses for the 2015 Fast Lube Operator Survey account for 3,282 facilities and include fast lubes operating in all 50 states.) Basically, what they're finding is that for locations in high traffic trade areas where other business is being done, the land and building for a quick lube ranges from $850,000 to $1.3 million. Equipment, on average, is $81,493. Average revenue is $694,189.

About how many quick lubes are there in the United States? There's probably a little bit less than 10,000 locations. We have 500 AOCA members, and they represent 3,500 locations. So obviously a lot of these operators own multiple locations.

Tell me a little bit about the franchising vs. independent ownership opportunities in the industry and the advantages and disadvantages presented there. That's quite different from what we know in the carwash industry. The franchise model probably includes some of the largest owner/operators in the industry. As with many industries, not just quick lubes, there are business people who prefer to be independent and develop and operate their own businesses -- and then there is that other group that chooses to mitigate their risk by choosing a franchise model that's been a proven success. With a lot of the franchises, the operating model and the brand are already implemented. If you're a new operator and you tap into a franchise model, then you could potentially grow your business further and faster because you have those resources. But at the end of the day, there's success on either path. A lot of it will actually just come down to how much experience the business person has -- the technical oil change knowledge is critical, but it's not the only knowledge necessary. The principal understanding of consumer interaction and customer service, of developing and managing a team, and the elements of marketing -- I mean, whether you're operating a franchise or you're an independent, those are the skills that matter.

Do you know about how many locations are
franchised? Well, National Oil & Lube News does a survey every year of the largest fast lube chains out there and based on their numbers, I think we could estimate that there's about 2,900 or 3,000 franchised locations. A lot of these franchisors -- like the Valvolines and the Oil Changes -- they might have 600 or so franchised sites, but they might also have 300 independently owned stores. Some of these models are a little mixed; they might not be an actual franchise, but they're branded that way.

So what are some of the advantages of an AOCA membership? How has the Association grown over the last few years? We're dedicated to providing members with business tools, resources, and education to help them run their businesses. From an Association standpoint, our membership has remained pretty steady. We are getting new members -- but because of that consolidation that we talked about, it's leveled out. We operate on that company membership dynamic. I think another advantage we provide is the opportunity to network with operators, whether at iFlex or just through the Association, who have been through the same challenges. That's important. We also provide technical updates, training sessions, technician training online -- a lot of different educational resources. We also have a large foothold in government affairs. We have a policy advisor on retainer who works on everything from the federal to the state and even local level to look out for the best interest of our operator members.

How involved are you with politics? We're not to the point where we have a Political Action Committee, but we're pretty involved. We're not soliciting funds from the membership for this -- we simply take a portion of the fund from our membership revenue each year, a pretty large portion actually, and it goes strictly to government affairs activities. It might be legislation that affects employment -- anything that would negatively affect the quick lube operator, we would fight on their behalf.
Basically, without having a PAC, we don't have a lobbyist on staff -- we just have an advisor. But there are very common issues across the board, so we might enter into different coalitions -- say with the Car Care Council or something like that -- and try to pool our resources together with other associations so that we have a bigger voice. But our advisor keeps her eye on the things that are going on in the industry and makes recommendations as far as how we should allocate our dollars and our resources.
#4
BiG BANG THEORY
Detailing AT A GLANCE
Cost to get started: $
Expected monthly revenues: $5,000-$15,000
Industry resources and Associations: International Detailing Association, Auto Detailing News, Editor Debra Gorgos debrag@autodetailingnews.com
The easiest profit center to add onto an existing self serve business is detailing, which requires minimal equipment and additional space. Detailing does require a dedicated employee presence and management, but if managed correctly offers the opportunity for a high profit margin.
Dave Meusky, House of Wax Touchfree Car Wash and Detail Shop, Orange, MA

Tell me how you got started with the pet wash. We bought an existing car wash that had been run down. We bought it at a Sheriff's sale, actually. The owner had invested…

In your experience, what are some of the synergies between self serve carwashing and detailing? These are the synergies: they directly complement each other. The car wash is the first step in the exterior detailing process. The self service bay is where the process begins. A great self service wash is where a great detail takes shape. Self serve customers also often need detail services, so there is some crossing over of customers between the two businesses.

How much space would the self serve operator need to create a detailing service? Any suggestions for those who might consider renovating an underperforming SS bay? You need ample space for 2 cars at a time plus tools,
equipment and supplies. You need room for a washer and dryer and shelves for your towels. My detail shop is 40' x 30' with room for 4 cars at once.

What sort of profit/volumes can self serve operators expect when they add detailing services? It depends on how you market and perform and how you are staffed. During peak seasons for detailing, we can perform 4 full service details per day. On slower days we detail one car or two.

Are there any unique marketing or advertising needs for a detailing business as opposed to those the self serve operator might be familiar with? What about social media? Social media is okay, but my favorites are the restaurant placemats and local newspaper ads. I just bought some custom printed t-shirts with our name on them, so I guess we'll see how that works!

Considering operator personality, what sort of owner types do well with detailing? What are some common traits that successful detailers share? Industry leaders like Bud Abraham, Ron Holum and Robert Roman will tell you that detailing is all about the numbers, and they are right. Things like cost of goods sold must be continually analyzed, because as costs go up, a detailer must adjust pricing to stay healthy and to thrive in the marketplace. The detail manager is a people person who is an expert at understanding and meeting a customer's needs and expectations. A detail manager must possess expertise at closing the sale. A good detail shop delivers more than its promise to the customer and handles complaints properly, in a way that wows the customer and strengthens the relationship with that customer. A detail manager is friendly and kind but also firm (with employees and customers, as needed).

What makes the detail operation successful? Size? Location? Volume? Pricing? Quality of workmanship is really the key. That and a price point that is affordable for the customer, yet reflects the hard work and expertise that goes into every detail job.
Darwin at the Carwash
Yo, Darwins! If you're going to commit an armed robbery at a carwash, we'd suggest doing it outside of Texas. Otherwise, you're liable to find yourself staring down the barrel of a gun… That's exactly what happened when a Dallas car wash owner shot an armed man who was trying to rob one of his customers. According to The Dallas Morning News: Around 5:30 a.m., the owner of a car wash in the 1600 block of South Belt Line Road saw three men point guns at a customer and demand money. The owner, who has not been publicly identified, went outside with a gun to confront the men, police said. When the men turned toward the owner, he fired at them, and they fled in a silver vehicle. Around 7 a.m., a man went to Methodist Charlton Medical Center with a gunshot wound. Police said the man told detectives that he was involved in the robbery at the car wash. The man will be charged later today, police said.

This poor woman was absolutely shocked when she came face to face with... well, a face at Super Car Wash in Livingston, MT, two years ago, and now she's seeking compensation from the parties she believes are responsible. According to news reports, the face was left on the carwash floor after a truck driver hit and killed an elderly man on a rural road and another vehicle "unwittingly" ran over the body some time after the accident. The second vehicle later went to the carwash where the owner, Wyran Young, unknowingly washed off the face, and Kimberly Kreig was the customer who eventually discovered -- and immediately reported -- the face. Kreig is now suing the truck driver's employer for her medical expenses, lost income, negligence and emotional distress. According to Kreig, after reporting her discovery, police "treated (her) as a criminal suspect" by having her car impounded, demanding a blood sample and holding her for several hours. Young maintained she thought she had hit a pile of clothing and not a dead body, and all charges against her were dropped.
The truck driver was given a six-year suspended sentence in October 2014 for knowingly being involved in an accident involving a deceased person.
This story is pretty frightening for anyone driving a car with a built-in GPS system... and maybe a good argument for choosing a clean, safe, and secure self serve carwash where you won't have to hand over the keys. A woman in Melbourne, Australia, described how thieves stole her vehicle from a drop-and-shop car wash and used her vehicle's GPS to find her home and attempt a break-in there. The woman's account of the story on Facebook has already been shared 3,000 times. "I went to pick up my car about an hour-and-a-half after I'd dropped it off and the gentleman there said, 'oh, it's already been picked up'," she told 774 ABC Melbourne. "I thought it was a joke for the first 30 seconds." She said the car wash attendant told her that the man who collected the vehicle knew Ms Kozbanis's name and mobile phone number. "I said, 'no-one even knew I was coming to drop my car off here, not even my husband, you've just given my car away — now what?'," she said. "He just didn't really seem to know what to do about it." According to the report, Kozbanis believes the thieves used her car's GPS to locate her home and went immediately there, using the remote garage opener to access a back door, which they started "banging" on. Kozbanis's two daughters, aged 11 and 14, were inside and alone. They hid upstairs. "My daughter was trying to call me to tell me that they were banging down the door, but I was on the phone with the police," Kozbanis told the news station. The girls were able to get ahold of Kozbanis's husband, who called local authorities. He arrived at the house shortly before police; the thieves had left by then. Kozbanis said the police have told her the thieves would probably return to try again at a later date. Identifying paperwork for her business was also in the car. "Obviously we've had our locks changed, but there's always that worry," she said. The Chadstone car wash where the theft occurred is one of more than 60 operated by Star Car Wash nationwide.
Star Car Wash chief financial officer Kevin Gordon said the incident was the first of its kind for the business.
Canadian Darwins are a different breed, that’s for sure. I’ll bet these goofballs are really, really sorry after they were caught cleaning a stolen truck at a Hamilton, Canada, car wash. According to The Hamilton Spectator, “Police nabbed a trio of alleged truck thieves after they pulled into a south Mountain car wash to gussy up the stolen ride. Officers blocked the GMC pickup as the driver was exiting the car wash bay at the Upper Gage Avenue and Stone Church Road East gas station just after 6 p.m. Tuesday. Two female passengers in the truck were arrested. But the male driver tried to get back into the truck and struggled with officers, who managed to control him, police said. Police have charged three people in connection with the alleged truck robbery.”
He said the business's head office had only been made aware of the theft this week, when Kozbanis posted on its Facebook page. "We are now doing everything we need to do as a company to resolve this satisfactorily."

Darwins of the world -- or maybe just the ones in Florida -- please consider your clothing choices before embarking on a crime spree. A tank top with #TURNT on it? Really? Has the pajama bandit taught us nothing?! Employees at Rising Tide Car Wash confronted a thief after they noticed he was going through their cars, according to a report by WPLG Local 10 News. The victims were able to get some of their belongings back from the thief, including cash and a wallet, before the suspect fled the scene. Surveillance videos captured the thief inside a nearby convenience store before the burglary, and leaving the car of one of the victims and placing the stolen items into his pockets. Detectives describe the thief as a stocky-framed man in his 20s with short, brown hair, a brown goatee and a large tattoo on his left arm. The thief was last seen wearing a black tank top with a green design of the phrase "#TURNT" on the front, baggy black shorts and a black baseball-style cap.

The only word for this story: Ugh. And because it is so rich in detail, here it is in full from Aimee Green of The Oregonian: The traffic…

One cries because one is sad. I cry because others are stupid, and that makes me sad.
To be caught on the surveillance camera of one business, fine ... but two? These lovely Darwinettes were captured by surveillance cameras at an Oklahoma City car wash as they approached a customer washing his car, and then shortly after they showed up at a local Walmart and used the stolen credit cards. According to the report by KOCO, the victim was "washing his car when he noticed a maroon Buick park near the vacuums." The victim said the two women approached him; one woman came up behind him and held something to his back, while the other went through his pockets. At some point during this incident, a male who was with the two females walked up and was part of the robbery. Police are hoping someone in the community can identify the two women shown in the photos, who managed to pocket $300, as well as the man's credit cards and identification.
Opening and managing an SFML window
Introduction
This tutorial only explains how to open and manage a window. Drawing stuff is beyond the scope of the sfml-window module: it is handled by the sfml-graphics module. However, the window management remains exactly the same so reading this tutorial is important in any case.
Opening a window
Windows in SFML are defined by the sf::Window class. A window can be created and opened directly upon construction:
#include <SFML/Window.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "My window");

    ...

    return 0;
}
The first argument, the video mode, defines the size of the window (the inner size, without the title bar and borders). Here, we create a window with a size of 800x600 pixels.
The sf::VideoMode class has some interesting static functions to get the desktop resolution, or the list of valid video modes for fullscreen mode. Don't hesitate to have a look at its documentation.
The second argument is simply the title of the window.
This constructor accepts a third optional argument: a style, which allows you to choose which decorations and features you want. You can use any combination of the following styles:

sf::Style::None: no decoration at all (this style cannot be combined with others)
sf::Style::Titlebar: the window has a titlebar
sf::Style::Resize: the window can be resized and has a maximize button
sf::Style::Close: the window has a close button
sf::Style::Fullscreen: the window is shown in fullscreen mode (this style cannot be combined with others, and requires a valid video mode)
sf::Style::Default: the default style, which is a shortcut for Titlebar | Resize | Close
There's also a fourth optional argument, which defines OpenGL specific options which are explained in the dedicated OpenGL tutorial.
If you want to create the window after the construction of the sf::Window instance, or re-create it with a different video mode or title, you can use the create function instead. It takes the exact same arguments as the constructor.
#include <SFML/Window.hpp>

int main()
{
    sf::Window window;
    window.create(sf::VideoMode(800, 600), "My window");

    ...

    return 0;
}
Bringing the window to life
If you try to execute the code above with nothing in place of the "...", you will hardly see anything. First, because the program ends immediately. Second, because there's no event handling -- so even if you added an endless loop to this code, you would see a dead window, unable to be moved, resized, or closed.
Let's add some code to make this program a bit more interesting:
#include <SFML/Window.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "My window");

    // run the program as long as the window is open
    while (window.isOpen())
    {
        // check all the window's events that were triggered since the last iteration of the loop
        sf::Event event;
        while (window.pollEvent(event))
        {
            // "close requested" event: we close the window
            if (event.type == sf::Event::Closed)
                window.close();
        }
    }

    return 0;
}
The above code will open a window, and terminate when the user closes it. Let's see how it works in detail.
First, we added a loop that ensures that the application will be refreshed/updated until the window is closed. Most (if not all) SFML programs will have this kind of loop, sometimes called the main loop or game loop.
Then, the first thing that we want to do inside our game loop is check for any events that occurred. Note that we use a
while loop so that
all pending events are processed in case there were several. The
pollEvent function returns true if an event was pending, or false
if there was none.
Whenever we get an event, we must check its type (window closed? key pressed? mouse moved? joystick connected? ...), and react accordingly
if we are interested in it. In this case, we only care about the
Event::Closed event, which is triggered when the user wants to close
the window. At this point, the window is still open and we have to close it explicitly with the
close function. This enables you to do
something before the window is closed, such as saving the current state of the application, or displaying a message.
A mistake that people often make is forget the event loop, simply because they don't yet care about handling events (they use real-time inputs instead). Without an event loop, the window will become unresponsive. It is important to note that the event loop has two roles: in addition to providing events to the user, it gives the window a chance to process its internal events too, which is required so that it can react to move or resize user actions.
After the window has been closed, the main loop exits and the program terminates.
At this point, you probably noticed that we haven't talked about drawing something to the window yet. As stated in the introduction, this is not the job of the sfml-window module, and you'll have to jump to the sfml-graphics tutorials if you want to draw things such as sprites, text or shapes.
To draw stuff, you can also use OpenGL directly and totally ignore the sfml-graphics module.
sf::Window internally creates an
OpenGL context and is ready to accept your OpenGL calls. You can learn more about that in the
corresponding tutorial.
Don't expect to see something interesting in this window: you may see a uniform color (black or white), or the last contents of the previous application that used OpenGL, or... something else.
Playing with the window
Of course, SFML allows you to play with your windows a bit. Basic window operations such as changing the size, position, title or icon are supported, but unlike dedicated GUI libraries (Qt, wxWidgets), SFML doesn't provide advanced features. SFML windows are only meant to provide an environment for OpenGL or SFML drawing.
// change the position of the window (relatively to the desktop) window.setPosition(sf::Vector2i(10, 50)); // change the size of the window window.setSize(sf::Vector2u(640, 480)); // change the title of the window window.setTitle("SFML window"); // get the size of the window sf::Vector2u size = window.getSize(); unsigned int width = size.x; unsigned int height = size.y; ...
You can refer to the API documentation for a complete list of
sf::Window's functions.
In case you really need advanced features for your window, you can create one (or even a full GUI) with another library, and embed SFML into it.
To do so, you can use the other constructor, or
create function, of
sf::Window which takes the OS-specific
handle of an existing window. In this case, SFML will create a drawing context inside the given window and catch all its events without interfering with
the parent window management.
sf::WindowHandle handle = /* specific to what you're doing and the library you're using */; sf::Window window(handle);
If you just want an additional, very specific feature, you can also do it the other way round: create an SFML window and get its OS-specific handle to implement things that SFML itself doesn't support.
sf::Window window(sf::VideoMode(800, 600), "SFML window"); sf::WindowHandle handle = window.getSystemHandle(); // you can now use the handle with OS specific functions
Integrating SFML with other libraries requires some work and won't be described here, but you can refer to the dedicated tutorials, examples or forum posts.
Controlling the framerate
Sometimes, when your application runs fast, you may notice visual artifacts such as tearing. The reason is that your application's refresh rate
is not synchronized with the vertical frequency of the monitor, and as a result, the bottom of the previous frame is mixed with the
top of the next one.
The solution to this problem is to activate vertical synchronization. It is automatically handled by the graphics card, and can easily be switched on and off with the
setVerticalSyncEnabled function:
window.setVerticalSyncEnabled(true); // call it once, after creating the window
After this call, your application will run at the same frequency as the monitor's refresh rate.
Sometimes
setVerticalSyncEnabled will have no effect: this is most likely because vertical synchronization is forced to "off" in your
graphics driver's settings. It should be set to "controlled by application" instead.
In other situations, you may also want your application to run at a given framerate, instead of the monitor's frequency. This can be done by calling
setFramerateLimit:
window.setFramerateLimit(60); // call it once, after creating the window
Unlike
setVerticalSyncEnabled, this feature is implemented by SFML itself, using a combination of
sf::Clock and
sf::sleep. An important consequence is that it is not 100% reliable, especially for high framerates:
sf::sleep's
resolution depends on the underlying operating system and hardware, and can be as high as 10 or 15 milliseconds. Don't rely on this feature to implement precise timing.
Never use both
setVerticalSyncEnabled and
setFramerateLimit at the same time! They would badly mix and make things worse.
Things to know about windows
Here is a brief list of what you can and cannot do with SFML windows.
You can create multiple windows
SFML allows you to create multiple windows, and to handle them either all in the main thread, or each one in its own thread (but... see below). In this case, don't forget to have an event loop for each window.
Multiple monitors are not correctly supported yet
SFML doesn't explicitly manage multiple monitors. As a consequence, you won't be able to choose which monitor a window appears on, and you won't be able to create more than one fullscreen window. This should be improved in a future version.
Events must be polled in the window's thread
This is an important limitation of most operating systems: the event loop (more precisely, the
pollEvent or
waitEvent function)
must be called in the same thread that created the window. This means that if you want to create a dedicated thread for event handling, you'll
have to make sure that the window is created in this thread too. If you really want to split things between threads, it is more convient to keep
event handling in the main thread and move the rest (rendering, physics, logic, ...) to a separate thread instead. This configuration will also
be compatible with the other limitation described below.
On OS X, windows and events must be managed in the main thread
Yep, that's true. Mac OS X just won't agree if you try to create a window or handle events in a thread other than the main one.
On Windows, a window which is bigger than the desktop will not behave correctly
For some reason, Windows doesn't like windows that are bigger than the desktop. This includes windows created with
VideoMode::getDesktopMode(): with the window decorations (borders and titlebar) added, you end up with a window which is slightly
bigger than the desktop. | http://www.sfml-dev.org/tutorials/2.0/window-window.php | CC-MAIN-2014-52 | refinedweb | 1,763 | 61.87 |
Jazz up the standard Java fonts
发表于2004/10/13 13:07:00 855人阅读
few simple tricks applied to standard Java fonts can help make your Web site stand out from the crowd. With the basic knowledge presented here, you will be able to create a set of font styles richer than the standard plain, bold, and italic. And do not confine yourself to these ideas -- they just lay the groundwork. You only need a limited mathematical background to understand the computations. So read on and discover the possibilities!.
String [] fonts = getToolkit().getFontList();
Font font;
int font_size = 20;
int x = 20;
int y = 25;
int line_spacing = 25;
for (int i = 0; i < fonts.length; i++)
{
font = new Font(fonts[i], Font.BOLD, font_size);
g.setFont(font);
g.drawString(fonts[i], x, y);
y += line_spacing;
}
Position the points
For most of the seven drawing tricks in this article you duplicate the text, slightly reposition it, and color the duplicate appropriately. The image on the right shows eight positions surrounding a center point. On the horizontal axis x increases from west to east, and on the vertical axis y increases from north to south. For example, the coordinate for the northeast position is
(x + 1, y - 1) if the center is
(x, y). I'll use the positions of this image in explaining the drawing tricks. The coordinate for the center position is assumed to be
(x, y).
The following functions are used to improve code readability:
int ShiftNorth(int p, int distance) {
return (p - distance);
}
int ShiftSouth(int p, int distance) {
return (p + distance);
}
int ShiftEast(int p, int distance) {
return (p + distance);
}
int ShiftWest(int p, int distance) {
return (p - distance);
}
Add a shadow
The easiest trick adds a shadow to the text. Use a dark color for the shadow and draw it near the center. Then draw the text in the center with its color on top of the shadow. The example here shows a shadow two points away in a southeasterly direction. The light looks as if it's coming from the northwest.
g.setColor(new Color(50, 50, 50));
g.drawString("Shadow", ShiftEast(x, 2), ShiftSouth(y, 2));
g.setColor(new Color(220, 220, 220));
g.drawString("Shadow", x, y);
Engrave the text
You can achieve an engraved effect by using a darker color for the background than for the text. To emulate a light projected on the inner walls of the engraved text, use a brighter color and draw it near the center. Finally, draw the text in the center.
The example here draws the inner walls one point southeast of the center with the light coming from the northwest. This effect relies heavily on color selection, so be careful!
g.setColor(new Color(220, 220, 220));
g.drawString("Engrave", ShiftEast(x, 1), ShiftSouth(y, 1));
g.setColor(new Color(50, 50, 50));
g.drawString("Engrave", x, y);
Outline the letters
You can outline the text by first drawing it at the northwest, southwest, northeast, and southeast positions with the outline color and then drawing the text in the center with the text color. The example here does exactly that with a red outline color and a yellow text color..
相关博文
- java.lang.IllegalArgumentException: Maximum number of fonts was exceeded解决
- Ambari学习9_setting up hdp 2.1 with non-standard users for hadoop services
- java_home.jre.lib.fonts.fallback.tar.bz2.1
- java_home.jre.lib.fonts.fallback.tar.bz2.2
- Java code standard
- java.lang.NoClassDefFoundError: org/apache/lucene/analysis/standard/StandardAnalyzer 错误
- Java™ Platform, Standard Edition 8
- SDES(Simple Data Encryption Standard)加密算法——Java实现
- lucene4.2 java.lang.NullPointerException at org.apache.lucene.analysis.standard.StandardTokenizerIm
- JSTL(Java Server Pages Standard Tag Library)标签函数库 | http://m.blog.csdn.net/nickeyfff/article/details/134761 | CC-MAIN-2017-51 | refinedweb | 616 | 57.57 |
I darcs pulled cabal head to get latest cabal, removed -Werror from GHC-Options in the cabal file, removed HsRegexPosixConfig.h and tried again with the same result. It seems to really want that file. With, it installs, without, no install. $ darcs whatsnew { hunk ./regex-posix.cabal 16 -Build-Depends: regex-base >= 0.80, base >= 2.0 +Build-Depends: regex-base >= 0.80, base >= 2.0, array, containers, byt estring hunk ./regex-posix.cabal 32 -GHC-Options: -Wall -Werror -O2 +GHC-Options: -Wall -O2 hunk ./regex-posix.cabal 43 -Include-Dirs: include +Include-Dirs: include/regex } Chris Kuklewicz <haskell at list.mightyreason.com> 08/30/2007 12:34 PM To Thomas Hartman/ext/dbcom at DBAmericas cc haskell-cafe at haskell.org, "cvs-ghc at haskell.org" <cvs-ghc at haskell.org> Subject Re: trouble compiling regex posix head (I think >0.92) on ghc 6.7 Thomas Hartman wrote: > > I'm trying to compile regex-posix on ghc 6.7. (Ultimate goal: happs on > 6.7). I have not explored ghc 6.7. You should also try posting on the <glasgow-haskell-users at haskell.org> mailing list. > > First, I patched by changing the cabal file to be compatible with the > new libraries broken out of base. I also had to add HsRegexPosixConfig.h > to include/regex (I just copied it from somewhere else on my hard drive > where I guess it had been put by an earlier regex-posix install, I don't > know if it's compatible here but at least it permitted things to compile > further.) I had no idea what HsRegexPosixConfig was, and I have no such file at all. So I looked in Wrap.hsc and found: > #ifdef HAVE_REGEX_H > #define HAVE_REGCOMP 1 > #else > #ifndef __NHC__ > #include "HsRegexPosixConfig.h" > #else > #define HAVE_REGEX_H 1 > #define HAVE_REGCOMP 1 > #endif > #endif Note that I did not write that section -- that was added by someone else. So HsRegexPosixConfig.h should only matter if HAVE_REGEX_H is undefined. 
The regex-base.cabal file says: "CC-Options: -DHAVE_REGEX_H" So unless Cabal is having a very very bad day, I assume that HsRegexPosixConfig.h is never needed. That it matters to your build to have that file seems _wrong_ to me. The only header file it should need is "regex.h" > Setup.hs build -v3 had a lot of warnings but didn't seem to fail. > However, Setup.hs install -v3 didn't work. You might try to change the cabal file. Currently I think it is "GHC-Options: -Wall -Werror -O2" and remove -Werror "GHC-Options: -Wall -O2" And you can change the cabal "Include-Dirs" to point to wherever it will find "regex.h" > the problem in build seems to occur around "upsweep partially failed or > main not exported"... That means nothing to me. > > [6 of 6] Compiling Text.Regex.Posix ( Text/Regex/Posix.hs, > dist/build/Text/Regex/Posix.o ) > *** Parser: > *** Renamer/typechecker: > > Text/Regex/Posix.hs:57:2: > Warning: The export item `module Text.Regex.Posix.String' exports > nothing > > Text/Regex/Posix.hs:59:2: > Warning: The export item `module Text.Regex.Posix.Sequence' exports > nothing > > Text/Regex/Posix.hs:61:2: > Warning: The export item `module Text.Regex.Posix.ByteString' > exports nothing > > Text/Regex/Posix.hs:63:2: > Warning: The export item `module Text.Regex.Posix.ByteString.Lazy' > exports nothing Those warning are slightly bogus. Including the module should export the instances. > *** Deleting temp files: > Deleting: /tmp/ghc9618_0/ghc9618_0.s > Warning: deleting non-existent /tmp/ghc9618_0/ghc9618_0.s > Upsweep partially successful. > *** Deleting temp files: > Deleting: > link(batch): upsweep (partially) failed OR > Main.main not exported; not linking. > *** Deleting temp files: > Deleting: > *** Deleting temp dirs: > Deleting: /tmp/ghc9618_0 > > complete output (along with patch) is attached. > > I'd appreciate any advice. > > best,: | http://www.haskell.org/pipermail/haskell-cafe/2007-August/031217.html | CC-MAIN-2014-41 | refinedweb | 622 | 62.64 |
import math import numpy as np
What is e? It is simply a number (known as Euler's number):
math.e
2.718281828459045
e is a significant number, because it is the base rate of growth shared by all continually growing processes.
For example, if I have 10 dollars, and it grows 100% in 1 year (compounding continuously), I end up with 10*e^1 dollars:
# 100% growth for 1 year 10 * np.exp(1)
27.18281828459045
# 100% growth for 2 years 10 * np.exp(2)
73.890560989306508
Side note: When e is raised to a power, it is known as the exponential function. Technically, any number can be the base, and it would still be known as an exponential function (such as 2^5). But in our context, the base of the exponential function is assumed to be e.
Anyway, what if I only have 20% growth instead of 100% growth?
# 20% growth for 1 year 10 * np.exp(0.20)
12.214027581601698
# 20% growth for 2 years 10 * np.exp(0.20 * 2)
14.918246976412703
What is the (natural) logarithm? It gives you the time needed to reach a certain level of growth. For example, if I want growth by a factor of 2.718, it will take me 1 unit of time (assuming a 100% growth rate):
# time needed to grow 1 unit to 2.718 units np.log(2.718)
0.99989631572895199
If I want growth by a factor of 7.389, it will take me 2 units of time:
# time needed to grow 1 unit to 7.389 units np.log(7.389)
1.9999924078065106
If I want growth by a factor of 1, it will take me 0 units of time:
# time needed to grow 1 unit to 1 unit np.log(1)
0.0
If I want growth by a factor of 0.5, it will take me -0.693 units of time (which is like looking back in time):
# time needed to grow 1 unit to 0.5 units np.log(0.5)
-0.69314718055994529
As you can see, the exponential function and the natural logarithm are inverses of one another:
np.log(np.exp(5))
5.0
np.exp(np.log(5))
4.9999999999999991 | http://nbviewer.jupyter.org/github/justmarkham/DAT8/blob/master/notebooks/12_e_log_examples.ipynb | CC-MAIN-2017-04 | refinedweb | 373 | 84.37 |
For my first time doing the #WebDevSampler challenge, I used Go, which is a popular backend language, as well as the language I have been coding in professionally for seven years. Ahead are my answers to the 11 exercises in the sampler.
The tools I used for doing this challenge were:
- The Go standard library
- Go's built-in testing command for doing automated test coverage
- Gorilla Mux for parameterized HTTP routing
- SQLite and mattn's SQLite package for the database problems
Now, onward to the answers!
(1) Get an HTTP server up and running, serving an endpoint that gives the HTTP response with a message like "hello world!".
Go code
package main import ( // Go's standard library package for HTTP clients and servers "net/http" ) func main() { // make an http.ServeMux, a Go standard library object that // routes HTTP requests to different endpoints rt := http.NewServeMux() // Make a catch-all endpoint for all requests going into the // server. When the endpoint is hit, we run the function passed // in to process the request. rt.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { // We have the ResponseWriter write the bytes of the string // "Hello world!" w.Write([]byte("Hello world!")) }) // create a new server and run it with ListenAndServe to take // HTTP requests on port 1123 s := http.Server{Addr: ":1123", Handler: rt} s.ListenAndServe() }
Starting the program
- Run `go run main.go`, or use `go install` and run the installed binary
- Go to http://localhost:1123 in a browser or in a program like cURL. You should see the text "Hello world!"
(2) Give that HTTP response as HTML, with Content-Type `text/html`
Go code
```go
rt.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
	// Add the header "Content-Type: text/html"
	w.Header().Set("Content-Type", "text/html")

	// Add some HTML <h1> tags to the hello world response. The
	// browser, seeing the response is Content-Type: text/html,
	// will display the response as a big header.
	w.Write([]byte("<h1>Hello world!</h1>"))
})
```
Starting the program
- Compile and start the server again, and refresh
- The response should now be displayed as a webpage
(3) Add another endpoint/route on your HTTP server, such as an `/about.html` page
Go code
```go
// naming the route "/about" makes it so when a request is sent to
// http://localhost:1123/about, the about page is served instead of
// the hello world page
rt.HandleFunc("/about", func(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(`
<!DOCTYPE html>
<html>
  <head>
    <title>About us</title>
  </head>
  <body>
    <h1>About us</h1>
    <p>We've got a website!</p>
  </body>
</html>
`))
})
```
Starting the program
- Compile and start the server again
- Go to http://localhost:1123/about. You should now see your about page.
(4) Serve an endpoint with an image or webpage in your file system
Preliminary steps
- Make a directory inside the directory where main.go is, named "images"
- Save a JPEG image in that images directory named "gopher.jpg"
Go code
```go
// We are using Handle, not HandleFunc, because we're passing
// in an object of type http.Handler, not a function
rt.Handle(
	"/images/",
	// StripPrefix chops the prefix, in this case "/images/",
	// off of the HTTP request's path before passing the
	// request to the http.FileServer
	http.StripPrefix(
		"/images/",
		// create a FileServer handler that serves files in
		// your "images" directory
		http.FileServer(http.Dir("images")),
	),
)
```
Starting the program
- Compile and start the server again
- Go to http://localhost:1123/images/gopher.jpg. You should now see your gopher.jpg file
(5) Route to an endpoint using a more complex route like `/signup/my-name-is/:name`
Preliminary steps
For this one, we'll use the Gorilla Mux library, since it is one of the most popular HTTP routing libraries in Go.
- Set up a `go.mod` file with `go mod init`. `go mod` is Go's built-in package manager, and it is where your project's dependencies are listed, similar to package.json in Node.js.
- Run `go get github.com/gorilla/mux`
Go code
First add Gorilla Mux to your imports
```go
import (
	// fmt is a string formatting package in Go
	"fmt"
	"net/http"

	// Now we're importing the Gorilla Mux package in addition to
	// net/http
	"github.com/gorilla/mux"
)
```
Then at the start of `main`, replace the `ServeMux` with a Gorilla Mux `Router` and add your parameterized endpoint to it:
```go
func main() {
	// now instead of a ServeMux, we're using a Gorilla Mux router
	rt := mux.NewRouter()

	// Our new parameterized route
	rt.HandleFunc(
		// make a parameter in the request path that Gorilla
		// recognizes with the string "name"
		"/signup/my-name-is/{name}",
		func(w http.ResponseWriter, r *http.Request) {
			// the route parameters on the request are parsed
			// into a map[string]string
			name := mux.Vars(r)["name"]

			w.Header().Set("Content-Type", "text/html")
			w.Write([]byte(fmt.Sprintf(
				// use the route parameter in the HTTP response
				"<h1>You're all signed up for the big convention %s!</h1>",
				name,
			)))
		},
	)

	// Our existing endpoints mostly stay the same as before
}
```
Finally, update the images directory endpoint to use `PathPrefix`, which is how you declare path prefixes in Gorilla Mux, as opposed to `Handle("/path/", handler)`:
```go
rt.PathPrefix("/images/").Handler(
	http.StripPrefix(
		"/images/",
		http.FileServer(http.Dir("images")),
	),
)
```
Starting the program
- Compile and start the server again
- Go to http://localhost:1123/signup/my-name-is/YOUR_NAME. You should now get an HTML response saying you're signed up
(6) Write an automated test for your HTTP parameterized endpoint
Preliminary steps
First, take the logic for setting up our Mux router and move it to its own function
```go
func handleSignup(w http.ResponseWriter, r *http.Request) {
	name := mux.Vars(r)["name"]

	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(fmt.Sprintf(
		"<h1>You're all signed up for the big convention %s!</h1>",
		name,
	)))
}

// The router function is pretty much the same as the main
// function as of the last exercise, from when rt is declared
// to when we declared the last handler in the router.
func router() http.Handler {
	rt := mux.NewRouter()
	rt.HandleFunc("/signup/my-name-is/{name}", handleSignup)
	// not shown: The other endpoints' handlers

	// mux.Router, the type of rt, implements the http.Handler
	// interface, which is in charge of handling HTTP requests
	// and serving HTTP responses.
	return rt
}
```
Now replace the body of `main` with:
```go
func main() {
	// create a new server and run it with ListenAndServe to take
	// HTTP requests on port 1123
	s := http.Server{Addr: ":1123", Handler: router()}
	s.ListenAndServe()
}
```
Then, make a new file named `app_test.go`
Go code (in app_test.go)
```go
package main

import (
	// Standard library package for working with I/O
	"io"
	// Go's standard library package for HTTP clients and servers
	"net/http"
	// Standard library testing utilities for Go web apps
	"net/http/httptest"
	// Standard library URL-parsing package
	"net/url"
	// Standard library testing package
	"testing"
)

// Functions whose names start with Test and then a capital
// letter and take in a testing.T object are run by the
// `go test` subcommand
func TestSignup(t *testing.T) {
	// A ResponseRecorder is the httptest implementation of
	// http.ResponseWriter, that lets us see the HTTP response
	// it wrote after running an HTTP handler function
	w := httptest.NewRecorder()

	// convert a string to a *url.URL object
	reqURL, err := url.Parse("http://localhost:1123/signup/my-name-is/Andy")
	if err != nil {
		// if parsing the URL fails, have the test fail
		t.Fatalf("error parsing URL: %v", err)
	}

	// set up our HTTP request, which will be to the
	// /signup/my-name-is/Andy endpoint
	r := &http.Request{URL: reqURL}

	// send the request to the HTTP server by passing it and the
	// ResponseWriter to mux.Router.ServeHTTP(). The result of
	// that HTTP call is stored in the ResponseRecorder.
	router().ServeHTTP(w, r)
	res := w.Result()

	// convert the response stored in res.Body to bytes and check
	// that we got back the response we expected.
	body, err := io.ReadAll(res.Body)
	if err != nil {
		t.Fatalf("error retrieving response body: %v", err)
	}
	bodyStr := string(body)
	expected := `<h1>You're all signed up for the big convention Andy!</h1>`
	if bodyStr != expected {
		t.Errorf("expected response %s, got %s", expected, bodyStr)
	}
}
```
Running the test
- From the directory the Go files are in, run `go test -v`
- Observe the test passing. If you change the text the sign-up endpoint serves, then the test should now fail.
(7) Escape HTML tags in your endpoint.
Go code (main.go)
First import the standard library `html` package
```go
import (
	"fmt"
	// standard library package for working with HTML
	"html"
	"net/http"

	"github.com/gorilla/mux"
)
```
Then update handleSignup to call EscapeString. In a real production app, we'd be doing more advanced sanitization of user data than this and probably rendering our HTML using a templating library that has sanitization built-in in order to catch more edge cases with malicious input, but EscapeString handles sanitizing HTML characters as a very simple demonstration of input sanitization.
```go
func handleSignup(w http.ResponseWriter, r *http.Request) {
	name := mux.Vars(r)["name"]

	// we use EscapeString to escape characters that are used in
	// HTML syntax. For example, the character < becomes &lt; and
	// > becomes &gt;
	name = html.EscapeString(name)

	// rest of the endpoint stays the same
	w.Header().Set("Content-Type", "text/html")
	w.Write([]byte(fmt.Sprintf(
		"<h1>You're all signed up for the big convention %s!</h1>",
		name,
	)))
}
```
Go test code (app_test.go)
```go
func TestSignupHTMLEscape(t *testing.T) {
	w := httptest.NewRecorder()

	// convert a string to a *url.URL object, this time with
	// some HTML in it that should be escaped
	urlString := "http://localhost:1123/signup/my-name-is/<i>Andy"
	reqURL, err := url.Parse(urlString)
	if err != nil {
		// if parsing the URL fails, have the test fail
		t.Fatalf("error parsing URL: %v", err)
	}

	// run ServeHTTP just like before
	r := &http.Request{URL: reqURL}
	router().ServeHTTP(w, r)
	res := w.Result()

	body, err := io.ReadAll(res.Body)
	if err != nil {
		t.Fatalf("error retrieving response body: %v", err)
	}

	// Expect that we thwarted the HTML injection attempt
	bodyStr := string(body)
	expected := `<h1>You're all signed up for the big convention &lt;i&gt;Andy!</h1>`
	if bodyStr != expected {
		t.Errorf("expected response %s, got %s", expected, bodyStr)
	}
}
```
Running the test
- From the directory the Go files are in, run `go test -v`
- Observe the test passing. If you comment out the EscapeString call, then the test should now fail.
Running the server
- Compile and start the server again
- Go to http://localhost:1123/signup/my-name-is/<i>YOUR_NAME. You should NOT get any italicized text, since the `<i>` tag was escaped.
(8) Serialize an object/struct/class to some JSON and serve it on an endpoint with a `Content-Type: application/json`
Go code (near top of main.go)
First, import `encoding/json`, Go's standard library package for serializing objects to JSON
```go
import (
	// Go's standard library for JSON serialization
	"encoding/json"
	"html"
	"net/http"
)
```
Then define this struct and HTTP endpoint
```go
// Define a type with JSON serialization specified by JSON
// Go struct tags
type animalFact struct {
	AnimalName string `json:"animal_name"`
	AnimalFact string `json:"animal_fact"`
}

// add this function into the "router" function
func sendAnimalFact(w http.ResponseWriter, r *http.Request) {
	fact := animalFact{
		AnimalName: "Tree kangaroo",
		AnimalFact: "They look like teddy bears but have a long" +
			" tail to keep their balance in trees!",
	}

	// Set the Content-Type for the response to application/json
	w.Header().Set("Content-Type", "application/json")

	// load the ResponseWriter into a JSON encoder, and then by
	// calling that Encoder's Encode method with a pointer to the
	// animalFact struct, the ResponseWriter will write the struct
	// as JSON.
	if err := json.NewEncoder(w).Encode(&fact); err != nil {
		// if serializing the response fails, then return a
		// 500 Internal Server Error response with a JSON
		// error message.
		// If the serialization succeeds though, we're all
		// set and the HTTP response is already sent.
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte(`{"error": "couldn't serialize to JSON"}`))
	}
}
```
Finally, add the animal fact endpoint to the `router` function:

```go
rt.HandleFunc("/animal-fact", sendAnimalFact)
```
Running the server
- Compile and start the server again
- Go to http://localhost:1123/animal-fact. You should get your animal fact in JSON.
(9) Add a POST HTTP endpoint whose input is of Content-Type `application/json`, deserialize it to an object/struct/class, and then use some part of the object to produce some part of the HTTP response.
Go code (near top of main.go)
First, define a signup struct, its JSON serialization using Go struct tags, and an endpoint to handle a JSON payload
```go
import (
	// Go's standard library for JSON serialization
	"encoding/json"
	"fmt"
	"html"
	"net/http"

	"github.com/gorilla/mux"
)

// Define a type with JSON serialization specified by JSON
// Go struct tags. For example, `json:"days_signed_up_for"`
// indicates that the DaysSignedUpFor field should be
// serialized as days_signed_up_for, not DaysSignedUpFor
type signup struct {
	Name            string `json:"name"`
	DaysSignedUpFor int    `json:"days_signed_up_for"`
}

// add this function into the "router" function
func handleJSONSignup(w http.ResponseWriter, r *http.Request) {
	// load the request's Body into a JSON Decoder, and then by
	// calling that Decoder's Decode method with a pointer to the
	// signup struct, the request body is deserialized into s
	var s signup
	if err := json.NewDecoder(r.Body).Decode(&s); err != nil {
		// if deserializing the request fails, then return a
		// 400 Bad Request header and the error message
		// "invalid JSON payload"
		w.WriteHeader(http.StatusBadRequest)
		w.Write([]byte("invalid JSON payload"))
		return
	}

	// use the signup in the response body
	name := html.EscapeString(s.Name)
	days := s.DaysSignedUpFor
	msg := fmt.Sprintf(
		"You're all signed up %s! Have a great %d days at the big convention!",
		name,
		days,
	)

	// NOTE: in a real production endpoint, if we're taking
	// in a JSON payload we'd probably send a JSON response
	// rather than plain text
	w.Write([]byte(msg))
}
```
Finally, in the `router` function, add our handleJSONSignup endpoint:

```go
rt.Methods(http.MethodPost).Path("/signup").HandlerFunc(handleJSONSignup)
```
(10) Save the input of your POST request to a database
Preliminary steps to using my implementation
In the interest of simplicity, we will use SQLite as our database. If we were developing a big web app and planning on the site getting really popular with tons of people wanting data from the database at the same time, we might instead opt for a database like Postgres or MongoDB.
- Install SQLite and add it to your computer's path
- Open SQLite's command-line tool from the folder you've been coding the Sampler in. The command to do so is `sqlite3`.
- Create your SQLite database with the command `.open website.db`.
- Create your database table with the command `CREATE TABLE signups (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT, days_signed_up_for INTEGER);`
Now we have a database table to store sign-ups! Now for Go to be able to talk to SQLite, or any SQL database with the Go standard library's
database/sql package, we need a database driver for the database we're using. We can get the one for SQLite using
go get github.com/mattn/go-sqlite3
Now we're ready to use the actual code.
Go code
First in main.go, import
database/sql, Go's standard library package for working with SQL databases, and underscore-import
go-sqlite3 so that
database/sql registers the SQLite Go database driver. With that underscore import, your Go code is now able to talk to SQLite databases.
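The effect of that blank import is to add the "sqlite3" driver name to database/sql's internal registry, via the driver package's init function. As a rough illustration (mine, not from the original post), the registry can be inspected with sql.Drivers():

```go
package main

import (
	"database/sql"
	"fmt"
)

// driverCount reports how many database drivers have been registered
// with database/sql. Registration happens in a driver package's init
// function, which is why the blank import of go-sqlite3 is enough to
// make the "sqlite3" driver name available to sql.Open.
func driverCount() int {
	return len(sql.Drivers())
}

func main() {
	// No driver packages are imported in this snippet, so the registry
	// is empty. With `_ "github.com/mattn/go-sqlite3"` imported,
	// sql.Drivers() would contain "sqlite3".
	fmt.Println(driverCount()) // 0
}
```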
import (
	"database/sql"
	"encoding/json" // Go's standard library for JSON serialization
	"fmt"
	"html"
	"net/http"

	"github.com/gorilla/mux"

	// register the database driver for SQLite
	_ "github.com/mattn/go-sqlite3"
)
Then, define an object for interacting with our
signups database table. This is so that the logic for interacting with the database is not intertwined with the logic for serving the endpoint, making the code easier to test and harder to get unexpected bugs with.
// signupsDB centralizes the logic for storing signups in SQLite.
type signupsDB struct{ db *sql.DB }

func newSignupsDB(filename string) (*signupsDB, error) {
	// Open a database/sql DB for the file path, using the sqlite3
	// database driver.
	db, err := sql.Open("sqlite3", filename)
	if err != nil {
		return nil, err
	}
	return &signupsDB{db: db}, nil
}

// SQLite syntax to more or less say "insert the values of the two
// question-mark parameters into the name and days_signed_up_for
// fields of a new item in the signups table"
const insertSignupQuery = `
INSERT INTO signups (name, days_signed_up_for)
VALUES (?, ?)`

func (db *signupsDB) insert(s signup) error {
	// run DB.Exec to insert an item into the database
	result, err := db.db.Exec(insertSignupQuery, s.Name, s.DaysSignedUpFor)
	if err != nil {
		return err
	}

	// check that exactly one row was inserted into the database table
	rowsAffected, err := result.RowsAffected()
	if err != nil {
		return err
	} else if rowsAffected != 1 {
		return fmt.Errorf(
			"expected 1 row to be affected, but %d rows were",
			rowsAffected,
		)
	}
	return nil
}
Now add an
init function for initializing our database. Note that this isn't how we'd set up a database connection in a production app, but it's probably the simplest way to do this setup.
// in a real production Go web app, we would be structuring our HTTP
// handlers to be data structures instead of plain functions so we
// aren't relying on global variables, but for the sampler we'll just
// initialize our database in the init function and panic if that
// fails to keep things simple
var db *signupsDB

func init() {
	var err error
	if db, err = newSignupsDB("./website.db"); err != nil {
		panic(fmt.Sprintf(
			"error opening db: %v; can't start the web app",
			err,
		))
	}
}
Then add db struct tags for serializing your object in a database as well as in JSON. Note that struct tags are space-separated, so the json and db tags sit next to each other with no comma between them:

type signup struct {
	Name            string `json:"name" db:"name"`
	DaysSignedUpFor int    `json:"days_signed_up_for" db:"days_signed_up_for"`
}
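Getting the tag separator wrong is an easy mistake to make, because the code still compiles. This small reflect-based sketch (mine, not from the original post) shows that with a comma-joined tag, the db key becomes unreadable:

```go
package main

import (
	"fmt"
	"reflect"
)

// Struct tags are space-separated key:"value" pairs. Joining them
// with a comma makes the second key invisible to StructTag.Get.
type commaTagged struct {
	Name string `json:"name",db:"name"` // wrong: comma-joined
}

type spaceTagged struct {
	Name string `json:"name" db:"name"` // right: space-separated
}

// commaDBTag looks up the "db" tag on the comma-joined form.
func commaDBTag() string {
	f, _ := reflect.TypeOf(commaTagged{}).FieldByName("Name")
	return f.Tag.Get("db")
}

// spaceDBTag looks up the "db" tag on the space-separated form.
func spaceDBTag() string {
	f, _ := reflect.TypeOf(spaceTagged{}).FieldByName("Name")
	return f.Tag.Get("db")
}

func main() {
	fmt.Printf("comma form db tag: %q\n", commaDBTag()) // ""
	fmt.Printf("space form db tag: %q\n", spaceDBTag()) // "name"
}
```

Running `go vet` also flags the comma-joined form as a malformed struct tag.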
Finally, in the
handleJSONSignup endpoint, add this if statement right before where you serve the HTTP response:
if err := db.insert(s); err != nil {
	// in a real production web app, we'd look in more detail at the
	// error's value to decide the appropriate status code and error
	// message
	w.WriteHeader(http.StatusInternalServerError)
	w.Write([]byte("error inserting signup"))
	return
}
- To see that you really put a sign-up in the database, open sqlite3 in the command line, open the database again with
.open website.db, and finally, run
SELECT * FROM signups; (including the trailing semicolon). You should now see a single item in the database.
(11) Make a GET endpoint that retrieves a piece of data from the database
Go code
First, add a new method to the signupsDB type for retrieving a sign-up by name:
// SQLite syntax more or less saying "get the name and
// days_signed_up_for fields of AT MOST one item in the signups table
// whose name matches the question-mark parameter"
const getSignupQuery = `SELECT name, days_signed_up_for
FROM signups WHERE name=? LIMIT 1`

func (db *signupsDB) getByName(name string) (*signup, error) {
	// retrieve a single item from the "signups" database table. We get
	// back a *sql.Row containing our result, or lack thereof if the
	// item we want is not in the database table.
	row := db.db.QueryRow(getSignupQuery, name)

	// We deserialize the Row to the data type we want using Row.Scan.
	// If no database row was retrieved, we instead get back
	// sql.ErrNoRows.
	var s signup
	if err := row.Scan(&s.Name, &s.DaysSignedUpFor); err != nil {
		return nil, err
	}
	return &s, nil
}
Then, make a new endpoint that uses getByName to query for signups
func handleGetSignupFromDB(w http.ResponseWriter, r *http.Request) {
	name := mux.Vars(r)["name"]
	w.Header().Set("Content-Type", "application/json")

	signup, err := db.getByName(name)
	switch err {
	case nil:
		// if there's no error, we have a signup, so carry on with
		// sending it as our JSON response
	case sql.ErrNoRows:
		// if we got ErrNoRows, then return a 404
		w.WriteHeader(http.StatusNotFound)
		w.Write([]byte(`{"error": "sign-up not found"}`))
		return
	default:
		// for any other kind of error, return a 500
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte(`{"error": "unexpected error"}`))
		return
	}

	if err := json.NewEncoder(w).Encode(signup); err != nil {
		w.WriteHeader(http.StatusInternalServerError)
		w.Write([]byte(`{"error": "couldn't serialize to JSON"}`))
	}
}
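The error-to-status mapping at the heart of this handler can be checked in isolation, without a database. This sketch of mine factors the switch into a small function so its three branches are easy to verify:

```go
package main

import (
	"database/sql"
	"errors"
	"fmt"
	"net/http"
)

// statusFor maps a lookup error to an HTTP status code, mirroring the
// switch in handleGetSignupFromDB: no error → 200, sql.ErrNoRows →
// 404, any other error → 500.
func statusFor(err error) int {
	switch {
	case err == nil:
		return http.StatusOK
	case errors.Is(err, sql.ErrNoRows):
		return http.StatusNotFound
	default:
		return http.StatusInternalServerError
	}
}

// demoStatuses exercises all three branches and returns the codes.
func demoStatuses() [3]int {
	return [3]int{
		statusFor(nil),
		statusFor(sql.ErrNoRows),
		statusFor(errors.New("db is down")),
	}
}

func main() {
	fmt.Println(demoStatuses()) // [200 404 500]
}
```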
Finally, add
handleGetSignupFromDB to
router
rt.HandleFunc("/signup/get/{name}", handleGetSignupFromDB)
Running the server
- Compile and start the server again
- Send a request to the new /signup/get/{name} endpoint, using the name you signed up with. You should now see the JSON of the sign-up you did in the last step.
- Send another request to the signup endpoint, this time with the name of someone that didn't sign up. You should now see the JSON of a sign-up not found error.
Top comments (5)
This is interesting! What's the web dev sampler challenge?
Thanks! Link to it is below, it's a series I made of 11 web development exercises to get started learning backend in a new language, and along the way hopefully learn its ecosystem! So this post is my answers for how I'd do these exercises in Go and the link in this comment is the original questions
dev.to/andyhaskell/introducing-the...
oh oops silly me. I should've clicked the link in the blog post at the top🤦🏾♀️
Thank you!
No problem, your comment actually reminded me to put that there so thanks for that!
great post, I’ll definitely come back to this as a reference! I recently started using Go at work and definitely have some knowledge gaps to fill 😄
c# capture problem records in SqlBulkCopy
Question
Hi, I am inserting data from a text stream into a table. Previously I was using an insert command to write the data, and I was able to write problem records to an audit log table in the catch block. When the stream size grew, the inserts were taking too long, so I am now using the SqlBulkCopy class. Instead of rolling back the transaction, is there a way to write problem records to an audit log table?
For instance, if the 1st and 3rd records insert fine, write them to the actual table, and if the 2nd record has some problem, insert that record into the audit log table. My table has the varchar datatype on all columns.
Using insert command
public string writetotbl(IList<string> records)
{
    string connString = ConfigurationManager.ConnectionStrings["myDBConnString"].ConnectionString;
    string message = null;
    var lkup = from record in records
               let rec = record.Split(',')
               select new Lookup
               {
                   Id = rec[0],
                   Code = rec[1],
                   Description = rec[2]
               };
    foreach (var i in lkup)
    {
        try
        {
            using (SqlConnection sqlConnection = new SqlConnection(connString))
            {
                sqlConnection.Open();
                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO [Lookup] ([Id], [Code], [Description]) VALUES (@Id, @Code, @Description)",
                    sqlConnection))
                {
                    cmd.Parameters.AddWithValue("@Id", i.Id);
                    cmd.Parameters.AddWithValue("@Code", i.Code);
                    cmd.Parameters.AddWithValue("@Description", i.Description);
                    cmd.ExecuteNonQuery();
                }
            }
        }
        catch (Exception ex)
        {
            // write the problem record to the audit log table
            using (SqlConnection logConnection = new SqlConnection(connString))
            {
                logConnection.Open();
                using (SqlCommand cmd = new SqlCommand(
                    "INSERT INTO [dbo].[log] ([ErrorRecord], [ErrorMessage]) VALUES (@ErrorRecord, @ErrorMessage)",
                    logConnection))
                {
                    cmd.Parameters.AddWithValue("@ErrorRecord", i.Id + ", " + i.Code + ", " + i.Description);
                    cmd.Parameters.AddWithValue("@ErrorMessage", ex.Message);
                    cmd.ExecuteNonQuery();
                }
            }
            message = ex.Message;
        }
    }
    return message;
}
Using SqlBulkCopy
private string writetotbl(IList<string> records)
{
    string connString = ConfigurationManager.ConnectionStrings["myDBConnString"].ConnectionString;
    try
    {
        var lkup = from record in records
                   let rec = record.Split(',')
                   select new Lookup
                   {
                       Id = rec[0],
                       Code = rec[1],
                       Description = rec[2]
                   };

        DataTable dt = new DataTable();
        dt.Columns.Add(new DataColumn("Id", typeof(int)));
        dt.Columns.Add(new DataColumn("Code", typeof(string)));
        dt.Columns.Add(new DataColumn("Description", typeof(string)));

        foreach (var i in lkup)
        {
            DataRow dr = dt.NewRow();
            dr["Id"] = i.Id.Replace("\"", "");
            dr["Code"] = i.Code.Replace("\"", "");
            dr["Description"] = i.Description.Replace("\"", "");
            dt.Rows.Add(dr);
        }

        using (var conn = new SqlConnection(connString))
        {
            conn.Open();
            using (SqlBulkCopy s = new SqlBulkCopy(conn))
            {
                s.DestinationTableName = "Lookup";
                s.BatchSize = dt.Rows.Count;
                s.BulkCopyTimeout = 0;
                s.ColumnMappings.Add("Id", "Id");
                s.ColumnMappings.Add("Code", "Code");
                s.ColumnMappings.Add("Description", "Description");
                s.WriteToServer(dt);
                s.Close();
            }
            conn.Close();
        }
        return null;
    }
    catch (Exception ex)
    {
        // How to insert records into the audit log table here?
        return ex.Message;
    }
}
Thank you.
SQLEnthusiast
All replies
SqlBulkCopy writes in a transaction so imagine this scenario.
Add rows A, B and C to the table
Call WriteToServer
Row B fails but since this is a transaction none of the rows are inserted
See the problem? It doesn't matter which row was invalid; the entire batch is thrown out. So the scenario you describe, inserting rows 1 and 3 but sending row 2 to an audit table, won't work with SqlBulkCopy, because any error causes all the rows to fail. That is one of the reasons it is faster: everything occurs in a single transaction.
Unfortunately you are very limited in what you can do. The exception that is returned may indicate the failure if it is a SqlException or similar, but you'll have to use heuristics to figure it out. Regardless, none of the rows were inserted (and there may be more rows that are invalid). At this point you can try to remove the bad row and try the batch again, but if there is another invalid row then you'll be repeating this over and over. If all the rows are bad then you'll be repeating for however many rows there are, which is very inefficient.
An alternative approach is to use smaller batch sizes (which means the performance is going to be closer to just batch inserts you can do yourself). Then if a batch fails you can flag the entire batch as bad and move on to the next one. The larger the batch size the faster the inserts but the more (potentially) good records will be skipped because of a bad record. MSDN has several examples on how you can use transactions with bulk copy which may provide you some ideas on how to solve this problem.
Personally I would fail the batch with the exception logged and then let someone figure out what went bad. This is, of course, after I've already done due diligence and made sure the row was valid before even adding it to the table to begin with.
Michael Taylor
- Proposed as answer by Fei Hu (Moderator), Thursday, May 3, 2018 7:16 AM
Hello CSharp,
>>c# capture problem records in SqlBulkCopy
When you use SqlBulkCopy to transport multiple rows of data to the database, you cannot locate which rows encountered an invalid-data exception, because the bulk copy operation inserts records block by block instead of row by row.
As for your circumstance, if speed is the first consideration you could use SqlBulkCopy, but when an exception occurs you need to loop over the data and use insert statements row by row to find which row is invalid. You could refer to the blog below.
Retrieving failed records after an SqlBulkCopy exception. | https://social.msdn.microsoft.com/Forums/en-US/efbef885-30f8-4f85-85d3-9528b70c5519/c-capture-problem-records-in-sqlbulkcopy?forum=csharpgeneral | CC-MAIN-2020-50 | refinedweb | 872 | 56.35 |
#include <sys/times.h> #include <limits.h>
clock_t times(struct tms *buffer);
The times() function fills the tms structure pointed to by buffer with time-accounting information. The tms structure, defined in <sys/times.h>, contains the following members:
clock_t tms_utime;  /* user time */
clock_t tms_stime;  /* system time */
clock_t tms_cutime; /* user time, children */
clock_t tms_cstime; /* system time, children */
All times are reported in clock ticks. The specific value for a clock tick is defined by the variable CLK_TCK, found in the header <limits.h>.
The times of a terminated child process are included in the tms_cutime and tms_cstime members of the parent when wait(3C) or waitpid(3C) returns the process ID of this terminated child. If a child process has not waited for its children, their times will not be included in its times.
The tms_utime member is the CPU time used while executing instructions in the user space of the calling process.
The tms_stime member is the CPU time used by the system on behalf of the calling process.
The tms_cutime member is the sum of the tms_utime and the tms_cutime of the child processes.
The tms_cstime member is the sum of the tms_stime and the tms_cstime of the child processes.
The times() function will fail if:
EFAULT The buffer argument points to an illegal address.
See attributes(5) for descriptions of the following attributes:
time(1), timex(1), exec(2), fork(2), time(2), waitid(2), wait(3C), waitpid(3C), attributes(5), standards(5) | https://backdrift.org/man/SunOS-5.10/man2/times.2.html | CC-MAIN-2021-21 | refinedweb | 236 | 56.05 |
Alkarex wrote: »
@VonSzarvas In the photo, it was literally finger-only. The third screw cannot be screwed more without a tool and a bit of force (not much, but apparently enough to make the wheel harder to rotate).
Pub GoSpeed(LeftVelocity, RightVelocity)
  LeftVelocity /= 2                   ' Divide by two to convert pos/sec to pos/half sec
  RightVelocity /= 2                  ' Divide by two to convert pos/sec to pos/half sec
  CheckHWVer
  if PDRunning < 1                    ' Requires use of the PD controller
    return @MotorError                ' Abort if it isn't running
  case LeftVelocity
    -32767..32767:
    other:
      return @InvalidParameter
  case RightVelocity
    -32767..32767:
    other:
      return @InvalidParameter
  if Mode <> VELOCITY
    InterpolateMidVariables
  Mode := POWER                       ' Stop PDIteration from modifying mid-positions
  KiAcc[LEFT] := KiAcc[RIGHT]~~
  SetVelocity[LEFT] := LeftVelocity   ' Then set the motors' powers
  SetVelocity[RIGHT] := RightVelocity
  StillCnt[LEFT]~
  StillCnt[RIGHT]~
  Mode := VELOCITY
"~~" is being used in the post-set form, which means that KiAcc[LEFT] will be set to the value of KiAcc[RIGHT] before KiAcc[RIGHT] is set to -1.
KiAcc[LEFT]~~
KiAcc[RIGHT]~~
bebement wrote: »
@Publison: I apologize for the gaffe. I understand your perspective. My thought was that a narrowly targeted question in what I thought was a more relevant (and different) forum would get a quicker and potentially more accurate response. No worries.
@Dave Hein: Thank you for the clarification!
I am struck by the inconsistency/asymmetry of the
KiAcc[LEFT] := KiAcc[RIGHT]~~
statement relative to the other nearby statements throughout the firmware. I can not help but wonder whether it should be
KiAcc[LEFT]~~
KiAcc[RIGHT]~~
instead. I do not yet understand what the significance of a value of -1 might be though.
We never really solved the issue either, but tried a few different things that mitigated the problem:
- We sent GO commands instead of GOSPD at a rate not exceeding 8Hz. We continued to retrieve odometry from the encoders using DIST and HEAD since we need it in our application.
- We imposed a min velocity constraint of GO +-32 +-32 for non-zero commands. Supplying values lower than 20 would cause the error with the 2 blinking lights within a few seconds of operation. Is this expected?
Would the command rate or supplying too low a velocity using either GO or GOSPD to one of the motors cause an error?
@Alkarex: Have the robots operated normally since the switch?
We can also attempt updating the firmware on our robots over the next couple of weeks. I'd appreciate any input.
No, there is no significant improvement after all on our side with the new firmware (maybe a slight improvement, but this might be to some other parameters).
Yes, we also currently operate with the GO command, and have noticed that too low power or speed would make the motor board to crash.
When running our applications, we are currently reading the encoders from the Propeller Activity Board WX, but your approach seems to be a good idea.
More info from our side, after a lot of additional testing:
I have added some new photos and videos
* There seems to be a significant mechanical pressure on the motor axis, as the motor holes and the aluminium holes are not perfectly aligned, which increases significantly the power needed to turn the wheels
Furthermore, the motor axis is not rotating perfectly orthogonal to its base, so the physical constraint is not constant during a rotation.
Using only two screws instead of three, combined with not screwing fully, seems to increase the duration between two crashes.
* Using the new firmware and its STATUS command, we get error codes -3 and -5.
So with the two observations above, our current guess is that the mechanical constraints in the current Arlo models are much higher than expected by the motor board, even when there is no load (wheels not touching the ground). It seems that both a mechanical fix as well as a firmware fix would be needed.
* To make our problem worse, some of our encoders do not seem to work fine (e.g. frequently missing some counts), despite the black plastic disc being clean and correctly centred, and the soldering double-checked.
Best regards,
Alexandre
Some immediate thoughts...
That mechanical issue looks like it should be something...
Although I think you previously mentioned that swapping the motor and encoder cables between A and B sides didn't move the problem to the other channel?
I mean, assuming you've got one motor (say, Left- Ch.A.) with a good block and one with an "off tolerance" block (say, Right, Ch.B), then swapping both the motor and encoder wires for the bad block to the other channel, does that move the problem to the other channel? I think you said before it didn't- that Channel B remained the problem-- although I'm not sure if you moved both the motor and encoder, or just the encoder cables during that test?
David will know better if having one motor under-strain/over-current would impact both channels in the firmware. And he can interpret your status code results, so thank you that those also.
I'll keep watching, and wait for David's help on this.
I assume the issue comes from multiple things. When I tested the motor board with the three Arlos we have, it is always the right channel that is problematic, and the STATUS command returns either -3 or -5. I guess the mechanical problem lies on both wheels but only makes the right channel crash. I tested on another two new wheels this morning and the error returns -2 or -3. -2 is returned shortly after I reset the motor board without stopping the test program.
Statuses -2 and -3 indicate a fault output from the left and right motor driver ICs, respectively. This should coincide with a steady illumination (without blinking) of the respective motor's Status LED. The fault output from either motor driver can also stop moving the motors, which would also cause a -4 or -5 status, so they are likely artifacts of the -2 or -3 status.
There are two possible causes of a fault output from the motor driver ICs: either too low of a battery voltage or too high of an operating temperature. Both conditions increase in likelihood the longer a motor is operating continuously, and the higher the load the sooner they will occur. Ideally, the battery will maintain a high enough operating voltage until it is depleted, and the operating temperature will never be high enough to trigger a fault.
The firmware updated with the STATUS command doesn't have any other changes, so it is not likely that updating to it will have an effect on fault condition occurrences.
Our machinist is looking into the orthogonality issue, and his preliminary findings are that it may be what is causing your Arlo to reach a fault so easily. He looked into the pocket for the socket head screw and found that two of the pockets are off center, but they are also enlarged enough to not be a factor. The main issue he is concerned about is that we do not specify how much torque to apply, but there is a maximum torque above which orthogonality will decrease and the drive shaft may bind.
We are updating the documentation to use a procedure that will ensure the correct tightness. Until we have it ready, try using all four screws, but only finger-tightening them. You can check for binding by detaching the wheel and using a GO command, with a value of 127, to run the motor continuously, then monitoring either the encoder speed with the SPD command or the motor current with an ammeter in series, and adjusting the tension of the screws to get the highest speed or lowest current.
In the event that your DHB-10 is overheating, you can add a heat sink to the board, with a mild non-conductive thermal adhesive, to cover all nine of the transistors in TO-252 packages, which is the largest transistor package used on the board. Advanced Thermal Solutions part number ATS-FPX054054013-110-C2-R1, which is available from Digi-Key, should fit nicely.
Regarding the battery, we always pay attention to it, and we operate mostly with fully-charged batteries. We also measure the voltage of the batteries after crashes, right now for instance at 12.89V.
Regarding the temperature, after our last crash, we have just tried to put a finger on each component of the motor board, and they were not especially hot (not a very precise measurement, I know).
Crashes can happen with a freshly charged, cold robot, in a matter of seconds.
Finger-tightening the screws does not seem like a possible solution, since the black encoder disk will not be well aligned. Here is a photo of what I achieve manually:
Something else: Now that we mainly operate without encoders connected to the motor board, we have also observed some motor board crashes even when no encoders are attached.
Let us know if there is any other test we could do on our side.
About the finger-tightening... are you doing that with the hex-tool, or literally by "finger" only ?
I'm sure David meant to use the tool- but just to tighten only until fully in, (we might say "gently tightened"), as opposed to "hard tightened" when you've really applied a lot of force, and in doing so perhaps distorted the alignment of the motor-to-block slightly.
I'm sorry if you got that already- just I noticed in the photo above one of the bolt heads sticking out a bit- so thought I should share the thought. (This assumes those bolt-heads are supposed to be flush when installed correctly).- updated: checked an Arlo here, and they are flush.
Your feedback could be a clue.
I've really not experienced the "wheel being harder to rotate", even with our bolts screwed all the way in.
And it seems to me they should be bolted all the way in. If you get wobble, that's got to be as bad as too tight. I think you already discovered that by your previous observation about the encoders not getting properly aligned.
Just thinking along a different line here....
The motor- does that touch any of the delrin parts around it? And does the motor appear to bolt parallel to the block?
What I'm thinking.... If the spacers are all exactly the same length?, and the motor housing is not hitting/touching anything?, then the block and motor should be perfectly square. Is that what you experience? (I'm thinking that if one of those parts is out-of-square, then you could get the "wheel hard to rotate" or binding issue.
Did you get time to check those spacers... if they are all the same length?
If you find one a bit shorter... could you try a small washer (or similar) to bring them up to equal length?
I'm thinking of mechanical issues that might cause the wheels to feel hard to rotate, and not having the motor aligned square to the blocks seems a plausible cause... So that's why I wonder if the spacers might be slightly different lengths (bad tolerance), which could be solved/proven by a washer as an interim test.
So if the above test of spacer length shows they are all the same length, then I'd suggest adding something close to 1mm thick washer on all 3 spacers anyway and test again.
I really hope this solves things for you. It seems like it might.
As VonSzarvas mentioned, "finger tight" means as tight as you can turn the screw or screwdriver with your fingers, as opposed to gripping a screwdriver in your hand and using your arm to tighten it. In the case of the Allen-head bolts, I would recommend turning the shaft of the Allen wrench with your fingers, as tight as you can.
Also, regarding the mounting, our machinist got back to me on the tightness and did find that it may be possible that the ridge on the drive shaft that fits against the bearing is out of tolerance. If this is the case, tightening down the motor block will put excessive pressure against the bearing, which will add significant load. If this is the case, adding something to shim the spacers, such as one or more washers, can remedy the issue. It will take some trial and error to figure out how thick of washers to use, because if they are too thick, the encoder won't line up. If only finger tightening helps, we can send you a set of washers to try out, or find you a source for them.
Regarding the operation of the DHB-10 itself, the temperature/voltage fault will trigger regardless of whether or not the encoders are in use, but the open-loop 'GO' command may continue to operate after the fault condition has occurred. Because of this, you are more likely to get an error message when using a closed-loop command, like 'GOSPD', but only using open-loop commands will not completely avoid the issue.
In case reducing the tightness isn't enough to stop the fault conditions from occurring, I have attached a modified version of the firmware that runs the motor PWM frequency at 2 KHz, instead of the normal 20 KHZ. This will lead to a 5% or so increase in efficiency, which will reduce the load, leading to both a lower chance of an over-temperature condition or an under-voltage condition.
I forgot to mention that when using the attached 2 KHz variation of the firmware, the motors will be significantly noisier than when using the default 20 KHz version. This is why we use the higher frequency; for most customers it is worth the ~5% reduction in efficiency to avoid the audible switching noise.
David, thanks for the new firmware. We have adapted it a bit more and landed on a PWM of 10kHz, which seems to be a good compromise for us (still a bit annoying noise, though).
We have experimented a bit with additional spacers. On the inner side, it does not work well, as visible in the following video, but we use them on the outer side for one of our wheels, which helps mitigating the fact that the holes are not perfectly aligned.
By implementing a number of measures, we have one robot that is currently running more or less OK (let's hope this will last):
- Use only 2 screws instead of 3 for the wheel assembly, and tightened just enough that it is still possible for the barrels to rotate
- Limit the rate of commands to the motor board, with at least 10ms between commands
- Limit to a max value of 220 for GOSPD command
- Limit to a max value of 100 for GO command
- Lower the PWM to 10kHz in the firmware of the motor board
We have broken a few encoders in the testing process, which we have replaced, as well as some fuses on the power board.
My colleague will do a bit more testing next week, and then there will be a vacation period.
Best regards,
Alexandre
Those washers are a bit on the thick side. If your drive shaft is out of tolerance, it won't move in or out at all, when the block is tightened down. Adding shims (e.g. washers) should just barely let the shaft move in and out. With as much movement as it had in your video, it won't be able to hold the encoder wheel in place, and the encoder wheel will be able to contact the infrared sensor, which can cause incorrect readings.
Regarding your note to limit the GOSPD command to 220; I would keep it even lower than that. The encoders have 144 positions per rotation, and the motors have a no-load maximum speed of approximately 95 RPM when running at a 100% power level. That comes out to approximately 228 positions per second with no load. With a load, and especially if the battery is anything less than fully charged, 220 positions per second may be unachievable. Commanding the DHB-10 to maintain an unachievable speed will create a motor error.
Also, you have mentioned that you needed to replace fuses. Can you provide more details on the circumstances under which they blew? The fuse is large enough, and the DHB-10 powerful enough, that it can indefinitely drive both motors at their stall current. There may be a commonality between the conditions that blew a fuse and the conditions that are causing hardware faults in the motor driver IC.
The fuse is designed to limit damage to the DHB-10 after current exceeds the absolute maximum it is capable of handling, but the fuse does not guaranteed damage will not occur. A situation extreme enough to blow the fuse could have also damaged the DHB-10. If damaged, it may not function as expected, which could possibly be a factor in the errors you are encountering.
In addition to what @Alkarex mentioned, we also changed the ACC command to the default value (512) instead of 1024. The robot (without any payload) works fine so far, even with encoders connected to the DHB-10. One thing we noticed before is that the robot would crash every time it changed speed while we hand-pushed it to the ground. So I will do more tests next week to see how it works with payload or force.
@David_Carrier, the 10A fuse broke when we accidentally assigned a high value (200) to the GO command, which we are not supposed to do.
I confirm that I am having the same problem.
To be sure I ordered a second motor-set with DHB-10,
Propeller board etc. After a week fiddling and exchanging
different parts etc. I think I found a direction to what is causing this.
It seems that DHB-10 motor2 (right wheel) output has a much lower error-tolerance
compared to motor1 output (left wheel) at least 10 fold it seems when using motor feedback (GOSPD?).
To make it even stranger, I am able to blow the 10 Amp fuse by blocking motor1
with my hand (I feel it has very powerful torque, of at least 10 fold compared to motor2).
The fuse wont blow when I block motor2, it will go into error status with very mild torque
(a bit too soon in my opinion).
When I exchange the motor1 and motor2 wiring to the DHB-10 board, the behavior also moves
to the opposite motor/wheel assembly.
In my mind everything seems to point to the DHB-10 controller. I also wasted a lot of time looking
at the mechanical alignment. At least one motor assembly (1 of 4) was indeed creating a lot of friction.
Hope to contribute and to find a solution.
The code I used:
////////////////////////////////////////////////////////////////
#include "simpletools.h" // Include simple tools
#include "arlodrive.h" // Include arlo drive
int main() // Main function
{
pause(3000); // Go for 3 seconds
drive_speed(0, 0);
pause(1000); // Stop
print("Program done!\n"); // Program done message
Regards,
Mars. (BeeClear)
I am also experiencing similar issues on my Arlobot, which was purchased late December of last year.
I have been watching this thread for a while, hoping the issue would sort itself out, but not much has happened here recently.
If anyone has learned anything new that has not already been discussed here, I would appreciate it if they could post it.
One thing I have not seen in this thread is a definitive statement as to whether or not anyone at or associated with Parallax has been able to reproduce any of the problems.
It is extremely easy for me to reproduce the problem by running the very first code snippet that was published in this thread: //DHB10-GOSPD.c
The key to solving any problem is being able to reproduce it. That does not seem to be a problem here.
1) I would like to see a statement from someone at or associated with Parallax something like "we have tested 5/10/20 arlo's with the //DHB10-GOSPD.c program, and have observed the problem on x number of the arlo's".
It is very possible that x might be zero, but even that would be good to know.
Today I installed a header on my DHB10 so I could use the prop plug to install the debug firmware provided by @David Carrier and was getting STATUS of -2 and -3, indicating that a fault was detected on the h-bridge controller, sometimes followed by a -4 as one might expect. It seems clear to me that one or more things (mechanical, electrical, or firmware) are marginal here.
2) I know nothing about the spin language, but I have spent some time examining the firmware and it appears to me that the firmware faults out based on a single read of the fault status from the h-bridge controller in the PDLoop routine.
I work in automotive electronics, and this is something that would almost never be done. It is too susceptible to noise.
I believe the firmware needs to debounce the MOTOR_L_FAULT and MOTOR_R_FAULT inputs before making a decision to shut down the motors. I do not have an oscilloscope available to get a sense for what the right numbers might be, but sampling inputs every 10msec and requiring 3 consecutive identical samples before changing the state of the debounced version of the signal is pretty standard. This is something I could do myself, but I have limited time and would appreciate it if someone from or associated with Parallax could do it for the community.
Could someone please do this?
Have not had time to look for mechanical issues yet, but seems clear that mechanical issues could lead to increased current draw from the motors, leading to increased electrical noise, which could cause faulting out of the firmware.
All I have time for now.
Regards,
Brad
Any spin experts out there?
Looking at the spin code for the GoSpeed routine, the following statement caught my attention:
I know next to nothing about spin, but looking at the "Sign-Extend 15 or Post-Set '~~'" section of the P8X32A-Web-PropellerManual-v1.2, it seems clear to me that will be -1 after the statement.
What is not clear to me is what the value of will be because of the "Post-Set" nature of the ~~ operator.
If the expression was , like in the example given in the propeller manual, it seems clear that would be equal to the previous/original value of + 2 (not -1+2). Is this correct?
Since the expression does not contain any additional operations, is this a special case where will somehow get set to -1 (which I believe is the intent) instead of the previous/original value of ?
Or am I missing something else?
Thanks,
Brad
(I apologize for the formatting.)
For reference:
Thanks for your understanding.
@Dave Hein: Thank you for the clarification!
I am struck by the inconsistency/asymmetry of the statement relative to the other nearby statements throughout the firmware. I can not help but wonder whether it should be instead. I do not yet understand what the significance of a value of -1 might be though.
Sit tight. My email got the attention of three Parallax employees. It is in the mix.
I have my ARLO to test. I need to try and reproduce the problems.
Maybe this has something to do with the resistance of the battery connection (length of cable, thickness, termination, or even the battery internal resistance according to age & temperature).
What about trying adding a capacitor at the VIN screw-terminals?
Perhaps something rated >20V and 100uF. Preferably low ESR if choosing an electrolytic, although try what you have available in the first instance. As long as the capacitor voltage rating exceeds your battery voltage by at-least 25%. | http://forums.parallax.com/discussion/comment/1432488/ | CC-MAIN-2019-30 | refinedweb | 4,053 | 68.4 |
On Thu, Nov 28, 2019 at 06:15:16PM +0100, Stefano Garzarella wrote: > Hi, > now that we have multi-transport upstream, I started to take a look to > support network namespace (netns) in vsock. > > As we partially discussed in the multi-transport proposal [1], it could > be nice to support network namespace in vsock to reach the following > goals: > - isolate host applications from guest applications using the same ports > with CID_ANY > - assign the same CID of VMs running in different network namespaces > - partition VMs between VMMs or at finer granularity > > This preliminary implementation provides the following behavior: > - packets received from the host (received by G2H transports) are > assigned to the default netns (init_net) > - packets received from the guest (received by H2G - vhost-vsock) are > assigned to the netns of the process that opens /dev/vhost-vsock > (usually the VMM, qemu in my tests, opens the /dev/vhost-vsock) > - for vmci I need some suggestions, because I don't know how to do > and test the same in the vmci driver, for now vmci uses the > init_net > - loopback packets are exchanged only in the same netns > > Questions: > 1. Should we make configurable the netns (now it is init_net) where > packets from the host should be delivered?
Yes, it should be possible to have multiple G2H (e.g. virtio-vsock) devices and to assign them to different net namespaces. Something like net/core/dev.c:dev_change_net_namespace() will eventually be needed. > 2. Should we provide an ioctl in vhost-vsock to configure the netns > to use? (instead of using the netns of the process that opens > /dev/vhost-vsock) Creating the vhost-vsock instance in the process' net namespace makes sense. Maybe wait for a use case before adding an ioctl. > 3. Should we provide a way to disable the netns support in vsock? The code should follow CONFIG_NET_NS semantics. I'm not sure what they are exactly since struct net is always defined, regardless of whether network namespaces are enabled.
signature.asc
Description: PGP signature
_______________________________________________ Virtualization mailing list Virtualization@lists.linux-foundation.org | https://www.mail-archive.com/virtualization@lists.linux-foundation.org/msg37325.html | CC-MAIN-2019-51 | refinedweb | 342 | 57 |
Feb 20, 2020 02:18 PM|Golia|LINK
Hello there,
I have problem using
Items on
public static void.
When I try to recover with a
for each the items element is not recognized.
How to do resolve this ?
Please, can you help me ?
My code below.
<asp:DropDownCheckBoxes <Style SelectBoxWidth="400" DropDownBoxBoxWidth="400" DropDownBoxBoxHeight="500" SelectBoxCssClass="body" DropDownBoxCssClass="body" /> <Texts SelectBoxCaption="[ ------- Select ------- ]" /> </asp:DropDownCheckBoxes> public class pnpropo { public List<System.Web.UI.WebControls.ListItem> propo { get; set; } } [WebMethod(EnableSession = true)] [ScriptMethod] public static void Savepnpro(pnpropo pro) { string xpro = string.Empty; foreach (System.Web.UI.WebControls.ListItem item in pro.propo) <<< line error { if (item.Selected) { xpro += item.Value + "; "; } } }
All-Star
47311 Points
Feb 20, 2020 02:55 PM|mgebhard|LINK
Simply, static methods cannot read instance members like server controls. This is a C# construct and you can learn the rules by going through the C# programming guide.
There are a few ways to solve the current problem, pass the selected dropdown value to the web method or use the UpdatePanel which is specifically designed for AJAX in Web Forms.
1 reply
Last post Feb 20, 2020 02:55 PM by mgebhard | https://forums.asp.net/t/2164363.aspx?Using+Items+definition+on+public+static+void+in+c+ | CC-MAIN-2020-24 | refinedweb | 194 | 51.95 |
RHEL5 nss ldap update cause stack size related failure
Bug Description
Hi Jeff,
We've been having a problem lately with caget and other CA clients crashing
due to stack overflows in the nss_ldap library. We're running RHEL5, and
there's a change in the latest nss_ldap library that puts a 128K buffer on the stack.
The change happened between nss_ldap version 42.el5 and the newer 42.el5_7.4.
We're mostly running EPICS 3.14.9, which by default for linux is allocating a small
stack for this in src/libCom/
the library is overwriting the stack leading to random crashes. I've checked 3.14.12,
and it appears this is still the default setting for linux.
Have you had any other reports of this crash?
Any reason why we shouldn't just use the default stack size?
Are there any plans to change this in upcoming EPICS releases?
Thanks,
- Bruce
On 12/12/2011 12:17 PM, Amedeo Perazzo via RT wrote:
> Queue/Owner: PCDS-Help [open] Nobody
> Requestors: Hill, Bruce<email address hidden> x4752 901/131B [PPA Eng EE]
> Ticket: https:/
>
> Transaction: Correspondence added by perazzo
>
> I agree with Michael having 128KB on the stack is _not_ a good idea and
> I agree with Booker that a 128KB stack size on a modern Linux system is
> probably too small.
>
> My guess is that EPICS is trying to reduce the footprint as much as
> possible given that it must run on embedded systems which can have very
> limited resources.
>
> Bruce, should we ask the EPICS community how they plan to handle this?
> If RHEL6 has the same nss_ldap code as the one that broke EPICS, the
> community will be forced to handle this problem eventually.
>
>
> On 12/12/11 11:55, <email address hidden> via RT wrote:
>> Queue/Owner: PCDS-Help [open] Nobody
>> Requestors: Hill, Bruce<email address hidden> x4752 901/131B [PPA Eng EE]
>> Ticket: https:/
>>
>> Transaction: Correspondence added by mcbrowne
>>
>> Well, it's the code that we're running... I'm not willing to say it's correct
>> though! You're absolutely right... these seem like very small stack sizes.
>>
>> Proof that this is what is running: the full routine without ellipses is:
>>
>> unsigned int epicsThreadGetS
>> stackSizeClass)
>> {
>> #if ! defined (_POSIX_
>> return 0;
>> #elif defined (OSITHREAD_
>> return 0;
>> #else
>> static const unsigned stackSizeTable[
>> {128*ARCH_
>> if (stackSizeClass
>> errlogPrintf(
>> return stackSizeTable[
>> }
>>
>> if (stackSizeClass
>> errlogPrintf(
>> return stackSizeTable[
>> }
>>
>> return stackSizeTable[
>> #endif /*_POSIX_
>> }
>>
>> Running gdb on psusr117:
>>
>> psusr117% gdb caget
>> GNU gdb (GDB) Red Hat Enterprise Linux (7.0.1-37.el5-
>> For bug reporting instructions, please see:
>> <http://
>> Reading symbols from
>> /reg/g/
>> (gdb) break main
>> Breakpoint 1 at 0x401d00: file ../caget.c, line 329.
>> (gdb) run
>> Starting program:
>> /reg/g/
>> warning: no loadable sections found in added symbol-file system-supplied
>> DSO at 0x2aaaaaac7000
>> [Thread debugging using libthread_db enabled]
>>
>> Breakpoint 1, main (argc=1, argv=0x7fffffff
>> 329 {
>> (gdb) x/20i epicsThreadGetS
>> 0x2aaaaaf5e670<
>> 0x2aaaaaf5e674<
>> 0x2aaaaaf5e677<
>> <epicsThreadGet
>> 0x2aaaaaf5e679<
>> lea 0xebfc(%rip),%rax # 0x2aaaaaf6d27c<
>> 0x2aaaaaf5e680<
>> 0x2aaaaaf5e682<
>> 0x2aaaaaf5e685<
>> 0x2aaaaaf5e689<
>> 0x2aaaaaf5e68a<
>> 0x2aaaaaf5e690<
>> 0x2aaaaaf6d000
>> 0x2aaaaaf5e697<
>> 0x2aaaaaf5e699<
>> <errlogPrintf@plt>
>> 0x2aaaaaf5e69e<
>> 0x2aaaaaf5e6a3<
>> 0x2aaaaaf5e6a7<
>> 0x2aaaaaf5e6a8: nopl 0x0(%rax,%rax,1)
>> 0x2aaaaaf5e6b0<
>> 0x2aaaaaf5e6b1<
>> 0x2aaaaaf5e6b4<
>> 0x2aaaaaf5e6b5<
>> (gdb) x/3d 0x2aaaaaf6d27c
>> 0x2aaaaaf6d27c<
>> (gdb)
>>
>> In any event, it isn't just returning 0, which would be the case if we were
>> using OSITHREAD_
>> --Mike
>>
>>
>>
>> Booker Bense via RT wrote:
>>
>> On Mon, 12 Dec 2011, <email address hidden> via RT wrote:
>>
>>
>>
>> /reg/g/
>> you will see that:
>>
>>
>>
>> Is this the correct code? Does anyone know why you are setting
>> the stacksize? It's generally not reccommended.
>> http://
>> Can you just recompile with OSITHREAD_
>>
>>
>> #if defined (_POSIX_
>> #if ! defined (OSITHREAD_
>> status = pthread_
>> &pthreadInfo-
>> checkStatusOnce
>> #endif /*OSITHREAD_
>> #endif /*_POSIX_
>>
>> I don't know all the details, but 128K seems very tiny compared
>> to current memory sizes. If I'm reading that page correctly,
>> all the local variables for the thread need to fit on the stack.
>>
>> Another solution might be to simply remove ldap from the
>> nsswitch file for hosts.
>>
>> - Booker C. Bense
>>
>>
>>
>>
>>
>>
>> Core was generated by `caget UND:R02:
>> ../../.
>
>
Should the system have different defaults specified in the build system depending on if its embedded linux arch or not?
Clarifying the issue.
Should the system have different defaults for OSITHREAD_
I would be happy to change the OSITHREAD_.
- Andrew
From Bruce:
I'm also satisfied for now with our fix, which is to use
the default stack size for all our linux architecture targets
by putting the def in CONFIG_SITE.
That has fixed the stack overflow in the nss_ldap lib for our
CA tools that run on linux, and our only embedded target is
RTEMS which doesn't use OSITHREAD_
I don't think this would be the right fix for sites with
embedded posix targets, whether linux or others.
Does this point to a need for embedded versions of the
configure/
This issue should also go to the full tech-talk list soon, as
there will likely be other RHEL5 users that will be getting
these crashes as they update their nss_ldap libs.
From Andrew
I would be happy to change the OSITHREAD_
64-bit CPUs because their virtual address space is big enough for
anything, but for 32-bit CPUs using the default stack size severely
limits the number of CA servers that a single client process can talk
to. According to the CVS log Marty changed the default from YES to NO
in 2004, which may have been when we were trying to get the APS Gateway
to run on a Linux box..
From Bruce
It appears that the 3 EPICS stack sizes if we don't use defaults
are 128K, 256K, and 512K. There's a lot of room between
these and the typical 8-10MB linux default. Why so small for
these stack sizes, which are only used for posix systems?
I'm about to commit a change to the 3.14 branch that will double the stack sizes on 32-bit systems and quadruple them on 64-bit. I believe this will solve Bruce's issue, which occurred on a 64-bit machine. I'm also significantly increasing the stack sizes on Windows which have been causing problems for Mark Rivers — the WIN32 version of osdThread.c currently uses the same stack sizes as vxWorks, which are tiny for an OS that has virtual memory. The Windows sizes will match the Posix ones, and include sizeof(void*) in their calculation.
- Andrew
Fixed in R3.14.12.3 release.
From Bruce
It seems to me that there's no good reason for us to use the
USE_DEFAULT_ STACK to YES
stack size feature in the CA lib for our linux based apps and tools,
so I defined OSITHREAD_
in the EPICS CONFIG_SITE file and rebuilt.
I did a couple of loops on psusr121 using the new caget and
nss_ldap version 42.el5_7.4 with over 1100 caget's and no
crashes.
EPICS 3.14.9-0.3.0, the one used by our current caget path,
is now rebuilt using default stack sizes.
I think we can close this now. | https://bugs.launchpad.net/epics-base/+bug/903448 | CC-MAIN-2015-40 | refinedweb | 1,174 | 67.99 |
I had read about SOAP and attended a few conferences on the subject about a year ago, but didn't have much of a need for the technology back then. Recently our voice applications platform provider, Bevocal, started to make their call logs and telephony functions available via SOAP. Ah, the perfect excuse to finally jump in and start writing some SOAP client software. I found these documents below to be extremely helpful in ramping me back up on XML, SOAP, and the power these bring to interop. I hope these can help you too. It seems that the Java community currently favors the Apache Axis SOAP API, so I decided to learn that. It helped that Bevocal's sample code also uses Apache Axis.
Apache Axis Java DocsThese javadocs really helped make sense of some of the sample Axis java code from the documents below.
Apache Axis Documentation and InstallationThe official Apache site with instructions, downloads, newsgroups, etc. Note that they didn't seem to have the javadocs online here. Those come with the download.Creating Web Services with Apache AxisIn a nutshell, how to deploy and access a web service with Apache Axis. Explains how to use Java2WSDL and WSDL2Java.
Java & XML O'Reilly book chapter on SOAPFull text off the chapter on SOAP
Simple Java Axis SOAP Examples
Using SOAP with Tomcat - On Java OReilly.com
Bevocal's Call Detail Record Access Service documentationA comprehensive example that uses Apache Axis Call class, namespaces, and serialization to obtain simple Java object types made up of complex data. Bevocal did a nice job documenting their SOAP interfaces similar to a Javadoc, and provides example of the request soap xml and response soap xml.
Web Services InteroperabilityExcellent primer on web services interfaces with perl, java and .net example. You gotta love how simple Perl makes things.Top Ten FAQs for Web ServicesVery nice overview of the web service XML syntax and vocabularyJava and Soap, O'Reilly BookAbsorb the first 50 pages of this book and you're all set. Pg 210 has an early Apache Axis client example. When this book was written, Axis wasn't stable yet, and Apache SOAP was more common.Dave Winer's 'A Busy Developers Guide to SOAP 1.1'Very straightforward SOAP XML examples and explanation.Description on namespacesNot having formally studied XML previously, the reference to namespaces and XML-Schema in most of the documentation was a bit confusing. I found this document helpful in getting up to speed.XFront tutorial on XML SchemasAnother very good resource to help make sense of all the namespace and XML Schema talk. XML Schemas are rather new, and will likely replace the use of DTDs someday. Excellent powerpoint slides that explain the difference between DTDs and XML-Schemas. Finally, a picture that's worth a thousand words. XML Schemas will make complete sense to you after reviewing slides 1 thru 44 of xml-schemas1.ppt. From the link shown click on the link in upper right "XML Schema Tutorial"Web Services class from University of North CarolinaProf John Smith of the University of North Carolina has a nice set of class notes online. There is even a link to an IBM powerpoint presentation on Apache Axis. This had some nice diagrams to for simple explanation. | http://radio.weblogs.com/0100236/stories/2003/04/21/documentsIUsedToRampUpOnUsingSoapWithJava.html | crawl-002 | refinedweb | 549 | 63.8 |
We used this feature to solve a problem internally and I was tempted to blog about this cool feature. Type forwarding allows you to move a type from one assembly to another without having to recompile the application that uses the original assembly.
Here is how this works:
Writing the original library (original.cs)
using System;
public class ClassToBeForwardedLater
{
public void SomeMethod()
{
Console.WriteLine("Inside ClassToBeForwardedLater in Original.dll");
}
}
Let us compile this to original.dll as below
csc /t:library original.cs
Building the application using the original assembly (application.exe)
using System;
class Application
{
public static void
{
ClassToBeForwardedLater ctbf = new ClassToBeForwardedLater();
ctbf.SomeMethod();
}
}
Let us compile the executable as below
csc /r:original.dll application.cs
Let us run application.exe. You will get the following output
Inside ClassToBeForwardedLater in Original.dll
Re-implementing the original assembly in a new assembly (newlibrary.cs)
using System;
public class ClassToBeForwardedLater
{
public void SomeMethod()
{
Console.WriteLine("Inside ClassToBeForwarded in newlibrary");
}
}
Let us compile the newlibrary.dll as below
csc /t:library newlibrary.cs
Re-publish the original assembly, but this time with a type forwarded
using System;
using System.Runtime.CompilerServices;
[assembly:TypeForwardedTo(typeof(ClassToBeForwardedLater))]
Let us compile the original library as below
csc /t:library /r:newlibrary.dll original.cs
Let us run application.exe again and see what we get.
Inside ClassToBeForwardedLater in newlibrary
The method in the newlibrary is invoked though the application was built with a reference to original.dll. If you do ildasm on application.exe and view the metadata section you will see a reference to original.dll.
Good simple example. Nice work.
Muthu
Thanks Muthu for the feedback. Makes me feel good when I read comments like this.
I read the COM Interop article on MSDN mag and thought I’d look up your blog. I learnt new things from your blog. I’ll keep checking frequently.
Thanks Krishnan. Did you find the article useful? I am planning on writing another article on COM interop, do you have any thoughts for this?
May be it’s just me, but i don’t see any practical use of this attribute. I mean if you can modify original dll then you might as well change somemethod() instead of forwarding to to other class. You mentioned that you used it to solve some problem internally, may be you can tell us what was that problem.
Swap – just because *you* can modify the original DLL, it doesn’t mean it’s practical or sensible to change "somemethod()" in that DLL, or even that modifying the method is your objective.
The first example I can think of, off the top of my head is of a library / helper assembly that contains many, many different classes. Over time it’s grown and contains, maybe a custom UI library from Company X, who’ve now realised that it’d be sensible to split it in two, for WinForms and WebForms. Instead of forcing a recompile on all down-level clients from doing this, three new DLLs can be issued, a UILibrary.dll (the original assembly), UILibrary.WinForms.dll and UILibrary.WebForms.dll. UILibrary.dll would maintain exactly the same interface/class structure as previously but point to classes in the two new DLLs. New clients could bind only the DLL they need (web apps use WebForms, win apps use WinForms, etc,..)
In fact – I have a strong suspicion I’ll be doing something pretty much identical to my example with some code at work shortly!
Very clear example, Thanks, but I have a doubt!
We specify
[assembly:TypeForwardedTo(typeof(ClassToBeForwardedLater))]
in the original assembly but where does this assembly point to the new one for that method.
In other words, how does the main application know that func1 method, which was in the original ‘aaa’ assembly is now in new ‘bbb’ and not in a new ‘ccc’ assembly.
Reagards,
Thanks for a nice example. It is very informative, but for making it also good for the beginers, include the type of project that they have to deal with. I was just going through some questions of MCTS and this was one of the questions. Good going !!!
hey jay according to my understanding the runtime knows in which assembly the Type has been forwarded to because we have included the reference in the compile statement
csc /t:library /r:newlibrary.dll original.cs
Another thing you can do is add the
using newlibrary.cs in the orignal.cs file
If you have more then one assembly having the same Type eg. ClassToBeForwardedLater. What you can do is give the complete name of the type like this
using AssemblyB.cs
[assembly:TyprFrowardedTo(typeof(AssemblyB.ClassToBeForwardedLater))]
so that the compiler knows which assembly we are talking about .
this is my first time writing a blog , I request all you out there to please comment and let me know if i am right or wrong
my email add is alimalik03a@yahoo.com
Ali is right. It gets the reference from the reference during compilation.
Gopinath, I didn’t understand your question. What do you mean by the type of project? Are you referring to the project in Visual Studio?
Thanks for such a nice explanation Mr. Sriram.
And also thanks for all the contributors to the comments specially Ali and Rob to explain further details regarding the subject.
Regards,
Ali.Net
contact: kalilanipk@yahoo.co.uk | https://blogs.msdn.microsoft.com/thottams/2006/11/16/type-forwarding-using-typeforwardedto-attribute-in-runtime-compilerservices/ | CC-MAIN-2017-51 | refinedweb | 901 | 58.38 |
Sorry,I checked it without sm.pls ignore this mail.
On Thu, Jun 19, 2008 at 4:32 PM, Lenny Verkhovsky <lenny.verkhovsky@gmail.com> wrote:
Hi,I found what caused the problem in both cases.--- ompi/mca/btl/sm/btl_sm.c (revision 18675)
+++ ompi/mca/btl/sm/btl_sm.c (working copy)
@@ -812,7 +812,7 @@
*/
MCA_BTL_SM_FIFO_WRITE(endpoint, endpoint->my_smp_rank,
endpoint->peer_smp_rank, frag->hdr, false, rc);
- return (rc < 0 ? rc : 1);
+ return OMPI_SUCCESS;
}I am just not sure if it's OK.
Lenny.On Wed, Jun 18, 2008 at 3:21 PM, Lenny Verkhovsky <lenny.verkhovsky@gmail.com> wrote:
Hi,I am not sure if it related,but I applied your patch ( r18667 ) to r 18656 ( one before NUMA ) together with disabling sendi,The result still the same ( hanging ).
On Tue, Jun 17, 2008 at 2:10 PM, George Bosilca <bosilca@eecs.utk.edu> wrote:Lenny,
I guess you're running the latest version. If not, please update, Galen and myself corrected some bugs last week. If you're using the latest (and greatest) then ... well I imagine there is at least one bug left.
There is a quick test you can do. In the btl_sm.c in the module structure at the beginning of the file, please replace the sendi function by NULL. If this fix the problem, then at least we know that it's a sm send immediate problem.
Thanks,
george.
On Jun 17, 2008, at 7:54 AM, Lenny Verkhovsky wrote:
Hi,
BW (100) (size min max avg) 100000 576.734030 2001.882416 1062.698408
#mpirun -np 100 -hostfile hostfile_w ./mpi_p_18551 -t bw -s 100000
mpirun: killing job...
( it hangs even after 10 hours ).
It doesn't happen if I run --bynode or btl openib,self only.
Lenny. | http://www.open-mpi.org/community/lists/devel/att-4196/attachment | CC-MAIN-2013-48 | refinedweb | 292 | 85.99 |
Download YouTube videos By Python in 5 lines !!!
Howdy, today we’ll create a program by python in 5 line can download YouTube videos, so let’s get started.
Pre-requisites
you'll need :
- python is installed, if you don’t install it visit this link
Let’s Begin
We’ll create two files,
video.py&
audio.py
video.py
This file will download your video with pictures & audio
audio.py
And This’ll download the video only with audio.
Open your favorite editor, and type
$ touch video.py audio.py
We’re going to install two packages
pafy &
youtube.dl
so in terminal
$ pip install pafy youtube.dl
Little remains, in
video.py we'll import
pafy
from pafy import new
new is a function download your video by add the URL on it, so create
url variable and it's well be input
url = input("Enter the url: ")
now create
video variable and it's value is
new function
video = new(url)
the video has different quality, we want the best quality, define new variable
dl = video.getbest()
dl.download()
you can test it
$ py video.py
now, go to
audio.py, it's like
video.py but has some differences
from pafy import new
url = input("Enter the link: ")
video = new(url)
let’s define
audio variable
audio = video.audiostreams
audio[0].download()
that’s it, you can now go to YouTube and download your favorite video/s.
good bye. | https://abdfn.medium.com/download-youtube-videos-by-python-in-5-lines-2967aef603a2 | CC-MAIN-2021-43 | refinedweb | 243 | 74.49 |
#include <vtkDebugLeaks.h>
vtkDebugLeaks is used to report memory leaks at the exit of the program. It uses the vtkObjectFactory to intercept the construction of all VTK objects. It uses the UnRegister method of vtkObject to intercept the destruction of all objects. A table of object name to number of instances is kept. At the exit of the program if there are still VTK objects around it will print them out. To enable this class add the flag -DVTK_DEBUG_LEAKS to the compile line, and rebuild vtkObject and vtkObjectFactory.
Definition at line 42 of file vtkDebugLeaks.h.
Reimplemented from vtkObject.
Definition at line 46 of file vtkDebugLeaks.h.
Definition at line 73 of file vtkDebugLeaks.h.
Definition at line 74 of file vtkDebugLeaks.h.
Call this when creating a class of a given name.
Call this when deleting a class of a given name.
Print all the values in the table. Returns non-zero if there were leaks.
Get/Set flag for exiting with an error when leaks are present. Default is on when testing and off otherwise.
Get/Set flag for exiting with an error when leaks are present. Default is on when testing and off otherwise.
Definition at line 82 of file vtkDebugLeaks.h. | https://vtk.org/doc/release/5.8/html/a00508.html | CC-MAIN-2020-16 | refinedweb | 205 | 69.99 |
Ian Griffiths
DevelopMentor
February 2004
Summary: Introduces the new Avalon rendering and composition technology, and provides details about the new programming model that allows user interfaces to be defined declaratively. (16 printed pages)
Repainting vs. Markup
Vector Graphics in Avalon
XAML Drawing Primitives
Brushes
Transformations
Other Goodies
Conclusion
Before "Longhorn," the techniques used by Microsoft Windows to get an application's user interface onto the screen had remained essentially unchanged since the earliest versions of Windows.
Traditionally, Windows has used a reactive repainting model for managing an application's appearance: applications have been required to provide code that can repaint any part of a window from scratch on demand. The application typically draws onto the screen, which means that if any part of the window later becomes obscured by some other window, the OS has no record of what was there. If the window later becomes visible again, Windows has to ask the application to draw the window again, as Figure 1 shows.
Figure 1. The Windows redraw model
Different languages and libraries each have their own ways of representing these repaint requests to developers. For example, Windows Forms has the Control.Paint event and associated OnPaint method. Visual Basic 6.0 has its Paint event. MFC has methods such as OnDraw. However, all of these ultimately rely on the same mechanism: repaint requests delivered by the OS to the application via the Win32 WM_PAINT message.
This repainting approach is flexible, in that the application can be in complete control of its appearance. It is also reasonably efficient in its use of memory, as the OS does not need to remember what is in each and every window. The only image that needs to be stored is the current contents of the screen, a job that is done by the graphics card. Any area of any window not currently visible (either because the window is obscured by another window, or because it is minimized) does not need to be stored at all.
However, this memory efficiency is not as important as it once was—the typical memory capacity of the average PC has increased substantially since the days of Windows 1.0. Of course, display resolutions have gone up too, as have color depths, which means the amount of memory required to store the contents of a window will also typically be higher. However, screen sizes have not increased as fast as memory capacity, so on a modern graphics card, only a small portion of the video memory is used for the screen buffer. The rest is typically used for texture maps when playing games or using DirectX-based applications, but is rather underused the rest of the time.
Since memory is no longer the scarce resource it was when Windows 1.0 came out, other graphics handling models look more attractive than they once did. One of the simplest alternative models is for the OS to store the appearance of a window in a bitmap, so that the application only needs to draw it once. (This is sometimes called a "backing store" approach.) In fact, Windows has been able to support this approach for some time: An application can get backing store as a side-effect of using the "layered" window functionality added to Windows 2000.
Moving away from the traditional repainting approach makes certain composition effects that were previously hard to achieve much easier. For example, it is possible to make a layered window partially transparent, so that the windows behind it show through. This allows windows to fade in and out of view by gradually changing their transparency. (Microsoft Outlook 2003 uses this technique to make pop-up e-mail alerts less obtrusive, for example.) While it would be technically possible to do this with a repainting approach, it would mean that both the transparent window and all the windows visible through it would need to be repainted every time any one of the windows moved or changed. This would be desperately slow. But as Figure 2 shows, with the layered technique, a window's contents can be stored in some spare video memory, and then the graphics card can create the semi-transparent composition of these windows. Since most cards have special-purpose hardware for doing composition, it can work very quickly, placing only a minimal load on the CPU.
Figure 2. Layered windows
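The per-pixel arithmetic behind that hardware composition step is straightforward. A one-channel sketch of the standard "over" blend (plain Python for illustration; the real work happens on the graphics card):

```python
def blend(top, bottom, alpha):
    # Weight the transparent window's pixel by its opacity and add
    # whatever lies behind it, weighted by the remainder.
    return round(alpha * top + (1 - alpha) * bottom)

# A 50%-transparent white window over a black background gives mid-grey.
print(blend(255, 0, 0.5))    # 128
print(blend(200, 100, 0.25)) # 125
```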
Moving away from the repainting approach also allows an application to use a simpler technique for drawing if it wants to—it can just treat the window as a canvas on which to draw images, safe in the knowledge that they will stay there. As Figure 2 shows, Windows does not need to send WM_PAINT requests to the application, because it stores the complete contents of the layered window. This can simplify drawing handling for some kinds of applications. (Visual Basic 6.0 offered this style of drawing without using layered windows. However, this facility was provided by the Visual Basic runtime, which still had to support normal Windows repainting on behalf of the application.)
The bitmap approach used by layered windows is not the only way of providing a more stable drawing model. Consider an HTML-based user interface. The application writer merely needs to provide the markup that describes their user interface. This has the same benefits that layered windows offer: the application can simply generate a particular page's contents once. There is no need for the application to be able to repaint the display simply because windows have been moved or resized—as Figure 3 shows, the HTML control retains the structure of the markup, which enables it to handle the repaint requests for the application. It is only when the user navigates to a different page that new markup will have to be supplied.
Figure 3. Markup-based drawing
A particularly interesting aspect of HTML is the DOM (Document Object Model). While a bitmap backing store simply retains an image of how the window should look, an HTML browser stores each of the text blocks and other elements used to make up the display. This makes it much easier for an application to modify the UI. With a bitmap, the only way to modify the existing image is to paint over what was there before, but with a DOM, the structure of the UI remains in a form that the program can access and modify at runtime, making it possible to go back and alter a part of the UI that you "drew" earlier. One problem with the HTML DOM, however, is that its set of primitives does not allow us to build particularly visually rich applications—HTML user interfaces tend to have to fall back on the use of bitmaps if they want to look good.
The Avalon graphics model offers the best features of all three of these approaches. As with HTML, Longhorn user interfaces can be built using markup, and at runtime the structure of this markup is available through an object model. (It is, of course, a .NET object model, rather than the HTML DOM.) However, unlike HTML, this technique has all the flexibility that would be available to classic Win32 or Windows Forms applications under the old repainting model. This is because the markup language in Longhorn is much richer than HTML—there is a section of the markup language devoted to graphics primitives, and it offers even richer functionality than GDI+. Furthermore, Longhorn offers a powerful composition model for combining multiple user interface elements. Where layered windows only supported composition of top-level windows, Longhorn offers much finer-grained composition. Each element in the UI, down to the individual graphical primitives, can have its own transparency settings. This means that for the first time, Windows has a user interface hierarchy that fully supports partially transparent controls.
The MSAvalon.Windows.Shapes namespace defines a number of element types that allow pictures and drawings to be described in markup alongside controls, text, and other elements. This endows XAML with vector drawing capabilities similar to SVG (Scalable Vector Graphics), a dialect of XML defined by the W3C for describing vector graphics.
A frequently asked question is: Why doesn't Avalon just use SVG? SVG has its own set of conventions for element and attribute names that is at odds with the existing .NET Framework class library. Furthermore, SVG elements were not designed to fit into the Avalon object model. By not using SVG, Avalon ensures that vector graphics can be mixed in with any other kind of markup in your XAML, and can take advantage of all of the same layout facilities. (Note that in the version of the Longhorn SDK documentation released at the 2003 PDC, the XAML elements used for vector drawing are sometimes referred to collectively as WVG (Windows Vector Graphics). However, Microsoft is no longer using this name, because it implies, incorrectly, that these elements are somehow distinct from all of the other elements supported by Avalon.)
The vector graphics elements are fairly straightforward to use. They let you describe pictures as a collection of drawing primitives. The following example shows a drawing made up of a rectangle and a circle. (There is no circle primitive; we just use the Ellipse element, making the two radii the same.)
<Canvas xmlns="">
<Ellipse CenterX="80" CenterY="80" RadiusX="50" RadiusY="50"
Fill="PaleGreen" Stroke="DarkBlue" StrokeThickness="5"/>
<Rectangle RectangleLeft="65" RectangleTop="0"
RectangleWidth="30" RectangleHeight="160"
Fill="Yellow" Stroke="Black" StrokeThickness="3"/>
</Canvas>
The results are shown in Figure 4. Here we have used a normal Canvas as the container for these shapes, but any kind of container can be used—shapes can be placed almost anywhere in the markup just like any other element.
Figure 4. Simple shapes in XAML
The ability to represent drawings in markup is important, as it provides a persistent way to represent scalable drawings without the limitations that bitmaps suffer from. While it is possible to store any image as a bitmap, they only look right when displayed at a certain size. Bitmaps are really nothing more than a collection of pixels to be displayed on screen, which causes problems if you try to display them at anything other than their natural size. If you enlarge them, you hit the problem that you are trying to fill more pixels on screen than there are in the bitmap. Various techniques exist for interpolating the extra pixels, but they all have a detrimental effect on the image quality—crisp edges can become blurred, and strange artifacts can appear. These problems will be particularly obvious on high resolution output devices such as printers. Even reducing the size of a bitmap causes problems—here there are too many pixels in the bitmap to fit in the space available, so details inevitably become lost. Again there are well-known techniques to mitigate the problems, but none is perfect.
If you use the shape classes in Avalon, your images will not suffer from these problems, because they do not deal with pixels—you are representing your pictures in what is often described as a "vector" format. Vector graphics formats work by specifying the primitive shapes that make up the picture, and describing their positions using coordinates (or "vectors"). A big advantage of vector formats is that when an image comes to be displayed, it can be drawn to look exactly right for the device on which it is being shown. This avoids the resizing artifacts that bitmaps suffer from. It also means that when the image is displayed on a very high resolution device such as a printer (printers typically have at least 10 times the resolution of a screen) a version of the image can be created that takes full advantage of the device's capabilities. Because vector images are represented as primitives and vectors, they can be drawn at any resolution and any scale with minimal loss of precision. (Rendering to any device will involve some loss of precision because no device is perfect. But a vector format itself does not introduce any scaling losses of its own, unlike a bitmap format.) This is why such formats are often described as "scalable."
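A toy comparison makes the scalability point concrete: scaling vector coordinates is exact arithmetic, while scaling a bitmap has to invent or discard pixels. (Illustrative Python sketch, not Avalon code.)

```python
def scale_vector(points, factor):
    # A vector shape scales by transforming its coordinates; nothing is lost.
    return [(x * factor, y * factor) for x, y in points]

def scale_bitmap_nearest(pixels, factor):
    # Nearest-neighbour enlargement of a one-dimensional row of pixels:
    # each source pixel is simply repeated, which is why edges turn blocky.
    return [pixels[int(i / factor)] for i in range(int(len(pixels) * factor))]

print(scale_vector([(10, 10), (100, 100)], 3))  # [(30, 30), (300, 300)]
print(scale_bitmap_nearest([0, 255], 3))        # [0, 0, 0, 255, 255, 255]
```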
XAML, with its shape elements, is by no means the first vector format to be invented. In fact Windows has long supported vector graphics in the form of Windows Metafiles and Enhanced Metafiles. (.WMF and .EMF files, respectively. .WMF files date back to 16-bit versions of Windows. .EMF files came along later to support the new drawing primitives that Win32 introduced.) These work by allowing a series of GDI drawing operations to be captured, optionally stored in a file, and later replayed. When used carefully, these offer the scalability benefits common to all vector formats. So you might ask why Longhorn didn't simply carry on using metafiles for vector graphics. The answer is that XAML offers two substantial benefits over metafiles.
First, .WMF and .EMF files are binary formats that are hard to work with. Only a masochist would attempt to create such a file from scratch. In practice it is usually necessary to write some code that will draw the picture you require in order to create a metafile. If this were the only issue it wouldn't be too bad, because some drawing packages are able to export drawings in these formats. (This makes creating simple drawings like the previous example an unnecessarily long-winded process, but it's not as bad as writing code to build a metafile.) However, the problems don't stop with the difficulty of creating a Windows metafile.
The second problem is that metafiles are difficult for software to manipulate. If you don't mind treating the drawing as an impenetrable lump of binary to be rendered without modification, much as a bitmap would be drawn, then this is not a big limitation. However, if you want to be able to change individual elements in the drawing, this is extremely hard work. There are APIs that will enumerate the contents of a metafile, and let you modify the way they are rendered, but these are cumbersome to use.
Avalon provides a good solution to both of these problems. Simple drawings are very straightforward to produce, as the previous example shows. (More complex drawings will doubtless be created in full-blown drawing packages, and converted to XAML.) Moreover, because shape elements are just part of the XAML markup tree, they are also extremely easy for the program to access and manipulate. Consider this example:
<DockPanel xmlns=""
           xmlns:def="Definition">
<HorizontalSlider DockPanel.Dock="Top" ValueChanged="OnValueChanged"/>
<Canvas DockPanel.Dock="Fill">
<Ellipse ID="myEllipse" RadiusX="75" RadiusY="50" CenterX="100" CenterY="80"
Fill="PaleGreen" Stroke="DarkBlue" StrokeThickness="5"/>
</Canvas>
<def:Code><![CDATA[
void OnValueChanged(object sender,
MSAvalon.Windows.Controls.ValueChangedEventArgs args)
{
myEllipse.StrokeThickness = new MSAvalon.Windows.Length(args.NewValue);
}
]]>
</def:Code>
</DockPanel>
This defines a user interface with a slider at the top, and an ellipse in the main window area, as shown in Figure 5. As the user changes the slider position, the thickness of the ellipse's outline changes. The code accesses the Ellipse element simply by using the name assigned with the ID attribute ("myEllipse").
Figure 5. Changing UI elements dynamically
This would have been extremely difficult to achieve with metafiles, since there is no easy way to write code to access and change individual elements in a drawing. But with XAML, all drawing elements are part of the user interface tree. By simply giving an element an ID attribute, we can access the corresponding object at runtime (an Ellipse object in this case) and change its properties directly, just as the ValueChanged event handler in this example does.
XAML provides a set of drawing primitives, many of which will seem familiar if you have used GDI+ in the past. These types are all defined in the MSAvalon.Windows.Shapes namespace, and are listed in the table below. Most of these are provided as a convenience—the Path primitive can be used to build most of the other shapes, but it is usually simpler to use the specialized shapes unless the full flexibility of a Path is required.
Table 1. Shape Elements
All these types derive from a common abstract base class, Shape. Shape provides properties to control aspects common to all drawing primitives. For example, there are properties that control the way in which the shape's outline is drawn, and others that determine the way that interior regions (where present) will be filled.
The Polygon, Polyline, and Path classes all allow complex shapes to be constructed. Polygon and Polyline both define shapes with any number of straight edges, the only difference being that the Polygon defines a closed shape (one with an interior) whereas Polyline defines an open shape. Both of these have a property called Points, which allows a list of vertices to be supplied:
<Polyline Stroke="Black" Points="0,0 30,0 60,30 60,60 30,60 0,30"/>
or:
<Polygon Stroke="Black" Points="0,0 30,0 60,30 60,60 30,60 0,30"/>
As this example shows, the list of points is just a series of coordinate pairs in a string. (If you access the Points property from code at runtime, however, it is represented by the PointCollection class, allowing indexed access to a collection of Point structures.) As the output below (Figure 6) shows, the only difference between Polyline and the Polygon is that the Polygon has closed the shape off. (Note that even if you make the first and last vertex of a Polyline coincide, the shape is still considered to be open, so even if you set a Fill color, it will never paint the interior of the shape. Polygons, by contrast, can always be filled.)
Figure 6. Polyline vs. Polygon
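The Points string is easy to take apart programmatically. A minimal parser for the coordinate-pair format might look like this (a Python sketch of the idea; at runtime Avalon itself parses the string into a PointCollection):

```python
def parse_points(s):
    # "0,0 30,0 60,30" -> [(0.0, 0.0), (30.0, 0.0), (60.0, 30.0)]
    pairs = s.split()
    return [tuple(float(c) for c in pair.split(",")) for pair in pairs]

pts = parse_points("0,0 30,0 60,30 60,60 30,60 0,30")
print(len(pts))  # 6
print(pts[2])    # (60.0, 30.0)
```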
Path is the most powerful of all of the shapes. It can be used to create any of the other shapes apart from Glyphs. It is used in a similar way to the Polyline and Polygon—just as they have a Points property allowing vertices to be supplied, the Path shape has a Data property that allows outline information to be specified. Because a Path outline can contain both straight and curved elements, the outline data can be a little more complex than the simple list of coordinates used by Polyline and Polygon.
There are two ways of setting a Path element's Data property. You can either use markup, with an element for each item in the path data, or you can use a string representation. Both approaches have the same result. The string format is the more succinct style, but the markup approach is arguably easier to follow. Either representation will cause a GeometryCollection object to be built at runtime, and assigned to the Path's Data property.
The GeometryCollection holds, unsurprisingly, a collection of Geometry objects. Geometry is an abstract class, and most of its concrete derived classes allow the shapes we looked at earlier to be created. For example, the LineGeometry, EllipseGeometry, and RectangleGeometry classes are equivalent to using the Line, Ellipse, and Rectangle shapes. (In fact those shape types are just shorthand for creating a Path shape containing the appropriate kind of Geometry.) For example, this:
<Ellipse CenterX="80" CenterY="80" RadiusX="50" RadiusY="50"
Fill="PaleGreen" Stroke="DarkBlue" StrokeThickness="5"/>
is equivalent to this:
<Path Fill="PaleGreen" Stroke="DarkBlue" StrokeThickness="5">
<Path.Data>
<GeometryCollection>
<EllipseGeometry Center="80, 80" RadiusX="50" RadiusY="50"/>
</GeometryCollection>
</Path.Data>
</Path>
The most powerful Geometry is the PathGeometry class. This is the basis of the Polygon and Polyline elements, but it allows other kinds of shapes to be created too. A PathGeometry may contain multiple shapes. It has a Figures property, which contains a collection of PathFigure objects, each of which represents an individual shape within the Path. Each individual PathFigure shape is determined by a Segments property, which contains a collection of PathSegment objects. Each PathSegment represents a segment of the shape's outline, and there are several classes derived from the abstract PathSegment class, according to whether the segment of the outline should be straight, or one of the various supported curve types. For example, the following XAML shows a Path containing a single figure whose outline has two straight edges and one curved edge:
<Canvas xmlns="">
<Path Fill="Blue" Stroke="Black" StrokeThickness="5">
<Path.Data>
<GeometryCollection>
<PathGeometry>
<PathGeometry.Figures>
<PathFigureCollection>
<PathFigure>
<PathFigure.Segments>
<PathSegmentCollection>
<StartSegment Point="10,10"/>
<LineSegment Point="100,100"/>
<BezierSegment Point1="200,200" Point2="10,200" Point3="10,100"/>
<CloseSegment />
</PathSegmentCollection>
</PathFigure.Segments>
</PathFigure>
</PathFigureCollection>
</PathGeometry.Figures>
</PathGeometry>
</GeometryCollection>
</Path.Data>
</Path>
</Canvas>
Figure 7. A path
The results are shown in Figure 7. In case you are wondering how this path can have two straight edges despite containing only one LineSegment, the second straight edge is added by the CloseSegment—this adds a closing straight line from the end of the BezierSegment back to the start of the figure.
The previous example illustrates how the markup approach to defining a path can rapidly get rather verbose. The equivalent text string version looks like this:
<Canvas xmlns="">
<Path Fill="Blue" Stroke="Black" StrokeThickness="5"
Data="M 10,10 L 100,100 C 200,200 10,200 10,100 Z" />
</Canvas>
This string format for the Data property allows each segment to be represented by a letter and a series of parameters. The letter denotes a command, and the parameters are typically coordinates. Table 2 shows a complete list of commands and their usage. Multiple figures may be introduced by closing the existing figure (with the Z command) and then starting a new one. In this particular example, there is a single figure, where the M command Moves to a starting point, the L command draws a Line segment, the C command draws a Cubic Bezier segment, and the Z command closes the figure.
Table 2. Path.Data Commands
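To see how the string form breaks down, here is a rough tokenizer that splits a Data string into commands and their numeric parameters (a sketch of the general idea, not the actual Avalon parser):

```python
import re

def tokenize_path(data):
    # Split "M 10,10 L 100,100 ... Z" into (command, [numbers]) pairs.
    tokens = []
    for cmd, params in re.findall(r"([A-Za-z])([^A-Za-z]*)", data):
        nums = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", params)]
        tokens.append((cmd, nums))
    return tokens

print(tokenize_path("M 10,10 L 100,100 C 200,200 10,200 10,100 Z"))
# [('M', [10.0, 10.0]), ('L', [100.0, 100.0]),
#  ('C', [200.0, 200.0, 10.0, 200.0, 10.0, 100.0]), ('Z', [])]
```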
When using this text format, coordinates can be specified in one of two forms. You may either use relative coordinates, which are relative to the previous point in the path, or "absolute" coordinates, which are relative to the location of the shape. Relative coordinates are indicated by using a lowercase letter for the command. (Relative coordinates are available only when you use this text format. If you use markup elements to describe the path data, you can only use absolute coordinates.)
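The effect of the lowercase, relative commands can be sketched in a few lines: each relative coordinate is added to the current point to recover an absolute position. (Illustrative Python; for simplicity it handles only commands that take a single coordinate pair.)

```python
def to_absolute(segments):
    # segments: list of (command, (x, y)) pairs. Lowercase commands are
    # relative to the current point; uppercase commands are absolute.
    x = y = 0
    out = []
    for cmd, (a, b) in segments:
        if cmd.islower():
            x, y = x + a, y + b
        else:
            x, y = a, b
        out.append((cmd.upper(), (x, y)))
    return out

# "M 10,10 l 90,90" ends at the same point as "M 10,10 L 100,100".
print(to_absolute([("M", (10, 10)), ("l", (90, 90))]))
# [('M', (10, 10)), ('L', (100, 100))]
```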
In the examples shown so far, fill and outline colors have been specified using named colors. This is very convenient when simple colors are all that is required, but Longhorn allows applications to use a wide range of effects in a shape's fill and outline. The Stroke and Fill properties of the Shape class are of type Brush, and Longhorn provides several different types of Brush, providing a variety of visual effects.
Brushes are used in Longhorn to determine how an area of the screen will be filled in. If you are familiar with GDI+, then a Longhorn MSAvalon.Windows.Media.Brush has roughly the same role as a GDI+ System.Drawing.Brush. However, unlike in GDI+, where brushes are typically transient objects used only inside repaint methods, many visual elements in Longhorn provide properties of type Brush that can be set using markup. Most properties for which you might expect to be able to specify a color will in fact accept any brush. So unlike in Windows Forms, where you are usually limited to plain colors when specifying the foreground and background for a control, in Avalon you can use any brush for the foreground and background.
The use of brushes rather than simple colors allows for a much more interesting user interface design. As well as supporting simple solid color paint, which uses a single uniform color (much like a GDI+ SolidBrush), Avalon offers many more interesting Brush types. LinearGradientPaint offers multi-stage linear fills. RadialGradient offers a gradient fill starting from a point and spreading out to a circular boundary. ImagePaint allows any bitmap to be used as a fill pattern.
The following example shows the use of the LinearGradientBrush to fill the interior of a Polygon. The brush changes color continuously, starting from red, changing to magenta, and finishing on cyan, as specified by the three GradientStop elements.
<Canvas xmlns="">
<Polygon Points="5,5 35,5 65,35 65,65 35,65 5,35" Stroke="Black">
<Polygon.Fill>
<LinearGradientBrush StartPoint="0,0" EndPoint="1,1">
<LinearGradientBrush.GradientStops>
<GradientStopCollection>
<GradientStop Color="red" Offset="0"/>
<GradientStop Color="magenta" Offset="0.5"/>
<GradientStop Color="cyan" Offset="1"/>
</GradientStopCollection>
</LinearGradientBrush.GradientStops>
</LinearGradientBrush>
</Polygon.Fill>
</Polygon>
</Canvas>
As you can see in Figure 8, the fill goes from the top left corner of the shape to the bottom right. This is controlled by the StartPoint and EndPoint properties—the StartPoint of 0,0 indicates that the fill should start at the top left corner, and the EndPoint of 1,1 indicates the bottom right.
Figure 8. A linear gradient brush
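Conceptually, each pixel's color is found by interpolating between the two gradient stops that bracket its offset. A single-channel sketch (illustrative Python, not the real rendering code):

```python
def gradient_sample(stops, t):
    # stops: list of (offset, value) pairs sorted by offset; t in [0, 1].
    for (o0, v0), (o1, v1) in zip(stops, stops[1:]):
        if o0 <= t <= o1:
            f = (t - o0) / (o1 - o0)
            return v0 + f * (v1 - v0)
    return stops[-1][1]

# Red channel of the example: red=255 at 0, magenta=255 at 0.5, cyan=0 at 1.
red = [(0.0, 255), (0.5, 255), (1.0, 0)]
print(gradient_sample(red, 0.75))  # 127.5, halfway from magenta to cyan
```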
One of the very useful features of vector graphics is that it is easy to apply transformations to them. Because the position of every part of every visual element is expressed through coordinates, it is fairly straightforward to rotate, scale, translate, or shear those elements.
In Avalon, transformations are applied to shapes using a TransformDecorator. This is an element that can be wrapped around some markup in order to apply a transformation to all of the elements in that markup. For example, consider the following markup:
<Canvas xmlns="">
<TransformDecorator>
<TransformDecorator.Transform>
<RotateTransform Angle="45"/>
</TransformDecorator.Transform>
<Text>Hello!</Text>
</TransformDecorator>
</Canvas>
This example contains just one visible element of type Text. But this Text element has been wrapped in a TransformDecorator that applies a 45-degree rotation, as Figure 9 shows. There are a number of different types of transformation that can be applied in this way: RotateTransform, ScaleTransform, TranslateTransform, and SkewTransform allow the most common transformation types to be applied easily. Alternatively you can combine multiple transformations by using a TransformCollection. (This would just contain a series of child transformations in the markup.) Or if you wish to apply a particular transformation matrix, you can use the MatrixTransform.
Figure 9. A RotateTransform in action
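Under the covers a RotateTransform is just a small matrix applied to every coordinate. A minimal sketch of the rotation arithmetic (plain Python, not the Avalon API, and ignoring any center-point offset):

```python
import math

def rotate(point, degrees):
    # Rotate a point about the origin, as a rotation transform does.
    a = math.radians(degrees)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

x, y = rotate((100, 0), 45)
print(round(x, 3), round(y, 3))  # 70.711 70.711
```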
Transformations can be applied to any markup element. For example, rather than rotating a single text element, we could have placed a DockPanel inside the TransformDecorator, and rotated an entire user interface. All of the built-in controls such as text boxes and buttons carry on working perfectly with arbitrary transformations applied. This makes it easy to enlarge a user interface, either to improve readability for accessibility reasons, or because the display device may have a very high resolution. (Displays with 200 dpi resolutions are available. Normal Windows applications are too small to be usable on such displays. But with Avalon, where any user interface can be transformed, it is easy to scale applications up so that they are large enough to be useful, but can still take advantage of the high resolution to improve the detail and clarity of the display.)
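The enlargement described here reduces to a single uniform scale factor fed to a ScaleTransform. For instance (a hedged sketch, assuming the conventional 96-dpi baseline at which Windows UIs are designed):

```python
def ui_scale(device_dpi, baseline_dpi=96):
    # A whole interface tree can be enlarged by wrapping it in a single
    # scale transform derived from the output device's resolution.
    return device_dpi / baseline_dpi

# On a 200 dpi display the UI would be drawn a little over twice as large.
print(ui_scale(200))
```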
In this article there has been space only to scratch the surface of what Longhorn can offer in the UI. As a quick taster, here are a few of the available features not discussed elsewhere in this article. Visual elements can be animated. Applications can enable interactive editing of the user interface at runtime. Other media types such as audio and video can be integrated into the display. There is support for 3-D visuals. Extensive text handling support is available, making it easy to format the text in the most appropriate way for the medium in which it is being presented. There is also an API for the "visual layer," allowing direct access to the underlying display technology that the markup builds upon.
The new graphics model introduced by Longhorn offers the best aspects of previous approaches to building the UI, such as Win32-style repainting, bitmap backing store, and HTML-based user interfaces. It provides a powerful set of drawing primitives and makes them very straightforward to use through markup, relieving application developers of the more tedious aspects of getting the user interface to appear on screen. At runtime, there is an object model that reflects the structure of the markup, making it very easy to manipulate the display and keep it up-to-date. All of this is built on top of a new display model that removes many of the restrictions of the old Win32 model, enabling much more interesting visual effects with its powerful new composition engine. | http://msdn.microsoft.com/en-us/library/aa480164.aspx | crawl-002 | refinedweb | 4,732 | 51.89 |
get bounding box from clones [SOLVED]
On 01/04/2015 at 09:11, xxxxxxxx wrote:
MODATA_SIZE is always None, and MODATA_MATRIX gives me the relative scale of an object. My plan is to use the original dimensions of the spread objects to do something with a Python effector.
On 02/04/2015 at 06:56, xxxxxxxx wrote:
Hello,
can you give us some code and some more information on what are you trying to do? What exactly is your question?
Best wishes,
Sebastian
On 02/04/2015 at 09:41, xxxxxxxx wrote:
I've got a simple Python effector that checks nearby clones and disables them if they are too close to each other, simply to avoid double trees on surfaces. Now I want to implement a feature that checks the radius of a tree before removing the other clones. At the moment I'm just using MODATA_MATRIX and comparing the distance between each pair of clones with .GetLength().
On 07/04/2015 at 01:24, xxxxxxxx wrote:
Hello,
what "trees" are you talking about?
You can get the bounding box dimensions of an object with GetRad(). The clones are based on child objects of the generator object. This generator object is accessible via GetGenerator() or simply using gen. You could then calculate the child index based on the MODATA_CLONE array.
Best wishes,
Sebastian
On 07/04/2015 at 16:25, xxxxxxxx wrote:
The distance effector needs an INT input and a static string:
import c4d
from c4d.modules import mograph as mo
from c4d import Vector
#Welcome to the world of Python

def main():
    md = mo.GeGetMoData(op)
    distance = op[c4d.ID_USERDATA,1]
    killcount = 0
    if op[c4d.ID_USERDATA,1]:
        if md == None:
            return False
        cnt = md.GetCount()
        marr = md.GetArray(c4d.MODATA_MATRIX)
        coll = md.GetArray(c4d.MODATA_COLOR)
        clone_offset = md.GetArray(c4d.MODATA_CLONE)
        clone_source = gen.GetDown()
        clone_source_arr = []
        while clone_source != None:
            clone_source_arr.append(clone_source)
            clone_source = clone_source.GetNext()
        clone_source_count = len(clone_source_arr)
        visible = md.GetArray(c4d.MODATA_FLAGS)
        divider = 1 / float(clone_source_count)
        offset_threshold = []
        for k in range(0, clone_source_count):
            offset_threshold.append(divider * k)
        offset_threshold.append(1.0)
        clone_source_count_minus = clone_source_count - 1
        for i in xrange(0, cnt):
            if visible[i] == 1:
                ### detect clone source:
                for k in range(0, clone_source_count):
                    if k == 0:
                        if clone_offset[i] < offset_threshold[k+1]:
                            #print str(k) + "=k == 0 " + str(clone_offset[i]) + " < " + str(offset_threshold[k+1]) + str(clone_source_arr[k])
                            radius = clone_source_arr[k].GetRad()
                            coll[i] = c4d.Vector(0,0,1)
                            break
                    elif k > 0 and k != clone_source_count_minus:
                        if (clone_offset[i] > offset_threshold[k]) and (clone_offset[i] < offset_threshold[k+1]):
                            #print str(k) + "=k > 0 !& " + str(clone_source_count) + " " + str(clone_offset[i]) + " > " + str(offset_threshold[k]) + " and " + str(clone_offset[i]) + " < " + str(offset_threshold[k+1]) + str(clone_source_arr[k])
                            coll[i] = c4d.Vector(0,1,0)
                            radius = clone_source_arr[k].GetRad()
                            break
                    elif k == clone_source_count_minus:
                        if clone_offset[i] > offset_threshold[k-1]:
                            #print str(k) + "=k ==" + str(clone_source_count_minus) + " " + str(clone_offset[i]) + " > " + str(offset_threshold[k]) + " and " + str(clone_offset[i]) + " < " + str(offset_threshold[k+1]) + str(clone_source_arr[k])
                            coll[i] = c4d.Vector(1,0,0)
                            radius = clone_source_arr[k].GetRad()
                            break
                for j in xrange(0, cnt):
                    if marr[i] != marr[j]:
                        dist = (marr[i].off - marr[j].off).GetLength()
                        dist = dist - radius.x
                        if dist < distance:
                            visible[j] = 0
                            killcount += 1
        md.SetArray(c4d.MODATA_FLAGS, visible, True)
        md.SetArray(c4d.MODATA_COLOR, coll, True)
    return True
On 08/04/2015 at 01:10, xxxxxxxx wrote:
Hello,
what exactly is your question please?
Best wishes,
Sebastian
On 08/04/2015 at 09:14, xxxxxxxx wrote:
First, the detection of the right clone seems quite complicated to me. Is there a way to do this without these strange for loops?
Second: the effector works fine, but self-collision between clones that share the same source object seems to ignore the distance threshold.
On 09/04/2015 at 08:23, xxxxxxxx wrote:
Hello,
I don't really understand what you did in your code but I guess you could calculate the child index from clone_offset and then simply access your clone_source_arr array.
For questions no longer related to this thread's original topic please open a new thread. Thanks.
Best wishes,
Sebastian
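Sebastian's suggestion can be sketched outside of Cinema 4D in plain Python: because MODATA_CLONE stores a 0.0–1.0 offset per clone, the chain of if/elif threshold checks collapses into a single arithmetic mapping (child_index is a hypothetical helper name, not part of the C4D API):

```python
def child_index(clone_offset, child_count):
    # Map a MODATA_CLONE offset in [0.0, 1.0] to the index of the
    # generator child the clone was sourced from.
    return min(int(clone_offset * child_count), child_count - 1)

# With three child objects, the offsets fall into thirds:
print([child_index(o, 3) for o in (0.0, 0.2, 0.4, 0.7, 1.0)])  # [0, 0, 1, 2, 2]
```

Inside the effector this would replace the whole threshold loop with something like radius = clone_source_arr[child_index(clone_offset[i], clone_source_count)].GetRad().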
On 13/04/2015 at 04:57, xxxxxxxx wrote:
This is what the if clone_offset[i] < offset_threshold[k+1]: stuff does.
solved | https://plugincafe.maxon.net/topic/8623/11280_get-bounding-box-from-clones-solved | CC-MAIN-2020-16 | refinedweb | 701 | 58.99 |
Caleb Seeling (1,189 Points)
Cant end the Task!
I am trying to solve the task. I tried almost everything I knew and learned in the video but can't solve it. Please help!
public class Spaceship {

    public String shipType;

    public void shipType(String shipType) {
        this.shipType = "SHUTTLE";
    }

    public String getShipType() {
        return shipType;
    }

    public void setShipType(String shipType) {
        this.shipType = shipType;
    }
}
1 Answer
Jason Anders, Treehouse Moderator (145,081 Points)
Hey Caleb,
You are on the right track, but there are a couple of things here.
First, the instructions explicitly state to create a constructor "with no parameters", but you have added a parameter.
Second, the constructor does not return anything, so you shouldn't have the void keyword as part of it.
Now, once those are corrected, it's just a matter of assigning the value. This is done as you would with any normal variable in Java.
Below is the correct constructor for your review with the notes above. Also, it's good to remember that Challenge instructions are very specific and will always need to be followed exactly... or you will get the Bummer.
public Spaceship() {
    shipType = "SHUTTLE";
}
Keep Coding! :) | https://teamtreehouse.com/community/cant-end-the-task | CC-MAIN-2020-40 | refinedweb | 190 | 66.74 |
Hide Specific Points
Hey, this is probably a dumb question but is there a way to hide specific points in a glyph? @ryan had a good idea, to just hide all points and redraw them using mojo.drawingTools which seems like the only way to go about this. TIA, C
mmm, strange question :) hiding points and redrawing is the best option, I guess. Could you give a use case for hiding specific points?
from mojo.UI import setGlyphViewDisplaySettings setGlyphViewDisplaySettings({"On Curve Points": False, "Off Curve Points": False})
It's a very specific use-case, but I'm updating my BezierSurgeon tool and it draws a faux-onCurve point along with new faux-offCurves. I didn't want the real points to interfere with the ones being drawn by the tool. See video for example
I would hide the other points while a surgeon point is placed. Unhide when the operation is cancelled or when a surgeon point gets promoted to a real point. | https://forum.robofont.com/topic/920/hide-specific-points/1 | CC-MAIN-2021-10 | refinedweb | 163 | 71.04 |
A grid geometry toolkit for controlling 2D sprite motion.
Constellation manages 2D point grids and pathfinding navigation. The library is designed to control dynamic sprite motion within a 2D environment. Constellation expands upon the motion system used in the What Makes You Tick? adventure game series. Features include point grid management, configurable pathfinding, geometric snapping and hit testing, and collection utilities.
For a grid builder demo, see lassiegames.com/constellation
Constellation root scope provides basic geometry operations and geometric primitives. All of these methods may be called directly on the Const namespace, and are passed simple arrays and/or Constellation primitives (Point & Rect).
Const.Point
var point = new Const.Point( x, y );
Constellation point primitive. Const.Point objects have the following properties:
x: horizontal coordinate of the point.
y: vertical coordinate of the point.
Const.Rect
var rect = new Const.Rect( x, y, width, height );
Constellation rectangle primitive. Const.Rect objects have the following properties:
x: horizontal coordinate of the rectangle origin.
y: vertical coordinate of the rectangle origin.
width: rectangle width.
height: rectangle height.
Const.distance
var result = Const.distance( point1, point2 );
Calculates the distance between two provided Const.Point objects.
Const.ccw
var result = Const.ccw( point1, point2, point3, exclusive? );
Tests for counter-clockwise winding among three Const.Point objects. Returns true if the three points trend in a counter-clockwise arc. Useful for intersection tests. Passing true for the optional exclusive param will pass balanced arcs.
Const.intersect
var result = Const.intersect( pointA, pointB, pointC, pointD );
Tests for intersection between line segments AB and CD. Returns true if the line segments intersect.
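These two tests are the classic CCW-based segment-intersection technique. Below is a standalone sketch of the underlying math, written here purely for illustration — it is not the library's source, which operates on Const.Point objects:

```javascript
// Orientation test: returns true when points a -> b -> c wind counter-clockwise.
// Each point is a plain {x, y} object, mirroring Const.Point.
function ccw(a, b, c) {
  return (c.y - a.y) * (b.x - a.x) > (b.y - a.y) * (c.x - a.x);
}

// Segments AB and CD intersect when A and B lie on opposite sides of CD
// and C and D lie on opposite sides of AB.
function intersect(a, b, c, d) {
  return ccw(a, c, d) !== ccw(b, c, d) && ccw(a, b, c) !== ccw(a, b, d);
}

const p = (x, y) => ({ x, y });
console.log(intersect(p(0, 0), p(4, 4), p(0, 4), p(4, 0))); // true: the diagonals cross
console.log(intersect(p(0, 0), p(1, 1), p(2, 2), p(3, 3))); // false
```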
Const.getRectForPointRing
var result = Const.getRectForPointRing( [points] );
Takes an array of Const.Point objects; returns a Const.Rect object of their bounding box.
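The bounding box amounts to a simple min/max pass over the ring; a hypothetical standalone equivalent (the real method returns a Const.Rect):

```javascript
// Computes the axis-aligned bounding box {x, y, width, height} of a point ring.
function getRectForPointRing(points) {
  const xs = points.map(pt => pt.x);
  const ys = points.map(pt => pt.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  return { x: minX, y: minY, width: Math.max(...xs) - minX, height: Math.max(...ys) - minY };
}

const box = getRectForPointRing([{ x: 2, y: 1 }, { x: 8, y: 5 }, { x: 4, y: 9 }]);
console.log(box); // { x: 2, y: 1, width: 6, height: 8 }
```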
Const.hitTestRect
var result = Const.getRectForPointRing( pointP, rect );
Takes a target point P and a rectangle object; returns true if the point falls within the rectangle.
Const.hitTestPointRing
var result = Const.hitTestPointRing( pointP, [points] );
Takes a target point P and an array of points defining a ring. Returns true if P falls within the ring of points. Hit test is performed using the ray-casting method.
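The ray-casting method casts a horizontal ray from P and counts how many ring edges it crosses; an odd count means the point is inside. A standalone sketch of that technique (not the library's source):

```javascript
// Ray-casting hit test over a ring of {x, y} points.
function hitTestPointRing(p, ring) {
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const a = ring[i];
    const b = ring[j];
    // Edge spans p's y-coordinate, and p lies left of the crossing point.
    const crosses = (a.y > p.y) !== (b.y > p.y) &&
      p.x < ((b.x - a.x) * (p.y - a.y)) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}

const square = [{ x: 0, y: 0 }, { x: 10, y: 0 }, { x: 10, y: 10 }, { x: 0, y: 10 }];
console.log(hitTestPointRing({ x: 5, y: 5 }, square));  // true
console.log(hitTestPointRing({ x: 15, y: 5 }, square)); // false
```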
Const.snapPointToLineSegment
var result = Const.snapPointToLineSegment( pointP, pointA, pointB );
Takes a target point P, and snaps it to the nearest point along line segment AB.
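Snapping is a projection onto the segment, clamped to its endpoints. A standalone sketch of the standard math, for illustration only:

```javascript
// Projects point p onto segment AB and clamps the result to the segment.
function snapPointToLineSegment(p, a, b) {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const lenSq = dx * dx + dy * dy;
  if (lenSq === 0) return { x: a.x, y: a.y }; // degenerate segment
  // t is the normalized position of the projection along AB.
  let t = ((p.x - a.x) * dx + (p.y - a.y) * dy) / lenSq;
  t = Math.max(0, Math.min(1, t));
  return { x: a.x + t * dx, y: a.y + t * dy };
}

console.log(snapPointToLineSegment({ x: 5, y: 3 }, { x: 0, y: 0 }, { x: 10, y: 0 })); // { x: 5, y: 0 }
console.log(snapPointToLineSegment({ x: -4, y: 2 }, { x: 0, y: 0 }, { x: 10, y: 0 })); // { x: 0, y: 0 } (clamped)
```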
Const.getNearestPointToPoint
var result = Const.getNearestPointToPoint( pointP, [points] );
Takes a target point P, and an array of points to search. Returns the nearest point to P within the point collection.
Constellation Grid must be instanced.
Const.Grid
var grid = new Const.Grid( data? );
Constructor for a new Constellation grid. All grid operations must be invoked on an instance.
Grid.Node
use... grid.addNode();
Constellation grid Node object; use a Grid instance to create and manage nodes. Grid nodes have the following properties:
id: unique identifier for the node. Don't touch this.
x: horizontal coordinate of the node.
y: vertical coordinate of the node.
to: Table of connections to other nodes. Seriously, don't touch this.
data: A data object of user-defined data attached to the node.
Grid.Polygon
use... grid.addPolygon();
Constellation grid Polygon object; use a Grid instance to create and manage polygons. Grid polygons have the following properties:
id: unique identifier for the node. Don't touch this.
nodes: Array of node ids defining the polygon ring.
data: A data object of user-defined data attached to the polygon.
grid.addNode
grid.addNode( x, y, {data}? ); or
grid.addNode( {data}? );
Adds a new Node object with specified X and Y coordinates, and an optional data object. Returns a reference to the new Node object. A data object may be provided as the sole parameter; if the data object contains an id property, that id will be assigned to the new node.
grid.getNodeById
grid.getNodeById( id );
Gets a node by id reference. Returns a Node object, or null for missing ids.
grid.getNodes
grid.getNodes( id, ... ); or
grid.getNodes( [id, ...] );
Gets one or more grid nodes by id reference, or maps an array of ids to grid nodes. Returns an array of Node objects. Invalid ids return as null.
grid.getNumNodes
grid.getNumNodes();
Specifies the number of nodes in the grid.
grid.hasNodes
grid.hasNodes( id, ... ); or
grid.hasNodes( [id, ...] );
Tests if one or more node ids, or an array of node ids, exists within the grid.
grid.joinNodes
grid.joinNodes( id1, id2, ... ); or
grid.joinNodes( [id1, id2, ...] );
Takes two or more node ids, or an array with two or more node ids, and joins them with connections. Returns true if changes are made.
grid.splitNodes
grid.splitNodes( id1, id2, ... ); or
grid.splitNodes( [id1, id2, ...] );
Takes two or more node ids, or an array with two or more node ids, and splits apart their common connections. Returns true if changes are made.
grid.detachNodes
grid.detachNodes( id, ... ); or
grid.detachNodes( [id, ...] );
Takes one or more node ids, or an array of node ids, and splits them from all of their respective connections. Returns true if changes are made.
grid.removeNodes
grid.removeNodes( id, ... ); or
grid.removeNodes( [id, ...] );
Takes one or more node ids, or an array of node ids, detaches them from all connections, then removes them each from the grid. Any dependent polygons are also removed. Returns true if changes are made.
grid.addPolygon
grid.addPolygon( [node ids], data? );
Takes an array of three or more node ids and creates a new Polygon object with the optional data object attached. Returns a reference to the new Polygon object.
grid.getPolygonById
grid.getPolygonById( id );
Gets a polygon by id reference. Returns a Polygon object, or null for missing ids.
grid.getPolygons
grid.getPolygons( id, ... ); or
grid.getPolygons( [id, ...] );
Gets one or more polygons by id reference, or maps an array of ids to polygons. Returns an array of Polygon objects. Invalid ids return as null.
grid.getNodesForPolygon
grid.getNodesForPolygon( id );
Takes a polygon id and returns an array of Node objects defining its polygon ring. Returns null if the specified polygon id is undefined.
grid.getNumPolygons
grid.getNumPolygons();
Specifies the number of polygons in the grid.
grid.removePolygons
grid.removePolygons( id, ... ); or
grid.removePolygons( [id, ...] );
Takes an array of polygon ids and removes them from the grid. All nodes defining the polygon rings are left intact. Pass true as the optional second argument to perform changes silently without triggering an update event. Returns true if changes are made.
grid.findPath
grid.findPath( startId, goalId, weightFunction?, estimateFunction? );
Takes two node ids defining start and goal nodes, then finds the shortest path between them. By default, routing favors the shortest path based on coordinate geometry. However, you may customize path routing using the optional weight and estimate functions:
weightFunction:
function( startNode, currentNode ) { return numericCost; }
This function is used to calculate the weight (or cost) of each new grid segment added to a path. The function is provided two Grid.Node objects as arguments, and expects a numeric segment weight to be returned. The pathfinder returns a path that accrues the lowest total weight. By default, Const.distance is used to measure the weight of each segment.
estimateFunction:
function( currentNode, goalNode ) { return numericEstimate; }
This function optimizes search performance by providing a best-case scenario estimate for each node's cost to reach the goal. This function is provided two Grid.Node objects as arguments: the current search node, and the goal node. An estimated cost-to-goal value should be returned. By default, Const.distance is used to estimate the best-case distance from a working path to the goal.
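To make the two hooks concrete, here is a small self-contained A*-style search over a plain node table — an illustrative sketch, not Constellation's implementation — showing where the weight and estimate functions plug in:

```javascript
// Minimal A* over a node table: { id: { x, y, to: [ids] } }.
// weight(a, b) prices each traversed edge; estimate(n, goal) guesses the
// remaining cost. Swapping these two functions changes the routing policy.
function findPath(nodes, startId, goalId, weight, estimate) {
  const open = [{ id: startId, cost: 0, path: [startId] }];
  const best = { [startId]: 0 };
  while (open.length > 0) {
    // Pop the entry with the lowest cost + estimate (a heap in real code).
    open.sort((p, q) =>
      (p.cost + estimate(nodes[p.id], nodes[goalId])) -
      (q.cost + estimate(nodes[q.id], nodes[goalId])));
    const current = open.shift();
    if (current.id === goalId) return current.path;
    for (const next of nodes[current.id].to) {
      const cost = current.cost + weight(nodes[current.id], nodes[next]);
      if (best[next] === undefined || cost < best[next]) {
        best[next] = cost;
        open.push({ id: next, cost, path: [...current.path, next] });
      }
    }
  }
  return null;
}

const distance = (a, b) => Math.hypot(b.x - a.x, b.y - a.y);
const nodes = {
  a: { x: 0, y: 0, to: ['b', 'c'] },
  b: { x: 1, y: 5, to: ['a', 'd'] }, // long detour
  c: { x: 1, y: 0, to: ['a', 'd'] }, // straight line
  d: { x: 2, y: 0, to: ['b', 'c'] },
};

// Default-style routing: shortest geometric distance.
console.log(findPath(nodes, 'a', 'd', distance, distance)); // [ 'a', 'c', 'd' ]
// Fewest-connections routing: every edge costs 1, no estimate.
console.log(findPath(nodes, 'a', 'd', () => 1, () => 0));
```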
grid.findPathWithFewestNodes
grid.findPathWithFewestNodes( startId, goalId );
Convenience method for running grid.findPath configured to find a path to goal using the fewest node connections rather than the shortest distance.
grid.snapPointToGrid
grid.snapPointToGrid( point );
Snaps the provided point to the nearest position among all joined line segments within the node grid. The snapped point will be plotted at the nearest available line segment, or the nearest grid point if no line segments are defined. Returns a meta object with the following attributes:
point: the snapped Point object.
offset: the snapped offset distance from the original point.
segment: an array of node ids defining the line segment on which the point was snapped.
grid.bridgePoints
grid.bridgePoints( startPt, goalPt, confineToGrid? );
Creates a grid path bridging between two Point objects that are not connected to the grid. This is a composite operation intended to take two dynamic input locations, and intelligently connect them through the existing grid structure. The steps of this algorithm operate as follows:
Test if start and goal are contained within a common polygon; if so, return a direction-connection array specifying: [start, goal].
(...not yet implemented...) Test if points fall within adjacent polygons, and if they may be connected directly through the common side.
Dynamically join start and goal points into the motion grid, then run pathfinder. Dynamic point inclusion works as follows:
If start or goal points fall within a polygon, then they'll be connected to their encompassing polygons' node rings.
Otherwise, start and goal points will create tether nodes that snap and join to the grid.
The bridgePoints method will always return an array of point objects starting with the originally specified startPt. The array will also contain additional path positions, and finally the goalPt. Optionally, you may specify confineToGrid as true, at which time the goalPt will be adjusted to either fall within a polygon area or snap to a grid line.
grid.getNearestNodeToPoint
grid.getNearestNodeToPoint( point );
Finds and returns the closest grid Node object to the specified Point position. Performs an optimized sorted search rather than brute-force distance comparisons.
grid.getNearestNodeToNode
grid.getNearestNodeToNode( id );
Finds the next closest grid node to the specified node id. Similar to getNearestNodeToPoint, except that the input is a node id rather than a point object.
grid.hitTestPointInPolygons
grid.hitTestPointInPolygons( point );
Returns true if the provided point intersects any Polygon objects within the grid.
grid.getPolygonHitsForPoint
grid.getPolygonHitsForPoint( point );
Tests a Point object for intersections with all Polygon objects in the grid, then returns an array of polygon ids that encompass the point.
grid.getNodesInPolygon
grid.getNodesInPolygon( id );
Takes a polygon id and tests it for intersections with all nodes in the grid, then returns an array of the contained node ids. Nodes that compose the polygon's ring will be included in the returned array, even though their edge positions may fail a mathematical hit test.
grid.getNodesInRect
grid.getNodesInRect( rect );
Tests a Rect object for intersections with all nodes in the grid, and returns an array of the contained node ids.
Constellation includes implementations of several common collection management functions for working with arrays and objects. These are very similar to Underscore.js methods, although their implementations may vary.
Const.utils.size
Const.utils.size( object );
Counts the number of items in an array, or the number of properties on an object.
Const.utils.contains
Const.utils.contains( object, value );
Accepts an array or object and a value to search for. Returns true if an array contains the provided value, or an object defines a key for the specified value.
Const.utils.each
Const.utils.each( object, iteratorFunction, context? );
Iterates an array or object with the provided function. The iterator is passed ( value, index ) for arrays, and ( value, key ) for objects. An optional scope context may be provided in which to run the iterator.
Const.utils.map
Const.utils.map( object, mutatorFunction, context? );
Iterates an array or object with the provided mutator function. The mutator is passed each value, and returns a modified value to replace the original within the collection. An optional scope context may be provided in which to run the iterator.
Const.utils.all
Const.utils.all( array, testFunction, context? );
Iterates through the provided array and performs a test function on each value. Returns true if all values pass the test. | https://www.npmjs.com/package/constellation | CC-MAIN-2016-44 | refinedweb | 1,967 | 51.44 |
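The semantics described above can be sketched roughly as follows — hypothetical re-implementations for illustration; the library's own versions may differ internally:

```javascript
// Iterate arrays with (value, index) and objects with (value, key).
function each(obj, fn, context) {
  if (Array.isArray(obj)) {
    obj.forEach((value, index) => fn.call(context, value, index));
  } else {
    Object.keys(obj).forEach(key => fn.call(context, obj[key], key));
  }
}

// Map each value through the mutator, preserving array/object shape.
function map(obj, fn, context) {
  if (Array.isArray(obj)) return obj.map((value, i) => fn.call(context, value, i));
  const out = {};
  each(obj, (value, key) => { out[key] = fn.call(context, value, key); });
  return out;
}

// True only when every array value passes the test.
function all(array, test, context) {
  return array.every(value => test.call(context, value));
}

console.log(map([1, 2, 3], n => n * 2));       // [ 2, 4, 6 ]
console.log(all([2, 4, 6], n => n % 2 === 0)); // true
```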
My Move to Kotlin
Back in '98, when I started my CS education, Java was the first programming language we were taught. Before that I had experimented with QBasic, C and assembly, but that was the moment where I started to really grow into being a software engineer. While I made some excursions to other languages (PHP, C#, JavaScript and even ColdFusion), Java was always the 'base' I returned to. My great love, as it were. But now she's fallen out of fashion and I'm moving on to greener pastures: Kotlin.
Introduction
You might notice quite a big gap between this post and the previous one. There are multiple reasons for this (a project taking up a lot of my energy is one of them), but a big one is that in my personal projects I don't really feel like using Java anymore, while Java is the common theme in most of my posts here. This is also reflected in the traffic my blog is getting; the "how to do X" type posts are the most popular.
My definitive move towards Kotlin started in November last year with the (awesome, check them out!) Advent of Code contest of 2017. I use these to challenge myself to learn new things. Back in 2015 I used the challenge to learn Scala and last year I started using Kotlin to solve the 2017 challenges (link if you want to check them out).
While doing the 2017 challenges I was actually having so much fun with Kotlin that I started working on the 2016 challenges as well!
Java
Likes
Don't get me wrong: I still like Java. It's just as strong a language as it was a year ago and the improvements being made are excellent. Having used local type inference in Scala and Kotlin I feel it's a great idea. The Java 8 additions, streams and lambdas, make me never want to do a pre-8 project ever again. I strongly prefer strongly typed languages for anything non-trivial; I feel they make me more productive.
But more importantly; what I like about Java is the ecosystem and community. Its ecosystem fully embraced open source and it is reflected in many mature, well written frameworks and libraries that help us Java developers be as productive as a developer can be. I strongly disagree with claims that dynamic languages make you more productive; the small amount of characters saved has to be paid for with much more extensive tests for all the run-time errors your compiler can’t help you with.
In the Java ecosystem there’s never just one way to do something. Just have a look at the amount of microservice frameworks available. While you can get caught with a case of analysis paralysis if you’re not careful, in my opinion the more choice the better!
Last but not least; the JVM is a beast. Java is fast. Does multithreading really well. So as a Java developer it is relatively easy to add a lot of value while still delivering safe and performant applications.
Dislikes
Java has a reputation of being overly verbose. The Java stewards are careful in taking new developments into consideration. With good reason, too; the Python 2 versus 3 split shows the effect that huge breaking changes can have. But while I do understand the reasoning behind this, I do feel that some changes are made too slowly or in a poor fashion. It took Oracle a long time to add more functional constructs to Java, long after C# got them. Only now, with Java 10, are we getting local type inference, but I will probably never understand the omission of 'val' in favour of 'final var'. Both Scala and Kotlin already showed that this works just fine.
This can be seen in the community as well; the Java community can be quite conservative. There are still developers who are vehemently against the new lambda syntax for example. And local type inference is a hotly debated matter on reddit.
The standard API is also showing its age. As an example; Java 9 finally introduced convenience factory methods for Lists, Sets and Maps (so we no longer have to use Collections.singletonList!), but especially the one for Map is rather limited. Instead of introducing a Pair<A,B> generic which could then be used as a vararg to build a map, the Map.of() factory method only goes up to 10 pairs. More and you'll have to use Map.ofEntries() instead.
By itself this is just a small nitpick, but overall these small annoyances add up.
Kotlin
I've tried out numerous other languages: Scala, for example, but also Go, and I even gave Node.js a shot. But most of these were 'missing' something. Generally it was the ecosystem; Scala came pretty close to the quality I was used to, but for most ecosystems, Go and Node.js specifically, the tooling was horrible. It was not just a matter of them being immature but of tooling moving in a completely wrong direction (for example pulling master branches directly into production code in lieu of 'dependency management').
Kotlin on the other hand did scratch the 'new and exciting' itch, while letting me access all of the ecosystem I've grown accustomed to (and fond of). Unlike Scala, the interop works both ways: Kotlin code can use Java code, but the reverse is also true: Java code can use Kotlin code just fine. So, let's discuss some of the things I really like!
Null Safe Type System
Let’s just start with the big one; Kotlin is null-safe by design. While you can have nullable values, it is not the default, and it is always a conscious decision.
Since it is part of the type system itself in most cases null checks are not needed anymore.
It removes a lot of ObjectUtils.allNotNull() boilerplate.
What’s also cool is how smart the compiler is. A (contrived) example:
fun someFunction(myValue: String?) {
    if (myValue == null) {
        return
    }
    println(myValue.take(10))
}
In the example above I can just access myValue; the compiler knows that at that time it can’t be null.
No Checked Exceptions
Sometimes called Java's biggest mistake. While I disagree on that (nulls are bigger), I would say it is probably the second biggest. All the way back in 2000-something, one thing I preferred about C# was that every exception was a runtime exception. While checked exceptions sound good on paper, being forced to handle stuff, in most cases they are just extra boilerplate. For beginners it might be good to be told constantly that when you're opening a file something can go wrong; in my experience developers tend to develop a feel for this quite fast. This has led to a weird trend in the Java ecosystem where you often see checked exceptions being wrapped in unchecked ones.
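A quick illustration of the difference: Files.readAllLines declares IOException in Java, so a Java caller must catch it or declare throws. From Kotlin the same call compiles without any handler, and you opt into handling failures only where it makes sense. The firstLineOrNull helper below is made up for this sketch:

```kotlin
import java.io.IOException
import java.nio.file.Files
import java.nio.file.Paths

fun firstLineOrNull(path: String): String? =
    try {
        // IOException is unchecked from Kotlin's point of view; this try
        // block is our choice, not a compiler demand.
        Files.readAllLines(Paths.get(path)).firstOrNull()
    } catch (e: IOException) {
        null
    }

fun main() {
    println(firstLineOrNull("does-not-exist.txt")) // null
}
```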
Immutable By Design
Every reference in Kotlin is either a variable or a value (or a constant but I’ll leave that out for now). A variable can be reassigned after its definition. A value can’t. So as a developer you always make a simple choice whether something is going to change somewhere down the line or it doesn’t. Same amount of characters so no 'final' clutter anywhere:
val a = 1
var b = 2
b = 3
Simple and clean.
The same applies to collections. List<>, Set<> and Map<> are all immutable. If you want a collection that you can add to, you need to explicitly use the mutable versions. This can be done by using the builder methods:
val a = listOf(1, 2, 3) // Immutable
val b = mutableListOf(4, 5, 6)
b += 7 // b is now [4, 5, 6, 7]
Or by creating a copy:
val a = listOf(1, 2, 3) // Immutable
val b = a.toMutableList()
b += 7 // b is now [1, 2, 3, 7]
Local Type Inference
One of my biggest gripes with the Java language maintainers is the weird 'designed by committee' choices they make. While I understand their struggles, it would have been so awesome if they would have adopted both var and val in Java 10. Unfortunately we’re now locked out of that permanently, in Java 10 you either use 'var' or you use 'final var'. And it can also only be used within methods; so it is not consistent at all.
Anyway; with Java 10 we now also have local type inference which added the 'var' keyword. Kotlin (and many many other languages) had this for a while now. I think it is strange to hear people complain about local type inference in that it would make your code less readable; in my opinion if you need the type of a variable within a method to understand what it is/does it is a clear sign your code is not well written. I’ve seen code like this quite often:
List<Person> list = someService.findPersons();
list.forEach(p -> doSomethingWith(p));
That’s just bad naming. Especially a few lines lower it is probably not clear anymore what 'list' contains. The Kotlin equivalent, but with proper naming:
val persons = someService.findPersons()
persons.forEach { doSomethingWith(it) }
With good consistent naming local type inference makes code more clean, concise and readable in my opinion. Kotlin still leaves the option to specify the type, which is sometimes needed if the type can’t be inferred. An example:
val persons: List<Person> = someService.findPersons()
persons.forEach { doSomethingWith(it) }
And of course your IDE can always tell you the type of something if you really need to know, so in my opinion it is just a must-have in a modern language.
Smart Casts
Kotlin has smart casting. This means that within the scope of a type check the compiler knows the type of an object. While casting doesn't happen often, it is convenient nonetheless:
fun makeSound(animal: Animal) {
    if (animal is Cow) {
        animal.moo()
    }
}
The compiler knows that within the if block the animal is actually an instance of Cow and you can call Cow-specific methods without having to cast. The compiler did it for you.
String Templating
Where in Java you’d often use String.format to format strings with variables, Kotlin has string templating built in. An example:
val name = "Jill"
val grades = listOf(90, 80, 70)
println("$name has ${grades.size} grades with an average of ${grades.average()}")
As you can see, if you want to just include a variable directly you can just prepend a $ and it is included. If you want to do somewhat more complex stuff you can use the ${} format to include pretty much anything.
Multi-line String Constants
And when you have awesome string templating, you also want convenient multi-line strings. And of course, Kotlin has them. Frankly, this is one of the things where I simply do not understand why it has not been in Java for a decade; sometimes it makes you wonder if the language designers actually use the language.
Multi-line strings are extremely nice in tests, for example if you want to post some JSON to an endpoint:
val someValue = 42
val json = """
    {
        "someValue": $someValue,
        "doubledValue": ${someValue * 2}
    }
"""
post(json)
In Java I generally advocate for storing test JSON outside the test classes. Because Kotlin has multi-line strings that handle quote characters just fine, inline JSON is perfectly readable.
Automatic Properties
Kotlin recognizes getters and setters as properties, even if they are inside Java classes. This is again handled by the smart compiler for you. Normally in a Kotlin class you define properties as properties:
class MyClass(val name: String, private val number: Int)

val instance = MyClass("Jill", 42)
println(instance.name)
You can actually define a class inside a method and you don’t need to include braces if you don’t have anything to put inside of it. This shows a simple class with name and number members. Name is available as a property on the outside, number is not, since it is private.
But let's say we have a mixed codebase and/or are using a Java library. How does this work? Well, Kotlin's compiler creates properties for you. A Java class:
public class MyJavaClass {
    private final String name;
    private final int number;

    public MyJavaClass(String name, int number) {
        this.name = name;
        this.number = number;
    }

    public String getName() {
        return name;
    }
}
First of all, just look at the difference in boilerplate you need to create a simple POJO when you're not using Lombok!
This class can be used inside Kotlin the exact same way:
val instance2 = MyJavaClass("Jill", 42)
println(instance2.name)
In neither case is this property assignable; it is read-only.
Short Hand Function Definitions
In Kotlin, functions can have a long and a shorthand form. The latter is suitable for simple functions that just operate on their inputs. As an example, the long version of a multiple function:
fun multiply(a: Int, b: Int): Int {
    return a * b
}
Can also be expressed as:
fun multiply(a: Int, b: Int) = a * b
No need for braces, a return statement or a return type; it is inferred from the result of the expression after the =.
Default and Named Arguments
Function arguments can have defaults. It is something that’s included in a lot of languages. Java, unfortunately, isn’t one of them. Kotlin however, as awesome as it is, does have them:
fun divide(a: Int = 1, b: Int = 1) = a / b

divide() // 1
This can save you from having to implement a lot of function overloads with different numbers of arguments.
What’s also neat is that arguments can be passed in any order if you name them:
divide(b = 10, a = 1000) // 100
Data Classes
Finally we can just get rid of Lombok! In a previous project it was one of the selling points of Kotlin for us. We used Lombok extensively for DTOs/domain classes to generate getters, setters and constructors, and Lombok was the main factor in preventing us from updating to Java 9.
A data class in Kotlin is used to model plain objects that mainly are there to model data or state. Normally they have no behaviour. A POJO (or in this case, POKO!). An example data class:
data class Pet(val name: String, val age: Int)
Just a simple, immutable (if you want a mutable data class you can use var over val), class that contains a Pet’s data.
What's nice is that you also get some stuff for free this way. A data class will get toString(), equals() and hashCode() methods implemented for you.
Also, you get a nice copy() function that you can use to create a modified copy of an immutable data class:
val myData = Pet("Max", 6)
val myNewData = myData.copy(age = 7)
What's also really nice is that these data classes are incredibly suitable to be used as DTOs in a REST interface, both as output and input. Jackson Databind can, with the Jackson Kotlin plugin, deserialise JSON (or YAML or XML) into these data classes for you.
Destructuring Declarations
Another nifty feature that data classes tie into is destructuring declarations. What’s that? Quite simple. Ever seen code like this:
val pet = petService.findPet("Max")
val name = pet.name
It's not uncommon to get some kind of return value where you actually want to use just one property in a lot of places, so you store it in a separate value for use later. In Kotlin you can do this instead:
val (name, age) = findPet("Max")
Any object that has componentN() functions can be decomposed into its separate components.
You can create these functions (marked as operators) yourself for any class, but data classes get them for free as well.
Local type inference works here too; name is a String and age an Int, because those are the return values of Pet.component1() and Pet.component2() respectively.
If you’re not interested in one of the values you can use the underscore like in Scala:
val (name, _) = findPet("Max")
Now only name will be available. Many built-in types and collections also support this:
val (a, b) = listOf(1, 2, 3)
This gets the first two elements of a List as a and b.
What happens if you destructure a list with too few elements? It throws an IndexOutOfBoundsException.
If/When (switch) Expressions
In Kotlin, if and when (switch) blocks are not statements, they are expressions.
So they can be used inline for their return value. A simple example is how this has replaced the ternary operator in Java:
fun defaultIfNull(s: String?, default: String) = if(s == null) default else s
Again you see how this also ties in well with the shorthand notation of functions. You can do the same for when (Kotlin's version of switch) statements. Let's define an enum of colours we want to map to their hex codes:
enum class Color { RED, BLUE, GREEN }
We can write a map function like this:
fun mapColor(c: Color) = when (c) {
    Color.RED -> "#FF0000"
    Color.GREEN -> "#00FF00"
    Color.BLUE -> "#0000FF"
}
I personally love how clean and readable this is.
The rules are simple; if you want to use an if or when block as an expression all paths need to be covered. So in the case above the compiler would not accept it if I left out the Color.RED map. The same is true for an if-else-if statement; it must be clear to the compiler that all paths are covered.
Functional Programming
While I would not place Kotlin in the FP category of languages, it does offer a wonderful mix of OO and FP principles that makes it easy and convenient for programmers to use a functional approach in handling their logic. It almost pushes you towards decomposing the problem into small functions with a single responsibility. It pushes you towards immutable data classes and collections that are null-safe. And then it lets you compose all this into a solution by applying your functions on collections of data through map and reduce functions.
Kotlin pushes you towards immutable collections. In our day to day work it is actually not that often that we modify a collection in place. In most of my projects a lot of the logic of getting something from a database, changing it a bit and then outputting it to JSON can typically be expressed as map operations on a stream of data.
Where in Java you have to call .stream() on any collection and then later .collect() it again, Kotlin handles this for you. An example in Java:
var numbers = List.of(1, 2, 3);
var doubled = numbers.stream().map(n -> n * 2).collect(Collectors.toList());
We create a list of numbers and then double them. Fortunately we can use Java 10; if you're stuck on 8 it looks like this:
List<Integer> numbers = Arrays.asList(1, 2, 3);
List<Integer> doubled = numbers.stream().map(n -> n * 2).collect(Collectors.toList());
4 years ago I thought the above was awesome (so much better than writing a for-each loop and stuffing the results into a new collection). But now in Kotlin I just want to do this:
val numbers = listOf(1, 2, 3)
val doubled = numbers.map { it * 2 }
Clean; stripped of all boilerplate.
And when you do two maps in a row, you can avoid creating an entire 'in between' collection by switching to a lazy sequence with asSequence(); the data then flows through as a stream that gets collected into a List again at the end.
Quality of life
I can actually go on for quite some time with more fun stuff. Kotlin adds sensible operator overloading (you can now just add two BigIntegers together with + or add two lists together).
Lazy loading of data.
Infix functions.
Extension functions (want to add a .toHexadecimal() method to List<Int>? You can!).
Global functions.
Pairs.
The list goes on.
There is just so much nice stuff there.
If a collection has an isEmpty() function it will have the inverse isNotEmpty() too.
You can write a reduce function to get the sum of a collection of numbers, but there is no need to, it is all built in.
And this is what made me switch. Kotlin is obviously made by developers to solve the problems they encounter when working with Java. Working in Kotlin and then going back to Java makes Java feel stale, old, and more 'designed by committee' than created for developers. The Kotlin developers obviously took a good look at Java, Scala, C#, Python, and a ton of other languages to see what works and what doesn’t. Kotlin is an amalgam of the 'it works' bits of those languages, while at the same time they managed to not fall into the same traps those languages did fall into.
Conclusion
I love trying out new languages and frameworks. Some I have positive experiences with (Scala, for example); some were mostly negative (Go and Node.js). Kotlin is different in that there is nothing that really bothers me, and I’m someone that is easily bothered by inefficiencies. Kotlin doesn’t attempt to reinvent wheels poorly; they didn’t bother creating their own build system, since Maven and Gradle work just fine. They didn’t get bored of developing Kotlin halfway through and end up with something like Go. For me, Kotlin is the perfect everyday language: it solves roughly all the problems I have with Java and on top of that gives me a ton of goodies I never knew I wanted until I started using them.
And most importantly; we still have access to the entire Java ecosystem. Java’s ecosystem in my opinion is unique in its maturity and openness. Libraries are in most cases of high quality and have a solid professional community working on them. Spring, the de-facto enterprise framework, has picked up Kotlin as a first class citizen. The Android community has picked Kotlin as the primary language over Java. And Java itself is profiting from the competition; in the last few versions we got local type inference for example. Competition is good for progress.
What is also nice is to see that, with many senior Java engineers being enthusiastic about Kotlin, dev managers are also slowly warming up to the idea. The two way interop here is a killer feature; you can slowly add Kotlin to your system without having to convince someone you will need to rewrite a large part of your application, 'just because'. In my opinion Kotlin is more of a Java dialect than a separate language. Aside from the compiler there isn’t anything you need to change. It actually works fine on Java 8; which can be a selling point for some companies who are still stuck on 8.
If you are a Java developer and haven’t given Kotlin a serious try by now, give it a shot. Push through the initial hard bit where you are not familiar with the syntax. Especially when you are using IntelliJ, the transition is very smooth.
class Bob
  def self.hey(remark)
    case remark
    when nothing?(remark)
      'Fine. Be that way!'
    when yell?(remark)
      'Whoa, chill out!'
    when question?(remark)
      'Sure.'
    else
      'Whatever.'
    end
  end

  private

  def self.question?(remark)
  end

  def self.yell?(remark)
  end

  def self.nothing?(remark)
  end
end
A very typical case statement. However, notice the redundancy in using remark for the case statement, while also passing it to the helper methods we created. In my opinion, the case statement should allow the when statements to be able to work with what was assigned to case. But in the example above, that would not work.
Enter Lambdas
So what we want to achieve is to remove the remark parameter from the methods, while also calling the methods without needing to pass the remark, because it's already being referenced in the case statement. Ruby lambdas allow us to achieve this.
Because when statements can take lambdas and call them automatically, we can redefine the helper methods to return a lambda. Each lambda will perform the necessary logic to determine if the remark is a question, a yell, or nothing.
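Here is a quick aside demonstrating the mechanism (separate from the exercise solution): case compares each when candidate to the tested value using the === operator, and Proc#=== calls the proc with that value.

```ruby
# `case` compares each `when` candidate with the tested value via `===`,
# and Proc#=== calls the proc with that value.

question = ->(r) { r.end_with?('?') }

puts question === 'How are you?'  # => true, same as question.call('How are you?')

reply =
  case 'How are you?'
  when question then 'Sure.'
  else 'Whatever.'
  end

puts reply  # => Sure.
```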
private

def self.question?
  -> (r) { r[-1] == '?' }
end

def self.yell?
  lambda do |r|
    r = r.delete('^A-Za-z')
    return false if r.empty?
    r.split('').all? { |c| /[[:upper:]]/.match(c) }
  end
end

def self.nothing?
  -> (r) { r.strip.size.zero? }
end
Use the new lambda literal syntax for single line body blocks. Use the lambda method for multi-line blocks. Source: Rubocop Ruby style guide
Notice that we have removed the remark parameter from the methods. The methods no longer take any arguments; this gives the methods more sense and meaning when defining them as "boolean" methods by appending the ? suffix to the method name.
By making the methods return a lambda, the when statement will receive this lambda when calling the method, and will proceed to call the lambda by passing the object used in the case statement automatically. Pretty neat!
We can then re-write our case statement like this:
def self.hey(remark)
  case remark
  when nothing?
    'Fine. Be that way!'
  when yell?
    'Whoa, chill out!'
  when question?
    'Sure.'
  else
    'Whatever.'
  end
end
Much better!
package SimBlocks;

    require Exporter;
    use ERDB;
    @ISA = qw(Exporter ERDB);
    @EXPORT = qw();
    @EXPORT_OK = qw();
    use strict;
    use Tracer;
    use PageBuilder;
    use Genome;
    use FIG_Config;

>, which represents a set of similar regions, and B<Contig>, which
represents a contiguous segment of DNA. The key relationship is
B<ContainsRegionIn>, which maps groups to contigs. The mapping
information includes the DNA nucleotides as well as the location in
the contig.

For a set of genomes, we will define a I<common group> as a group that
is present in most of the genomes. (The definition of I<most> is
determined by a parameter.) The similarity block database must be able
to answer two questions, taking as input two sets of genomes.

=over 4

=item (1)

Which groups are common in both sets and which are common in only one
or the other?

=item (2)

For each group that is common in both sets, how do the individual
regions differ?

=back

This class is a subclass of B<ERDB>, and inherits all of its public
methods.

= $FIG_Config::simBlocksDB;
        }, "$directory/SimBlocksDBD.xml");
    # Return it.
    return $retVal;
}

1Blocks, \%set2Blocks, \%bothBlocks) = $simBlocks->CompareGenomes(\@set1, \@set2); >>

Analyze two sets of genomes for commonalities. The group blocks
returned will be divided into three hashes: one for those found only
in set 1, one for those found only in set 2, and one for those found
in both sets. Each hash is keyed by group ID and will contain
B<DBObject>s for B<GroupBlock> records with B<HasInstanceOf> data
attached, though the genome ID in the B<HasInstanceOf> section is not
generally predictable.

=over 4

=item set1

Reference to a list of genome IDs for the genomes in the first set.

=item set2

Reference to a list of genome IDs for the genomes in the second set.

=item RETURN

Returns a list of hashes of lists. Each hash is keyed by group ID, and
will contain B<DBObject>s for records in the B<GroupBlock> table.
Groups found only in set 1 will be in the first hash, groups found
only in set 2 will be in the second hash, and groups found in both
sets will be in the third hash.

=back

=cut
#: Return Type @%;
sub CompareGenomes {
    # Get the parameters.
    my ($self, $set1, $set2) = @_;
    # Declare the three hashes.
    my (%set1Blocks, %set2Blocks, %bothBlocks);
    # Our first task is to find the groups in each genome set.
    my %groups1 = $self->BlocksInSet($set1);
    my %groups2 = $self->BlocksInSet($set2);
    # Create a trailer key to help tighten the loops.
    my $trailer = "\xFF";
    # The groups are hashed by group ID. We will move through them in key order
    # to get the "set1", "set2", and "both" groups.
    my @blockIDs1 = (sort keys %groups1, $trailer);
    my @blockIDs2 = (sort keys %groups2, $trailer);
    # Prime the loop by getting the first ID from each list. Thanks to the trailers
    # we know this process can't fail.
    my $blockID1 = shift @blockIDs1;
    my $blockID2 = shift @blockIDs2;
    while ($blockID1 lt $trailer || $blockID2 lt $trailer) {
        # Compare the block IDs.
        if ($blockID1 lt $blockID2) {
            # Block 1 is only in the first set.
            $set1Blocks{$blockID1} = $groups1{$blockID1};
            $blockID1 = shift @blockIDs1;
        } elsif ($blockID1 gt $blockID2) {
            # Block 2 is only in the second set.
            $set2Blocks{$blockID2} = $groups2{$blockID2};
            $blockID2 = shift @blockIDs2;
        } else {
            # We have a block in both sets.
            $bothBlocks{$blockID1} = $groups1{$blockID1};
            $blockID1 = shift @blockIDs1;
            $blockID2 = shift @blockIDs2;
        }
    }
    # Return the result.
    return (\%set1Blocks, \%set2Blocks, \%bothBlocks);
}

=head3 BlocksInSet

C<< my %blockList = $simBlocks->BlocksInSet($set); >>

Return a list of the group blocks found in a given set of genomes. The
list returned will be a hash of B<DBObject>s, each corresponding to a
single B<GroupBlock> record, with a B<HasInstanceOf> record attached,
though the content of the B<HasInstanceOf> record is not predictable.
The hash will be keyed by group ID.

=over 4

=item set

Reference to a list of genome IDs. All blocks appearing in any one of
the genome IDs will be in the list returned.

=item RETURN

Returns a hash of B<DBObject>s corresponding to the group blocks found
in the genomes of the set.

=back

=cut
#: Return Type %%;
sub BlocksInSet {
    # Get the parameters.
    my ($self, $set) = @_;
    # Create a hash to hold the groups found. The hash will be keyed by group ID
    # and contain the relevant DB object.
    my %retVal = ();
    #)');
            # Store it in the hash. If it's a duplicate, it will erase the
            # old one.
            $retVal{$blockID} = $block;
        }
    }
    #

= ();
    # Query all the regions for the specified block.
    my $query = $self->Get(['ContainsRegionIn'], "ContainsRegionIn(from-link) = ?", $blockID);
    # Loop through the query.
    while (my $region = $query->Fetch) {
        # Get this region's data.
        my ($contigID, $start, $dir, $len, $dna) =
            $region->Values(['ContainsRegionIn(to-link)', 'ContainsRegionIn(position)',
                             'ContainsRegionIn(direction)', 'ContainsRegionIn(len)',
                             'ContainsRegionIn(content)']);
        # Insure it belongs to one of our genomes.
        my $genomeID = Genome::FromContig($contigID);
        my $found = grep($_ eq $genomeID, @{$genomes});
        # If it does, put it in the hash.
        if ($found) {
            my $location = "${contigID}_$start$dir$len";
            $retVal{$location} = $dna;
        }
    }
    # Return the result.
    return %retVal;
}

=head3 TagDNA

C<< my $taggedDNA = SimBlocks::TagDNA($pattern, $dnaString, $prefix, $suffix); >>

Convert a DNA string from the B<ContainsRegionIn> relationship to the
actual DNA, optionally with markings surrounding them. The DNA string
will contain only the values corresponding to the question marks in
the pattern, which should be taken from the DNA string's parent
B<GroupBlock>. The resulting DNA sequence is built by copying

=over 4

=item pattern

DNA pattern from which the positions of variance are to be taken.

=item dnaString

String containing DNA string to be marked.

=item prefix

String to be inserted before each position of variance.

=item suffix

; }
    # Return the result.
    return $retVal;
}

1;
Internet Engineering Task Force (IETF)                       D. Schinazi
Request for Comments: 9298                                    Google LLC
Category: Standards Track                                    August 2022
ISSN: 2070-1721
Proxying UDP in HTTP
Abstract
This document describes how to proxy UDP in HTTP, similar to how the HTTP CONNECT method allows proxying TCP in HTTP. More specifically, this document defines a protocol that allows an HTTP client to create a tunnel for UDP communications through an HTTP server that acts as a proxy.

Table of Contents

1.  Introduction
2.  Client Configuration
3.  Tunneling UDP over HTTP
  3.1.  UDP Proxy Handling
  3.2.  HTTP/1.1 Request
  3.3.  HTTP/1.1 Response
  3.4.  HTTP/2 and HTTP/3 Requests
  3.5.  HTTP/2 and HTTP/3 Responses
4.  Context Identifiers
5.  HTTP Datagram Payload Format
6.  Performance Considerations
  6.1.  MTU Considerations
  6.2.  Tunneling of ECN Marks
7.  Security Considerations
8.  IANA Considerations
  8.1.  HTTP Upgrade Token
  8.2.  Well-Known URI
9.  References
  9.1.  Normative References
  9.2.  Informative References
Acknowledgments
Author's Address
1. Introduction
While HTTP provides the CONNECT method (see Section 9.3.6 of [HTTP]) for creating a TCP [TCP] tunnel to a proxy, it lacked a method for doing so for UDP [UDP] traffic prior to this specification.
This document describes a protocol for tunneling UDP to a server acting as a UDP-specific proxy over HTTP. UDP tunnels are commonly used to create an end-to-end virtual connection, which can then be secured using QUIC [QUIC] or another protocol running over UDP. Unlike the HTTP CONNECT method, the UDP proxy itself is identified with an absolute URL containing the traffic's destination. Clients generate those URLs using a URI Template [TEMPLATE], as described in Section 2.
This protocol supports all existing versions of HTTP by using HTTP Datagrams [HTTP-DGRAM]. When using HTTP/2 [HTTP/2] or HTTP/3 [HTTP/3], it uses HTTP Extended CONNECT as described in [EXT-CONNECT2] and [EXT-CONNECT3]. When using HTTP/1.x [HTTP/1.1], it uses HTTP Upgrade as defined in Section 7.8 of [HTTP].

In this document, we use the term "UDP proxy" to refer to the HTTP server that acts upon the client's UDP tunneling request to open a UDP socket to a target server and that generates the response to this request. If there are HTTP intermediaries (as defined in Section 3.7 of [HTTP]) between the client and the UDP proxy, those are referred to as "intermediaries" in this document.
Note that, when the HTTP version in use does not support multiplexing streams (such as HTTP/1.1), any reference to "stream" in this document represents the entire connection.
2. Client Configuration
HTTP clients are configured to use a UDP proxy with a URI Template [TEMPLATE] that has the variables "target_host" and "target_port". Examples are shown below:{target_host}/{target_port}/{target_host}&p={target_port}{?target_host,target_port}
Figure 1: URI Template Examples
The following requirements apply to the URI Template:
- The URI Template MUST be a level 3 template or lower.
- The URI Template MUST be in absolute form and MUST include non- empty scheme, authority, and path components.
- The path component of the URI Template MUST start with a slash ("/").
- All template variables MUST be within the path or query components of the URI.
- The URI Template MUST contain the two variables "target_host" and "target_port" and MAY contain other variables.
- The URI Template MUST NOT contain any non-ASCII Unicode characters and MUST only contain ASCII characters in the range 0x21-0x7E inclusive (note that percent-encoding is allowed; see Section 2.1 of [URI]).
- The URI Template MUST NOT use Reserved Expansion ("+" operator), Fragment Expansion ("#" operator), Label Expansion with Dot- Prefix, Path Segment Expansion with Slash-Prefix, nor Path-Style Parameter Expansion with Semicolon-Prefix.
Clients SHOULD validate the requirements above; however, clients MAY use a general-purpose URI Template implementation that lacks this specific validation. If a client detects that any of the requirements above are not met by a URI Template, the client MUST reject its configuration and abort the request without sending it to the UDP proxy.
The original HTTP CONNECT method allowed for the conveyance of the target host and port, but not the scheme, proxy authority, path, or query. Thus, clients with proxy configuration interfaces that only allow the user to configure the proxy host and the proxy port exist. Client implementations of this specification that are constrained by such limitations MAY attempt to access UDP proxying capabilities using the default template, which is defined as " udp/{target_host}/{target_port}/", where $PROXY_HOST and $PROXY_PORT are the configured host and port of the UDP proxy, respectively. UDP proxy deployments SHOULD offer service at this location if they need to interoperate with such clients.
3. Tunneling UDP over HTTP
To allow negotiation of a tunnel for UDP over HTTP, this document defines the "connect-udp" HTTP upgrade token. The resulting UDP tunnels use the Capsule Protocol (see Section 3.2 of [HTTP-DGRAM]) with HTTP Datagrams in the format defined in Section 5.
To initiate a UDP tunnel associated with a single HTTP stream, a client issues a request containing the "connect-udp" upgrade token. The target of the tunnel is indicated by the client to the UDP proxy via the "target_host" and "target_port" variables of the URI Template; see Section 2.
"target_host" supports using DNS names, IPv6 literals and IPv4 literals. Note that IPv6 scoped addressing zone identifiers are not supported. Using the terms IPv6address, IPv4address, reg-name, and port from [URI], the "target_host" and "target_port" variables MUST adhere to the format in Figure 2, using notation from [ABNF]. Additionally:
- both the "target_host" and "target_port" variables MUST NOT be empty.
- if "target_host" contains an IPv6 literal, the colons (":") MUST be percent-encoded. For example, if the target host is "2001:db8::42", it will be encoded in the URI as "2001%3Adb8%3A%3A42".
- "target_port" MUST represent an integer between 1 and 65535 inclusive.
target_host = IPv6address / IPv4address / reg-name
target_port = port
Figure 2: URI Template Variable Format
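The percent-encoding rule for IPv6 literals can be sketched with Python's standard library (non-normative; passing safe="" forces colons and any other reserved characters to be encoded, while DNS names and IPv4 literals pass through unchanged):

```python
from urllib.parse import quote

# Non-normative sketch: colons in an IPv6 literal "target_host" MUST be
# percent-encoded before URI Template expansion.

def encode_target_host(host: str) -> str:
    # safe="" means no reserved characters (including ":") are left bare.
    return quote(host, safe="")

print(encode_target_host("2001:db8::42"))  # → 2001%3Adb8%3A%3A42
print(encode_target_host("192.0.2.6"))     # → 192.0.2.6
print(encode_target_host("example.com"))   # → example.com
```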
When sending its UDP proxying request, the client SHALL perform URI Template expansion to determine the path and query of its request.
If the request is successful, the UDP proxy commits to converting received HTTP Datagrams into UDP packets, and vice versa, until the tunnel is closed.
By virtue of the definition of the Capsule Protocol (see Section 3.2 of [HTTP-DGRAM]), UDP proxying requests do not carry any message content. Similarly, successful UDP proxying responses also do not carry any message content.
3.1. UDP Proxy Handling
Upon receiving a UDP proxying request:
- if the recipient is configured to use another HTTP proxy, it will act as an intermediary by forwarding the request to another HTTP server. Note that such intermediaries may need to re-encode the request if they forward it using a version of HTTP that is different from the one used to receive it, as the request encoding differs by version (see below).
- otherwise, the recipient will act as a UDP proxy. It extracts the "target_host" and "target_port" variables from the URI it has reconstructed from the request headers, decodes their percent- encoding, and establishes a tunnel by directly opening a UDP socket to the requested target.
Unlike TCP, UDP is connectionless. The UDP proxy that opens the UDP socket has no way of knowing whether the destination is reachable. Therefore, it needs to respond to the request without waiting for a packet from the target. However, if the "target_host" is a DNS name, the UDP proxy MUST perform DNS resolution before replying to the HTTP request. If errors occur during this process, the UDP proxy MUST reject the request and SHOULD send details using an appropriate Proxy-Status header field [PROXY-STATUS]. For example, if DNS resolution returns an error, the proxy can use the dns_error Proxy Error Type from Section 2.3.2 of [PROXY-STATUS].
UDP proxies can use connected UDP sockets if their operating system supports them, as that allows the UDP proxy to rely on the kernel to only send it UDP packets that match the correct 5-tuple. If the UDP proxy uses a non-connected socket, it MUST validate the IP source address and UDP source port on received packets to ensure they match the client's request. Packets that do not match MUST be discarded by the UDP proxy.
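A minimal sketch of the two socket strategies (non-normative): with connect(), the kernel only delivers packets matching the target's address and port; with a non-connected socket, the proxy must perform that check itself and silently discard mismatches.

```python
import socket

# Non-normative sketch of a UDP proxy's target socket handling.

def open_connected_target_socket(target_ip: str, target_port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.connect((target_ip, target_port))  # kernel filters on the 5-tuple
    return sock

def recv_validated(sock, target_addr):
    """Receive on a non-connected socket; return None for discarded packets."""
    payload, src_addr = sock.recvfrom(65535)
    if src_addr != target_addr:
        return None  # source does not match the client's request: discard
    return payload
```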
The lifetime of the socket is tied to the request stream. The UDP proxy MUST keep the socket open while the request stream is open. If a UDP proxy is notified by its operating system that its socket is no longer usable, it MUST close the request stream. For example, this can happen when an ICMP Destination Unreachable message is received; see Section 3.1 of [ICMP6]. UDP proxies MAY choose to close sockets due to a period of inactivity, but they MUST close the request stream when closing the socket. UDP proxies that close sockets after a period of inactivity SHOULD NOT use a period lower than two minutes; see Section 4.3 of [BEHAVE].
A successful response (as defined in Sections 3.3 and 3.5) indicates that the UDP proxy has opened a socket to the requested target and is willing to proxy UDP payloads. Any response other than a successful response indicates that the request has failed; thus, the client MUST abort the request.
UDP proxies MUST NOT introduce fragmentation at the IP layer when forwarding HTTP Datagrams onto a UDP socket; overly large datagrams are silently dropped. In IPv4, the Don't Fragment (DF) bit MUST be set, if possible, to prevent fragmentation on the path. Future extensions MAY remove these requirements.
Implementers of UDP proxies will benefit from reading the guidance in [UDP-USAGE].
3.2. HTTP/1.1 Request
When using HTTP/1.1 [HTTP/1.1], a UDP proxying request will meet the following requirements:
- the method SHALL be "GET".
- the request SHALL include a single Host header field containing the origin of the UDP proxy.
- the request SHALL include a Connection header field with value "Upgrade" (note that this requirement is case-insensitive as per Section 7.6.1 of [HTTP]).
- the request SHALL include an Upgrade header field with value "connect-udp".
A UDP proxying request that does not conform to these restrictions is malformed. The recipient of such a malformed request MUST respond with an error and SHOULD use the 400 (Bad Request) status code.
For example, if the client is configured with URI Template " udp/{target_host}/{target_port}/" and wishes to open a UDP proxying tunnel to target 192.0.2.6:443, it could send the following request:
GET HTTP/1.1 Host: example.org Connection: Upgrade Upgrade: connect-udp Capsule-Protocol: ?1
Figure 3: Example HTTP/1.1 Request
In HTTP/1.1, this protocol uses the GET method to mimic the design of the WebSocket Protocol [WEBSOCKET].
3.3. HTTP/1.1 Response
The UDP proxy SHALL indicate a successful response by replying with the following requirements:
- the HTTP status code on the response SHALL be 101 (Switching Protocols).
- the response SHALL include a Connection header field with value "Upgrade" (note that this requirement is case-insensitive as per Section 7.6.1 of [HTTP]).
- the response SHALL include a single Upgrade header field with value "connect-udp".
- the response SHALL meet the requirements of HTTP responses that start the Capsule Protocol; see Section 3.2 of [HTTP-DGRAM].
If any of these requirements are not met, the client MUST treat this proxying attempt as failed and abort the connection.
For example, the UDP proxy could respond with:
HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: connect-udp
Capsule-Protocol: ?1
Figure 4: Example HTTP/1.1 Response
3.4. HTTP/2 and HTTP/3 Requests
When using HTTP/2 [HTTP/2] or HTTP/3 [HTTP/3], UDP proxying requests use HTTP Extended CONNECT. This requires that servers send an HTTP Setting as specified in [EXT-CONNECT2] and [EXT-CONNECT3] and that requests use HTTP pseudo-header fields with the following requirements:
- The :method pseudo-header field SHALL be "CONNECT".
- The :protocol pseudo-header field SHALL be "connect-udp".
- The :authority pseudo-header field SHALL contain the authority of the UDP proxy.
- The :path and :scheme pseudo-header fields SHALL NOT be empty. Their values SHALL contain the scheme and path from the URI Template after the URI Template expansion process has been completed.
A UDP proxying request that does not conform to these restrictions is malformed (see Section 8.1.1 of [HTTP/2] and Section 4.1.2 of [HTTP/3]).
For example, if the client is configured with URI Template " udp/{target_host}/{target_port}/" and wishes to open a UDP proxying tunnel to target 192.0.2.6:443, it could send the following request:
HEADERS
:method = CONNECT
:protocol = connect-udp
:scheme = https
:path = /.well-known/masque/udp/192.0.2.6/443/
:authority = example.org
capsule-protocol = ?1
Figure 5: Example HTTP/2 Request
3.5. HTTP/2 and HTTP/3 Responses
The UDP proxy SHALL indicate a successful response by replying with the following requirements:
- the HTTP status code on the response SHALL be in the 2xx (Successful) range.
- the response SHALL meet the requirements of HTTP responses that start the Capsule Protocol; see Section 3.2 of [HTTP-DGRAM].
If any of these requirements are not met, the client MUST treat this proxying attempt as failed and abort the request.
For example, the UDP proxy could respond with:
HEADERS
:status = 200
capsule-protocol = ?1
Figure 6: Example HTTP/2 Response
4. Context Identifiers
The mechanism for proxying UDP in HTTP defined in this document allows future extensions to exchange HTTP Datagrams that carry different semantics from UDP payloads. Some of these extensions can augment UDP payloads with additional data, while others can exchange data that is completely separate from UDP payloads. In order to accomplish this, all HTTP Datagrams associated with UDP Proxying request streams start with a Context ID field; see Section 5.
Context IDs are 62-bit integers (0 to 2^62-1). Context IDs are encoded as variable-length integers; see Section 16 of [QUIC]. The Context ID value of 0 is reserved for UDP payloads, while non-zero values are dynamically allocated. Non-zero even-numbered Context IDs are client-allocated, and odd-numbered Context IDs are proxy-allocated. The Context ID namespace is tied to a given HTTP request; it is possible for a Context ID with the same numeric value to be simultaneously allocated in distinct requests, potentially with different semantics. Context IDs MUST NOT be re-allocated within a given HTTP request but MAY be allocated in any order. The Context ID allocation restrictions to the use of even-numbered and odd-numbered Context IDs exist in order to avoid the need for synchronization between endpoints. However, once a Context ID has been allocated, those restrictions do not apply to the use of the Context ID; it can be used by any client or UDP proxy, independent of which endpoint initially allocated it.
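The allocation rules above can be sketched as follows (non-normative; the class name and structure are illustrative):

```python
# Non-normative sketch of per-request Context ID allocation: 0 is reserved
# for UDP payloads, clients allocate non-zero even values, proxies allocate
# odd values, and IDs are never re-allocated within the same request.

class ContextIdAllocator:
    MAX_ID = 2**62 - 1  # Context IDs are 62-bit integers

    def __init__(self, is_client: bool):
        # Client starts at 2 (0 is reserved); proxy starts at 1.
        self._next = 2 if is_client else 1

    def allocate(self) -> int:
        if self._next > self.MAX_ID:
            raise RuntimeError("Context ID space exhausted")
        cid = self._next
        self._next += 2  # keep client/proxy parity
        return cid

client_ids = ContextIdAllocator(is_client=True)
proxy_ids = ContextIdAllocator(is_client=False)
print(client_ids.allocate(), client_ids.allocate())  # → 2 4
print(proxy_ids.allocate(), proxy_ids.allocate())    # → 1 3
```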
Registration is the action by which an endpoint informs its peer of the semantics and format of a given Context ID. This document does not define how registration occurs. Future extensions MAY use HTTP header fields or capsules to register Context IDs. Depending on the method being used, it is possible for datagrams to be received with Context IDs that have not yet been registered. For instance, this can be due to reordering of the packet containing the datagram and the packet containing the registration message during transmission.
5. HTTP Datagram Payload Format
When HTTP Datagrams (see Section 2 of [HTTP-DGRAM]) are associated with UDP Proxying request streams, the HTTP Datagram Payload field has the format defined in Figure 7, using notation from Section 1.3 of [QUIC]. Note that when HTTP Datagrams are encoded using QUIC DATAGRAM frames [QUIC-DGRAM], the Context ID field defined below directly follows the Quarter Stream ID field, which is at the start of the QUIC DATAGRAM frame payload; see Section 2.1 of [HTTP-DGRAM].
UDP Proxying HTTP Datagram Payload {
  Context ID (i),
  UDP Proxying Payload (..),
}
Figure 7: UDP Proxying HTTP Datagram Format
Context ID: A variable-length integer (see Section 16 of [QUIC]) that contains the value of the Context ID. If an HTTP/3 Datagram that carries an unknown Context ID is received, the receiver SHALL either drop that datagram silently or buffer it temporarily (on the order of a round trip) while awaiting the registration of the corresponding Context ID.

UDP Proxying Payload: The payload of the datagram, whose semantics depend on the value of the previous field. Note that this field can be empty.
UDP packets are encoded using HTTP Datagrams with the Context ID field set to zero. When the Context ID field is set to zero, the UDP Proxying Payload field contains the unmodified payload of a UDP packet (referred to as data octets in [UDP]).
By virtue of the definition of the UDP header [UDP], it is not possible to encode UDP payloads longer than 65527 bytes. Therefore, endpoints MUST NOT send HTTP Datagrams with a UDP Proxying Payload field longer than 65527 using Context ID zero. An endpoint that receives an HTTP Datagram using Context ID zero whose UDP Proxying Payload field is longer than 65527 MUST abort the corresponding stream. If a UDP proxy knows it can only send out UDP packets of a certain length due to its underlying link MTU, it has no choice but to discard incoming HTTP Datagrams using Context ID zero whose UDP Proxying Payload field is longer than that limit. If the discarded HTTP Datagram was transported by a DATAGRAM capsule, the receiver SHOULD discard that capsule without buffering the capsule contents.
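The payload format and length limit above can be sketched in Python (non-normative; the variable-length integer follows Section 16 of [QUIC], and the helper names are illustrative):

```python
# Non-normative sketch: encoding and decoding the HTTP Datagram payload
# for UDP proxying. Context ID 0 carries a raw UDP payload of at most
# 65527 bytes.

def encode_varint(v: int) -> bytes:
    # QUIC variable-length integer: 2-bit length prefix in the first byte.
    if v < 2**6:
        return v.to_bytes(1, "big")
    if v < 2**14:
        return (v | (0x1 << 14)).to_bytes(2, "big")
    if v < 2**30:
        return (v | (0x2 << 30)).to_bytes(4, "big")
    if v < 2**62:
        return (v | (0x3 << 62)).to_bytes(8, "big")
    raise ValueError("varint out of range")

def decode_varint(buf: bytes):
    """Return (value, number of bytes consumed)."""
    length = 1 << (buf[0] >> 6)
    value = int.from_bytes(buf[:length], "big") & ((1 << (8 * length - 2)) - 1)
    return value, length

def encode_udp_datagram(udp_payload: bytes) -> bytes:
    if len(udp_payload) > 65527:
        raise ValueError("UDP payloads longer than 65527 bytes cannot be encoded")
    return encode_varint(0) + udp_payload  # Context ID 0 = raw UDP payload

def decode_datagram(payload: bytes):
    context_id, consumed = decode_varint(payload)
    return context_id, payload[consumed:]

dgram = encode_udp_datagram(b"example UDP payload")
print(decode_datagram(dgram))  # → (0, b'example UDP payload')
```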
If a UDP proxy receives an HTTP Datagram before it has received the corresponding request, it SHALL either drop that HTTP Datagram silently or buffer it temporarily (on the order of a round trip) while awaiting the corresponding request.
Note that buffering datagrams (either because the request was not yet received or because the Context ID is not yet known) consumes resources. Receivers that buffer datagrams SHOULD apply buffering limits in order to reduce the risk of resource exhaustion occurring. For example, receivers can limit the total number of buffered datagrams or the cumulative size of buffered datagrams on a per- stream, per-context, or per-connection basis.
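One way to apply such limits can be sketched as follows (non-normative; the limits of 16 datagrams and 64 KiB are arbitrary example values, not taken from this specification):

```python
from collections import deque

# Non-normative sketch: bounding buffered datagrams (e.g., datagrams that
# arrive before their Context ID is registered) to limit resource use.

class DatagramBuffer:
    def __init__(self, max_count: int = 16, max_bytes: int = 65536):
        self._items = deque()
        self._bytes = 0
        self._max_count = max_count
        self._max_bytes = max_bytes

    def offer(self, dgram: bytes) -> bool:
        """Buffer the datagram; return False if it must be dropped."""
        if (len(self._items) >= self._max_count
                or self._bytes + len(dgram) > self._max_bytes):
            return False
        self._items.append(dgram)
        self._bytes += len(dgram)
        return True

    def drain(self):
        """Yield buffered datagrams once the context becomes known."""
        while self._items:
            dgram = self._items.popleft()
            self._bytes -= len(dgram)
            yield dgram
```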
A client MAY optimistically start sending UDP packets in HTTP Datagrams before receiving the response to its UDP proxying request. However, implementers should note that such proxied packets may not be processed by the UDP proxy if it responds to the request with a failure or if the proxied packets are received by the UDP proxy before the request and the UDP proxy chooses to not buffer them.
6. Performance Considerations
Bursty traffic can often lead to temporally correlated packet losses; in turn, this can lead to suboptimal responses from congestion controllers in protocols running over UDP. To avoid this, UDP proxies SHOULD strive to avoid increasing burstiness of UDP traffic; they SHOULD NOT queue packets in order to increase batching.

When the protocol running over UDP that is being proxied uses congestion control (e.g., [QUIC]), the proxied traffic will incur at least two nested congestion controllers. The underlying HTTP connection MUST NOT disable congestion control unless it has an out-of-band way of knowing with absolute certainty that the inner traffic is congestion-controlled.

If a client or UDP proxy with a connection containing a UDP Proxying request stream disables congestion control, it MUST NOT signal Explicit Congestion Notification (ECN) [ECN] support on that connection. That is, it MUST mark all IP headers with the Not-ECT codepoint. It MAY continue to report ECN feedback via QUIC ACK_ECN frames or the TCP ECE bit, as the peer may not have disabled congestion control.

When the protocol running over UDP that is being proxied uses loss recovery (e.g., [QUIC]), and the underlying HTTP connection runs over TCP, the proxied traffic will incur at least two nested loss recovery mechanisms. This can cause suboptimal performance as both can sometimes independently retransmit the same data. To avoid this, UDP proxying SHOULD be performed over HTTP/3 to allow leveraging the QUIC DATAGRAM frame.
6.1. MTU Considerations
When using HTTP/3 with the QUIC Datagram extension [QUIC-DGRAM], UDP payloads are transmitted in QUIC DATAGRAM frames. Since those cannot be fragmented, they can only carry payloads up to a given length determined by the QUIC connection configuration and the Path MTU (PMTU). If a UDP proxy is using QUIC DATAGRAM frames and it receives a UDP payload from the target that will not fit inside a QUIC DATAGRAM frame, the UDP proxy SHOULD NOT send the UDP payload in a DATAGRAM capsule, as that defeats the end-to-end unreliability characteristic that methods such as Datagram Packetization Layer PMTU Discovery (DPLPMTUD) depend on [DPLPMTUD]. In this scenario, the UDP proxy SHOULD drop the UDP payload and send an ICMP Packet Too Big message to the target; see Section 3.2 of [ICMP6].
6.2. Tunneling of ECN Marks
UDP proxying does not create an IP-in-IP tunnel, so the guidance in [ECN-TUNNEL] about transferring ECN marks between inner and outer IP headers does not apply. There is no inner IP header in UDP proxying tunnels.
In this specification, note that UDP proxying clients do not have the ability to control the ECN codepoints on UDP packets the UDP proxy sends to the target, nor can UDP proxies communicate the markings of each UDP packet from target to UDP proxy.
A UDP proxy MUST ignore ECN bits in the IP header of UDP packets received from the target, and it MUST set the ECN bits to Not-ECT on UDP packets it sends to the target. These do not relate to the ECN markings of packets sent between client and UDP proxy in any way.
7. Security Considerations
There are significant risks in allowing arbitrary clients to establish a tunnel to arbitrary targets, as that could allow bad actors to send traffic and have it attributed to the UDP proxy. HTTP servers that support UDP proxying ought to restrict its use to authenticated users.
There exist software and network deployments that perform access control checks based on the source IP address of incoming requests. For example, some software allows unauthenticated configuration changes if they originated from 127.0.0.1. Such software could be running on the same host as the UDP proxy or in the same broadcast domain. Proxied UDP traffic would then be received with a source IP address belonging to the UDP proxy. If this source address is used for access control, UDP proxying clients could use the UDP proxy to escalate their access privileges beyond those they might otherwise have. This could lead to unauthorized access by UDP proxying clients unless the UDP proxy disallows UDP proxying requests to vulnerable targets, such as the UDP proxy's own addresses and localhost, link-local, multicast, and broadcast addresses. UDP proxies can use the destination_ip_prohibited Proxy Error Type from Section 2.3.5 of [PROXY-STATUS] when rejecting such requests.
UDP proxies share many similarities with TCP CONNECT proxies when considering them as infrastructure for abuse to enable denial-of-service (DoS) attacks. Both can obfuscate the attacker's source address from the attack target. In the case of a stateless volumetric attack (e.g., a TCP SYN flood or a UDP flood), both types of proxies pass the traffic to the target host. With stateful volumetric attacks (e.g., HTTP flooding) being sent over a TCP CONNECT proxy, the proxy will only send data if the target has indicated its willingness to accept data by responding with a TCP SYN-ACK. Once the path to the target is flooded, the TCP CONNECT proxy will no longer receive replies from the target and will stop sending data. Since UDP does not establish shared state between the UDP proxy and the target, the UDP proxy could continue sending data to the target in such a situation. While a UDP proxy could potentially limit the number of UDP packets it is willing to forward until it has observed a response from the target, that provides only limited protection against DoS attacks that target open UDP ports, where the protocol running over UDP would respond, as that would be interpreted as willingness to accept UDP by the UDP proxy. Such a packet limit could also cause issues for valid traffic.
The security considerations described in Section 4 of [HTTP-DGRAM] also apply here. Since it is possible to tunnel IP packets over UDP, the guidance in [TUNNEL-SECURITY] can apply.
8. IANA Considerations
8.1. HTTP Upgrade Token
IANA has registered "connect-udp" in the "HTTP Upgrade Tokens" registry maintained at <>.
Value: connect-udp
Description: Proxying of UDP Payloads
Expected Version Tokens: None
Reference: RFC 9298
8.2. Well-Known URI
IANA has registered "masque" in the "Well-Known URIs" registry maintained at <>.
URI Suffix: masque
Change Controller: IETF
Reference: RFC 9298
Status: permanent
Related Information: Includes all resources identified with the path prefix "/.well-known/masque/udp/"
9. References
9.1. Normative References
[ABNF] Crocker, D., Ed. and P. Overell, "Augmented BNF for Syntax Specifications: ABNF", STD 68, RFC 5234, DOI 10.17487/RFC5234, January 2008, <>.
[DPLPMTUD] Fairhurst, G., Jones, T., Tüxen, M., Rüngeler, I., and T. Völker, "Packetization Layer Path MTU Discovery for Datagram Transports", RFC 8899, DOI 10.17487/RFC8899, September 2020, <>.
[ECN] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP", RFC 3168, DOI 10.17487/RFC3168, September 2001, <>.
[HTTP] Fielding, R., Ed., Nottingham, M., Ed., and J. Reschke, Ed., "HTTP Semantics", STD 97, RFC 9110, DOI 10.17487/RFC9110, June 2022, <>.
[HTTP-DGRAM] Schinazi, D. and L. Pardue, "HTTP Datagrams and the Capsule Protocol", RFC 9297, DOI 10.17487/RFC9297, August 2022, <>.
[PROXY-STATUS] Nottingham, M. and P. Sikora, "The Proxy-Status HTTP Response Header Field", RFC 9209, DOI 10.17487/RFC9209, June 2022, <>.
[QUIC-DGRAM] Pauly, T., Kinnear, E., and D. Schinazi, "An Unreliable Datagram Extension to QUIC", RFC 9221, DOI 10.17487/RFC9221, March 2022, <>.
[TCP] Eddy, W., Ed., "Transmission Control Protocol (TCP)", STD 7, RFC 9293, DOI 10.17487/RFC9293, August 2022, <>.
[TEMPLATE] Gregorio, J., Fielding, R., Hadley, M., Nottingham, M., and D. Orchard, "URI Template", RFC 6570, DOI 10.17487/RFC6570, March 2012, <>.
[UDP] Postel, J., "User Datagram Protocol", STD 6, RFC 768, DOI 10.17487/RFC0768, August 1980, <>.
[URI] Berners-Lee, T., Fielding, R., and L. Masinter, "Uniform Resource Identifier (URI): Generic Syntax", STD 66, RFC 3986, DOI 10.17487/RFC3986, January 2005, <>.
9.2. Informative References
[BEHAVE] Audet, F., Ed. and C. Jennings, "Network Address Translation (NAT) Behavioral Requirements for Unicast UDP", BCP 127, RFC 4787, DOI 10.17487/RFC4787, January 2007, <>.
[ECN-TUNNEL] Briscoe, B., "Tunnelling of Explicit Congestion Notification", RFC 6040, DOI 10.17487/RFC6040, November 2010, <>.
[HELIUM] Schwartz, B. M., "Hybrid Encapsulation Layer for IP and UDP Messages (HELIUM)", Work in Progress, Internet-Draft, draft-schwartz-httpbis-helium-00, 25 June 2018, <>.
[HiNT] Pardue, L., "HTTP-initiated Network Tunnelling (HiNT)", Work in Progress, Internet-Draft, draft-pardue-httpbis-http-network-tunnelling-00, 2 July 2018, <>.
[ICMP6] Conta, A., Deering, S., and M. Gupta, Ed., "Internet Control Message Protocol (ICMPv6) for the Internet Protocol Version 6 (IPv6) Specification", STD 89, RFC 4443, DOI 10.17487/RFC4443, March 2006, <>.
[MASQUE-ORIGINAL] Schinazi, D., "The MASQUE Protocol", Work in Progress, Internet-Draft, draft-schinazi-masque-00, 28 February 2019, <>.
[TUNNEL-SECURITY] Krishnan, S., Thaler, D., and J. Hoagland, "Security Concerns with IP Tunneling", RFC 6169, DOI 10.17487/RFC6169, April 2011, <>.
[UDP-USAGE] Eggert, L., Fairhurst, G., and G. Shepherd, "UDP Usage Guidelines", BCP 145, RFC 8085, DOI 10.17487/RFC8085, March 2017, <>.
[WEBSOCKET] Fette, I. and A. Melnikov, "The WebSocket Protocol", RFC 6455, DOI 10.17487/RFC6455, December 2011, <>.
Acknowledgments
This document is a product of the MASQUE Working Group, and the author thanks all MASQUE enthusiasts for their contributions. This proposal was inspired directly or indirectly by prior work from many people, in particular [HELIUM] by Ben Schwartz, [HiNT] by Lucas Pardue, and the original MASQUE Protocol [MASQUE-ORIGINAL] by the author of this document.
The author would like to thank Eric Rescorla for suggesting the use of an HTTP method to proxy UDP. The author is indebted to Mark Nottingham and Lucas Pardue for the many improvements they contributed to this document. The extensibility design in this document came out of the HTTP Datagrams Design Team, whose members were Alan Frindell, Alex Chernyakhovsky, Ben Schwartz, Eric Rescorla, Lucas Pardue, Marcus Ihlar, Martin Thomson, Mike Bishop, Tommy Pauly, Victor Vasiliev, and the author of this document.
Author's Address
David Schinazi
Google LLC
1600 Amphitheatre Parkway
Mountain View, CA 94043
United States of America | http://pike.lysator.liu.se/docs/ietf/rfc/92/rfc9298.xml | CC-MAIN-2022-40 | refinedweb | 4,687 | 55.74 |
Let's start with some definitions:
Experiment: An occurrence with an uncertain outcome that we can observe. For example, rolling a die.
Outcome: The result of an experiment; one particular state of the world. For example:
4.
Sample Space: The set of all possible outcomes for the experiment. For example,
{1, 2, 3, 4, 5, 6}.
Event: A subset of possible outcomes that together have some property we are interested in. For example, the event "even die roll" is the set of outcomes
{2, 4, 6}.
Probability: As Laplace put it, the number of favorable cases divided by the number of all the cases possible, when nothing leads us to expect that any one of these cases should occur more than any other.
This notebook will develop all these concepts; I also have a second part that covers paradoxes in Probability Theory.
from fractions import Fraction def P(event, space): "The probability of an event, given a sample space of equiprobable outcomes." return Fraction(len(event & space), len(space))
What's the probability of rolling an even number with a single six-sided fair die?
We can define the sample space
D and the event
even, and compute the probability:
D = {1, 2, 3, 4, 5, 6} even = { 2, 4, 6} P(even, D)
Fraction(1, 2)
It is good to confirm what we already knew.
You may ask: Why does the definition of
P use
len(event & space) rather than
len(event)? Because I don't want to count outcomes that were specified in
event but aren't actually in the sample space. Consider:
even = {2, 4, 6, 8, 10, 12} P(even, D)
Fraction(1, 2)
Here,
len(event) and
len(space) are both 6, so if we just divided, then
P would be 1, which is not right.
The favorable cases are the intersection of the event and the space, which in Python is
(event & space).
Also note that I use
Fraction rather than regular division because I want exact answers like 1/3, not 0.3333333333333333.
Urn Problems¶
Around 1700, Jacob Bernoulli wrote about removing colored balls from an urn in his landmark treatise Ars Conjectandi, and ever since then, explanations of probability have relied on urn problems. (You'd think the urns would be empty by now.)
For example, here is a three-part problem adapted from mathforum.org:
An urn contains 23 balls: 8 white, 6 blue, and 9 red. We select six balls at random (each possible selection is equally likely). What is the probability of each of these possible outcomes:
- all balls are red
- 3 are blue, 2 are white, and 1 is red
- exactly 4 balls are white
So, an outcome is a set of 6 balls, and the sample space is the set of all possible 6 ball combinations. We'll solve each of the 3 parts using our
P function, and also using basic arithmetic; that is, counting. Counting is a bit tricky because:
- We have multiple balls of the same color.
- An outcome is a set of balls, where order doesn't matter, not a sequence, where order matters.
To account for the first issue, I'll have 8 different white balls labelled
'W1' through
'W8', rather than having eight balls all labelled
'W'. That makes it clear that selecting
'W1' is different from selecting
'W2'.
The second issue is handled automatically by the
P function, but if I want to do calculations by hand, I will sometimes first count the number of permutations of balls, then get the number of combinations by dividing the number of permutations by c!, where c is the number of balls in a combination. For example, if I want to choose 2 white balls from the 8 available, there are 8 ways to choose a first white ball and 7 ways to choose a second, and therefore 8 × 7 = 56 permutations of two white balls. But there are only 56 / 2 = 28 combinations, because
(W1, W2) is the same combination as
(W2, W1).
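The 56-permutation / 28-combination count is easy to verify with itertools; this check is my addition, not part of the original development:

```python
import itertools

white = ['W1', 'W2', 'W3', 'W4', 'W5', 'W6', 'W7', 'W8']
perms = list(itertools.permutations(white, 2))   # ordered pairs
pairs = list(itertools.combinations(white, 2))   # unordered pairs
assert len(perms) == 8 * 7 == 56
assert len(pairs) == 56 // 2 == 28
```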
We'll start by defining the contents of the urn:
def cross(A, B): "The set of ways of concatenating one item from collection A with one from B." return {a + b for a in A for b in B} urn = cross('W', '12345678') | cross('B', '123456') | cross('R', '123456789') urn
{'B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'R1', 'R2', 'R3', 'R4', 'R5', 'R6', 'R7', 'R8', 'R9', 'W1', 'W2', 'W3', 'W4', 'W5', 'W6', 'W7', 'W8'}
len(urn)
23
Now we can define the sample space,
U6, as the set of all 6-ball combinations. We use
itertools.combinations to generate the combinations, and then join each combination into a string:
import itertools def combos(items, n): "All combinations of n items; each combo as a concatenated str." return {' '.join(combo) for combo in itertools.combinations(items, n)} U6 = combos(urn, 6) len(U6)
100947
I don't want to print all 100,947 members of the sample space; let's just peek at a random sample of them:
import random random.sample(U6, 10)
['R7 W3 B1 R4 B3 W2', 'R7 R3 B1 W4 R5 W6', 'B5 B4 B6 W1 R3 R2', 'W3 B1 R4 W8 R9 W5', 'W7 B6 W1 R3 R9 B3', 'W3 W1 R8 R2 R1 W5', 'B5 R7 R4 W8 R2 W6', 'W7 W1 R3 W4 R5 B3', 'B4 R7 B2 B1 W6 W2', 'W7 B1 W4 W8 W6 W2']
Is 100,947 really the right number of ways of choosing 6 out of 23 items, or "23 choose 6", as mathematicians call it? Well, we can choose any of 23 for the first item, any of 22 for the second, and so on down to 18 for the sixth. But we don't care about the ordering of the six items, so we divide the product by 6! (the number of permutations of 6 things) giving us:
$$23 ~\mbox{choose}~ 6 = \frac{23 \cdot 22 \cdot 21 \cdot 20 \cdot 19 \cdot 18}{6!} = 100947$$
Note that $23 \cdot 22 \cdot 21 \cdot 20 \cdot 19 \cdot 18 = 23! \;/\; 17!$, so, generalizing, we can write:
$$n ~\mbox{choose}~ c = \frac{n!}{(n - c)! \cdot c!}$$
And we can translate that to code and verify that 23 choose 6 is 100,947:
from math import factorial def choose(n, c): "Number of ways to choose c items from a list of n items." return factorial(n) // (factorial(n - c) * factorial(c))
choose(23, 6)
100947
Urn problem 1: what's the probability of selecting 6 red balls?
red6 = {s for s in U6 if s.count('R') == 6} P(red6, U6)
Fraction(4, 4807)
Let's investigate a bit more. How many ways of getting 6 red balls are there?
len(red6)
84
Why are there 84 ways? Because there are 9 red balls in the urn, and we are asking how many ways we can choose 6 of them:
choose(9, 6)
84
So the probabilty of 6 red balls is then just 9 choose 6 divided by the size of the sample space:
P(red6, U6) == Fraction(choose(9, 6), len(U6))
True
Urn problem 2: what's the probability of selecting 3 blue, 2 white, and 1 red ball?
b3w2r1 = {s for s in U6 if s.count('B') == 3 and s.count('W') == 2 and s.count('R') == 1} P(b3w2r1, U6)
Fraction(240, 4807)
We can get the same answer by counting how many ways we can choose 3 out of 6 blues, 2 out of 8 whites, and 1 out of 9 reds, and dividing by the number of possible selections:
P(b3w2r1, U6) == Fraction(choose(6, 3) * choose(8, 2) * choose(9, 1), len(U6))
True
Here we don't need to divide by any factorials, because
choose has already accounted for that.
We can get the same answer by figuring: "there are 6 ways to pick the first blue, 5 ways to pick the second blue, and 4 ways to pick the third; then 8 ways to pick the first white and 7 to pick the second; then 9 ways to pick a red. But the order
'B1, B2, B3' should count as the same as
'B2, B3, B1' and all the other orderings; so divide by 3! to account for the permutations of blues, by 2! to account for the permutations of whites, and by 100947 to get a probability:
P(b3w2r1, U6) == Fraction((6 * 5 * 4) * (8 * 7) * 9, factorial(3) * factorial(2) * len(U6))
True
Urn problem 3: what's the probability of selecting exactly 4 white balls?
w4 = {s for s in U6 if s.count('W') == 4} P(w4, U6)
Fraction(350, 4807)
P(w4, U6) == Fraction(choose(8, 4) * choose(15, 2), len(U6))
True
P(w4, U6) == Fraction((8 * 7 * 6 * 5) * (15 * 14), factorial(4) * factorial(2) * len(U6))
True
P, with more general events¶
To calculate the probability of an even die roll, I originally said
even = {2, 4, 6}
But that's inelegant—I had to explicitly enumerate all the even numbers from one to six. If I ever wanted to deal with a twelve or twenty-sided die, I would have to go back and change
even. I would prefer to define
even once and for all like this:
def even(n): return n % 2 == 0
Now in order to make
P(even, D) work, I'll have to modify
P to accept an event as either
a set of outcomes (as before), or a predicate over outcomes—a function that returns true for an outcome that is in the event:
def P(event, space): """The probability of an event, given a sample space of equiprobable outcomes. event can be either a set of outcomes, or a predicate (true for outcomes in the event).""" if is_predicate(event): event = such_that(event, space) return Fraction(len(event & space), len(space)) is_predicate = callable def such_that(predicate, collection): "The subset of elements in the collection for which the predicate is true." return {e for e in collection if predicate(e)}
Here we see how
such_that, the new
even predicate, and the new
P work:
such_that(even, D)
{2, 4, 6}
P(even, D)
Fraction(1, 2)
D12 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12} such_that(even, D12)
{2, 4, 6, 8, 10, 12}
P(even, D12)
Fraction(1, 2)
Note:
such_that is just like the built-in function
filter, except
such_that returns a set.
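For instance (my addition), the two agree on the selected elements and differ only in the type of the result:

```python
def even(n): return n % 2 == 0

D = {1, 2, 3, 4, 5, 6}
# filter returns a lazy iterator; materializing it gives the same
# elements that such_that returns directly as a set.
assert set(filter(even, D)) == {2, 4, 6}
assert sorted(filter(even, D)) == [2, 4, 6]
```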
We can now define more interesting events using predicates; for example we can determine the probability that the sum of a three-dice roll is prime (using a definition of
is_prime that is efficient enough for small
n):
D3 = {(d1, d2, d3) for d1 in D for d2 in D for d3 in D} def prime_sum(outcome): return is_prime(sum(outcome)) def is_prime(n): return n > 1 and not any(n % i == 0 for i in range(2, n)) P(prime_sum, D3)
Fraction(73, 216)
Card Problems¶
Consider dealing a hand of five playing cards. We start by defining a deck of 52 cards:
suits = 'SHDC' ranks = 'A23456789TJQK' deck = cross(ranks, suits) len(deck)
52
Hands = combos(deck, 5) assert len(Hands) == choose(52, 5) random.sample(Hands, 5)
['AH 6D 5D TS 4H', 'JC AD AH 7S QC', '6C 7S 3H 9C KH', '6D 5C QH TH QS', '6C 3D 5D KH 5S']
Now we can answer questions like the probability of being dealt a flush (5 cards of the same suit):
def flush(hand): return any(hand.count(suit) == 5 for suit in suits) P(flush, Hands)
Fraction(33, 16660)
Or the probability of four of a kind:
def four_kind(hand): return any(hand.count(rank) == 4 for rank in ranks) P(four_kind, Hands)
Fraction(1, 4165)
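Both answers can be cross-checked by direct counting (my addition; it repeats the choose helper so the snippet stands alone): a flush picks one of 4 suits and 5 of its 13 ranks; four of a kind picks one of 13 ranks and then any of the 48 remaining cards as the fifth.

```python
from fractions import Fraction
from math import factorial

def choose(n, c):
    "Number of ways to choose c items from a list of n items."
    return factorial(n) // (factorial(n - c) * factorial(c))

hands = choose(52, 5)                                             # 2,598,960
assert Fraction(4 * choose(13, 5), hands) == Fraction(33, 16660)  # flush
assert Fraction(13 * 48, hands) == Fraction(1, 4165)              # four of a kind
```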
Fermat and Pascal: Gambling, Triangles, and the Birth of Probability¶
Consider a gambling game consisting of tossing a coin. Player H wins the game if 10 heads come up, and T wins if 10 tails come up. If the game is interrupted when H has 8 heads and T has 7 tails, how should the pot of money (which happens to be 100 Francs) be split? In 1654, Blaise Pascal and Pierre de Fermat corresponded on this problem, with Fermat writing:
I think you will agree that all of these outcomes are equally likely. Thus I believe that we should divide the stakes by the ratio 11:5 in my favor, that is, I should receive (11/16)*100 = 68.75 Francs, while you should receive 31.25 Francs.
I hope all is well in Paris,
Your friend and colleague,
Pierre
Pascal agreed with this solution, and replied with a generalization that made use of his previous invention, Pascal's Triangle. There's even a book about it.
We can solve the problem with the tools we have:
def win_unfinished_game(Hneeds, Tneeds): "The probability that H will win the unfinished game, given the number of points needed by H and T to win." def Hwins(outcome): return outcome.count('h') >= Hneeds return P(Hwins, continuations(Hneeds, Tneeds)) def continuations(Hneeds, Tneeds): "All continuations of a game where H needs `Hneeds` points to win and T needs `Tneeds`." rounds = ['ht' for _ in range(Hneeds + Tneeds - 1)] return set(itertools.product(*rounds))
continuations(2, 3)
{('h', 'h', 'h', 'h'), ('h', 'h', 'h', 't'), ('h', 'h', 't', 'h'), ('h', 'h', 't', 't'), ('h', 't', 'h', 'h'), ('h', 't', 'h', 't'), ('h', 't', 't', 'h'), ('h', 't', 't', 't'), ('t', 'h', 'h', 'h'), ('t', 'h', 'h', 't'), ('t', 'h', 't', 'h'), ('t', 'h', 't', 't'), ('t', 't', 'h', 'h'), ('t', 't', 'h', 't'), ('t', 't', 't', 'h'), ('t', 't', 't', 't')}
win_unfinished_game(2, 3)
Fraction(11, 16)
Our answer agrees with Pascal and Fermat; we're in good company!
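The same 11/16 drops out of a pure count (my addition): at most 2 + 3 - 1 = 4 more flips decide the game, and H wins exactly those continuations with at least 2 heads.

```python
from fractions import Fraction
from math import comb  # binomial coefficient, Python 3.8+

total = 2 ** 4                                    # 16 equally likely continuations
favorable = sum(comb(4, k) for k in range(2, 5))  # at least 2 heads: 6 + 4 + 1
assert favorable == 11
assert Fraction(favorable, total) == Fraction(11, 16)
```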
Non-Equiprobable Outcomes: Probability Distributions¶
So far, we have made the assumption that every outcome in a sample space is equally likely. In real life, we often get outcomes that are not equiprobable. For example, the probability of a child being a girl is not exactly 1/2, and the probability is slightly different for a second child. An article gives the following counts for two-child families in Denmark, where
GB means a family where the first child is a girl and the second a boy:
GG: 121801 GB: 126840 BG: 127123 BB: 135138
We will introduce three more definitions:
Frequency: a number describing how often an outcome occurs. Can be a count like 121801, or a ratio like 0.515.
Distribution: A mapping from outcome to frequency for each outcome in a sample space.
Probability Distribution: A distribution that has been normalized so that the sum of the frequencies is 1.
We define
ProbDist to take the same kinds of arguments that
dict does: either a mapping or an iterable of
(key, val) pairs, and/or optional keyword arguments.
class ProbDist(dict): "A Probability Distribution; an {outcome: probability} mapping." def __init__(self, mapping=(), **kwargs): self.update(mapping, **kwargs) # Make probabilities sum to 1.0; assert no negative probabilities total = sum(self.values()) for outcome in self: self[outcome] = self[outcome] / total assert self[outcome] >= 0
We also need to modify the functions
P and
such_that to accept either a sample space or a probability distribution as the second argument.
def P(event, space): """The probability of an event, given a sample space of equiprobable outcomes. event: a collection of outcomes, or a predicate that is true of outcomes in the event. space: a set of outcomes or a probability distribution of {outcome: frequency} pairs.""" if is_predicate(event): event = such_that(event, space) if isinstance(space, ProbDist): return sum(space[o] for o in space if o in event) else: return Fraction(len(event & space), len(space)) def such_that(predicate, space): """The outcomes in the sample pace for which the predicate is true. If space is a set, return a subset {outcome,...}; if space is a ProbDist, return a ProbDist {outcome: frequency,...}; in both cases only with outcomes where predicate(element) is true.""" if isinstance(space, ProbDist): return ProbDist({o:space[o] for o in space if predicate(o)}) else: return {o for o in space if predicate(o)}
Here is the probability distribution for Danish two-child families:
DK = ProbDist(GG=121801, GB=126840, BG=127123, BB=135138)
And here are some predicates that will allow us to answer some questions:
def first_girl(outcome): return outcome[0] == 'G' def first_boy(outcome): return outcome[0] == 'B' def second_girl(outcome): return outcome[1] == 'G' def second_boy(outcome): return outcome[1] == 'B' def two_girls(outcome): return outcome == 'GG'
P(first_girl, DK)
0.4866706335070131
P(second_girl, DK)
0.4872245557856497
The above says that the probability of a girl is somewhere between 48% and 49%, but that it is slightly different between the first or second child.
P(second_girl, such_that(first_girl, DK)), P(second_girl, such_that(first_boy, DK))
(0.4898669165584115, 0.48471942072973107)
P(second_boy, such_that(first_girl, DK)), P(second_boy, such_that(first_boy, DK))
(0.5101330834415885, 0.5152805792702689)
The above says that the sex of the second child is more likely to be the same as the first child, by about 1/2 a percentage point.
More Urn Problems: M&Ms and Bayes¶
Here's another urn problem (or "bag" problem) from prolific Python/Probability author Allen Downey:
The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). Afterward it was (24% Blue, 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown). A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag?
To solve this problem, we'll first represent probability distributions for each bag:
bag94 and
bag96:
bag94 = ProbDist(brown=30, yellow=20, red=20, green=10, orange=10, tan=10) bag96 = ProbDist(blue=24, green=20, orange=16, yellow=14, red=13, brown=13)
Next, define
MM as the joint distribution—the sample space for picking one M&M from each bag. The outcome
'yellow green' means that a yellow M&M was selected from the 1994 bag and a green one from the 1996 bag.
def joint(A, B, sep=''): """The joint distribution of two independent probability distributions. Result is all entries of the form {a+sep+b: P(a)*P(b)}""" return ProbDist({a + sep + b: A[a] * B[b] for a in A for b in B}) MM = joint(bag94, bag96, ' ') MM
{'brown blue': 0.07199999999999997, 'brown brown': 0.038999999999999986, 'brown green': 0.05999999999999997, 'brown orange': 0.04799999999999998, 'brown red': 0.038999999999999986, 'brown yellow': 0.04199999999999998, 'green blue': 0.02399999999999999, 'green brown': 0.012999999999999996, 'green green': 0.019999999999999993, 'green orange': 0.015999999999999993, 'green red': 0.012999999999999996, 'green yellow': 0.013999999999999995, 'orange blue': 0.02399999999999999, 'orange brown': 0.012999999999999996, 'orange green': 0.019999999999999993, 'orange orange': 0.015999999999999993, 'orange red': 0.012999999999999996, 'orange yellow': 0.013999999999999995, 'red blue': 0.04799999999999998, 'red brown': 0.025999999999999992, 'red green': 0.03999999999999999, 'red orange': 0.03199999999999999, 'red red': 0.025999999999999992, 'red yellow': 0.02799999999999999, 'tan blue': 0.02399999999999999, 'tan brown': 0.012999999999999996, 'tan green': 0.019999999999999993, 'tan orange': 0.015999999999999993, 'tan red': 0.012999999999999996, 'tan yellow': 0.013999999999999995, 'yellow blue': 0.04799999999999998, 'yellow brown': 0.025999999999999992, 'yellow green': 0.03999999999999999, 'yellow orange': 0.03199999999999999, 'yellow red': 0.025999999999999992, 'yellow yellow': 0.02799999999999999}
First we'll look at the "One is yellow and one is green" part:
def yellow_and_green(outcome): return 'yellow' in outcome and 'green' in outcome such_that(yellow_and_green, MM)
{'green yellow': 0.25925925925925924, 'yellow green': 0.7407407407407408}
Now we can answer the question: given that we got a yellow and a green (but don't know which comes from which bag), what is the probability that the yellow came from the 1994 bag?
def yellow94(outcome): return outcome.startswith('yellow') P(yellow94, such_that(yellow_and_green, MM))
0.7407407407407408
So there is a 74% chance that the yellow comes from the 1994 bag.
Answering this question was straightforward: just like all the other probability problems, we simply create a sample space, and use
P to pick out the probability of the event in question, given what we know about the outcome.
But in a sense it is curious that we were able to solve this problem with the same methodology as the others: this problem comes from a section titled My favorite Bayes's Theorem Problems, so one would expect that we'd need to invoke Bayes Theorem to solve it. The computation above shows that that is not necessary.
Of course, we could solve it using Bayes Theorem. Why is Bayes Theorem recommended? Because we are asked about the probability of an event given the evidence, which is not immediately available; however the probability of the evidence given the event is.
Before we see the colors of the M&Ms, there are two hypotheses,
A and
B, both with equal probability:
A: first M&M from 94 bag, second from 96 bag B: first M&M from 96 bag, second from 94 bag P(A) = P(B) = 0.5
Then we get some evidence:
E: first M&M yellow, second green
We want to know the probability of hypothesis
A, given the evidence:
P(A | E)
That's not easy to calculate (except by enumerating the sample space). But Bayes Theorem says:
P(A | E) = P(E | A) * P(A) / P(E)
The quantities on the right-hand-side are easier to calculate:
P(E | A) = 0.20 * 0.20 = 0.04 P(E | B) = 0.10 * 0.14 = 0.014 P(A) = 0.5 P(B) = 0.5 P(E) = P(E | A) * P(A) + P(E | B) * P(B) = 0.04 * 0.5 + 0.014 * 0.5 = 0.027
And we can get a final answer:
P(A | E) = P(E | A) * P(A) / P(E) = 0.04 * 0.5 / 0.027 = 0.7407407407
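That arithmetic is easy to mirror in code; this is a check I'm adding, with the mixture percentages taken from the two bags above.

```python
p_e_given_a = 0.20 * 0.20   # yellow from the '94 bag (20%), green from '96 (20%)
p_e_given_b = 0.10 * 0.14   # green from the '94 bag (10%), yellow from '96 (14%)
p_a = p_b = 0.5
p_e = p_e_given_a * p_a + p_e_given_b * p_b
p_a_given_e = p_e_given_a * p_a / p_e
assert abs(p_a_given_e - 20 / 27) < 1e-12   # 0.7407407...
```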
You have a choice: Bayes Theorem allows you to do less calculation at the cost of more algebra; that is a great trade-off if you are working with pencil and paper. Enumerating the state space allows you to do less algebra at the cost of more calculation; often a good trade-off if you have a computer. But regardless of the approach you use, it is important to understand Bayes theorem and how it works.
There is one important question that Allen Downey does not address: would you eat twenty-year-old M&Ms? 😨
Newton's Answer to a Problem by Pepys¶
This paper explains how Samuel Pepys wrote to Isaac Newton in 1693 to pose the problem:
Which of the following three propositions has the greatest chance of success?
- Six fair dice are tossed independently and at least one “6” appears.
- Twelve fair dice are tossed independently and at least two “6”s appear.
- Eighteen fair dice are tossed independently and at least three “6”s appear.
Newton was able to answer the question correctly (although his reasoning was not quite right); let's see how we can do. Since we're only interested in whether a die comes up as "6" or not, we can define a single die and the joint distribution over n dice as follows:
die = ProbDist({'6':1/6, '-':5/6}) def dice(n, die): "Joint probability from tossing n dice." if n == 1: return die else: return joint(die, dice(n - 1, die))
dice(3, die)
{'---': 0.5787037037037037, '--6': 0.11574074074074073, '-6-': 0.11574074074074073, '-66': 0.023148148148148143, '6--': 0.11574074074074073, '6-6': 0.023148148148148143, '66-': 0.023148148148148143, '666': 0.0046296296296296285}
Now we are ready to determine which proposition is more likely to have the required number of sixes:
def at_least(k, result): return lambda s: s.count(result) >= k
P(at_least(1, '6'), dice(6, die))
0.6651020233196161
P(at_least(2, '6'), dice(12, die))
0.6186673737323009
P(at_least(3, '6'), dice(18, die))
0.5973456859478227
We reach the same conclusion Newton did, that the best chance is rolling six dice.
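Since dice(18, die) enumerates 2^18 outcome strings, it's reassuring to double-check with the exact binomial formula; the helper below is my addition.

```python
from fractions import Fraction
from math import comb  # Python 3.8+

def p_at_least(k, n):
    "Exact probability of at least k sixes among n fair dice."
    p, q = Fraction(1, 6), Fraction(5, 6)
    return sum(comb(n, j) * p**j * q**(n - j) for j in range(k, n + 1))

assert abs(float(p_at_least(1, 6)) - 0.6651020233196161) < 1e-12
assert p_at_least(1, 6) > p_at_least(2, 12) > p_at_least(3, 18)
```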
Simulation¶
Sometimes it is inconvenient to explicitly define a sample space. Perhaps the sample space is infinite, or perhaps it is just very large and complicated, and we feel more confident in writing a program to simulate one pass through all the complications, rather than try to enumerate the complete sample space. Random sampling from the simulation can give an accurate estimate of the probability.
Consider problem 84 from the excellent Project Euler, which asks for the probability that a player in the game Monopoly ends a roll on each of the squares on the board. To answer this we need to take into account die rolls, chance and community chest cards, and going to jail (from the "go to jail" space, from a card, or from rolling doubles three times in a row). We do not need to take into account anything about buying or selling properties or exchanging money or winning or losing the game, because these don't change a player's location. We will assume that a player in jail will always pay to get out of jail immediately.
A game of Monopoly can go on forever, so the sample space is infinite. But even if we limit the sample space to say, 1000 rolls, there are $21^{1000}$ such sequences of rolls (and even more possibilities when we consider drawing cards). So it is infeasible to explicitly represent the sample space.
But it is fairly straightforward to implement a simulation and run it for, say, 400,000 rolls (so the average square will be landed on 10,000 times). Here is the code for a simulation:
from collections import Counter, deque
import random

# The board: a list of the names of the 40 squares
# As specified by Project Euler problem 84
board = """GO A1 CC1 A2 T1 R1 B1 CH1 B2 B3
JAIL C1 U1 C2 C3 R2 D1 CC2 D2 D3
FP E1 CH2 E2 E3 R3 F1 F2 U2 F3
G2J G1 G2 CC3 G3 R4 CH3 H1 T2 H2""".split()

def monopoly(steps):
    """Simulate given number of steps of Monopoly game, yielding the number
    of the current square after each step."""
    goto(0) # start at GO
    CC_deck = Deck('GO JAIL' + 14 * ' ?')
    CH_deck = Deck('GO JAIL C1 E3 H2 R1 R R U -3' + 6 * ' ?')
    doubles = 0
    jail = board.index('JAIL')
    for _ in range(steps):
        d1, d2 = random.randint(1, 6), random.randint(1, 6)
        goto(here + d1 + d2)
        doubles = (doubles + 1) if (d1 == d2) else 0
        if doubles == 3 or board[here] == 'G2J':
            goto(jail)
        elif board[here].startswith('CC'):
            do_card(CC_deck)
        elif board[here].startswith('CH'):
            do_card(CH_deck)
        yield here

def goto(square):
    "Update the global variable 'here' to be square."
    global here
    here = square % len(board)

def Deck(names):
    "Make a shuffled deck of cards, given a space-delimited string."
    cards = names.split()
    random.shuffle(cards)
    return deque(cards)

def do_card(deck):
    "Take the top card from deck and do what it says."
    global here
    card = deck[0]   # The top card
    deck.rotate(-1)  # Move top card to bottom of deck
    if card == 'R' or card == 'U':
        while not board[here].startswith(card):
            goto(here + 1)          # Advance to next railroad or utility
    elif card == '-3':
        goto(here - 3)              # Go back 3 spaces
    elif card != '?':
        goto(board.index(card))     # Go to destination named on card
And the results:
results = list(monopoly(400000))
I'll show a histogram of the squares, with a dotted red line at the average:
%matplotlib inline import matplotlib.pyplot as plt plt.hist(results, bins=40) avg = len(results) / 40 plt.plot([0, 39], [avg, avg], 'r--');
Another way to see the results:
ProbDist(Counter(board[i] for i in results))
{'A1': 0.0211675, 'A2': 0.021105, 'B1': 0.022485, 'B2': 0.0233825, 'B3': 0.0231025, 'C1': 0.02732, 'C2': 0.0240925, 'C3': 0.02433, 'CC1': 0.0187875, 'CC2': 0.0258225, 'CC3': 0.02384, 'CH1': 0.00864, 'CH2': 0.010405, 'CH3': 0.008545, 'D1': 0.028305, 'D2': 0.0294775, 'D3': 0.03062, 'E1': 0.0285925, 'E2': 0.0273425, 'E3': 0.0317775, 'F1': 0.027155, 'F2': 0.02686, 'F3': 0.0261775, 'FP': 0.0289025, 'G1': 0.0269675, 'G2': 0.0258, 'G3': 0.0249025, 'GO': 0.0305975, 'H1': 0.0219575, 'H2': 0.0261525, 'JAIL': 0.0621475, 'R1': 0.03009, 'R2': 0.028755, 'R3': 0.030435, 'R4': 0.024545, 'T1': 0.023715, 'T2': 0.02204, 'U1': 0.0257525, 'U2': 0.0279075}
There is one square far above average: JAIL, at a little over 6%. There are four squares far below average: the three chance squares, CH1, CH2, and CH3, at around 1% (because 10 of the 16 chance cards send the player away from the square), and the "Go to Jail" square, square number 30 on the plot, which has a frequency of 0 because you can't end a turn there. The other squares are around 2% to 3% each, which you would expect, because 100% / 40 = 2.5%.
The Central Limit Theorem states that if you have a collection of random variables and sum them up, then the larger the collection, the closer the sum will be to a normal distribution (also called a Gaussian distribution or a bell-shaped curve). The theorem applies in all but a few pathological cases.
As an example, let's take 5 random variables representing the per-game scores of 5 basketball players, and then sum them together to form the team score. Each random variable/player is represented as a function; calling the function returns a single sample from the distribution:
from random import gauss, triangular, choice, vonmisesvariate, uniform

def SC(): return posint(gauss(15.1, 3) + 3 * triangular(1, 4, 13))                      # 30.1
def KT(): return posint(gauss(10.2, 3) + 3 * triangular(1, 3.5, 9))                     # 22.1
def DG(): return posint(vonmisesvariate(30, 2) * 3.08)                                  # 14.0
def HB(): return posint(gauss(6.7, 1.5) if choice((True, False)) else gauss(16.7, 2.5)) # 11.7
def OT(): return posint(triangular(5, 17, 25) + uniform(0, 30) + gauss(6, 3))           # 37.0

def posint(x): "Positive integer"; return max(0, int(round(x)))
And here is a function to sample a random variable k times, show a histogram of the results, and return the mean:
from statistics import mean

def repeated_hist(rv, bins=10, k=100000):
    "Repeat rv() k times and make a histogram of the results."
    samples = [rv() for _ in range(k)]
    plt.hist(samples, bins=bins)
    return mean(samples)
The two top-scoring players have scoring distributions that are slightly skewed from normal:
repeated_hist(SC, bins=range(60))
30.09618
repeated_hist(KT, bins=range(60))
22.1383
The next two players have bi-modal distributions; some games they score a lot, some games not:
repeated_hist(DG, bins=range(60))
14.02429
repeated_hist(HB, bins=range(60))
11.70888
The fifth "player" (actually the sum of all the other players on the team) looks like this:
repeated_hist(OT, bins=range(60))
36.31564
Now we define the team score to be the sum of the five players, and look at the distribution:
def GSW(): return SC() + KT() + DG() + HB() + OT()

repeated_hist(GSW, bins=range(70, 160, 2))
114.31262
Sure enough, this looks very much like a normal distribution. The Central Limit Theorem appears to hold in this case. But I have to say "Central Limit" is not a very evocative name, so I propose we re-name this as the Strength in Numbers Theorem, to indicate the fact that if you have a lot of numbers, you tend to get the expected result.
We've had an interesting tour and met some giants of the field: Laplace, Bernoulli, Fermat, Pascal, Bayes, Newton, ... even Mr. Monopoly and The Count.
The conclusion is: be explicit about what the problem says, and then methodical about defining the sample space, and finally be careful in counting the number of outcomes in the numerator and denominator. Easy as 1-2-3.
Everything up to here has been about discrete, finite sample spaces, where we can enumerate all the possible outcomes.
But I was asked about continuous sample spaces, such as the space of real numbers. The principles are the same.
Oliver Roeder posed this problem in the 538 Riddler blog: two players each press a button and receive a random number uniformly distributed between 0 and 1. Each player may either keep that first number, or press the button again and be obliged to keep the second number. Whoever ends up with the higher number wins the prize: a case of gold bullion.
We'll use this notation: A and B denote the cutoff values chosen by players A and B, and a and b denote the final numbers each player ends up with. For example, if player A chooses a cutoff of A = 0.6, that means that A would accept any first number greater than 0.6, and reject any number below that cutoff. The question is: What cutoff, A, should player A choose to maximize the chance of winning, that is, maximize P(a > b)?
First, simulate the number that a player with a given cutoff gets (note that random.random() returns a float sampled uniformly from the interval [0, 1)):
def number(cutoff):
    "Play the game with given cutoff, returning the first or second random number."
    first = random.random()
    return first if first > cutoff else random.random()
number(.5)
0.643051044503982
Now compare the numbers returned with a cutoff of A versus a cutoff of B, and repeat for a large number of trials; this gives us an estimate of the probability that cutoff A is better than cutoff B:
def Pwin(A, B, trials=30000):
    "The probability that cutoff A wins against cutoff B."
    Awins = sum(number(A) > number(B) for _ in range(trials))
    return Awins / trials
Pwin(.5, .6)
0.49946666666666667
Now define a function, top, that considers a collection of possible cutoffs, estimates the probability of each cutoff playing against each other cutoff, and returns a list of the N top cutoffs (the ones that defeat the most opponent cutoffs), along with the number of opponents they defeat:
import itertools

def top(N, cutoffs):
    "Return the N best cutoffs and the number of opponent cutoffs they beat."
    winners = Counter(A if Pwin(A, B) > 0.5 else B
                      for (A, B) in itertools.combinations(cutoffs, 2))
    return winners.most_common(N)
from numpy import arange

%time top(5, arange(0.50, 0.99, 0.01))
We get a good idea of the top cutoffs, but they are close to each other, so we can't quite be sure which is best, only that the best is somewhere around 0.60. We could get a better estimate by increasing the number of trials, but that would consume more time.
More promising is the possibility of making Pwin(A, B) an exact calculation. But before we get to Pwin(A, B), let's solve a simpler problem: assume that both players A and B have chosen a cutoff, and have each received a number above the cutoff. What is the probability that A gets the higher number? We'll call this Phigher(A, B). We can think of this as a two-dimensional sample space of points in the (a, b) plane, where a ranges from the cutoff A to 1 and b ranges from the cutoff B to 1. Here is a diagram of that two-dimensional sample space, with the cutoffs A=0.5 and B=0.6:
The total area of the sample space is 0.5 × 0.4 = 0.20, and in general it is (1 - A) · (1 - B). What about the favorable cases, where A beats B? That corresponds to the shaded triangle below:
The area of a triangle is 1/2 the base times the height, or in this case, 0.4² / 2 = 0.08, and in general, (1 - B)² / 2. So in general we have:
Phigher(A, B) = favorable / total
favorable     = ((1 - B) ** 2) / 2
total         = (1 - A) * (1 - B)

Phigher(A, B) = (((1 - B) ** 2) / 2) / ((1 - A) * (1 - B))
Phigher(A, B) = (1 - B) / (2 * (1 - A))
And in this specific case we have:
A = 0.5; B = 0.6
favorable         = 0.4 ** 2 / 2 = 0.08
total             = 0.5 * 0.4    = 0.20
Phigher(0.5, 0.6) = 0.08 / 0.20  = 0.4
But note that this only works when the cutoff A ≤ B; when A > B, we need to reverse things. That gives us the code:
def Phigher(A, B):
    "Probability that a sample from [A..1] is higher than one from [B..1]."
    if A <= B:
        return (1 - B) / (2 * (1 - A))
    else:
        return 1 - Phigher(B, A)
Phigher(0.5, 0.6)
We're now ready to tackle the full game. There are four cases to consider, depending on whether A and B each get a first number that is above or below their cutoff choices:

first numbers              probability        P(A wins)
both above their cutoffs   (1 - A) · (1 - B)  Phigher(A, B)
both below their cutoffs   A · B              Phigher(0, 0)
A above, B below           (1 - A) · B        Phigher(A, 0)
A below, B above           A · (1 - B)        Phigher(0, B)
For example, the first row of this table says that the event of both first numbers being above their respective cutoffs has probability (1 - A) · (1 - B), and if this does occur, then the probability of A winning is Phigher(A, B).
We're ready to replace the old simulation-based Pwin with a new calculation-based version:
def Pwin(A, B):
    "With what probability does cutoff A win against cutoff B?"
    return ((1-A) * (1-B) * Phigher(A, B)  # both above cutoff
            + A * B * Phigher(0, 0)        # both below cutoff
            + (1-A) * B * Phigher(A, 0)    # A above, B below
            + A * (1-B) * Phigher(0, B))   # A below, B above
That was a lot of algebra. Let's define a few tests to check for obvious errors:
def test():
    assert Phigher(0.5, 0.5) == Phigher(0.7, 0.7) == Phigher(0, 0) == 0.5
    assert Pwin(0.5, 0.5) == Pwin(0.7, 0.7) == 0.5
    assert Phigher(.6, .5) == 0.6
    assert Phigher(.5, .6) == 0.4
    return 'ok'

test()
Let's repeat the calculation with our new, exact Pwin:
top(5, arange(0.50, 0.99, 0.01))
It is good to see that the simulation and the exact calculation are in rough agreement; that gives me more confidence in both of them. We see here that 0.62 defeats all the other cutoffs, and 0.61 defeats all cutoffs except 0.62. The great thing about the exact calculation code is that it runs fast, regardless of how much accuracy we want. We can zero in on the range around 0.6:
top(10, arange(0.500, 0.700, 0.001))
This says 0.618 is best, better than 0.620. We can get even more accuracy:
top(5, arange(0.61700, 0.61900, 0.00001))
So 0.61803 is best. Does that number look familiar? Can you prove that it is what I think it is?
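One natural guess at the answer to that teaser (my reading, not stated in the text) is that 0.61803... is 1/φ, the reciprocal of the golden ratio, which is easy to check numerically:

```python
# Check: 0.61803... equals 1/phi, where phi is the golden ratio.
phi = (1 + 5 ** 0.5) / 2      # the golden ratio, 1.61803...
print(round(1 / phi, 5))      # → 0.61803
print(round(phi - 1, 5))      # 1/phi == phi - 1, a defining property of phi
```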
To understand the strategic possibilities, it is helpful to draw a 3D plot of Pwin(A, B) for values of A and B between 0 and 1:
import numpy as np
from mpl_toolkits.mplot3d.axes3d import Axes3D

def map2(fn, A, B):
    "Map fn to corresponding elements of 2D arrays A and B."
    return [list(map(fn, Arow, Brow))
            for (Arow, Brow) in zip(A, B)]

cutoffs = arange(0.00, 1.00, 0.02)
A, B = np.meshgrid(cutoffs, cutoffs)

fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.set_xlabel('A')
ax.set_ylabel('B')
ax.set_zlabel('Pwin(A, B)')
ax.plot_surface(A, B, map2(Pwin, A, B));
What does this Pringle of Probability show us? The highest win percentage for A, the peak of the surface, occurs when A is around 0.5 and B is 0 or 1. We can confirm that by finding the maximum Pwin(A, B) over many different cutoff values of A and B:
cutoffs = (set(arange(0.00, 1.00, 0.01)) |
           set(arange(0.500, 0.700, 0.001)) |
           set(arange(0.61700, 0.61900, 0.00001)))
max([Pwin(A, B), A, B] for A in cutoffs for B in cutoffs)
So A could win 62.5% of the time if only B would choose a cutoff of 0. But, unfortunately for A, a rational player B is not going to do that. We can ask what happens if the game is changed so that player A has to declare a cutoff first, and then player B gets to respond with a cutoff, with full knowledge of A's choice. In other words, what cutoff should A choose to maximize Pwin(A, B), given that B is going to take that knowledge and pick a cutoff that minimizes Pwin(A, B)?
max(min([Pwin(A, B), A, B] for B in cutoffs) for A in cutoffs)
And what if we run it the other way around, where B chooses a cutoff first, and then A responds?
min(max([Pwin(A, B), A, B] for A in cutoffs) for B in cutoffs)
In both cases, the rational choice for both players is a cutoff of 0.61803, which corresponds to the "saddle point" in the middle of the plot. This is a stable equilibrium; consider fixing B = 0.61803, and notice that if A changes to any other value, we slip off the saddle to the right or left, resulting in a worse win probability for A. Similarly, if we fix A = 0.61803, then if B changes to another value, we ride up the saddle to a higher win percentage for A, which is worse for B. So neither player will want to move from the saddle point.
The moral for continuous spaces is the same as for discrete spaces: be careful about defining your space; count/measure carefully, and let your code take care of the rest. | http://nbviewer.jupyter.org/url/norvig.com/ipython/Probability.ipynb | CC-MAIN-2018-26 | refinedweb | 6,476 | 72.76 |
CGTalk > Development and Hardware > Graphics Programming > import logic in maya
pursuit
01-18-2007, 12:44 PM
hi,
I wonder about the basic logic of the import operation in Maya.
My purpose is writing a program, or just a MEL script, that will delete all the items belonging to an imported file or object. Is that logically possible?
For example, assume that I imported a file that has 2 or more objects and layers, and the names of the objects and layers are not well-designed; they are randomly chosen.
My assumed algorithm is:
"select object1
delete all related objects"
I don't expect you to write the command, I just want to know whether or not such a relation exists.
mummey
01-26-2007, 02:04 AM
Are you looking for the Maya MEL sub-forum?
CGTalk Moderation
01-26-2007, 02. | http://forums.cgsociety.org/archive/index.php/t-452773.html | CC-MAIN-2014-52 | refinedweb | 144 | 57.4 |
I'm not sure if you've yet noticed, but I've been very into making browser games recently. I think the web is an exciting platform with a lot of cool things you can do. From Binaural Audio to doing things with sensors like Gyroscope and Accelerometer and other cool things, as well as the fact that they run virtually everywhere a modern web browser does, the web has it all and I thought it was time we started experimenting with this.
I already wrote a couple of games using these web technologies, the most known is probably Cyclepath. It uses sensors to help with turning on mobile, snappy Keyboard Input, as well as binaural Audio with environmental effects, basically everything an audio game would need.
I wanted to give back to the community, so I've spent the last few days decoupling all these snippets of code and making them completely reusable by anyone. Furthermore, they're all on GitHub, so anyone with knowledge of JavaScript can help me improve them, and those improvements will end up in the published modules on NPM. I'd be very happy.
For those of you that already know how to develop using modern ES6 JavaScript and don't want to read about how to set up a testing and development environment because you already know how, all these modules are on GitHub and NPM.
My GitHub profile:
My NPM profile:
The libraries you will find there:
AGK-Input: Helps with keyboard input, and lets you use the keyboard somewhat like you were used to from BGT in the browser.
AGK-SoundObject: An easy way to load, preload and play sounds.
AGK-SoundSource: Quick wrapper around AGK-SoundObject, which is a wrapper itself so this is a total WrapperCeption, to quickly create 3D sound sources.
AGK-TTS: Quick way to speak in the browser
AGK-UI: Build simple user interfaces like menus and scrolling text
AGK-Utils: Random stuff like distance, collision, random numbers, stuff that hasn't yet made it into it's own module but will at some point.
Examples and usage is on the respective GitHub or NPM pages.
If you don't know how to set any of this up then don't worry. I'll work on sharing my development template or boilerplate code including all my scripts to build and test games and quickly get to production. This is going to take me some time though, so please be patient with me.
I'll also be working on an example game that utilizes all of these libraries and is easy to understand.
But a quick introduction
I use the Parcel bundler to bundle my code. This is more or less necessary to make use of modern ES6 modules properly. If you want to quickly set up an environment:
Install Node.JS. You can get Node.JS from the official Node.js website.
Create a directory for your game.
Open a terminal and enter your game directory.
Type npm init
Fill out the fields to your liking.
Once it has written the package.json file, install the modules.
The ones you'll need globally are:
npm install parcel-bundler -g
npm install http-server -g
The -g tells it to install these modules globally. That means their command-line tools will be available everywhere in your terminal, not just inside this project.
Time to install the modules for the game.
Make sure you're still in the directory of your game. For now, we're going to only write a quick script that uses AGK-SoundObject to play a sound. Then:
npm install agk-soundobject --save
The --save tells NPM to update the dependencies in your package.json. This is useful for when you share your code. If a package.json is present in the current directory, simply typing npm install will install all the dependencies present in your package.json file.
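For reference, after these steps your package.json should contain a dependencies entry along these lines (the name, version numbers and other fields will differ depending on what you entered during npm init):

```json
{
  "name": "my-game",
  "version": "1.0.0",
  "dependencies": {
    "agk-soundobject": "^1.0.0"
  }
}
```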
OK, we have the module. I usually set up a client directory for all my HTML, sounds and javaScript.
mkdir client
Cool. Go into that directory and create an index.html, a js folder and a sounds folder.
Drop any wave file you want in that sounds folder. It doesn't need to be just wave files, you'll have to look at your browser and what codecs it supports.
Sweet. In the js folder, create a file called main.js.
Populate the file with this contents:
import SoundObject from "agk-soundobject";

SoundObject.directory = "./sounds/";
SoundObject.extension = ".wav";

const sound = SoundObject.create("meow");
sound.play();
This will load the meow.wav from the sounds folder in your client folder and play it.
Right. Next open index.html outside of the JS folder and populate it with this:
<html>
  <head>
    <title>My game yay!</title>
  </head>
  <body>
    <script src="./main.js"></script>
  </body>
</html>
The main.js file will be created by parcel in the next step in the client folder, not inside the js folder. The JS folder contains your written code, the main.js file outside of the JS folder will be the compiled JavaScript.
Open a terminal and make sure you're inside the client directory of your game.
Then run this:
parcel watch ./js/main.js -d ./ --target=browser
Parcel has a few commands. Watch will open a server that constantly looks for changes in your JavaScript, and when a change is detected it will automagically rebuild the bundle. Furthermore, if your browser tab is currently open the browser will automagically reload the changes you've made. Awesome right?
Next, open a new terminal and get back into your game directory. Then simply run:
http-server ./client
This will open an HTTP-Server to your client folder.
Now you can open your browser and go to the address http-server prints in the terminal (by default it serves on port 8080), and if everything goes as planned you should hear meow.wav!
If you didn't understand any of this, then that means you've not messed much with modern JavaScript. That's OK. There are so many resources to learn javaScript development out there it's unreal. Just go wild and don't be afraid to mess around.
If this seems counter productive and like a lot of work, yeah you'd be right. It used to be much more convoluted when we didn't have Parcel and had to configure webpack by hand to build the files, luckily parcel exists now and that step falls away.
That said, I'll soon upload my development template, so you will just be able to clone it with git, npm install and start coding right away.
If you have any questions feel free to ask.
I hope I didn't miss anything and I hope this is of use to someone. Have fun! <3 | http://forum.audiogames.net/viewtopic.php?pid=355215 | CC-MAIN-2018-22 | refinedweb | 1,120 | 73.58 |
New Socket Module
Contents
- New Socket Module
- Introduction
- Requirements
- Cpython compatibility
- SSL Support
- Cpython versions
- Other I/O
- Socket options
- Differences between cpython and jython
- Known issues and workarounds
Introduction
This page describes the socket module for jython 2.5, which now has support for asynchronous or non-blocking operations. This support makes possible the use of event-based server frameworks on jython. Examples of cpython event-based frameworks are Twisted, Zope and Medusa. It is considered by some that asynchronous I/O is the only way to address the C10K problem
The socket module is pretty much feature complete. It is written to use java.nio apis as much as possible, i.e. all reads and writes are carried out through the sockets java.nio.channel based interface. However, there are some circumstances, specifically timeout operations, where the java.net.socket interfaces must be used.
The socket module is quite stable; a number of bugs in the functionality have been found and fixed.
There are necessarily some differences between the behaviour of the cpython and jython socket modules, because jython is implemented on the java socket model, which is more restrictive than the C language socket interface that cpython is based on. It is the purpose of this document to describe those differences. If you find a difference in behaviour between cpython and jython that is not documented here, and where jython is behaving differently to the cpython socket documentation, then that should be considered a bug and reported, please.
Requirements
Non-blocking support
SSL support
- JVM version: any version on which jython runs.
- Jython version: any jython version >= 2.5
Cpython compatibility
The new socket module has been written to comply as closely as possible with the cpython 2.5 API for non-blocking sockets, therefore you should use the cpython socket documentation as your reference when writing code.
If the jython module exhibits behaviour that differs from that described in the cpython documentation, then that should be considered a bug and reported as such, except for certain unavoidable differences, desribed below.
SSL Support
The module includes an implementation of client side SSL support, which is compatible with the cpython SSL API. The client side ssl example from the cpython documentation should just work.
However, the cpython SSL API is extremely basic, and essentially only permits the formation of SSL wrapped sockets. It does NOT include support for any of the following
- Management of Certificates, i.e. loading, storing, manipulating certificates.
- Verification of Certificates, i.e. verifying the chain of trust.
- Non-blocking SSL support.
- Server side SSL support.
All of the above are possible, but since no other python version includes that support in the base distribution, I'm not going to do it for jython either; trying to design an API would be complex enough; implementing would be a lot of work beyond that.
If you have serious SSL or crypto requirements, then I strongly recommend using the java crypto libraries, or one of the excellent third-party crypto libraries for java, such as that from the Legion of the Bouncy Castle.
Certificate Checking
By default, the cpython socket library does not carry out certificate checking, which means that it will accept expired certificates, etc. This is acceptable when the validity of the remote certificate is not important, such as in testing.
In Java, by default, certificate checking is enabled. This means that the jython ssl routines will verify the validity of the remote certificate, and refuse to form the connection if it is not valid. If this is a problem for you, then you can get around it as described here. I have also written a blog post about this: Installing an all trusting security provider on java and jython
There is also a pure Jython implementation of this same thing here: Trusting All Certificates in Jython
By configuring your JVM
If you are using self-generated certificates for testing, then you can import those certificates into the JVM running your jython scripts, and the certificate will then be recognised. See this article from Sun on how to install certificates on a JVM.
By installing your own Security Provider
The following technique has been adapted from the ZXTM KnowledgeHub article Using the Control API with Java.
WARNING: Installing this all-trusting security provider means that all certificates will be accepted, regardless of validity! Do not use this technique if you need to verify the trustworthiness of an SSL certificate!
In order to accept all certificates, valid or not, you can install your own security provider. Here is a sample Security Provider, written in java, which trusts all certificates. This code should be saved in a file called MyProvider.java, compiled and the resulting MyProvider.class file placed in the class path.
import java.security.Security;
import java.security.KeyStore;
import java.security.Provider;
import java.security.cert.X509Certificate;
import javax.net.ssl.ManagerFactoryParameters;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactorySpi;
import javax.net.ssl.X509TrustManager;

public class MyProvider extends Provider {

    public MyProvider() {
        super("MyProvider", 1.0, "Trust certificates");
        put("TrustManagerFactory.TrustAllCertificates",
            MyTrustManagerFactory.class.getName());
    }

    protected static class MyTrustManagerFactory extends TrustManagerFactorySpi {
        public MyTrustManagerFactory() {}
        protected void engineInit(KeyStore keystore) {}
        protected void engineInit(ManagerFactoryParameters mgrparams) {}
        protected TrustManager[] engineGetTrustManagers() {
            return new TrustManager[] {new MyX509TrustManager()};
        }
    }

    protected static class MyX509TrustManager implements X509TrustManager {
        public void checkClientTrusted(X509Certificate[] chain, String authType) {}
        public void checkServerTrusted(X509Certificate[] chain, String authType) {}
        public X509Certificate[] getAcceptedIssuers() { return null; }
    }
}
To install this custom Security Provider from jython, execute the following code
import java
import MyProvider

# Install the all-trusting trust manager
java.security.Security.addProvider(MyProvider())
java.security.Security.setProperty("ssl.TrustManagerFactory.algorithm",
                                   "TrustAllCertificates")
All code which creates SSL socket connections should now accept all certicates, whether they are valid or not.
Cpython versions
It is the intention that these modules be fully API compatible with cpython, as far as is possible or sensible. This means that any cpython socket code that is syntax compatible with your selected jython version should produce identical behaviour to the same code running on cpython.
The unit-tests provided with these modules should also run on all versions of cpython. See below under unit-tests for more details.
Other I/O

The module's asynchronous support is implemented on the underlying java package, java.nio.channels.
Socket options
The following socket options are supported on jython
For TCP client sockets
- SO_ERROR
- SO_KEEPALIVE
- SO_LINGER
- SO_OOBINLINE
- SO_RCVBUF
- SO_REUSEADDR
- SO_SNDBUF
- SO_TIMEOUT
- SO_TYPE
- TCP_NODELAY
For TCP server sockets
- SO_ACCEPTCONN
- SO_ERROR
- SO_RCVBUF
- SO_REUSEADDR
- SO_TIMEOUT
- SO_TYPE
The following TCP client socket options can be set on TCP Server sockets, but will have no effect on them. Instead, they will be propagated to accepted client sockets. For more details, read #1309 - Server sockets do not support client options and propagate them to 'accept'ed client sockets
- SO_KEEPALIVE
- SO_LINGER
- SO_OOBINLINE
- SO_SNDBUF
- TCP_NODELAY
For UDP sockets
- SO_BROADCAST
- SO_ERROR
- SO_RCVBUF
- SO_REUSEADDR
- SO_SNDBUF
- SO_TIMEOUT
- SO_TYPE
If an option is not explicitly listed above, it is explicitly not supported, i.e. jython cannot support it because java does not support the option.
Note that the level at which all but one of the options above are get and set is socket.SOL_SOCKET. The exception is TCP_NODELAY, which is at level socket.IPPROTO_TCP.
All but one of the above take either a single boolean/integer or a single integer as a parameter, so they might be called like so
mysock.setsockopt(socket.SOL_SOCKET, socket.SO_OOBINLINE, 1)
mysock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 32768)
mysock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
The exception is SO_LINGER, which (thanks to cpython's C heritage) takes a packed struct containing two values, the first is the flag to enable or disable SO_LINGER, the second specifies the linger time. The return value from getting the option is similarly packed. Here is a code snippet
import struct

linger_enabled = 1
linger_time = 10
linger_struct = struct.pack('ii', linger_enabled, linger_time)
mysock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, linger_struct)

linger_status = mysock.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER)
linger_enabled, linger_time = struct.unpack('ii', linger_status)
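Note that on cpython the equivalent getsockopt call needs an explicit buffer length to get the packed struct back rather than an integer. A runnable round-trip sketch (cpython syntax; the socket is never connected, we only exercise the option):

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Enable SO_LINGER with a 10 second linger time
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack('ii', 1, 10))

# On cpython, pass a buffer length (8 bytes = two ints) to receive the raw struct
packed = s.getsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 8)
linger_enabled, linger_time = struct.unpack('ii', packed)
s.close()
```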
Differences between cpython and jython
socket.fileno() does not return an integer
Cpython, as is appropriate for its C language heritage, returns an integer from the socket.fileno() method. However, the concept of having an integer handle on I/O channels is a C language specific idiom: C maintains an internal of array of FILE* structures, and the integer "file descriptor" is an index into that array.
In java, there is no concept of a file descriptor table. Instead, java deals with objects, java.nio.channel objects in the case of sockets.
Also, the only reason why one might need the return value from the socket.fileno() method is to pass it to one of the functions in the select module. However, the portable way to do this, i.e. the way to write code that works portably across cpython and jython (and other pythons such as ironpython, pypy, etc) is to pass the socket object directly to the select call. This is approved by Guido, the python BDFL, as you can see from this email discussion on python-dev: python-dev: return value of socket.fileno().
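A minimal sketch of the portable idiom, using a local loopback listener so it runs anywhere (the addresses and the timeout value are arbitrary choices for illustration):

```python
import select
import socket

# A listening socket on an ephemeral loopback port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.setblocking(False)
try:
    client.connect(server.getsockname())
except socket.error:
    pass  # a non-blocking connect is usually still "in progress" here

# Portable across cpython and jython: pass the socket OBJECTS to select,
# not the return values of fileno()
readable, writable, errored = select.select([server], [client], [], 5.0)

conn, addr = server.accept() if server in readable else (None, None)
```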
The socket.fileno() method has been added to jython purely for code compatibility, i.e. to prevent exceptions from code that was not written to be portable (we try hard to be good citizens in jython-land ;-)
For a much more detailed discussion of this subject, see this email discussion: fileno support is not in jython. Reason?
Socket shutdown
On sockets, the shutdown method is closely related to the close method, in that it terminates network communication on the socket. However, there are differences, because close relates to the file descriptor connected to the socket, whereas the shutdown method relates to the socket itself. But java doesn't have file descriptors, and doesn't have the shutdown method for sockets in general; only TCP client sockets have shutdown methods; for TCP server sockets and UDP sockets, the method to shutdown a socket is the close method. For a detailed discussion of these issues, see this blog post: Socket shutdown versus socket close on cpython, jython and java. On jython, the shutdown method is implemented as follows, for each type of socket
TCP Client sockets. When shutdown is called on TCP client sockets, the how parameter determines how the read and write streams of the socket are treated. This is standard cpython behaviour, and follows the cpython documentation.
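To illustrate the client-socket case, here is a small sketch using a local socket pair (my own example, not from the original text; socket.socketpair is a cpython convenience used here purely to get two connected stream sockets):

```python
import socket

a, b = socket.socketpair()

a.sendall(b"last words")
a.shutdown(socket.SHUT_WR)   # a: no more writes; a may still read

data = b.recv(1024)          # b still receives the bytes sent before shutdown
eof = b.recv(1024)           # then sees end-of-stream, signalled by b""

a.close()
b.close()
```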
TCP Server sockets. When shutdown is called on TCP server sockets, the effect should be to close the listening socket, and discard all incoming connections in the listen queue. However, in java there is no shutdown method for server sockets (see ServerSocket and ServerSocketChannel), there is only a close method. So in jython, the shutdown method on TCP server sockets is a no-op, i.e. it does nothing. If you want to stop accepting incoming requests, then you should call the close() method on the socket.
UDP sockets. When shutdown is called on an UDP socket, as with TCP server sockets, the effect should be to stop accepting incoming packets. However, in java there is no shutdown method on udp sockets (see DatagramSocket and DatagramChannel), there is only a close method. So in jython, the shutdown method on UDP sockets is a no-op, i.e. it does nothing. If you want to stop accepting incoming packets, then you should call the close() method on the socket.
Error handling
These modules have been coded so that the error handling is as close to the error handling of cpython as possible. So, ideally, cpython socket and select code run on jython should exhibit identical behaviour to cpython, in terms of exception types raised, error numbers returned, etc.
However, due to the different semantics of java and C, there will be differences in the error handling, which will be documented here as they are discovered.
Differences in the treatment of zero timeout values
On cpython, when you specify a zero (i.e. 0 or 0.0) timeout value, the socket should behave the same as if the socket had been placed in non-blocking mode. See the cpython socket object documentation for details.
However, Java interprets a zero timeout value as an infinite timeout, i.e. the socket is placed in blocking mode.
To solve this conflict, I decided that the best thing to do with zero timeouts is to adjust them to the smallest possible timeout in java, which is 1 millisecond. So if you do socket.settimeout(0) with the new jython socket module, what you will really get is equivalent to the cpython call socket.settimeout(0.001).
This means that you may get differing exception signatures when using zero timeouts under cpython and jython.
- Cpython: socket operations with zero timeouts that fail will raise socket.error exceptions. This is equivalent to a -1 return from the C socket.recv call, with errno set to EAGAIN, meaning "The socket is marked non-blocking and the receive operation would block, or a receive timeout had been set and the timeout expired before data was received."
- Jython: socket operations with zero timeouts that fail will generate socket.timeout exceptions.
Deferred socket creation on jython
Java has different objects for Client (java.net.Socket+java.nio.channels.SocketChannel) and Server (java.net.ServerSocket+java.nio.channels.ServerSocketChannel) sockets. Jython cannot know whether to create a client or server socket until some client or server specific behaviour is observed. For clients, this is a connect() call, and for servers, this is a listen() call.
When a connect() call is made, jython then knows that this is a client socket, and to create a java.net.Socket+java.nio.channels.SocketChannel to implement the socket.
Similarly, when a listen() call is made, jython then knows that this is a server socket, and to create a java.net.ServerSocket+java.nio.channels.ServerSocketChannel to implement the socket.
Any attempt to get information about a socket before either a connect() or listen() call is made, i.e. before the implementation socket is created, may fail, although attempts are made to return already set configuration values, if possible.
For example, consider the following code
>>> import socket >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> s.bind( ("localhost", 8888) ) >>> print s.getsockname() (u'127.0.0.1', 8888) >>> s.listen(5) >>> print s.getsockname() (u'127.0.0.1', 8888)
The value for port number, 8888, that is returned after the bind() call, is not the port number retrieved from the underlying socket itself, because the socket does not yet exist. Instead, you are being returned the requested configuration value, the value you passed in. Only after the listen() call does the socket exist, so only then is the port number retrieved from the actual underlying socket.
This may seem a trivial concern, since the correct port is always returned. But there is one situation where it does make a difference. If you request a port number of 0, then that is a request to the operating system to pick an ephemeral port number for you, one that is guaranteed to be free. Consider this code
>>> import socket >>> s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) >>> s.bind( ("localhost", 0) ) >>> print s.getsockname() (u'127.0.0.1', 0) >>> s.listen(5) >>> print s.getsockname() (u'127.0.0.1', 3357)
In this scenario, if you relied on the port number returned after the bind call but before the listen call, e.g. handing it to clients, then your clients would fail, because 0 is an invalid port number.
Similarly, the actual underlying implementation socket for client sockets does not exist until you have called one of the connect() methods.
Known issues and workarounds
Null address returned from UDP socket.recvfrom() in timeout mode
This bug is reported on the jython bug tracker: UDP recvfrom fails to get the remote address.
Description
When an UDP socket is in timeout mode, the address returned from socket.recvfrom() is always (None, -1).
This only happens in timeout mode, because that code path uses java.net.DatagramSocket.receive() to receive packets. For some reason, DatagramPackets receive()d in this way either
Return null from the DatagramPacket.getAddress() method, thus preventing us from obtaining the source address
Cause an exception when DatagramPacket.getSocketAddress() is called.
This bug has been reported to Sun, but is not yet public on the Java bug database.
Workaround
- Until the java bug is fixed, only use sockets in either blocking or non-blocking mode; the problem will not occur in this case.
If you need timeouts, then until the java bug is fixed, you should probably create your own DatagramSockets, directly through the java.net APIs.
IPV6 address support
Description
On some versions of the JVM, there are problems with bind'ing and connect'ing sockets using IPV6 addresses, even though the OS platform ostensibly supports IPV6. See this bug on the Java bug database: NIO channels with IPv6 on Windows
There have also been reports of IPV6 problems with java.nio on Windows 7 64 bit, so these instructions also apply to that platform, since the jython socket module uses java.nio extensively. Further information: the reporter of that bug has found that the problem appears on JDK 1.6, and goes away when he uses a release of JDK 1.7, which is still in pre-release.
The most appropriate mechanism for retrieving IP addresses is the getaddrinfo() function, to which you pass parameters to specify the kind of addresses you're interested in. The function returns a 5-tuple, the last element of which is a tuple of (ip-address, port), which you then use to bind or connect your socket. For example
>>> import socket >>> socket.getaddrinfo("localhost", 80, socket.AF_INET, socket.SOCK_STREAM)[0][4] ('127.0.0.1', 80)
If you want to retrieve IPV4 addresses only, you pass the family parameter AF_INET, as in the example above.
If you want to retrieve IPV6 addresses only, you pass the family parameter AF_INET6, like this.
>>> import socket >>> socket.getaddrinfo("localhost", 80, socket.AF_INET6, socket.SOCK_STREAM)[0][4] ('0:0:0:0:0:0:0:1', 80)
If you don't care whether you receive IPV4 or IPV6 addresses, specify the family parameter as AF_UNSPEC, and you will be returned a mix of addresses, if your system supports IPV6, like this
>>> import socket >>> for a in socket.getaddrinfo("localhost", 80, socket.AF_UNSPEC, socket.SOCK_STREAM) ... print a[4] ... ('127.0.0.1', 80) ('0:0:0:0:0:0:0:1', 80)
A problem arises because of incomplete IPV6 support in the JVM, particularly on older JVMs, such as the bugs mentioned above. So, on some systems, if you try to use IPV6 addresses, you will get a java.net.SocketException.
This is problematic in jython, because there are multiple modules, such as telnetlib, httplib, smtplib, etc, that use the socket library to retrieve addresses, but which use the AF_UNSPEC parameter to getaddrinfo(). This means that those modules may be returned IPV6 addresses, which they will try to bind or connect, and possibly fail, for the reasons discussed above. Which would mean you get failures from those modules.
So to prevent those failures, we've put a workaround in place.
Workaround
The workaround is a jython-specific function which instructs the getaddrinfo() function to only return IPV4 addresses. Although this function is in the socket module, it affects all modules that use the getaddrinfo() function.
It is used like this
>>> import socket >>> socket._use_ipv4_addresses_only(True) >>> for a in socket.getaddrinfo("localhost", 80, socket.AF_UNSPEC, socket.SOCK_STREAM) ... print a[4] ... ('127.0.0.1', 80)
Note that this function should be called before any other modules attempt to use the socket module, like so
import httplib import socket socket._use_ipv4_addresses_only(True) py_dot_org = httplib.HTTPConnection('')
To restore normal behaviour of the AF_UNSPEC family, pass a False value to socket._use_ipv4_addresses_only()
NB: This workaround only affects the getaddrinfo() function. If you pass traditional (hostname, port) tuples containing IPV6 addresses to socket functions, e.g. ("::1", 80), you will not be able to avail of this workaround.
Internationalized Domain Name support
Internationalized Domain Names for Applications (IDNA) is a mechanism whereby characters from outside the US-ASCII character set may be used in domain names. IDNA is specified in RFC 3490.
Before IDNA, it was not possible to include non-ASCII characters in domain names, e.g. accented characters such as á, é, í, ó, ú, etc, or characters from non-Roman alphabets, such Japanese, Chinese, Korean, Hebrew, Arabic, etc. In May 2010, based on widespread support for IDNA, ICANN permitted the first non-ASCII TLDs to be made available.
Therefore, in order to access the entire internet, including non-ASCII domain names, IDNA support is required.
Description
When you pass a non-ASCII hostname to the cpython or jython socket module, it attempts to automatically map that name to its IDNA equivalent. This applies to all functions that take hostnames, e.g. getaddrinfo(), bind(), connect(), etc, as in these examples from cpython
Python 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.getaddrinfo(u"alán.com", 80) [(2, 0, 0, '', ('207.234.185.84', 80))] >>>
The problem is that jython's IDNA support is currently broken. This means that any attempt to use socket module functions to access domain names that contain non-ASCII characters will break.
Jython 2.5.0 (Release_2_5_0:6476, Jun 16 2009, 13:33:26) [Java HotSpot(TM) Client VM (Sun Microsystems Inc.)] on java1.5.0_18 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.getaddrinfo(u"alán.com", 80) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\jython250\Lib\socket.py", line 593, in getaddrinfo raise _map_exception(jlx) socket.gaierror: (20001, 'getaddrinfo failed') >>>
Workaround 1 for Java 6 JVMs
There is builtin IDNA support in Java 6. This support has been used in the jython socket module, but only when jython is run on a Java 6 JVM. This support was first made available in jython 2.5.2rc4. See this example from the latest jython
Jython 2.5.2rc3 (trunk, Feb 1 2011, 21:42:10) [Java HotSpot(TM) Client VM (Sun Microsystems Inc.)] on java1.6.0_13 Type "help", "copyright", "credits" or "license" for more information. >>> import socket >>> socket.getaddrinfo(u"alán.com", 80) [(2, None, 0, 'xhaus.com', ('207.234.185.84', 80))] >>>
This IDNA support will not work on Java 5 JVMs: You must use a Java 6 JVM for it to work.
Workaround 1 for Java 5 JVMs
If you are unable to upgrade the JVM on which you run jython to Java 6, then there is another potential workaround.
There is a GNU library for IDNA support, which includes Java support for IDNA.
If you want to have IDNA support in jython on Java 5, then you download the GNU library, put libidn.jar on your classpath, and use it as follows
>>> import gnu.inet.encoding.IDNA >>> domain = u"alán.com" >>> idna_domain = gnu.inet.encoding.IDNA.toASCII(domain) >>> idna_domain u'xn--aln-fla.com' >>> import socket >>> socket.getaddrinfo(idna_domain, 80) [(2, None, 0, 'xhaus.com', ('207.234.185.84', 80))]
This same technique will also work on socket-dependent modules, such as httplib, urllib/2, ftplib, telnetlib, SocketServer, etc. | https://wiki.python.org/jython/NewSocketModule?highlight=SocketServer | CC-MAIN-2016-22 | refinedweb | 3,891 | 56.35 |
.
Thanks ccc! I'll give that a look. I was using the terminal escape codes for a bash script, but they don't work in StaSh. A different way sounds fun!
Thanks again!
Thanks JonB! I certainly got those from the Pythonista docs. Was under the impression that was how StaSh was doing it. I didn't go down the code chain as far as I should have.
Thanks for the color references. That will work for now. I really didn't relish working through the couple hundred different color names I found on the web for CSS colors. You saved me some work! Thanks!
Good to know that there is another way to set text with a tuple. I might give that a look. With what you gave me though, probably won't need to do anything with that for awhile though!
Thanks again for the help!
Mike
- AtomBombed
Or you could just use base escape codes. That's what I use since they're cleaner and more dynamic:
class stashansi: fore_red = "\x9b31m" fore_blue = "\x9b34m" fore_end = "\x9b39m" back_red = "\x9b41m" back_blue = "\x9b44m" back_end = "\x9b49m" bold = "\x9b1m" underline = "\x9b4m" all_end = "\9x0m" print(stashansi.back_blue + stashansi.fore_red + "Hello world" + back_end + "This is just red with no blue background" + fore_end + "Now it's all just normal text...")
Be aware that whenever you print something with ANSI escape codes, you MUST use the ender escape codes to change the colors back to the original state, or else if your program ends, the colors will continue to be the same even outside of your program!
Thanks AtomBombed! That is very similar to what I was using with standard sh/bash escape codes.
I might update my code to use this, but it's working well right now, so ...
It's good to have the escape codes in one place though!
Thanks again!
Mike
I forgot and didn't mention -- I went the other way! I had this working on Mac terminal with colors and wanted it to work in StaSh. Now, it selects things dynamically and should work on either without any changes!
Thanks again!
Mike
Hi Mike,
You mentioned you went the other way - just wondering what this means, can you share an example that works interchangeably with StaSh and macOS terminal?
I am trying to accomplish a similar goal and any pointers would be greatly appreciated :-)
Thanks,
Darren
Hi Darren!
I made the classes (shown below) to handle the color. I had the TerminalColors class already written, so didn't want to change the interface to handle both cases, so I created the getTerminalColorClass() function to key off of the existence (or not) of the _stash variable that can give access to the text_xxx functions.
There is certainly some refactoring that could be done -- just haven't gotten to it yet. If I used the ANSI codes, as suggested, then it's likely all I would need to change are the actual definitions of the colors. I'd probably use some sort of enum that is defined differently like the getTerminalColorClass() function.
def getTerminalColorClass(): if _stash != None: return StashTerminalColors() return TerminalColors() class TerminalColors(object): """ here is a good url for the different colors and things: """ HEADER = '\033[95m' BLUE = '\033[94m' DARK_BLUE = '\033[34m' GREEN = '\033[92m' DARK_GREEN = '\033[32m' YELLOW = '\033[93m' DARK_YELLOW = '\033[33m' RED = '\033[91m' DARK_RED = '\033[31m' BOLD = '\033[1m' UNDERLINE = '\033[4m' CYAN = '\033[96m' DARK_CYAN = '\033[36m' WHITE = '\033[97m' PURPLE = '\033[95m' DARK_PURPLE = '\033[35m' END = '\033[0m' def __init__(self): pass def simple_print(self, color, str_msg): print( "%s%s" % (self.get_color_string(color, str_msg), self.END)) def complex_print(self, color, str_format, data): str_msg = str_format % data self.simple_print( color, str_msg ) def multi_color_print(self, color_str_tuple_list): str_msg = "" for clr, str in color_str_tuple_list: str_msg += self.get_color_string(clr, str) self.println(str_msg) def get_color_string(self, color, str_msg): return "%s%s" % (color, str_msg) def println(self, str_msg): print( "%s%s" % (str_msg, self.END) ) class StashTerminalColors (object): # from shcommon.py in stash code HEADER = 'UNDERLINE' BLACK = 'black' RED = 'red' GREEN = 'green' BROWN = 'brown' BLUE = 'blue' MAGENTA = 'magenta' PURPLE = 'magenta' CYAN = 'cyan' WHITE = 'white' GRAY = 'gray' YELLOW = 'yellow' SMOKE = 'smoke' DEFAULT = 'white' STRIKE = 'STRIKE' BOLD = 'BOLD' UNDERLINE = 'UNDERLINE' BOLD_ITALIC = 'BOLD_ITALIC' ITALIC = 'ITALIC' END = 'END' def __init__(self): self._stash = _stash def simple_print(self, color, str_msg): print("%s" % self.get_color_string(color, str_msg)) def get_color_string(self, color, str_msg): if color == self.BOLD: prn_str = _stash.text_bold(str_msg) elif color == self.UNDERLINE: prn_str = _stash.text_underline(str_msg) elif color == self.BOLD_ITALIC: prn_str = _stash.text_bold_italic(str_msg) elif color == self.ITALIC: prn_str = _stash.text_italic(str_msg) elif color == self.STRIKE: prn_str = 
_stash.text_strikethrough(str_msg) else: prn_str = _stash.text_color(str_msg, color) return prn_str def multi_color_print(self, color_str_tuple_list): str_msg = "" for clr, str in color_str_tuple_list: str_msg += self.get_color_string(clr, str) print(str_msg)
The class is pretty simple to use (written explicitly for my use case, so ... YMMV). I'll give examples of usage for both below.
Here are some examples of using simple_print():
tc = getTerminalColorClass() tc.simple_print(tc.BLUE, "this should be BLUE") print("hopefully, this is NOT blue") tc.simple_print(tc.DEFAULT, "this is the DEFAULT color (white)") tc.simple_print(tc.BROWN, "this is BROWN text") tc.simple_print(tc.BOLD, "this should be BOLD") tc.simple_print(tc.UNDERLINE, "this should be UNDERLINE") tc.simple_print(tc.STRIKE, "this should be STRIKETHROUGH") tc.simple_print(tc.ITALIC, "this should be ITALIC") tc.simple_print(tc.BOLD_ITALIC, "this should be BOLD ITALIC") # will combine multiple effects msg = '\n ID Status Date(-t) Owner(-u) Description (-d)\n' tc.simple_print(self.tc.BOLD + self.tc.UNDERLINE, msg)
Here is some example for multi_color_print():
def print_status_footer(): tc = getTerminalColorClass() # statusLine = "Status: [+]add [@]block [-]reject [*]accept [#]workon [.]finish" prn_list = [ (tc.WHITE, "Status: "), (get_color_for_status('+'), "[+]add "), (get_color_for_status('@'), "[@]block "), (get_color_for_status('-'), "[-]reject "), (get_color_for_status('*'), "[*]accept "), (get_color_for_status('#'), "[#]workon "), (get_color_for_status('.'), "[.]finish ") ] tc.multi_color_print(prn_list) tc.simple_print(tc.WHITE, " ST=status PR=priority")
The commented out line shows what it was before I added color to it. The get_color_for_status() just returns one of the colors defined in the terminal color class.
I hope that this helps!
Mike | https://forum.omz-software.com/topic/4175/stash-is-it-possible-to-get-color-output-from-a-script-to-the-stash-console/17 | CC-MAIN-2021-17 | refinedweb | 997 | 58.89 |
Charlie Zender
2013-07-23
we need volunteers to contribute cygwin builds. anyone?
cz
Charlie Zender
2013-07-23
also note that there are now native windows builds distributed on the homepage.
cz
it's both an indirect object and the subject of a noun phrase, so i share your confusion.
the ncap executable is not ncap2, it is an earlier form of ncap2 that is easier to build.
it can perform a limited number of straightforward functions but nothing fancy.
we're (thanks mark!) working on getting ncap2 in the next cygwing build.
cz
I have not made much progress on ncap2. I am stuck on the ANTLR-C++ interface. Different ANTLR versions appear to have wildly different build systems. The last one that uses the old familiar configure mechanism is 2.7.7 (7 years old) and on Cygwin there is a compilation failure. Too much trouble for a mere scientist like me.
@brendandetracey, hadfield
We will provide NCO 4.3.6 cygwing binaries
Pedro
Mark, what is the antlr error? It rings a bell. I think it may just require a one-line change in the antlr source tree…
cz
compiling /home/hadfield/tmp/antlr-2.7.7/lib/cpp/src/../../../lib/cpp/src/CharScanner.cpp
In file included from /home/hadfield/tmp/antlr-2.7.7/lib/cpp/src/../../../lib/cpp/src/CharScanner.cpp:10:0:
C:/Cygwin/home/hadfield/tmp/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp:474:30: error: ‘EOF’ was not declared in this scope
C:/Cygwin/home/hadfield/tmp/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp: In member function ‘bool antlr::CharScannerLiteralsLess::operator()(const string&, const string&) const’:
C:/Cygwin/home/hadfield/tmp/antlr-2.7.7/lib/cpp/antlr/CharScanner.hpp:565:41: error: ‘strcasecmp’ was not declared in this scope
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
>> E R R O R <<
============================================================
g++ -O2 -DNDEBUG -felide-constructors -pipe -c -I . -I C:/Cygwin/home/hadfield/tmp/antlr-2.7.7/lib/cpp /home/hadfield/tmp/antlr-2.7.7/lib/cpp/src/../../../lib/cpp/src/CharScanner.cpp
============================================================
Got an error while trying to execute command above. Error
messages (if any) must have shown before. The exit code was:
exit(1)
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Makefile:181: recipe for target
/home/hadfield/tmp/antlr-2.7.7/lib/cpp/src/CharScanner.o' failed
make[3]: *** [/home/hadfield/tmp/antlr-2.7.7/lib/cpp/src/CharScanner.o] Error 1
make[3]: Leaving directory/home/hadfield/tmp/antlr-2.7.7/lib/cpp/src'
Makefile:146: recipe for target
all' failed
make[2]: *** [all] Error 1
make[2]: Leaving directory/home/hadfield/tmp/antlr-2.7.7/lib/cpp'
Makefile:141: recipe for target
all' failed
make[1]: *** [all] Error 1
make[1]: Leaving directory/home/hadfield/tmp/antlr-2.7.7/lib'
Makefile:160: recipe for target `all' failed
make: [all] Error 1
Mark, I found that error too while building ANTLR 2.7.7;
This version here (bottom of page)
has that fixed
Pedro
The ANTLR file CharScanner.hpp needs to have a line like the following
#include <strings.h>
near the top so it can find the prototype for strcasecmp().
cz
If Pedro is going to build a NCO 4.3.6 Cygwin binary with a working ncap2 then I am more than happy to abandon my own feeble efforts.
Mark, until the NCO 4.3.6 Cygwin binaries are ready, you might want to give a try to the native NCO 4.3.6 Windows build
Out of curiosity: Why the need for Cygwin binaries? Because you use Cygwin anyway?
Pedro
Because I use Cygwin anyway. Because in the Cygwin environment it is usually best to use Cygwin-aware applications-ones that understand Unix paths, for example. Because my Cygwin netCDF library can handle things like OpenDAP, and NCO can make use of this seamlessly. Because I'm a bit stubborn when I try to build something under Cygwin and run into problems.
But I'm definitely not a purist and I'm not stubborn enough to bash my head against a brick wall endlessly. A Windows-native ncap2 executable would be handy. :-)
Mark Hadfield
2013-10-01
I downloaded and installed the NCO Windows package. Thanks, Pedro. When I run ncks.exe it's looking for hdf5_D.dll. Where do I find this? Here?
I presume it's 32-bit, not 64-bit?
Pedro Vicente
2013-10-01
I added that missing file to the NCO distribution
please download again
By the way, all operators are built in this native distribution, including ncap2
However there are 2 features that are missing from the native build, but are available in the Cygwin build
1) UDunits
2) Regular expressions
This is because there are not Windows native ports of these packages. We will support them in a future release, hopefully (meaning we will try to build a Windows native port ourselves).
So, if these are important features to you, you may want to stick with the Cygwin build for a while.
Other feature also not present in the native build is HDF4 linkage (allowing to open HDF4 files via NCO), I will add that soon
Pedro
Mark Hadfield
2013-10-02
And now it's asking for hdf5_hl_D.dll -:)
Pedro Vicente
2013-10-02
I uploaded a new NCO 4.3.6 Windows package with that DLL file included.
Pedro
I downloaded the latest cygwin build (nco-4.3.7.win32.cygwin.tar.gz (5.0M): Executables Cygwin-compatible (last updated Thursday, 17-Oct-2013 21:14:04 UTC).) and installed by untarring into /usr/local/.
For some reason ncap2 does nothing. Can anyone enlighten me on this?
(Thanks for the updated cygwin build. I was in the process of attempting my own, but after waiting fruitlessly three months for our IT dept to allow me to run the cygwin setup to install GSL, I gave up.)
Pedro Vicente
2013-10-29
What was the command you used for ncap2?
Can you try this?
ncks -C -v time ~/nco/data/in.nc
ncap2 -s 'time=time+1' ~/nco/data/in.nc out.nc
ncks -C -v time out.nc
Pedro
Should not really matter what the command is.
When I try:
ncap2
nothing happens or prints to the screen, just back to the command prompt.
When I try:
ncap2 -r
again nothing, just the next command prompt.
Should it matter where I unpacked the tarball? I noticed there are some libraries as well as executables. Would the location of these matter? I was assuming the executables are statically linked.
Pedro Vicente
2013-10-29
What happens if you run ncap2, or any other operator, in the place where you unpacked to?
cd /usr/local
cd bin
ncap2 -r
ncks -r
All other nco executables work. This is, if I enter the command with no options, the help text is displayed. It is only ncap2 that does not work.
I tried ncap2 from /usr/local/bin but it did not work.
FYI:( 5941077 Oct 17 18:10 ncap2.exe )
One other hint.
ldd /usr/local/bin/ncap2
hangs. Maybe this means a library is not properly linked? This is not my area of expertise.
Try this command on your system and see what it reports. | http://sourceforge.net/p/nco/discussion/9829/thread/ce445836 | CC-MAIN-2015-22 | refinedweb | 1,210 | 59.9 |
Finding Neighbors¶
Now that you’ve been introduced to the basics of interacting with freud, let’s dive into the central feature of freud: efficiently and flexibly finding neighbors in periodic systems.
Problem Statement¶
Neighbor-Based Calculations¶
As discussed in the previous section, a central task in many of the computations in freud is finding particles’ neighbors. These calculations typically only involve a limited subset of a particle’s neighbors that are defined as characterizing its local environment. This requirement is analogous to the force calculations typically performed in molecular dynamics simulations, where a cutoff radius is specified beyond which pair forces are assumed to be small enough to neglect. Unlike in simulation, though, many analyses call for different specifications than simply selecting all points within a certain distance.
An important example is the calculation of order parameters, which can help characterize phase transitions. Such parameters can be highly sensitive to the precise way in which neighbors are selected. For instance, if a hard distance cutoff is imposed in finding neighbors for the hexatic order parameter, a particle may only be found to have five neighbors when it actually has six neighbors except the last particle is slightly outside the cutoff radius. To accomodate such differences in a flexible manner, freud allows users to specify neighbors in a variety of ways.
Finding Periodic Neighbors¶
Finding neighbors in periodic systems is significantly more challenging than in aperiodic systems. To illustrate the difference, consider the figure above, where the black dashed line indicates the boundaries of the system. If this system were aperiodic, the three nearest neighbors for point 1 would be points 5, 6, and 7. However, due to periodicity, point 2 is actually closer to point 1 than any of the others if you consider moving straight through the top (or equivalently, the bottom) boundary. Although many tools provide efficient implementations of algorithms for finding neighbors in aperiodic systems, they seldom generalize to periodic systems. Even more rare is the ability to work not just in cubic periodic systems, which are relatively tractable, but in arbitrary triclinic geometries as described in Periodic Boundary Conditions. This is precisely the type of calculation freud is designed for.
Neighbor Querying¶
To understand how
Compute classes find neighbors in freud, it helps to start by learning about freud’s neighbor finding classes directly.
Note that much more detail on this topic is available in the Query API topic guide; in this section we will restrict ourselves to a higher-level overview.
For our demonstration, we will make use of the
freud.locality.AABBQuery class, which implements one fast method for periodic neighbor finding.
The primary mode of interfacing with this class (and other neighbor finding classes) is through the
query interface.
import numpy as np import freud # As an example, we randomly generate 100 points in a 10x10x10 cubic box. L = 10 num_points = 100 # We shift all points into the expected range for freud. points = np.random.rand(num_points)*L - L/2 box = freud.box.Box.cube(L) aq = freud.locality.AABBQuery(box, points) # Now we generate a smaller sample of points for which we want to find # neighbors based on the original set. query_points = np.random.rand(num_points/10)*L - L/2 distances = [] # Here, we ask for the 4 nearest neighbors of each point in query_points. for bond in aq.query(query_points, dict(num_neighbors=4)): # The returned bonds are tuples of the form # (query_point_index, point_index, distance). For instance, a bond # (1, 3, 0.2) would indicate that points[3] was one of the 4 nearest # neighbors for query_points[1], and that they are separated by a # distance of 0.2 # (i.e. np.linalg.norm(query_points[1] - points[3]) == 2). distances.append(bond[2]) avg_distance = np.mean(distances)
Let’s dig into this script a little bit.
Our first step is creating a set of 100 points in a cubic box.
Note that the shifting done in the code above could also be accomplished using the
Box.wrap method like so:
box.wrap(np.random.rand(num_points)*L).
The result would appear different, because if plotted without considering periodicity, the points would range from
-L/2 to
L/2 rather than from 0 to
L.
However, these two sets of points would be equivalent in a periodic system.
We then generate an additional set of
query_points and ask for neighbors using the
query method.
This function accepts two arguments: a set of points, and a
dict of query arguments.
Query arguments are a central concept in freud and represent a complete specification of the set of neighbors to be found.
In general, the most common forms of queries are those requesting either a fixed number of neighbors, as in the example above, or those requesting all neighbors within a specific distance.
For example, if we wanted to rerun the above example but instead find all bonds of length less than or equal to 2, we would simply replace the for loop above with:
for bond in aq.query(query_points, dict(r_max=2)): distances.append(bond[2])
Query arguments constitute a powerful method for specifying a query request.
Many query arguments may be combined for more specific purposes.
A common use-case is finding all neighbors within a single set of points (i.e. setting
query_points = points in the above example).
In this situation, however, it is typically not useful for a point to find itself as a neighbor since it is trivially the closest point to itself and falls within any cutoff radius.
To avoid this, we can use the
exclude_ii query argument:
query_points = points for bond in aq.query(query_points, dict(num_neighbors=4, exclude_ii=True)): pass
The above example will find the 4 nearest neighbors to each point, excepting the point itself. A complete description of valid query arguments can be found in Query API.
Neighbor Lists¶
Query arguments provide a simple but powerful language with which to express neighbor finding logic.
Used in the manner shown above,
query can be used to express many calculations in a very natural, Pythonic way.
By itself, though, the API shown above is somewhat restrictive because the output of
query is a generator.
If you aren’t familiar with generators, the important thing to know is that they can be looped over, but only once.
Unlike objects like lists, which you can loop over as many times as you like, once you’ve looped over a generator once, you can’t start again from the beginning.
In the examples above, this wasn’t a problem because we simply iterated over the bonds once for a single calculation.
However, in many practical cases we may need to reuse the set of neighbors multiple times.
A simple solution would be to simply to store the bonds into a list as we loop over them.
However, because this is such a common use-case, freud provides its own containers for bonds: the
freud.locality.NeighborList.
Queries can easily be used to generate
NeighborList objects using their
toNeighborList method:
query_result = aq.query(query_points, dict(num_neighbors=4, exclude_ii)) nlist = query_result.toNeighborList()
The resulting object provides a persistent container for bond data.
Using
NeighborLists, our original example might instead look like this:
import numpy as np import freud L = 10 num_points = 100 points = np.random.rand(num_points)*L - L/2 box = freud.box.Box.cube(L) aq = freud.locality.AABBQuery(box, points) query_points = np.random.rand(num_points/10)*L - L/2 distances = [] # Here, we ask for the 4 nearest neighbors of each point in query_points. query_result = aq.query(query_points, dict(num_neighbors=4)): nlist = query_result.toNeighborList() for (i, j) in nlist: # Note that we have to wrap the bond vector before taking the norm; # this is the simplest way to compute distances in a periodic system. distances.append(np.linalg.norm(box.wrap(query_points[i] - points[j]))) avg_distance = np.mean(distances)
Note that in the above example we looped directly over the
nlist and recomputed distances.
However, the
query_result contained information about distances: here’s how we access that through the
nlist:
assert np.all(nlist.distances == distances)
The indices are also accessible through properties, or through a NumPy-like slicing interface:
assert np.all(nlist.query_point_indices == nlist[:, 0]) assert np.all(nlist.point_indices == nlist[:, 1])
Note that the
query_points are always in the first column, while the
points are in the second column.
freud.locality.NeighborList objects also store other properties; for instance, they may assign different weights to different bonds.
This feature can be used by, for example,
freud.order.Steinhardt, which is typically used for calculating Steinhardt order parameters, a standard tool for characterizing crystalline order.
When provided appropriately weighted neighbors, however, the class instead computes Minkowski structure metrics, which are much more sensitive measures that can differentiate a wider array of crystal structures. | https://freud.readthedocs.io/en/v2.5.0/gettingstarted/tutorial/neighborfinding.html | CC-MAIN-2021-21 | refinedweb | 1,465 | 54.93 |
ilogb, ilogbf, ilogbl - get integer exponent of a floating-point value
Current Version:
Linux Kernel - 3.80
Synopsis
#include <math.h>
int ilogb(double x);
ilogbf(), ilogbl():
int ilogb(double x);
int ilogbf(float x);
int ilogbl(long double x);
Link with -lm.
Feature Test Macro Requirements for glibc (see feature_test_macros(7)):
ilogb():
- _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 500 || _XOPEN_SOURCE && _XOPEN_SOURCE_EXTENDED || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
or cc -std=c99
ilogbf(), ilogbl():
- _BSD_SOURCE || _SVID_SOURCE || _XOPEN_SOURCE >= 600 || _ISOC99_SOURCE || _POSIX_C_SOURCE >= 200112L;
or cc -std=c99
Description
Return Value
See math_error(7) for information on how to determine whether an error has occurred when calling these functions.
The following errors can occur:
- Domain error: x is 0 or a NaN
- An invalid floating-point exception (FE_INVALID) is raised, and errno is set to EDOM (but see BUGS).
-
- Domain error: x is an infinity
- An invalid floating-point exception (FE_INVALID) is raised, and errno is set to EDOM (but see BUGS).
Attributes
Conforming To
C99, POSIX.1-2001.
Bugs
Before version 2.16, the following bugs existed in the glibc implementation of these functions:
- *
- The domain error case where x is 0 or a NaN did not cause errno to be set or (on some achitectures) raise a floating-point exception.
- *
- The domain error case where x is an infinity did not cause errno to be set or raise a floating-point exception.
See Also
log(3), logb(3), significand(3) Inspired by a page by Walter Harms created 2002-08-10 | https://community.spiceworks.com/linux/man/3/ilogbl | CC-MAIN-2019-09 | refinedweb | 244 | 51.07 |
Containers are redefining the way we build and operate reliable systems in the cloud by providing a way to wrap up an application in.
Windows Containers offer two different types of containers or runtimes – Windows Server Containers, which provides application isolation through process and namespace isolation technology, and Hyper-V Isolation, which expands on the isolation provided by Windows Server Containers by running each container in a highly optimized virtual machine. Running a container on Windows with or without Hyper-V Isolation is a runtime decision.
On Tuesday, December 5th, 2017 from 9:00 AM to 10:00 AM Pacific, we will host a one-hour AMA (Ask Microsoft Anything) focused on Windows Containers. In this free session, you’ll be able to ask Microsoft experts your questions about the new, graphical management solution for Windows Server.
Our goal for this AMA is to help you learn:
- What are containers?
- What types of Windows Containers are there?
- What is Docker?
- How do I containerize an app?
- What resources are available for developers and IT pros?
Learn more about Windows containers on Windows Server. Then join the conversation to interact directly with the people who built them. | https://blogs.technet.microsoft.com/windowsserver/2017/11/28/ask-microsoft-anything-windows-server-containers/ | CC-MAIN-2018-09 | refinedweb | 196 | 55.44 |
.
Hey, I am happily using capistrano to deploy my django projects right now, but thanks for this great introduction to Fabric anyways. It will probably come in handy at some time.
Really interesting. I had fabric in my scope but had not the opportunity so far to play with.
It encourages me to use it instead of thie perl script I must use and that nobody can maintain nor improve :-(
Did you face some limitations with fabric so far ?
I haven't really run into any limitations with Fabric, although it can be a bit confusing to get started since most tutorials are out of date (which is almost inevitable because the project is changing rapidly), and it relies heavily on magic. Much like Django eventually went through a magic removal stage, I think Fabric will eventually get less and less magical and more and more predictable as it ages.
Ok, thanks for your feedback.
Hey Will -- looks like you got picked up on Reddit, good job =)
I'd definitely love to get some of the magic out of Fabric once the feature set has settled down some.
As I was explaining to someone on the mailing list today, the main reason I can think of for the existing magic is to alter the namespace within Fabric commands (the user-defined functions) to allow access to the operations such as run(), sudo() and so forth, without needing to explicitly import them.
In Capistrano, which is Ruby, that sort of thing isn't even an issue because Ruby and Rails play so fast and loose with the namespacing and importing side of things, which is why it's well suited to DSLs. Python and the 'explicit over implicit' creed are naturally at odds with such an approach.
I can't say for now how exactly Fabric might try to become less magical, but again, I'm quite interested in trying to keep things Pythonic whenever possible :) so we'll see!
Do you also know about django-fabric? It's a custom command management plugin for django, so you can do 'manage.py fab'
It can be found at
Paul,
Although I'm all for the idea of integration, I'm not really sure what the benefit of using django-fabric is? Instead of typing
fab deployI would need to type
python manage.py fab deploy, along with needing to add the
fabapp to my
INSTALLED_APPLICATIONSsetting for my project.
Do you have any scenarios where django-fabric is a clear improvement over using Fabric normally?
Will,
I agree with you that django-fabric doesn't make any sense.
However, I'm liking the thought of having all manage.py commands as part of my fabfile.py so I can do everything with fabric, e.g.: "fab syncdb" etc. just like you did with your test target.
I'll see if I can figure out a way to have any unknown commands passed through to fabric to make the fabfile easier.
The design of fabric is broken from the start if you have to interpolate strings inside of commands that are going to be run via the shell.
As soon as "dir" has any shell meta-characters in it you are dead in the water.
I don't know why commands aren't built up and executed as lists the way they should be. execve() takes a list and the python language supports lists - it seems like a no-brainer to design an API that runs commands around lists to avoid this problem all together. But time and time again you get an API built around strings.
These are problems waiting to happen.
I migrated an existing deployment script to Fabric recently. Aside from the annoying namespace system the experience was pretty positive. One thing I couldn't figure out though was how to get the output from run() back into a python variable.. I might look into implementing this feature if it isn't part of the project at the moment.
You can simply write:
with the trunk version of Fabric, I am using it extensively and this is working a wonder.
I have found, that invoke command, repeated in the loop, actually runs only once. All following tries to invoke same function fail with warning "Skipping xxxxxx (already invoked).".
I looked into the fabric's sources, and found, this interesting docstring in the infoke function:
So, to make it works for different repositories, you need to use such code:
@@ python def git_pull(repo, parent, branch): "Updates the repository." config.repo = repo config.parent = parent config.branch = branch run("cd ~/git/$(repo)/; git pull $(parent) $(branch)")
def pull(): require('fab_hosts', provided_by=[production]) for repo, parent, branch in config.repos: invoke((git_pull, dict(repo=repo, parent=parent, branch=branch))) @@
Hey, I just stumbled on your page from delicious. I also use Fabric to deploy my Django applications and it's made things much easier. I wrote a post a few days ago about how I do it: Deploying a Site with Fabric and Mercurial.
Your post gave me some new ideas for my fabfiles like server rebooting and deploying a specific version. Thanks!
run() and sudo() returns the output of the command so you can do:
foo = run("ls -l")
But this does not work with local(), why is that?
I'm running into the same issue where I need to determine which war file to deploy so I'm trying to run:
warfile = local('ls -l *.war | head -1')
then I want to use the variable $(warfile) throughout my deploy script.
But currently, I can't get "local" to do that...
I really enjoy your posts!
It would be great if you could post an updated version of your fabfile for Fabric 0.9.
Reply to this entry | http://lethain.com/entry/2008/nov/04/deploying-django-with-fabric/ | crawl-002 | refinedweb | 966 | 70.84 |
[
]
Kihwal Lee updated HDFS-4080:
-----------------------------
Attachment: hdfs-4080.patch
> Add an option to disable block-level state change logging
> ---------------------------------------------------------
>
> Key: HDFS-4080
> URL:
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: name-node
> Affects Versions: 3.0.0, 2.0.3-alpha
> Reporter: Kihwal Lee
> Assignee: Kihwal Lee
> Attachments: hdfs-4080.patch
>
>
> Although the block-level logging in namenode is useful for debugging, it can add a significant
overhead to busy hdfs clusters since they are done while the namespace write lock is held.
One example is shown in HDFS-4075. In this example, the write lock was held for 5 minutes
while logging 11 million log messages for 5.5 million block invalidation events.
> It will be useful if we have an option to disable these block-level log messages and
keep other state change messages going. If others feel that they can turned into DEBUG (with
addition of isDebugEnabled() checks), that may also work too, but there might be people depending
on the messages.
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: | http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201210.mbox/%3C2044009247.39330.1351526413805.JavaMail.jiratomcat@arcas%3E | CC-MAIN-2016-26 | refinedweb | 190 | 55.54 |
Python Scripts as Splunk Data Inputs
Some pitfalls, and how to avoid them.
For any infrastructure of significant size, you’re going to have an overwhelming amount of data produced. This data can come from any number of sources: Log files, CPU/disk performance, traffic/app load, etc. Monitoring and making sense of this data is critical to maintaining a happy system, and it certainly isn’t a trivial task.
The solution
Our platform of choice for this task is Splunk. It gives us the ability to ingest large amounts of data from our machines, pulling it into a single platform where it is parsed and indexed. Once indexed, Splunk allows us to intelligently search, analyze, and visualize the data using the tools provided by the platform. Splunk also gives us the ability to create alerts based on certain thresholds or trends in the data, which is critical for staying on top of metrics and for spotting issues as soon as (or even before) they occur.
External scripts as data inputs
One of the features that Splunk touts is the ability to pull in data from an external script, which essentially gives you the capability to generate data from any arbitrary source. There are some pitfalls, however, when using external scripts as data inputs, and Splunk was less than helpful in tracking down where I was going wrong.
A real-life example
For this example we will be using the Dark Sky API, which powers the Forecast.io weather website. You will need an API key, but it’s freely-available from the Forecast.io developer website.
We’ll also be using the Forecast.io Python library, via pip:
pip install python-forecastio
I’ll assume that the reader is familiar with how to set up and use virtual environments in Python, and therefore will not go into any more detail about installing or using Python packages.
Step 1: Create a Python script
We’re going to add this script to the general Splunk scripts directory (not associated with any particular app). This directory is located at
$SPLUNK_HOME/bin/scripts/
Create and edit a file with the name
forecast.py in the above directory, and ensure that it has the following contents (be sure to fill in your API key):
#!/usr/bin/env python
import datetime
import forecastio
import json
# Lat/Long for Alcatraz.
lat = 37.8267
lng = -122.423
api_key = "[YOUR API KEY HERE]"
data = forecastio.load_forecast(api_key, lat, lng).currently().d
data['datetime'] = datetime.datetime.now().isoformat()
print json.dumps(data, indent=4)
This script uses your API key and the supplied lat/long coordinates to retrieve a forecast for the given location. Splunk will read anything our script outputs to stdout, so we simply use the
Note that our Python script also adds a
datetime field in ISO 8601 format to the JSON payload; this will make it easier for Splunk to index our returned data with the correct time.
Problem #1: Splunk’s bundled Python interpreter
Here’s where we run into our first issue. If we were to simply add this script as a Data Input in Splunk, it would fail to run when called. This is because Splunk will use its own internally-bundled Python interpreter and libraries when running the script.
We will need to create a wrapper script that will call our Python script, using the correct interpreter and paths. In addition, we will need to unset the
$LD_LIBRARY_PATH environment variable, or else your Python instance will still be using Splunk’s bundled libraries.
Create and edit a file by the name of
wrapper.sh in the same place you created
forecast.py, and ensure that it has the following contents (fill in your dist-packages and Python interpreter path):
#!/bin/bash
unset LD_LIBRARY_PATH
PYTHONPATH=[YOUR DIST-PACKAGES PATH] [YOUR PYTHON INTERPRETER] $SPLUNK_HOME/splunk/bin/scripts/forecast.py
Takeaway #1: Unless your Python script is so basic that it does not need anything outside of the standard library, use a wrapper script.
Problem #2: External script ownership
Splunk Enterprise will run scripts and jobs as whichever system user is the owner of
$SPLUNK_HOME. I initially noticed that the
splunkd service was running as the
root user, and mistakenly assumed that my scripts must also be owned by root.
To compound my troubles, Splunk did not give me any insight into the failure when attempting to run a script as the wrong user; there was nothing in the logs or job results to alert me to this fact. The data input simply failed to produce any events, and Splunk was silent on any failed attempts to run the script. It was purely a guess on my part about script ownership which led me to discover this.
Once I was able to track this down, I searched high and low through the Splunk config files and settings menus, trying to figure out where Splunk had been told to run scripts and jobs as a particular user. With no luck, I turned to Google. Only then was I able to find the answer, buried in a documentation article on Splunk Enterprise Installation.
Takeaway #2: Be sure that your script is owned by the same system user that owns the
$SPLUNK_HOMEdirectory.
Step 2: Add your script as a data input in Splunk
Okay, so we’ve got our shiny new Python script, and we’ve got our handy-dandy wrapper script ready to go. Let’s create a new Data Input in Splunk for our script.
As the administrator, go to Settings and click on Data Inputs:
There are two types of inputs: Local Inputs and Forwarded Inputs. In this example, we are going to add a script to the server running the Splunk indexer, so we’ll click on Script in the Local Inputs section.
Click the New button to add a new data input. You will be presented with the following screen:
Make sure to select
$SPLUNK_HOME/bin/scripts as the script path, and pick our
wrapper.sh script as the script to be run. You can ignore the Command and Source name override fields.
The default time interval is 60 seconds. This means that Splunk will attempt to run your script every 60 seconds, parsing and indexing any data returned from the script.
Click Next to display the screen where we will configure the input type:
Click on Select Sourcetype, highlight Structured, and choose _json, since we are outputting JSON from our Python script.
If you have a specific app that you would like to associate this data with, or a custom index to store the data in, choose the appropriate values on this page. Otherwise, you can leave the rest of the options at their default values.
Click the Review button to continue to the next page. If everything looks correct, click the Submit button.
Step 3: Profit!
That’s it! Splunk should now begin running your script and indexing data from our script! You can search for all events from your new Data Input by using the following Splunk search query:
source="[PATH]/bin/scripts/wrapper.sh"
Replace
[PATH] in the above search query with whatever the value of
$SPLUNK_HOME is for you. | https://medium.com/@wahoowilky/python-scripts-as-splunk-data-inputs-5b6a9273eb70 | CC-MAIN-2018-34 | refinedweb | 1,202 | 70.63 |
Getting a Basic React App Up And Running
Stephen Charles Weiss
Originally published at
stephencharlesweiss.com
on
・14 min read
At this point, I feel fairly comfortable with React, but when I had to go back to the basics and get an app up and running this weekend, I found I’d forgotten more than I thought.
Since I’m stubborn (stupid?) and didn’t want to use
npx create-react-app to bootstrap, I had to look up a few things.1 Below are my notes on what I learned when it comes to getting a basic React app up and running.
A quick preview on what you can expect to learn by reading on:
- How React can fit within a larger website (i.e. how to blend HTML with React)
- How to fit multiple React components (which could be expanded into full fledged features in their own right)
- How to bundle React using Webpack and Babel
Adding React To A Website
The React team has a great page on getting React into an existing website quickly.2 Unfortunately, in my case, I had nothing going, so I needed to start even farther upstream than that.
Let’s start with the absolute basics:
- Make a directory for your project,
mkdir <the-name-of-my-project>
- Navigate into it,
cd <the-name-of-my-project>
- Initialize the repo with
gitand
npm(
git initand
npm init).
-
Scaffold a basic app structure with some files and folders. Here’s what mine looked like
. ├── .gitignore ├── .prettierrc ├── dist │ └── index.html ├── package-lock.json ├── package.json ├── src │ └── index.js └── webpack.config.js
Setting up the HTML
At a really basic level, React works by overwriting a single element in the DOM. The convention is that this is done by having an empty
<div> element with an
id=“app” that React-DOM will be able to identify and overwrite.
I deviated ever-so-slightly for purposes of explicitness (which will become more clear when I add a second React component later). This is my first
dist/index.html
<!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <meta http- <title>Toast-Demo</title> </head> <body> <div id="React-App"></div> <script src="bundle.js"></script> </body> </html>
With our HTML ready, we now need an actual React component.
(We’ll also come back to the
<script> tag.)
Our First React Component
This is what I put into
src/index.js
import ReactDOM from ‘react-dom’; import React from ‘react’; const HelloWorld = () => { return ( <div> Hello world! </div> ) }; ReactDOM.render( <HelloWorld/>, document.getElementById(‘React-App’) )
From this, it’s easy to see how ReactDOM renders the
HelloWorld component — it replaces what’s in the document (
index.html) at the location of the Id,
’React-App’.
If at this point, we tried to open the
index.html in our browser, we’d see a blank screen. This is because even though React replaced the
div in the DOM, it can’t be interpreted.
We need to build our app and create the bundle.
Using Webpack and Babel To Bundle Our App
Babel is a Javascript compiler — an application that converts code written in future versions of Javascript and translates it down to browser compatible versions.3 A few of the ways Babel can help are highlighted on the first page of their Docs:
- Transform syntax
- Polyfill features that are missing in your target environment (through @babel/polyfill )
- Source code transformations (codemods)
- And more! (check out these videos for inspiration)
This is accomplished through a variety of plugins and ladders, but what should be clear is that it’s both very easy to setup and very powerful.
Webpack uses Babel (in our case) to coordinate the whole process and create a bundle by using it as a loader and specifying certain options. Another convention (similar to
id=“app” for React) is to call the output of Webpack
bundle. You can name it whatever you want and specify it within the webpack configurations. It should also be noted that Webpack is much more powerful than what I’m demo-ing here which is meant only to to illustrate how to compile Javascript and JSX files for use in our demo app.
In the root directory, our
webpack.config.js file has the following setup:
const path = require(‘path’) module.exports = { entry: ‘./src/index.js’, output: { filename: ‘bundle.js’, path: path.resolve(__dirname, ‘dist’) }, module: { rules: [ { test: [/\.js$/, /\.jsx?$/], exclude: /node_modules/, loader: 'babel-loader’, options: { presets: [‘@babel/env’, ‘@babel/react’,] } }, ], } }
Things to note:
- Entry point - this is what Webpack is looking to bundle
- Output - this is where the product of that bundling process will go (and you can see we’ve named int
bundle.js).
- Modules - these are the tools to use in the effort of bundling
The way I’ve set this up to name the presets within the options of the
webpack.config.js means that I do not need a
.bablerc file4
Dependencies
We’re using quite a few dependencies here, so it’s worth looking at the
package.json
{ "name": "react-playground", "version": "0.0.1", "description": "a playground to understand react, webpack, and babel", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "build": "webpack", }, "keywords": [ "react" ], "author": "Stephen Weiss <stephen.c.weiss@gmail.com>", "license": "MIT", "devDependencies": { "@babel/core": "^7.5.5", "@babel/preset-env": "^7.5.5", "@babel/preset-react": "^7.0.0", "@babel/preset-typescript": "^7.3.3", "babel-loader": "^8.0.6", "prettier": "^1.18.2", "webpack": "^4.39.3", "webpack-cli": "^3.3.7" }, "dependencies": { "react": "^16.9.0", "react-dom": "^16.9.0" } }
Launching The App
Now that the app is configured, we have a React Component, and we’ve set up our Webpack, we’re ready to build.
In the shell, run our script
npm run build (
npx webpack —config webpack.config.js also works if you don’t want to install
webpack as a dependency).
Once that’s done, you should see a new file,
dist/bundle.js.
And now, when you open / refresh your application in the browser, it should display our
HelloWorld component.
I promised I’d come back to
<script> tag: This is the only reason that the app loads. Without it, we’d have a bundle of Javascript, but nothing invoking it. As a result, even though we’ve compiled our app, the client would never have a reason to call it and so would not display our React app.
Adding A Second React Component
To add a second React component and blend that into an existing website, we need to make a few changes:
- Update our
srcdirectory to include a second React component (both the first React component and second could be extended significantly, this is just a simple example)
- Update
webpack.config.jsto have multiple entry points
- Update our
dist/index.htmlto note where the different React components should go.
Part Deux: A New React Component
In the
src directory, I added an
index2.js (not a great name, but it’ll do):
import ReactDOM from ‘react-dom’; import React from ‘react’; const PartDeux = () => { return ( <div> PartDeux </div> ) }; ReactDOM.render( <PartDeux/>, document.getElementById(‘React-App-2’) )
It’s another very simple React component that will mount to the
div with the id
React-App-2 in our
index.html.
Modifying Webpack
The
webpack.config.js file remains large the same with the exception of the
entry key:
const path = require(‘path’) module.exports = { entry: [‘./src/index.js’, ‘./src/index2.js’,], ... }
Modifying the HTML
Finally, update the HTML to indicate where the second component will go:
<!DOCTYPE html> <html lang=“en”> <head> <meta charset=“UTF-8”> <meta name=“viewport” content=“width=device-width, initial-scale=1.0”> <meta http-equiv=“X-UA-Compatible” content=“ie=edge”> <title>React-Demo</title> </head> <body> <h1> Here’s my first react entry point </h1><div id=“React-App”></div> <h1>Here’s my second react entry point</h1> <div id=“React-App-2”></div> <script src=“bundle.js”></script> </body> </html>
Rebundle and Run
Running webpack again and opening up our
index.html in our browser, I now see:
Voilá
Conclusion
Hopefully this demo helps explain how React can mount to the DOM, how to use multiple different React applications within one website and how to orchestrate it all with Webpack and Babel. I know I learned a ton through the process!
This full code for this demo can be found on my Github.5
Footnotes
- 1 Create a New React App | React
- 2 Add React to a Website | React
- 3 What is Babel? | Babel
- 4 Configure Babel | Babel
- 5 react-demo | GitHub
Resources / Additional Reading
Is it possible to get relevant industry experience on your own (not through working at a company)?
This is an anonymous post sent in by a member who does not want their name disclo...
| https://dev.to/stephencweiss/getting-a-basic-react-app-up-and-running-28im | CC-MAIN-2019-43 | refinedweb | 1,488 | 56.25 |
Why do people use Python?
- Software quality (readable, reusable and maintainable)
- Developer productivity (is typically one-third to one-fifth the size of equivalent C++ or Java code)
- Program portability (Most Python programs run unchanged on all major computer platforms)
- Support libraries (Python comes with a large collection of prebuilt and portable functionality)
- Component integration (Python scripts can easily communicate with other parts of an application)
- Enjoyment (Easy to built-in toolset)
Is Python a “Scripting Language”
Python is a general-purspose programming language that is often applied in scripting roles. It is commonly defined as an object-oriented scripting language- a definition that blends support for OOP with and overall orientation toward scripting roles.
What are Python’s technical strengths?
- It’s Object-Oriented and Functional
- It’s free (To use and distribute)
- It’s open source
- It’s portable
- It’s powerful
- It has a large collection of precoded library tools
- It’s mixable (Ex. Python’s C API lets C programs call and be called by Python)
First Script
import sys print(sys.platform) print(2 ** 100) x = 'spam!' print(8 * x)
In this code we are:
- raise 2 to a 100 power
- Uses x to string repetition
To run this script type in your terminal:
python script1.py
Module’s files
To exemplify this create the folloing file:
myfile.py and add this line:
title='The Meaning of Life'
Then write a new file called:
use_myfile.py and add this line:
import myfile print('The secret is: ' + myfile.title)
Finally run your
use_myfile.py:
python use_myfile.py
When this file is imported, its code is run to generate the module’s attribute. That is, the assignment statement creates a variable and module attribute named title.
Return to the main article | https://josdem.io/techtalk/python/ | CC-MAIN-2022-33 | refinedweb | 294 | 54.63 |
Here is a patch to the libc 5.2.18 to hopefully fix the NIS storms. I could not test it since the debian source package for the libc does not build its targets. Anyone know how to get the debian source package to build? "debian.rules binary" or "debian.rules" both fail without ever reaching the initgroups.c file The patch has to be applied to the directory libc-5.2.18/grp --- initgroups.c.orig Wed May 31 22:08:15 1995 +++ initgroups.c Thu Sep 26 12:10:32 1996 @@ -82,9 +82,10 @@ } else if (0 == strcmp(g->gr_name, "+")) { - ypmode = 1; - g = __nis_getgrent(1, info); - if (NULL == g) +# Disable exhaustive NIS searches +# ypmode = 1; +# g = __nis_getgrent(1, info); +# if (NULL == g) break; } #endif /* YP */ Consequences of this patch: - You loose NO functionality. I have never gotten /etc/groups to work across an NIS network. Miquel confirmed a while ago that the NIS stuff for /etc/group does not work. - The users still have the group they are assigned to in /etc/passwd (or the NIS map) - For The +:: stuff at the end of the /etc/group nothing will be done. - You can still manually include a group with +group - Exhaustive NIS searches are not performed anymore. I reviewed the NIS code for /etc/passwd as well. The library contains an extensive caching mechanism for those. Reviewing what I just wrote: You might simply have the same effect by removing the +:: line from etc groups .... <G> {}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{} {} | https://lists.debian.org/debian-devel/1996/09/msg00740.html | CC-MAIN-2016-30 | refinedweb | 249 | 74.29 |
Converting a manually switched washing machine into an Arduino-powered smart washer.
Last night I was finishing some washing and thought about ways to make this thing smarter.
Maybe adding a flow meter to the pump, so that when the drain cycle has finished and there is no more water being pumped out, the spin cycle ends.
Using something like this.
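The post doesn't name the exact meter, but assuming a common hall-effect type like the YF-S201 (roughly 450 pulses per litre, so the pulse rate tracks flow), the end-of-drain logic could be as simple as watching the pulse rate fall to nothing and stay there. A rough sketch, with the per-second pulse count passed in so it can be tried off the board:

```cpp
#include <cstdint>

// Assumed meter: a YF-S201-style hall sensor (~450 pulses per litre).
// Each second we note how many pulses arrived; once the rate stays below
// a small threshold for a few seconds in a row, the pump has nothing
// left to move and the drain phase can be declared finished.
class DrainDetector {
public:
    DrainDetector(uint32_t minPulsesPerSec = 10, uint32_t quietSecsNeeded = 5)
        : minPulses_(minPulsesPerSec), needed_(quietSecsNeeded), quiet_(0) {}

    // Call once per second with that second's pulse count.
    // Returns true once the drain is considered complete.
    bool update(uint32_t pulsesThisSecond) {
        if (pulsesThisSecond < minPulses_)
            ++quiet_;
        else
            quiet_ = 0;  // water still flowing, start the count over
        return quiet_ >= needed_;
    }

private:
    uint32_t minPulses_;  // below this rate, treat flow as stopped
    uint32_t needed_;     // consecutive quiet seconds required
    uint32_t quiet_;
};
```

On the Arduino the per-second count would come from a pin-change interrupt; requiring several quiet seconds in a row stops a slug of air or froth from ending the cycle early.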
The other thought was to add an actual water level sensor. There is a tube at the base of the tub that goes up to the existing sensor/switch: you dial up the water level, and when that level is reached the switch triggers. I was contemplating adding one of these instead:
MPX5010GP
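Going from the datasheet as I recall it, the MPX5010 is a 0 to 10 kPa gauge pressure sensor with a transfer function of roughly Vout = Vs × (0.09 × P + 0.04), and 1 kPa is about 102 mm of water column, so the ADC reading maps straight to tub depth. A sketch of the conversion, assuming the Nano's 10-bit ADC with a 5.0 V reference:

```cpp
// MPX5010 transfer function (per the datasheet): Vout = Vs * (0.09 * P + 0.04),
// with P in kPa over a 0-10 kPa span.  1 kPa of gauge pressure is about
// 102 mm of water column, so an ADC count maps straight to tub depth.
// Assumes a 10-bit ADC with a 5.0 V reference, as on the Nano.
double waterDepthMm(int adcReading, double vs = 5.0, int adcMax = 1023) {
    double vout = vs * adcReading / adcMax;   // ADC counts -> volts
    double kpa  = (vout / vs - 0.04) / 0.09;  // invert the transfer function
    if (kpa < 0) kpa = 0;                     // clamp offset/noise below zero
    return kpa * 101.97;                      // kPa -> mm of water
}
```

With that in place, "fill to level" becomes a simple threshold compare on waterDepthMm() instead of the mechanical dial switch.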
Only other thing that would be cool is a sensor to work out just how much washing is in the machine, but I can't really see any way to do this at the moment. It can't really be based on weight, as clothes have different weights, and an optical sensor would not really work either: things like sheets tend to sit a little higher when you put them in, but then compress more once they are wet.
This is the MOV that was across the relay contacts.
Looks like it got a little toasty.
Still suffering from a bit of noise or some form of instability; it's similar to what I was finding when using the Mini Pro to drive a stepper. I checked the PSU and that is nice and stable, so when in doubt, fall back to what worked to solve the stepper issue: swap out the Mini Pro for a Nano.
I suspect there is something in the Mini Pro design that makes it a little more susceptible to interference, but I guess after a few more cycles we will soon see.
The only other issue it could be is that one of the salvaged caps I used for the R/C circuit has decided to start failing. When I finalise things I may add a few MOVs to the mix.
6 loads of washing and it is running as expected without any issues, I have even had the guts to leave it plugged in with the power on for 2 days now.
Now its time to pretty things up and start working on some additional features and more advanced code with a few safety features and maybe hook up the lid switch again
Given most of the routines are time based I have decided to make use of probably one of the best librarys i have found. Chrono as it has all the features required and the developers are prety quick to respond to any issues.
LOL: And as luck would have it ,after completing 6 loads of washing and hanging it out to dry it started to rain :( given the urgent need for some clothes I decided to use the dryer and to my dismay that stopped working. I can see another hack coming in the near future but for now the problem was the tensioner for the drum belt had work out at the door clip had broken. Thankfully I have a 3D printer on hand so printing up a new clip and a new tensioner was a simple process.
Sure enough the snubber seems to sort out most of the issues however the snubber for the spin cycle needed to be connected in parallel across the motor and not across the relay as the actuator seems to use an asynchronous motor and is really low current. Putting the snubber across the relay was causing the R/C circut to feed enough power to run the motor even when it was turned off.
I could adjust the values however did not have the components here to do that.
It seems that the water valve solinoids do not need snubbers in place.
Now to test the machine on a full wash cycle then back to making a more advanced version of the code with a few additional features like gentle wash and an extended spin rinse cycle as firing water in to machine while it is spinning seems to cause the clothes to rinse out a lot better than just fill, agitate and spin.
Now for some code, I will add the final version later as there are a pile of additional timer routines but for now I am testing the stability of the system. There seems to still be some RFI issues when the spin cycle ends and the actuator drops the brake but I also ran out of parts to make a snubber for the actuator so this fault should go away once the additional snubber is n place.
Yes i know its full of while loops but I wanted this to be as simple as possible for testing purposes.
#include <Bounce2.h> // Constants for Control Pins const int LedPin = 13; // LED Pin // Washingmachine Sensor Pins const int WaterLevelPin = 2; // N/C Switch to sense the water level If it goes open then stop everyting. const int LindSwitchPin = 3; // Goes OPEN circut if the lit is opend. E-STOP const int SpinSensorPin = 12; // When the spin locking pin is in place this is short // IN# are the relay inputs for the Duinotech 8 Channel Board const int CWPin = 4; // IN1 - Clockwise Pin const int CCWPin = 5; // IN2 - Counter Clockwise / FAST SPIN Pin. const int SpinActuatorPin = 6; // IN3 - Pull the locking pin in for the spin cycle const int ColdWaterValve = 7; // IN4 - Cold Water Valve const int HotWaterValve = 8; // IN5 - Cold Water Valve const int WaterPumpPin = 9; // IN6 - Water Pump // Control Pannel Pins const int StartPin = 10; // Restart After and Error const int RestartPin = 11; // Restart After and Error // Control Variables int ledState = LOW; // ledState used to set the LED int CWState = LOW; // Clockwise Relay State int CCWState = LOW; // Counter Clockwise Relay State int SpinState = LOW; // Spint Actuator State int ColdState = LOW; // Cold Water Valve State int HotState = LOW; // Hot Water Valve State int PumpState = LOW; // Water Pump State int incomingByte = 0; int MachineCycle = 0; int WaterLevel = 0; int SpinEnabled = 0; long AgitationCounter = 0; long BallanceCounter = 0; long RinseCounter = 0; long SpinCounter = 0; long EmptyWater = 0; long SoakCounter = 0; // Instantiate a Bounce object Bounce debouncer = Bounce(); void setup() { Serial.begin(9600); // Serial Communications for Debuging pinMode(StartPin,INPUT_PULLUP); // Pin to use for the Start Button debouncer.attach(StartPin); // After setting up the button, setup the Bounce instance : debouncer.interval(40); // interval in ms // set the digital pin as output: pinMode(LedPin, OUTPUT); // Onboard LED on pin 13 pinMode(WaterLevelPin, INPUT_PULLUP); // Enable Pullup for WaterLevel Sensor 
pinMode(LindSwitchPin, INPUT_PULLUP); // Enable Pullup for Lid Switch pinMode(CWPin, OUTPUT); // Clockwise Motor Control pinMode(CCWPin, OUTPUT); // Counter Closkwise Motor Control pinMode(SpinActuatorPin, OUTPUT); // Spin Actuator Solinoid pinMode(ColdWaterValve, OUTPUT); // Cold Water Valve pinMode(HotWaterValve, OUTPUT); // Hot Water Valve pinMode(WaterPumpPin, OUTPUT); // Water Pump pinMode(StartPin, INPUT_PULLUP); // Start Button pinMode(RestartPin, INPUT_PULLUP); // Restart/Resume after Error pinMode(SpinSensorPin, INPUT_PULLUP); // Enable Pullup for Spin Sensor } void loop() { debouncer.update(); // Update the Bounce instance : int value = debouncer.read(); // Get the updated value : if ( value == LOW ) { Serial.println("Washing Cycle Started"); MachineCycle = 1; // If the button is pushed start the machine } // send data only when you receive data: if (Serial.available() > 0) { incomingByte = Serial.read(); // read the incoming byte: Serial.print("I received: "); // say what you got: Serial.println(incomingByte, DEC); } if(incomingByte == 48){ Serial.println("Stop Machine"); MachineCycle = 0; } // If we receive a 1 run this if(incomingByte == 49){ Serial.println("Start Machiner"); MachineCycle = 1; } if(MachineCycle == 1){ Serial.println("Filling Machine with Cold Water for Wash Cycle"); while(digitalRead(WaterLevelPin) != 1){ //Fill the machine digitalWrite(LedPin, HIGH); //Start Filing digitalWrite(ColdWaterValve, HIGH); //turn...Read more »
After writing some code and making the thing work then connecting the motors things were not working as I would have expected. The arduino started to lock up and do some really weird things and the USB would drop out for no reason.
I re-wrote the code several times thinking that it was something I had done then made s really simple for loop that did the same thing.
It turns out that the motors when running on 240V and connected to the relay was creating some RFI that was getting picked up by the arduino's clock causing it to lock up and do some weird things.
So I decided to lineament a snubber circuit made up from a .47UF cap and a 1W 47ohm resistor connected across the relay points. in theory it should be connected in series to the motor but this was not working correctly and made little difference
Why they don't build this on the the duinotech board in the first place is any ones guess as most people using this board would suffer from the same problem.
The first step was to start tracing all the wires and rip out the old controller.
Essentially the motor in this one is a 3 wire AC motor, this uses a starter cap and a common neutral. One winding causes the motor to rotate CW the other CCW.
There is a kind of actuator that pulls in the a locking pin to engage the spin cycle when the motor is run CW.
Care must be taken with this as unlike most things that have been wired up form some reason the simpson 711 wires are all labled BUT one end of the wire will say one thing then when it enters the connector it will say something else the at the termination point it will be labeled somehting else again.
I decided to base my wiring on the plug labels.
A1 = Blue = Common Neutral wire for Motor, Spin Actuator and Water Pump.
A2 = White = Starter Capacitor and CCW Motor
A4 = Yellow = Spin Actuator
A5 = Red = Capacitor and CW Motor
A6 = Black = Water Pump
A7 = Purple = Spin Engaged Switch
A9 = Red = Spin Engaged Switch
White CV wire = Cold Water Valve
Black HV wire = Hot water valve
Purple = COM = Common Water Level Sensor
Black = NC = Normally Closed Water Level Sensor
Brown = NO = Normally Open Water Level Sensor
Inside there is also a microswitch for the lid however for the moment I have no bothered connecting this.
The washing machine has now been retired :( After a reasonable amount of service it decided to spring a leak and its time to strip it down and use the motor and other bits for another project.
I have SO been here. As a single parent and then sole carer for the last 20 years I have killed several washing machines. I replace brushes in the motor and belts but anything else costs more than an engineer to replace... Grrrr... ;-) So I have taken to rescuing my friends' machines too. I hate things designed to fall apart and electively avoid them but white goods manufacturers, oh, boy do they have me.
This, I never thought of doing. When my current machine dies, its getting a Boris Brain... Epic! Thanks for posting dude.
After my last two green washers I am tempted to go buy as dumb a washer as I can find and do what you did. I want some things washed in hot water and rinsed in hot water or at least warm as well.
I would also like to stop the sound of grinding boulders in the gear box and get it to spin long enough to get the water out of the clothes instead of saving electricity and water a leaving dirty clothes.
@Gordon Couger I hear you on that note. there were more than a few times I wanted a cycle a little different to the norm. A really long soak cycle would be nice as well.
setting things up this way seem to open up all kinds of options.
The temptation to actually build a custom control board with the 6 required relays or even sold state switches is prety high along with the required protection for the EMF etc as that seems to have been the most problematic.
Hmm, Maybe even have a CF slot in it so you can run a specific wash program something similar to sending GGODE to the CNC machine.
Ok, I can see this project is going to get away on me.
Optoisolate the solid state motor relays and solenoid valves then read the cycle times from a file on a SD card or WiFi them in with EMS8266. With the AC optoisolated from the board you can run the computer on a battery and insulate it from the washer and not worry about any tramp voltage. You need some kind of interlock to keep from turning on two things at once.
I worked on a CAN network that had 77 Volts AC of tramp voltage on the dialectical ground that got MUNGED during a remodel. I found out one day when I sweated though my shirt to actual ground and got bit. The net works and 6 computer worked fine with tramp voltage all over them.
The old washer were just a sequencing switch.
Hi,
great idea but it could be really hard to implement the correct sequence.
Some time ago my girlfriend and I had the idea to buy an additional washing machine for her textile-dyeing hobby where you need to wash with cold water. Since most machines do not have such a program I though it could be quite easy to implement a simple washing sequence using an Arduino.
My last project (not posted it yet) was to replace a broken mechanical relay box of a waste oil burner with an Arduino - meanwhile it's working good and I'm still optimizing the software.
By the way I would strongly recommend to use an Arduino UNO or even a larger version with more pins and to add a LCD/Button shield to have the possibility to implement some kind of an interface. I guess you have plenty of space inside the washing machine and so you don't have to save space by using the "Mini" version. In most projects you really need more pins than expected.
I still have plenty of pins for an LCD and buttons :)
I used the Min Pro because i have a pile of them here for other projects.
The goal was to make this as simple as possible, there is one button you press to initiate and one dial you use to set the water level. hose full of boys there is no such thing as delicate wash :)
Timing has been a fun thing to look at, most washes agitate for 8 to 15 minutes.
the empty cycle can me calculated based on the within reason as can the spin time.
I added a spin rinse cycle that works better than I had expected, when the spin is up to speed water is jetted in to the machine after the wash cycle and it really pushes out any excess soapy water from the clothes.
then its a quick rinse cycle with agitate and its all done.
I may put a few LED's on so someone can see the status of the cycles but in all honesty the idea was to keep in as simple as possible.
I love the whole idea, totally custom wash cycles.
I'm just now getting into Arduino, but I think I'm completely hooked!
Now I just have to get my girlfriend onboard! LOL | https://hackaday.io/project/20224-washing-machine-conversion | CC-MAIN-2019-39 | refinedweb | 2,574 | 63.63 |
Programming Language Concepts Using C and C++/Object-Based Programming
Speech, basically an activity that involves sharing a picture of countless hues with others, is successful only when the parties involved come up with similar, if not identical, depictions of a thought, an experience, or a design. Success is a possibility when the following criteria are met:
- Parties involved share a common medium,
- This common medium supports relevant concepts.
Absence or lack of these criteria will turn communication into a nightmarish mime performance. Imagine two people, with no common medium between them, trying to communicate with each other. Too much room for ambiguity, isn’t it? As a matter of fact, even when the two parties speak the same language their life views, read it “paradigms”, may make communication an unbearable exercise.
A concept, abstract or concrete, does not have any corresponding representation in the language if it doesn’t have room in the imagination of its speakers. For instance, Arabic speakers use the same word for ice and snow, while Eskimos have tens of words for snow. This cannot be used as a proof of intellectual incapacity, however: roles are reversed when it comes to depicting qualities of a camel.
The last deficiency is dealt with in two ways: a new word, probably related to an already existing one, is introduced; or an idiom is invented to express the inexpressible. The former seems like the better choice, since the latter is open to misinterpretation and therefore leads to ambiguity, which brings us back to square one. It will also blur the new concept's relation with other concepts and turn the vocabulary of a language into a forest of branchless trees.
So, what with programming languages? Programming languages, like natural languages, are meant to be used for communicating with others: machine languages are used to communicate with a specific processor; high-level languages are used to express our solution to fellow programmers; specification languages relay analysis/ design decisions to other analysts/designers.
Had relaying our ideas to her/his majesty, the computer, been our one and only goal, providing a solution to a problem would not have been such a difficult task. However, we have another goal, which is worthier of our intellectual efforts: explaining our solution to other human beings. Achieving this more elusive goal requires adoption of a disciplined approach and a collection of high-level concepts. The former is useful in analyzing the problem at hand and providing a design for its solution. It enables us to more easily spot recurring patterns and apply already-tested solutions to sub-problems. The latter is the vocabulary you use to express yourself. Using idioms instead of language constructs for this purpose is a potential for misunderstanding and a barrier erected between the newcomer and the happy few.
On the other hand, assimilation of idioms will not only let you speak the language but also make you one of the native speakers. Speaking a foreign language is now changed from a dull exercise of applying grammar rules into an intellectual journey in the mindscapes of others. This journey, if not cut short, generally reveals more about the concept the idiom is a substitute for; it helps you build [in a bottom-up fashion] a web of interrelated concepts. Next time you take the journey signposts you erected before will help you more easily find your way.
So, which programming language(s) should we learn? If it is your 10-year-old cousin who asked this question, it wouldn’t be an end to the universe if (s)he started with MS Visual Basic or some other Web-based scripting language; if it is a will-be professional who will earn her/his living by writing programs, (s)he had better care more about concepts than the syntax of certain programming languages. What is crucial in such a case is the ability to build a foundation of concepts, not a collection of random buzzwords. For this reason C/C++, with their idiomatic nature, will be the primary tools for taking our journey into programming language concepts.
Module
Programming (or software production) can be seen as an activity of translating a problem (expressed in a declarative language) into a solution (expressed in machine code of a computer). Some phases in this translation are carried out by automatic code generating translators such as compilers and assemblers, while some are performed by human agents.[1]
Key to easing this translation process, apart from shifting the burden to code generators, is matching the concepts in the source language with those in the target language. Failing to do so means extra effort is required and leads to using ad hoc techniques and/or idioms. Such an approach, while good enough to relay the solution to the computer, is not ideal for explaining your intentions to fellow programmers. Accepting the validity of this observation won’t help you in some situations, however. In such a case, you should strive to adopt an idiomatic approach instead of using an ad hoc technique. And this is exactly what we will try to achieve in this handout: We will establish a semi-formal technique to simulate the notion of objects in C. If successful, such an approach will ease the transition from an object-based initial model (typically, a logical design specification) to a procedural model (C program).
To achieve our goal, we will adopt a technique familiar to us from the last two sections of the Programming Level Structures chapter: simulation of a concept using lower-level ones. Not having a class or module concept in C, we will use an operating system concept: file. To see it in action, read on!
Interface
Sticking to a widely adopted convention, contents of a header file, or parts of it, are put inside an #ifndef-#endif pair. This avoids multiple inclusion of the file. The first time the file is processed by the preprocessor, COMPLEX_H is undefined and everything in between the #ifndef and #endif directives will be included. The next time the header file is included while preprocessing the same source file, probably by some other included file, COMPLEX_H will have already been defined and all the contents in between the #ifndef-#endif pair are skipped.
Following the COMPLEX_H macro, General.h is included to bring in the macro definition for BOOL.
#ifndef COMPLEX_H
#define COMPLEX_H

#include "General.h"
The following is a so-called forward declaration. We declare our intention of using a data type named struct _COMPLEX but betray no details about its structure. We fill in the details by providing the definition in the implementation file, Complex.c. Users of the Complex data type need not know the details of the implementation. Such an approach gives the implementer the flexibility of changing the underlying data structures without breaking the existing client programs.
The reason why we defer the definition to the implementation file is the lack of access specifiers (e.g., private, protected, public) in C as we have in object-oriented programming languages. This forces us to keep everything that should not be accessed by the user as a secret.
struct _COMPLEX;
Observe that Complex is defined to be a pointer. This complies with the rule that an interface should include things that don’t change. And regardless of the representation of a complex number, which can be changed at the whim of the implementer, the memory layout of the pointer to this representation will never change. Hence we use Complex in the function prototypes rather than struct _COMPLEX.
Note the distinction between interface and implementation is reinforced by sticking to conventions, not by some language rule checked by the C compiler. We could lump the interface and implementation into a single file and the compiler would not complain a bit.
typedef struct _COMPLEX* Complex;
All of the following prototypes (function declarations) are qualified with the extern keyword. This means that their implementations (function definitions) may appear in a different file, which in this case is the corresponding implementation file. This makes exporting functions possible: All files including the current header file will be able to use these functions without providing any implementations for them. Seen in this perspective, the following list of prototypes can be regarded as an interface to an underlying object claiming to provide an implementation.
Definition: An interface is an abstract protocol for communicating with an object.
All extern functions are exported (that is, they are made visible to other files that are used to build the executable) from the implementing file(s) and are linked with (read it as "imported") by their clients. Such imported functions are said to have external linkage. In case they are implemented in another file, addresses of these functions are unknown to the compiler and are marked to be so in the object file produced by the compiler. The linker, in the process of building the executable, will later fill in these values.
extern Complex Complex_Create(double, double);
extern void Complex_Destroy(Complex);
Remember that Complex is a typedef for a pointer to struct _COMPLEX. That is, it is essentially a pointer. For this reason, when qualified with the const keyword it is the pointer that is guarded against change, not the fields pointed to by the pointer. This type of behavior is similar to that displayed in Java: when an object field is declared to be final, it is the handle, not the underlying object, that is guarded against change.
Depending on where it is placed, using const may mean different things.
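For instance, consider the following small sketch. It reuses the type names of this chapter, but the function names reset_re and read_re are our own inventions, made up purely for illustration:

```c
struct _COMPLEX { double im, re; };
typedef struct _COMPLEX* Complex;

/* 'const Complex p' reads as 'struct _COMPLEX *const p': the pointer
   p itself is read-only, but the fields it points to may still be
   modified through it. */
double reset_re(const Complex p)
{
  p->re = 0.0;      /* legal: the pointee is not protected */
  /* p = NULL; */   /* illegal: p itself is const          */
  return p->re;
}

/* 'const struct _COMPLEX *q', by contrast, protects the pointee:
   q may be reseated, but 'q->re = 0.0;' would not compile. */
double read_re(const struct _COMPLEX *q)
{
  return q->re;     /* reading is fine; writing would be an error */
}
```

A call such as reset_re(c) therefore compiles and mutates the object even though the parameter is const, which is exactly the final-handle behavior from Java described above.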
Another point worth mentioning is the first formal parameter common to all functions: const Complex this. This corresponds to the target object (the implicitly passed first argument) in object-oriented programming languages. The function is applied on the object passed as the first argument, which is appropriately named this. The identity of this object cannot change during the function call, although the object content can vary. That is why we qualify the parameter type with the const keyword.
extern Complex Complex_Add(const Complex this, const Complex);
extern Complex Complex_Divide(const Complex this, const Complex);
extern BOOL Complex_Equal(const Complex this, const Complex);
extern double Complex_Im(const Complex this);
extern Complex Complex_Multiply(const Complex this, const Complex);
extern void Complex_Print(const Complex this);
extern double Complex_Re(const Complex this);
extern Complex Complex_Subtract(const Complex this, const Complex);

#endif
For obvious reasons, signatures of Complex_Create and Complex_Destroy form exceptions to the above-mentioned pattern. The constructor-like function Complex_Create allocates heap memory for the yet-to-be-created object and initializes it, whereas the destructor-like function Complex_Destroy frees the heap memory used by the object.
Implementation
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#include "General.h"
The following directive may at first seem extraneous. After all, why should we include a list of prototypes (plus some other stuff) when it is us who provide the function bodies for them? By including this list we get the compiler to synchronize the interface and implementation. Say you modified the signature of a function in the implementation file and forgot to make relevant changes in the interface file; the function with the modified signature will not be usable (because it is not listed in the interface) and a function in the interface file won’t have a corresponding implementation (because the intended implementation now has a different signature). When we include the header file, the compiler will be able to spot the mismatch and let you know about it.
Ironically, this becomes possible due to the lack of function overloading in C. C compilers will take the implementation as the definition of the corresponding declaration and make sure they match. Had we had function overloading compilers would have taken the definition as an overloading instance of the declaration and carried on with compilation.
Unlike DOS, where '\' is used, UNIX uses '/' as the separator between path name components. C, having been developed mainly in UNIX-based environments, uses '/' for the same purpose. The reason why our previous examples worked all right was that the compilers used were DOS implementations and interpreted '\' correctly. If we want more portable code, we should use '/' instead of '\'.
#include "math/Complex.h"
The following prototypes are provided here in the implementation file, because they are not part of the interface. They are used as utility functions to implement other functions. Had they been part of the interface, we would have put them in the corresponding header file, Complex.h.
Notice these two functions are qualified to be static. When global variables and functions are declared static, they are made local to the file they are being defined in.[2] That is, they are not accessible from outside the file. Such an object or a function is said to have internal linkage.
In C, functions, variables, and constants are by default extern. In other words, unless otherwise stated they are accessible from outside the current file. This means we can omit all occurrences of extern in the header file. This is not advisable, though. It would make porting your code from C to C++ difficult. For example, constants in C++ are by default static, exactly the opposite of what we have in C!
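The two linkage kinds can be contrasted within a single file. In the following sketch all names are made up for illustration; the inaccessible side of internal linkage can only be described in comments, since a second translation unit cannot be shown here:

```c
/* Internal linkage: visible only within this translation unit.
   A second .c file declaring 'extern int call_count;' would fail
   to link against this one. */
static int call_count = 0;

static int helper(int x)          /* also internal linkage */
{
  return x * x;
}

/* External linkage (the default): other files may declare this
   function extern and call it. */
int squared_and_counted(int x)
{
  ++call_count;
  return helper(x);
}

int get_call_count(void) { return call_count; }
```

Here squared_and_counted and get_call_count would appear in a header file, while call_count and helper remain the file's secret.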
Definition: An implementation is a concrete data type that supports one or more interfaces by providing precise semantic interpretations of each of the interface’s abstract operations.
static Complex Complex__Conjugate(const Complex);
static double Complex__AbsoluteValue(const Complex);
We provide the details for the forward declaration made in the header file. Realize that this is the implementation file and the following definition is seen only by the implementer. Normally, the only files seen by the users are header files and the object files.
struct _COMPLEX { double im, re; };
The following function serves to create and initialize a Complex variable, similar to the combination of the new operator and a constructor in object-oriented programming languages.
Definition: Constructor is a distinguished, implicitly called[3] function that initializes an object. Following the successful allocation of memory typically by a new operator[4], it is invoked by the compiler-synthesized code.
Note the constructor-like function must be explicitly called in our case, because the notion of a constructor is not part of the C programming language.
Sometimes we need to have more than one such function. As a matter of fact, there are at least two other ways to construct a complex number: from another complex number and from polar coordinates. Unfortunately, should we like to add another constructor, we have to come up with a function that has a new name or provide different function definitions through a single variadic function, because C does not support function name overloading.
Definition: Function name overloading allows multiple function instances that provide a common operation on different argument types to share a common name.
Complex Complex_Create(double real, double imaginary)
{
  Complex this;

  this = (Complex) malloc(sizeof(struct _COMPLEX));
  if (!this) {
    fprintf(stderr, "Out of memory...\n");
    return(NULL);
  } /* end of if(!this) */

  this->re = real;
  this->im = imaginary;

  return(this);
Assuming the widely used convention that the return value is stored in a register, upon completion of the constructor function we have the partial memory image described below.
Observe the lifetime of the memory region allocated on the heap is not limited to that of the local pointer this. At the conclusion of the function, this will have been automatically discarded while heap memory will still be alive thanks to the pointer copied into the register.
} /* end of Complex Complex_Create(double, double) */
The following function serves to destroy and garbage-collect a Complex variable. It is similar to destructors in object-oriented programming languages.
Definition: Destructor is a distinguished, implicitly called function that cleans up any of the resources the object acquired through the execution of its constructors, or through the execution of any of its member functions along the way. It is typically called before the invocation of a memory de-allocation function.
Programming languages with garbage collection introduce the notion of a finalizer function. Now that the garbage collector reclaims unused heap memory, the programmer need not bother about it anymore. But what about files, sockets, etc.? These must in some way be returned to the system, which is what a finalizer is meant to do.
Our destructor-like function is rather simple. All we have to do is to return the heap memory allocated to the Complex variable that is passed as the sole argument of the function.
free returns the memory pointed to by its argument, not the argument itself. One other reminder: free is used to de-allocate heap memory; static data and run-time stack memory is de-allocated by the compiler-synthesized code.
Be it a region in the heap or otherwise, one should not make assumptions about the contents of de-allocated memory. Doing so will give rise to non-portable software with unpredictable behavior.
void Complex_Destroy(Complex this) { free(this); }

Complex Complex_Add(const Complex this, const Complex rhs)
{
  Complex result = Complex_Create(0, 0);

  result->re = this->re + rhs->re;
  result->im = this->im + rhs->im;

  return(result);
} /* end of Complex Complex_Add(const Complex, const Complex) */

Complex Complex_Divide(const Complex this, const Complex rhs)
{
  double norm = Complex__AbsoluteValue(rhs);
  Complex result = Complex_Create(0, 0);
  Complex conjugate = Complex__Conjugate(rhs);
  Complex numerator = Complex_Multiply(this, conjugate);

  result->re = numerator->re / (norm * norm);
  result->im = numerator->im / (norm * norm);
  Complex_Destroy(numerator);
  Complex_Destroy(conjugate);

  return(result);
} /* end of Complex Complex_Divide(const Complex, const Complex) */
Following function checks for equality of two complex numbers. Note that equality-check and identity-check are two different things. That’s why we do not use pointer semantics for comparison. Instead, we check whether the corresponding fields of the two numbers are equal or not.
BOOL Complex_Equal(const Complex this, const Complex rhs)
{
  if (this->re == rhs->re && this->im == rhs->im) return(TRUE);
  else return(FALSE);
} /* end of BOOL Complex_Equal(const Complex, const Complex) */
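The value/identity distinction can be seen in action with two separately allocated numbers carrying the same contents. The following self-contained sketch restates the relevant pieces; the BOOL enum stands in for the macro assumed to live in General.h:

```c
#include <stdlib.h>

typedef enum { FALSE, TRUE } BOOL;   /* stand-in for General.h */

struct _COMPLEX { double im, re; };
typedef struct _COMPLEX* Complex;

BOOL Complex_Equal(const Complex this, const Complex rhs)
{
  return (this->re == rhs->re && this->im == rhs->im) ? TRUE : FALSE;
}

/* Returns 1 when two distinct objects compare equal by value
   while remaining different identities (different addresses). */
int equal_but_distinct(void)
{
  Complex a = (Complex) malloc(sizeof(struct _COMPLEX));
  Complex b = (Complex) malloc(sizeof(struct _COMPLEX));
  int same_identity, same_value;

  a->re = b->re = 1.0;
  a->im = b->im = 2.0;
  same_identity = (a == b);          /* 0: two distinct objects   */
  same_value = Complex_Equal(a, b);  /* TRUE: equal field values  */
  free(a);
  free(b);

  return same_value && !same_identity;
}
```

Comparing a == b tests identity only; it is Complex_Equal that implements the equality the user actually cares about.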
The following function serves as what is called a get-method (or an accessor). We provide such functions in order to avoid the violation of the information hiding principle. The user should access the underlying structure members through functions. Sometimes functions are also provided to change values of members. These are called the set-methods (or mutators).
Definition: Information hiding is a formal mechanism for preventing the functions of a program to access directly the internal representation of an abstract data type.
It should be noted that accessors [and mutators] can also be provided for attributes of an object that are not backed by members of the underlying structure. For example, a complex number has two polar attributes that can be derived from its Cartesian attributes: norm and angle.
double Complex_Im(const Complex this) { return(this->im); }

Complex Complex_Multiply(const Complex this, const Complex rhs) {
  Complex result = Complex_Create(0, 0);

  result->re = this->re * rhs->re - this->im * rhs->im;
  result->im = this->re * rhs->im + this->im * rhs->re;

  return(result);
} /* end of Complex Complex_Multiply(const Complex, const Complex) */
The next two functions are meant to serve a purpose similar to that of Java's toString. The second one, Complex_Print, does not return any value; it just writes its output to the standard output file, which is definitely much less flexible than the Java counterpart, where a String is returned and the user can make use of it in any way she sees fit: she can send it to the standard output/error file, a disk file, or another application listening at the end of a socket. A function with the Java-like semantics is given below.
char* Complex_ToString(const Complex this) {
  double im = this->im;
  double re = this->re;
  char *ret_str = (char *) malloc(25 + 1);

  if (im == 0) { sprintf(ret_str, "%g", re); return ret_str; }
  if (re == 0) { sprintf(ret_str, "%gi", im); return ret_str; }

  sprintf(ret_str, "(%g %c %gi)", re, im < 0 ? '-' : '+', im < 0 ? -im : im);

  return ret_str;
} /* end of char* Complex_ToString(const Complex) */
However, users of the above implementation should not forget to return the memory allocated for the char* object holding the string representation.
char* c1_rep = Complex_ToString(c1);
... // use c1_rep
free(c1_rep); // no automatic garbage collection in C!
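As an aside, the fixed 26-byte buffer in Complex_ToString is safe only as long as the %g expansions stay short. A defensive variant, sketched below with a hypothetical helper name, lets snprintf measure the required length first and allocates exactly that much:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Format re/im into a freshly allocated string whose size is measured
   by a first snprintf pass, so no %g expansion can overflow the buffer. */
char *complex_to_string(double re, double im) {
    int n = snprintf(NULL, 0, "(%g %c %gi)",
                     re, im < 0 ? '-' : '+', im < 0 ? -im : im);
    char *s = malloc((size_t)n + 1);
    if (s != NULL)
        snprintf(s, (size_t)n + 1, "(%g %c %gi)",
                 re, im < 0 ? '-' : '+', im < 0 ? -im : im);
    return s;   /* caller frees, as with Complex_ToString */
}
```

The ownership rule is unchanged: the caller is still responsible for freeing the returned string.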
void Complex_Print(const Complex this) {
  double im = this->im, re = this->re;

  if (im == 0) { printf("%g", re); return; }
  if (re == 0) { printf("%gi", im); return; }

  printf("(%g %c %gi)", re, im < 0 ? '-' : '+', im < 0 ? -im : im);
} /* end of void Complex_Print(const Complex) */

double Complex_Re(const Complex this) { return(this->re); }

Complex Complex_Subtract(const Complex this, const Complex rhs) {
  Complex result = Complex_Create(0, 0);

  result->re = this->re - rhs->re;
  result->im = this->im - rhs->im;

  return(result);
} /* end of Complex Complex_Subtract(const Complex, const Complex) */
The next two functions do not appear in the header file. Users of the Complex data type do not even know about them; for this reason, they do not [and cannot] use them directly. The implementer can at any time choose to make changes to these functions and other hidden entities, such as the representation of the type. This is the flexibility gained by applying the information-hiding principle.
static Complex Complex__Conjugate(const Complex this) {
  Complex result = Complex_Create(0, 0);

  result->re = this->re;
  result->im = -this->im;

  return(result);
} /* end of Complex Complex__Conjugate(const Complex) */

static double Complex__AbsoluteValue(const Complex this) {
  return(hypot(this->re, this->im));
} /* end of double Complex__AbsoluteValue(const Complex) */
Test Program
#include <stdio.h>
Inclusion of Complex.h brings in the prototypes for the functions that can be applied to a Complex object. This enables the C compiler to check whether these functions are used correctly in the appropriate context. Another purpose of header files is to serve as a specification of the interface for human readers.
Notice it is the prototype that is brought in, not the object code that contains the implementation. Object code of external functions is plugged in by the linker.
Normally, a user is not given access to the source files. The rationale behind this is to protect the implementer’s intellectual property. Instead, the object files, which are not intelligible to human readers, are given. The object files are the compiled versions of the corresponding source files and therefore semantically equivalent to the source files.
#include "math/Complex.h" int main(void) { Complex num1, num2, num3; num1 = Complex_Create(2, 3); printf("#1 = "); Complex_Print(num1); printf("\n"); num2 = Complex_Create(5, 6); printf("#2 = "); Complex_Print(num2); printf("\n");
As soon as the next assignment command is completed we will have the partial memory image given below:
Notice the non-contiguous nature of heap allocation. Although for a program of this size the allocated memory will likely be contiguous, as the program gets larger this becomes impossible. The only job of a memory allocator is to satisfy allocation demands; the address of the allocated memory is of no consequence. For doing this it may use different algorithms, such as first-fit, best-fit, and worst-fit.
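To make the idea concrete, here is a toy sketch of the first-fit strategy over an array-based free list. Real allocators are far more involved; this only illustrates the search policy:

```c
#include <assert.h>
#include <stddef.h>

/* Toy free list: each entry records a block's size and whether it is free. */
struct FreeBlock { size_t size; int free; };

/* First-fit: scan from the start and claim the first free block whose
   size is at least `want`; return its index, or -1 if nothing fits. */
int first_fit(struct FreeBlock *list, int n, size_t want) {
    for (int i = 0; i < n; i++)
        if (list[i].free && list[i].size >= want) {
            list[i].free = 0;   /* claim the block */
            return i;
        }
    return -1;
}
```

Best-fit would instead scan the whole list for the smallest adequate block, and worst-fit for the largest; first-fit trades fragmentation behavior for a shorter search.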
num3 = Complex_Add(num1, num2);
printf("#1 + #2: "); Complex_Print(num3); printf("\n");
In Complex_Add we created a complex number on the heap and returned a pointer to it as the result. The next time we use the same Complex variable to hold the result of another operation, the old location holding the result of the previous operation becomes unreachable. Such unreachable, and therefore unusable, locations in memory are referred to as garbage. In programming languages with automatic garbage collection, such unused heap memory is reclaimed by the runtime system of the language. In object-oriented programming languages without automatic garbage collection, this must be taken care of by the programmer through the invocation of a function such as delete, which in turn invokes a special function called the destructor. In non-object-oriented programming languages, the destructor function has to be simulated and the programmer must explicitly return such memory regions back to the system for reuse. In our case, the function simulating the destructor is named Complex_Destroy.
Upon completion of the next line, we will have the following partial memory image:
Observe that num3 still points to the same location; that is, we can still use num3 to manipulate the same region of memory. But no guarantee is given about its contents. So, in order to keep the user away from the temptation of using this value, it would be a good idea to change the value of num3 to something that cannot be used to refer to an object. This value is NULL. Each time a memory region is de-allocated, the pointer pointing to it should either be made to point to another region, as in this test program, or the user should assign NULL to the pointer variable. A second, more secure approach gives the responsibility of assigning NULL to the implementer. The problem is that we then need to modify the pointer itself, not the region it points to. This deficiency can be removed by making the following changes:
...
void Complex_Destroy(Complex* this) {
  free(*this);
  *this = NULL;
} /* end of void Complex_Destroy(Complex*) */
...
... Complex_Destroy(&num3); ...
Complex_Destroy(num3);

num3 = Complex_Subtract(num1, num2);
printf("#1 - #2: "); Complex_Print(num3); printf("\n");
Complex_Destroy(num3);

num3 = Complex_Multiply(num1, num2);
printf("#1 * #2: "); Complex_Print(num3); printf("\n");
Complex_Destroy(num3);

num3 = Complex_Divide(num1, num2);
printf("#1 / #2: "); Complex_Print(num3); printf("\n");
Complex_Destroy(num3);

Complex_Destroy(num1);
Complex_Destroy(num2);

return(0);
} /* end of int main(void) */
Running the Test Program
- gcc -I ~/include -c Complex.c↵ # ~ stands for the home directory of the current user; note the space between -I and ~/include!
The above command will produce Complex.o. Note the use of the -I and -c options. The former gives the preprocessor a hint about the place(s) to look for non-system header files, while the latter causes the compiler to stop before linking. If a header file is not found in the given list of directories, it is searched for in the system directories.
As you can see, our code has no main function. That is, it is not runnable on its own. It just provides an implementation for complex numbers. This implementation will later be used by programs like Complex_Test.c, which manipulate complex numbers.
- gcc -I ~/include -o Complex_Test Complex_Test.c Complex.o -lm↵
The above command will compile Complex_Test.c and link it with the required object files. The output of linking will be written to a file named Complex_Test. The -l option is used for linking with libraries.[5] In this case we link with a library named libm.a, where m stands for the math library. We need to link with this library to make sure that the object code for functions such as hypot is included in the executable. As a result of linking with the math library, only the object code of the file containing the implementation of hypot is included in the executable.
Program Development Process
The whole process can be pictured as shown in the following diagram.
The black-colored region in the diagram represents the implementer side of the process. What goes on inside this box is of no concern to the users; the number of sub-processes involved and the intermediate outputs produced are immaterial to them. As a matter of fact, the module could have been written in a programming language other than C and it would still be OK, as long as the client and the implementer use the same binary interface. All the users should care about is the output of this black box, Complex.o, and the header file, Complex.h, which is needed to figure out the functionality offered by Complex.o.
Note that Complex.o is semantically equivalent to Complex.c. The difference lies in their intelligibility to human readers and computers: C source code is intelligible to a human being while the corresponding object file is not. This lack of intelligibility serves to protect the intellectual property of the implementer. After spending months on a project, the implementer delivers to the clients the object module, which contains no hints as to how it has been implemented.
Once the user acquires the object module and the related header file(s), she follows the following steps to build an executable using this object module.
- Write the source code for the program. Since this program will refer to the functionality offered in Complex.o, we must include the relevant header file, in this case Complex.h. This will ensure correct use of the functionality supplied in Complex.o.
- Once you get the program to compile you must provide the code that implements the functionality used. This functionality is delivered to you in the object module named Complex.o. All you have to do is to link this with the object code of your test program.
- In addition to the object module, you must have access to the libraries and other object modules used in Complex.o and the program. In other words, we may not be able to test our program unless we have certain files. In our case these are the Standard C Library and the Math Library. Unless we have these libraries on our disk or the implementer supplies them to us, we will not be able to build the executable.
Notes
- ↑ All but one of these activities performed by human agents can be accomplished by an automaton. However, the very act of devising the initial model for the problem seems to be staying with us for a while.
- ↑ The relative address of a function local to the current file is determined at compile time. In other words, the address of such a function is determined statically; hence the static keyword.
- ↑ It’s not an implicit call in the sense of the definition made in the Control Level Structure chapter. Although it is not the programmer who makes the call, constructor call is not outside the control of the programmer. The programmer has knowledge of when and which constructor will be invoked.
- ↑ In C++, in addition to creating it in the heap and using it through a pointer, an object can be embedded into the static data region or the runtime stack. That is, such objects can be accessed without pointers. They obey C++ scope rules just like other variables: they are automatically created and destroyed as the related scope is entered and exited. For this reason, they do not require any invocation of the new operator. The same behavior can be seen in the use of structs (value types) in C#.
- ↑ A [static] library is basically a set of .o (.obj in MS Windows) files, obtained by compiling a corresponding set of .c files, plus some meta-data. This meta-data is used to speed up extraction of .o files and answer queries about the library content. There are typically one or more .h files containing the declarations necessary to use those .o files.
Forth is a Stack-based ProgrammingLanguage developed by ChuckMoore in the 1960s (see the Evolution of Forth). Forth code is written in ReversePolish notation, since it works directly with the stack (actually, several stacks). There is no implicit stack management, which means that function calls are very fast: the arguments are whatever is already on the Stack when the function is called and the return value is whatever is left on the Stack at return.
Forth is defined in terms of itself more so than probably any other language. A Forth system generally consists of only a tiny kernel and a small number of Forth functions written in AssemblyLanguage, but even most of the compiler is written in Forth, as is the standard library. Real Forth geeks write their own Forth kernels from the metal up. (ChuckMoore creates his own Forth machines from silicon.) It doesn't take much effort once you've absorbed the Forth philosophy – there is shockingly little holding Forth up. This is in tune with ChuckMoore's philosophy of brutal simplicity in software engineering.
Forth produces very compact code. A whole interpreter and development system will easily fit into 8 kilobytes and leave plenty of room for user code. Back when architectures with 8 kilobytes of RAM weren't just the computers that you developed for but also the computers you developed on, Forth was very popular. Today it is most popular in embedded systems and other highly constrained environments: f.ex., it is the language of choice for NASA on many satellites and planetary probes, where its compactness and functional correctness (see below in Comparison with Haskell) outweigh the speed of AssemblyLanguage. It forms the basis for the OpenFirmware and the RPL language used in HewlettPackard calculators and is used in some of UCSD's music courses. It has been used successfully in ArtificialIntelligence research.
This is a general description of how Forth works. It's different. If you like Brainf*ck you will like Forth.
Forth functions are called words. They are organised in the user dictionary.
A Forth machine has two stacks. When subroutines (called words in Forth lingo) are called they pop arguments off the parameter stack and push back return values. The return stack holds the return addresses of word calls.
Finally, the input buffer for the Forth parser forms an integral part of a Forth system.
Words manipulate the parameter stack directly to perform calculations on their arguments and leave return values. Traditionally, there are no local variables in Forth (but there are global variables, which are simply pointers stored under a name in the dictionary): the parameter stack is used for all temporary storage. There are lots of words for many simple stack manipulations, such as pushing another copy of the top-most value, dropping it, rotating the stack in either direction, rotating part of the stack, etc. Programming in Forth means keeping track of what is supposed to be on the stack at any point.
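The flavor of these stack-manipulation words can be modeled in a few lines of C. This is a toy model, not a real Forth implementation; dup_ is named with a trailing underscore only to avoid clashing with the POSIX dup:

```c
#include <assert.h>

/* A toy parameter stack with a few classic Forth manipulation words. */
#define STACK_MAX 64
static int stack[STACK_MAX];
static int sp = 0;                     /* index of next free slot */

void push(int v) { stack[sp++] = v; }
int  pop(void)   { return stack[--sp]; }

void dup_(void) { int a = pop(); push(a); push(a); }             /* DUP  */
void drop(void) { (void)pop(); }                                 /* DROP */
void swap(void) { int a = pop(), b = pop(); push(a); push(b); }  /* SWAP */
void over(void) { int a = pop(), b = pop();
                  push(b); push(a); push(b); }                   /* OVER */
```

Every word works purely through the shared stack, which is exactly why Forth programming amounts to tracking what the stack holds at each point.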
The dictionary is also structured like a stack. As words are defined, they are added to the top of the stack. You can recycle memory by forgetting, which means dropping all the entries at the top of the dictionary down to some word you specify (usually you do this implicitly by executing a word defined using MARKER). Sets of words are usually loaded/defined as a unit, and are usually referred to as a vocabulary, analogous to libraries in other languages. The first vocabulary is a tiny kernel of word definitions written in MachineCode, then there is a standard library vocabulary defined in Forth, and then optional vocabularies that define a simple text editor, assembler and disk OperatingSystem. (A modern Forth system might have a TCP/IP stack on there somewhere too.) Some Forths permit multiple vocabularies to exist concurrently and independently, thus providing namespaces.
The word : tells Forth to begin compiling the subsequent words found on the input buffer, and the word ; then tells it to stop. You use : to add new words to the Forth system until you've created a high-level Forth vocabulary that you can easily express your problem in. You just keep going until you've defined a single word that, when called, executes your entire program. Forth is very much a BottomUp language. :-)
Most words compile to call statements. Words that look like integers compile to push statements. Some words, however, have flags that tell Forth to execute them immediately while compiling: these create control structures in the ByteCode of a word, affect parsing, or perform other compiler or macro functions.
There is no syntax in Forth, just words that manipulate other words and the input buffer and user dictionary.
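The "words compile to call statements" idea can be sketched in C as a threaded-code interpreter: a compiled word is just an array of function pointers that an inner interpreter walks. Again a toy model; real Forths thread through the dictionary and handle literals and control flow specially:

```c
#include <assert.h>

/* A tiny parameter stack for the sketch. */
#define STK 32
static int s[STK], sp2 = 0;
void push2(int v) { s[sp2++] = v; }
int  pop2(void)   { return s[--sp2]; }

/* Primitive words are plain C functions... */
void w_lit5(void) { push2(5); }   /* stands in for a compiled literal */
void w_lit7(void) { push2(7); }
void w_add(void)  { int b = pop2(), a = pop2(); push2(a + b); }

/* ...and a compiled word is a null-terminated list of them.
   The inner interpreter just calls each in turn. */
typedef void (*word_fn)(void);
void execute(word_fn *thread) {
    for (; *thread != 0; thread++)
        (*thread)();
}

/* ": twelve  5 7 + ;" would compile to something like this thread. */
word_fn twelve[] = { w_lit5, w_lit7, w_add, 0 };
```

This is why Forth needs so little machinery to hold it up: once the inner interpreter and a handful of primitives exist, new words are just more arrays of pointers.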
Sentences long extremely and notation Polish reverse in writing about wrong is what?
— Jarkko Hietaniemi
The following code is a simple text editor from Retro Native Forth:
0 variable lastBlock
asmbase $1024 + constant blockBase
: block 512 * blockBase + ;
: select dup lastBlock ! ;
: vl dup 40 type cr 40 + ;
: view lastBlock @ block vl vl vl vl vl vl vl vl vl vl vl vl ;
: edit select block s drop drop view ;
: delete select block 512 0 fill view ;
: edit-line 40 * lastBlock @ block + s drop drop view ;
: delete-line 40 * lastBlock @ block + 37 0 fill view ;
This demonstrates the preferred programming style in Forth: short words laid out mostly horizontally, just like words in a college dictionary.
The next example is a utility word from the IRC bot in ISForth.
\ receive up to 512 bytes from bot socket
: bot-read
  off> #inbuff
  inbuff                  \ where to receive into
  begin
    1 over bot-fd recv    \ read one char from bot socket
    1-                    \ result should be 0 now
    if                    \ if its not then we didnt get any input
      drop exit           \ so discard buffer address and exit
    then
    incr> #inbuff         \ we got a character
    dup c@                \ get the character we just read
    $0a <>                \ while its not an eol
    #inbuff 512 <> and
  while
    1+                    \ advance buffer address
  repeat
  drop                    \ discard buffer address
;
This is a single long word, written with a predominantly vertical layout. Such code is hard to read (e.g. the while loop's condition check makes up the overwhelming bulk of the code) and is considered poor style.
For an almost poetic example of Forth, see LifeInForth.
Contemporary Forth systems are much more complex than naïve interpreters in order to avoid spending a lot of time executing call statements and to reduce the overhead of interpretation.
Instead of compiling words to call statements, they might be inlined, even as MachineCode instructions embedded in the ByteCode so the target CPU can execute them directly. One form of this is called subroutine threading. Since many target architectures are register-based, numeric literals might be embedded as immediate operands in MachineCode instructions.
Optimising compilers will wait before putting anything in the dictionary until they encounter the ; word, rather than building the definition piece-meal in the dictionary. They can then perform proper lexical analysis and parsing to help generate efficient code. Some highly optimizing compilers will even infer types based on the words called in a word!
Compare the Forth definition of block from the editor example above with an equivalent Haskell definition:

: block 512 * blockBase + ;

let block b = blockBase + ( 512 * b )
The similarities go far deeper though, and there are many deep dualities.
Haskell compiles software to a tree of closures – bundles of code and data that together form a function in the mathematical sense. In Forth, because parameters are placed on the stack, there is no need to store free or bound variables in the program structure. Hence, a pointer to a closure consists only of its code pointer; therefore, typical "threaded" Forth code maps 1:1 to Haskell's closure-based code.
The fact that bigFORTH and the Glasgow Haskell Compiler produce binaries that are about equally performant is no coincidence.
Forth has issues using local variables, but can maintain global state with ease. Haskell has issues using global state, but can maintain local variables with ease. To address these issues, ANSI Forth introduced a portable locals wordset, while Haskell uses monads to deal with global state. Both constructs are fully expressible in the core language, and both address (essentially) different sides of the same problem.
Forth allows the compiler to be extended by fundamentally altering the compiler in terms of words written in the base language itself. Haskell utilizes monads for this same purpose.
Haskell uses the >>= operator to compose the right-hand monadic function onto the results returned by the left-hand computation. Save for the threading of state this is normal function composition. In Forth, which is described mathematically as a purely combinator-based language, composition is performed by concatenation – e.g., simply listing one word after another. Hence, both Forth and Haskell code can be expressed via function composition, and therefore, reasoned about algebraically.
- Free Forth compiler for PalmPilot
- A Forth compiler for the PalmPilot
- GNU's Forth. It's unusual for a Forth in that it comes with a manual. Forth programmers aren't big on manuals :-( It has a VirtualMachine and kernel (see below) written in C. It might be a good one to study.
- Both a ForthOS and a Forth for Linux systems.
- A native code compiler for x86 which includes a GUI library and form editor.
- A good book for starting with Forth, by Leo Brodie.
- Ching's World. (WLUG member who blogs about Forth)
- Finally, some Forth humour at UserFriendly.
CategoryProgrammingLanguages, CategoryMachineOrientedProgrammingLanguages, CategorySystemsProgrammingLanguages
== intro

no, i'm not talking about using a nice frontend to ghci, with added functionality, although the haskell modes for emacs and vim, and other such gui/ide/editor efforts, are well worth looking into!-) also, i'm not going to talk about the eagerly anticipated ghci debugger. what i am going to talk about are some of the little things one can do to improve the ghci user experience, including features asked for in ghc's ticket tracker, features known and cherished from hugs, and features that are not even available in hugs.

the keys to adding such useful features are :def, which allows us to define new ghci commands, and :redir, which allows us to capture the output of ghci commands for further processing. and to save you the search: :def has been with us for a while (it is in ghci 6.6.1), and :redir is not among the pre-defined ghci commands, not even in ghc head.

the reason i started looking into adding commands to ghci were (a) the added functionality in ghci 6.7, which will (b) soon require me to be able to capture ghci command output, for proper interfacing with my haskell mode plugins for vim. originally, i thought that output capture would need to be hacked into the ghci sources (i even drafted a patch:-). it then became clear that we can implement :redir on top of the existing commands, even in ghci 6.6.1!

all the commands we're going to introduce are defined in the file dot-squashed.ghci, which can be found here: (there is a patch pending for ghc head that will let us spread definitions in ghci over multiple lines, but for now, we have to squash them into single lines..; i will take the liberty to violate this restriction in the explanations below), and work with ghci 6.6.1, although there are more commands and information to play with in ghc 6.7 or later.

== command overview

so what kind of commands are we going to define? here's an overview:

  Prelude> :def s readFile
  Prelude> :s dot-squashed.ghci
  Prelude> :defs list
  :. <file>              -- source commands from <file>
  :pwd                   -- print working directory
  :ls [<path>]           -- list directory ("." by default)
  :redir <var> <cmd>     -- execute <cmd>, redirecting stdout to <var>
  :redirErr <var> <cmd>  -- execute <cmd>, redirecting stderr to <var>
  :grep <pat> <cmd>      -- filter lines matching <pat> from the output of <cmd>
  :find <id>             -- call editor (:set editor) on definition of <id>
  :b [<mod>]             -- :browse <mod> (default: first loaded module)
  :le [<mod>]            -- try to :load <mod> (default to :reload); edit first error, if any
  :defs {list,undef}     -- list or undefine user-defined commands

== let's go - simple things first

:def is commonly used to abbreviate frequently entered commands:

  Prelude> :grep :def :?
  :def <cmd> <expr>           define a command :<cmd>

where the <expr> is of type 'String -> IO String', so it takes a command parameter (actually, the rest of the command line after :<cmd>), does some IO, and returns a String, which is going to be interpreted as ghci input. that's why our ':def s readFile' defined a :s command to source ghci input from a file, which we then used to source our new command definitions (alternatively, we could have put those into our .ghci file).

ghci 6.7 and later also have a :cmd command, which takes an expression of type 'IO String' - that expression is immediately executed, and its result String is interpreted as ghci input. we'll define our own version for ghci 6.6.1:

  -- 6.6.1 doesn't have this, omit this def for later ghci's
  :def cmd \l->return $ unlines [":def cmdTmp \\_->"++l,":cmdTmp",":undef cmdTmp"]

we simply use the expression parameter to define a temporary command, which we execute immediately and undefine afterwards.

== being helpful and self-documenting

ghci only keeps an internal record of commands added with :def, so we can neither get a list of them, nor will they appear in :? output.
we can be more helpful by wrapping each command with a line of help:

  -- make commands helpful
  let { cmdHelp cmd msg "--help" = return $ "putStrLn "++show msg
      ; cmdHelp cmd msg other    = cmd other }

then we can define a few standard utilities, with help texts:

  :def . cmdHelp readFile ":. <file>\t\t-- source commands from <file>"

  let pwd _ = return "System.Directory.getCurrentDirectory >>= putStrLn"
  :def pwd cmdHelp pwd ":pwd\t\t\t-- print working directory"

  let ls p = return $ "mapM_ putStrLn =<< System.Directory.getDirectoryContents "++show path where {path = if (null p) then "." else p}
  :def ls cmdHelp ls ":ls [<path>]\t\t-- list directory (\".\" by default)"

sourcing commands saves key-strokes on a frequently used operation, while :pwd and :ls save me from having to recall whether i'm using ghci in windows or unix, by using haskell functions;-)

we'll see later how we register our commands with :defs, but each of these commands responds to calls for --help:

  Prelude> :. --help
  :. <file>		-- source commands from <file>

so, once registered, we can let :defs give us --help for all of them:

  Prelude> :grep :\.|:pwd|:ls :defs list
  :. <file>		-- source commands from <file>
  :pwd			-- print working directory
  :ls [<path>]		-- list directory ("." by default)

== a hammer for many nails: capturing ghci command output

now for the big tool: :def is fine, and there are several ghci commands that give useful information to the user, such as :?, :show modules, :browse, etc. if we could only get hold of that information, we could define much more interesting new ghci commands. the plan is to redirect stdout to a temporary file, execute one of those helpful ghci commands, restore stdout to the terminal, and read the contents of the temporary file into a variable (then clean away the temporary file). unfortunately, there's no portable redirection functionality in the standard libs, but in ghci, we're ghc-dependent anyway, and ghc provides us with GHC.Handle.
now, take a deep breath, 'cause here we go:

  :def redir cmdHelp redir ":redir <var> <cmd>\t-- execute <cmd>, redirecting stdout to <var>"

okay, that was a handful, and it doesn't look easier on a single line. our :redir command takes two parameters: the name of a variable to bind the output to, and the name of a command to capture the output of. so we split the command line into var and cmd, and we complain with usage info if that fails (unchecked failure in ghci command definitions is generally a bad idea).

the rest is fairly straightforward, actually, but for the tedious inconvenience of keeping the different levels of interpretation and scopes in mind: we're using the current scope to construct strings that represent commands which will be executed in the scope in which :redir will be called. also, we need to fully qualify our variables, to be sure we can get the right ones, no matter what module is loaded when the command is executed (strictly speaking, we should qualify the remaining functions as well, to avoid pathological cases like "import Prelude()" -- please keep that in mind if you do redefine or hide any of those standard functions).

we don't want output from our auxiliary variable bindings, so we :set -fno-print-bind-result; then we get a temporary file f, with handle h, make a copy of stdout, then redirect stdout to h; we insert the cmd we want to run (note the lack of quotes); afterwards, we restore stdout, read the temporary file and bind its contents to var (note again the quoting), before we clear away the temporary file.

and what do we get for all that trouble?

  Prelude> :redir --help
  :redir <var> <cmd>	-- execute <cmd>, redirecting stdout to <var>
  Prelude> :redir x :type id
  Prelude> x
  "id :: a -> a\n"
  Prelude> putStrLn x
  id :: a -> a

not much, yet, but that is a very useful hammer!-) and if :redir is our hammer, the question becomes: what are our nails?
== filtering command output

we've already seen several uses of :grep, so let's deal with that next:

  let { merge [] = []
      ; merge (l:c:ls) | i c > i l = merge ((l++c):ls) where {i l = length (takeWhile Data.Char.isSpace l)}
      ; merge (l:ls) = l:merge ls
      ; grep patcmd = case break Data.Char.isSpace patcmd of
          { (pat,_:cmd) -> return $ unlines
              [":redir out "++cmd
              ,"let ls = "++if ":browse" `Data.List.isPrefixOf` cmd then "merge (lines out)" else "lines out"
              ,"let match pat = Data.Maybe.isJust . Text.Regex.matchRegex (Text.Regex.mkRegex pat)"
              ,"putStrLn $ unlines $ (\"\":) $ filter (match "++show pat++") $ ls"]
          ; _ -> return "putStrLn \"usage: :grep <pat> <cmd>\"" } }
  :def grep cmdHelp grep ":grep <pat> <cmd>\t-- filter lines matching <pat> from the output of <cmd>"
  -- (also merges pretty-printed lines if <cmd> is :browse)

another handful, but not all that much if we focus on the grep function: again, we split the commandline into pat and cmd (note that this simple-minded approach doesn't permit spaces in pat); we then run the cmd, capturing its output in the variable out, and apply a simple filter to the lines in out, based on matching the regular expression pattern pat; that's it (oh, and if cmd happens to be :browse, we undo the pretty-printer's attempt to spread information over multiple lines, which would interfere with our line-based filtering).

now, this is more obviously useful, isn't it?-) we can :grep for module-related commands in :?:

  Prelude> :grep mod :?
  :add <filename> ...         add module(s) to the current target set
  :browse [*]<module>         display the names defined by <module>
  :edit                       edit last module
  :load <filename> ...        load module(s) and their dependents
  :module [+/-] [*]<mod> ...  set the context for expression evaluation
  :reload                     reload the current module set
  :show modules               show the currently loaded modules

or find out what folds there are in the Prelude (this is similar to hugs' :names command, btw):

  Prelude> :grep fold :browse Prelude
  foldr :: (a -> b -> b) -> b -> [a] -> b
  foldl1 :: (a -> a -> a) -> [a] -> a
  foldr1 :: (a -> a -> a) -> [a] -> a
  foldl :: (a -> b -> a) -> a -> [b] -> a

or what monadic commands in the Prelude operate on lists (this is no replacement for hoogle, but useful for finding simple matches):

  Prelude> :grep Monad.*\[.*\] :browse Prelude
  sequence_ :: (Monad m) => [m a] -> m ()
  sequence :: (Monad m) => [m a] -> m [a]
  mapM_ :: (Monad m) => (a -> m b) -> [a] -> m ()
  mapM :: (Monad m) => (a -> m b) -> [a] -> m [b]

== for our next trick: more hugs commands for ghci

=== :find

we've already included the :names functionality, but my favourite hugs command must be :find <name>, which edits the module containing the definition of <name>. this is even more useful as hugs comes with sources for Prelude and libs, so you can simply say ':find span' and see span's standard definition. but, let's at least define a ghci :find for modules for which we do have the sources!

the plan is to capture the output of :info <name>, grep for those helpful comments "-- Defined at <file>:<line>:<col>:", which tell us where to look, then call :edit with that information.
for that to work, we need an editor that can open a file at a specified line:

    -- if your editor doesn't understand :e +line file
    -- (jump to line in file), you'll need to change
    -- functions find and loadEditErr below
    :set editor gvim

now, for the find functionality:

    let find id = return $ unlines
          [":redir out :info "++id
          ,"let ls = filter (Data.List.isInfixOf \"-- Defined\") $ lines out"
          ,"let match pat = Text.Regex.matchRegex (Text.Regex.mkRegex pat)"
          ,"let m = match \"-- Defined at ([^<:]*):([^:]*):\" $ head ls"
          ,":cmd return $ case (ls,m) of { (_:_,Just [mod,line]) -> (\":e +\"++line++\" \"++mod) ; _ -> \"\" }"]
    :def find cmdHelp find ":find <id>\t\t-- call editor (:set editor) on definition of <id>"

that's fairly easy by now, isn't it?-) we capture the output of :info, grab the definition location, if any, and call :e. the latter is a bit tricky because we need to compose the :e command with the information we have obtained from :info. we achieve this by an extra level of interpretation, passing our constructed ':e +line file' command to :cmd.

note again that :find will work only where we have the sources. for instance, we don't have the sources for Prelude.span, so :find span wouldn't work - :info span doesn't list a source file, only a module:

    Prelude> :info span
    span :: (a -> Bool) -> [a] -> ([a], [a])        -- Defined in GHC.List

but load a module of your own, then try ':find main' or something!-)

=== #1468: :browse should browse currently loaded module

another thing that hugs does is default to browsing the current module, if :browse is called with no explicit module parameter. this has been asked for in a ghci ticket.
by now, you're probably ahead of me, looking for ghci commands to grab the current module from?-) you're looking for :show modules, and the implementation of :b is indeed straightforward:

    let { b browse "" = return $ unlines
            [":redir out :show modules"
            ,":cmd case lines out of { (l:_) -> return ("++show browse++"++head (words l)) ; _ -> return \"\" }"]
        ; b browse m = return (browse++m) }
    :def b cmdHelp (b ":browse ") ":b [<mod>]\t\t-- :browse <mod> (default: first loaded module)"

i pass the browse command to use because my ghci has two versions (also in that patch pending for ghc head), and you might want to protect that call to head. but otherwise, no need to wait for someone to hack the ghci sources to fix that ticket!-)

=== #95: GHCi editor binding with ":e"

the ghci ticket that asked for adding the hugs :edit command also asked for more of hugs :edit functionality: after failing to load a file, :e will edit the first error location. can we do that without fixing ghci?

yes, we can, but there is a slight obstacle to overcome. we could make a variant of :redir that redirects stderr instead of stdout, but we want to apply it to commands like :load/:reload, to capture their error reports, if any. unfortunately, those commands clear the bindings in scope, which means that our cleanup operation would fail to restore stderr to the terminal, not to mention reading the tempfile contents into a variable. we need to be a bit more careful. fortunately, :l/:r do not clear the current commands, so if we manage to capture our handle and filename variables in a temporary command, we can execute that to finish processing after calling :l/:r.
so, here we go:

    let redirErr varcmd = case break Data.Char.isSpace varcmd of
          { (var,_:cmd) -> return $ unlines
              [":set -fno-print-bind-result"
              ,"tmp <- System.Directory.getTemporaryDirectory"
              ,"(f,h) <- System.IO.openTempFile tmp \"ghci\""
              ,"ste <- GHC.Handle.hDuplicate System.IO.stderr"
              ,"GHC.Handle.hDuplicateTo h System.IO.stderr"
              ,"System.IO.hClose h"
              ,"let readFileNow f = readFile f >>= \\t->length t `seq` return t"
              ,"let afterCmd _ = do { GHC.Handle.hDuplicateTo ste System.IO.stderr ; r <- readFileNow f ; System.Directory.removeFile f ; return $ \""++var++" <- return \"++show r }"
              ,":def afterCmd afterCmd"
              , cmd
              , ":afterCmd"
              , ":undef afterCmd" ]
          ; _ -> return "putStrLn \"usage: :redirErr <var> <cmd>\"" }
    :def redirErr cmdHelp redirErr ":redirErr <var> <cmd>\t-- execute <cmd>, redirecting stderr to <var>"

if you compare redirErr with redir, you'll notice that we've only exchanged stderr for stdout, and moved the commands after cmd into a temporary :afterCmd, which slightly complicates the quoting. otherwise, the plan is unchanged. we're now in a position to handle error location editing:

    let loadEditErr m = return $ unlines
          [if null m then ":redirErr err :r" else ":redirErr err :l "++m
          ,"let match pat = Text.Regex.matchRegex (Text.Regex.mkRegex pat)"
          ,"let ms = Data.Maybe.catMaybes $ map (match \"^([^:]*):([^:]*):([^:]*):\") $ lines err"
          ,":cmd return $ case ms of { ([mod,line,col]:_) -> (\":e +\"++line++\" \"++mod) ; _ -> \"\" }"]
    :def le cmdHelp loadEditErr ":le [<mod>]\t\t-- try to :load <mod> (default to :reload); edit first error, if any"

depending on whether we use :le with an explicit module to load or not, we use :load or :reload, capture the error output, filter it for error locations (<file>:<line>:<col>:), and call the editor with that information for the first error, if any. we could even make ghci wait for the editor to close, then loop until all errors have been handled.
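the handle-juggling in redirErr (duplicate stderr, point it at a temp file, run the command, restore the saved handle, read the file back) is the same dance as os.dup/os.dup2 in python; a sketch, with names of my own choosing:

```python
import os, sys, tempfile

def run_capturing_stderr(action):
    # duplicate stderr (cf. GHC.Handle.hDuplicate), point fd 2 at a temp
    # file, run the action, then restore the duplicate and read the file
    fd, path = tempfile.mkstemp()
    saved = os.dup(2)
    try:
        os.dup2(fd, 2)
        action()
        sys.stderr.flush()
    finally:
        os.dup2(saved, 2)   # cf. GHC.Handle.hDuplicateTo ste stderr
        os.close(saved)
        os.close(fd)
    with open(path) as f:
        err = f.read()
    os.remove(path)
    return err
```

the try/finally here plays the role of the temporary :afterCmd: the restore step must run even if the command fails, or the terminal loses its stderr.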
but that kind of edit/compile loop is better handled inside your editor (in vim, :help quickfix).

== wrapping up, keeping a list

finally, let's keep a record of the commands we've added, so that we can find out what they are and what they do:

    let { cmds = [".","pwd","ls","redir","redirErr","grep","find","b","le","defs"]
        ; defs "list" = return $ unlines $
            "putStrLn \"\"":
            [":"++cmd++" --help"| cmd <- cmds]++
            ["putStrLn \"\""]
        ; defs "undef" = return $ unlines [":undef "++cmd| cmd <- cmds]
        ; defs _ = return "putStrLn \"usage: :defs {list,undef}\"" }
    :def defs cmdHelp defs ":defs {list,undef}\t-- list or undefine user-defined commands"

we simply list the commands manually (we could make them self-registering by keeping an IORef somewhere, but let's keep things simple for now), and provide an administrative interface (:defs) to list (call --help for all) or undefine (useful if you edit your .ghci file and want to reload it) all our commands. that's how we were able to type ':defs list' at the beginning of this session to get our overview of available user-defined commands.

i hope you enjoyed this little tour of ghci, and find the proposed commands and techniques useful.

claus
Opened 3 years ago
Closed 17 months ago
#3647 closed Bug (Fixed)
Ghost border artifact with _GDIPlus_ImageResize
Description
The function _GDIPlus_ImageResize creates images that exhibit the "Ghost Border" problem explained here.
It's not all that noticeable unless you're looking for it, however it's significant for my use case. I'm capturing images of text, up-sizing it by 4X with the High Quality Bicubic mode, and then feeding the resulting image into Tesseract OCR. Up-sizing the image drastically improves the OCR, but for some image captures there is not much of a clean border area around the text to begin with, and in those cases the border artifact on the scaled image can sometimes cause incorrect OCR results.
Please consider adding the TileFlipXY option into the resize logic so that the edges of the resized image won't have this artifact. Even aside from my own use case, this is an artifact that somewhat reduces the potential quality of the image.
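A toy illustration of the effect (plain Python, not GDI+): resampling just outside the image border interpolates against whatever the wrap mode supplies, so the default constant background darkens the edge sample while mirroring (TileFlipXY) preserves it:

```python
def sample_outside_edge(row, wrap):
    # linearly interpolate half a pixel left of the first sample; the
    # out-of-bounds neighbour is supplied by the wrap mode
    if wrap == 'constant':       # default behaviour: background (0) leaks in
        neighbour = 0.0
    elif wrap == 'tile_flip':    # TileFlipXY: mirror the image at the edge
        neighbour = row[0]
    else:
        raise ValueError(wrap)
    return 0.5 * neighbour + 0.5 * row[0]

row = [200.0, 200.0, 200.0]                   # a solid bright region
print(sample_outside_edge(row, 'constant'))   # 100.0 -> visible dark fringe
print(sample_outside_edge(row, 'tile_flip'))  # 200.0 -> no fringe
```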
I'm not sure if TileFlipXY should always be active, or if perhaps it should just be an optional parameter? Best for you to decide I suppose.
Thanks!
Attachments (6)
Change History (14)
comment:1 Changed 3 years ago by anonymous
comment:2 Changed 3 years ago by anonymous
More good info about this issue here: here
comment:3 Changed 3 years ago by anonymous
Some obligatory test code:
#include <GDIPlus.au3>
_GDIPlus_Startup()
Local $hImage = _GDIPlus_ImageLoadFromFile(@ScriptDir & "\TestImage.png")
Local $iWidth = _GDIPlus_ImageGetWidth($hImage)
Local $iHeight = _GDIPlus_ImageGetHeight($hImage)
Local $hBitmap = _GDIPlus_ImageResize( _
$hImage, $iWidth * 4, $iHeight * 4, _
$GDIP_INTERPOLATIONMODE_HIGHQUALITYBICUBIC)
_GDIPlus_ImageSaveToFile($hBitmap, @ScriptDir & "\TestResult.png")
Sleep(500)
_GDIPlus_BitmapDispose($hBitmap)
_GDIPlus_ImageDispose($hImage)
_GDIPlus_Shutdown()
comment:4 Changed 3 years ago by anonymous
See also (somewhat) related ticket # 3650
comment:5 Changed 3 years ago by mdepot@…
I created a fix for this issue, as well as the issue in ticket #3650. The fix is within the attached file "Test Resize.au3" It contains replacement functions for _GDIPlus_ImageScale and _GDIPlus_ImageResize, as well as a new function to make a dll call to SetWrapMode on an ImageAttribute. These new functions add an ImageAttribute with WrapMode set to TileFlipXY. Currently it's passing the explicit integer 3 to SetWrapMode, which corresponds to TileFlipXY, but that 3 should be replaced with whatever the proper GDI variable is from the headers. Sorry I didn't have that info handy when I made this. Another improvement that could be made is for _GDIPlus_ImageScale to call _GDIPlus_ImageResize instead of repeating a lot of the same code within. Again, this was just a quick fix though. I'm also attaching some sample images showing old and new results.
Changed 3 years ago by mdepot@…
comment:6 Changed 3 years ago by anonymous
comment:7 Changed 17 months ago by Jpm
- Component changed from AutoIt to Standard UDFs
- Owner set to Jpm
- Status changed from new to assigned
comment:8 Changed 17 months ago by Jpm
- Milestone set to 3.3.15.4
Or to put it another way, in order to prevent the edge from doing interpolation against the default background, provide the DrawImage method with an ImageAttributes instance which has WrapMode set to TileFlipXY.
Still not quite comfortable with PHP 5.3 namespaces? No problem! Give us
120 seconds and we'll introduce you to all the crazy characters ("namespace",
"use" and "\") and show you how they work.
Ah, sorry I misunderstood you. I'm glad you discovered the answer :)
Hey, thanks for your reply. However, I meant just the beginning of the file, when you declare to 'use' a class from another, non-global namespace. I found the answer somewhere else, turns out that it doesn't make any difference, but it should be written without the backslash (as a convention). Cheers!
Hey Dan Marshall
If you prepend a backslash when instantiating an object, i.e. $movie = new \Movie() then, PHP will try to find that class from the global scope (The global scope is where built-in classes live, like for example $today = new \DateTime())
I hope it helps. Cheers!
Thanks for the tutorial, great! Could you elaborate on the backslashes, though? Should it be 'use \...' or 'use ...' from inside a namespace?
Thanks, kept your 120 second promise :)
Hey Yassin,
Glad you liked this tutorial! And thanks for your feedback :)
Cheers!
Very clear! Thank you very much for that tutorial! :) <3
easy, fast and great!!!! Thanks
Hey Jeric,
We're glad you like it! Honestly, this is my favorite tutorial :p
Sweet As! Thank you!
Hey Mr TK
In this case we are using "require" in a file (some-other-file.php) that doesn't define a namespace; the one that has a namespace is the Foo class. So if you want to create a Foo object in another file, you have two options:
1) In that file, define its namespace and write a "use" statement that points to the "Foo" class
2) Don't define a namespace and use "require" or "include" that points for that class
And why do you need to require a file when namespacing? Isn't namespacing intended to do away with requiring?
Awesome! Really save my time!
I think it would actually help! I generally have the same 'problem'. But not putting sub-titles also makes us focus more on the screen and the intonation of the voice.
So having the choice of activating sub-titles would be a benefit (in my opinion).
Waow ! These 2 minutes were incredibly efficient (in my case). Thank you for this course, I feel like I've just discovered my new Bible with knpuniversity :) :-)
Before diving into how you can access the 3D dust map, we'd like to make you aware of a few important points about the dataset.
There are two versions of the 3D dust map, which we refer to as Bayestar17 (Green et al. 2018) and Bayestar15 (Green et al. 2015). Please refer to the papers for detailed differences between the maps. Bayestar17 supersedes Bayestar15.
The units of Bayestar17 and Bayestar15 differ. This is primarily due to the different extinction laws assumed by the two versions of the dust map. While Bayestar17 assumes the extinction law derived by Schlafly et al. (2016), Bayestar15 relies on the extinction laws of Fitzpatrick (1999) and Cardelli et al. (1989). Both Bayestar17 and Bayestar15 are intended to provide reddening in a similar unit as SFD (Schlegel et al. 1998), which is not quite equal to E(B-V ) (see the recalibration of SFD undertaken in Schlafly & Finkbeiner 2011).
In order to convert Bayestar17 to extinction in Pan-STARRS 1 or 2MASS passbands, multiply the value reported by the map by the following coefficients:
In order to convert to extinction or reddening in other passbands, one must assume some relation between extinction in Pan-STARRS 1 or 2MASS passbands and other passbands. For example, applying the RV = 3.1 Fitzpatrick (1999) reddening law to a 7000 K source spectrum, as done in Table 6 of Schlafly & Finkbeiner (2011), one obtains the relations
Because the Fitzpatrick (1999) reddening law is different from the reddening law we assumed when producing Bayestar17, the two above relations give slightly different conversions between the values reported by Bayestar17 and E(B-V ). Using Eq. (1), we find that E(B-V ) = 0.884 × (Bayestar17). Using Eq. (2), we find that E(B-V ) = 0.996 × (Bayestar17).
The overall normalization of Bayestar17 was chosen so that one unit of Bayestar17 reddening predicts the same E(g-r ) as one unit of the original SFD reddening map. That means that if one assumes Eq. (1) to hold, then Bayestar17 is equivalent to SFD, and reddening in non-PS1 passbands can be obtained by multiplying Bayestar17 by the coefficients in Table 6 of Schlafly & Finkbeiner (2011).
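For scripting, the two conversions above can be collected into a small helper. This is only a convenience sketch (the function and key names are ours, not part of any package), using the factors 0.884 and 0.996 quoted above:

```python
# Factors quoted above for converting Bayestar17 map values to E(B-V),
# depending on which of the two relations (Eq. 1 or Eq. 2) one adopts.
EBV_PER_BAYESTAR17 = {'eq1': 0.884, 'eq2': 0.996}

def bayestar17_to_ebv(map_value, relation='eq1'):
    return EBV_PER_BAYESTAR17[relation] * map_value
```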
In contrast, Bayestar15 uses the same units as Schlegel, Finkbeiner & Davis (1998) reddenings. Although this was originally supposed to be the excess B-V in the Landolt filter system, Schlafly & Finkbeiner (2011) found that it differs somewhat from the true stellar B-V excess. Therefore, in order to convert our values of E(B-V ) to extinction in various filter systems, consult Table 6 of Schlafly & Finkbeiner (2011) (use the values in the RV = 3.1 column), which are based on the Fitzpatrick (1999) reddening law. For 2MASS passbands, Bayestar15 assumes a Cardelli et al. (1989) reddening law.
The extinction coefficients assumed by Bayestar15 are as follows:
Note that the Bayestar15 extinction coefficients differ more from those used by Bayestar17 in the near-infrared than in the optical. This is due to the uncertainty in the gray component of the extinction vector (corresponding to an overall additive change to all extinction coefficients), which is not as well constrained as the ratios of reddenings in different filter combinations. For example, the ratio of E(g-r ) to E(J-K ) is better constrained than Ag, Ar, AJ or AKs individually. Because the near-infrared extinction coefficients are smaller than those at optical wavelengths, near-infrared extinction estimates are more affected (percentually) by uncertainty in the gray component than optical extinctions.
The Bayestar17 extinction coefficients were derived under the assumption of zero reddening in the WISE W2 passband. This necessarily produces an underestimate of the gray component of the extinction vector. If one instead assumes that AH / AK = 1.55 (Indebetouw et al. 2005), then an additional 0.141 should be added to all of the Bayestar17 extinction coefficients. If one assumes that AH / AK = 1.74 (Nishiyama et al. 2006), then one should add in 0.063 to all of the Bayestar17 extinction coefficients. The gray component that should be added into the Bayestar17 extinction coefficients is therefore in the range 0 ≲ ΔR ≲ 0.141.
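The gray-component bookkeeping can be written out explicitly. The helper below is hypothetical (not part of dustmaps); the offsets 0.141 and 0.063 are the ones quoted above:

```python
# Gray-component offsets quoted above, keyed by the assumed A_H/A_K ratio.
GRAY_OFFSET = {
    'none': 0.0,               # zero W2 extinction, as published
    'indebetouw2005': 0.141,   # A_H / A_K = 1.55
    'nishiyama2006': 0.063,    # A_H / A_K = 1.74
}

def adjust_coefficients(coeffs, assumption='none'):
    # add the same gray term to every Bayestar17 extinction coefficient
    dR = GRAY_OFFSET[assumption]
    return {band: R + dR for band, R in coeffs.items()}
```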
For each sightline, we provide multiple estimates of the distance vs. reddening relationship. Alongside the maximum-probability density estimate (essentially, the best-fit) distance-reddening curve, we also provide samples of the distance-reddening curve, which are representative of the formal uncertainty in the relation. Most statistics you may wish to derive, like the median reddening to a given distance, are best determined by using the representative samples, rather than the best-fit relation.
We include a number of pieces of information on the reliability of each pixel. A convergence flag marks whether our fit to the line-of-sight reddening curve converged. This is a formal indicator, meaning that we correctly sampled the spread of possible distance-reddening relations, given our model assumptions. It does not, however, indicate that our model assumptions were correct for that pixel. This convergence flag is based on the Gelman-Rubin diagnostic, a method of flagging Markov Chain Monte Carlo non-convergence.
Additionally, minimum and maximum reliable distances are provided for each pixel, based on the distribution of stars along the sightline. Because we determine the presence of dust by observing differential reddening of foreground and background stars, we cannot trace dust beyond the farthest stars in a given pixel. Our estimates of dust reddening closer than the nearest observed stars in a pixel are similarly uncertain. We therefore urge caution in using our maps outside of the distance range indicated for each pixel.
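In practice, the convergence flag and the reliable distance range combine into a simple mask. The sketch below is illustrative only; the key names follow the 'full'-mode query output documented further down this page:

```python
def reliable(pixel, dist_mod):
    # trust a sightline's reddening only where the fit converged and the
    # requested distance modulus lies inside the pixel's reliable range
    return (pixel['converged'] == 1
            and pixel['DM_reliable_min'] <= dist_mod <= pixel['DM_reliable_max'])

pix = {'converged': 1, 'DM_reliable_min': 6.0, 'DM_reliable_max': 15.0}
print(reliable(pix, 10.0))  # True
print(reliable(pix, 18.0))  # False: beyond the farthest well-constrained stars
```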
The Python package dustmaps provides functions both for querying Bayestar15/17 and for downloading the maps. The dustmaps package also makes a number of additional 3D and 2D dust maps available through a uniform framework. For users who do not wish to download the entire Bayestar15/17 maps, dustmaps provides functions for querying these maps remotely.
After installing dustmaps and fetching the Bayestar data cubes, one can query it as follows:
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as units
>>> from dustmaps.bayestar import BayestarQuery
>>>
>>> bayestar = BayestarQuery(version='bayestar2017')  # Bayestar2017 is the default
>>> coords = SkyCoord(90.*units.deg, 30.*units.deg,
...                   distance=100.*units.pc, frame='galactic')
>>>
>>> reddening = bayestar(coords, mode='median')
>>> print(reddening)
0.00621500005946
If you prefer not to download the full Bayestar data cubes, you can query the map remotely:
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as units
>>> from dustmaps.bayestar import BayestarWebQuery
>>>
>>> bayestar = BayestarWebQuery(version='bayestar2017')
>>> coords = SkyCoord(90.*units.deg, 30.*units.deg,
...                   distance=100.*units.pc, frame='galactic')
>>>
>>> reddening = bayestar(coords, mode='random_sample')
>>> print(reddening)
0.00590000022203
The above code will contact our server to retrieve only the coordinates you're interested in. If you're interested in only a few - or even a few thousand - coordinates, this is the most efficient way to query the map.
Using dustmaps, you can also query multiple coordinates at once. For example, the following snippet of code remotely queries the 90ᵗʰ percentile of reddening in the Bayestar17 map at an array of coordinates:
>>> import numpy as np
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as units
>>> from dustmaps.bayestar import BayestarWebQuery
>>>
>>> bayestar = BayestarWebQuery()  # Uses Bayestar2017 by default.
>>>
>>> l = np.array([30., 60., 90.])
>>> b = np.array([-15., 10., 70.])
>>> d = np.array([0.1, 3., 0.5])
>>> coords = SkyCoord(l*units.deg, b*units.deg,
...                   distance=d*units.kpc, frame='galactic')
>>>
>>> reddening = bayestar(coords, mode='percentile', pct=90.)
>>> print(reddening)
[ 0.085303    0.22474321  0.03297591]
The dustmaps package can be used to query a number of dust maps beyond Bayestar15/17. For example, you can query Schlegel, Finkbeiner & Davis (1998), either from a version stored on local disk or remotely:
>>> from astropy.coordinates import SkyCoord
>>> import astropy.units as units
>>> from dustmaps.sfd import SFDWebQuery
>>>
>>> sfd = SFDWebQuery()
>>> coords = SkyCoord(45.*units.deg, 45.*units.deg, frame='icrs')  # Equatorial
>>>
>>> ebv_sfd = sfd(coords)
>>> print(ebv_sfd)
0.22122733295
See the dustmaps documentation for more information.
If you prefer not to download the dustmaps Python package, or if you don't use Python, you can still remotely query the older version of our map, Bayestar15, with the following function, given in both Python and IDL. We strongly recommend the dustmaps package, but the following code is an alternative way to access the older Bayestar15 map.
import json, requests

def query(lon, lat, coordsys='gal', mode='full'):
    '''
    Send a line-of-sight reddening query to the Argonaut web server.

    Inputs:
      lon, lat: longitude and latitude, in degrees.
      coordsys: 'gal' for Galactic, 'equ' for Equatorial (J2000).
      mode: 'full', 'lite' or 'sfd'

    In 'full' mode, outputs a dictionary containing, among other things:
      'distmod':   The distance moduli that define the distance bins.
      'best':      The best-fit (maximum probability density)
                   line-of-sight reddening, in units of SFD-equivalent
                   E(B-V), to each distance modulus in 'distmod.' See
                   Schlafly & Finkbeiner (2011) for a definition of the
                   reddening vector (use R_V = 3.1).
      'samples':   Samples of the line-of-sight reddening, drawn from
                   the probability density on reddening profiles.
      'success':   1 if the query succeeded, and 0 otherwise.
      'converged': 1 if the line-of-sight reddening fit converged, and
                   0 otherwise.
      'n_stars':   # of stars used to fit the line-of-sight reddening.
      'DM_reliable_min': Minimum reliable distance modulus in pixel.
      'DM_reliable_max': Maximum reliable distance modulus in pixel.

    Less information is returned in 'lite' mode, while in 'sfd' mode,
    the Schlegel, Finkbeiner & Davis (1998) E(B-V) is returned.
    '''
    url = ''
    payload = {'mode': mode}

    if coordsys.lower() in ['gal', 'g']:
        payload['l'] = lon
        payload['b'] = lat
    elif coordsys.lower() in ['equ', 'e']:
        payload['ra'] = lon
        payload['dec'] = lat
    else:
        raise ValueError("coordsys '{0}' not understood.".format(coordsys))

    headers = {'content-type': 'application/json'}
    r = requests.post(url, data=json.dumps(payload), headers=headers)

    try:
        r.raise_for_status()
    except requests.exceptions.HTTPError as e:
        print('Response received from Argonaut:')
        print(r.text)
        raise e

    return json.loads(r.text)
This code can be adapted for any programming language that can issue HTTP POST requests. The code can also be found on GitHub.
To query one sightline, say, Galactic coordinates (ℓ, b) = (90°, 10°), you would call
>>> # Query the Galactic coordinates (l, b) = (90, 10):
>>> qresult = query(90, 10, coordsys='gal')
>>>
>>> # See what information is returned for each pixel:
>>> qresult.keys()
[u'b', u'GR', u'distmod', u'l', u'DM_reliable_max', u'ra', u'samples',
 u'n_stars', u'converged', u'success', u'dec', u'DM_reliable_min', u'best']
>>>
>>> qresult['n_stars']
750
>>> qresult['converged']
1
>>> # Get the best-fit E(B-V) in each distance slice
>>> qresult['best']
[0.00426, 0.00678, 0.0074, 0.00948, 0.01202, 0.01623, 0.01815, 0.0245,
 0.0887, 0.09576, 0.10139, 0.12954, 0.1328, 0.21297, 0.23867, 0.24461,
 0.37452, 0.37671, 0.37684, 0.37693, 0.37695, 0.37695, 0.37696, 0.37698,
 0.37698, 0.37699, 0.37699, 0.377, 0.37705, 0.37708, 0.37711]
>>>
>>> # See the distance modulus of each distance slice
>>> qresult['distmod']
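The 'distmod' entries are distance moduli μ. To turn a bin edge into a physical distance, invert μ = 5 log10(d / 10 pc); a one-line helper (name ours):

```python
def distmod_to_pc(mu):
    # invert mu = 5 * log10(d / 10 pc)
    return 10.0 ** (mu / 5.0 + 1.0)

print(distmod_to_pc(10.0))  # 1000.0, i.e. 1 kpc
```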
You can also query multiple sightlines simultaneously, simply by passing longitude and latitude as lists:
>>> qresult = query([45, 170, 250], [0, -20, 40])
>>>
>>> qresult['n_stars']
[352, 162, 254]
>>> qresult['converged']
[1, 1, 1]
>>> # Look at the best fit for the first pixel:
>>> qresult['best'][0]
[0.00545, 0.00742, 0.00805, 0.01069, 0.02103, 0.02718, 0.02955, 0.03305,
 0.36131, 0.37278, 0.38425, 0.41758, 1.53727, 1.55566, 1.65976, 1.67286,
 1.78662, 1.79262, 1.88519, 1.94605, 1.95938, 2.0443, 2.39438, 2.43858,
 2.49927, 2.54787, 2.58704, 2.58738, 2.58754, 2.58754, 2.58755]
If you are going to be querying large numbers of sightlines at once, we kindly request that you use this batch syntax, rather than calling query() in a loop. It will be faster, because you only have to contact the server once, and it will reduce the load on the Argonaut server.

Two additional query modes are provided, beyond the default 'full' mode. If mode = 'lite' is passed to the query() function, then less information is returned per sightline:
>>> qresult = query(180, 0, coordsys='gal', mode='lite')
>>> qresult.keys()
[u'b', u'success', u'distmod', u'sigma', u'median', u'l', u'DM_reliable_max',
 u'ra', u'n_stars', u'converged', u'dec', u'DM_reliable_min', u'best']
>>>
>>> # Get the median E(B-V) to each distance slice:
>>> qresult['median']
[0.0204, 0.02747, 0.03027, 0.03036, 0.03047, 0.05214, 0.05523, 0.0748,
 0.07807, 0.10002, 0.13699, 0.2013, 0.20158, 0.20734, 0.23129, 0.73734,
 0.76125, 0.83905, 0.90236, 1.05944, 1.08085, 1.11408, 1.11925, 1.12212,
 1.12285, 1.12289, 1.12297, 1.12306, 1.12308, 1.12309, 1.12312]
>>>
>>> # Get the standard deviation of E(B-V) in each slice
>>> # (actually, half the difference between the 84th and 16th percentiles):
>>> qresult['sigma']
[0.03226, 0.03476, 0.03452, 0.03442, 0.03439, 0.03567, 0.03625, 0.0317,
 0.03238, 0.03326, 0.05249, 0.0401, 0.03919, 0.03278, 0.08339, 0.05099,
 0.03615, 0.04552, 0.05177, 0.03678, 0.03552, 0.05246, 0.05055, 0.05361,
 0.05422, 0.0538, 0.05381, 0.05381, 0.0538, 0.0538, 0.05379]
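The 'sigma' entries are defined above as half the difference between the 84th and 16th percentiles. Given a single pixel's raw samples (shape: number of samples by number of distance bins), the same statistic can be reproduced directly (helper name ours):

```python
import numpy as np

def sigma_from_samples(samples):
    # half the spread between the 84th and 16th percentiles, taken over
    # the sample axis, one value per distance bin
    hi = np.percentile(samples, 84., axis=0)
    lo = np.percentile(samples, 16., axis=0)
    return 0.5 * (hi - lo)
```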
Finally, for the convenience of many users who also want to query the two-dimensional Schlegel, Finkbeiner & Davis (1998) map of dust reddening, the option mode = 'sfd' is also provided:
>>> qresult = query([0, 10, 15], [75, 80, 85], coordsys='gal', mode='sfd')
>>>
>>> qresult.keys()
[u'EBV_SFD', u'b', u'dec', u'l', u'ra']
>>>
>>> # E(B-V), in magnitudes:
>>> qresult['EBV_SFD']
[0.02119, 0.01813, 0.01352]
If you prefer to work with the entire 3D map directly, rather than using the Python package dustmaps, you can obtain the data cube in either HDF5 or FITS format from the Harvard Dataverse: Bayestar17 and Bayestar15.
The full map comes to over 4.5 GB in compressed HDF5 format, so if you're only interested in individual sightlines, we strongly recommend you use the remote query API in the dustmaps package.
The HDF5 format is a self-documenting, highly flexible format for scientific data. It has a number of powerful features, such as internal compression and compound datatypes (similar to numpy structured arrays), and has bindings in many different programming languages, including C, Python, Fortran and IDL.
The HDF5 file we provide has four datasets: /pixel_info, /samples, /best_fit and /GRDiagnostic.
All four datasets are ordered in the same way, so that the nth element of the /samples dataset corresponds to the same pixel as described by the nth entry in /pixel_info. As our 3D dust map contains pixels of different sizes, /pixel_info specifies each pixel by a HEALPix nside and nested pixel index.

An example in Python will help illustrate the structure of the file:
>>> import numpy as np
>>> import h5py
>>>
>>> f = h5py.File('dust-map-3d.h5', 'r')
>>> pix_info = f['/pixel_info'][:]
>>> samples = f['/samples'][:]
>>> best_fit = f['/best_fit'][:]
>>> GR = f['/GRDiagnostic'][:]
>>> f.close()
>>>
>>> print(pix_info['nside'])
[512 512 512 ..., 1024 1024 1024]
>>>
>>> print(pix_info['healpix_index'])
[1461557 1461559 1461602 ..., 6062092 6062096 6062112]
>>>
>>> print(pix_info['n_stars'])
[628 622 688 ..., 322 370 272]
>>>
>>> # Best-fit E(B-V) in each pixel
>>> best_fit.shape  # (# of pixels, # of distance bins)
(2437292, 31)
>>>
>>> # Get the best-fit E(B-V) in each distance bin for the first pixel
>>> best_fit[0]
array([ 0.00401   ,  0.00554   ,  0.012     ,  0.01245   ,  0.01769   ,
        0.02089   ,  0.02355   ,  0.03183   ,  0.04297   ,  0.08127   ,
        0.11928   ,  0.1384    ,  0.95464998,  0.9813    ,  1.50296998,
        1.55045998,  1.81668997,  1.86567998,  1.9109    ,  2.00281   ,
        2.01739001,  2.02519011,  2.02575994,  2.03046989,  2.03072   ,
        2.03102994,  2.03109002,  2.03109002,  2.03110003,  2.03110003,
        2.03111005], dtype=float32)
>>>
>>> # Samples of E(B-V) from the Markov Chain
>>> samples.shape  # (# of pixels, # of samples, # of distance bins)
(2437292, 20, 31)
>>>
>>> # The Gelman-Rubin convergence diagnostic in the first pixel.
>>> # Each distance bin has a separate value.
>>> # Typically, GR > 1.1 indicates non-convergence.
>>> GR[0]
array([ 1.01499999,  1.01999998,  1.01900005,  1.01699996,  1.01999998,
        1.01999998,  1.02400005,  1.01600003,  1.00800002,  1.00600004,
        1.00100005,  1.00199997,  1.00300002,  1.02499998,  1.01699996,
        1.00300002,  1.01300001,  1.00300002,  1.00199997,  1.00199997,
        1.00199997,  1.00199997,  1.00100005,  1.00100005,  1.00100005,
        1.00100005,  1.00100005,  1.00100005,  1.00100005,  1.00100005,
        1.00100005], dtype=float32)
As a simple example of how to work with the full data cube, we will plot the median reddening in the farthest distance bin. We begin by opening the HDF5 file and extracting the information we need:
>>> import numpy as np
>>> import h5py
>>> import healpy as hp
>>> import matplotlib.pyplot as plt
>>>
>>> # Open the file and extract pixel information and median reddening in the far limit
>>> f = h5py.File('dust-map-3d.h5', 'r')
>>> pix_info = f['/pixel_info'][:]
>>> EBV_far_median = np.median(f['/samples'][:,:,-1], axis=1)
>>> f.close()
The variable pix_info specifies the location of each pixel (by nside and healpix_index), while EBV_far_median contains the median reddening in each pixel in the farthest distance bin. We want to construct a single-resolution HEALPix map, which we can use standard library routines to plot.
We find the maximum nside present in the map, and create an empty array, pix_val, to house the upsampled map:
>>> # Construct an empty map at the highest HEALPix resolution present in the map
>>> nside_max = np.max(pix_info['nside'])
>>> n_pix = hp.pixelfunc.nside2npix(nside_max)
>>> pix_val = np.empty(n_pix, dtype='f8')
>>> pix_val[:] = np.nan
Now, we have to fill the upsampled map, by putting every pixel in the original map into the correct location(s) in the upsampled map. Because our original map has multiple resolutions, pixels that are below the maximum resolution correspond to multiple pixels in the upsampled map. We loop through the nside resolutions present in the original map, placing all the pixels of the same resolution into the upsampled map at once:
>>> # Fill the upsampled map
>>> for nside in np.unique(pix_info['nside']):
...     # Get indices of all pixels at current nside level
...     idx = pix_info['nside'] == nside
...
...     # Extract E(B-V) of each selected pixel
...     pix_val_n = EBV_far_median[idx]
...
...     # Determine nested index of each selected pixel in upsampled map
...     mult_factor = (nside_max/nside)**2
...     pix_idx_n = pix_info['healpix_index'][idx] * mult_factor
...
...     # Write the selected pixels into the upsampled map
...     for offset in range(mult_factor):
...         pix_val[pix_idx_n+offset] = pix_val_n[:]
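The index arithmetic in that loop deserves a note: in nested ordering, each pixel at resolution nside corresponds to a contiguous block of (nside_max/nside)² pixel indices at nside_max, starting at healpix_index × mult_factor. A tiny self-contained illustration (function name ours):

```python
def children_at(nside, index, nside_max):
    # nested-ordering descendants of one pixel at a finer resolution:
    # a contiguous run of mult_factor indices starting at index * mult_factor
    mult_factor = (nside_max // nside) ** 2
    start = index * mult_factor
    return range(start, start + mult_factor)

# a single nside=512 pixel covers four nside=1024 pixels
print(list(children_at(512, 7, 1024)))  # [28, 29, 30, 31]
```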
Now we have an array, pix_val, that represents reddening in the farthest distance bin. We can use one of healpy's built-in visualization functions to plot the map:
>>> # Plot the results using healpy's matplotlib routines
>>> hp.visufunc.mollview(pix_val, nest=True, xsize=4000, min=0., max=4.,
...                      rot=(130., 0.), format=r'$%g$',
...                      title=r'$\mathrm{E} ( B-V \, )$', unit='$\mathrm{mags}$')
>>> plt.show()
Here's the resulting map. Note that it's centered on (ℓ, b) = (130°, 0°):
All code snippets included on this page are covered by the MIT License. In other words, feel free to use the code provided here in your own work. | http://argonaut.skymaps.info/usage | CC-MAIN-2018-34 | refinedweb | 3,168 | 55.24 |
Some time ago I published an article, How to set programmatically a value of a watermarked TextBox via JavaScript, about some specifics of working with a text input box decorated with a TextboxWatermark AJAX extender control from the Ajax Control Toolkit library. The technique described in the article has proven useful for a number of developers, so when a new release of the Ajax Control Toolkit was recently announced, I decided to update the article to cover some of the breaking changes in the Ajax Control Toolkit components.
TextboxWatermark
One of the most significant changes in the recent Ajax Control Toolkit release, one that affected everything, is the renaming of namespaces and control names. Most of the JavaScript classes in the ACT have been moved into Sys.Extended.UI, and some of them have been renamed. Correspondingly, the AjaxControlToolkitTextboxWrapper class that is crucial for the technique described in this article is now called Sys.Extended.UI.TextBoxWrapper, and this is a breaking change. Most of its methods haven't been renamed, which is good news; however, a new way of using the wrapper has been introduced.
Below are the code examples demonstrating how to write the code correctly with the new version of the ACT.
First, we need to acquire an instance of the Sys.Extended.UI.TextBoxWrapper class for our textbox control; this is the newly introduced technique compared to the previous version of the ACT:
// get the instance of a textbox element
var textBox = $get(textBoxId);
// use the get_Wrapper static method of the TextBoxWrapper class to get the instance of the wrapper for the textbox
var wrapper = Sys.Extended.UI.TextBoxWrapper.get_Wrapper(textBox);
In the code above, I first assume that the ACT is present, so I don't have to check that the Sys.Extended.UI namespace is defined. Secondly, the code above is safe even if the textbox is not watermarked: the get_Wrapper static method always returns an instance of TextBoxWrapper; a new instance is created if the textbox is not watermarked.
Now we can set or get a value of the textbox using the instance of the TextBoxWrapper:
// get the textbox value
var oldValue = wrapper.get_Value();
// set the textbox value
wrapper.set_Value(newValue);
In a nutshell, there are two major changes introduced in the new ACT release that affect the coding technique described here: the name of the textbox wrapper class has changed, and it is no longer necessary to check whether the textbox is watermarked before using the technique above.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) | http://www.codeproject.com/Articles/532591/Howplustoplussetplusprogrammaticallyplusaplusvalue | CC-MAIN-2015-22 | refinedweb | 438 | 56.59 |