text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
digitalmars.D.bugs - [Issue 2876] New: suggest keyword:auto return
- d-bugmail puremagic.com Apr 22 2009
- d-bugmail puremagic.com Apr 22 2009
- d-bugmail puremagic.com Apr 22 2009
- d-bugmail puremagic.com Jun 08 2009
- d-bugmail puremagic.com Dec 26 2009 Summary: suggest keyword:auto return Product: D Version: unspecified Platform: PC OS/Version: Windows Status: NEW Severity: normal Priority: P2 Component: AssignedTo: bugzilla digitalmars.com ReportedBy: galaxylang gmail.com I'm interest in D's metaprogram,surely D is the most powerful language now. However when i write D program,i found if we can program by this way: auto ReturnStatic(int i)(int a) { int f1(int b){return a+b;} double f2(double c){return c+a;} static if(i==0){return &f1;} else return &f2; } this will be great enhance the language power --
Apr 22 2009 ------- Comment #1 from fvbommel wxs.nl 2009-04-22 07:56 ------- *** Bug 2877 has been marked as a duplicate of this bug. *** --
Apr 22 2009 fvbommel wxs.nl changed: What |Removed |Added ---------------------------------------------------------------------------- Summary|suggest keyword:auto return |suggest keyword:auto return ------- Comment #2 from fvbommel wxs.nl 2009-04-22 07:57 ------- Isn't this already implemented (in D2)? --
Apr 22 2009 Brad Roberts <braddr puremagic.com> changed: What |Removed |Added ---------------------------------------------------------------------------- CC| |braddr puremagic.com Platform|x86 |All Version|unspecified |future Summary|suggest keyword:auto return |Enhancement to 'auto' | |return OS/Version|Windows |All Severity|normal |enhancement --- Comment #3 from Brad Roberts <braddr puremagic.com> 2009-06-08 01:08:39 PDT --- I doubt the current auto return deduction could handle this code, but I haven't tried it. Changing it to an enhancement request for a future version. -- Configure issuemail: ------- You are receiving this mail because: -------
Jun 08 2009 Witold Baryluk <baryluk smp.if.uj.edu.pl> changed: What |Removed |Added ---------------------------------------------------------------------------- Status|NEW |RESOLVED CC| |baryluk smp.if.uj.edu.pl Resolution| |WORKSFORME --- Comment #4 from Witold Baryluk <baryluk smp.if.uj.edu.pl> 2009-12-26 18:52:06 PST --- I don't see any problem with this example. It is quite simple to compile, and in fact it works in DMD 2.037 as desired. This example uses three things which are fully supported: - static if over template parameters, no problem for long time - coping/allocating variable on heap if they are used in delegate (or nested functions) and returned from function (escaping scope), no problem in most cases - infering return type, no problem as we have exactly one return (after static if selection), and its type is quite "simple" (it is delegate int(int), or delegate float(float)), and compiler knows this. Please close this bug with WORKSFORME. -- Configure issuemail: ------- You are receiving this mail because: -------
Dec 26 2009 | http://www.digitalmars.com/d/archives/digitalmars/D/bugs/Issue_2876_New_suggest_keyword_auto_return_17080.html | CC-MAIN-2013-48 | en | refinedweb |
*
A friendly place for programming greenhorns!
Big Moose Saloon
Search
|
Java FAQ
|
Recent Topics
Register / Login
JavaRanch
»
Java Forums
»
Certification
»
Programmer Certification (SCJP/OCPJP)
Author
SCJP by KS & BB: Static access to non-static enum
Matt Vanecek
Greenhorn
Joined: Oct 30, 2008
Posts: 3
posted
Oct 30, 2008 11:38:00
0
Decided to get certified, so picked up this book yesterday. I have a couple questions on the self-test in Chapter 1 that didn't seem to be covered in the chapter:
1. ST question 4:
enum Animals { DOG("woof"), CAT("meow"), FISH("burble"); String sound; Animals(String s) { sound = s; } } public class TestEnum { static Animals a; static String s; // my addition public static void main(String [] args) { System.out.println(a.DOG.sound + " " + a.FISH.sound); // prints "woof burble" System.out.println("s = " + s); // print "null" } }
My assumption was that TestEnum.a would be null, since it is not initialized. Any other static field would be null, per default initialization. However, aside from static access using a non-static reference (a.DOG instead of Animals.DOG), this code works. Are statically-declared enums treated differently by Java? Where is this documented?
2: ST Question 9:
public class TestDays { public enum Days { MON, TUE, WED }; public static void main(String [] args) { for (Days d : Days.values()) ; Days [] d2 = Days.values(); System.out.println(d2[2]); } }
How is TestDays.main() able to access Days, when Days appears to me to be an instance declaration?
I love enums, but have not ever used them in a way that is contrary to normal Java access rules (e.g., can't access non-static fields from a static method, etc.) Is this contradiction further explained in the rest of the book, or am I missing something (seriously)?
Thanks,
Matt
Ankit Garg
Saloon Keeper
Joined: Aug 03, 2008
Posts: 9258
8
I like...
posted
Oct 30, 2008 13:04:00
0
For the first one, let me show you what the code of the enum will look like after compilation
class Animals extends Enum { public static Animals DOG = new Animals("woof"); public static Animals CAT = new Animals("meow"); public static Animals FISH = new Animals("burble"); String sound; Animals(String s) { sound = s; } }
So as you can see, DOG, CAT, FISH are static constants of the Animals class itself. And I think that you know that you can access static members of a class using a null reference of the class. Look at the following example
class A { static void method() { System.out.println("Hello"); } public static void main(String[] args) { A aObj = null; aObj.method(); //prints Hello } }
SCJP 6 | SCWCD 5 |
Javaranch SCJP FAQ
|
SCWCD Links
Matt Vanecek
Greenhorn
Joined: Oct 30, 2008
Posts: 3
posted
Oct 30, 2008 18:19:00
0
Thank you, Ankit! That explains the first one.
For the second one, I'm assuming the spec makes some special provision for enums declared inside a class? For example, for a normal non-static inner class, if I tried to instantiate the inner class directly from main(), I'd get a "No enclosing instance of type OuterClass is accessible." error. Inner classes also can't have static members, etc., which means in main() I can't do "InnerClass.someStaticMethod()", because the thing wouldn't compile anyhow with static fields in InnerClass.
But it seems enums are special, in that I can create an enum as an instance enum, yet can still access the instance enum in a static manner because the Java compiler makes almost everything inside the enum static, and anything inside the enum is only accessible via one of the static enum fields (e.g., Days.WED)--which can only be accessed statically via Days....or something like that.
I guess it's safe to say that writing an enum definition as a non-static "inner class" (so to speak) still leaves the enum class accessible in a static manner, where declaring a field of type "enum" follows the normal rules of access, as do any other non-static inner classes.
Somewhat circular and takes a bit of thinking...I *think* I understand this twist. My enums have always been stand-alone, because they're used across multiple classes, so I've never encountered this twist before.
Thanks,
Matt
Ankit Garg
Saloon Keeper
Joined: Aug 03, 2008
Posts: 9258
8
I like...
posted
Oct 30, 2008 22:25:00
0
Well an enum inside a class doesn't have an instance of enclosing class associated with it...That's why you can use it from static methods....
I agree. Here's the link:
subject: SCJP by KS & BB: Static access to non-static enum
Similar Threads
Enum reference confusion
Are instances of enums static by default?
Problem related to enums
Static object
What is the default value of an enum?
All times are in JavaRanch time: GMT-6 in summer, GMT-7 in winter
JForum
|
Paul Wheaton | http://www.coderanch.com/t/270822/java-programmer-SCJP/certification/SCJP-KS-BB-Static-access | CC-MAIN-2013-48 | en | refinedweb |
15 October 2007 12:32 [Source: ICIS news]
LONDON (ICIS news)--Spain's Abengoa Bioenergy has opened a $35m (€22m) pilot ethanol plant in Nebraska for researching the conversion of lignocellulosic into biofuel, the company said on Monday.
?xml:namespace>
Abengoa planned to process 700 tonnes/day of biomass at the plant to produce 44m litres/year of ethanol as well as other forms of renewable energy such as electricity and vapour, the company said in a statement.
Meanwhile, at the opening ceremony chief executive and president Javier Salgado also said the company had signed a collaboration agreement with the US Department of Energy worth $38m to design and develop the world's first commercial-scale biomass-into-ethanol plant in ?xml:namespace>
The new technologies obtained at the Nebraska plant would be implemented at the Kansas biomass bio-refinery, the company said.
Lignocellulosic biomass is matter composed of cellulose, hemicellulose and lign | http://www.icis.com/Articles/2007/10/15/9070178/spains-abengoa-opens-35m-pilot-ethanol-plant.html | CC-MAIN-2013-48 | en | refinedweb |
Skip navigation links
java.lang.Object
oracle.adfdt.view.common.binding.creator.v2.BinderResult
public class BinderResult
Result from the Binder. Currently just used to indicate if the bind operation was successful or not. Typically, binders can use one of the static classes for normal result reporting. (This may allow future enhancements where Binders can pass information back to their callers that were determined during the bind operation).
public static final BinderResult OK
public static final BinderResult NO_BINDER
public static final BinderResult FAILED
public BinderResult(BinderResult.Result result)
public BinderResult.Result getResult()
Skip navigation links | http://docs.oracle.com/cd/E35521_01/apirefs.111230/e18581/oracle/adfdt/view/common/binding/creator/v2/BinderResult.html | CC-MAIN-2013-48 | en | refinedweb |
10 September 2010 23:54 [Source: ICIS news]
By Joe Kamalick
WASHINGTON (ICIS)--Nine years after the September 2001 terrorist attacks, the ?xml:namespace>
“We are in a far different place than nine years ago,” said Scott Jensen, spokesman and security specialist at the American Chemistry Council (ACC).
Noting that the council’s member companies have spent some $8bn (€6.32bn) on security improvements since 2001, Jensen said that “We have a better understanding of the risks and vulnerabilities because of our industry’s own programmes and through government regulations”.
Jensen also pointed out that of the 18 critical infrastructure sectors identified by the Department of Homeland Security (DHS) as needing beefed-up protections, the chemicals industry is the only one whose security is directly regulated by the department.
DHS was given a mandate and authority by Congress in 2006 to establish and monitor security criteria at the nation’s chemical plants in order to reduce their vulnerability to an attack by terrorists seeking mass civilian casualties.
The department’s resulting Chemical Facility Anti-Terrorism Standards (CFATS) have been ramped up over the last four years and are expected to be extended for at least another year until Congress can settle on a permanent, long-term plan.
Because of the department’s close work with the chemicals sector under CFATS, Jensen holds that his industry may be ahead of the 17 other critical infrastructures.
Under CFATS, the department sets security standards for some 5,000 chemical facilities deemed to be at high risk for terrorist attack, but owners and operators determine what specific security measures to implement to meet those standards. The industry’s efforts are subject to department review and correction, however.
“Some of the things that the chemicals industry is doing are being used by DHS as models in working with other infrastructure areas,” Jensen said, citing utilities as one.
Like the chemicals sector, the electric utilities industry has large facilities that often are sited amid or near dense population centres. And, because utilities are highly automated like chemical plants, the two industries share vulnerabilities to cyber attacks, he said.
Among the other 17 critical infrastructure and key resources (CIKR) identified by federal officials for improved antiterrorism security needs are agriculture, dams, energy, banking and finance, communications, information technology, transportation and shipping.
Lawrence Sloan, president of the Society of Chemical Manufacturers and Affiliates (SOCMA), said that his specialty chemicals industry “has worked hard collaboratively with DHS to raise awareness of the existing threats among chemical companies, and these businesses have a much stronger approach to security than they did ten years ago”.
Sloan said that the co-operative approach taken by the private sector and the federal government has flourished in the last several years.
“Case in point, through the Chemical Sector Coordinating Council on which SOCMA serves, DHS has reached out to industry through SOCMA to build engagement with voluntary exercises, programmes and resources,” Sloan said.
According to DHS officials, diversion of weapons-capable chemicals and cyber attacks on industry facilities probably rank as the most likely forms of potential terrorist action against chemical facilities, although truck-bombs and other conventional violent assaults remain a threat.
“DHS has placed a great emphasis on cybersecurity awareness in recent years and has focussed outreach to this industry,” Sloan said. “The threat of cyber-related attacks is real and must continue to be given as high a priority as more conventional tactics.”
Despite the ongoing public-private cooperation in chemical site security, some in Congress contend that the chemicals industry is still far too vulnerable, and they want changes.
Sponsors of pending legislation in Congress want DHS security rules for chemical sites to include authority to order the use of inherently safer technology (IST), for example, by ordering the reduction or elimination of certain dangerous substances at specific sites or by forcing operators to use lower temperature and pressure processes.
Both ACC and SOCMA oppose an IST mandate.
Sloan said that his group feels that the current CFATS programme is working so effectively “that we’d like to see it extended permanently”.
Both trade groups and other industry associations support legislation sponsored by Senator Susan Collins (Republican-Maine) that would extend the existing regulatory system for three more years.
A vote on that bill may come within weeks, if only because the existing CFATS provisions will otherwise expire at the end of September.
($1 = €0.79) | http://www.icis.com/Articles/2010/09/10/9392703/us-chem-sector-sees-itself-as-safer-nine-years-after-911.html | CC-MAIN-2013-48 | en | refinedweb |
Agenda
See also: IRC log
Steven: the latest
editor's draft is built nightly, not yet uploaded to W3C
site
... please take a look and give us feedback
... the area we're having problems with still is blank nodes
... we want to create anonymous nodes and give them names independent of URLs
... for this one little edge case it seems a new naming and referencing mechanism needs to be created
... e.g. aboutblank and hrefblank
... and we need a way to give a (blank) name to an element
... in RDF/A there was a proposed XPointer framework for referring to blank nodes and a naming mechanism
... the problem with this XPointer framework solution was that we'd have to create a spec for it, as it's not defined anywhere
Mark: we could simply define that
XPointer framework as in the RDF/A draft -- we don't have to go
all the way back to XPointer
... the framework makes use of the XPointer architecture
... we could define this in our spec, don't need to go back to XPointer WG
Jeremy: I didn't like the RDF/A
XPointer solution
... I found it "ugly"
... one nodeID attribute is sufficient; no need for extra hrefblank
... what's hard is providing text for an HTML author who is not RDF-aware to motivate this nodeID attribute
Steven: you're suggesting a nodeID attribute that simultaneously defines and references?
Jeremy: yes
... in the [October] RDF/A proposal, if there was not a subject in the triple then it was inherited from the context
... if nodeID expresses the subject it's possible to default the object and vice-versa
... while clunky, this achieves the goal of being able to express all of RDF without making things too complicated for an HTML author
... [the spec is] affected by the complexity of the mapping rules from the XHTML attributes to RDF
... I felt the [October] RDF/A mapping rules were a bit too complex
Mark: I came up with the XPointer
solution because I'm pretty sure we do need two
attributes
... RDF/XML does not need two attributes due to its striping syntax
... in the XHTML mechanism we've tried to make it possible to write a complete triple in a single element
... Jeremy's suggested defaulting mechanism may work but I'm pretty sure there are cases where you still need two attributes
... it's also an aesthetic problem for bnodes to have URIs that are not supposed to be considered as URIs
Ben: is the bnode section in this editor's draft yet?
Steven: no. the document I cited
has RDF/A transformed into XHTML2 style except for the bnode
material
... we've been struggling with finding some mechanism for bnodes that is more aesthetic and that is explicable to HTML authors without referring to the term "bnode"
Jeremy: I agree with that objective
Mark: why would an author use
anything other than id="A" and id="B" to refer to anonymous
nodes?
... why should I [as a document author] be worried about any outside use of those names?
... e.g. I use those names locally, but why should I prevent them from use outside?
... I think this is a concern more to RDF folk than to an HTML author
Ben: let's move this discussion to e-mail
Jeremy: at the March face-to-face, Steven suggested that XHTML2 would go to Last Call without addressing this particular issue
ACTION: Ben to move bnode discussion to email list. [recorded in]
Steven: we don't expect any given
XHTML spec to have addressed all open issues; we simply have to
freeze a document and count remaining open issues as Last Call
issues
... if we didn't do this we'd never get a document out the door; people are always reading new drafts and bringing new issues
Ralph: are the words in the editors' draft now fairly stable? would a careful review be a waste of time?
Steven: not a waste of time
... people are fixing schema errors now
... but up to publication there is always the chance of changes
ACTION: Steven to send email about latest draft of RDF/A included in XHTML 2 [recorded in]
Steven: the editors' draft cited above has dealt with all the issues from the HTML WG F2F
ACTION: BenA to Examples in RDF/XHTML and RDF/XML - use cases [recorded in] [reworded below]
Ben: any specific examples you'd like to see, or just use cases?
Steven: use cases good
Mark: it is OK to give RDF/XML
examples and we'll translate them
... most of the larger examples I've done have been based on RSS and FOAF. These might not express everything
ACTION: BenA Provide RDF/XML examples and english description to Steven and Mark (use cases) [recorded in] [CONTINUES]
ACTION: Tom Baker and Gavin to get feedback about use of RDF in XHTML in their respective communities [recorded in] [CONTINUES]
ACTION: DanBri RDF schema for new XHTML2 namespace elements [recorded in] [CONTINUES]
Ben: let's plan to meet weekly, even if it's a short meeting
next meeting: 12 April, 1400 UTC
Ben: discussing f2f feedback; do we need to address a pre-XHTML2 solution?
Jeremy: I'll take an action to get HP feedback
ACTION: Jeremy to ask HP about need for pre-XHTML 2 solution to RDF-in-HTML problem [recorded in]
Ben: Dan and Dom did a new editor's draft for GRDDL
Ralph: I read the new draft but didn't look at the diffs
Ben: what I gleaned from the
... and specifying what happens if more than one transform is given within a document
Ralph: Dave Beckett
announced an implementation
... Dave asked if multiple tranformations yield a single graph
... Dom answered in the affirmative
Ben: what communities do we need to approach?
ACTION: Ben to ask Tom, Gavin and CC about opinion on GRDDL and pre-XHTML2 [recorded in]
Ralph: Creative Commons and
Dublin Core are still in the top of our list.
... we could grow the list but at least we need those two responses
<Zakim> Jeremy, you wanted to summarize changelog and to discuss possible questions to users
Jeremy: I see only editorial
changes in the GRDDL
changelog
... should I be asking HP "Is GRDDL useful" or "Do we need a solution to embedding RDF before XHTML2"?
Ben: I think the latter; do we need a solution before XHTML2 and if so, is GRDDL a good way to do this?
Jeremy: the vast majority of HTML
on the web is ill-formed, not XML
... if we're seeking user feedback, it's up to them to define the scope
Ben: I think it's good to cast a
wide net to find out user expectations
... this task force may not last long enough to provide every solution, but please ask
Ralph: Dave Beckett's GRDDL
implementation partly answers some questions that Ivan Herman
raised in a hallway conversation at the March F2F
... note that the GRDDL spec says transformations SHOULD be XSLT, not MUST
... so implementations have to be careful | http://www.w3.org/2005/04/05-swbp-minutes.html | CC-MAIN-2013-48 | en | refinedweb |
Package: cryptkeeper Version: 0.9.5-4 Severity: grave Tags: patch User: debian-bsd@lists.debian.org Usertags: kfreebsd Sorry for not noticing this before. Changes in 0.9.5-4 make an even worse bug, since cryptkeeper with its current dependencies is uninstallable on GNU/kFreeBSD. "fuse" package (fuse-utils is just a transitional package) is only available on GNU/Linux. Apparently cryptkeeper needs it because it invokes "fusermount" utility directly (in src/encfs_wrapper.cpp). As fusermount is Linux-specific, the alternative for GNU/kFreeBSD is to use umount. Attached patch fixes both problems. In addition, FUSE architectures include linux-any and kfreebsd-any but not hurd-i386, this is also fixed in my patch. --
=== modified file 'debian/control' --- debian/control 2011-12-12 19:19:29 +0000 +++ debian/control 2011-12-12 19:21:10 +0000 @@ -8,8 +8,8 @@ Homepage: DM-Upload-Allowed:yes Package: cryptkeeper -Architecture: any -Depends: ${shlibs:Depends}, ${misc:Depends}, zenity, fuse-utils, encfs +Architecture: linux-any kfreebsd-any +Depends: ${shlibs:Depends}, ${misc:Depends}, zenity, fuse [linux-any], encfs Description: EncFS system tray applet for GNOME An encrypted folders manager, it allows users to mount and unmount encfs folders, to change the password and to create new crypted folders. It === modified file 'src/encfs_wrapper.cpp' --- src/encfs_wrapper.cpp 2011-12-12 19:19:29 +0000 +++ src/encfs_wrapper.cpp 2011-12-12 19:20:32 +0000 @@ -242,7 +242,11 @@ int encfs_stash_unmount (const char *mou // unmount int pid = fork (); if (pid == 0) { +#ifdef __linux__ execlp ("fusermount", "fusermount", "-u", mount_dir, NULL); +#else + execlp ("umount", "umount", mount_dir, NULL); +#endif exit (0); } int status; | http://lists.debian.org/debian-bsd/2011/12/msg00151.html | CC-MAIN-2013-48 | en | refinedweb |
Add-DnsServerResourceRecordPtr
Updated: January 17, 2013
Applies To: Windows Server 2012
Add-DnsServerResourceRecordPtr
Syntax
Parameter Set: Add0 Add-DnsServerResourceRecordPtr [-ZoneName] <String> [-Name] <String> [-PtrDomainName] <String> [-AgeRecord] [-AllowUpdateAny] [-AsJob] [-CimSession <CimSession[]> ] [-ComputerName <String> ] [-PassThru] [-ThrottleLimit <Int32> ] [-TimeToLive <TimeSpan> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
Detailed Description
The Add-DnsServerResourceRecordPtr cmdlet adds a specified pointer (PTR) record to a specified Domain Name System (DNS) zone.
PTR resource records support reverse lookup based on the in-addr.arpa domain. PTR records locate a computer by its IP address and resolve the address to the DNS domain name for that computer..
-Name<String>
Specifies part of the IP address for the host. You can use either an IPv4 or IPv6 address. For example, if you use an IPv4 class C reverse lookup zone, then Name specifies the last octet of the IP address. If you use a class B reverse lookup zone, then Name specifies the last two octets.
-PassThru
Returns an object representing the item with which you are working. By default, this cmdlet does not generate any output.
-PtrDomainName<String>
Specifies an FQDN for a resource record in the DNS namespace. This value is the response to a reverse lookup using this PTR.
-ThrottleLimit<Int32>
reverse lookup.ManagementBaseObject
Outputs
The output type is the type of the objects that the cmdlet emits.
- Microsoft.Management.Infrastructure.CimInstance#DnsServerResourceRecord
Examples
Example 1: Add a PTR record
This command adds a type PTR DNS record in the zone named contoso.com. The record maps IP address 192.168.0.17 to the name host17.contoso.com. The command includes the AllowUpdateAny and AgeRecord parameters, and provides a TTL value. Because the command includes the AgeRecord parameter, a DNS server can scavenge this record. | http://technet.microsoft.com/en-us/library/jj649912(v=wps.620).aspx | CC-MAIN-2013-48 | en | refinedweb |
Microsoft ).
Rx's LINQ compositional model has garnered a lot of attention in the Rx documentation. This should be no surprise; much of what a developer creates in Rx are LINQ expressions. As stated earlier; Task Parallel Library (TPL) is also an important component in the Rx architecture. In fact, TPL underlies some of Rx's asynchronous behavior. When dealing with issues like Thread affinity; understanding how Rx leverages TPL is essential. Using an Rx sample application, this article will demonstrate how TPL fits within Rx.
Rx Introduction
As stated earlier Rx is a set of extensions to the .NET Framework for asynchronously consuming observable collections. Rx is built on .NET Framework components like LINQ and the Task Parallel Library.
Understanding Rx begins with understanding the IObservable, IObserver, and IEnumerable Interfaces. IEnumerable collections are consumed in a "pull-based" fashion. For example: Foreach loop. IObservable are consumed in a "push-based" or rather eventing fashion. The IObservable and IObserver interfaces appear below.
public interface IObservable<out T> { IDisposable Subscribe(IObserver<T> observer); } public interface IObserver<in T> { void OnCompleted(); void OnError(Exception error); void OnNext(T value); }
Eventing usually follows a subscribe, receive events, and unsubscribe sort of pattern. IObservable encapsulates this in Subscribe, OnNext, OnError methods. Wrapping something like, for example, text output operations would allow a developer to consume text output in a LINQ query.
As I stated earlier, TPL underlies some of Rx's asynchronous behavior. Before showing where TPL fits into Rx, it's important to know what TPL is.
Task Parallel Library Overview
I think of TPL as a new set of classes that follow an alternative model to traditional .NET Threads and Threading data structures. The core part of TPL is the Task class. Tasks are a sort of wrapper for work to be done. Tasks are assigned a workload, which is usually a delegate or llamda expression. Tasks are a higher level of abstraction than a Thread. A Thread executes the underlying work, but a Task allows a developer to structure and compose the execution of the underlying work, by invoking a Task and even linking a Task's completion to other Tasks.
Task execution is actually handled by a second component called a TaskScheduler. TaskScheduler manages the collection of Threads for a Task class workload. .NET Framework includes a default TaskSheduler, but a developer may want to create a custom TaskScheduler to handle custom workloads.
For the most part, the TPL is hidden within Rx. In fact, guidelines mentioned in the resources below lead a developer to believe that concurrency features of TPL may not always be necessary. However there are areas where TPL surfaces. As I mentioned earlier, I'm going to demonstrate where, using some sample code.
Rx Sample
The sample application appears below.
using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Concurrency; using System.Threading; using System.Threading.Tasks; using System.Disposables; namespace Test.Rx.TaskScheduler { class Program { static void Main(string[] args) { IObservable<int> ob = Observable.CreateWithDisposable<int>(o => { var cancel = new CancellationDisposable(); Scheduler.NewThread.Schedule(() => { int i = 0; while (true ) { Thread.Sleep(200); if (!cancel.Token.IsCancellationRequested) { o.OnNext(i++); } else { Console.WriteLine("Cancel event signaled"); o.OnCompleted(); break; } } } ); return cancel; } ); IDisposable subscription = ob.Subscribe(i => Console.WriteLine(i)); Console.WriteLine("Press any key to cancel. . . "); Console.ReadKey(); subscription.Dispose(); Console.WriteLine("Press any key to quit. . . "); Console.ReadKey(); } } }
I'll cover the Scheduler class later in the article. For now, understand that Scheduler is the point where a developer can interact with the TPL.
CancellationDisposable is a wrapper for the TPL CancellationToken. The Token property allows access to the CancellationToken. As you can see in the code above; the llamda pauses before checking the cancellation token. A complete discussion of CancellationTokens is beyond the scope of this article.
The sample invokes the CancellationDisposable Token property when the user presses any key. When the Token is invoked; IsCancellationRequested becomes true and the llamda invokes OnCompleted. OnCompleted signals the observer that there are no more events to observe.
Rx Scheduler class is the most interesting part of the sample and as mentioned before the point where a developer can adjust how TPL works with Rx.
Rx Scheduler
The Scheduler class appears below.
// Summary: // Provides a set of static methods for creating Schedulers. public static class Scheduler { // Summary: // Gets the scheduler that schedules work as soon as possible on the current // thread. public static CurrentThreadScheduler CurrentThread { get; } // // Summary: // Gets the scheduler that schedules work on the current Dispatcher. public static DispatcherScheduler Dispatcher { get; } // // Summary: // Gets the scheduler that schedules work immediately on the current thread. public static ImmediateScheduler Immediate { get; } // // Summary: // Gets the scheduler that schedules work on a new thread. public static NewThreadScheduler NewThread { get; } // // Summary: // Gets the scheduler that schedules work on the default Task Factory. public static TaskPoolScheduler TaskPool { get; } // // Summary: // Gets the scheduler that schedules work on the ThreadPool. public static ThreadPoolScheduler ThreadPool { get; } // Summary: // Schedules action to be executed recursively. public static IDisposable Schedule(this IScheduler scheduler, Action<Action> action); // // Summary: // Schedules action to be executed recursively at each dueTime. public static IDisposable Schedule(this IScheduler scheduler, Action<Action<DateTimeOffset>> action, DateTimeOffset dueTime); // // Summary: // Schedules action to be executed recursively after each dueTime. public static IDisposable Schedule(this IScheduler scheduler, Action<Action<TimeSpan>> action, TimeSpan dueTime); // // Summary: // Schedules action to be executed at dueTime. public static IDisposable Schedule(this IScheduler scheduler, Action action, DateTimeOffset dueTime); }
Each property in the Scheduler class is a portal to changing where Rx invokes the llambda expression. Deciding which Scheduler class property to use depends on the application.
This was a long running llambda so the sample utilized the Scheduler that, according to documentation, creates a new Thread. Had the Immediate or CurrentThread property been used, the application Thread would never have been available to display or collect input.
Dispatcher can be used with Windows Presentation Foundation (WPF). WPF controls can only be adjusted from the Thread they have been created in. At first this may appear to be a limitation, but consider what would happen if two separate Threads attempted to adjust a control at the same time.
Conclusion
Rx is a set of libraries built on LINQ and the Task Parallel Library (TPL). The Rx Scheduler class allows a developer to plug into the underlying TPL. Scheduler includes properties for working with Windows Presentation Foundation (WPF) and even a custom TPL TaskScheduler.
Sources
Reactive Framework Rx Wiki
"Understanding Tasks in .NET Framework 4.0 Task Parallel Library"
There are no comments yet. Be the first to comment! | http://www.codeguru.com/csharp/csharp/cs_linq/article.php/c18869/Microsoft-NET-Reactive-Extensions-and-NET-Framework-Task-Parallel-Library.htm | CC-MAIN-2013-48 | en | refinedweb |
This article describes the implementation and process of developing Blue Hour. Blue Hour is a Window Phone application for calculation the time the sun rises and sets for the next 30 days.
I love to take pictures using a digital camera. As most photographers know, light is the most important ingredient of a photo. The light during sunrise or sunrise is especially beautiful, making these periods ideal moments to take pictures.
Blue hour refers to the period of twilight each morning and evening where there is neither full daylight nor complete darkness. The time is considered special because of the quality of the light at this time of day, hence the name of the application Blue Hour
During the Techdays 2012 in the Netherlands Microsoft made the attending developers the following offer: If you create three apps for the Windows Phone and these apps get approved into the market place you earn a Nokia Lumia 800. I did not apply directly because I did not know how much time it would take. Two days later I decided to apply for the offer as I like a challenge. After describing my three ideas I got a new Nokia Lumia 800 phone delivered at home. So it was time to create my first Windows Phone App.
This article is written just after I submitted my first app for the second time to the marketplace. The first time it did not get certified because of reasons I will describe later in the article. As I was planning to develop three applications I took some time to create a proper infrastructure for the development.
Windows Phone application are developed using Microsoft Silverlight. I used Visual Studio to implement the application. Visual Studio Express is installed automatically when installing the Windows Phone SDK.
For instruction and documentation on how to create Windows Phone applications I used the excellent tutorial application TailSpin which describes how a team of software developers from Microsoft implemented a Windows phone application. I found it was a real implementation and not your typical demo application. Other resource used where video's from channel 9 and the documentation on MSDN.
The following frameworks and patterns were used during the implementing of Blue Hour.
As described in the first paragraph Blue Hour calculates the sunrise and sunset times for your current location for the next 30 days. The application detects your location using the Windows Phone location services. I will describe this in more detail later in this article. The calculation of the sunrise and sunset was reused from a source code from a previous application. The source code had to be converted to a Silverlight assembly which was done without any problems.
The application had the following requirements
As I am planning to write three applications I took some time to think about the base architecture of all three application. Each application will use the MVVM pattern to separate the view from the business logic. Dependency injection is used separate the creation of instances of classes from the actual use of the classes.
The NFunq dependency injection framework performs the actual injections of the instances.
How data is retrieved and visualized on to the view can be best described using an example. The image above shows the classes that play a role in this process. The class shown on the left is the view while the classes on the right perform retrieving data from an external or internal service.
The view SunriseSunsetListView is databound to the SunriseSunsetListViewModel. When the user presses the refresh button on the view a command is triggered on the SunriseSunsetListViewModel. The SunriseSunsetListViewModel calls the AstronomyService to retrieve a list of sunrise and sunset times. The AstronomyService in its turn uses the LocationService and the SunCalculator to get the current location and to calculate the sunrise and sunset tiumes of that location. The SunriseSunset list which the AstronomyService acts as the model. This model gets translated into a ViewModel by the SunriseSunsetListViewModel. You could create a separate class to perform this mapping, I decided not to implement this because of the limited size and scale of the application.
SunriseSunsetListView
SunriseSunsetListViewModel
AstronomyService
LocationService
SunCalculator
SunriseSunset
Blue Hour uses the MVVM pattern to separate the presentation from the business logic. All the ViewModels from the application are bound to a view using data binding. A special class ViewModelLocator is implemented which wraps the dependency container. The ViewModelLocator has a separate property for each ViewModel in the application. Below an excerpt of the ViewModelLocator class is shown.
public class ViewModelLocator
{
private readonly ContainerLocator containerLocator;
public ViewModelLocator()
{
this.containerLocator = new ContainerLocator();
}
public SunriseSunsetListViewModel SunriseSunsetListViewModel
{
get
{
return this.containerLocator.Resolve<sunrisesunsetlistviewmodel>();
}
}
....
</sunrisesunsetlistviewmodel>
The ViewModelLocator class instantiates the ContainerLocator in the constructor. ContainerLocator wraps the Container class of NFunq. The SunriseSunsetListViewModel requests an instance of the SunriseSunsetListViewModel class from the ContainerLocator. This ViewModel is then coupled to the view in XAML using the DataContext of the view using a static resource. To make this possible the resource must be created in the App.xaml file.
<Application.Resources>
<viewmodels:ViewModelLocator x:
</Application.Resources>
Then the resource can be used in view by binding the view to the ViewModel as follows.
DataContext="{Binding Source={StaticResource ViewModelLocator}, Path=SunriseSunsetListViewModel}"
The ContainerLocator class wraps the Container class of NFunq, it includes registration of the types. The generic Resolve method is responsible for constructing an instance of the requested type. The method delegates the actual creation of the instance to the Funq container. This class separates the application from the actual DI implementation. This gives us the freedom to switch to another DI framework if we ever need to.
public class ContainerLocator : IDisposable
{
private readonly Container container;
public ContainerLocator()
{
this.container = new Container();
this.ConfigureContainer();
}
public TService Resolve<tservice>()
{
return container.ResolveNamed<tservice>((string)null);
}
private void ConfigureContainer()
{
this.container.Register(
new SunriseSunsetListViewModel(
new AstronomyService(new SunCalculator(), new LocationService(
new BlueHourSettingsManager(new SettingsHelper()))), new BlueHourSettingsManager(new SettingsHelper())));
this.container.Register(new SettingsViewModel(new BlueHourSettingsManager(new SettingsHelper())));
this.container.Register(new AboutViewModel());
}
....
</tservice></tservice>
People who ever used the Prism framework together with Unity in a Silverlight project may find the implementation strange. There is no auto registration with Funq that scans the assembly for types and registers them. Automatic construction of views together with their dependencies is not supported. I did not find another DI container for Windows Phone that supports these features.
I had used the Prism library with Silverlight development, therefore I was happy to find that there is a Windows Phone version. The biggest reason to choose this framework was the ease with which you can attach commands to views and fire global events. With the MVVM pattern you don't directly code in the eventhandler of for example a button. Instead you create a Command inside the ViewModel and bind the button to this command using normal databinding.
This has the advantage that your view is now totally decoupled from your ViewModel. This enables you to test your ViewModel more easily.
public DelegateCommand AboutCommand
{
get
{
return aboutCommand;
}
set
{
aboutCommand = value;
OnPropertyChanged("AboutCommand");
}
}
public SunriseSunsetListViewModel(IAstronomyService astronomyService, BlueHourSettingsManager blueHourSettingsManager)
{
this.AboutCommand = new DelegateCommand(() => NavigationService.Navigate("/Views/About.xaml"));
...
This DelegateCommand is not available in Silverlight, it is something that comes with the Prism framework. In the constructor of the ViewModel I create an instance of the DelegateCommand and I instruct it to navigate to the about view when the user presses the button.
<Custom:Interaction.Behaviors>
<prismInteractivity:ApplicationBarButtonCommand
<prismInteractivity:ApplicationBarButtonCommand
<prismInteractivity:ApplicationBarButtonCommand
</Custom:Interaction.Behaviors>
The command gets bind to the ApplicationBarCommand in the XAML.
Unit testing is possible using the Silverlight Unit testing framework for Windows Phone. This framework is translated from the original Silverlight unit testing framework. You need a separate Windows Phone aplication
to perform the actual testing. Although this is not ideal as it tends to break
your TDD rythm, it is better than nothing.
Jeff Wilcox describes the release of this framework on his blog.
I just converted the already existing test cases of the class that calculates
the sunrise and sunset times to using this framework.
The Silverlight toolkit for Windows Phone includes a nice implementation for page or view transitions. Instead of using the normal "boring" page transition, you can create a sliding or rotating animation when navigating from one screen to another. The implementation in your application is pretty easy to do. First thing you need is a reference to the toolkit. With this reference set you need to include the following Style in the Application.Resources element in your App.xaml file.
<Style x:
<Setter Property="toolkit:TransitionService.NavigationInTransition">
<Setter.Value>
<toolkit:NavigationInTransition>
<toolkit:NavigationInTransition.Backward>
<toolkit:SlideTransition
</toolkit:NavigationInTransition.Backward>
<toolkit:NavigationInTransition.Forward>
<toolkit:SlideTransition
</toolkit:NavigationInTransition.Forward>
</toolkit:NavigationInTransition>
</Setter.Value>
</Setter>
<Setter Property="toolkit:TransitionService.NavigationOutTransition">
<Setter.Value>
<toolkit:NavigationOutTransition>
<toolkit:NavigationOutTransition.Backward>
<toolkit:SlideTransition
</toolkit:NavigationOutTransition.Backward>
<toolkit:NavigationOutTransition.Forward>
<toolkit:SlideTransition
</toolkit:NavigationOutTransition.Forward>
</toolkit:NavigationOutTransition>
</Setter.Value>
</Setter>
</Style>
For my application I chose the slide transition but there are other transitions available. When you reference this style in a page the transition occurs when navigating to another page. The following should be added to the page's XAML.
<phone:PhoneApplicationPage
Style="{StaticResource AnimatedPage}"
You also need to change the instance of the RootFrame instance in the code behing to a TransitionFrame.
RootFrame
TransitionFrame
// TransitionFrame();
RootFrame.Navigated += CompleteInitializePhoneApplication;
// Handle navigation failures
RootFrame.NavigationFailed += RootFrame_NavigationFailed;
// Ensure we don't initialize again
phoneApplicationInitialized = true;
}
The NavigationService which is a default Windows Phone class is responsible for navigating from one page to another. Because Blue Hour uses the MVVM pattern it cannot use the navigation service straight out of the box. The NavigationService is a property of the Page class and the ViewModel does not know anything about the view.
NavigationService
Page
There are several solutions for getting around this; one for example is to just pass the NavigationService from the page into the ViewModel. But this will make testing the ViewModel more difficult. Therefore, I used the solution as described by Rob Garfoot. In this solution he created a wrapper around the NavigationService which gets filled via a dependency property.
With this solution navigation via the ViewModel is possible if you implement the following three steps.
<phone:PhoneApplicationPage
Navigation:Navigator.Source="{Binding}"
INavigable
private void NavigateToAView()
{
NavigationService.Navigate("Views/About.xaml");
}
Most applications include a number of settings that can be changed by the user. Blue Hour has a setting to explicitly let the user opt-in for letting the aplication use the location services of the Phone. A separate view is responsible for managing those settings. The Windows Phone platform uses Silverlight's isolated storage for persisting settings.
public class SettingsHelper
{
private readonly IsolatedStorageSettings settings =
IsolatedStorageSettings.ApplicationSettings;
public T GetSetting<t>(string settingName, T defaultValue)
{
if (!settings.Contains(settingName))
{
settings.Add(settingName, defaultValue);
}
return (T)settings[settingName];
}
public void UpdateSetting<t>(string settingName, T value)
{
if (!settings.Contains(settingName))
{
settings.Add(settingName, value);
}
else
{
settings[settingName] = value;
}
settings.Save();
}
}
</t></t>
For reading and saving settings I created the SettingsHelper class. This class encapsulates reading and writing to the Isolated storage of the Windows Phone. Usage of this class is then simply providing the name and the value of the setting depending on whether you want to write or read a setting.
SettingsHelper
public bool IsLocationServiceAllowed
{
get
{
return this.settingsHelper.GetSetting(
Constants.Settings.AllowAccessToLocationServicesKey, false);
}
set
{
this.settingsHelper.UpdateSetting(
Constants.Settings.AllowAccessToLocationServicesKey, value);
}
}
This IsLocationServiceAllowed property from the ViewModel is then databound to the ToggleSwitch to let the user adjust the setting.
Blue Hour uses the location services of the Windows Phone to detect the location of the customer and calculate the sunrise and sunset times based on that location. Access to the location services is achieved by the GeoCoordinateWatcher class. The constructor of this class takes a single argument that specifies the accuracy of the location that is needed by the application.
GeoCoordinateWatcher
private readonly GeoCoordinateWatcher geoCoordinateWatcher =
new GeoCoordinateWatcher(GeoPositionAccuracy.Default);
The accuracy parameter of the constructor can have two possible values GeoPositionAccuracy.Default which means low accuracy and GeoPositionAccuracy.High which obviously means high accuracy. Blue Hour needs only low accuracy which probably means than it supplies a coordinate faster.
GeoPositionAccuracy.Default
GeoPositionAccuracy.High
I wanted the Blue Hour application to be available in two languages, English and Dutch (my native language) therefore the application had to be localized. As with all Microsoft.Net applications that you want to localize it starts with adding all localizable strings to a resource file.
The default language of BlueHour is English, therefore the key value pairs in the AppResources.resx file are in the english language. All other supported languages should be added according to the following file name format AppResources.[culture].resx. The cultures that the application supports should be specified in the project file of the application. This is a bit of a nuisance, you have to unload the project in visual studio and edit the xml of the project manually. The SupportedCultures element in the application should include the languages that are supported by the application. Multiple cultures can be specified by separating them using a semicolon.
<supportedcultures>
nl-NL; en-US;
</supportedcultures>
By adding the supported cultures of your application to the project file, the application will automatically switch to that location if you switch the phones Display language via the Region+language page.
To reference a resource from the sourcecode I introduce a new class called LocalizedStrings which is used from the Xaml views.
LocalizedStrings
public class LocalizedStrings
{
private readonly AppResources localizedResources = new AppResources();
public AppResources LocalizedResources
{
get
{
return localizedResources;
}
}
}
You have to create an application resource to be able to use this class/resource using databinding. The following adds the class LocalizedStrings to the application resources.
<Application.Resources>
<local:LocalizedStrings x:
</Application.Resources>
Once this is added it is possible to bind to the resource using the following standard databinding syntax. The Text="{Binding LocalizedResources.ApplicationTitle, Source={StaticResource LocalizedStrings}}" defines what the actual resource key is (LocalizedResources.ApplicationTitle) and where it comes from Source={StaticResource LocalizedStrings}
Text="{Binding LocalizedResources.ApplicationTitle, Source={StaticResource LocalizedStrings}}"
LocalizedResources.ApplicationTitle
Source={StaticResource LocalizedStrings}
<TextBlock x:
Referencing resource from code could be done in the normal .Net way, so by referencing AppResource.ApplicationTitle.
AppResource.ApplicationTitle
Note that the only way to switch to another language is to restart your phone. If
you want to allow the users to dynamically change the language of your
application you have to implement this in your app by using a language switch.
Joost van Schaik describes
in good detail how you can implement this in your application.
If you are a single app developer you will have to do the marketing of your application yourself. With social media and the integration of the market place this becomes possible
and even easy.
Ratings is one of the most important aspect to get your app more visible. With good ratings your app becomes more visible and with more visibility comes the chance to get more good ratings.Ratings can be given to an application via the marketplace. By giving the user easy access to the ability to rate your application it becomes more likely that the user will rate your app. In BlueHour I have added a separate Rate button that redirects the user directly to the page to rate your app. Implementing this is very easy to do.
private void RateApp()
{
new MarketplaceReviewTask().Show();
}
This sends the user directly to the rating page for your app. You can even show the user a popup and ask them to rate the application every n-th time your application gets started. But I find this behavior very annoying.
Another possibility to market your application is to create buzz for you application using social media. The Windows Phone has the ability to share all kinds of content using the connected social media of the user. It is also very easy to implement this ability in your application.
private void ShareApp(SunriseSunset sunriseSunset)
{
var shareTask = new ShareLinkTask();
shareTask.Message = string.Format(
"According to BlueHour the sun rises at {0} and sets at {1} tomorrow.",
sunriseSunset.Sunrise,
sunriseSunset.Sunset);
shareTask.Title = "Blue Hour WP7 App";
shareTask.LinkUri = new Uri("");
shareTask.Show();
}
When the user presses the button that executes this code the user is able to select on which linked social media account the message must be shared. Then after asking confirmation the message is sent.
Users of your app will run into bugs in your application. The most obvious way for a user to report this bug is using the rating mechanism in the marketplace. Most likely this would be a negative rating. There is no way for you to contact the user that creates a negative rating. To support users that run into trouble you could integrate the support option into your application.
public void ContactAuthor()
{
var emailTask = new EmailComposeTask();
}
In blue hour I added the ability for the user to send me a support email. The code above will create a template for an email and show this in the send email dialog. If the user wants to he or she can contact me as the author of the application for support.
It is also possible to attract the users attention by suggesting other apps that you may have implemented and are available in the marketplace. This will promote the other applications that you have available in the marketplace.
public void MoreApps()
{
var searchTask = new MarketplaceSearchTask();
searchTask.SearchTerms = "Patrick Kalkman";
searchTask.Show();
}
The SearchTerms property should be set to your publishers name to enable searching for your other applications.
SearchTerms
The marketplace keeps track of the amount of customers that download your application. But what you cannot see is if and how often the customer uses your application. There are several options available that add analytics to your application. Below a list of the more popular ones.
For Blue Hour I chose the mTiks platform because it is just to easy to implementing and it usage is free. To implement it you have to do the following.
The following should be added to the code behind of your app.xaml file. This notifies mTiks when you application is started and stopped.
private void Application_Launching(object sender, LaunchingEventArgs e)
{
mtiks.Instance.Start(MTiksApplicationKey, Assembly.GetExecutingAssembly());
}
private void Application_Activated(object sender, ActivatedEventArgs e)
{
mtiks.Instance.Start(MTiksApplicationKey, Assembly.GetExecutingAssembly());
}
private void Application_Deactivated(object sender, DeactivatedEventArgs e)
{
mtiks.Instance.Stop();
}
private void Application_Closing(object sender, ClosingEventArgs e)
{
mtiks.Instance.Stop();
}
It is also possible to notify mTiks when the user performs a certain function within your application. This can be implemented by triggering an event.
mtiks.Instance.postEventAttributes("REFRESH");
mTiks then registers the amount of times this event gets fired from your application. Once you have this integration in place, mTiks presents you a nice dashboard with the analytics information of your app.
When you have finished your application and want to distribute it to customers, the application must get certified by Microsoft. Before an application gets certified it is thoroughly tested. To prepare for this certification you can use the Market Place Test Kit which can be started by right-clicking on your main application and selecting "Open Marketplace Test Kit". The test kit is self explanatory and includes a number of automated test and a number of manual test that correspond with the tests that Microsoft executes before your application gets certified.
One thing you have to update is the capabilities section in the WMAppManifest.xml
file which is included in your project. This section describes the capabilities
your application needs from the phone's infrastructure. Make sure that you
update this section with the required capabilities. The Market Place Test Kit is
able to identify the required capabilities of your application and can write
these into the WMAppManifest.xml file.
As I stated in the introduction this article was written just after I submitted Blue Hour for the second time to the market. The first time I submitted the app it failed certification. The test report I got back from the test team stated that I did not include a privacy policy.
I missed this as it is clearly stated in the requirements. 2.7.2 states that "The privacy policy of your application must inform users about how location data from the Location Service API is used and disclosed and the controls that users have over the use and sharing of location data. This can be hosted within or directly linked from the application."
What I did was add a description to the settings screen that states what the application does with the retrieved location data and that the user can disable or enable it.
I stripped out certain parts from the source code such as my application key for
the mtiks framework. The source code of Blue Hour is available for download
and hopefully will make developing your first Windows Phone application a
little bit easier.
The application is available in the marketplace and the full source code can be downloaded from the top of the article. If you like the article, a vote or comment is appreciated. Thanks.
Below shows a photo that I took during the Blue Hour. It is taken in the Netherlands in the city of Rotterdam.
History
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
General News Suggestion Question Bug Answer Joke Rant Admin
Man throws away trove of Bitcoin worth $7.5 million | http://www.codeproject.com/Articles/355127/Blue-Hour?fid=1700512&df=90&mpp=10&noise=1&prof=False&sort=Position&view=None&spc=None&fr=11 | CC-MAIN-2013-48 | en | refinedweb |
csNewtonianParticleSystem Class ReferenceThis class has a set of particles that behave with phsyics.
More...
[Common Plugin Classes]
#include <csplugincommon/particlesys/partgen.h>
Inheritance diagram for csNewtonianParticleSystem:
Detailed DescriptionThis class has a set of particles that behave with phsyics.
They each have a speed and an acceleration.
Definition at line 394.0.2 by doxygen 1.4.7 | http://www.crystalspace3d.org/docs/online/api-1.0/classcsNewtonianParticleSystem.html | CC-MAIN-2013-48 | en | refinedweb |
14 September 2009 15:58 [Source: ICIS news]
HOUSTON (ICIS news)--Mexico’s state oil company Pemex awarded ICA Fluor engineering, procurement and construction (EPC) contracts worth $638m (€440m) for two low sulphur gasoline projects in Mexico, Fluor announced on Monday.
The refineries were awarded to ICA Fluor after a public bidding process on 20 August. Fluor said it would book its portion of the award in the fiscal third quarter.
The contracts were part of Pemex's clean fuels programme, the companies said.
ICA Fluor said it would be responsible for the EPC, testing and start-up of refinery projects in Cadereyta, Nuevo ?xml:namespace>
The Cadereyta refinery project includes a 42,500 bbl/day catalytic distillation train, while the
The work is scheduled to take place over a term of 44 months.
ICA Fluor is a Mexico-based industrial construction joint venture between Empresas ICA and Fluor.
( | http://www.icis.com/Articles/2009/09/14/9247311/pemex-awards-contracts-for-two-mexico-gasoline-projects.html | CC-MAIN-2013-48 | en | refinedweb |
This is an automated email from the ASF dual-hosted git repository.
amanin pushed a commit to branch refactor/sql-store
in repository
commit a1a7d3abecbd614a9e21f7590538c3a1fcc20829
Author: Alexis Manin <amanin@apache.org>
AuthorDate: Tue Aug 27 10:05:47 2019 +0200
doc(SQL-Store): minor javadoc addition for stream delegation applied to sql queries.
---
.../apache/sis/internal/sql/feature/StreamSQL.java | 52 ++++++++++++++++++++++
1 file changed, 52 insertions(+)
diff --git a/storage/sis-sqlstore/src/main/java/org/apache/sis/internal/sql/feature/StreamSQL.java
b/storage/sis-sqlstore/src/main/java/org/apache/sis/internal/sql/feature/StreamSQL.java
index c93798b..1a59655 100644
--- a/storage/sis-sqlstore/src/main/java/org/apache/sis/internal/sql/feature/StreamSQL.java
+++ b/storage/sis-sqlstore/src/main/java/org/apache/sis/internal/sql/feature/StreamSQL.java
@@ -1,3 +1.sis.internal.sql.feature;
import java.sql.Connection;
@@ -11,6 +27,7 @@ import java.util.function.DoubleConsumer;
import java.util.function.DoubleFunction;
import java.util.function.DoubleUnaryOperator;
import java.util.function.Function;
+import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.function.ToDoubleFunction;
import java.util.function.ToIntFunction;
@@ -28,6 +45,21 @@ import org.apache.sis.internal.util.StreamDecoration;
import org.apache.sis.storage.DataStoreException;
import org.apache.sis.util.collection.BackingStoreException;
+import static org.apache.sis.util.ArgumentChecks.ensureNonNull;
+
+/**
+ * Manages query lifecycle and optimizations. Operations like {@link #count()}, {@link #distinct()},
{@link #skip(long)}
+ * and {@link #limit(long)} are delegated to underlying SQL database. This class consistently
propagate optimisation
+ * strategies through streams obtained using {@link #map(Function)}, {@link #mapToDouble(ToDoubleFunction)}
and
+ * {@link #peek(Consumer)} operations. However, for result consistency, no optimization is
stacked once either
+ * {@link #filter(Predicate)} or {@link #flatMap(Function)} operations are called, as they
modify browing flow (the
+ * count of stream elements is not bound 1 to 1 to query result rows).
+ *
+ * @since 1.0
+ *
+ * @author Alexis Manin (Geomatys)
+ *
+ */
class StreamSQL extends StreamDecoration<Feature> {
final Features.Builder queryBuilder;
@@ -144,7 +176,14 @@ class StreamSQL extends StreamDecoration<Feature> {
.onClose(() -> queryBuilder.parent.closeRef(connectionRef));
}
+ /**
+ * Transform a callable into supplier by catching any potential verified exception and
rethrowing it as a {@link BackingStoreException}.
+ * @param generator The callable to use in a non-verified error context. Must not be
null.
+ * @param <T> The return type of input callable.
+ * @return A supplier that delegates work to given callable, wrapping any verified exception
in the process.
+ */
private static <T> Supplier<T> uncheck(final Callable<T> generator)
{
+ ensureNonNull("Generator", generator);
return () -> {
try {
return generator.call();
@@ -156,6 +195,13 @@ class StreamSQL extends StreamDecoration<Feature> {
};
}
+ /**
+ * Describes a stream on which a {@link Stream#map(Function) mapping operation} has been
set. It serves to delegate
+ * optimizable operation to underlying sql stream (which could be an indirect parent).
+ *
+ * @param <I> Type of object received as input of mapping operation.
+ * @param <O> Return type of mapping operation.
+ */
private static class MappedStream<I, O> extends StreamDecoration<O> {
private final Function<? super I, ? extends O> mapper;
private Stream<I> source;
@@ -239,6 +285,12 @@ class StreamSQL extends StreamDecoration<Feature> {
}
}
+ /**
+ * Same purpose as {@link MappedStream}, but specialized for {@link Stream#mapToDouble(ToDoubleFunction)
double mapping}
+ * operations.
+ *
+ * @param <T> Type of objects contained in source stream (before double mapping).
+ */
private static class ToDoubleStream<T> extends DoubleStreamDecoration {
Stream<T> source; | http://mail-archives.eu.apache.org/mod_mbox/sis-commits/201911.mbox/%3C20191112164428.206F2851E5@gitbox.apache.org%3E | CC-MAIN-2020-05 | en | refinedweb |
Generates efficiency plots from the histograms generated by AliAnalysisTaskLinkToMC. More...
#include "Riostream.h"
#include "TH1.h"
#include "TH2.h"
#include "TFile.h"
#include "TList.h"
#include "TGraphAsymmErrors.h"
#include "TCanvas.h"
Go to the source code of this file.
Generates efficiency plots from the histograms generated by AliAnalysisTaskLinkToMC.
This macro is used to generate efficiency plots, fake track ratio plots and also calculate the total integrated efficiency and fake track ratio. The histograms generated by the AliAnalysisTaskLinkToMC analysis task is used as input. The macro can be run with root as follows:
$ root PlotEfficiency.C("hists.root")
where hists.root is the name of the output file containing the histograms generated by the analysis task. Note that the '\' character before the '(' and '"' characters is required. Alternatively run the macros as follows from the root command prompt:
$ root .L PlotEfficiency.C PlotEfficiency("hists.root")
Definition in file PlotEfficiency.C.
This routine calculates the integrated efficiency for a 1D histogram. [in]
Definition at line 50 of file PlotEfficiency.C.
Referenced by PlotEfficiency().
Opens the file containing the histograms generated by the AliAnalysisTaskLinkToMC analysis task and tries to find the first TList in the file. From the TList we try to find TH2 classes with the following names: "findableTracksHist", "foundTracksHistMC" and "fakeTracksHist" From these we plot the efficiency plots for the pT and rapidity dimension by projecting the 2D histograms. Also, the fake track ratio histograms are created. Finally the total integrated efficiency and fake track ratio is calculated and printed.
Definition at line 82 of file PlotEfficiency.C. | http://alidoc.cern.ch/AliPhysics/vAN-20181210/_plot_efficiency_8_c.html | CC-MAIN-2020-05 | en | refinedweb |
Adding scope for Watcher data model¶
Problem description¶
For a large cloud infrastructure, such as CERN, there are more than 10k servers, retrieving data from Nova to build Watcher compute data model may take a long time. If the audit is just for a subset of all nodes, it’s better to get the data from the nodes that audit needs.
Proposed change¶
As now, Watcher builds compute data model when starting the Decision Engine. And there is a periodic task to rebuild the data model. To avoid building the data model before creating audit, we need to check a flag before building the data model. for example:
def execute(self): """Build the compute cluster data model""" if self._audit_scope_handler is None: LOG.debug("No audit, Don't Build compute data model") return builder = ModelBuilder(self.osc) return builder.execute(self._data_model_scope)
Audit scope is a optional parameter when creating audit. If user don’t set a scope ,the default scope is empty, it means this audit used for all nodes. An example of the audit scope:
{"compute": [{"host_aggregates": [ {"id": 1}, {"id": 2}, {"id": 3}]}, {"availability_zones": [ {"name": "AZ1"}, {"name": "AZ2"}]}, }
When building the data model according to audit scope, there are some cases need to be considered:
no data model, audit scope is empty¶
It’s the first time to build the data model. Because audit scope is empty, the data model should include all the nodes.
no data model, audit scope is not empty¶
It’s the first time to build the data model according to audit scope.
Existing data model, new audit scope is empty¶
If the data model has included all nodes, it will not be rebuilt.
If the data model doesn’t include all nodes, it will be rebuilt.
existing data model, new audit scope is not empty¶
If the nodes specified in scope are already included in the data model, it will not be rebuilt.
If the nodes specified in the scope aren’t included in the data model, it will be rebuilt.
Performance Impact¶
This will reduce the impact on system performance, especially for large cloud infrastructure. | http://specs.openstack.org/openstack/watcher-specs/specs/stein/implemented/scope-for-watcher-datamodel.html | CC-MAIN-2020-05 | en | refinedweb |
Definition at line 60 of file QuasiRandom.h.
#include <Math/QuasiRandom.h>
Create a QuasiRandom generator.
Use default engine constructor. Engine will be initialized via Initialize() function in order to allocate resources
Definition at line 70 of file QuasiRandom.h.
Create a QuasiRandom generator based on a provided generic engine.
Engine will be initialized via Initialize() function in order to allocate resources
Definition at line 80 of file QuasiRandom.h.
Destructor: call Terminate() function of engine to free any allocated resource.
Definition at line 88 of file QuasiRandom.h.
Return the size of the generator state.
Definition at line 140 of file QuasiRandom.h.
Return the name of the generator.
Definition at line 154 of file QuasiRandom.h.
Return the dimension of the generator.
Definition at line 147 of file QuasiRandom.h.
Generate next quasi random numbers points.
Definition at line 95 of file QuasiRandom.h.
Generate next quasi random numbers point (1 - dimension)
Definition at line 102 of file QuasiRandom.h.
Generate quasi random numbers between ]0,1[ 0 and 1 are excluded Function to be compatible with ROOT TRandom compatibility.
Definition at line 111 of file QuasiRandom.h.
Generate an array of random numbers between ]0,1[ Function to preserve ROOT Trandom compatibility The array will be filled as x1,y1,z1,....x2,y2,z2,...
Definition at line 126 of file QuasiRandom.h.
skip the next n number and jumb directly to the current state + n
Definition at line 118 of file QuasiRandom.h.
Return the type (name) of the used generator.
Definition at line 133 of file QuasiRandom.h.
Definition at line 160 of file QuasiRandom.h. | https://root.cern.ch/doc/master/classROOT_1_1Math_1_1QuasiRandom.html | CC-MAIN-2020-05 | en | refinedweb |
30694/connect-to-a-mysql-using-python
Very similar to mysqldb but better than mysqldb
import oursql
db_connection = oursql.connect(host='127.0.0.1',user='foo',passwd='foobar',db='db_name')
cur=db_connection.cursor()
cur.execute("SELECT * FROM `tbl_name`")
for row in cur.fetchall():
print row[0]
The tutorial in the documentation is pretty decent.
db = MySQLdb.connect(host="localhost", # ...READ MORE
down voteacceptTheeThe problem is that you're iterating ...READ MORE
You don't have a template matching tabularQueryResult in your ...READ MORE
In Python 2, use urllib2 which comes ...READ MORE
Sorted!!!! just found the solution,
1 - apparently ...READ MORE
connect mysql database with python
import MySQLdb
db = ...READ MORE
Setup an ssh tunnel before you use ...READ MORE
There is this code I used for ...READ MORE
You can break out of each loop ...READ MORE
OR
Already have an account? Sign in. | https://www.edureka.co/community/30694/connect-to-a-mysql-using-python | CC-MAIN-2020-05 | en | refinedweb |
How can I communicate between two React JS Components?
Hello again! I hope you enjoyed my previous article about React JS: Loops in JSX. Today I’m going to explain another simple but core flow inside a React Application: Communicate two React JS components, for this, as any key concept, we need to understand what use cases are probable to appear inside those apps. Let’s get started.
How the elements relate each other?
In React is sometimes useful to see all of our components as nodes, which each one is part of a component three, so, said this, we might encounter that components relate to each other through relationships, what kind of relationships? well, there are many:
Parent to child
Props
Props are the easiest way to share information between components, and it’s the key feature we should understand about React.
To share properties from a Parent Component to its child component, the easiest way is through attributes
class Parent extends React.Component { constructor() { super(); this.myData = { firstName: 'Mario', lastName: 'Gomez' }; } render() { return ( ); } }
Also we can use ECMAScript 6 spread operator as a shorthand for this, but we must be aware of the properties names in the child, and maybe we are sending unnecessary information.
render() { return ( ); }
Using Refs
Nothing explains better than the example itself:
class Child extends React.Component { constructor() { super(); } sayHello() { return 'hello'; } render() { return {this.sayHello()} ; } } class Parent extends React.Component { constructor() { super(); } render() { return ( ); } componentDidMount() { var sayHello = this.refs.foo.sayHello(); // Now sayHello calls the child method } }
Child to Parent
Callback Functions
This is simple: Pass a function to the child and from the child, you can use that function to access its parent. Example ahead:
class Parent extends React.Component { constructor() { super(); this.greeting = 'Hello'; } sayHello() { return this.greeting; } render() { return ( ); } } class Child extends React.Component { constructor() { super(); } render() { return {this.props.sayHello()} ; } } // We need to declare the prop Type Child.propTypes = { sayHello: React.PropTypes.func }
Event bubbling
Getting help from an old concept, this way we can allow Parent Components to capture DOM elements originated by children.
class Parent extends React.Component { constructor() { super(); } render() { return ( <div>// Any number of child components can be added here.</div> ); } handleKeyUp(event) { // This function will be called for the 'onkeyup' event in any <input type="text" /> // fields rendered by any of my child components. } }
## Sibling Components
Through parent
class ParentComponent extends React.Component { constructor() { super(); } render() { return ( <div></div> ); } child1Function() { return '1'; } child2Function() { return '2'; } }
Observer Pattern
This software pattern designates an object capable of sending messages to other objects. Inside the React context, this means the components should subscribe to certain messages and other components should publish messages to the subscribers.
We can accomplish this using the
componentDidMount method to subscribe components and unsubscribe using the
componentWillMount method.
For more information about the implementation of those please refer to PubSubJS, EventEmitter or MicroEvent.js
And that’s everything for today, thanks for reading! | http://blog.magmalabs.io/2016/11/01/react-js-communication-between-components.html | CC-MAIN-2020-05 | en | refinedweb |
Set Up Fluent Bit as a DaemonSet to Send Logs to CloudWatch Logs
The following sections help you deploy Fluent Bit to send logs from containers to CloudWatch Logs.
Topics
Differences if you're already using Fluentd
If you are already using Fluentd to send logs from containers to CloudWatch Logs, read this section to see the differences between Fluentd and Fluent Bit. If you are not already using Fluentd with Container Insights, you can skip to Setting up Fluent Bit.
We provide two default configurations for Fluent Bit:
Fluent Bit optimized configuration — A configuration aligned with Fluent Bit best practices.
Fluentd-compatible configuration — A configuration that is aligned with Fluentd behavior as much as possible.
The following list explains the differences between Fluentd and each Fluent Bit configuration in detail.
Differences in log stream names — If you use the Fluent Bit optimized configuration, the log stream names will be different.
Under
/aws/containerinsights/Cluster_Name/application
Fluent Bit optimized configuration sends logs to
kubernetes-nodeName-application.var.log.containers.
kubernetes-podName_
kubernetes-namespace_
kubernetes-container-name-
kubernetes-containerID
Fluentd sends logs to
kubernetes-podName_
kubernetes-namespace_
kubernetes-containerName_
kubernetes-containerID
Under
/aws/containerinsights/Cluster_Name/host
Fluent Bit optimized configuration sends logs to
kubernetes-nodeName.
host-log-file
Fluentd sends logs to
host-log-file-
Kubernetes-NodePrivateIp
Under
/aws/containerinsights/Cluster_Name/dataplane
Fluent Bit optimized configuration sends logs to
kubernetes-nodeName.
dataplaneServiceLog
Fluentd sends logs to
dataplaneServiceLog-
Kubernetes-nodeName
The
kube-proxyand
aws-nodelog files that Container Insights writes are in different locations. In the Fluent Bit optimized configuration, they are in
/aws/containerinsights/Cluster_Name/application. In FluentD, they are in
/aws/containerinsights/Cluster_Name/dataplane.
Most metadata such as
pod_nameand
namespace_nameare the same in Fluent Bit and Fluentd, but the following are different.
The Fluent Bit optimized configuration uses
docker_idand Fluentd use
Docker.container_id.
Both Fluent Bit configurations do not use the following metadata. They are present only in Fluentd:
container_image_id,
master_url,
namespace_id, and
namespace_labels.
Setting up Fluent Bit
To set up Fluent Bit to collect logs from your containers, you can follow the steps in Quick Start Setup for Container Insights on Amazon EKS and Kubernetes or you can follow the steps in this section.
In the following steps, you set up Fluent Bit as a daemonSet to send logs to CloudWatch Logs. When you complete this step, Fluent Bit creates the following log groups if they don't already exist.
To install Fluent Bit to send logs from containers to CloudWatch Logs
If you don't already have a namespace called
amazon-cloudwatch, create one by entering the following command:
kubectl apply -f
Run the following command to create a ConfigMap named
cluster-infowith the cluster name and the Region to send logs to. Replace
cluster-nameand
cluster-regionwith your cluster's name and Region.
ClusterName=
cluster-nameRegionName=
cluster-regionFluentBitHttpPort='2020' FluentBitReadFromHead='Off' [[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off'|| FluentBitReadFromTail='On' [[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On' kubectl create configmap fluent-bit-cluster-info \ --from-literal=cluster.name=${ClusterName} \ --from-literal=http.server=${FluentBitHttpServer} \ --from-literal=http.port=${FluentBitHttpPort} \ --from-literal=read.head=${FluentBitReadFromHead} \ --from-literal=read.tail=${FluentBitReadFromTail} \ --from-literal=logs.region=${RegionName} -n amazon-cloudwatch
In this command, the
FluentBitHttpServerfor monitoring plugin metrics is on by default. To turn it off, change the third line in the command to
FluentBitHttpPort=''(empty string) in the command.
Also by default, Fluent Bit reads log files from the tail, and will capture only new logs after it is deployed. If you want the opposite, set
FluentBitReadFromHead='On'and it will collect all logs in the file system.
Download and deploy the Fluent Bit daemonset to the cluster by running one of the following commands.
If you want the Fluent Bit optimized configuration, run this command.
kubectl apply -f
If you want the Fluent Bit configuration that is more similar to Fluentd, run this command.
kubectl apply -f
Validate the deployment by entering the following command. Each node should have one pod named fluent-bit-*.
kubectl get pods -n amazon-cloudwatch
The above steps create the following resources in the cluster:
A service account named
Fluent-Bitin the
amazon-cloudwatchnamespace. This service account is used to run the Fluent Bit daemonSet. For more information, see Managing Service Accounts
in the Kubernetes Reference.
A cluster role named
Fluent-Bit-rolein the
amazon-cloudwatchnamespace. This cluster role grants
get,
list, and
watchpermissions on pod logs to the
Fluent-Bitservice account. For more information, see API Overview
in the Kubernetes Reference.
A ConfigMap named
Fluent-Bit-configin the
amazon-cloudwatchnamespace. This ConfigMap contains the configuration to be used by Fluent Bit. For more information, see Configure a Pod to Use a ConfigMap
in the Kubernetes Tasks documentation.
If you want to verify your Fluent Bit setup, follow these steps.
Verify the Fluent Bit setup
Open the CloudWatch console at
.
In the navigation pane, choose Logs.
Make sure that you're in the Region where you deployed Fluent Bit.
Check the list of log groups in the Region. You should see the following:
/aws/containerinsights/
Cluster_Name/application
/aws/containerinsights/
Cluster_Name/host
/aws/containerinsights/
Cluster_Name/dataplane
Navigate to one of these log groups and check the Last Event Time for the log streams. If it is recent relative to when you deployed Fluent Bit, the setup is verified.
There might be a slight delay in creating the
/dataplanelog group. This is normal as these log groups only get created when Fluent Bit starts sending logs for that log group.
Multiline Log Support
By default, the multiline log entry starter is any character with no white space. This means that all log lines that start with a character that does not have white space are considered as a new multiline log entry.
If your own application logs use a different multiline starter, you can support
them by making two changes in the
Fluent-Bit.yaml file.
First, exclude them from the default input by adding the pathnames of
your log files to an
exclude_path field in the
containers
section of
Fluent-Bit.yaml. The following is an example.
[INPUT] Name tail Tag application.* Exclude_Path
full_pathname_of_log_file*,
full_pathname_of_log_file2* Path /var/log/containers/*.log
Next, add a block for your log files to the
Fluent-Bit.yaml
file. Refer to the cloudwatch-agent log configuration example below which uses a
timestamp regular expression as the multiline starter.
application-log.conf: | [INPUT] Name tail Tag application.* Path /var/log/containers/cloudwatch-agent* Docker_Mode On Docker_Mode_Flush 5 Docker_Mode_Parser cwagent_firstline Parser docker DB /fluent-bit/state/flb_cwagent.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 parsers.conf: | [PARSER] Name cwagent_firstline Format regex Regex (?<log>(?<="log":")\d{4}[\/-]\d{1,2}[\/-]\d{1,2}[ T]\d{2}:\d{2}:\d{2}(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=}) Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%LZ
Reducing the Log Volume From Fluent Bit (Optional)
By default, we send Fluent Bit application logs and Kubernetes metadata to CloudWatch. If you want to reduce the volume of data being sent to CloudWatch, you can stop one or both of these data sources from being sent to CloudWatch.
To stop Fluent Bit application logs, remove the following section from the
Fluent-Bit.yaml file.
[INPUT] Name tail Tag application.* Path /var/log/containers/fluent_bit* Parser docker DB /fluent-bit/state/flb_log.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10
To remove Kubernetes metadata from being appended to log events that are sent to
CloudWatch, add the following filters to the
application-log.conf
section in the
Fluent-Bit.yaml file.
application-log.conf: | [FILTER] Name nest Match application.* Operation lift Nested_under kubernetes Add_prefix Kube. [FILTER] Name modify Match application.* Remove Kube.<Metadata_1> Remove Kube.<Metadata_2> Remove Kube.<Metadata_3> [FILTER] Name nest Match application.* Operation nest Wildcard Kube.* Nested_under kubernetes Remove_prefix Kube.
Troubleshooting
If you don't see these log groups and are looking in the correct Region, check the logs for the Fluent Bit daemonSet pods to look for the error.
Run the following command and make sure that the status is
Running.
kubectl get pods -n amazon-cloudwatch
If the logs have errors related to IAM permissions, check the IAM role that is attached to the cluster nodes. For more information about the permissions required to run an Amazon EKS cluster, see For more information about the permissions required to run an Amazon EKS cluster, see Amazon EKS IAM Policies, Roles, and Permissions in the Amazon EKS User Guide.
If the pod status is
CreateContainerConfigError, get the exact error
by running the following command.
kubectl describe pod pod_name -n amazon-cloudwatch
Dashboard
You can create a dashboard to monitor metrics of each running plugin. You can see data for input and output bytes and for record processing rates as well as output errors and retry/failed rates. To view these metrics, you will need to install the CloudWatch agent with Prometheus metrics collection for Amazon EKS and Kubernetes clusters. For more information about how to set up the dashboard, see Install the CloudWatch agent with Prometheus metrics collection on Amazon EKS and Kubernetes clusters.
Before you can set up this dashboard, you must set up Container Insights for Prometheus metrics. For more information, see Container Insights Prometheus Metrics Monitoring.
To create a dashboard for the Fluent Bit Prometheus metrics
Create environment variables, replacing the values on the right in the following lines to match your deployment.
DASHBOARD_NAME=
your_cw_dashboard_nameREGION_NAME=
your_metric_region_such_as_us-west-1CLUSTER_NAME=
your_kubernetes_cluster_name
Create the dashboard by running the following command.
curl \ | sed "s/{{YOUR_AWS_REGION}}/${REGION_NAME}/g" \ | sed "s/{{YOUR_CLUSTER_NAME}}/${CLUSTER_NAME}/g" \ | xargs -0 aws cloudwatch put-dashboard --dashboard-name ${DASHBOARD_NAME} --dashboard-body | https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Container-Insights-setup-logs-FluentBit.html | CC-MAIN-2021-10 | en | refinedweb |
HTML parser ant text retriever using user defined rule set
Project description
lieparse is HTML parser ant text retriever using user defined rule set.
HISTORY
Library was initially written for Vilnius University Liepa-2 project. Although LIEPA is an abbreviation of project name, in Lithuanian Liepa means “Linden tree”. The tree image is in projects logotype also.
QUICK USAGE
Lets say you have HTML markup text read into string HtmlText. Then to retrieve all text from division with id=”main” you need to:
from lieparse import lieParser rules = '<div id="main">$Data[]</div> ::$Data[];' parser = lieParser(rules) parser.feed(HtmlText) parser.close()
More sophisticated example can be found after rules syntax definitions
RULES SYNTAX
Rules consist of statements optionally separated by white space.
White space is considered space, tab, new line and comment.
Comment begins from # sign and lasts to line end. (Concretely regex match r'(?:\s*(?:#.*?\n)?)*\s*')
Statements can contain incorporated statements and data definitions.
Statements are:
Rep statement - loops matching all incorporated statements:
#(<<other statements>>) #+(<<other statements>>) *(<<other statements>>)
where:
# is optional numeric value and means repeat count + is one-or-more modifier. If standing alone is same as 1+ * is zero-or-more modifier. Cannot be preceded by number
If number or ‘+’, ‘*’ modifiers are found before other statements (Tag or Any), repeat block is generated automatically. So writing ‘2<div></div>’ is automatically converted to ‘2(<div></div>)’.
Any statement - matches any of incorporated statements:
{<<other statements>>}
Any-match is done by statements definition order until first one matches. Statement can contain Any, Tag or Rep statements. Print statement is not allowed here.
Tag statement - main matching statement the html text is checked onto:
<name attr="string" $aData[<<aData attrs>>] > <<filterStr>> $Data[<<Data attrs>>] <<other statements>> </name>
where:
name is tag name, something like ‘div’, ‘li’, ‘span’.
attr is optional and optionally multiple attribute that must be present in html tag to be matched. Real tag must contain all, but maybe not only, specified attributes to match this rule. If attribute in html tag has no value, in rule it must be specified with empty string as a value.
class=”” attribute is split by whitespace into sets while parsing rule as well as while parsing html. Rule attribute set must be html attribute subset to match.
style=”” attribute is handled similarly, but splitting on ‘;’ and replacing multiple white space to single space and stripping spaces before adding to set.
$aData is optional and optionally multiple attribute data definition. Can be indirect data ($*data[]) also. Definition follows. Data variable name must insensitive match regular expression '[a-z]+[a-z0-9_]*'
filterStr is optional tag data filtering string. If enclosed in ‘/’ marks - regular expression match is performed against Tag data. If simple string - full match is performed (i.e. “My data” is equivalent to “/^My data$/”). If tag data is not matched - tag is considered not matching.
$Data is optional and optionally multiple data collection attribute. Can be indirect data ($*data[]) or to-first-tag data ($data[!]) or both.
Statement can incorporate other statements (Rep, Any, Tag, Print) mixed with $Data definitions
Print statement - only facility to output gathered data:
:<<flags>> <<loopDef>>:<<"string">> $pData[<<pData attrs>>] <<Other print statements>> ;
- where:
flags is optional print behavior modifiers - string (no quotes) containing one or more flag letters. Next flags are defined:
n - print new line after full print statement N - print new line after each individual loop of print statement s - separate each print value with space
loopDef is expression defining how much times print body will be performed. If not specified it defaults to 1. If defined - it is counted at run time depending on real data. Loop counter is from 0 up-to loopDef. On run time current loop counter can be accessed in index expressions as $0. Outer loop statements counter is accessible as $1 for first surrounding print statement, $2 for second and so on, the last being ourselves (so same as $0).
- loopDef can be one of next:
indexExpr - countable expression (look below) with $# as surrounding loop counters, numbers, parenthesis and arithmetic operations ‘+’, ‘-‘, ‘*’.
$Data - get length of Data array (note no []).
$*Data - get length of array, which name is in $Data.
string is optional string that will be printed
pData is data variable name (can be indirect: $*pData) from which data will be printed. Full definition is below.
string, pData and other print statements can be freely mixed inside print statement body.
- indexExpr - countable expression, that can be used in print statement loop definition and
in pData (print statement data) definition. indexExpr is countable expression with $# as surrounding loop counters, numbers, parenthesis and arithmetic operations ‘+’, ‘-‘, ‘*’.
Valid indexExpr’s:
3 $2 + 1 ($1 + 1) * 2
- Data statements can be found inside Tag definition (aData), inside Tag body (dData and xData)
and print statement (pData). Data reference (without []) can be found in print loopDef. pData can not be modified - information is only retrieved from named variable. Other types of Data is dedicated to collect data from html text. All data variables are arrays. After definition (even if it occurs with ‘+’ sign) array pointer is 0. Pointer can be incremented by ‘+’ sign in variable attributes. Pointer can never be decremented. ‘-‘ sign in attributes clears variable data, leaving index unchanged. ‘!’ in attributes defines xData instead of dData. Variables can be direct:
$<<name>>[<<attr>>] - defines variable named <<name>>
and indirect:
$*<<name>>[<<attr>>] - here name of variable is kept in last element of array $<<name>>[]
Only one level of indirection is allowed. <<attr>> and behavior differs depending on variable scope (aData, dData, xData or pData). However in all scopes accessed data is same for same named variable.
For aData, dData and xData:
<<attr>> consists of optional flag with values ‘!’, +’ or ‘-‘ and optional space separated strings.
If flag is:
‘!’ - xData type variable is defined. Valid only for variables inside Tag body.
‘+’ - index value is incremented before other operations. The exception is if variable is first time defined - in this case index is left 0.
‘-‘ - all data accumulated in variable by current index is cleared before other operations.
When no flag is present, data is appended to variable by current index.
String can be enclosed in double quotation marks. This allows strings with spaces. If no strings are defined - passed data is simply added to variable.
String can be:
/<<match>>/ - if passed data not matches regular expression it is ignored. All other strings are not processed
/<<find>>/<<repl>>/ - if <<find>> regular expression matches passed data, it is replaced with <<repl>> and got data added to variable. On no match - data is ignored. Other strings are processed with all data passed to them.
@<<attrName>> - Value of specified Tag attribute is added to variable.
<<otherString>> - specified string is added to variable.
Data passed to variables is:
aData - all Tag attributes with names as name=”value”. If there is some class values they are passed as separate class=”value” pairs.
dData - all accumulated data in this and above Tag levels.
xData - all accumulated data up to first sub-tag match.
For pData:
- <<attr>> can be one of next forms:
-
<<indexExpr0>>;<<indexExpr>> <<regexps>> - for indirect variables only or
<<indexExpr>> <<regexps>> - for all variables
<<indexExpr>> - is optional array index value at which will be printed. If not specified defaults to $0
<<indexExpr0>> - is optional parent array index from which variable name is taken. Defaults to $0.
<<regexps>> is optional regular expressions in form /<<find>>/<<repl>>/ All expressions are applied to data value before print by order of appearance.
ADVANCED EXAMPLE
We will retrieve python library names from:
import sys from lieparse import lieParser from pycurl import Curl, global_init, global_cleanup, GLOBAL_ALL *<code class="xref"> $Data[+] </code> </table> :N $Data:$Data[]; # if flags are ns we will have space separated list ''' global_init(pycurl.GLOBAL_ALL) c = Curl() c.setopt(c.USERAGENT, usragent) c.setopt(c.SSL_VERIFYPEER, 0) # have problems verifying certificate under Windows c.setopt(c.URL, url) s = c.perform_rs() global_cleanup() parser = lieParser(rules) parser.feed(s) v = parser.close() if v != 0: print("Unmatched {} items".format(v), file=sys.stderr)
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/lieparse/1.0.4/ | CC-MAIN-2021-10 | en | refinedweb |
Generate Random Alphanumeric String in C++
In this tutorial, we will learn how to generate a random alphanumeric string in C++ using srand() function.
An alphanumeric string is a series of symbols containing only alphabets and numerical digits, without having any special characters. In this article, we will attempt to randomize a string of alphanumeric symbols in the C++ programming language. Here we will be using the rand() and srand() functions from the stdlib.h header file. We will also be using time.h to set the seed for the random functions being used.
Using the Random Functions
In C++, we have the declaration of the array given below:
int a[100]; std::cin>>a> int main() { for(int i = 0; i < 3; ++i) cout>>rand()>>'\t'; return 0; }
The output would be:
1681 9562 1288
The second time, the output would still be:
1681 9562 1288
No matter how many times the program, the output will always be:
1681 9562 1288> int main() { srand(time(0)); for(int i = 0; i < 3; ++i) cout>>rand()>>'\t'; return 0; }
Now the output would be:
1681 9562 1288
The second output will be different, like:
123 9331 1414
And the third output would also be distinct:
444 23124 9
Create a Random Alphanumeric String in C++
To create the alphanumeric string, we first declare a string having all possible valid values that can be taken, i.e. letters and digits.
char valid[] = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
Now, given a user-defined value of the length of the string to be created, we loop the random value as many times as required to create the required string.
#include <iostream> #include<stdlib.h> #include<time.h> using namespace std; int main() { srand(time(0)); int len; char valid[] = "1234567890ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"; char rand_array[250]; const int l = 62; // 62 is the length of string 'valid' cout<<"Enter length of random array required(upper limit = 250)\n"; cin>>len; for (int i = 0; i < len; ++i ) { rand_array[i] = valid[rand() % l]; } cout<<"The Random Array is :\n"<<rand_array; }
A sample program run would look like this:
Enter length of random array required(upper limit = 250) 25 The Random Array is : u0216qe12MkxPjeeaho0GiCVB739
Hence a random alphanumeric string of given length has been created using C++ through the use of random functions in stdlib.h and setting the seed with the help of time function within the header file time.h.
You may also like:
Generate a Matrix of Random Numbers in C++
How to fetch a random line from a text file in C++ | https://www.codespeedy.com/generate-random-alphanumeric-string-in-cpp/ | CC-MAIN-2020-45 | en | refinedweb |
i installed cuda toolkit 9.0 and downladed cudnn 7.0.5. what is the next step??
i havnt changed or appended any path files.
im getting error:
File “C:\Users\balaraman\Anaconda3\envs\tensorflow\lib\site-packages\numpy\core_init_.py”, line 16, in
from . import multiarray
ImportError: DLL load failed: The volume does not contain a recognized file system.
Please make sure that all required file system drivers are loaded and that the volume is not corrupted.
how to install tensorflow with gpu on windows?
i installed cuda toolkit 9.0 and downladed cudnn 7.0.5. what is the next step??
I didn’t try Windows install, but there are instructions:
Try go ahead with python-pip:
pip3 install --upgrade tensorflow-gpu | https://forums.developer.nvidia.com/t/how-to-install-tensorflow-with-gpu-on-windows/58770 | CC-MAIN-2020-45 | en | refinedweb |
.conf.urls.url()instances.
- the following arguments:
- An instance of
HttpRequest.
- If the matched regular expression returned no named groups, then the matches from the regular expression are provided as positional arguments.
- The keyword arguments are made up of any named groups matched by the regular expression, overridden by any arguments specified in the optional
kwargsargument to
django.conf.urls.url().
- If no regex matches, or if an exception is raised during any point in this process, Django invokes an appropriate error-handling view. See Error handling below.
Example¶
Here’s a sample URLconf:
from django.conf.urls import url from . import views urlpatterns = [ url(r'^articles/2003/$', views.special_case_2003), url(r'^articles/([0-9]{4})/$', views.year_archive), url(r'^articles/([0-9]{4})/([0-9]{2})/$', views.month_archive), url(r'^articles/([0-9]{4})/([0-9]{2})/([0-9]+)/$',/03/would match the final pattern. Django would call the function:
from django.conf.urls import]{2})/$', views.article_detail), ]
This accomplishes exactly the same thing as the previous example, with one subtle difference: The captured values are passed to view functions as keyword arguments rather than positional arguments. For example:
- A request to
/articles/2005/03/would call the function
views.month_archive(request, year='2005', month='03'), instead of
views.month_archive(request, '2005', '03').
- A request to
/articles/2003/03/03/would call the function.
Captured arguments are always strings¶
Each captured argument is sent to the view as a plain Python string, regardless of what sort of match the regular expression makes. For example, in this URLconf line:
url(r'^articles/(?P<year>.
Error handling¶
When Django can’t find a regex matching the requested URL, or when an exception is raised, Django will invoke.conf.urls import include, url urlpatterns = [ # ... snip ... url(r'^community/', include('django_website.aggregator.urls')), url(r'^contact/', include('django_website.contact.urls')), # ...(r'^$', main_views.homepage), url(r'^help/', include('apps.help.urls')), url(r'.conf.urls import.
Passing extra options to view functions¶
URLconfs have a hook that lets you pass extra arguments to your view functions, as a Python dictionary.
The
django.conf.urls.url() function can take an optional third argument
which should be a dictionary of extra keyword arguments to pass to the view
function.
For example:
from django.conf.urls import url from . import views urlpatterns = [ url(r'^blog/(?P<year>[0-9] from django.conf.urls import include,/$',.conf.urls import url from . import views urlpatterns = [ #... url(r'^articles/([0-9].urls import reverse from django.http import HttpResponseRedirect.conf.urls import include, url urlpatterns = [ url(r'^author-polls/', include('polls.urls', namespace='author-polls')), url(r'^publisher-polls/', include('polls.urls', namespace='publisher-polls')), ]
from django.conf.urls import url from . import views app_name = 'polls' urlpatterns = [ url(r'^$', views.IndexView.as_view(), name='index'), url(r'^(?P<pk>\d+)/$',>). | https://docs.djangoproject.com/en/1.10/topics/http/urls/ | CC-MAIN-2020-45 | en | refinedweb |
Description
template<class Real = double>
class chrono::ChVector< Real >
Definition of general purpose 3d vector variables, such as points in 3D.
This class implements the vectorial algebra in 3D (Gibbs products). ChVector is templated by precision, with default 'double'.
Further info at the Mathematical objects in Chrono manual page.
#include <ChVector.h>
Member Function Documentation
◆ DirToDxDyDz()
Use the Gram-Schmidt orthonormalization to find the three orthogonal vectors of a coordinate system whose X axis is this vector.
Vsingular (optional) sets the normal to the plane on which Dz must lie. is set to [1,0,0]) and return true otherwise.
◆ operator*()
Operator for element-wise multiplication.
Note that this is neither dot product nor cross product.
◆ operator/()
Operator for element-wise division.
Note that 3D vector algebra is a skew field, non-divisional algebra, so this division operation is just an element-by element division.
◆ SetNull()
Set the vector to the null vector.
Sets the vector as a null vector.
The documentation for this class was generated from the following file:
- /builds/uwsbel/chrono/src/chrono/core/ChVector.h | http://api.projectchrono.org/development/classchrono_1_1_ch_vector.html | CC-MAIN-2020-45 | en | refinedweb |
An Open Source, implementation of the Microsoft XNA 4 Framework. Our goal is to allow XNA developers to target iOS, Android, Windows 8, Mac, Linux and more!xna game-development iphone mono monotouch windows-phone windows-phone-7
This
Cross platform wrapper of OpenCV for .NET Framework.Old versions of OpenCvSharp is maintained in opencvsharp_2410.opencv wrapper dotnet-core c-sharp mono opencvsharp nuget
Python for .NET is a package that gives Python programmers nearly seamless integration with the .NET Common Language Runtime (CLR) and provides a powerful application scripting tool for .NET developers. It allows Python code to interact with the CLR, and may also be used to embed Python into a .NET application.Python for .NET allows CLR namespaces to be treated essentially as Python packages.c-sharp mono fsharp cpython clr pythonnet dotnet-framework osx dllexport winforms wsl pinvoke dlr ffi
Non-Blocking Reactive Streams Foundation for the JVM both implementing a Reactive Extensions inspired API and efficient event streaming support. Reactor 3 requires Java 8 or + to run.reactive flux mono reactive-streams flow asynchronous reactive-extensions jvm
Checkout the examples on how to do many things with Qml.Net. Register your new type with Qml.charp qml net-core mono.typescript mono compiler translator
We have large collection of open source products. Follow the tags from
Tag Cloud >>
Open source products are scattered around the web. Please provide information
about the open source projects you own / you use.
Add Projects. | https://www.findbestopensource.com/tagged/mono | CC-MAIN-2020-45 | en | refinedweb |
Physics.CapsuleCast, but this function will return all hits the capsule sweep intersects.
Casts a capsule against all colliders in the Scene and returns detailed information on each collider which was hit.
The capsule is defined by the two spheres with
radius around
point1 and
point2, which form the two ends of the capsule.
Hits are returned all colliders: For colliders that overlap the capsule.
using UnityEngine; using System.Collections;
public class ExampleClass : MonoBehaviour { void Update() { RaycastHit[] hits; CharacterController charCtrl = GetComponent<CharacterController>(); Vector3 p1 = transform.position + charCtrl.center + Vector3.up * -charCtrl.height * 0.5F; Vector3 p2 = p1 + Vector3.up * charCtrl.height;
// Cast character controller shape 10 meters forward, to see if it is about to hit anything hits = Physics.CapsuleCastAll(p1, p2, charCtrl.radius, transform.forward, 10);
// Change the material of all hit colliders // to use a transparent Shader for (int i = 0; i < hits.Length; i++) { RaycastHit hit = hits[i]; Renderer rend = hit.transform.GetComponent<Renderer>();
if (rend) { rend.material.shader = Shader.Find("Transparent/Diffuse"); Color tempColor = rend.material.color; tempColor.a = 0.3F; rend.material.color = tempColor; } } } } | https://docs.unity3d.com/ScriptReference/Physics.CapsuleCastAll.html | CC-MAIN-2020-45 | en | refinedweb |
React Redux Performance Optimization - Selectors & Reselect
In this video you will learn about selectors in Redux and how to improve their performance by using Reselect library. Let's jump right into it.
Content
Here I have a generated create-react-app project with redux-connect. As you can see here we have 2 reducers and we have 2 actions: change username and add user. When we type something in input we change the username field in reducer and when we click on Add button we save user in our reducer.
It may seem as a lot of already written code so if some parts are not clear for you I will link my previous video about reducers here on the top.
Now let's talk about selectors. So as you can see normally we just write in mapStateToProps fields that we need from state. This is completely fine but just imagine that we have several components where we connect to the same data. First of all it's code duplicate and secondly it's easier to make an error because we don't use a single function but we each time write the full path.
To avoid this we can use selectors. What is selector? It's just a function which returns the part of the state. Let's create a selectors.js file in store and a selector to get users array from our state.
src/store/selectors.js
export const usersSelector = (state) => state.users.users;
As you can see there is no magic here. It's just a function which gets the global state and returns some part of it. The important thing is a codestyle. Normally we have a postfix selector in the name of we start the word with selectSomething.
Now we can use this selector in our App component.
import { usersSelector } from "./store/selectors"; const mapStateToProps = (state) => { return { users: usersSelector(state) }; };
So we just moved getting of some properties to selector and it works exactly like before but we can now reuse the same function everywhere.
Let's try a more complex example. For example we want to have a search input and get a filtered array of all users that we have.
App.js
handleSearch = (e) => { this.props.dispatch({ type: "CHANGE_SEARCH", payload: e.target.value }); }; <input type="text" placeholder="Search" value={this.props.search} onChange={this.handleSearch} />
We also need to add this field in our reducer.
src/store/reducers/users.js
const initialState = { users: [], username: "", search: "", }; const reducer = (state = initialState, action) => { switch (action.type) { ... case "CHANGE_SEARCH": return { ...state, search: action.payload, }; default: return state; } };
As you can see now our search property changes when we type in our input. So we have in reducer a searched string and our users. Which means we can filter users on the fly inside mapStateToProps.
const filteredUsers = state.users.users.filter((user) => { console.log("filtering users..."); return user.includes(state.users.search); }); return { users: usersSelector(state), filteredUsers: filteredUsers }
As you can see everything is working and every time when we type we get a filtered users array.
But there is a super important thing to remember. Every time when our Redux state changes all mapStateToProps are being called. Doesn't matter on what properties you are subscribed. They are all being called. This is why with each letter that we type we see console.log of calling mapStateToProps.
Which is actually fine if we just select properties from the object. Because if this properties didn't change React won't rerender the component. But it's not find if we make some additional calculation. Because in our case now our filtering users is being called every single time when we change the state. This is really bad because if we have lots of data it will take a lot of time.
But we have a solution for that. What we want to do is define when we must recalculate our filtered users. For this we must install an additional library which does exactly that and it is the recommended solution from Redux.
yarn add reselect
Now let's move our filtering in selector. First of all because it make our code cleaner and secondly it will be easier to use reselect there.
selectors.js
export const filteredUsersSelector = (state) => { return state.users.users.filter((user) => { console.log("filtering users..."); return user.includes(state.users.search); }); };
So everything is working as before but at least our mapStateToProps is clean.
Now the question how reselect helps us to make filter less often? It memoizes value. Memoization means that the calculation of our filter is being cached until some of the properties that we define won't change. In our case to calculate filtered users we need users property from the state and search property. So if this 2 properties didn't change then we will get a cached value back.
import { createSelector } from "reselect"; export const filteredUsersSelector = createSelector( (state) => state.users.users, (state) => state.users.search, (users, search) => { return users.filter((user) => { console.log("filtering users..."); return user.includes(search); }); } );
So here we first define as a functions all dependencies of our calculations. Then in the last function we make calculations based on all properties that we defined before. The syntax may look scary but it's just 1 or several functions which define our dependencies and then the last function with calculations.
Now as you can see in browser our filtering is not called when we change username in state. So any changes in state except of this 2 properties that we defined are completely ignored. Only when users array or search changes, we recalculate our data.
But now I think you want to ask why we are not memoizing every selector that we have? Because it also takes performance to store a value and check if dependencies changed and it doesn't make any sense to make it for simple selections on the state.
Call to action
So here are important points to remember.
- Defining selectors is good because they are reusable and our mapStateToProps looks cleaner and we can easily test our selectors
- Memoization without additional library is not possible but extremely needed when you have calculations that you want to make
- Reselect library is awesome to solve exactly this problems. | https://monsterlessons-academy.com/p/react-redux-performance-optimizations-selectors-and-reselect | CC-MAIN-2021-31 | en | refinedweb |
Mass assignment, also known as over-posting, is an attack used on websites that involve some sort of model-binding to a request. It is used to set values on the server that a developer did not expect to be set. This is a well known attack now, and has been discussed many times before, (it was a famous attack used against GitHub some years ago), but I wanted to go over some of the ways to prevent falling victim to it in your ASP.NET Core applications.
How does it work?
Mass assignment typically occurs during model binding as part of MVC. A simple example would be where you have a form on your website in which you are editing some data. You also have some properties on your model which are not editable as part of the form, but instead are used to control the display of the form, or may not be used at all.
For example, consider this simple model:
public class UserModel { public string Name { get; set; } public bool IsAdmin { get; set; } }
It has two properties, but we only actually going to allow the user to edit the
Name property - the
IsAdmin property is just used to control the markup they see:
@model UserModel <form asp- <div class="form-group"> <label asp-</label> <input class="form-control" type="TextBox" asp- </div> <div class="form-group"> @if (Model.IsAdmin) { <i>You are an admin</i> } else { <i>You are a standard user</i> } </div> <button class="btn btn-sm" type="submit">Submit</button> </form>
So the idea here is that you only render a single
input tag to the markup, but you post this to a method that uses the same model as you used for rendering:
[HttpPost] public IActionResult Vulnerable(UserModel model) { return View("Index", model); } , a malicious user can set the
IsAdmin field to
true. The model binder will dutifully bind the value, and you have just fallen victim to mass assignment/over posting:
Defending against the attack
So how can you prevent this attack? Luckily there's a whole host of different ways, and they are generally the same as the approaches you could use in the previous version of ASP.NET. I'll run through a number of your options here.
1. Use
BindAttribute on the action method
Seeing as the vulnerability is due to model binding, our first option is to use the
BindAttribute:
public IActionResult Safe1([Bind(nameof(UserModel.Name))] UserModel model) { return View("Index", model); }
The
BindAttribute lets you whitelist only those properties which should be bound from the incoming request. In our case, we have specified just
Name, so even if a user provides a value for
IsAdmin, it will not be bound. This approach works, but is not particularly elegant, as it requires you specify all the properties that you want to bind.
2. Use
[Editable] or
[BindNever] on the model
Instead of applying binding directives in the action method, you could use
DataAnnotations on the model instead.
DataAnnotations are often used to provide additional metadata on a model for both generating appropriate markup and for validation.
For example, our
UserModel might actually be already decorated with some data annotations for the
Name property:
public class UserModel { [MaxLength(200)] [Display(Name = "Full name")] [Required] public string Name { get; set; } [Editable(false)] public bool IsAdmin { get; set; } }
Notice that as well as the
Name attributes, I have also added an
EditableAttribute. This will be respected by the model binder when the post is made, so an attempt to post to
IsAdmin will be ignored.
The problem with this one is that although applying the
EditableAttribute to the
IsAdmin produces the correct output, it may not be semantically correct in general. What if you can edit the
IsAdmin property in some cases? Things can just get a little messy sometimes.
As pointed out by Hamid in the comments, the
[BindNever]attribute is a better fit here. Using
[BindNever]in place of
[Editable(false)]will prevent binding without additional implications.
3. Use two different models
Instead of trying to retrofit safety to our models, often the better approach is conceptually a more simple one. That is to say that our binding/input model contains different data to our view/output model. Yes, they both have a
Name property, but they are encapsulating different parts of the system so it could be argued they should be two different classes:
public class BindingModel { [MaxLength(200)] [Display(Name = "Full name")] [Required] public string Name { get; set; } } public class UserModel { [MaxLength(200)] [Display(Name = "Full name")] [Required] public string Name { get; set; } [Editable(false)] public bool IsAdmin { get; set; } }
Here our
BindingModel is the model actually provided to the action method during model binding, while the
UserModel is the model used by the View during HTML generation:
public IActionResult Safe3(BindingModel bindingModel) { var model = new UserModel(); // can be simplified using AutoMapper model.Name = bindingModel.Name; return View("Index", model); }
Even if the
IsAdmin property is posted, it will not be bound as there is no
IsAdmin property on
BindingModel. The obvious disadvantage to this simplistic approach is the duplication this brings, especially when it comes to the data annotations used for validation and input generation. Any time you need to, for example, update the max string length, you need to remember to do it in two different places.
This brings us on to a variant of this approach:
4. Use a base class
Where you have common properties like this, an obvious choice would be to make one of the models inherit from the other, like so:
public class BindingModel { [MaxLength(200)] [Display(Name = "Full name")] [Required] public string Name { get; set; } } public class UserModel : BindingModel { public bool IsAdmin { get; set; } }
This approach keeps your models safe from mass assignment attacks by using different models for model binding and for View generation. But compared to the previous approach, you keep your validation logic DRY.
public IActionResult Safe4(BindingModel bindingModel) { // do something with the binding model // when ready to display HTML, create a new view model var model = new UserModel(); // can be simplified using e.g. AutoMapper model.Name = bindingModel.Name; return View("Index", model); }
There is also a variation of this approach which keeps your models completely separate, but allows you to avoid duplicating all your data annotation attributes by using the
ModelMetadataTypeAttribute.
5. Use
ModelMetadataTypeAttribute
The purpose of this attribute is to allow you defer all the data annotations and additional metadata about you model to a different class. If you want to keep your
BindingModel and
UserModel hierarchically distinct, but also son't want to duplicate all the
[MaxLength(200)] attributes etc, you can use this approach:
[ModelMetadataType(typeof(UserModel))] public class BindingModel { public string Name { get; set; } } public class UserModel { [MaxLength(200)] [Display(Name = "Full name")] [Required] public string Name { get; set; } public bool IsAdmin { get; set; } }
Note that only the
UserModel contains any metadata attributes, and that there is no class hierarchy between the models. However the MVC model binder will use the metadata of the equivalent properties in the
UserModel when binding or validating the
BindingModel.
The main thing to be aware of here is that there is an implicit contract between the two models now - if you were to rename
Name on the
UserModel, the
BindingModel would no longer have a matching contract. There wouldn't be an error, but the validation attributes would no longer be applied to
BindingModel.
Summary
This was a very quick run down of some of the options available to you to prevent mass assignment. Which approach you take is up to you, though I would definitely suggest using one of the latter 2-model approaches. There are other options too, such as doing explicit binding via
TryUpdateModelAsync<> but the options I've shown represent some of the most common approaches. Whatever you do, don't just blindly bind your view models if you have properties that should not be edited by a user, or you could be in for a nasty surprise.
And whatever you do, don't bind directly to your
EntityFramework models. Pretty please. | https://andrewlock.net/preventing-mass-assignment-or-over-posting-in-asp-net-core/ | CC-MAIN-2021-31 | en | refinedweb |
diff --git a/go/go/forms.py b/go/go/forms.py index c6b418138b499287236397f95311c9f08123ae01..db898d4eeca4c50bac347974f194e1d016428614 100644 --- a/go/go/forms.py +++ b/go/go/forms.py @@ -14,7 +14,7 @@ from django.utils import timezone from django.utils.safestring import mark_safe # App Imports -from .models import URL, RegisteredUser +from .models import URL # Other Imports # from bootstrap3_datetime.widgets import DateTimePicker @@ -275,85 +275,3 @@ class EditForm(URLForm): class Meta(URLForm.Meta): # what attributes are included fields = URLForm.Meta.fields - -class SignupForm(ModelForm): - """ - The form that is used when a user is signing up to be a RegisteredUser - """ - - # The full name of the RegisteredUser - full_name = CharField( - required=True, - label='Full Name (Required)', - max_length=100, - widget=TextInput(), - help_text="We can fill in this field based on information provided by.", - ) - - # The RegisteredUser's chosen organization - organization = CharField( - required=True, - label='Organization (Required)', - max_length=100, - widget=TextInput(), - help_text="Or whatever \"group\" you would associate with on campus.", - ) - - # The RegisteredUser's reason for signing up to us Go - description = CharField( - required=False, - label='Description (Optional)', - max_length=200, - widget=Textarea(), - help_text="Describe what type of links you would intend to create with Go.", - ) - - # A user becomes registered when they agree to the TOS - registered = BooleanField( - required=True, - # ***Need to replace lower url with production URL*** - # ie. go.gmu.edu/about#terms - label=mark_safe( - 'Do you accept the Terms of Service?' - ), - help_text="Esssentially the GMU Responsible Use of Computing policies.", - ) - - def __init__(self, request, *args, **kwargs): - """ - On initialization of the form, crispy forms renders this layout - """ - - # Necessary to call request in forms.py, is otherwise restricted to - # views.py and models.py - self.request = request - super(SignupForm, self).__init__(*args, **kwargs) - self.helper = FormHelper() - self.helper.form_class = 'form-horizontal' - self.helper.label_class = 'col-md-4' - self.helper.field_class = 'col-md-6' - - self.helper.layout = Layout( - Fieldset('', - Div( - # Place in form fields - Div( - 'full_name', - 'organization', - 'description', - 'registered', - css_class='well'), - - # Extras at bottom - StrictButton('Submit',css_class='btn btn-primary btn-md col-md-4', type='submit'), - css_class='col-md-6'))) - - class Meta: - """ - Metadata about this ModelForm - """ - - # what model this form is for - model = RegisteredUser - # what attributes are included - fields = ['full_name', 'organization', 'description', 'registered'] diff --git a/go/go/templates/core/signup.html b/go/go/templates/core/signup.html deleted file mode 100644 index a5feb00bd513b486a6ab4349374bb5275cc0b56d..0000000000000000000000000000000000000000 --- a/go/go/templates/core/signup.html +++ /dev/null @@ -1,51 +0,0 @@ - -{% extends 'layouts/base.html' %} - - -{% load crispy_forms_tags %} - - -{% block title %} -SRCT Go •. -
- Because Go allows users to represent their group or organization with - George Mason branding, user accounts must be manually approved by a Go - official. --
- {% if not request.user.registereduser.registered %}
- If you have not done so, you may sign up
- for an account.
-
- This process takes time. Please be patient. - {% else %} - According to our database you have registered, but are not yet approved, to use Go. -
- This process takes time. Please be patient. - {% endif %} -
- Your registration request has been sent!
-
- You will recieve a confirmation email when your application has been received - and once it has been approved. -
- Please be patient, our administrators will handle your application as soon as they can. - | https://git.gmu.edu/srct/go/-/commit/314acd1d4566cc8e70d4b2dc5427a9a39ca08e50.diff | CC-MAIN-2021-31 | en | refinedweb |
Refactoring for everyone
How and why to use Eclipse's automated refactoring features
Why refactor?
Refactoring is changing the structure of a program without changing its functionality. Refactoring is a powerful technique, but it needs to be performed carefully. The main danger is that errors can inadvertently be introduced, especially when refactoring is done by hand. This danger leads to a common criticism of refactoring: why fix code if it isn't broken?
There are several reasons that you might want to refactor code. The first is the stuff of legend: The hoary old code base for a venerable product is inherited or otherwise mysteriously appears. The original development team has disappeared. A new version, with new features, must be created, but the code is no longer comprehensible. The new development team, working night and day, deciphers it, maps it, and after much planning and design, tears the code to shreds. Finally, painstakingly, they put it all back together according to the new vision. This is refactoring on a heroic scale and few have lived to tell this tale.
A more realistic scenario is that a new requirement is introduced to a project that requires a design change. It's immaterial whether this requirement was introduced because of an inadvertent oversight in the original plan or because an iterative approach (such as agile or test-driven development) is being used that deliberately introduces requirements throughout the development process. This is refactoring on a much smaller scale, and it generally involves altering the class hierarchy, perhaps by introducing interfaces or abstract classes, splitting classes, rearranging classes, and so on.
A final reason for refactoring, when automated refactoring tools are available, is simply as a shortcut for generating code in the first place -- something like using a spellchecker to type a word for you when you aren't certain how it's spelled. This mundane use of refactoring -- for generating getter and setter methods, for example -- can be an effective time saver once you are familiar with the tools.
Eclipse's refactoring tools aren't intended to be used for refactoring at a heroic scale -- few tools are -- but they are invaluable for making code changes in the course of the average programmer's day, whether that involves agile development techniques or not. After all, any complicated operation that can be automated is tedium that can be avoided. Knowing what refactoring tools Eclipse makes available, and understanding their intended uses, will greatly improve your productivity.
There are two important ways that you can reduce the risk of breaking code. One way is to have a thorough set of unit tests for the code: the code should pass the tests both before and after refactoring. The second way is to use an automated tool, such as Eclipse's refactoring features, to perform this refactoring.
The combination of thorough testing and automated refactoring is especially powerful and has transformed this once mysterious art to a useful, everyday tool. This ability to change the structure of your code without altering its functionality, quickly and safely, in order to add functionality or improve its maintainability can dramatically affect the way you design and develop code, whether you incorporate it into a formal agile methodology or not.
Types of refactoring in Eclipse
Eclipse's refactoring tools can be grouped into three broad categories (and this is the order in which they appear in the Refactoring menu):
- Changing the name and physical organization of code, including renaming fields, variables, classes, and interfaces, and moving packages and classes
- Changing the logical organization of code at the class level, including turning anonymous classes into nested classes, turning nested classes into top-level classes, creating interfaces from concrete classes, and moving methods or fields from a class to a subclass or superclass
- Changing the code within a class, including turning local variables into class fields, turning selected code in a method into a separate method, and generating getter and setter methods for fields
Several refactorings don't fit neatly into these three categories, particularly Change Method Signature, which is included in the third category here. Apart from this exception, the sections that follow will discuss Eclipse's refactoring tools in this order.
Physical reorganization and renaming
You can obviously rename or move files around in the file system without a special tool, but doing so with Java source files may require
that you edit many files to update
import or
package statements. Similarly, you can easily rename classes, methods, and variables using a text editor's search and replace functionality, but you need to do this with care, because different classes may have like-named methods or variables; it can be tedious to go through all the files in a project making the sure that every instance is correctly identified and changed.
Eclipse's Rename and Move are able to makes these changes intelligently, throughout the entire project, without user intervention, because Eclipse understands the code semantically and is able to identify references to a specific method, variable, or class names. Making this task easy helps ensure that method, variable, and class names indicate their intent clearly.
It's quite common to find code that has inappropriate or misleading names because the code was changed to work differently than
originally planned. For example, a program that looks for specific words in a file might be extended to work with Web pages by using the
URL class to obtain an
InputStream. If this input stream was called
file originally, it should be changed to reflect its new more general nature, perhaps to
sourceStream. Developers often fail to make changes like this because it can be a messy and tedious process. This, of course, makes the code confusing to the next developer who must work with it.
To rename a Java element, simply click on it in the Package Explorer view or select it in a Java source file, then select Refactor > Rename. In the dialog box, select the new name and choose whether Eclipse should also change references to the name. The exact fields that are displayed depend on the type of element that you select. For example, if you select a field that has getter and setter methods, you can also update the names of these methods to reflect the new field. Figure 1 shows a simple example.
Figure 1. Renaming a local variable
Like all Eclipse refactorings, after you have specified everything necessary to perform the refactoring, you can press Preview to see the changes that Eclipse proposes to make, in a comparison dialog, which lets you veto or approve each change in each affected file, individually. If you have confidence in Eclipse's ability to perform the change correctly, you can instead just press OK. Obviously, if you are uncertain what a refactoring will do, you'll want to preview first, but this isn't usually necessary for simple refactorings like Rename and Move.
Move works very much like Rename: You select a Java element (usually a class), specify its new location, and specify whether references should also be updated. You can then choose Preview to examine the changes or OK to carry out the refactoring immediately as shown in Figure 2.
Figure 2. Moving a class from one package to another
On some platforms (notably Windows), you can also move classes from one package or folder to another by simply dragging and dropping them in the Package Explorer view. All references will be updated automatically.
Redefining class relationships
A large set of Eclipse's refactorings let you alter your class relationships automatically. These refactorings aren't as generally useful as the other types of refactorings that Eclipse has to offer, but are valuable because they perform fairly complex tasks. When they're useful, they're very useful.
Promoting anonymous and nested classes
Two refactorings, Convert Anonymous Class to Nested and Convert Nested Type to Top Level, are similar in that they move a class out of its current to scope to the enclosing scope.
An anonymous class is a kind of syntactic shorthand that lets you instantiate a class implementing an abstract class or interface where you need it, without having to explicitly give it a class name. This is commonly used when creating listeners in the user interface, for example. In Listing 1, assume that Bag is an interface defined elsewhere that declares two methods,
get() and
set().
Listing 1. Bag class
public class BagExample { void processMessage(String msg) { Bag bag = new Bag() { Object o; public Object get() { return o; } public void set(Object o) { this.o = o; } }; bag.set(msg); MessagePipe pipe = new MessagePipe(); pipe.send(bag); } }
When an anonymous class becomes so large that the code becomes difficult to read, you should consider making the anonymous class a
proper class; to preserve encapsulation (in other words, to hide it from outside classes that don't need to know about it), you should make this a
nested class rather than a top-level class. You can do this by clicking inside the anonymous class and selecting Refactor > Convert Anonymous Class to Nested. Enter a name for the class, such as
BagImpl, when prompted and then select either Preview or OK. This will change the code as shown in Listing 2.
Listing 2. Refactored Bag class
public class BagExample { private final class BagImpl implements Bag { Object o; public Object get() { return o; } public void set(Object o) { this.o = o; } } void processMessage(String msg) { Bag bag = new BagImpl(); bag.set(msg); MessagePipe pipe = new MessagePipe(); pipe.send(bag); } }
Convert Nested Type to Top Level is useful when you want to make a nested class available to other classes. You might, for example, be
using a value object inside a class -- such as the
BagImpl class above. If you later decide that this data should be shared between classes, this refactoring will create a new class file from the nested class. You can do this by highlighting the class name in the source file (or clicking on the class name in the Outline view) and selecting Refactor > Convert Nested Type to Top Level.
This refactoring will ask you to provide a name for the enclosing instance. It may offer a suggestion, such as
example, which you can accept. The meaning of this will be clear in a moment. After pressing OK, the code for the enclosing
BagExample class will be changed as shown in Listing 3.
Listing 3. Refactored Bag class
public class BagExample { void processMessage(String msg) { Bag bag = new BagImpl(this); bag.set(msg); MessagePipe pipe = new MessagePipe(); pipe.send(bag); } }
Note that when a class is nested, it has access to the outer class's members. To preserve this functionality, the refactoring will add an instance of the enclosing class BagExample to the formerly nested class. This is the instance variable you were previously asked to provide a name for. It also creates a constructor that sets this instance variable. The new BagImpl class that the refactoring creates is shown in Listing 4.
Listing 4. BagImpl class
final class BagImpl implements Bag { private final BagExample example; /** * @paramBagExample */ BagImpl(BagExample example) { this.example = example; // TODO Auto-generated constructor stub } Object o; public Object get() { return o; } public void set(Object o) { this.o = o; } }
If you don't need to preserve access to the
BagExample class, as is the case here, you can safely remove the instance variable and the constructor, and change the code in the
BagExample class to the default no-arg constructor.
Moving member within the class hierarchy
Two other refactorings, Push Down and Pull Up, move class methods or fields from a class to its subclass or superclass, respectively.
Suppose you have an abstract class
Vehicle, defined as follows in Listing 5.
Listing 5. Abstract Vehicle class
public abstract class Vehicle { protected int passengers; protected String motor; public int getPassengers() { return passengers; } public void setPassengers(int i) { passengers = i; } public String getMotor() { return motor; } public void setMotor(String string) { motor = string; } }
You also have a subclass of
Vehicle called
Automobile as shown in Listing 6.
Listing 6. Automobile class
public class Automobile extends Vehicle { private String make; private String model; public String getMake() { return make; } public String getModel() { return model; } public void setMake(String string) { make = string; } public void setModel(String string) { model = string; } }
Notice that one of the attributes of
Vehicle is
motor. This is fine if you know that you will only ever deal with motorized vehicles, but if you want to allow for things like rowboats, you may want to push the
motor attribute down from the
Vehicle class into the Automobile class. To do this, select
motor in the Outline view, then select Refactor > Push Down.
Eclipse is smart enough to realize that you can't always move a field by itself and provides a button Add Required, but this
doesn't always work correctly in Eclipse 2.1. You need to verify that any methods that depend on this field are also pushed down. In this case there are
two, the getter and setter methods that accompany the
motor field, as shown
in Figure 3.
Figure 3. Adding required members
After pressing OK, the
motor field and the
getMotor() and
setMotor() methods will be moved to the
Automobile class. Listing 7 shows what the
Automobile class looks like after this refactoring.
Listing 7. Refactored Automobile class
public class Automobile extends Vehicle { private String make; private String model; protected String motor; public String getMake() { return make; } public String getModel() { return model; } public void setMake(String string) { make = string; } public void setModel(String string) { model = string; } public String getMotor() { return motor; } public void setMotor(String string) { motor = string; } }
The Pull Up refactoring is nearly identical to Push down, except, of course, that it moves class members from a class to its superclass
instead of subclass. You might use this if you later changed your mind and decided to move
motor back to the
Vehicle class. The same warning about making sure that you select all required members applies.
Having motor in the
Automobile class means that if you create other subclasses of
Vehicle, such as
Bus, you'll need to add
motor (and its associated methods) to the
Bus class too. One way of representing relationships like this is to create an interface,
Motorized, which
Automobile and
Bus would implement, but
RowBoat would not.
The easiest way to create the
Motorized interface is to use the Extract Interface refactoring on
Automobile. To do this, select the
Automobile class in the Outline view and then choose Refactor > Extract Interface from the menu. The dialog will allow you to select which methods you want included in the interface as shown in Figure 4.
Figure 4. Extracting the Motorized interface
After selecting OK, an interface is created, as shown in Listing 8.
Listing 8. Motorized interface
public interface Motorized { public abstract String getMotor(); public abstract void setMotor(String string); }
And the class declaration for
Automobile is altered as follows:
public class Automobile extends Vehicle implements Motorized
Using a supertype
The final refactoring included in this category is Use Supertype Where Possible. Consider an application that manages an inventory of
automobiles. Throughout, it uses objects of type
Automobile. If you wanted to be able to handle all types of vehicles, you could use this refactoring to change references to
Automobile to references to Vehicle (see Figure 5). If you perform any
type-checking in your code using the
instanceof operator, you will need to determine whether it is appropriate to use the specific type or the supertype and check the first option, Use the selected supertype in 'instanceof' expressions, appropriately.
Figure 5. Changing Automobile to its supertype, Vehicle
The need for using a supertype arises frequently in the Java language, especially when the Factory Method pattern is used. Typically this is
implemented by having an abstract class that has a static
create() method that returns a concrete object implementing the abstract class. This can be useful if the type of concrete object that must be created depends on implementation details that are of no interest to client classes.
Changing code within a class
The largest variety of refactorings are those that reorganize code within a class. Among other things, these allow you to introduce (or remove) intermediate variables, create a new method from a portion of an old one, and create getter and setter methods for a field.
Extracting and inlining
There are several refactorings beginning with the word Extract: Extract Method, Extract Local Variable, and Extract Constants. The first one, Extract Method, as you might expect, will create a new method from code you've selected. Take, for example, the
main() method in the class in Listing 8. It evaluates command-line options and if it finds any that begin with
-D, stores them as name-value pairs in a
Properties object.
Listing 8. main()
import java.util.Properties; import java.util.StringTokenizer; public class StartApp { public static void main(String[] args) { Properties props = new Properties(); for (int i= 0; i < args.length; i++) { if(args[i].startsWith("-D")) { String s = args[i].substring(2); StringTokenizer st = new StringTokenizer(s, "="); if(st.countTokens() == 2) { props.setProperty(st.nextToken(), st.nextToken()); } } } //continue... } }
There are two main cases where you might want to take some code out of a method and put it in another method. The first case is if the method is too long and does two or more logically distinct operations. (We don't know what else this
main() method does, but from the evidence we see here, that's not a reason for extracting a method here.) The second case is if there is a logically distinct section of code that can be re-used by other methods. Sometimes, for example, you find yourself repeating several lines of code in several different methods. That's a possibility in this case, but you probably wouldn't perform this refactoring until you actually needed to re-use this
code.
Assuming there is another place where you need to parse name-value pairs and add them to a
Properties object, you could extract the section of code that includes the
StringTokenizer declaration and following
if clause. To do this, highlight this code and then select Refactor > Extract Method from the menu. You'll be prompted for a method name; enter
addProperty, and then verify that the method has two parameters,
Properties prop and
Strings. Listing 9 shows the class after Eclipse extracts the method
addProp().
Listing 9. addProp() extracted
import java.util.Properties; import java.util.StringTokenizer; public class Extract { public static void main(String[] args) { Properties props = new Properties(); for (int i = 0; i < args.length; i++) { if (args[i].startsWith("-D")) { String s = args[i].substring(2); addProp(props, s); } } } private static void addProp(Properties props, String s) { StringTokenizer st = new StringTokenizer(s, "="); if (st.countTokens() == 2) { props.setProperty(st.nextToken(), st.nextToken()); } } }
The Extract Local Variable refactoring takes an expression that is being used directly and assigns it to a local variable first. This variable is then used where the expression used to be. For example, in the
addProp() method above, you can highlight the first call to
st.nextToken() and select Refactor > Extract Local Variable. You will be prompted to provide a variable; enter
key. Notice that there is an option to replace all occurrences of the selected expression with references to the new variable. This is often appropriate, but not in this case of the
nextToken() method, which (obviously) returns a different value each time it is called. Make sure this option is not selected; see Figure 6.
Figure 6. Don't replace all occurrences of selected expression
Next, repeat this refactoring for the second call to
st.nextToken(), this time calling the new local variable
value. Listing 10 shows the code after these two refactorings.
Listing 10. Refactored code
private static void addProp(Properties props, String s) { StringTokenizer st = new StringTokenizer(s, "="); if(st.countTokens() == 2) { String key = st.nextToken(); String value = st.nextToken(); props.setProperty(key, value); } }
Introducing variables in this way provides several benefits. First, by providing meaningful names to the expressions, it makes explicit what the code is doing. Second, it makes it easier to debug the code, because we can easily inspect the values that the expressions return. Finally, in cases where multiple instances of an expression can be replaced with a single variable, this can be more efficient.
Extract Constant is similar to Extract Local Variable, but you must select a static, constant expression, which the refactoring will convert to a static final constant. This is useful for removing hard-coded numbers and strings from your code. For example, in the code above we used
-D" for the command line option defining a name-value pair. Highlight
-D" in the code, select Refactor > Extract Constant, and enter
DEFINE as the name of the constant. This refactoring will change the code as shown in Listing 11.
Listing 11. Refactored code
public class Extract { private static final String DEFINE = "-D"; public static void main(String[] args) { Properties props = new Properties(); for (int i = 0; i < args.length; i++) { if (args[i].startsWith(DEFINE)) { String s = args[i].substring(2); addProp(props, s); } } } // ...
For each Extract... refactoring, there is a corresponding Inline... refactoring that performs the reverse operation.
For example, if you
highlight the variable s in the code above, select Refactor > Inline..., then press OK, Eclipse use the expression
args[i].substring(2) directly in the call to
addProp() as follows:
if(args[i].startsWith(DEFINE)) { addProp(props,args[i].substring(2)); }
This can be marginally more efficient than using a temporary variable and, by making the code terser, makes it either easier to read or more cryptic, depending on your point of view. Generally, however, inlining like this does not have much to recommend it.
In the same way that you can replace a variable with an inline expression, you can also highlight a method name or a static final constant. Select Refactor > Inline... from the menu, and Eclipse will replace method calls with the method code, or references to the constant with the constants value, respectively.
Encapsulating fields
It's generally not considered good practice to expose the internal structure of your objects. That's why the
Vehicle class, and its subclasses, have either private or protected fields, and public setter and getter methods to provide access. These methods can be generated automatically in two different ways.
One way to generate these methods is to use the Source > Generate Getter and Setter. This will display a dialog box with the proposed getter and setter methods for each field that does not already have one. This is not a refactoring, though, because it does not update references to the fields to use the new methods; you'll need to that yourself if necessary. This option is a great time saver, but it's best used when creating a class initially, or when adding new fields to a class, because no other code references these fields yet, so there's no other code to change.
The second way to generate getter and setter methods is to select the field and then choose Refactor > Encapsulate Field from the menu. This method only generates getters and setters for a single field at a time, but in contrast to Source > Generate Getter and Setter, it also changes references to the field into calls to the new methods.
For example, start fresh with a new, simple version of the
Automobile class, as shown in Listing 12.
Listing 12. Simple Automobile class
public class Automobile extends Vehicle { public String make; public String model; }
Next, create a class that instantiates
Automobile and accesses the
make field directly, as shown in Listing 13.
Listing 13. Instantiate Automobile
public class AutomobileTest { public void race() { Automobilecar1 = new Automobile(); car1.make= "Austin Healy"; car1.model= "Sprite"; // ... } }
Now encapsulate the
make field by highlighting the field name and selecting Refactor > Encapsulate Field. In the dialog, enter names for the getter and setter methods -- as you might expect, these are
getMake() and
setMake() by default. You can also choose whether methods that are in the same class as the field will continue to access the field directly or whether these references will be changed to use the access methods like all other classes. (Some people have a strong preference one way or the other, but as it happens, it doesn't matter what you choose in this case, since there are no references to the
make field in
Automobile). See Figure 7.
Figure 7. Encapsulating a field
After pressing OK, the
make field in the Automobile class will be private and will have
getMake() and
setMake() methods as shown in Listing 14.
Listing 14. Refactored Automobile class
public class Automobile extends Vehicle { private String make; public String model; public void setMake(String make) { this.make = make; } public String getMake() { return make; } }
The
AutomobileTest class will also be updated to use the new access methods, as shown in Listing 15.
Listing 15. AutomobileTest class
public class AutomobileTest { public void race() { Automobilecar1 = new Automobile(); car1.setMake("Austin Healy"); car1.model= "Sprite"; // ... } }
Change Method Signature
The final refactoring considered here is the most difficult to use: Change Method Signature. It's fairly obvious what this does -- change the parameters, visibility, and return type of a method. What isn't so obvious is the effect these changes have on the method or on the code that calls the method. There is no magic here. If the changes cause problems in the method being refactored -- because it leaves undefined variables or mismatched types -- the refactoring operations will flag these. You have the option to accept the refactoring anyway and correct the problems afterwards, or cancel the refactoring. If the refactoring causes problems in other methods, these are ignored and you must fix them yourself after the refactoring.
To clarify this, consider the following class and method in Listing 16.
Listing 16. MethodSigExample class
public class MethodSigExample { public int test(String s, int i) { int x = i + s.length(); return x; } }
The method
test() in the class above is called by a method in another class, as shown in Listing 17.
Listing 17. callTest method
public void callTest() { MethodSigExample eg = new MethodSigExample(); int r = eg.test("hello", 10); }
Highlight
test in the first class and select Refactor > Change Method Signature. The dialog box in Figure 8 will appear.
Figure 8. Change Method Signature options
The first option is to change the method's visibility. In
this example, changing it to protected or private would prevent the
callTest() method in the second class from accessing. (If they were in separate packages, changing access to default would also cause this problem.) Eclipse will not flag this error while performing the refactoring; it's up to you to select an appropriate value.
The next option is to change the return type. Changing the return type to
float, for example, isn't flagged as an error because an
int in the
test() method's return statement is automatically promoted to
float. Nonetheless, this will result in a problem in the
callTest() in the second class, because a
float cannot be converted to
int. You will need to either cast the return value returned by
test() to
int or change the type of
r in
callTest() to
float.
Similar considerations apply if we change the type of the first parameter from
String to
int. This will be flagged during refactoring because it causes a problem in the method being refactored:
int does not have a
length() method. Changing it to
StringBuffer, however, will not be flagged as problem, because it does have a
length() method. This will, of course, cause a problem in the
callTest() method, because it is still passing a
String when it calls
test().
As mentioned previously, in cases where this refactoring results in an error, whether flagged or not, you can continue by simply correcting the errors on a case-by-case basis. Another approach is to preempt errors. If you want to remove the parameter
i, because it's unneeded, you could start by removing references to it in the method being refactored. Removing the parameter will then go more smoothly.
One final thing to be explained is the Default Value option. This is only used when a parameter is being added to the method signature. It
is used to provide a value when the parameter is added to callers. For example, if we add a parameter of type
String, with a name
n, and a default value of
world, the call to
test() in the
callTest() method will be changed as follows:
public void callTest() { MethodSigExample eg = new MethodSigExample(); int r = eg.test("hello", 10, "world"); }
The point to take away from this seemingly dire discussion about the Change Method Signature refactoring is not that it is problematic, but rather that it is a powerful, time-saving refactoring that often requires thoughtful planning to be used successfully.
Summary
Eclipse's tools make refactoring easy, and becoming familiar with them can help you improve your productivity. Agile development methods, which add program features iteratively, depend on refactoring as a technique to alter and extend a program's design. But even if you are not using a formal method that requires refactoring, Eclipse's refactoring tools provide a time-saving way to make common types of code changes. Taking some time to become familiar with them so that you can recognize the situations where they can be applied is a worthwhile investment of your time.
Downloadable resources
Related topics
- The key text on refactoring is Refactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant, William Opdyke, and Don Roberts (Addison-Wesley, 1999).
- Refactoring, as an ongoing process, is discussed by the author in the context of designing and developing a project in Eclipse in Eclipse In Action: A Guide for Java Developers, by David Gallardo, Ed Burnette, and Robert McGovern (Manning, 2003).
- Patterns (such as the Factory Method mentioned in this article) are an important tool for understanding and discussing object-oriented design. The classic text is Design Patterns: Elements of Reusable Object-Oriented Software by Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides (Addison-Wesley, 1995).
- One drawback for the Java programmer is that the examples in Design Patterns use C++; for a book that translates patterns to the Java language, see Patterns in Java, Volume One: A Catalog of Reusable Design Patterns Illustrated with UML, Mark Grand (Wiley, 1998).
- For an introduction to one variety of agile programming, see Extreme Programming Explained: Embrace Change, by Kent Beck (Addison-Wesley, 1999).
- Martin Fowler's Web site is refactoring central on the Web.
- For more information on unit testing with JUnit, visit the JUnit Web site.
- "Java design patterns 101" is David's introductory tutorial on patterns (developerWorks, January 2002).
- In "Getting started with the Eclipse Platform," David provides a starting point for learning more about Eclipse (developerWorks, November 2002).
- Find more articles for Eclipse users in the Open source projects zone on developerWorks. Also see the latest Eclipse technology downloads on alphaWorks. | https://www.ibm.com/developerworks/library/os-ecref/ | CC-MAIN-2020-05 | en | refinedweb |
#include <openssl/tls1.h> long SSL_CTX_set_tlsext_status_cb(SSL_CTX *ctx, int (*callback)(SSL *, void *)); long SSL_CTX_set_tlsext_status_arg(SSL_CTX *ctx, void *arg); long SSL_set_tlsext_status_type(SSL *s, int type); long SSL_get_tlsext_status_ocsp_resp(ssl, unsigned char **resp); long SSL_set_tlsext_status_ocsp_resp(ssl, unsigned char *resp, int len);
The response returned by the server can be obtained via a call to SSL_get_tlsext_status_ocsp_resp(). The value *resp will be updated to point to the OCSP response data and the return value will be the length of that data. Typically a callback would obtain an OCSP_RESPONSE object from this data via a call to the d2i_OCSP_RESPONSE() function. If the server has not provided any response data then *resp will be NULL and the return value from SSL_get_tlsext_status_ocsp_resp() will be -1.
A server application must also call the SSL_CTX_set_tlsext_status_cb() function if it wants to be able to provide clients with OCSP Certificate Status responses. Typically the server callback would obtain the server certificate that is being sent back to the client via a call to SSL_get_certificate(); obtain the OCSP response to be sent back; and then set that response data by calling SSL_set_tlsext_status_ocsp_resp(). A pointer to the response data should be provided in the resp argument, and the length of that data should be in the len argument.
The callback when used on the server side should return with either SSL_TLSEXT_ERR_OK (meaning that the OCSP response that has been set should be returned), SSL_TLSEXT_ERR_NOACK (meaning that an OCSP response should not be returned) or SSL_TLSEXT_ERR_ALERT_FATAL (meaning that a fatal error has occurred).
SSL_CTX_set_tlsext_status_cb(), SSL_CTX_set_tlsext_status_arg(), SSL_set_tlsext_status_type() and SSL_set_tlsext_status_ocsp_resp() return 0 on error or 1 on success.
SSL_get_tlsext_status_ocsp_resp() returns the length of the OCSP response data or -1 if there is no OCSP response data. | https://www.commandlinux.com/man-page/man3/SSL_CTX_set_tlsext_status_arg.3ssl.html | CC-MAIN-2020-05 | en | refinedweb |
Root Password Readable in Clear Text with Ubuntu 520
BBitmaster writes "An extremely critical bug and security threat was discovered in Ubuntu Breezy Badger 5.10 earlier today by a visitor on the Ubuntu Forums that allows anyone to read the root password simply by opening an installer log file. Apparently the installer fails to clean its log files and leaves them readable to all users. The bug has been fixed, and only affects The 5.10 Breezy Badger release. Ubuntu users, be sure to get the patch right away."
Re:I believe this is a feature (Score:2, Informative)
Re:I believe this is a feature (Score:2, Informative)
Re:But Ubuntu has no root account! (Score:5, Informative)
Colin Watson's response was very professional (Score:4, Informative)
Re:I believe this is a feature (Score:2, Informative)
Re:okay (Score:5, Informative)
Re:Just in case (Score:3, Informative)
The password in the log file was the primary account's password. This account is a member of the sudoers group, so the same password can get you root access.
Re:Just in case (Score:3, Informative)
Preview of 5.10 Not Affected (Score:2, Informative)
For Ubuntu 5.10 users: (Score:2, Informative)
Solution (Score:5,).
Re:Saw this on Digg (Score:5, Informative).
So not only did they have a similar problem, it persisted for over a year after initially being found & alledgedly fixed..
Re:What does patch help? (Score:3, Informative)
What does this patch fix? The installer?
No, the patch removes that key from the file, and chmod's it 600.
Re:Saw this on Digg (Score:5, Informative)
Actually they reflect reality and are the result of customer requests.
In managed environments, patches are almost never applied ad-hoc, as they are released. They are collected together then tested and rolled out on a schedule, usually monthly. therefore output the "routine" log to one file and the "debug" log to a different file.
Doesn't this just go back to the same problem though? No. First, debug logs don't need to be written to quickly, because debug sessions are going to be slow anyway. Therefore you can encrypt them or otherwise make them unreadable to the casual observer. In general, you want these to be sent to the maintainer as part of a bug report in the event of an install failure, so just pre-encrypt them with the maintainer's public PGP/GPG key.
A more "correct" solution would be to assign different debug levels to different levels of logging, where your maximum level logs absolutely ALL data entered by the user, but where distributed versions are issued with much more basic logging that excludes private information that isn't likely to be useful in debugging the problem anyway.
(The ideal solution is to have maintenance debugging for logging everything as a distinct patch to the basic distribution, so the basic distribution cannot - even accidentally - log everything. That way, users don't even have to put up with obscenely inflated binaries that have lots of debug stuff that will likely never be used, and maintainers don't ever have brown-paper-bag security scares.)
Root Passwords should never be stored ANYWHERE... (Score:2, Informative)
Re:UNIX mouse driver released (Score:5, Informative)
Since long before MS-DOS had them:
Look. [wikipedia.org].
Re:So what if this was fixed quickly. (Score:4, Informative)
Re:Solution (Score:3, Informative)
Not in my logs at all (Score:2, Informative)
Ubuntu 5.10 "Breezy Badger" \n \l
I upgraded from Warty - with dist-upgrade - maybe thats my deal... apt-get update && apt-get upgrade, anyway.:Solution (Score:3, Informative)
cat
Re using Fedora since FC1, and you happened to be using it on a 586 architecture, you would have found out. Because for some reason they decided that on that architecture they would compile glibc with some options making it pretty picky about the location of the stack. This caused programs to crash at random, and the bug was never fixed. They simply wouldn't accept, that there could be a bug in glibc.
I can install Fedora and be fairly certain that even if somehow my system stopped updating
Actually that is not so unlikely to happen. Because on FC4 rhn-applet will always tell you, that there are no updates available. And occationally yum will also say that even when there are updates available. And the Fedora people does not consider this to be a bug.
And while we are at it, do you know what happens to the umask on a Fedora system? If I decide to set my umask to 077 such that other users cannot read by default, then
I'm not saying Fedora is a bad distribution, after all I do use it on all my systems. You just shouldn't claim it to be so much more secure than other distributions. Yes, this bug in Ubuntu is very bad, but unfortunately they are not the first to introduce a bug that bad.
Re:Saw this on Digg (Score:2, Informative)
Security against an attack if you have physical, unsupervised access to the box is nil, in any case. Carry a pendrive or a bootable CD containing a rescue Linux distro with you and boot from it. There, you can mess around with system config files and do things like creating your very own SSH account on the machine. Due to the way PCs work, the only way to protect your machine against attacks by someone with physical access to it is to raise a BIOS password or encrypt your files, not a bad idea in any case.
Re:[easier] Solution (Score:2, Informative)
Re:So what if this was fixed quickly. (Score:2, Informative)
Let's check your facts...
"the sky is blue" -- Well, the sky is actually black and it only appears blue because light is scattered in the atmosphere. So far you're 0 for 1.
"water is wet" -- This one is true... if you only consider its liquid form. However, its solid and gaseous forms are most definitely not wet. That makes you 0 for 2.
With a record like that, can we really believe your third so-called "fact"?:So what if this was fixed quickly. (Score:2, Informative)
Let me guess: American, right? Only an American can be this bad at science.
A black sky is the way it is. Ever see that thing they call "space"? You'll see the sky is black. The aforementioned scattering of light in our atmostphere makes it look blue during the day, but the sky itself is black. Consult any primary school science class for further details.
Water is the name of a chemical compound, also known as Dihydrogen monoxide. The phase doesn't change what it is, it is still water, the same way liquid nitrogen is still nitrogen. If that doesn't satisfy you, there is solid water that is not ice. It is amorphous solid water. And gaseous water is also called water vapor. Notice how both of those specifically mention that they are water.
Thanks for trying. Get an primary school education before trying again.
Brilliant use of an irrelevant last line, by the way.
Re:Saw this on Digg (Score:2, Informative)
Firstly owning up and making changes:
.)" - Colin Watson
Second quote:
"We've never updated the ISO images for any released Ubuntu distributions. We don't intend to, either, unless some terrifying and unforeseen showstopper arises." -CJW
Terrifying showstopper?? You mean like this one?! This could affect their reputation for years. I'd destroy all CDs affected. It's one thing to screw up. It something different to knowingly mail that CD to another unsuspecting user.
Re:Solution (Score:1, Informative)
Re:So what if this was fixed quickly. (Score:3, Informative)
For the record:
I'm happy to take responsibility for the lack of testing that meant we didn't spot this earlier, but it's not quite the trivial stupid mistake that people are making it out to be.
Re:Choose strong obscure passwords (Score:1, Informative)
Re:Open Password! (Score:2, Informative)
Re:[easier] Solution (Score:1, Informative)
Or before typing sensitive info, then when finished. That way the history file isn't flushed, just the relevant entries.:Open Password! (Score:3, Informative)
Dunno - presumably it's long been in any password cracker out there? Along with "none" or "password" or any other "clever" password there is?
Re:Patch mirror (Score:4, Informative)
Re:Choose strong obscure passwords (Score:2, Informative)
Oh yeah!
typedef struct {
unsigned int len;
char *content;
} String;
Re:Choose strong obscure passwords (Score:3, Informative)
How about
#include <string> ? Radical, I know, but you have to put strings that contain their length and can contain nul somewhere!
Re:Choose strong obscure passwords (Score:2, Informative)
Re:[easier] Solution (Score:2, Informative)
If you wish to change root's pass, you need to 'sudo passwd root' or 'sudo su -;passwd'
Re:Real Solution: CHANGE YOUR PASSWORD (Score:2, Informative)
Additionally, this should only happen if you're performing an expert install; the normal installation procedure doesn't seem to have this problem.
The installer maintainer (Colin Watson) has said two things that may (or may not) be of interest:
I don't see how this is happening, because we deliberately db_set those questions to empty after retrieving the password to avoid this problem.
So I guess that didn't work on some install types. The other, which addresses your question about Breezy install CDs:.
Re:[easier] Solution (Score:2, Informative)
Where does this idea that you need to type "sudo passwd root" come from? I see it repeated in IRC channels and message boards, but it's just not true.conf/questions.dat
Re:Saw this on Digg (Score:2, Informative). None of these are possible with a blank password on the target account. | https://slashdot.org/story/06/03/13/0525254/root-password-readable-in-clear-text-with-ubuntu/informative-comments | CC-MAIN-2017-43 | en | refinedweb |
A more flexible warning module.
warn
Better warnings.
See the full documentation.
The Python standard warning module is extremely good, and I believe underutilized; Though it missed a few functionality; in particular it allows filtering only on the code that triggered/called a deprecated functions, but have no ability to filter depending on the module that emitted the warning.
This is an attempt to fix that.
Too long didn’t read:
Explicit is better than implicit:
from warn import patch patch() # use the warning module as usual
Though the warnings.filterwarning function has now gained the emodule keyword parameter to filer by the module that emitted the warning; example:
import warnings warnings.filter('default', category=DeprecationWarnings, emodule='matplotlib\.pyplot.*')
All warnings from matplotlib.pyplot and its submodule will now be show by default, regardless of whether you trigger them directly, via pandas, seaborn, your own code…
Warning emitter, warning caller.
Python warnings are a beautiful relative simple piece of code which is extremely powerful in the right hands once you learned how to use it.
It allows you to determine a posteriori whether you want a particular piece of code to trigger an exception, display a message to the user or simply do nothing.
It is difficult to show the full power of the waring with a simple piece of code, but in large code base, and once you start having several layer of dependency a parsimonious usage of warning , and in particular DeprecationWarning can make a large difference.
Caller, vs Emitter
Let’s clear up some vocabulary first, to differentiate the warning “Caller” from the warning “Emitter”
# file emitter.py def public_api(param1, deprecated_parameter=None):: if deprecated_parameter: return _deprecated_function(param1, deprecated_parameter): else: return normal_buisnell_logic(param1) def _deprecated_function(param1, deprecated_parameter): import warningsA # warning emitted here warnings.warn('using `deprecated_parameter` is deprecated ', DeprecationWarning, stacklevel=3)
# file caller.py from emitter import public_api public_api(1, True) # warning triggered here.
you can now do something like
from warn import path patch() import warnings warnings.filter('default', category=DeprecationWarnings, emodule='emitter.*') import emitter emitter.bar() # will log the warning !
Change this to “error” in your test-suite, and filter by all your dependencies !
The Python built-in module allow you to filter warning by caller (assuming the emitter have set the stacklevel options right, which is not always obvious to do). This is extremely useful when you are developing the caller; but not that much when you are developing the emitter.
It is common for a caller to actually have many underlying library the can trigger warnings, or for a developer to only care about a subset of the emitter warnings.
Many libraries are going around this limitation by sub-classing Warnings; two example are Matplolib and sympy in order to selectively enable them. Still this only give a coarse way of filtering warnings, and it required to know where the warnings are defined in order to import and filter for them.
Because of Python default choice to filter out deprecation warnings, this also forced either inherit UserWarning (choice of matplotlib), which removes the semantic meaning of DeprecationWarning offered by Python or to inject a custom filter in the warnings filter module on import (choice of Sympy), which can lead to surprising behavior.
Availability on Python 2
I don’t know if if works on Python 2; I don’t really have the time to investigate; I don’t particularly care a lot; but feel free to send a PR that ads support if necessary I would be happy to merge it.
limitations
This does not work on packages that either :
- Got and keep a reference on warnings.warn before patch() have been called; that is to say things of the form: from warnings iport warn
- Cannot work on C-extensions (aka won’t filter on numpy) ; Both of the above are technically possible with Assembly Patching which I’m not confortable with.
The Ugly
As Warnings filters have to be 5 tuples with specific types this works by shoving dummy instances in the filters list and using this as keys for a proxy to lookup real filter keys. So worse case scenario the filter you insert with this module will just be no-op. But you will incur a performance penalty if you use this, especially if your codebase triggers a lot of warnings.
Get the upstream
I’d love feedback and have a nicer API to deal with warnings at CPython level in order to provide custom filters, and custom filters functions.
Aparté
Good Deprecation Warnings.
A good warning and in particular Deprecation warning is extremely helpful and can make the difference for the adoption of an API. Take the following fiction example:
>>> import warnings >>> warnings.simplefilter('default') >>> from quezetraste import frobulate, constribule >>> frobulate('HI', 3) DeprecationWarning: The 'frobulate' function is deprecated. >>> contribule('Hi', 3) DeprecationWarning: The 'constribule(message, recipient_id)' function of the 'quezetraste' package is deprecated since version 7.3. It haz been replaced by 'Recipient(id).send(message)' which was available since 7.2. See
Turn the DeprecationWarnings into error in your test-suite!
At least make them visible; at best once you fixed a deprecation warning turn this specific one into an error to not reproduce it.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/warn/ | CC-MAIN-2017-43 | en | refinedweb |
I first learned about Hugo Liu's research around three years ago when I was looking into NLP, commonsense data, artifical intelligence code and theory. His work, and others at MIT, is debatably the latest and greatest in today's research and very useful, at least informative, for any program designer. Most of the projects available from MIT are not written in Microsoft Visual Studio .Net and I am attempting to make use of ConceptNet for educational, research purposes, and just to have fun with the technology involved.
ConceptNet¹ is a commonsense knowledgebase, composed mainly from the Open Mind Project, written and put together by Hugo Liu and Push Singh (Media Laboratory Massachusetts Institute of Technology). ConceptNet 2.1 (current version at the time of this composition) also includes MontyLingua, a natural-language-processing package. ConceptNet is written in Python but it's commonsense knowledgebase is stored in text files. To read more specific details about the complete overview of ConceptNet, read Liu and Singh's outstanding paper (pdf).
Unlike other projects like WordNet or Cyc, ConceptNet is based more on Context. ConceptNet makes it possible, within the limit's of it's knowledgebase, to allow a computer to understand new concepts or even unknown concepts by using conceptual correlations called Knowledge-Lines (K-Lines: a term introduced by Minsky, cf. The Society of Mind (1987)). K-Lines may be thought of a list of previous knowledge about a subject or task. ConceptNet actually puts these K-Lines together using it's twenty relationship types that fall into eight categories (including K-Lines)to form a network of data to simulate conceptual reasoning. What really makes all this possible is ConceptNet's Relational Ontology (the eight categories and twenty relationship types). ConceptNet is structured around MIT's Open Mind Common Sense Project knowledge base. ConceptNet uses it's data in two processes: The Normalization Process and the Relaxation Process. The Normalization Process involves all the predicate to get filtered and undergo lexical distillation (Verbs and Nouns are reduced to their basic baseforms). Also ConceptNet removes determiners("a", "the", etc.) and modals("may", "could", "will", etc) in this stage. It also uses Parts Of Speech Tagging to validate well structured word orders. (The Normalization Process is not demonstrated in this demo. Please feel free to use your own POS Taggers with this Class Library) The Relaxation Process raises or "Lifts" heavily weighted common sense predicate nodes (one line from the predicate file(s)) and duplicate nodes are merged. This is reflected in each predicate's "f" and "i" metadata tags. Where f equals the number of utterances and i equals the number of of times inferred.
ConceptNet's power of linking subjects together is attributed to twenty relationship types defined by it's Relational Ontology. Here is 2.1's twenty relationship types and eight categories:
• K-Lines: ConceptuallyRelatedTo, ThematicKLine, SuperThematicKLine
• Things: IsA, PartOf, PropertyOf, De.nedAs, MadeOf
• Spatial: LocationOf
• Events: SubeventOf, PrerequisiteEventOf, First-SubeventOf, LastSubeventOf
• Causal: EffectOf, DesirousEffectOf
• Affective: MotivationOf, DesireOf
• Functional: CapableOfReceivingAction, UsedFor
• Agents: CapableOf
Example lines from ConceptNet's 2.1 predicate files:
(UsedFor "ball" "throw" "f=4;i=0;")
(LocationOf "popcorn" "at movie" "f=7;i=0;")
(CapableOfReceivingAction "film" "place on reel" "f=2;i=0;")
(IsA "guitar" "musical instrument with string" "f=2;i=0;")
(SubeventOf "talk" "debate" "f=2;i=0;")
(CapableOf "person" "write book" "f=11;i=1;")
(MotivationOf "audition for part" "act in play" "f=2;i=0;")
(PropertyOf "bacteria" "small" "f=2;i=0;")
Again, please remember that ConceptNet 2.1 is written in the Python programming language and not C# but it's commonsense knowledgebase data is in text file format totalling around 96mb when uncompressed. You must download the ConceptNet text files by agreeing to it's user agreement (this of course goes for all of the projects listed below for download) and then downloading the entire ConceptNet Python Project.
This is a very simple No-Fills Class Library written in MS VS.Net. I have quickly thrown it together mostly because I just downloaded ConceptNet for the first time yesterday and noticed a shortage of VS.Net friendly code. For some reason, I don't remember there being a public download of ConceptNet before, which I may be mistaken, however I have known about this project for some time. It's papers have been available via MIT. There are two projects in the solution ConceptNet Demo App and ConceptNetUtils. ConceptNetUtils is the ConceptNet Class Library and consists of three Classes: FoundList, Misc, Search.
ConceptNetUtils.FoundList
Holds search result data in an index format.
Access: Public
Base Classes: Object
Members Description
protected string[] LineFound //Holds the strings.
static public int size = 999;//This can hold up to 999
//data strings.
public string this[]//Holds string data.
Count() //Returns int count of non "" strings
//(populated indexes).
Reset() //Resets data to null.
public int get_f(int index) //Returns int of the
//f metadata in predicate string line(node)
public int get_i(int index) //Returns int of the
//i metadata in predicate string line(node)
ConceptNetUtils.Misc
Created for Misc Methods
Access: Public
Base Classes: Object
Members Description
public string RemoveCategoryString(string R_TYPE)
// Returns string without the "K-Lines: " or "Spatial: ",
// if All then remove string after.
public string XMLGetNode(string path_xmlfilename,
string elementname)
// Returns string of data in an element node.
public string XMLGetAttribute(string path_xmlfilename,
string elementname, string attributename)
// Returns string of Attribute data in an element node.
ConceptNetUtils.Search
Takes care of Searching ConceptNet text files.
Access: Public
Base Classes: Object
Members Description
public bool CreateTextFile(string fullfilename)
// Returns true if Creates a text file using the
// current FoundList data.
public string GetFoundListLine(int index)
// Returns string of data in iterator (LineFound[]).
public int GetTotalLineCount()
// Returns int count of total lines found.
public void SearchFor(string fullpathfilename,
string SubjectWord,
string R_Type,
int MAX_RESULTS,
bool CreateOutputFile,
string fullpathTextFilename)
//Searches incoming ConceptNet text file
//and fills the FoundList iterator (LineFound[]).
//fullpathfilename = Path of ContextNet .txt
//fullpathTextFilename = Path of .txt to be created.
public static string Predicatefile1;
public static string Predicatefile2;
public static string Predicatefile3;
public static string Predicatefile4;
public static string Predicatefile5;
public void SearchFor(int index,
string SubjectWord,
string R_Type,
int MAX_RESULTS,
bool CreateOutputFile,
string fullpathTextFilename)
// Searches incoming ConceptNet int index 1 to x
// and fills the FoundList iterator.
public void XMLSearchForChecked(string path_xmlfilename,
string SubjectWord,
string R_Type,
int MAX_RESULTS,
bool CreateOutputFile,
string fullpathTextFilename)
// Searches for the Attribute: (checked="yes") found in
// an XML file and then searches predicate files.
public static string GetPredicatePathtoFilename(int index)
// Returns full string path to the requested index number.
// Index starts at 1.
public void XMLLoadFilePaths(string settingsxmlfile)
// Sets Predicatefile1 thru Predicatefile5 variables
// after loading them from an XML file.
public static int getnode_f(string node)
public static int getnode_i(string node)
public void Sort_f(ArrayList inList, out ArrayList rankedList)
public void Sort_i(ArrayList inList, out ArrayList rankedList)
public class Compare_f : IComparer
public class Compare_i : IComparer
1.) Make sure you have downloaded and installed ConceptNet 2.1. (I installed it into path ...\My Documents\Python Projects\conceptnet2.1\)2.) Download and unzip this article's .Net Solution and project files.3.) Navigate to the location "...\ConceptNet Demo App\bin\Release" and run the ConceptNet Demo App.exe. It will automaticly open the "Set Location of Knowledgebase Files" dialogbox and, on it's first run, you must click the browse button to a predicate file (ConceptNet or other) then click ok. Following runs will remember the location of checked predicate files you wish to search.4.) You are now ready to a)Enter a word, b)Choose a relationship (ConceptNet looks at IsA, then PropertyOf), c)Click the Search button to display found nodes. You may then sort them by clicking the "Sort by f" or "Sort by i".
ConceptNet 2.1 can be a tool to create personalized commonsense knowledgebase networks. Hopefully this MS VS.Net Class Library project can be informational, useful, and fun.
The 0.x version posted on this article will no longer be under development. I am working on an updated version using Microsoft Visual C# Express 2005 with .Net 2.0 framework and will serve as the latest version of the ConceptNet Class Library in C# that I am working on. It will probably make use of the IronPython library. If you are interested, here is a small peek into getting ConceptNet Mini-Browser (written in Natural Python code) to execute using IronPython:My wdevs blog post with some code.I am just working on it whenever I have free time.
¹ Liu, H. & Singh, P. (2004) ConceptNet: A Practical Commonsense Reasoning Toolkit. BT Technology Journal, To Appear. Volume 22, forthcoming issue. Kluwer Academic Publishers. ConceptNet: A Practical Commonsense Reasoning Toolkit, Hugo Liu and Push Singh Media Laboratory Massachusetts Institute of TechnologyInvestigating ConceptNet, Dustin Smith; Advisor: Stan Thomas, Ph.D. December 2004 Open Mind Common Sense Project Hugo Liu websiteWordNetCyc
This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
A list of licenses authors might use can be found here
zin_zin wrote:Could you please tell me how to run this demo project? I downloaded all what you mentioned and executed ConceptNet Demo App.exe. But didn't work. There was an application error. I don't know what's wrong
zin_zin wrote:Still didn't work. I can't even open ConceptNet Demo App. There was an error saying "Application cannot be initialized"
<br />
using IronPython.Runtime;<br />
using ConceptNetUtils;<br />
<br />
private void Form1_Load(object sender, EventArgs e)<br />
{<br />
//Display the form<br />
this.Show();<br />
this.Update();<br />
<br />
//Must set & override the paths to ConceptNet install.<br />
//They are found in CNUDB and CNUMontylingua.py<br />
ConceptNetUtils.CNDB.ConceptNet21path = ConceptNetUtils.Paths.MiscDirs.ConceptNet;<br />
<br />
//Load predicate files and create semantic network<br />
ConceptNetUtils.CNDB.Initialize();<br />
}<br />
<br />
private void Method1()<br />
{<br />
IronPython.Runtime.List myList = new IronPython.Runtime.List();<br />
myList = ConceptNetUtils.CNDB.get_context("apple");<br />
}<br />
kzachos wrote:just wondering whether you've tried to use ConceptNetXMLRPCServer.py in order to access ConceptNet from .NET platform.
kzachos wrote: I am using XML-RPC.NET as the client. I have no problems accessing some of the "easier" methods like chunk(text), but i dont seem to be able to access methods like get_analogous_concepts(textnode, simple_results_p=0).
kzachos wrote: I am not sure what the parameter and result type should be in c#.
kzachos wrote:Have you tried to call methods like get_analogous_concepts using IronPython?
shrik`st wrote:what about making a web service out of it?
jconwell wrote:Maybe i'm just slow, but what does this do?
jconwell wrote:I'm actually looking for a good NLP engine, but nothing in this article screams NLP to me.
computerguru92382 wrote:Neat article. Found it very interest
General News Suggestion Question Bug Answer Joke Praise Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | https://www.codeproject.com/Articles/12582/Having-fun-with-MIT-s-ConceptNet-2-1-in-C | CC-MAIN-2017-43 | en | refinedweb |
I would like to sketch my rational for the assembly-side coding which Ihave, which is much more code than I would like. As always, what goes in isyour call.Concern: The bios which I have seen that blows away the input ecx willcontinue to do so if the spec extends the report.Solution: Keep track of the largest ecx out seen, and stop making calls ifwe are in danger of overflowing the buffer.It has been suggested that we can rely on WinTel to ensure the backwardscompatibility of the new biosen. I don't support that position.Concern: Some bios might hit random code which returns cf clear, ecx heldconstant, and edx copied to eax--the first few calls, after which theyscribble on the HD (or worse).Solution: Aggressively test the registers (and data) for compliance. Testthat buffer writes actually occur. Abort if anything is out of bounds.I believe I fully understand what you are driving towards in the 16-bitcode. As a professional validator, I'm not going to send you code whichmakes avoidable dangerous bios call without warning you first. That's a"call" I'm not going to make. ;-)You specifically mentioned es:di being changed as a don't care. The problemis that the ACPI spec (Table 14.2) describes es:di out as, "Return AddressRange Descriptor pointer. Same value as on input." the spec _requires_ me to read es:di out to find where the data is. (Ibelieve that if it is NOT the same as es:di in, my only defensible course isto abort & dump all the data.)Similarly if ecx out is <20. If some clod ignores ecx in and gives meextended data, I can ignore the data. But if he gives me less than 20 bytesback, I don't know how to extend it. Since a bios returning less than 20bytes is clearly non-compliant in an essential fashion, I again abort & dumpall data.I believe that so far, I have described code which is "simple, stupid, andstraightforward". What I have NOT described is code which checks that thedata is actually written, or that the written data makes any sense at all.If a bios "blowing up" is not a concern, then the overlap/wrap checking mustof course be withdrawn. (This part of the code is clearly neither stupidnor straightforward.) Note that wrap checking will catch random data 50%per call, and that overlapping checking catches 2/3 of the surviving randomdata each call after the first.While my current code tests to see if the buffer is written essentially forfree, it is simple enough to add a direct test.Finally, I worried that some bios might return more regions than our buffercan hold. It seemed good to me to allow the user to set configurationoptions that would drop reports of regions we can do without--regions ofzero length, or of reserved or unknown type. (I said can, not want to!)The other alternative is to terminate calls, but bioses often seem to reportavailable ram last rather than first.Encapsulating both the e820 & the e801 code in their own #ifdef CONFIG_seemed to me to be a simple way to ensure that someone with a "blows up ifyou make this call" bios did not need to hack the code.There is also code to warn some hacker who would try to make the call withecx_in < 20.NathanThis has been setup.S on my system for some time. You will note that theaddress, size pair returned is transformed into base, top. Thisdramatically simplifies all other code (especially the overlap checking),and is the same code needed to test for wrapping.--- linux-2.3.47/arch/i386/boot/setup.S Sun Feb 20 22:37:09 2000+++ linux-2.3.47w10/arch/i386/boot/setup.S Tue Feb 22 16:02:37 2000@@ -259,8 +259,15 @@ xorl %eax, %eax movl %eax, (0x1e0)-#ifndef STANDARD_MEMORY_BIOS_CALL movb %al, (E820NR)++#ifdef CONFIG_MEM_e820++# There are some biosen which behave unpredictably if int 15-e820 iscalled.+# We dare not tempt fate with them.++# See Documentation/e820.txt for explanation of this section.+ # Try three different memory detection schemes. First, try # e820h, which lets us assemble a memory map, then try e801h, # which returns a 32-bit memory size, and finally 88h, which@@ -270,48 +277,191 @@ # the memory map from hell. e820h returns memory classified into # a whole bunch of different types, and allows memory holes and # everything. We scan through this memory map and build a list-# of the first 32 memory areas, which we return at [E820MAP].-#+# of the first memory areas, which we return at [E820MAP].+# -meme820:- movl $0x534d4150, %edx # ascii `SMAP'- xorl %ebx, %ebx # continuation counter- movw $E820MAP, %di # point into the whitelist+meme820: + xorl %ebx, %ebx # continuation counter+ movw %ax, (E820_ERR_CODE) # zero out error flag++ movw $E820MAP + 2, %di # point into the whitelist # so we can have the bios # directly write into it. ++ +# E820_MAX_LEN is sizeof(e820_mem_region) Someone must have gotten tiredof+# hearing about the errors w/o knowing how to fix them...+#if ACPI_MEM_FIX > E820_MAX_LEN+ movw $ACPI_MEM_FIX, (E820_MAXL_PTR)+#else+ movw $E820_MAX_LEN, (E820_MAXL_PTR)+#endif++ jmpe820:- movl $0x0000e820, %eax # e820, upper word zeroed- movl $20, %ecx # size of the e820rec- pushw %ds # data record.- popw %es+ movw $E820_HEADROOM, %cx+ subw %di, %cx # total space remaining -per-+ # report bookkeeping+ movw $ACPI_MEM_MANY, %dx+ cmpw (E820_MAXL_PTR),%cx+ jc e820out0 # out of room+++# E820_MIN_LEN = 20 +#if ACPI_MEM_FIX < 20 +BOZO: This is a violation of the ACPI SMAP spec.+#else+ movl $ACPI_MEM_FIX, %ecx+#endif++#ifdef CONFIG_ACPI_SMAP_BADR+ movl $0xfecd98ab, %edx # choker if no data written+ movl %edx, 4(%di)+ movl %edx, 12(%di)+#endif+ movw %ds, %ax # es:di is data pointer+ pushw %di # push record pointer+ movw %ax, %es+ movl $0x0000e820, %eax # e820, upper word zeroed+ movl $0x534d4150, %edx # ascii `SMAP'+ push %ds # and data segment+ int $0x15 # make the call- jc bail820 # fall to e801 if it fails+ pop %ds+ movw $ACPI_MEM_ABORT,%dx+ jc e820out2 # a error if not on 1st pass+ ++ movw $ACPI_MEM_GSPEC,%dx+ cmpl $0x534d4150, %eax # check the return is `SMAP'+ jne e820out2+ + movw %es, %ax # spec says %es:%diunmolested+ movw %ds, %dx # & that data is placed at + cmpw %dx, %ax # %es:%di out, so if %es:%di+ movw $ACPI_MEM_GSPEC,%dx # change, we cannot proceed+ popw %ax+ jne e820out0++ cmpw %ax, %di+ jne e820out0+++# NOTE: Spec SAYS %ecx out <= %ecx in, but we have seen sloppy bioseswhich+# might not stick with this. We keep up with the longest to date to avoid+# overruns & test for the violation in setup.c++ movw $E820_HEADROOM, %ax # check if entire buffer+ subw %di, %ax # overflowed+ movzwl %ax, %eax+ cmpl %ecx, %eax+ movw $ACPI_MEM_BUFF, %dx+ jc e820out0++ cmpw $E820_MIN_LEN, %cx # minimum length for spec+ movw $ACPI_MEM_SHORTD,%dx+ jc e820out0++ cmpw %cx, (E820_MAXL_PTR) # is this a new max length+ jnc no_new_max # for return data?+ movw %cx, (E820_MAXL_PTR)+no_new_max:+++ movw %cx, E820_REP_LEN(%di) # # of bytes read++#ifdef CONFIG_ACPI_SMAP_ADZR+ movl E820_SIZE(%di), %eax # ignore zero-length+ orl E820_SIZE+4(%di), %eax # regions+ je againe820+#endif - cmpl $0x534d4150, %eax # check the return is `SMAP'- jne bail820 # fall to e801 if it fails+# Two regions overlap iff the top of each is above the bottom of the other.+# we use this fact in the following code: -# cmpl $1, 16(%di) # is this usable memory?-# jne again820+ incb (E820NR) # Even if we reject thisrecord,+ # it may prove usefull fordebug+ # to see it.++# First transform base, length pair into base, top pairs.+# Note: E820_BASE = E820_TOP+ movl E820_SIZE(%di), %edx+ movl E820_BASE(%di), %eax+ addl %eax, %edx+ movl E820_SIZE+4(%di), %ecx+ movl %edx, E820_TOP(%di)+ adcl E820_BASE+4(%di), %ecx+ movl %ecx, E820_TOP+4(%di)++#ifdef CONFIG_ACPI_SMAP_BADR+ movw $ACPI_MEM_WRAP, %dx+ jc e820out0++ cmpw $E820MAP + 2, %di # Don't test first+ je end_badr # region.++ cmpl E820_TOP-E820_REC_SIZE(%di), %eax+ movl E820_BASE+4(%di), %eax+ sbbl E820_TOP+4-E820_REC_SIZE(%di), %eax+ movl E820_TOP(%di), %edx+ setcb %al+ cmpl E820_BASE-E820_REC_SIZE(%di), %edx+ sbbl E820_BASE+4-E820_REC_SIZE(%di), %ecx+ setab %cl+ + movw $ACPI_MEM_OVLY, %dx+ andb %cl, %al+ jne e820out0+#endif+end_badr:+ +#if defined( CONFIG_ACPI_MEM_ADDR) || defined( CONFIG_ACPI_MEM_ADUR)+ decb (E820NR) # Just because we drop these+#endif # records doesn't mean we cannot validate them! It'ssimpler+ # to back the counter off now.++#ifdef CONFIG_ACPI_MEM_ADRR+ cmpl $E820_RESERVED, E820_TYPE(%di)+ je againe820+#endif - # If this is usable memory, we save it by simply advancing %di by- # sizeof(e820rec).- #-good820:- movb (E820NR), %al # up to 32 entries- cmpb $E820MAX, %al- jnl bail820+#ifdef CONFIG_ACPI_MEM_ADUR+ cmpl $E820_UNKNOWN, E820_TYPE(%di) # known are 1-3. check+ je againe820 # 0 and >3+ cmpl $E820_NVS, E820_TYPE(%di)+ ja againe820+#endif +#if defined( CONFIG_ACPI_MEM_ADDR) || defined( CONFIG_ACPI_MEM_ADUR) incb (E820NR)- movw %di, %ax- addw $20, %ax- movw %ax, %di-again820:- cmpl $0, %ebx # check to see if- jne jmpe820 # %ebx is set to EOF-bail820:+#endif++ addw $E820_REC_SIZE, %di # point %di at next record++againe820:+ cmpl $0, %ebx # check to see if %ebx is+ jne jmpe820 # set to EOF++ movw $0, %dx # no errors!+ jmp e820out0 + +e820out2:+ addw $2, %sp # correct stack+e820out0:++ mov %dx, (E820_ERR_CODE) # report error type+ # go to e801 routine+++#endif # CONFIG_MEM_e820+++#ifdef CONFIG_MEM_e801+# There are some bioses which don't even like e801. We avoid these aswell.+ # method E801H: # memory size is in 1k chunksizes, to avoid confusing loadlin. # we store the 0xe801 memory size in a completely different place,@@ -331,12 +481,12 @@ andl $0xffff, %ecx # clear sign extend addl %ecx, (0x1e0) # and add lower memory into # total size.+#endif # CONFIG_MEM_e801 # Ye Olde Traditional Methode. Returns the memory size (up to 16mb or # 64mb, depending on the bios) in ax. mem88: -#endif movb $0x88, %ah int $0x15 movw %ax, (2)> -----Original Message-----> From: Linus Torvalds [mailto:torvalds@transmeta.com]> Sent: Saturday, February 26, 2000 10:54 PM> To: david parsons> Cc: linux-kernel@vger.rutgers.edu; Zook, Nathan> Subject: Re: [PATCH] fancy new memory detection, for > pre-patch-2.3.48-2> > > > > On Sat, 26 Feb 2000, david parsons wrote:> > > > > - do the old-style calls regardless> > > > The e801 call is broken on some new bioses -- I've got > some Pentium II> > boxes where e801 cheerfully returns 550mb on a machine > that only has> > 128mb of core. > > That's fine. I'm not advocating _using_ the value. I'm really > advocating:> > - the 16-bit assembly language does all the calls, and > gathers all the> information.> - the 16-bit assembly language does NOT try to maek sense of the> information, In particular, it doesn't try to figure out > whether the> information is broken or not, or _which_ of the memory > information it> should use.> - in short, the 16-bit assembly code is STUPID.> > - ..and all the real WORK is done in C. In an __init section > that gets> thrown away. Not in unreadable assembly code. Especially not if the> rules are arbitrary and pretty much made up to match BIOS > bugs in the> first place.> > See my argument?> > Linus> -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.rutgers.eduPlease read the FAQ at | http://lkml.org/lkml/2000/2/28/111 | CC-MAIN-2017-43 | en | refinedweb |
I know this is basic stuff for many here, but as I couldn't find any similar article on this site I thought Id share this useful code snippet for those not so au-fait with the Xml classes.
Basically, I wanted a quick and easy way to get from a non-formatted string of XML, to something that could be nicely displayed in a textbox (ie with indenting and line breaks) without too much mucking about.
Simple. Just call the method and display the result, eg. in a Windows forms textbox. Displaying in Webform elements may need the newline characters changed which can be done easily using String.Replace.
String.Replace
using System.Text;
using System.Xml;
. . .
/// <span class="code-SummaryComment"><summary>
</span>
/// Returns formatted xml string (indent and newlines) from unformatted XML
/// string for display in eg textboxes.
/// <span class="code-SummaryComment"></summary>
</span>
/// <span class="code-SummaryComment"><param name="sUnformattedXml">Unformatted xml string.</param>
</span>
/// <span class="code-SummaryComment"><returns>Formatted xml string and any exceptions that occur.</returns>
</span>
private string FormatXml(string sUnformattedXml)
{
//load unformatted xml into a dom
XmlDocument xd = new XmlDocument();
xd.LoadXml(sUnformattedXml);
//will hold formatted xml
StringBuilder sb = new StringBuilder();
//pumps the formatted xml into the StringBuilder above
StringWriter sw = new StringWriter(sb);
//does the formatting
XmlTextWriter xtw = null;
try
{
//point the xtw at the StringWriter
xtw = new XmlTextWriter(sw);
//we want the output formatted
xtw.Formatting = Formatting.Indented;
//get the dom to dump its contents into the xtw
xd.WriteTo(xtw);
}
finally
{
//clean up even if error
if (xtw != null)
xtw.Close();
}
//return the formatted xml
return sb.ToString();
}. | http://www.codeproject.com/Articles/17150/Formatting-XML-A-Code-snippet?fid=375767&df=90&mpp=10&noise=1&prof=False&sort=Position&view=None&spc=None&fr=11 | CC-MAIN-2015-14 | en | refinedweb |
I tried including the boost thread library to get working with multi-threading within my DLL
but some off stuff happened.
I added the necessary directories into the project settings and put in
#include <boost\thread.hpp>but it told me the file didn't exist. (which it did)
I did some poking around and realized the problem, the directories needed for the header file were needed in the executable settings too.
Now the confusing part. After adding the directories to the project settings I started getting errors saying how Windows.h suddenly stopped existing!!
I "fixxed" this by having the settings inherit from default.
I put that in quotes because now I have a really odd problem, everything builds okay in debug (or so it seams) but when the program runs it doesn't even open a window. it does nothing at all.
just out of curiosity I decided to build in release. Which keeps giving me
1>------ Build started: Project: IndigoFramework, Configuration: Release Win32 ------ 1> Indigo.cpp 1>c:\users\bombshell\documents\visual studio 2010\projects\indigoengine\indigoengine\Indigo.h(8): fatal error C1083: Cannot open include file: 'd3dx11.h': No such file or directory 2>------ Build started: Project: IndigoEngine, Configuration: Release Win32 ------ 2> MProg.cpp 2>c:\users\bombshell\documents\visual studio 2010\projects\indigoengine\indigoengine\Indigo.h(8): fatal error C1083: Cannot open include file: 'd3dx11.h': No such file or directory========== Build: 0 succeeded, 2 failed, 0 up-to-date, 0 skipped ==========
but on further investigation, though I don't know why building release gives me those errors
removing the
#include <boost\thread.hpp>from the DLL header file everything goes back to normal... (though building in release still gives the above error)
I'm just completely confused as to what is going on.
Any help is appreciated,
Thanks in advanced,
Bombshell | http://www.gamedev.net/topic/606591-vs-2010-c-strange-behavior-including-boost/?forceDownload=1&_k=880ea6a14ea49e853634fbdc5015a024 | CC-MAIN-2015-14 | en | refinedweb |
Learn to Code iOS Apps 1: Welcome to Programming
During your years of iPhone usage have you ever thought “Gee, I wish I could write a mobile app”, or even “Sheesh, I could totally write a better app than that!”?
You’re in luck – developing an iOS app is not hard. In fact, there are numerous tools that make developing your own iOS app easy and fun. Armed with a little knowledge and these tools, you too can learn to code iOS apps!
This tutorial series will teach you how to make an iOS app from scratch. No knowledge of programming is required to follow this tutorial series — the entire process is broken down into a sequence of steps that will take you from programming zero to App Store hero.
This series has four parts:
- In Part 1 (You are Here!), you will learn the basics of Objective-C programming and you will start to create your first simple game. You are here!
- In Part 2,.
The only prerequisite to this series is a Mac running OS X Lion (10.7) or later – and having a willingness to learn! :]
Note: If you’re already familiar with the basics of Objective-C and Foundation, feel free to skip ahead to Part 3 and get started with iOS.
Getting Started
The first thing you need to do is install a free program called Xcode..
“Wait a minute,” you may think, “Why am I creating a Mac OSX command line app, I wanted to make an iPhone app!”
Well, both Native Mac and iOS apps are both written in the same programming language — Objective-C — and use the same set of tools to create and build applications. So starting with a command line app is the simplest way to start learning the basics. Once you’ve mastered doing some basic things there, making an iPhone app (like you’ll do later in this series) will be that much easier!
So let’s get started. Open up Xcode, and you’ll see a window that looks like this:
Click the button that says Create a new Xcode project, located directly below the Welcome to Xcode title, as shown in the screenshot below:
If you accidentally close the “Welcome to Xcode” window, you can create a new project by going to the File menu and selecting New > Project….
In the column on the left hand side, find the OS X section, click on Application and select Command Line Tool as shown below:
Click Next. On the following screen, fill in the fields as indicated:
- Product Name: My First Project
- Organization Name: This field can be left blank. Or you can enter your company name.
- Company Identifier: Enter com.yourname, such as com.johnsmith
- Type: Foundation
- Use Automatic Reference Counting: Check this box
Your screen should resemble the one below:
Click Next. Choose a location to store the project files (the Desktop is as good a place as any), and click Create. Xcode will set up your new project and open it up in the editor for you.
Running Your First App
Xcode comes with project templates which include some basic starter code; that means that even before you’ve written a line of code, you can run your project and see what it looks like. Granted, your project won’t do much right now, but this is a good opportunity to become familiar with running your project and viewing the output.
To build and run your project, find the Run button on the upper left corner of the Xcode window, as shown below, and click it:
Look at the bottom of the screen in the All Output pane; you should see Hello, World! displayed there, as shown below:
How about that — you’ve created and run your first OS X program! Before you go adding more functionality to your program, take a few minutes and got through the following sections to learn about the various parts of Xcode and how your program is structured.
Note: If you want to learn more about Xcode and how to use it, you can always refer to the Apple Xcode User Guide.
The left pane of Xcode displays a list of files that are part of the project. The files you see were automatically created by the project template you used. Find main.m inside the My First Project folder and click on it to open it up in the editor, as shown below:
The editor window should look very similar to the following screenshot:
Find the following line located around the middle of the file:
Aha — this looks like the line that printed out the text that you saw in the “All Output” pane. To be certain of that, change the text to something else. Modify the line as shown below:
Click the Run button; you should see your new text in the “All Output” pane as shown below:
You have now changed your program to output your own custom message. But there’s obviously more to the app than just a single line of output. What makes the app tick?
The Structure of Your Source Code
main.m is the source code of your application. Source code is like a list of instructions to tell the computer what you want it to do.
However, a computer cannot run source code directly. Computers only understand a language called machine code, so there needs to be an intermediate step to transform your high-level source code into instructions that the CPU can carry out. Xcode does this when it builds and runs your app by compiling your source code. This step processes the source code and generates the corresponding machine code.
If this sounds complicated, don’t worry — you don’t need to know anything about the machine language part other than to know it’s there. Both you and the compiler understand Objective-C code, so that’s the common language you’ll use to communicate.
At the top of main.m, you’ll see several lines beginning with two slashes (
//), as shown in the screenshot below:
These lines are comments and will be ignored by the compiler. Comments are used to document the code of your app and leave any tidbits of information that other programmers — or your future self — might find useful. Look at the middle of the file and you will see a perfect example of this:
The comment
// insert code here... is part of the project template from Xcode. It doesn’t change how the program runs, but it was put there by some helpful engineer at Apple to help you understand the code and get started.
Import Statements
Directly below the comments at the top of main.m is the following line:
That line of code is known as an import statement. In Xcode, not everything has to be contained in one single file; instead, you can use code contained in separate files. The import statement tells the compiler that “when you compile this app, also use the code from this particular file”.
As you can imagine, developing for OS X and iOS requires a lot of diverse functionality, ranging from dealing with text, to making requests over a network, to finding your location on a map. Rather than include a veritable “kitchen sink” of functionality into every app you create, import statements allow you to pick which features you require for your app to function. This helps to decrease the size of your code, the processing overhead required, and compile time.
Apple bundles OS features into frameworks. The import statement shown above instructs the compiler to use the Foundation framework, which provides the minimum foundation (as the name suggests) for any app.
Here’s a bit of trivia for you: how many lines of code do you think Foundation/Foundation.h adds to your main.m file? 10? 1000? 100000? A million?
The Main Function
Look at the line following the
import statement:
This line declares a function called
main. All of the code in your app that provides some type of processing or logic is encapsulated into functions; the
main function is what kicks off the whole app.
Think of a function as a unit of code that accepts input and produces output. For example, a function could take an account number, look it up in a database, and return the account holder’s name.
The
int part of
int main means the return value of
main returns an integer such as 10 or -2. The
(int argc, const char * argv[]) bits in parentheses are the arguments, or inputs, to the function. You’ll revisit the arguments of a function a bit later on.
Immediately below
int main is an open curly brace (
{) which indicates the start of the function. A few lines down you’ll see the corresponding closing curly brace (
}). Everything contained between the two braces is part of the
main function.
Since Objective-C is a procedural language, your program will start at the top of
main and execute each line of the function in order. The first line of
main reads as follows:
Just like in
main, curly braces are used to surround a group of related lines of code. In this case, everything between the braces are part of a common autorelease pool.
Autorelease pools are used to manage memory. Every object you use in an app will consume some amount of memory — everything from buttons, to text fields, to advanced in-memory storage of user data eats away at the available memory. Manual memory management is a tricky task, and you’ll find memory leaks in lots of code — even code written by expert programmers!
Instead of tracking all the objects that consume memory and freeing them when you’re done with them,
autoreleasepool automates this task for you. Remember when you created your project in Xcode and checked “Use Automatic Reference Counting”? Automatic Reference Counting, or ARC, is another tool that helps manage memory in your app so you almost never need to worry about memory usage yourself.
You’ll recognize the next line; it’s the one that you edited to create a custom message:
The
NSLog function prints out text to the console, which can be pretty handy when you’re debugging your code. Since you can’t always tell exactly what your app is doing behind the scenes,
NSLog statements help you log the actions of your app by printing out things like strings or the values of variables. By analyzing the
NSLog output, you’ll gain some insight as to what your app is doing.
If you’re worried about your end user seeing
NSLog statements on their iPhones, don’t fret — the end user won’t see the
NSLog output anywhere in the app itself.
In programming, text inside double quotation marks is known as a string. A string is how you store words or phrases. In Objective-C, strings are prefixed with an
@ sign.
Look at the end of the
NSLog line, you’ll see that the line is terminated by a semicolon. What does that do?
The Objective-C compiler doesn’t use line breaks to decide when one “line” of code ends and when one begins; instead, semicolons indicate the end of a single line of code. The
NSLog statement above could be written like this:
…and it would function in the same manner.
To see what happens when you don’t terminate a line of code with a semicolon, delete the semicolon at the end of the
NSLog statement, then press the Run button. You’ll see the following error indicated in Xcode:
The
NSLog line is highlighted in red, and a message states
"Expected ';' after expression". Syntax errors like this stop the compiler in its tracks, and the compiler won’t be able to continue until you fix the issue. In this case, the correction is simple: just add the semicolon at the end of the line, and your program will compile and run properly.
There’s just one more line of code to look at in
main:
This line of code is known as a return statement. The function terminates when this line is encountered; therefore any lines of code following the return statement will not execute. Since this is the main function, this
return statement will terminate the entire program.
What does the “0” mean after the
return statement? Recall that this function was declared as
int main, which means the return value has to be an integer. You’re making good on that promise by returning the integer value “0”. If there are no actual values to be returned to the caller of this function, zero is typically used as the standard return value for a function to indicate that it completed without error.
Working With Variables
Computers are terribly good at remembering pieces of information such as names, dates, and photos. Variables provide ways for you to store and manipulate these types of objects in your program. There are four basic types of variables:
- int: stores a whole number, such as 1, 487, or -54.
- float: stores a floating-point number with decimal precision, such as 0.5, 3.14, or 1.0
- char: stores a single character, such as “e”, “A”, or “$”.
- BOOL stores a YES or NO value, also known as a “boolean” value. Other programming languages sometimes use TRUE and FALSE.
To create a variable — also known as declaring a variable — you simply specify its type, give it a name and optionally provide a default value.
Add the following line of code to main.m between the
@autoreleasepool line and the
NSLog line:
Don’t forget that all-important semicolon!
The line above creates a new integer variable called num and assigns it a value of 400.
Now that you have a variable to use in your app, test it out with an
NSLog statement. Printing out the values of variables is a little more complicated than printing out strings; you can’t just put the word “num” in the message passed to
NSLog and see it output to the console.
Instead, you need to use a construct called format specifiers which use placeholders in the text string to show
NSLog where to put the value of the variable.
Find the following line in main.m:
…and replace it with the following line of code:
Click the Run button in the upper left corner. You should get a message in the console that says:
That looks great — but how did Xcode know how to print out the value of
num?
The
%i in the code above is a format specifier that says to Xcode “replace this placeholder with the first variable argument following this quoted string, and format it as an integer”.
What if you had two values to print out? In that case, the code would look similar to the following:
Okay, so
%i is used for integer formatting. But what about other variable types? The most common format specifiers are listed below:
- %i: int
- %f: float
- %c: char
There isn’t a specific format specifier for boolean values. If you need to display a boolean value, use
%i; it will print out “1” for YES and “0” for NO.
Along with declaring variables and setting and printing values, you can also perform mathematical operations directly in your code.
Add the following line to main.m, immediately below the
int num = 400; line:
The above code takes the current value of
num, adds 100 to it, and then replaces the original value of
num with the new sum — 500.
Press the Run button in the upper left corner; you should see the following output in your console:
That’s enough theory to get started — you’re probably itching to start coding your first real app!
Building Your First Game
The application you’ll create in this tutorial is the classic game “Higher or Lower”. The computer generates a secret random number and prompts you to guess what that number is. After each successive guess, the computer tells you if your guess was too high or too low. The game also keeps track of how many turns it took for you to guess the correct number.
To get started, clear out all of the lines in the
@autoreleasepool block of main.m so that
main looks like the code below:
All the code you add in the steps below will be contained between the curly braces of the the
@autoreleasepool block.
You’re going to need three variables: one to store the correct answer, one to store the player’s guess and one to store the number of turns.
Add the following code within the
@autoreleasepool block:
The code above declares and initializes the three variables you need for your game. However, it won’t be much fun to play the game if
answer is always zero. You’ll need something to create random numbers.
Fortunately, there’s a built-in random number generator,
arc4random, which generates random numbers for you. Neat!
Add the following code directly below the three variable declarations you added earlier:
answer now stores a random integer. The
NSLog line is there to help you test your app as you go along.
Click the Run button in the upper left corner and check your console output. Run your app repeatedly to see that it generates a different number each time. It seems to work well, but what do you notice about the numbers themselves?
The numbers have a huge range — trying to guess a number between 1 and 1228691167 doesn’t sound like a lot of fun. You’ll need to scale those numbers back a little to generate numbers between 1 and 100.
There’s an arithmetic operator called the modulo operator — written as
% in Objective-C — that can help you with this scaling. The modulo operation simply divides the first number by the second number and returns the remainder. For example,
14705 % 100 will produce
5, as 100 goes into 14705 a total of 147 times, with a remainder of 5.
To scale your values back between 1 and 100, you can simply use the above trick on your randomly generated numbers. However, if you divide the randomly generated number by 100, you’ll end up with numbers that range from 0 to 99. So, you simply need to add 1 to the remainder to get values that range from 1 to 100.
Find the following line in your code:
…and modify it to look like the line below:
Run your app a few times and check the console output. Instead of huge numbers, your app should only produce numbers between 1 and 100.
You now know how to create and display information to your user, but how do you go about accepting input from the user to use in your app?
That’s accomplished by the
scanf function — read on to learn how it works.
Obtaining User Input
Add the following lines of code immediately after the previously added code:
Aha — that
%i looks familiar, doesn’t it? Format specifiers are used for output and input functions in your app. The
%i format specifier causes
scanf to process the player’s input as an integer.
Run your app; when you see the “Enter a number” prompt, click your mouse in the console to make the cursor appear. Type a number and press Enter; the program should print the number back to you, as shown in the screenshot below:
Now that you’ve confirmed that the random number generator and the user input methods work, you don’t need your debug statements any longer. Remove the following two
NSLog statements from your code:
and
Okay — you have the basic user input and output methods in place. Time to add some game logic.
Working With Conditionals
Right now, your code runs from top to bottom in a linear fashion. But how do you handle the situation where you need to perform different actions based on the user’s input?
Think about the design of your game for a moment. Your game has three possible conditions that need to be checked, and a set of corresponding actions:
-. Conditionals work by determining if a particular set of conditions is true. If so, then the app will perform the corresponding specific set of actions.
Add the following lines of code immediately after the
scanf("%i", &guess); line:
The conditional statement above starts with an
if statement and provides a set of conditions inside the parentheses. In the first block, the condition is “is
guess greater than
answer?”. If that condition is true, then the app executes the actions inside the first set of curly braces, skips the rest of the conditional statement, and carries on.
If the first condition was not met, the reverse condition is tested with an
else if statement: “is
guess less than
answer?”. If so, then the app executes the second set of actions inside the curly braces.
Finally, if neither of the first two conditions are true, then the player must have guessed the correct number. In this case, the app executes the third and final set of actions inside the curly braces. Note that this
else statement doesn’t have any conditions to check; this acts as a “catch-all” condition that will execute if none of the preceding conditions were true.
There are many different comparison operators that you can use in your
if statements, including the ones listed below:
- > : greater than
- < : less than
- >= : greater than or equal to
- <= : less than or equal to
- == : equal to
- != : not equal to
Note: To check if two variables are equal, use two equal signs. A single equals sign is the assignment operator, which assigns a value to a variable. It’s an easy mistake to make, but just remember that “equal TO” needs “TWO equals”! :]
Run your app, and try to guess the number that the computer chose. What happens after you make one guess?
Right now you can only enter one guess before the program quits. Unless you are extremely good at guessing — or psychic! :] — your app will tell you that your guess is incorrect and terminate.
Well, that’s no fun. You need some way to loop back to some point in the the program and give the player another chance to guess. Additionally, you want the app to stop when the player guesses the correct number.
This is a job for a while loop.
Working With While Loops
A while loop is constructed much like an
if statement; they both have a condition and a set of curly braces that contain code to execute if the condition is true.
An
if statement runs a code block only once, but a while loop will run the block of code repeatedly until the condition is no longer true. That means your code block needs an exit condition that makes the condition false to end the execution of the while loop. If you don’t have an exit condition, the loop could run forever!
The first question is which code needs to be inside the while loop. You don’t want to loop over the random number generation with the
arc4random statement, or else the player will be guessing a new random number each time! Just the user prompt, scanf, and the conditional
if block needs to be looped over.
The other question is how to create your exit condition. The repeat condition is to loop while
guess does not match
answer. This way, as soon as the user guesses the correct number, the exit condition occurs automatically.
Note that you will need to add two lines to your existing code to wrap your game logic in a while loop: the
while statement itself, and the closing curly brace to close off the
while loop.
Modify your code to include the two lines indicated by the comments below:
Run your app, and play through the game a few times. How good of a guesser are you?
Adding the Final Touches
You now have a functional game! There’s only one thing to add: the turn counter. This will give your player some feedback on their gameplay.
The
turn variable has already been created to store this information, so it’s just a matter of incrementing the value of
turn each time the player makes a guess.
Add the following line of code directly underneath the
while (guess != answer) { statement:
turn++; increments the count by one. Why don’t you just use
turn = turn + 1;, you ask? Functionally, it’s the same thing. However, incrementing a variable is such a common programming task that it pays to have a shorthand method to save on typing.
Fun Fact: The “C” programming language was derived from a previous language called “B”. When the next iteration of the C language was written, the developers put their tongue firmly in cheek and named the new language “C++” — meaning “one better than C”. :]
All that’s left to do is display the current value of
turn in two places: on the user prompt, and at the end of the game.
Find the following line of code:
…and modify it to look like the line below:
The code above uses the format specifier
%i to display the current value of
turn in the user prompt.
Add the following line of code immediately after the closing curly brace of the while loop:
This will display the final number of guesses once the player has guessed the correct number.
If you feel adventurous, instead of adding the above line to log the number of turns after the while loop, you could also modify the congratulatory message to output the number of turns right there. But I’ll leave that as an exercise for you :]
Take a minute and review the contents of
main in your app to make sure that it matches the code below:
Run your app and check out the latest changes!
Where To Go From Here?
By creating this small app, you’ve learned some of the most fundamental concepts in Objective-C, namely:
- functions
if..
elseblocks
- format specifiers
- while loops
The final project with full source code can be found here.
You’re now ready to move on to the next tutorial in this series, where you’ll learn about some more fundamental concepts in Objective-C, including working with objects and classes.
If you have any question or comments, come join the discussion on this series in the forums!
47 Comments: ... iguration/
NSLog(@"Guess #%i: Enter a number between 1 and 100", turn);
what is function of symbol '#'? | http://www.raywenderlich.com/38557/learn-to-code-ios-apps-1-welcome-to-programming | CC-MAIN-2015-14 | en | refinedweb |
strtod - convert string to a double-precision number
#include <stdlib.h> double strtod(const char *str, char **endptr);
The strtod() function converts the initial portion of the string pointed to by str to type double representation. First it decomposes the input string into three parts: an initial, possibly empty, sequence of white-space characters (as specified by isspace()); a subject sequence interpreted as a floating-point constant; and a final string of one or more unrecognised characters, including the terminating null byte of the input string. Then it attempts to convert the subject sequence to a floating-point number, and returns the result.
The expected form of the subject sequence is an optional + or - sign, then a non-empty sequence of digits optionally containing a radix character, then an optional exponent part. An exponent part consists of e or E, followed by an optional sign, followed by one or more decimal digits. The subject sequence is defined as the longest initial subsequence of the input string, starting with the first non-white-space character, that is of the expected form. The subject sequence is empty if the input string is empty or consists entirely of white-space characters, or if the first character that is not white space is other than a sign, a digit or a radix character.
If the subject sequence has the expected form, the sequence starting with the first digit or the radix character (whichever occurs first) is interpreted as a floating constant of the C language, except that the radix character is used in place of a period, and that if neither an exponent part nor a radix character appears, a radix character is assumed to follow the last digit in the string. If the subject sequence begins with a minus sign, the value resulting from the conversion is negated. A pointer to the final string is stored in the object pointed to by endptr, provided that endptr is not a null pointer.
The radix character is defined in the program's locale (category LC_NUMERIC). In the POSIX locale, or in a locale where the radix character is not defined, the radix character defaults to a period (.).
In other than the POSIX locale, otherod(). | http://pubs.opengroup.org/onlinepubs/7990989775/xsh/strtod.html | CC-MAIN-2015-14 | en | refinedweb |
07 October 2013 10:55 [Source: ICIS news]
BERLIN (ICIS)--Household insulation as an end-use for petrochemicals has grown significantly in the past five years and remains a growth area, a European adipic acid (ADA) producer said on Monday.?xml:namespace>
Although the construction industry has been performing badly since the global economic downturn of 2008, the amount of insulation being used in houses is increasing.
“Compared with five years ago, half as many houses [are being built] but there is three times as much insulation,” the producer said speaking on the sidelines of the 47th annual European Petrochemical Association (EPCA) meeting in Berlin, Germany.
The producer added that growth in the insulation sector will increase as the construction industry recovers.
“If it’s increasing on a low level of construction, it will pick-up in the future,” the producer said.
The producer went on to caution that, although the end-use is increasing, it remains a small percentage of the overall end-use portfolio of petrochemical markets.
“We’re not talking 90% of the [end-use] market, and of course there’s a lot of competition in that field,” the producer said.
Insulation is an end use for petrochemical products such as adipic acid, polyethylene terepthlate (PET), polyols and phthalic anhydride (PA). | http://www.icis.com/Articles/2013/10/07/9712743/EPCA-13-insulation-growth-to-remain-strong-ADA-producer-says.html | CC-MAIN-2015-14 | en | refinedweb |
I have been doing a little bit of work recently on Jersey implementation and one of the precursors to this has been to try to get the WADL that Jersey will generation by default to contain just a little bit more information with regards to the data being transferred. This makes it possible to help the user when generating client code and running testing tools against resources.
To this end I have put together a WADL generator decorator that examines all the JAX-B classes used by the -RS application and generates a bunch of XML Schema files. This is now in 1.9-SNAPSHOT which you can download from the Jersey web site in the normal way. (If you want to use the JResponse part of this you will need a build after the 13th of July)
This feature is not enabled by default in 1.9; but hopefully with some good feedback and a small amount of caching I might convince the Jersey bods to make this the default. For the moment you need to create and register a WsdlGeneratorConfig class to get this to work. So your class might look like this:
package examples; import com.sun.jersey.api.wadl.config.WadlGeneratorConfig; import com.sun.jersey.api.wadl.config.WadlGeneratorDescription; import com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator; import java.util.List; public class SchemaGenConfig extends WadlGeneratorConfig { @Override public List<WadlGeneratorDescription> configure() { return generator( WadlGeneratorJAXBGrammarGenerator.class ).descriptions(); } }
You then need to make this part of the initialization of the Jersey servlet, so your web.xml might looks like this:
<?xml version = '1.0' encoding = 'windows-1252'?> <web-app <servlet> <servlet-name>jersey</servlet-name> <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class> <init-param> <param-name>com.sun.jersey.config.property.WadlGeneratorConfig</param-name> <param-value>examples.SchemaGenConfig</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>jersey</servlet-name> <url-pattern>/jersey/*</url-pattern> </servlet-mapping> </web-app>
For the purposes of this blog I am just going to show the three basic references to entities that the code supports. So at the moment if will obviously process classes that are directly referenced and either a return value or as the entity parameter on a method. The code also supports the Jersey specific class JResponse, which is a subclass of Response that can have a generic parameter. (Hopefully this oversight will be fixed in JAX-RS 2.0)
package examples; import com.sun.jersey.api.JReponse; import examples.types.IndirectReturn; import examples.types.SimpleParam; import examples.types.SimpleReturn; import javax.ws.rs.Consumes; import javax.ws.rs.GET; import javax.ws.rs.PUT; import javax.ws.rs.Path; import javax.ws.rs.Produces; @Path("/root") public class RootResource { @GET @Produces("application/simple+xml") public SimpleReturn get() { return new SimpleReturn(); } @GET @Produces("application/indrect+xml") public JResponse<IndirectReturn> getIndirect() { return JResponse.ok(new IndirectReturn() ) .type( "application/indrect+xml" ).build(); } @PUT @Consumes("application/simple+xml") public void put(SimpleParam param) { } }
The type classes are all pretty trivial so I won't show them here. The only important factor is that they have the @XmlRootElement annotation on them. Although not shown here you can also use the JAX-B annotation @XmlSeeAlso on the resource classes to reference other classes that are not directly or indirectly referenced from the resource files. The most common use case for this is when you have a subtype of a class.
So enough of the java code, lets see what the WADL that is generated looks like:
<?xml version="1.0" encoding="UTF-8"?> <application xmlns=""> <doc xmlns: <grammars> <include href="application.wadl/xsd0.xsd"> <doc xml: </include> </grammars> <resources base=""> <resource path="/root"> <method name="GET" id="get"> <response> <representation xmlns: </response> </method> <method name="PUT" id="put"> <request> <representation xmlns: </request> </method> <method name="GET" id="getIndirect"> <response> <representation xmlns: </response> </method> </resource> </resources> </application>
In this example there is only one schema in the grammar section; but the code supports multiple schemas being generated with references between them. Let look at the schema for this example, note I did say the classes were trivial!
<?xml version="1.0" encoding="UTF-8"?> <xs:schema <xs:element <xs:element <xs:element <xs:complexType <xs:sequence/> </xs:complexType> <xs:complexType <xs:sequence/> </xs:complexType> <xs:complexType <xs:sequence/> </xs:complexType> </xs:schema>
There is still some internal work to be done on caching; but the basics are in place. Feedback would be appreciated, particularly in cases where the code doesn't see the referenced classes. Finally thanks to Pavel Bucek for being patient as I learned the ropes.
Update 5th September 2011 This feature has been enabled by default so you no longer have to perform any of the WadlGeneration configuration when working with 1.9 final release of Jersey.
13 comments:.
Keep up the good work.
--Chris
Interesting you should say that, I found and old tranaction from Marc Hadley that did work along that lines. I wil try it dig it out and see if it is stil useful.....
Thanks for your kind words!
Gerard
Thanks! AFAICT it doesn't work with the Jersey Maven plugin. Maybe you find some time to review
Reinhard,
Thanks for your patch, I will take a look over the next few days. Hopefully we can get this into 1.9. Just need to write some unit tests first.
Gerard
Hey Gerard,
Nice blog! Is there an email address I can contact you in private?
Ilias Tsagklis,
Oracle email addresses tend to be firstname.surname@oracle.com so you can guess mind as gerard.davison@oracle.com.
Gerard
do you have a complete source code example for getting this to work?
We'd love to be able to generate the xsd from the POJOs and have the xsd available in the wadl.
I don't want to have to have the xsd in a context folder. Just more management I'd rather not have.
Oggie,
This has been the default for a little while in Jersey 1.x and now 2.x. You need only have @XmlRootElement on the POJO classes for the XML Schema to be rendered. Is this not working for you?
Gerard
How do you rename the generated XSD file (xsd0.xsd) from the WADL? It is more user-friendly for the WADL to generate a name you want.
Colin,
You know that is a good idea, can you raise a Jersey bug for this? Perhaps even if we cannot control the name we can have a better default.
Gerard
I've testing this and in a project were I'm using JAXB generated pojos, this class needs to consider XmlType.class also:
if (
(clazz.getAnnotation(XmlRootElement.class) != null)
|| (clazz.getAnnotation(XmlType.class) != null)
){
classSet.add(clazz);
Just if anybodyels needs it...
Thanks a lot for your effort...
Just for those like me using JAXB generated classes from external jar as POJOs, this class needs to take care also of XmlType.class
if (
(clazz.getAnnotation(XmlRootElement.class) != null)
|| (clazz.getAnnotation(XmlType.class) != null)
){
classSet.add(clazz);
David,
Can you raise a bug on the Jersey project for this - and I will take a look. @XmlType is problematic because it doesn't have enough information to tell you what the element in the xml should be called.
There might be something I can do in your usecase.
Gerard | http://kingsfleet.blogspot.com/2011/07/auttomatic-xml-schema-generation-for.html | CC-MAIN-2015-14 | en | refinedweb |
QuotingSubject: [PATCH 2/2] fold up - net/core/scm.c: cred is constSigned-off-by: Serge Hallyn <serge.hallyn@canonical.com>--- net/core/scm.c | 4 ++-- 1 files changed, 2 insertions(+), 2 deletions(-)diff --git a/net/core/scm.c b/net/core/scm.cindex 21b5d0b..528fa36 100644--- a/net/core/scm.c+++ b/net/core/scm.c@@ -43,7 +43,7 @@ * setu(g)id. */ -static __inline__ bool uidequiv(struct cred *src, struct ucred *tgt,+static __inline__ bool uidequiv(const struct cred *src, struct ucred *tgt, struct user_namespace *ns) { if (src->user_ns != ns)@@ -57,7 +57,7 @@ check_capable: return false; } -static __inline__ bool gidequiv(struct cred *src, struct ucred *tgt,+static __inline__ bool gidequiv(const struct cred *src, struct ucred *tgt, struct user_namespace *ns) { if (src->user_ns != ns)-- 1.7.5.4 | http://lkml.org/lkml/2011/8/4/320 | CC-MAIN-2015-14 | en | refinedweb |
07 February 2012 07:50 [Source: ICIS news]
SINGAPORE (ICIS)--BP reported on Tuesday a 41.5% year-on-year fall in its replacement cost profit before interest and tax at its refining and marketing division to $564m (€429m) in the fourth quarter of last year, partly because of lower margins.
The segment’s profit before interest and tax for the December quarter 2011 fell by 71.2% year on year to $657m, the UK-based energy major said in a statement.
“The fourth quarter saw continued strong operations with our refinery utilization remaining well above the industry average,” it said.
“Compared with the same period last year, our result benefited from an improved contribution from supply and trading relative to the fourth-quarter loss in 2010 and our ability to access WTI-priced crude grades in the ?xml:namespace>
But these positive factors were offset by reduced refining margins, lower petrochemicals margins and foreign exchange impacts, the company added.
The segment’s fourth-quarter results also included net non-operating charges of $140m, compared with non-operating gains of $86m in the same period a year earlier, the company said.
For the full year of 2011, BP’s refining and marketing division’s replacement cost profit before interest and tax slipped by 1.5 % to $5.47bn, while its profit before interest and tax grew by 10% to $7.96bn.
Meanwhile, BP's overall fourth-quarter replacement cost profit was $7.61bn, up by 65% year on year.
For the full year of 2011, BP posted an overall replacement cost profit of $23.9bn, compared with a loss of $4.91bn a year | http://www.icis.com/Articles/2012/02/07/9529885/bp-refining-and-marketing-q4-replacement-cost-profit-falls-41.5.html | CC-MAIN-2015-14 | en | refinedweb |
Introduction to JBoss Seam
What is Seam?JBoss Seam is a "lightweight framework for Java EE 5.0". What does that mean? Isn't Java EE (Enterprise Edition) 5.0 itself a collection of "frameworks"? Why do you need another one that is outside of the official specification? Well, we view Seam as the "missing framework" that should have been included in Java EE 5.0. It sits on top of Java EE 5.0 frameworks to provide a consistent and easy-to-understand programming model for all components in an enterprise web application. It also makes stateful applications and business process-driven applications a breeze to develop. In another words, Seam is all about developer productivity and application scalability.
1. Integrate and Enhance Java EE FrameworksThe core frameworks in Java EE 5.0 are EJB (Enterprise JavaBeans) 3.0 and JSF (JavaServer Faces) 1.2. EJB 3.0 (EJB3, hereafter) is a POJO (Plain Old Java Objects) based lightweight framework for business services and database persistence. JSF is a MVC (Model-View-Controller) component framework for web applications. Most Java EE 5.0 web applications will have both EJB3 modules for business logic and JSF modules for the web front end. However, while EJB3 and JSF are complementary to each other, they are designed as separate frameworks each with its own philosophy. For instance, EJB3 uses annotations to configure services, while JSF makes use of XML files. Furthermore, EJB3 and JSF components are not aware of each other at the framework level. To make EJB3 and JSF work together, you need artificial facade objects (i.e., JSF backing beans) to tie business components to web pages, and boilerplate code (a.k.a plumbing code) to make method calls across framework boundaries. Gluing those technologies together is part of Seam's responsibilities.
Seam collapses the artificial layer between EJB3 and JSF. It provides a consistent, annotation-based approach to integrate EJB3 and JSF. With a few simple annotations, the EJB3 business components in Seam can now be used directly to back JSF web forms or handle web UI events. Seam allows developers to use the "same kind of stuff", annotated POJOs, for all application components. Compared with applications developed in other web frameworks, Seam applications are conceptually simple and require significantly less code (both in Java and XML) for the same functionalities. If you are impatient and want a quick preview of how simple a Seam application is, you can have a look at the hello world example described lower in this article.
Seam also makes it easy to accomplish tasks that were "difficult" on JSF. For instance, one of the major complaints of JSF is that it relies too much on HTTP POST. It is hard to bookmark a JSF web page and then get it via HTTP GET. Well, with Seam, generating a bookmarkable RESTful web page is very easy. Seam provides a number JSF component tags and annotations that would increase the "web friendliness" and web page efficiency of JSF applications.
At the same time, Seam expands the EJB3 component model to POJOs and brings the stateful context from the web tier to the business components. Furthermore, Seam integrates a number of leading other open source frameworks such as jBPM, JBoss Rules (a.k.a Drools), JBoss Portal, JBoss Microcontainer etc. Seam not only "wire them together" but also enhance the frameworks in similar ways it does to the JSF + EJB3 combination.
While Seam is rooted in Java EE 5.0, its application is not limited to Java EE 5.0 servers. Your Seam applications can be deployed in J2EE 1.4 application servers as well as in plain Tomcat servers. That means you can obtain production support for your Seam applications today!
1 + 1 > 2
It would be a mistake to think that Seam is just another integration framework that wires various frameworks together. Seam provides its own managed stateful context that allows the frameworks to deeply integrate with others via annotations, EL (Expression Language) expressions etc. That level of integration comes from Seam developer's intimate knowledge of the third party frameworks.
2. A Web Frameworks that Understands ORM
Object Relational Mapping (ORM) solutions are widely used in today's enterprise applications. However, most current business and web frameworks are not designed for ORM. They do not manage the persistence context over the entire web interaction lifecycle from the request comes in to the response is fully rendered. That has resulted in all kinds of ORM errors included the dreaded LazyInitializationException, and gave rise to ugly hacks like the "Data Transfer Object" (DTO).
Seam was invented by Gavin King, the inventor of the most popular ORM solution in the world (Hibernate). It is designed from the ground up to promote ORM best practices. With Seam, there is no more DTOs to write; lazy loading just works; and the ORM performance can be greatly improved since the extended persistence context acts as a natural cache to reduce database round trips.
Furthermore, since Seam integrates the ORM layer with the business and presentation layer, we can display ORM objects direct, you can even use database validator annotations on input forms , and redirect ORM exceptions to custom error pages.
3. Designed for Stateful Web Applications
Seam is designed for stateful web applications. Web applications are inherently multi-user applications, and e-commerce applications are inherently stateful and transactional. However, most existing web application frameworks are geared toward stateless applications. You have to fiddle with the HTTP session objects to manage user states. That not only clutters your application with code un-related to the core business logic, but also brings on an array of performance issues.
In Seam, all the basic application components are inherently stateful. They are much easier to use than the HTTP session since their states are declaratively managed by Seam. There is no need to write distracting state management code in a Seam application -- just annotate the component with its scope, lifecycle methods, and other stateful properties -- and Seam takes over the rest. Seam stateful components also provide much finer control over user states than the plain HTTP session does. For instance, you can have multiple "conversations", each consisting of a sequence of web requests and business method calls, in a HTTP session.
Furthermore, database caches and transactions can be automatically tied with the application state in Seam. Seam automatically holds database updates in memory and only commits to the database at the end of a conversation. The in-memory cache greatly reduces database load in complex stateful applications.
In addition to all the above, Seam takes state management in web applications a big step further by supporting integration with the Open Source JBoss jBPM business process engine. You can now specify the work flows of different people in the organization (i.e., customers, managers, technical support etc.) and use the work flow to drive the application, instead of relying on the UI event handlers and databases.
4. Web 2.0 Ready
Seam is fully optimized for Web 2.0 style applications. It provides multiple ways for AJAX (Asynchronous JavaScript And XML, a technology to add interactivity to web pages) support -- from drop-in JavaScript-less AJAX components, AJAX-enabling existing JSF components, to a custom JavaScript library that provide direct access to Seam server components from the browser as Javascript objects. Internally, Seam provides an advanced concurrency model to efficiently manage multiple AJAX requests from the same user.
A big challenge for AJAX applications is the increased database load. An AJAX application makes much more frequent requests to the server than its non-AJAX counterpart does. If all those AJAX requests have to be served by the database, the database would not be able to handle the load. The stateful persistence context in Seam acts as an in-memory cache. It can hold information throughout a long running conversation, and hence helps to reduce the database round trips.
Web 2.0 applications also tend to employ complex relational models for its data (e.g., a social network site is all about managing and presenting the relationships between "users"). For those sites, lazy loading in the ORM layer is crucial. Otherwise, a single query could cascade to loading the entire database. As we discussed earlier, Seam is the only web framework today that supports lazy loading correctly for web applications.
5. POJO Services via Dependency Bijection
Seam is a "lightweight framework" because it promotes the use of POJO (plain old Java objects) as service components. There are no framework interfaces or abstract classes to "hook" components into the application. The question, of course, is how do those POJOs interact with each other to form an application? How do they interact with container services (e.g., the database persistence service)?
Seam wires POJO components together using a popular design pattern known as "dependency injection" (DI). Under this pattern, the Seam framework manages the lifecyle of all the components. When a component needs to use another, it declares this dependency to Seam using annotations. Seam determines where to get this dependent component based on the application's current state and "injects" it into the asking component.
Expanding on the dependency injection concept, a Seam component A can also create another component B and "outjects" the created component B back to Seam for other components, such as C, to use later.
This type of bi-directional dependency management is widely used in even the simplest Seam web applications (e.g., the hello world example in Chapter 2). In Seam terms, we call this "dependency bijection".
6. Configuration by Exception
The key design principal that makes Seam so easy to use is "Configuration by exception". The idea is to have a set of common-sense default behavior for the components. The developer only needs to configure the component explicitly when the desired behavior is not the default. For instance, when Seam injects component A as a property of component B, the Seam name of component A defaults to the recipient property name in component B. There are many little things like that in Seam. The overall result is that configuration metadata in Seam is much simpler than competing Java frameworks. As a result, most Seam applications can be adequately configured with a small number of simple Java annotations. Developers benefit from reduced complexity and, in the end, much less lines of code for the same functionalities developed in competing frameworks.
7. Avoid XML Abuse
As you probably noticed, Java annotations play a crucial role in expressing and managing Seam configuration metadata. That is done by design to make the framework easier to work with.
In the early days of J2EE, XML was viewed as the "holy grail" for configuration management. Framework designers throw all kinds of configuration information, including Java class and method names, in XML files without much thought about the consequence to developers. In retrospect, that was a big mistake. XML configuration files are highly repetitive. They have to repeat information already in the code in order to connect the configuration to the code. Those repetitions make the application prone to minor errors (e.g., a mis-spelled class name would show up as an hard-to-debug error at runtime). The lack of reasonable default configuration settings further compounds this problem. In fact, in some frameworks, the amount of boilerplate code disguised as XML may rival or even exceed the amount of actual Java code in the application. For J2EE developers, this abuse of XML is commonly known as the "XML hell".
The enterprise Java community recognizes this problem with XML abuse and has very successful attempts to replace XML files with annotations in Java source code. EJB3 is the effort by the official Java standardization body to promote the use of annotations in enterprise Java components. EJB3 makes XML files completely optional, and it is definitely a step toward the right direction. Seam adds to EJB3 annotations and expands the annotation-based programming model to the entire web application.
Of course, XML is not entirely bad for configuration data. Seam designers recognize that XML is well-suited to specify web application pages flows or define business process work flows. The XML file allows us to centrally manage the work flow for the entire application, as opposed to scatter the information around in Java source files. The work flow information has little coupling with the source code -- and hence the XML files do not need to duplicate typed information already available in the code.
8. Designed for Testing
Seam is designed from ground up for easy testing. Since all Seam components are just annotated POJOs, they are very easy to unit test. You can just create instances of the POJOs using the regular Java new keyword and then run any methods in your testing framework (e.g., JUnit or TestNG). If you need to test the interaction between multiple Seam components, you can instantiate those components individually and then setup their relationships manually (i.e., use the setter methods explicitly instead of relying on Seam's dependency injection features)..
9. Great Tools Support
Tools support is crucial for an application framework that focuses on developer productivity. Seam is distributed with a command line application generator called Seam Gen. Seam Gen closely resembles the tools available in Ruby-On-Rails. It supports features like generating complete CRUD applications from a database, quick developer turn around for web applications via the "edit / save / reload browser" actions, testing support etc.
But more importantly, Seam Gen generated projects work out-of-the-box with leading Java IDEs such as Eclipse and NetBeans. With Seam Gen, you can get started with Seam in no time!
10. Let's Start Coding!
In a nutshell, Seam simplifies the developer overhead for Java EE applications, and at the same time, adds powerful new features beyond Java EE 5.0. In this next section (excerpted from chapter 2 in the book), we will show you some real code examples to illustrate how Seam works. You can find the source code download for all example applications in the book from the book web site.
Seam Hello World
The most basic and widely used functionality of JBoss Seam is to be the glue between EJB3 and JSF. Seam allows seamless (no pun intended!) integration between the two frameworks through managed components. It extends the EJB3 annotated POJO (plain old Java objects) programming model to the entire web application. There is no more artificially required JNDI lookup, verbose JSF backing bean declaration, excessive facade business methods, and painstakingly passing objects between tiers etc.
Continue to use Java EE patterns in Seam
In traditional Java EE applications, some design patterns, such as JNDI lookup, XML declaration of components, value objects, business facade, are mandatory. Seam eliminates those artificial requirements with annotated POJOs. However, you are still free to use those patterns when they are truely needed in your Seam applications.
Writing a Seam web application is conceptually very simple. You just need to code the following components:
- Entity objects represent the data model. The entity objects could be entity beans in the Java Persistence API (JPA, a.k.a, EJB3 persistence) or Hibernate POJOs. They are automatically mapped to relational database tables.
- JSF web pages display the user interface. The pages capture user input via forms and display result data. The form fields and data display tables are mapped to entity beans or collections of entity beans.
- EJB3 session beans or annotated Seam POJOs act as UI event handlers for the JSF web pages. They process user input encapsulated in entity beans and generate data objects for display in the next step (or page).
All the above components are managed by Seam and they are automatically injected into the right pages / objects at runtime. For instance, when the user clicks a button to submit a JSF form, Seam automatically parses the form fields and constructs an entity bean. Then, Seam passes the entity bean into the event handler session bean, which is also created by Seam, for processing. You do not need to manage component lifecycles and relationships between components in your own code. There is no boilerplate code and no XML file for dependency management.
In this chapter, we use a hello world example to show exactly how Seam glues together a web application. The example application works like this: The user can enter her name on a web form to "say hello" to Seam. Once she submits, the application saves her name to a relational database and displays all the users that have said hello to Seam. The example project is in the HelloWorld folder in the source code download for this book. To build it, you must have Apache ANT 1.6+ () installed. Enter the helloworld directory and run the command ant. The build result is the build/jars/helloworld.ear file, which you can directly copy into your JBoss AS instance's server/default/deploy directory. Now, start JBoss AS, and the application is available at the URL.
Install JBoss AS
To run examples in the book, we recommend you to use the JEMS (JBoss Enterprise Middleware Suite) GUI installer to install a Seam-compatible version of JBoss AS. The JEMS installer can be downloaded from. Please refer to Appendix A, Install and Deploy JBoss AS if you need further help on JBoss AS installation and application deployment.
You are welcome to use the sample application as a template to jump start your own Seam projects (see Appendix B, Use Example Applications as Templates). Or, you can use the command line tool Seam Gen (see Chapter 4, Rapid Application Development Tools) to automatically generate project templates, including all configuration files, for you. In this chapter, we will not spend too much time explaining the details of the directory structure in the source code project. Instead, we focus on the code and configuration artifacts a developer must write or manage to build a Seam application. This way, you can apply the knowledge to any project structure without being confined to our template.
Source Code Directories
A Seam application consists of Java classes and XML/text configuration files. In the book's example projects, the Java source code files are in the src directory, the web pages are in the view directory, and all configuration files are in the resources directory. See more in Appendix B, Use Example Applications as Templates.
1. Create a Data Model
The data model in the hello world application is simply a Person class with a name and an id property. The @Entity annotation tells the container to map this class to a relational database table, with each property a column in the table. Each Person instance corresponds to a row of data in the table. Since Seam is "configuration by exception", the container simply uses the class name property name for the table name and column name. The @Id and @GeneratedValue annotations on the id property indicates that the id column is for the primary key and its value is automatically generated by the application server for each Person object saved into the database.
@Entity
@Name("person")
public class Person implements Serializable {
private long id;
private String name;
@Id @GeneratedValue
public long getId() { return id;}
public void setId(long id) { this.id = id; }
public String getName() { return name; }
public void setName(String name) {this.name = name;}
}
The most important annotation in the Person class is the @Name annotation. It specifies the string name the Person bean should be registered under Seam. In other Seam components (e.g., pages and session beans), you can reference the managed Person bean for this component using the "person" name.
2. Map the Data Model to a Web Form
In the JSF page, we use the Person bean to back the form input text field. The #{person.name} symbol refers to the name property on the Seam component named "person", which is an instance of the Person entity bean.
<h:form>
Please enter your name:<br/>
<h:inputText<br/>
<h:commandButton
</h:form>
Below the entry form, the JSF page displays all people who has said "hello" to Seam in the database. The list of people is stored in a Seam component named "fans". The fans component is a List <Person> object. The JSF dataTable iterates through the list and displays each Person object in a row. The fan symbol is the iterator for the fans list. Figure 2.1, The Hello World web page shows the web page.
<h:dataTable
<h:column>
<h:outputText
</h:column>
</h:dataTable>
Figure 2.1. The Hello World web page
When the user clicks on the "Say Hello" button to submit the form, Seam creates the person managed component with the input data. It then invokes the sayHello() method on Seam component named "manager" (i.e., #{manager.sayHello} is the UI event handler for the form submit button), which saves the person object to the database and refreshes the fans list. The manager component is an EJB3 session bean and we will discuss it in the next section.
3. Handle Web Events
The manager component in Seam is the ManagerAction session bean, as specified by the @Name annotation on the class. The ManagerAction class has person and fans fields annotated with the @In and @Out annotations.
@Stateless
@Name("manager")
public class ManagerAction implements Manager {
@In @Out
private Person person;
@Out
private List <Person> fans;
The @In and @Out annotations are at the heart of the Seam programming model. So, let's look at exactly what they do here.
What is bijection
In Seam documentation, you sometimes see the term "bijection". That refers to the two-way injection and outjection interaction between Seam components and the Seam managed context.
Since the person field already contains the form data via injection, the sayHello() method simply saves it to the database via the JPA EntityManager, which is injected via the @PersistenceContext annotation. Then it refreshes the fans and person objects, which are outjected after the method exits. The sayHello() returns null to indicate that the current JSF page will be re-displayed with the most up-to-date model data after the call.
@PersistenceContext
private EntityManager em;
public String sayHello () {
em.persist (person);
person = new Person ();
fans = em.createQuery("select p from Person p")
.getResultList();
return null;
}
We are almost done, except for one little thing. As you probably noticed, the ManagerAction bean class implements the Manager interface. In order to conform to the EJB3 session bean specification, we need an interface that lists all the business methods in the bean. Below is the code for the Manager interface. Fortunately, it is easy to automatically generate this interface from any modern IDE tool.
@Local
public interface Manager {
public String sayHello ();
}
That is all the code you need for the Hello World example. In the next two sections, we cover alternative ways to do things and the configuration of Seam applications. You can skip the rest of the chapter for now if you want to jump right into the code and customize the helloworld project for your own small database application.
4. Better Understand the Seam Programming Model
Now we have rushed through the Hello World example application. But we have left off some important topics, such as alternative ways to do things and important features not covered by the above code. In this section, let's go through those topics. They help you gain a deeper understanding of Seam. But for the impatient, you can skip this section and come back later.
4.1. Seam POJO Components
In the above example, we used an EJB3 session bean to implement the application logic. But we are not limited to use EJB3 components in Seam. In fact, in Seam, any POJO with a @Name annotation can be turned into a managed component.
For instance, we can make ManagerAction a POJO instead of a EJB3 session bean.
@Name("manager")
public class ManagerAction {
@In (create=true)
private EntityManager em;
... ...
}
Using POJOs to replace EJB3 beans has pros and cons. POJOs are slightly simpler to program since they do not require EJB3-specific annotations and interfaces (see above). If all your business components are Seam POJOs, you can run your Seam application outside of the EJB3 application server (see Chapter 23, Seam Without EJB3).
However, POJOs also have less features than EJB3 components since POJOs cannot get EJB3 container services. Examples of EJB3 services you lose in non-EJB3 Seam POJOs include the following.
- The @PersistenceContext injection no longer works in POJOs. In order to obtain an EntityManager in a Seam POJO, you have to initialize the EntityManager in Seam configuration file and then use the Seam @In annotation to inject it into the POJO.
- There is no support for declarative method-level transaction in POJOs. Instead, you can configure Seam to demarcate a database transaction from when the web request is received until the response page is rendered.
- Seam POJOs cannot be message-driven components.
- No support for @Asynchronous methods.
- No support for container managed security.
- No transaction or component level persistence contexts. All persistence contexts in Seam POJOs are "extended" (see Section 7.1,"The Default Conversation Scope" for more details).
- No integration into the container's management architecture (ie. JMX console services).
- No Java remoting (RMI) into Seam POJO methods.
- Seam POJOs cannot be @WebService components.
- No JCA integration.
So, why would anyone want to use POJO components when deploying in an EJB3 container? Well, POJO components are good for pure "business logic" components, which delegate data access, messaging, and other infrastructure functionalities to other components. For instance, we can use POJO components to manage Seam data access objects. The "business logic" POJO is useful since they can be re-used in other frameworks if you need to. But all in all, their application is much smaller than EJB3 components, especially in small to middle sized applications. So, in most examples throughout this book, we use EJB3 components.
4.2. Ease of Testing
As we mentioned in Chapter 1, What is Seam, Seam is built from ground up to enable easy and out-of-the-container testing. In the helloworld project, we included two test cases for unit testing and integrated JSF testing in the test folder. The Seam testing infrastructure mocks the database, JSF, Seam context, and other application server services in plain Java SE environment. Just run ant test to run those tests.
4.3. Getter / Setter Based Bijection
In the Hello World example, we demonstrated how to biject Seam components against field variables. You can also biject components against getter and setter methods. For instance, the following code would work just fine.
private Person person;
private List <Person> fans;
@In
public void setPerson (Person person) {
this.person = person;
}
@Out
public Person getPerson () {
return person;
}
@Out
public List <Person> getFans () {
return fans;
}
While the above getter / setter methods are trivial, the real value of bijection via getter / setter methods is that you can add custom logic to manipulate the bijection process. For instance, you can validate the injected object or retrieve the outjected object on the fly from the database.
4.4. Avoid Excessive Bijection
Dependency bijection is a very useful design pattern. However, like any other design pattern, there is always a danger of overusing it. Too much dependency bijection can make the code harder to read since the developer must mentally figure out where each component is injected from. Too much bijection could also adds performance overhead since the bijection happens at runtime.
In the Hello World example, there is a simple way to reduce and even eliminate the bijection: just make the data components properties of the business component. This way, in the JSF pages, we only need to reference the business component and there is no bijection needed to tie the business and data components. For instance, we can change the ManagerAction class to the following.
@Stateless
@Name("manager")
public class ManagerAction implements Manager {
private Person person;
public Person getPerson () {return person;}
public void setPerson (Person person) {
this.person = person;
}
private List <Person> fans;
public List<Person> getFans () {return fans;}
... ...
}
Then, on the web page, we reference the properties as follows.
<h:form>
Please enter your name:<br/>
<h:inputText
<br/>
<h:commandButton
</h:form>
... ...
<h:dataTable
<h:column>
<h:outputText
</h:column>
</h:dataTable>
The bottom line is that Seam is versatile when it comes to dependency management. It is generally a good practice to encapsulate the data component with its data access business component. This is especially the case for stateful business components.
4.5. Page navigation in JSF
In this example, we have a single page application. After each button click, JSF re-renders the page with updated data model values. Obviously, most web applications would have more than one page. In JSF, an UI event handler method can determine which page to display next by returning the string name of a navigation rule. For instance, you can define the following navigation rule in the navigation.xml file.
<navigation-case>
<from-outcome>anotherPage</from-outcome>
<to-view-id>/anotherPage.jsp</to-view-id>
</navigation-case>
Then, if the sayHello() method returns the string value "anotherPage", JSF would display the anotherPage.jsp page next. This gives us programatic control over which page to display next from inside the UI event handler method.
4.6. Access database via the EntityManager
The JPA (Java Persistence API, a.k.a EJB3 Entity Bean Persistence) EntityManager manages the mapping between relational database tables and entity bean objects. The EntityManager is created by the application server at runtime. You can inject an EntityManager instance using the @PersistenceContext annotation.
The EntityManager.persist() method saves an entity bean object as a row in its mapped relational table. The EntityManager.query() method runs an SQL-like query to retrieve data from the database in the form of a collection of entity bean objects. Please refer to the JPA documentation for more on how to use the EntityManager and the query language. In this book, we only use the simplest queries.
By default, the EntityManager saves data to the embedded HSQL database. If you are running the application in JBoss AS on the local machine, you can open a GUI console for the HSQL database via the following steps: go to page, click on the database=localDB,service=Hypersonic MBean, and then click on the "invoke" button under the startDatabaseManager method. You can execute any SQL commands against the database from the console.
5. Configuration and Packaging
Next, let's move on to configuration files and application packaging next. You can actually generate almost all the configuration files and build script via the Seam Gen command line utility, or you can simply reuse the ones in the sample application source project. So, if you want to learn Seam programming techniques first and worry about configuration / deployment later, that is fine. You can safely skip this section and come back later when you need it.
In this section, we focus on the Seam EJB3 component configuration here. Seam POJO configuration and deployment outside of JBoss AS is of course also possible.
Most Seam configuration files are XML files. But wait! Hadn't we just promised that Seam would get us out of the "XML hell" in J2EE and Spring? How come it has XML files too? Well, as it turns out, there are some good uses for XML files! XML files are very good for deployment time configuration (e.g., the root URL of the web application and the location of the backend database) because it allows us to make deploy-time changes without changing and re-compiling the code; they are good for gluing different sub-systems in the application server together (e.g., to configure how JSF components interacts with Seam EJB3 components); XML files are also good for presentation related content (e.g., the web page and page navigation flow).
What we are against is to replicate information already exist in the Java source code to XML files. As you will soon see, this simple Seam application has several XML configuration files. All of them are very short and none of them concerns information already available in the Java code. In another word, there is no "XML code" in Seam.
Furthermore, most content in those XML files are fairly static. So, you can easily reuse those files for your own Seam applications. Please refer to Appendix B, Use Example Applications as Templates for instructions on how to use the sample application as a template for your own applications.
We will use the next several pages to detail the configuration files and packaging structure of the sample application. If you are impatient and are happy with the application template, you can skip those. Anyway, without further ado, let's look into how the hello world example application is configured and packaged. To build a deployable Seam application for JBoss AS, we have to package all the above Java classes and configuration files in an Enterprise Application aRchive (EAR) file. In this example, the EAR file is helloworld.ear. It contains three JAR files and two XML configuration files.
helloworld.ear
|+ app.war // Contains web pages etc.
|+ app.jar // Contains Seam components
|+ jboss-seam.jar // The Seam library
|+ META-INF
|+ application.xml
|+ jboss-app.xml
Source Code Directories
In the source code project, the resources/WEB-INF directory contains the configuration files that go into app.war/WEB-INF; the resources/META-INF directory contains files that go into app.jar/META-INF and helloworld.ear/META-INF; the resources directory root has files that go into the root directory of app.jar.
The application.xml file lists the JAR files in the EAR and specifies the root URL for this application.
<application>
<display-name>Seam Hello World</display-name>
<module>
<web>
<web-uri>app.war</web-uri>
<context-root>/helloworld</context-root>
</web>
</module>
<module>
<ejb>app.jar</ejb>
</module>
<module>
<java>jboss-seam.jar</java>
</module>
</application>
The jboss-app.xml file specifies the class loader for this application. Each EAR application should have a unique string name for the class loader. Here, we use the application name in the class loader name to avoid repetition.
<jboss-app>
<loader-repository>
helloworld:archive=helloworld.ear
</loader-repository>
</jboss-app>
The jboss-seam.jar file is the Seam library JAR file from the Seam distribution. The app.war and app.jar files are built by us. So, let's look into the app.war and app.jar files next.
5.1. The WAR file
The app.war file is a JAR file packaged to the Web Application aRchive (WAR) specification. It contains the web pages as well as standard JSF / Seam configuration files. You can also put JSF-specific library files in the WEB-INF/lib directory (e.g., the jboss-seam-ui.jar).
app.war
|+ hello.jsp
|+ index.html
|+ WEB-INF
|+ web.xml
|+ faces-config.xml
|+ components.xml
|+ navigation.xml
The web.xml file is required by all Java EE web applications. JSF uses it to configure the JSF controller servlet and Seam uses it intercept all web requests. The configuration in this file is pretty standard.
<web-app version="2.4"
xmlns=""
xmlns:xsi="..."
xsi:
<!-- Seam -->
<listener>
<listener-class>
org.jboss.seam.servlet.SeamListener
</listener-class>
</listener>
<!-- MyFaces -->
<listener>
<listener-class>
org.apache.myfaces.webapp.StartupServletContextListener
</listener-class>
</listener>
<context-param>
<param-name>
javax.faces.STATE_SAVING_METHOD
</param-name>
<param-value>client</param-value>
</context-param>
>*.seam</url-pattern>
</servlet-mapping>
<context-param>
<param-name>javax.faces.CONFIG_FILES</param-name>
<param-value>/WEB-INF/navigation.xml</param-value>
</context-param>
</web-app>
The faces-config.xml file is a standard configuration file for JSF. Seam uses it to add the Seam interceptor into the JSF lifecycle.
<faces-config>
<lifecycle>
<phase-listener>
org.jboss.seam.jsf.SeamPhaseListener
</phase-listener>
</lifecycle>
</faces-config>
The navigation.xml file contains JSF page navigation rules for multi-page applications. Since the hello world example only has a single page, this file is empty here.
The components.xml file contains Seam-specific configuration options. It is also pretty much application-independent with the exception of the jndi-pattern property, which must include the EAR file's base name for Seam to access EJB3 beans by their full JNDI name.
<components ...>
<core:init
<core:manager
</components>
5.2. The Seam Components JAR
The app.jar file contains all EJB3 bean classes (both entity beans and session beans), as well as EJB3 related configuration files.
app.jar
|+ Person.class // entity bean
|+ Manager.class // session bean interface
|+ ManagerAction.class // session bean
|+ seam.properties // empty file but needed
|+ META-INF
|+ ejb-jar.xml
|+ persistence.xml
The seam.properties file is empty here but it is required for JBoss to know that this JAR file contains Seam EJB3 bean classes, and process the annotations accordingly.
The ejb-jar.xml file contains extra configurations that can override or supplement the annotations on EJB3 beans. In a Seam application, it adds the Seam interceptor to all EJB3 classes. We can reuse the same file for all Seam applications.
<ejb-jar>
<assembly-descriptor>
<interceptor-binding>
<ejb-name>*</ejb-name>
<interceptor-class>
org.jboss.seam.ejb.SeamInterceptor
</interceptor-class>
</interceptor-binding>
</assembly-descriptor>
</ejb-jar>
The persistence.xml file configures the backend database source for the EJB3 entity beans. In this example, we just use the default HSQL database embedded inside JBoss AS (i.e., the java:/DefaultDS data source).
"/>
<property name="hibernate.show_sql" value="true"/>
</properties>
</persistence-unit>
</persistence>
So, that's all the configuration and packaging a simple Seam application needs. We will cover more configuration options and library files as we move to more advanced topics in this book. Again, the simplest way to start your Seam application is not to worry about those configuration files at all and start from a ready-made application template.
6. How is this Simple?
That's it for the hello world application. With three simple Java classes, a JSF page, and a bunch of largely static configuration files, we have a complete database driven web application. The entire application requires less than 30 lines of Java code and no "XML code". However, if you are coming from a PHP background, you might still be asking: "How is this simple? I can do that in PHP with less code!"
Well, the answer is that Seam applications are conceptually much simpler than PHP (or any other scripting language) applications. The Seam component model allows us to add more functionalities to the application in a controlled and maintainable manner. As we will soon see, Seam components make it a breeze to develop stateful and transactional web applications. The object-relational mapping framework (i.e., entity beans) allows us to focus on the abstract data model without having to deal with database-specific SQL statements.
This article was based on an excerpt from chapters 1 and 2 - in the rest of this book, we will discuss how to develop increasingly complex Seam applications using Seam components. See also the table of contents showing all the topics in the book.
See also two interviews with Seam creator Gavin King, from previous news articles about Seam:
- JBoss Seam 1.1 Indepth: An Interview with Gavin King
- JBoss Seam 1.0: rethinking web application architecture
About the author
Dr. Michael Yuan is the author of JBoss Seam: Simplicity and Power Beyond Java EE 5.0 - a book on next generation web application frameworks. He is also the author of Nokia Smartphone Hacks and other 3 technology books. Michael specializes in lightweight enterprise / web application, and end-to-end mobile application development. You can contact him via his blog. | http://www.infoq.com/articles/jboss-seam | CC-MAIN-2015-14 | en | refinedweb |
Does anyone know how to set up the AI to always lose? or something....
Does anyone know how to set up the AI to always lose? or something....
Hello! Can someone please give me any advice!
In the java game Pong, how do you defeat the computer? What is the code, method, or whatever that allows you to defeat the comp. please help me i need to turn this in for class. I need help. Anything...
Here it is;
public class Car {
private String make;
private String model;
private int year;
public Car() {
I'm so confused and I really need someone's assistance. I cant express how much I'd appreciate it.
here is my code!
public class Airplane {
private String make;
private String model;... | http://www.javaprogrammingforums.com/search.php?s=4acbb94b779904a5425db9c0775c44da&searchid=1461224 | CC-MAIN-2015-14 | en | refinedweb |
iMeshFactoryList Struct Reference
[Mesh support]
A list of mesh factories. More...
#include <iengine/mesh.h>
Inheritance diagram for iMeshFactoryList:
Detailed Description
A list of mesh factories.
Main ways to get pointers to this interface:
Main users of this interface:
Definition at line 948 of file mesh.h.
Member Function Documentation
Add a mesh factory wrapper.
Find a mesh factory wrapper and return its index.
Find a mesh factory wrapper by name.
Return a mesh factory wrapper by index.
Return the number of mesh factory wrappers in this list.
Remove a mesh factory wrapper.
Remove the nth mesh factory wrapper.
Remove all mesh factory wrappers.
The documentation for this struct was generated from the following file:
Generated for Crystal Space 1.4.1 by doxygen 1.7.1 | http://www.crystalspace3d.org/docs/online/api-1.4.1/structiMeshFactoryList.html | CC-MAIN-2015-14 | en | refinedweb |
Extras/SteeringCommittee/Meeting-20060810
From FedoraProject
2006 August 10 FESCo
Meeting Summaries are posted on the wiki at:
Attending
- warren
- thl
- scop
- c4chris
- rdieter
- tibbs
- abadger1999
- bpepple
- dgilmore
Summary
Mass Rebuild
- Plan and questions are on the wiki at
- Which packages need to be rebuilt
- sha256sum wasn't implemented in rpm so that isn't a factor
- Minimal buildroots have been implemented and will influence most packages
- Decided to go with the original plan: If a maintainer thinks a rebuild isn't required, they write a commit message that tells why not.
- Criteria that maintainers should apply is everything that isn't content should be rebuilt.
comps.xml
- The comps.xml script produced a big list:
- Commandline stuff should be included
- Packages should not be listed twice (confuses end users)
- nagmails will be sent so people know they have packages needing looking at
Legacy in buildroots
- Voted to activate legacy in buildroots and discuss maintainers responsibilities later
- tibbs will send out an email regarding maintainer responsibilities
- thl will document that FE branches in maintenance mode use Legacy packages
Ctrl-C Problem
- Async notification seems to be the only method to guarantee commit mail is sent
- Warren to take that up at the infrastructure meeting
Packaging Committee Report
- Python .pyos are to be included directly in packages instead of %ghosted
- c4chris is looking for a script to file bugzilla bugs for packages that need fixing
- scop has related changes to the python spec template prepared:
Sponsorship Nominations
- rdieter was accepted
Misc
- FESCo members are now on both FAB and -maintainers
Future Elections
- Draft is at:
- No objections to it yet. Wait one more week and then vote on acceptance
Package Database
-
-
- c4chris posted some brain dumps about this to extras-list
- c4chris joining infrastructure list so he can help coordinate between the extras community needs and infrastructure people doing the implementation
Free discussion
- zaptel-kmod and kmod policy in general
- Main question: Should kmods which have no intention of ever moving into the mainstream kernel be allowed in?
- If the package is well maintained and end-users accept the risk, why not?
- Fedora kernel developers have stated they will not debug kernel problems where users have kernel mods not from upstream installed
- thl to take the question to FAB
- documents how to remove a package from Extras (in case it is renamed upstream, moved to Core, etc)
After Meeting Discussion
- Too many items on the agenda per meeting? Items to discuss moving onto email instead of in the weekly IRC meetings
- Possibly move packaging committee report to email list
- To be able to discuss before the next packaging meeting, the FESCo meeting can't be directly after the packaging meeting. One would have to change times/dates
- Owners of a task update the status on the wiki before the meeting
- New sponsor discussion could happen entirely on list:
- Use two lists cvs-sponsors and fesco-list
Log
(09:55:13) ***warren here. (09:59:42) thl has changed the topic to: FESCo meeting in progress (09:59:46) thl: hi everyone (09:59:54) thl: who's around? (09:59:57) Rathann: o/ (10:00:01) Rathann: :> (10:00:07) ***c4chris__ is here (10:00:11) scop [n=scop] entered the room. (10:00:15) c4chris__ is now known as c4chris (10:00:18) drfickle left the room (quit: "Please do not reply to this burrito"). (10:00:24) rdieter: here. (10:00:36) tibbs: president. (10:00:41) warren: president? (10:00:56) tibbs: Used to say that in grade school. (10:00:59) c4chris: s/id// (10:01:07) c4chris: :) (10:01:24) thl: okay, then let's start slowly... (10:01:25) ***abadger1999 here (10:01:33) thl has changed the topic to: FESCo meeting in progress -- M{ae}ss-Rebuild (10:01:54) ***bpepple is here. (10:01:55) thl: scop, I assigned that job to you some days ago (10:02:06) scop: works4me (10:02:23) scop: but I've noticed a bunch of "fc6 respin" builds recently (10:02:29) thl: scop, there are some question still open; see (10:03:02) thl: scop, can you work out the details that are still below "to be discussed" (10:03:17) thl: then we can discuss it in the next meeting (10:03:26) thl: (or now if someone prefers) (10:03:45) abadger1999: rpm signing w/ sha256sum seems to affect all packages (10:03:53) scop: I can do that, but I think those look pretty much like no-brainers (10:03:57) abadger1999: So the answer to which packages need rebuilding would be all. (10:04:04) scop: good point (10:04:13) thl: scop, probably, but someone need to work them out anyway ;-) (10:04:14) f13: abadger1999: that didn't make it in (10:04:22) f13: abadger1999: there is no sha256sum. (10:04:38) ***cweyl is here (rabble) (10:04:44) rdieter: is sha256sum signing in place now/yet (no?)? (10:04:48) f13: abadger1999: so the real answer is anything that is gcc compiled. (10:05:08) f13: rdieter: I don't think the patches even went into our rpm package yet. It was nacked to do such signing for FC6/RHEL5 (10:05:27) f13: the option may be patched into rpm, but it won't be enabled by default. (10:05:27) abadger1999: What about rebuilding with minimal buildroots? (10:05:45) ***dgilmore is kinda here (10:05:54) f13: abadger1999: certainly possible criteria (10:06:01) c4chris: And do we start on August 28 ? (10:06:17) f13: now that Extras has the minimal roots, you could add 'anything that hasn't built since <foo>' where foo is the date that the minimal buildroots went into place. (10:06:29) thl: c4chris, that's the plan currently, but I suggest we discuss this next week again (or the week after that) (10:06:37) c4chris: k (10:07:01) dgilmore: abadger1999: minimal buildroots are in place and the buildsys-build packages have been fixed (10:07:20) thl: well, a lot of people don't like a mass-rebuild were everything is rebuild (10:07:28) f13: abadger1999: for Core, the criteria was: Any binary compiled package (for gnu-hash support), any package that hasn't yet been built in brew (our minimal buildroot stuff), and any package that was still being pulled in from FC5. (10:07:49) thl: because it creates lot's of downloads that are unnesessary for people (10:08:07) thl: but if we want to rebuild everything to make sure that it stil builds -> okay for me (10:08:24) c4chris: thl, we are talking devel here (10:08:43) thl: c4chris, yes, but devel will become FC6 (10:08:45) c4chris: I think we need the mass rebuild (10:08:47) rdieter: c4chris++ right, better to be safe than sorry later. (10:08:59) thl: and people updateing from FC5 have the downloads then, too (10:09:11) c4chris: thl, become is the key word... (10:09:15) f13: rebuilding for gnu-hash is a big bonus (10:09:28) f13: I think people would like the performance increase it can give. (10:09:30) warren: FC3 and FC6 will be Extras releases that continue to be used long after FC4 and FC5 are retired for obvious reasons. Rebuilding FC6 Extras now is a good idea. (10:09:47) thl: okay, so a real mass rebuild (10:09:56) warren: I guarantee it will also find more bugs. (10:09:57) thl: we should post this to the list for discussion (10:10:00) scop: it's just plain silly to rebuild eg. huge game content packages that aren't anything else but stuff copied from the tarball to the file system (10:10:05) thl: who will manage that? (10:10:25) warren: scop, what if we setup an explicit exclusion list? owners can request packages to exclude. (10:11:17) scop: sure, if there's someone to review/maintain that list (10:11:36) scop: personally, I think the original plan would work well enough (10:11:45) warren: Are we proposing that we automatically rebuild everything, or first ask maintainers to do it themselves to see who is AWOL? (10:11:58) thl: warren, ask maintainers (10:11:59) c4chris: warren, ask maintainers (10:12:04) rdieter: scop, warren: isn't that what "needs.rebuild" on FC6MassRebuild? (10:12:08) warren: If that's the case, then they can deal with their own exclusions. (10:12:21) c4chris: warren, I think so too (10:12:35) warren: OK, this plan is good. (10:13:03) warren: > are there still 3 orphans in devel repo according to the package report: dxpc gtkglarea2 soundtracker. What do do? Remove them? (10:13:09) thl: does anyone want to annouce this plan to the list for discussion? (10:13:14) f13: ask people, give it a week or two, then step in and buildthe ones that haven't piped up? (10:13:18) thl: or do we simply mention it in the summary? (10:13:19) warren: One more warning on the mailing list asking for owners, with Aug 28th deadline to remove if nobody claims it. (10:13:36) rdieter: warren++ (10:13:47) thl: warren, +1 (10:13:47) c4chris: warren, yup (10:13:55) bpepple: warren: +1 (10:14:03) warren: I'll do that warning now... (10:14:23) scop: does anything else depend on any of those three? (10:14:30) thl: warren, let me check if those three are still around first (or if there are others still around) (10:14:33) warren: good question, I'll check (10:14:36) tibbs: dxpc was rebuilt fairly recently. (10:15:04) thl: warren, no, seems those three are the only ones according to the PackageStatus page (10:15:18) rdieter: I updated/built dxpc recently... so that it would be in good shape for any potential new maintainer. (10:15:23) ***f13 steps out (10:15:37) thl: okay, so again: does anyone want to annouce this new plan to the list for discussion? or do we simply mention it in the summary? (10:15:48) scop: one more item: what happens if a maintainer does not take care of his rebuilds? (10:15:52) warren: rdieter, without anybody responsible though, do we really want to keep it? (10:16:02) scop: thl, I thought warren said he'd announce it (10:16:06) rdieter: warren: no maintainer -> remove it. (10:16:23) warren: scop, I said I'd announce the orphan removal warning (10:16:26) thl: scop, I though warren want's to warn that those three might get removed? (10:16:52) scop: yep... so what's the new plan thl was talking about? (10:17:00) tibbs: BTW, I count 37 pachages belonging to extras-orphan@fedoraproject.org in the current owners.list. (10:17:02) scop: Extras/Schedule/FC6MassRebuild? (10:17:26) thl: scop, I thought we rebuild everything now (besides big data-only packages)? (10:17:38) thl: that's the impression I got (10:18:15) warren: How about rebuild everything *EXCEPT* maintainers can choose to explicitly skip it if they have a good reason? (10:18:16) c4chris: thl, right, but that's pretty much what FC6MassRebuild says, no? (10:18:31) warren: ooh... (10:18:34) scop: c4chris, exactly (10:18:38) thl: warren, we need to define "good reason" in that case (10:18:45) warren: How about rebuild everything *EXCEPT* maintainers can choose to explicitly skip it if they have a good reason? Except they MUST rebuild if it is demonstrated that a rebuild would fail. (10:19:05) warren: Binaries without GNU_HASH always rebuild? (10:19:13) warren: perl modules built against earlier perl versions? (10:19:15) warren: python? (10:19:16) thl: warren, Binaries without GNU_HASH always rebuild +1 (10:19:44) cweyl: warren: so basically everything that isn't content? (10:20:25) c4chris: cweyl, yes that's a good way to put it (10:21:23) abadger1999: cweyl: Sounds good. So everything goes through the minimal buildroot. (10:21:38) thl: "so basically everything that isn't content" -- +1, 0 or -1 please! (10:21:45) scop: as a general rule, works4me (10:21:45) warren: Content must be rebuilt *IF* it would fail to rebuild. (10:21:51) thl: "everything that isn't content" +0.66 (10:22:02) scop: warren, if it fails to rebuild, it can't be rebuilt (10:22:11) warren: Then it must be fixed? (10:22:25) scop: yes (10:22:27) warren: How about a separate exclude.list that contains (10:22:31) warren: packagename reason (10:22:51) warren: uqm-bigasscontent 1GB of game data that doesn't change. (10:23:02) rdieter: warren: do we really care about the reason? (10:23:13) Nodoid [n=paul] entered the room. (10:23:27) scop: I still think that the commit message to needs.rebuild is enough (10:23:38) c4chris: scop, +1 (10:23:42) thl: scop, +1 (10:23:42) abadger1999: scop: +1 (10:24:01) warren: hmm.. I guess (10:24:02) warren: ok (10:24:04) tibbs: !2 (10:24:04) c4chris: "everything that isn't content" +1 (10:24:08) tibbs: crap. (10:24:12) tibbs: +1 (10:24:28) bpepple: +1 (10:24:45) rdieter: I agree with scop, why isn't needs.rebuild sufficient? (or is this orthogonal to that?) (10:25:25) thl: guys we run late (10:25:38) warren: Let's move on (10:25:42) rdieter: ok (10:25:44) thl: afaics the current plan looks like this: (10:25:47) scop: we use needs.rebuild, but append something like "as a general rule, everything that is not pure content should be rebuilt" to FC6MassRebuild (10:25:53) thl: "everything that isn't content need a rebuild" (10:26:06) thl: "a needs.rebuild file will be placed into cvs" (10:26:31) thl: and if people don#t rebuild stuff they have to mention the reasons in the cvs delete commits message of needs.rebuild (10:26:37) thl: that okay for everybody? (10:26:40) ***warren looks at the 37 orphans... (10:26:42) c4chris: thl, scop: +1 (10:26:46) rdieter: +1 (10:26:55) scop: +1 (10:27:02) abadger1999: +1 (10:27:04) tibbs: +1 (10:27:05) bpepple: +1 (10:27:11) thl: okay, then let's move on (10:27:24) thl has changed the topic to: FESCO meeting -- Use comps.xml properly (10:27:26) thl: c4chris ? (10:27:31) warren: I suggested the exclude.list with reasons because it is easier to search than commit messages (10:27:35) warren: but that's fine (10:27:38) c4chris: Well I produced a big list... (10:27:54) c4chris: There was another idea to trim down soem more libs (10:28:17) thl: c4chris, do you want to work further on that idea and the stuff in general? (10:28:22) c4chris: So far, only Hans has added stuff to comps... (10:28:49) scop: warren, searching for needs.rebuild in a folder containing commit mails should work (10:28:49) c4chris: thl, a few things are not really clear: (10:28:56) thl: c4chris, we should send out mails directly to the maintainers when we now what needs to be in comps (and what not) (10:29:07) c4chris: do we also want cmdline stuff? (10:29:09) thl: then at least some maintainers will add stuff (10:29:19) thl: c4chris, cmdline stuff -> IMHO yes (10:29:38) c4chris: and do we allow packages listed twice? (10:29:42) thl: c4chris, or how does core handle cmdline stuff? (10:30:00) thl: c4chris, twiece? good question. Maybe you should ask jeremy or f13 (10:30:16) c4chris: thl, I think there are cmdline tools in Core too (10:30:17) scop: I'd suggest a SIG or a special group for taking care of comps (10:30:20) rdieter: c4chris: twice, as in more than one section/group? (10:30:25) jeremy: c4chris: packages being listed twice should be avoided (10:30:28) c4chris: rdieter, yes (10:30:34) jeremy: it leads to lots of user confusion (10:30:41) c4chris: jeremy, k, I thought so (10:30:54) rdieter: agreed, pick one(primary) group (and stick with it). (10:30:56) thl: scop, well, do you want to run the SIG? (10:31:10) warren: c4chris, twice is fine (10:31:17) warren: jeremy, eh? (10:31:24) thl: scop, I think we need a QA sig that looks after stuff like this (10:31:45) c4chris: thl, we sorta have a QA SIG... :-) (10:31:49) scop: thl, no, I'm not personally terribly interested in it (10:31:50) warren: Hmm, I might be thinking of the common practice of listing packages multiple times in the hidden language groups. (10:32:25) thl: c4chris, well, then it would be good if that sig could handle that ;-) (10:32:32) scop: which is actually why I'd prefer someone who is interested and can keep things consistent and useful would maintain comps (10:32:48) c4chris: thl, k (10:32:58) thl: c4chris, thx (10:33:08) thl: well, was this all regarding this topic for today? (10:33:29) c4chris: thl, yup. I'll see about sending some nagmails... (10:33:43) thl: c4chris, tia (10:33:45) thl has changed the topic to: FESCO meeting in progress currently -- Activate legacy in buildroots (10:33:54) thl: well, we had the discussion on the list (10:34:21) thl: my vote: activate legacy in buildroots now, discuss the maintainer responsibilities later (10:34:22) mspevack is now known as mspevack_afk (10:34:38) dgilmore: +1 (10:34:43) thl: building FE3 and FE4 without legacy is ridiculous (10:34:51) warren: +1 (10:35:00) tibbs: +1 (10:35:01) c4chris: +1 (10:35:09) rdieter: +1 (10:35:09) thl: abadger1999 ? (10:35:18) abadger1999: Yeah, why not? +1 (10:35:35) bpepple: +1 (10:35:38) thl: k, settled (10:35:43) dgilmore: Ill get the mock config changes done, and make sure we sync up the legacy tree (10:35:51) thl: dgilmore, tia (10:36:01) thl has changed the topic to: FESCO meeting in progress currently -- CTRL-C problem (10:36:05) thl: any news? (10:36:08) scop: hold on a bit (10:36:10) abadger1999: tibbs, Are you still going to send out a maintainer resp. email? (10:36:17) thl has changed the topic to: FESCO meeting in progress currently -- Activate legacy in buildroots (10:36:19) thl: scop, ? (10:36:38) scop: it should be also documented somewhere that use of "EOL" FE branches assumess FL is in use too (10:36:40) tibbs: I lost some work in the wiki crash, unfortunately. (10:37:03) thl: scop, agreed (10:37:04) tibbs: I've been trying to feel out where the community is on FE3-4 maintenance. (10:37:13) thl: dgilmore, can you look after that, too? (10:37:42) thl: is the proper place afaics (10:38:05) scop: indeed (10:38:06) warren: What ever happened with the security SIG? The top priority of a security team would be to track issues and file tickets if new issues appear. Has that began? (10:38:18) abadger1999: tibbs: :-( (10:38:18) scop: yes (10:38:20) bpepple: warren: Yeah. (10:38:22) abadger1999: warren: Yes. (10:38:24) thl: warren, that's working afaics (10:38:26) warren: good =) (10:38:28) tibbs: That's been ongoing for some time. (10:38:55) thl: scop, well, I'll put in on if no one else wants (10:39:00) thl: so, let's move on (10:39:04) thl has changed the topic to: FESCO meeting in progress currently -- CTRL-C problem (10:39:19) thl: any news? sopwith still traveling afaik (10:39:22) thl: so probably no (10:39:31) ***thl will skip this one in 20 seconds (10:39:34) warren: Same thought as last week (10:39:46) warren: only way to really fix this is to make CVS commit mail async (10:39:50) warren: do we want to do this? (10:40:12) thl: warren, are there ans disadvantages (10:40:30) thl: s/ans/any/ (10:40:50) scop: yes, someone has to do the work :) (10:41:11) warren: I'll bring it up at the infrastructure meeting today (10:41:21) warren: I don't know how easy it would be (10:41:28) thl: scop, hehe, the usual problem ;-) (10:41:35) thl: warren, tia (10:41:40) scop: and actually, it can be somewhat difficult (10:41:40) thl: k, moving on (10:41:41) warren: tia means? (10:41:46) scop: warren, TIA (10:41:50) warren: ? (10:41:55) scop: Thanks In Advance (10:41:58) warren: oh (10:41:59) thl: thx in advance (10:41:59) warren: ok (10:42:12) thl has changed the topic to: FESCO meeting in progress currently -- Packaging Committee Report (10:42:14) thl: ? (10:42:42) abadger1999: I think the only thing that passed today was changing how .pyos are handled. (10:42:56) abadger1999: They are to be included instead of ghosted. (10:43:16) thl: abadger1999, we probably should run a script over a devel checkout of extras and file bugs (10:43:24) thl: otherwise stuff will never get fixed... (10:43:34) thl: maybe another job for the QA sig? (10:43:37) abadger1999: That's a good idea. (10:43:47) bpepple: Yeah, there should be a lot of python packages this affects. (10:43:54) c4chris: thl, gotcha (10:43:56) thl: or any other volunteers (10:43:59) thl: ? (10:44:08) thl: c4chris, sorry ;-) (10:44:16) c4chris: np (10:44:27) scop: related python spec template changes: (10:44:54) thl: abadger1999, c4chris, can you look after such a script please? (10:45:02) rdieter: it was/is-going to be mentioned on fedora-maintainers too... (10:45:12) scop: grep -lF '%ghost' */devel/*.spec (10:45:23) c4chris: thl, yup, we'll cook something up (10:45:32) thl: scop, + "| file _bugs.sh" (10:45:36) thl: c4chris, tia (10:45:49) abadger1999: thl: Will do. (10:45:49) thl: k, moving on (10:45:58) thl has changed the topic to: FESCO meeting in progress currently -- Sponsorship nominations (10:46:04) thl: any new nominations? (10:46:29) c4chris: not that I know of (10:46:32) dgilmore: thl: yeah ill look after it also (10:46:33) ***bpepple listens to the crickets. (10:46:35) rdieter: Well, it feels dirty, but I'd like to nominate me. (10:46:56) c4chris: rdieter, self nominations are fine (10:46:57) rdieter: (wanted to sponsor someone the other day, and realized I couldn't... yet) (10:47:21) ***thl wonders why rdieter isn't a sponsor yet (10:47:26) warren: +1 rdieter (10:47:27) thl: well, that's probably easy (10:47:32) bpepple: +1 (10:47:33) abadger1999: +1 (10:47:34) c4chris: +1 (10:47:35) thl: I think we don#t need to discus this (10:47:36) scop: huh? -1 (10:47:39) thl: rdieter sponsor +1 (10:47:39) dgilmore: +1 for rdieter (10:47:41) scop: OOPS +1 (10:47:51) thl: k, rdieter accepted (10:48:03) thl has changed the topic to: FESCO meeting in progress currently -- approve kmod's (10:48:03) rdieter: thanks, no I have no excuse for more work.. (: (10:48:04) tibbs: +1 (10:48:08) tibbs: (slow) (10:48:16) ***warren upgrades rdieter (10:48:21) thl: no new kmods up for discussion, moving on (10:48:35) thl has changed the topic to: FESCO meeting in progress currently -- MISC (10:48:53) thl: dgilmore, FE3 and FE4 builders are working fine now (python and elf-utils?) (10:48:57) thl: ? (10:49:01) dgilmore: thl: yep all donr (10:49:02) c4chris: crap, the package database item has been eaten by the wiki crash... (10:49:04) dgilmore: done (10:49:19) thl: dgilmore, thx (10:49:24) warren: BTW, are all FESCO members on fedora-maintainers and fedora-advisory-board? (10:49:29) thl: c4chris, uhhps, yes, sorry (10:49:55) rdieter: -maintainers, probably, fab maybe not (but probably should) 9: (10:49:57) dgilmore: warren: should be though i know us new FESCO guys were only just added to fab (10:50:04) tibbs: warren: My mailbox is certainly bulging from the latter list, yes. (10:50:10) bpepple: warren: I'm on both. (10:50:13) c4chris: warren, I am (10:50:13) thl: all FESCo members should be on FAB now (10:50:16) tibbs: Someone went through and added us. (10:50:22) thl: I checked the subscribers last week (10:50:26) rdieter: good. (10:50:35) dgilmore: fab is high volume :) (10:50:42) warren: As developement leaders in Fedora, your opinions would be valued in many matters of importance discussed on fab. (10:50:55) thl has changed the topic to: FESCO meeting in progress currently -- Future FESCo elections (10:51:29) thl: abadger1999, we wait a bit more for replys to your mail before we proceed? (10:51:29) abadger1999: I posted the draft. Anyone want to propose changes? (10:51:34) dgilmore: warren: i gave a longish reply last night about how i went about doing aurora extras (10:51:52) ***warren reads that mail... (10:51:54) dgilmore: abadger1999: looked pretty sane to me (10:51:58) c4chris: abadger1999, I like the draft (10:52:01) warren: rdieter, upgraded (10:52:21) ***thl votes for "wait another week before we accept the proposal" (10:52:38) c4chris: thl, k (10:52:44) bpepple: thl: +1 (10:52:49) rdieter: thl: +1 (10:52:54) thl: k, so let's move on (10:52:55) abadger1999: thl: +1 (10:53:05) thl has changed the topic to: FESCO meeting in progress currently -- package database (10:53:46) thl: c4chris, warren ,do we want to discuss stuff regarding that topic today? (10:53:46) c4chris: I posted a couple brain-dumps kinda mails (10:54:10) c4chris: any word of advice at this time ? (10:54:23) warren: Keep dumping, next step is to collect and organize everything we want. (10:54:29) thl: c4chris, well, "simply do something until somebody yells" (10:54:44) c4chris: thl, k (10:54:57) thl: c4chris, sorry, but that#s often the only way to really get something done afaics (10:54:57) dgilmore: c4chris: Just that there is alot of things that it needs to support. We need to design it in a modular fashion so it can grow as we grow (10:55:17) mspevack_afk is now known as mspevack (10:55:25) c4chris: warren, k. I'll try to start the collect phase soon... (10:55:26) [splinux] left the room (quit: "Ex-Chat"). (10:55:33) warren: due to the large scope of package database, mailing lists and wiki are most appropriate and a best use of time. Only after we have the mess better organized into plans should we discuss it? (10:55:34) abadger1999: c4chris: Are you on infrastructure list? (10:55:43) thl: warren, +1 (10:55:59) dgilmore: warren: +1 (10:56:08) c4chris: abadger1999, not sure (10:56:11) abadger1999: warren: +1 (10:56:18) bpepple: warren: +1 (10:56:24) c4chris: abadger1999, is it open? (10:56:31) warren: c4chris, yes, to anyone (10:56:40) c4chris: I'll check (10:56:52) thl: k, so let's move on (10:56:57) c4chris: I think I'm on the buildsys list (or soemthing) (10:57:00) warren: (10:57:03) thl has changed the topic to: FESCO meeting in progress currently -- free discussion around extras (10:57:10) thl: anything that we need to discuss? (10:57:11) j-rod: dgilmore: how are you setting -j32? (10:57:18) thl: or was that all for today? (10:57:34) dgilmore: j-rod: thats how many cpus are in the box so its being auto done (10:57:45) nirik: thl: any thoughts further on zaptel-kmod? (10:57:59) scop: this info should find a home somewhere: (10:58:10) dgilmore: thl: I think thats all. Ive seen no further feedback on buildysys issues (10:58:43) thl: nirik, good idea (10:58:44) tibbs: scop: Yes, this is in FESCo's jurisdiction, it seems. (10:58:52) thl has changed the topic to: FESCO meeting in progress currently -- zaptel-kmod (10:58:58) scop: it's in the packaging namespace but is Extras only (at least for now) so that's not quite the correct place for it (10:59:16) thl: well, nirik started a discussion on fedora-devel (10:59:18) j-rod: dgilmore: ah, gotcha -- I was thinking it would be better to use slightly less, with the intention of filling the cpus with multiple simultaneous builds (10:59:25) cweyl: scop: maybe just move to Extras/PackageEndOfLife? (10:59:28) thl: but there was much that came out of it afaics (10:59:47) scop: cweyl, yeah, maybe (10:59:48) nirik: I guess I would say: should the kmod guidelines say "If the upstream has no plans to merge with the upstream kernel, the module is not acceptable for extras" ? (11:00:08) thl: nirik, well, IMHO yes (11:00:13) dgilmore: zaptel as much as i want it in. We need to do something to make sure that it gets upstream. So we should ask the community of someone is willing to do that (11:00:58) thl: dgilmore, sounds like a good idea, but I doubt we'll find someone (11:01:22) cweyl: wait, I've never really understood this. why should it matter if (for whatever, presumably legitimate reason) an upstream decides to not pursue having it merged into the kernel proper? (11:02:12) thl: c4chris, well, drivers belong in the kernel -- kmods are a solution to bridge the gap until they get merged into the kenrel, but no long term solution (11:02:26) thl: s/c4chris/cweyl/ (11:02:31) thl: cweyl, simply example: (11:02:46) thl: kmod-foo breaks after rebase to 2.6.18 (11:02:56) dgilmore: though dave jones comment that he wont provide support for kernels with any kind of external module means that we could have confused users if they file a bug and get in return WONTFIX becaue of the extras kmods (11:02:59) thl: but a new kmod-bar doesn#t build anymore on 2.6.17 (11:03:06) devrimgunduz left the room (quit: Remote closed the connection). (11:03:13) thl: people that need both kmod-foo and kmod-bar will have problems now (11:03:25) tibbs: I personally believe that the length of the solution is up to the maintainer of the Extras package. (11:03:47) cweyl: tibbs: +1 (11:03:49) tibbs: Any external module solution will have problems keeping in step with kernels. At that point it's up to the maintainer. (11:03:56) cweyl: thl: I think we're setting the bar too high (11:04:05) scop: cweyl++ (11:04:17) tibbs: I don't believe that acceptance into extras should be used as any kind of political leverage as I have seen some state before. (11:04:45) tibbs: The issue of bugs and their interaction with the main kernel is quite compelling, though. (11:04:46) thl: tibbs, that was the agreement we settled on before we worked on the kmod stuff at all (11:04:54) cweyl: it sounds like zaptel-kmod is well maintained, isn't going anywhere anytime soon, and isn't going into the mainstream kernel ever due to business reasons... why shouldn't we let a maintainer package it for people who want it? (11:05:04) thl: well (11:05:15) thl: I'll bring it up to fab for discussion (11:05:19) thl: that okay for everybody? (11:05:26) cweyl: wait -- why fap? (11:05:30) thl: FAB (11:05:33) tibbs: thl: I was not a party to that discussion. (11:05:33) abadger1999: To me we have to keep our kernel people happy. (11:05:34) cweyl: err, fab? isn't this just an extras? (11:05:34) thl: sorry, typo (11:05:46) thl: abadger1999, +1 (11:05:47) bpepple: abadger1999: +1 (11:05:47) cweyl: err, a FESCo decision? (11:06:05) tibbs: If the "agreement" is unchangeable then that would be unfortunate. (11:06:08) thl: cweyl, no, this IMHO is something that matter for fedora at a whole project (11:06:13) cweyl: hrm. (11:06:21) thl: tibbs, everything can be changed (11:06:25) ***dgilmore steps back, I want it in but i understand the reasons for not having it in (11:06:35) warren: thl, except Bush's mind. (11:06:45) tibbs: WTF? (11:06:45) thl: warren, :) (11:06:57) cweyl: well, think of it this way too: as a user, I choose to buy a nvidia card, knowing that I'll need a kmod for it. (11:07:07) cweyl: I know there are risks that go along with that, and I'm willing to take them. (11:07:21) c4chris: but when your kernel crash, you usually file a kernel bug... (11:07:30) warren: BTW, vaguely on this topic, there was interesting news yesterday. (11:07:34) cweyl: Same thing with people who want to use zaptek-kmod, or the iscsi module that was discussed a while back... (11:07:40) warren: AMD is planning on open sourcing some of the ATI driver stuff. (11:07:49) tibbs: warren: Link? (11:07:54) thl: warren, are they really planing it? (11:07:59) c4chris: cweyl, that's why we need to keep the kernel maintainers happy (11:08:00) ***warren finds URL (11:08:00) bpepple: warren: Yeah, that looks like it could be promising. (11:08:01) dgilmore: cweyl: not always. my company provides me a system (probably laptop) it needs a kmod and i dont support binary drivers. but i get no say in the purchasing decision (11:08:04) thl: I only heard rumors (11:08:30) ***nirik also only heard rumors. (11:08:31) cweyl: c4chris: I'm not saying we shouldn't :) (11:08:41) tibbs: I've only heard wishful thinking. (11:08:47) warren: (11:08:49) nirik: warren: you talking about: ? (11:08:58) nirik: yeah, I read that as a rumor. (11:08:59) thl: I consider that as rumors (11:09:12) c4chris: cweyl, that means we probably need FAB buying the idea too... (11:09:24) thl: I ascutally asked AMD and ATI guys for clarification already earlier today (11:09:27) warren: I think consulting FAB i sa good idea. (11:09:30) thl: no reply until now (11:09:38) warren: thl, not surprised (11:09:40) cweyl: dgilmore: right. but the point is I want to use h/w that requires a kmod, and my decision to do that doesn't impact anyone else (11:09:54) warren: Anyway, if this becomes true, it will put pressure on NVidia. (11:10:05) ***bpepple thinks it needs to go to FAB also. (11:10:07) cweyl: c4chris: it's not like kmods change their package, or globally affect all fedora users (11:10:09) thl: so, anything else that needs to be discussed? (11:10:18) ***thl will close the meeting in 60 seconds (11:10:24) dgilmore: cweyl: yes and no. its not always my decsion but yes i want my hareware to work (11:10:45) warren: I think we're actually slowly winning the proprietary kernel module war (11:10:55) warren: Intel is leading the way, and hopefully AMD comes next (11:11:00) ***thl will close the meeting in 30 seconds (11:11:11) abadger1999: Approving this? (11:11:21) warren: Our only way to promote further growth is to maintain our hard line stance. (11:11:46) warren: SuSe and Ubuntu both switched away from proprietary modules to a hard line stance. (11:11:50) warren: we're doing the right thing (11:11:58) thl: abadger1999, well, if that something FESCo should approve (11:11:58) warren: it will be painful meanwhile though... (11:12:08) thl: why is it in the Packaging namespace then? (11:12:28) thl: abadger1999, but well, get's a +1 from me (11:12:32) c4chris: abadger1999, looks fine to me (11:12:52) scop: I suggested to put it in the packaging namespace, but others corrected me (11:13:07) abadger1999: thl: scop proposed it in packaging this morning but it seems much more like a FESCo thing. (11:13:17) thl: abadger1999, well, never mind (11:13:26) thl: abadger1999, it actually describes what we do already (11:13:29) thl: so +1 (11:13:35) c4chris: +1 (11:13:36) abadger1999: +1 (11:13:37) rdieter: +1 (11:13:38) bpepple: +1 (11:13:57) tibbs: +1 (11:14:00) thl: k, settled (11:14:16) thl: abadger1999, can you moe it over to a proper place in the wiki please? (11:14:23) ***thl will close the meeting in 30 (11:14:32) abadger1999: will do. (11:14:33) thl: s/moe/move/ (11:14:40) ***thl will close the meeting in 10 (11:14:50) thl: -- MARK -- Meeting end (11:14:55) thl: thx everybody! (11:15:05) tibbs: thl: Thanks. (11:15:11) c4chris: thl, thx (11:15:31) thl has changed the topic to: This is the Fedora Extras channel, home of the FESCo meetings and general Extras discussion. | | Next FESCo Meeting: 2006-08-17 1700 UTC (11:15:43) ***c4chris goes afk: time fer dinner :-) (11:16:10) thl: are you guys still satisfied with the way I run the meetings? (11:16:19) thl: or is there anything we should change? (11:16:33) tibbs: I'm not sure it could be done much better. (11:16:45) scop: thl, absolutely no problem with that (11:16:49) thl: I know I'm a bit hectic now and then (11:16:55) scop: I think the agenda is a bit swollen, though (11:17:01) dgilmore: thl: i think your doinga great job (11:17:21) thl: scop, you mean the wiki (11:17:28) dgilmore: thl: something you may need to step up and say hey were done on this now move on (11:17:40) thl: well, I wanted a better overview for those that missed a meeting or two (11:18:05) scop: thl, no, but I think there are maybe a bit too many things to process per meeting (11:18:39) thl: scop, yes (11:18:52) thl: maybe we should do more via mail (11:19:09) thl: e.g. the reports from the packaging committee maybe (11:19:33) scop: that would work for me (11:19:34) thl: maybe the owners of task should update the wiki pages with a status update *before* the meeting (11:19:51) abadger1999: thl: Both of those would be good ideas. (11:19:52) thl: we could avoid the "status update" questions then (11:20:05) scop: and sponsor stuff could be taken entirely to the list (11:20:09) Rathann: Nodoid: wow, you really did it, #202004 (11:21:08) thl: I'll think about it a bit more (11:21:37) thl: abadger1999, scop, but we need to make sure that we discuss important things from the packaging committee meetings here (11:21:50) scop: good point (11:21:51) thl: the less important things could be done on the list (11:22:11) abadger1999: If it needs to be discussed here then the report needs to be done here. (11:22:22) abadger1999: Or the wekly packaging meeting could be changed. (11:22:27) bpepple: scop: +1 (11:22:58) thl: abadger1999, changeing thepackaging meetings could help (11:23:51) thl: btw, regading sponsor discussions (11:23:59) thl: do we want to do that on cvs-sponsors (11:24:02) thl: or fesco only (11:24:11) ***thl votes for cvs-sponsors (11:24:38) ***bpepple votes for fesco. Could give more frank discussions. (11:24:48) abadger1999: I'll add meeting time to the packaging agenda. (11:25:44) thl: bpepple, but we are getting quite big, so it might more and more often happen that FESCo members don#t know those that were nominated for the sponsor job (11:26:23) bpepple: thl: It's pretty easy to query bugzilla for the reviews. (11:26:48) thl: well, let's discuss this on the list or in the next meeting (11:26:56) bpepple: no problem. (11:27:21) tibbs: c4chris did add bugzilla links to the "top reviewers" table in PackageStatus. (11:27:36) thl: bpepple, the past discussions we had on cvs-sponsors were quite frank iirc (11:27:45) tibbs: That list currently covers twelve reviews and up. (11:28:58) bpepple: yeah, but I'm afraid some of the discussions might discourage the participants enthusiasm, if there not comfortable with criticisms. (11:29:37) scop left the room ("Leaving"). (11:31:29) thl: bpepple, maybe we should do it on both lists? (11:32:00) bpepple: That might be a good idea. (11:34:20) cweyl: if there are criticisms, and they're discussed privately, might I suggest that it's a good idea to offer those criticisms, packaged constructively, to the nominee? That way they know what prevented them from being approved, and what they need to fix (11:34:20) thl: c4chris, /Extras/Schedule/PackageDatabase in place again (11:34:42) thl: cweyl, yeah, that's what I thought already, too (11:35:12) cweyl: and doing that publicly would give others guidance, establish precedent, etc, etc (11:35:30) cweyl: thl: I suspect I'm just stating the obvious here, but someone had to do it :) (11:35:45) dgilmore: thl: whats cvs-sponsers (11:36:26) bpepple: cweyl: Agreed, that was what I was thinking. (11:37:21) thl: dgilmore, a mailing-list where all sponsors are subscribed (it#s actually an alias or something else and no proper mailinglist) (11:38:00) dgilmore: thl: ok well i think its bad to discusss there because some fesco members are not in on that. (11:38:04) nirik: yeah, it's an alias... which unfortunately causes SPF issues. ;( (11:38:28) dgilmore: thl: namely me and im sure others (11:38:40) cweyl: nirik: cvs-sponsors causes skin cancer? (11:38:41) thl: dgilmore, well, we probably really should discuss on both lists (11:39:02) dgilmore: and I know i really dont have the time to dedicate to being a sponsor (11:39:37) nirik: cweyl: Sender Policy Framework... skin cancer might be easier to treat sometimes. ;( (11:40:00) cweyl: nirik: gotcha :) (11:40:36) dgilmore: nirik: people using SPF should add redhat /fedora mailservers to their dns (11:40:39) thl: dgilmore, np, I also don't find enough time to review and sponsor (11:40:49) thl: dgilmore, that's sad and I don#t like it (11:41:15) thl: but that's how it is ATM... :-/ (11:42:10) dgilmore: thl: yeaqh it is. Between Security and Infrastructure and my sparc port of extras, FESCO and maintaining my own packages I review when i can but I dont want to do a half assed job of something (11:42:50) thl: I actually even tried to get rid of a lot of packages in Extras (11:43:05) thl: to have more time for other stuff (11:43:13) dgilmore: I dont have a huge amount of packages. (11:43:36) dgilmore: I try to commit myself to things where i feel ill have a positive effect on something | http://fedoraproject.org/wiki/Extras/SteeringCommittee/Meeting-20060810 | CC-MAIN-2015-14 | en | refinedweb |
24 February 2012 06:59 [Source: ICIS news]
SINGAPORE (ICIS)--?xml:namespace>
The two companies had signed a framework agreement for the acquisition on 18 January, Zhejiang Satellite said in a statement to the Shenzhen Stock Exchange.
Later Zhejiang Satellite signed a transfer agreement with Zhejiang Julong on 3 February, according to the statement.
Separately the company’s board approved a further investment of CNY200m for the under construction 450,000 tonne/year propylene (C3) project of Zhejiang Julong, according to the statement.
The acquisition is part of the strategic development plan of Zhejiang Satellite, an acrylic acid and acrylate esters producer | http://www.icis.com/Articles/2012/02/24/9535475/chinas-zhejiang-satellite-to-acquire-fullshares-of-zhejiang-julong.html | CC-MAIN-2015-14 | en | refinedweb |
06 June 2013 20:05 [Source: ICIS news]
HOUSTON (ICIS) – ?xml:namespace>
April imports were also higher compared with March, when they were 10,873 tonnes, the ITC said.
US exports of ABS rose by 23% to 9,055 tonnes in April from 7,372 tonnes a year ago. April ABS exports also increased from March, when they were 7,981 tonnes, according the ITC.
The largest sources of imports in April | http://www.icis.com/Articles/2013/06/06/9676263/us-abs-imports-rose-by-2.5-in-april-exports-up-23.html | CC-MAIN-2015-14 | en | refinedweb |
- NAME
- VERSION
- SYNOPSIS
- DESCRIPTION
- ATTRIBUTES
- SUBROUTINES/METHODS
- DEPENDENCIES
- SEE ALSO
- PLATFORMS
- BUGS AND LIMITATIONS
- AUTHOR
- LICENSE AND COPYRIGHT
NAME
BuzzSaw::DataSource - A Moose role which defines the BuzzSaw data source interface
VERSION
This documentation refers to BuzzSaw::DataSource version 0.12.0
SYNOPSIS
package BuzzSaw::DataSource::Example; use Moose;
with 'BuzzSaw::DataSource';
sub next_entry { my ($self) = @_; .... return $line; }
sub reset { my ($self) = @_; .... }
DESCRIPTION
This is a Moose role which defines the methods which must be implemented by any BuzzSaw data source class. It also provides a number of common attributes which all data sources will require. A data source is literally what the name implies, the class provides a standard interface to any set of log data. A data source has a parser associated with it which is known to be capable of parsing the particular format of data found within this source. Note that this means that different types of log files (e.g. syslog, postgresql and apache) must be represented by different resources even though they are all sets of files. There is no requirement that the data be stored in files, it would be just as easy to store and retrieve it from a database. As long as the data source returns data in the same way, one complete entry at a time, it will work. A BuzzSaw data source is expected to work like a stream. Each time the next entry is requested the method should automatically move on until all entries in all resources are exhausted. For example, the Files data source automatically moves on from one file to another whenever the end-of-file is reached.
The following atributes are common to all classes which implement this interface.
- db
This attribute holds a reference to the BuzzSaw::DB object. When the DataSource object is created you can pass in a string which is treated as a configuration file name, this is used to create the BuzzSaw::DB object via the
new_with_configclass method. Alternatively, a hash can be given which is used as the set of parameters with which to create the new BuzzSaw::DB object.
- parser
This attribute holds a reference to an object of a class which implements the BuzzSaw::Parser role. If a string is passed in then it is considered to be a class name in the BuzzSaw::Parser namespace, short names are allowed, e.g. passing in
RFC3339would result in a new BuzzSaw::Parser::RFC3339 object being created.
- readall
This is a boolean value which controls whether or not all files should be read. If it is set to
true(i.e. a value of 1 - one) then the code which normally attempts to avoid re-reading previously seen files will not be used. The default value is
false(i.e. a value of 0 - zero).
SUBROUTINES/METHODS
Any class which implements this role must provide the following two methods.
- $entry = $source->next_entry
This method returns the next entry from the stream of log entries as a simple string. For example, with the Files data source - which works through all lines in a set of files - this will return the next line in the file.
This method should use the BuzzSaw::DB object
start_processingand
register_logmethods to avoid re-reading sources (unless the
readallattribute is true). It is also expected to begin and end DB transactions at appropriate times. For example, the Files data source starts a transaction when a file is opened and ends the transaction when the file is closed. This is designed to strike a balance between efficiency and the need to commit regularly to avoid the potential for data loss.
Note that this method does NOT return a parsed entry, it returns the simple string which is the next single complete log entry. When the data source is exhausted it will return the
undefvalue.
- $source->reset
This method must reset the position of all (if any) internal iterators to their initial values. This then leaves the data source back at the original starting position. Note that this does not imply that a second parsing would be identical to the first (e.g. files may have disappeared in the meantime).
The following methods are provided as they are commonly useful to most possible data sources.
- $sum = $source->checksum_file($file)
This returns a string which is the base-64 encoded SHA-256 digest of the contents of the specified file.
- $sum = $source->checksum_data($data)
This returns a string which is the base-64 encoded SHA-256 digest of the specified data.
DEPENDENCIES
This module is powered by Moose, it also requires MooseX::Types, MooseX::Log::Log4perl and MooseX::SimpleConfig.
The Digest::SHA module is also required.
SEE ALSO
BuzzSaw, BuzzSaw::DataSource::Files, DataSource::Importer, BuzzSaw::DB, BuzzSaw::Parser.
1 POD Error
The following errors were encountered while parsing the POD:
- Around line 206:
You forgot a '=back' before '=head1' | https://metacpan.org/pod/BuzzSaw::DataSource | CC-MAIN-2015-14 | en | refinedweb |
From: Steven Rostedt <srostedt@redhat.com>The ftrace utility reads delimited tokens from user space.Andrew Morton did not like how ftrace open coded this. He hada good point since more than one location performed this feature.This patch creates a copy_strtok_from_user function that can copya delimited token from user space. This puts the code in thelib/uaccess.c file. This keeps the code in a single locationand may be optimized in the future.Signed-off-by: Steven Rostedt <srostedt@redhat.com>--- include/linux/uaccess.h | 5 ++ lib/uaccess.c | 176 ++++++++++++++++++++++++++++++++++++++++++++++- 2 files changed, 180 insertions(+), 1 deletions(-)diff --git a/include/linux/uaccess.h b/include/linux/uaccess.hindex 6b58367..08769ad 100644--- a/include/linux/uaccess.h+++ b/include/linux/uaccess.h@@ -106,4 +106,9 @@ extern long probe_kernel_read(void *dst, void *src, size_t size); */ extern long probe_kernel_write(void *dst, void *src, size_t size); +extern int copy_strtok_from_user(void *to, const void __user *from,+ unsigned int copy, unsigned int read,+ unsigned int *copied, int skip,+ const char *delim);+ #endif /* __LINUX_UACCESS_H__ */diff --git a/lib/uaccess.c b/lib/uaccess.cindex ac40796..0c12360 100644--- a/lib/uaccess.c+++ b/lib/uaccess.c@@ -1,8 +1,19 @@ /*- * Access kernel memory without faulting.+ * lib/uaccess.c+ * Generic memory access without faulting.+ *+ * Copyright (C) 2008 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>+ *+ * Added copy_strtok_from_user -+ * Copyright (C) 2009 Red Hat, Inc., Steven Rostedt <srostedt@redhat.com>+ *+ * This source code is licensed under the GNU General Public License,+ * Version 2. See the file COPYING for more details.+ * */ #include <linux/uaccess.h> #include <linux/module.h>+#include <linux/ctype.h> #include <linux/mm.h> /**@@ -53,3 +64,166 @@ long probe_kernel_write(void *dst, void *src, size_t size) return ret ? -EFAULT : 0; } EXPORT_SYMBOL_GPL(probe_kernel_write);++/**+ * copy_strtok_from_user - copy a delimited token from user space+ * @to: The location to copy to+ * @from: The location to copy from+ * @copy: The number of bytes to copy+ * @read: The number of bytes to read+ * @copied: The number of bytes actually copied to @to+ * @skip: If other than zero, will skip leading white space+ * @delim: NULL terminated string of character delimiters+ *+ * This reads from a user buffer, a delimited toke.+ * If skip is set, then it will trim all leading delimiters.+ * Then it will copy thet token (non delimiter characters) until+ * @copy bytes have been copied, @read bytes have been read from+ * the user buffer, or more delimiters have been encountered.+ *+ * Note, if skip is not set, and a dilmiter exists at the beginning+ * it will return immediately.+ *+ * Example of use:+ *+ * in user space a write(fd, "foo bar zot", 12) is done. We want to+ * read three words.+ *+ * len = 12; - length of user buffer+ * ret = copy_strtok_from_user(buf, ubuf, 100, len, @copied, 1, " ");+ * ret will equal 4 ("foo " read)+ * buf will contain "foo"+ * copied will equal 3 ("foo" copied)+ * Note, @skip could be 1 or zero and the same would have happened+ * since there was no leading space.+ *+ * len -= ret; - 4 bytes was read+ * read = ret;+ * ret = copy_strtok_from_user(buf, ubuf+read, 100, len, @copied, 1, " ");+ * ret will equal 5 (" bar " read, notice the double space between+ * foo and bar in the original write.)+ * buf will contain "bar"+ * copied will equal 3 ("bar" copied)+ * Note, @skip is 1, if it was zero the results would be different.+ * (see below)+ *+ * len -= ret; - 5 bytes read+ * read += ret;+ * ret = copy_strtok_from_user(buf, ubuf+read, 100, len, @copied, 1, " ");+ * ret = -EAGAIN (no space after "zot")+ * buf will contain "zot"+ * copied will equal 3 ("zot" copied)+ *+ * If the second copy_strtok_from_user above had 0 for @skip, where we+ * did not want to skip leading space (" bar zot")+ * ret will equal 1 (" " read)+ * buf will not be modified+ * copied will equal 0 (nothing copied).+ *+ * Returns:+ * The number of bytes read from user space (@from). This may or may not+ * be the same as what was copied into @to.+ *+ * -EAGAIN, if we copied a token successfully, but never hit an+ * ending delimiter. The number of bytes copied will be the same+ * as @read. Note, if skip is set, and all we hit were delimiters+ * then we will also returne -EAGAIN with @copied = 0.+ *+ * @copied will contain the number of bytes copied into @to+ *+ * -EFAULT, if we faulted during any part of the copy.+ * @copied will be undefined.+ *+ * -EINVAL, if we fill up @from before hitting a single delimiter.+ * @copy must be bigger than the expected token to read.+ */+int copy_strtok_from_user(void *to, const void __user *from,+ unsigned int copy, unsigned int read,+ unsigned int *copied, int skip,+ const char *delim)+{+ unsigned int have_read = 0;+ unsigned int have_copied = 0;+ const char __user *user = from;+ char *kern = to;+ int ret, len;+ char ch;++ /* get the first character */+ ret = get_user(ch, user++);+ if (ret)+ return ret;+ have_read++;++ len = strlen(delim);++ /*+ * If skip is set, and the first character is a delimiter+ * then we will continue to read until we find a non delimiter.+ */+ if (skip) {+ while (have_read < read && memchr(delim, ch, len)) {+ ret = get_user(ch, user++);+ if (ret)+ return ret;+ have_read++;+ }++ /*+ * If ch is still a delimiter, then have_read == read.+ * We successfully copied zero bytes. But this is+ * still valid. Just let the caller try again.+ */+ if (memchr(delim, ch, len)) {+ ret = -EAGAIN;+ goto out;+ }+ } else if (memchr(delim, ch, len)) {+ /*+ * If skip was not set and the first character was+ * a delimiter, then we return immediately.+ */+ ret = have_read;+ goto out;+ }+++ /* Now read the actual token */+ while (have_read < read &&+ have_copied < copy && !memchr(delim, ch, len)) {++ kern[have_copied++] = ch;++ ret = get_user(ch, user++);+ if (ret)+ return ret;++ have_read++;+ }++ /*+ * If we ended with a delimiter then we have successfully+ * read in a full token.+ *+ * If ch is not a delimiter, and we have filled up @from,+ * then this was an invalid token.+ *+ * If ch is not white space, and we still have room in @from+ * then we let the caller know we have split a token.+ * (have_read == read)+ */+ if (memchr(delim, ch, len))+ ret = have_read;+ else if (have_copied == copy)+ ret = -EINVAL;+ else {+ WARN_ON(have_read != read);+ ret = -EAGAIN;+ }++ out:+ *copied = have_copied;++ return ret;+}+EXPORT_SYMBOL_GPL(copy_strtok_from_user);-- 1.5.6.5-- | http://lkml.org/lkml/2009/2/26/16 | CC-MAIN-2015-14 | en | refinedweb |
In last week’s column, I mentioned del.icio.us, Joshua Schachter’s “social bookmarking” service. Since then, I’ve explored the service more deeply in a series of blog entries. Using del.icio.us, I’m now able to process information in dramatically more efficient ways. Let’s look at some of the reasons why.
For starters, del.icio.us is a machine-independent way to store bookmarks. From any Web page, you can use a del.icio.us bookmarklet to post the page’s URL, title, description, and a set of keywords or tags. From any computer, you can then recover the page by searching for text in the title or description or by navigating to it using one of its tags.
Dumping your own information into a service is always a concern. What if the service goes belly-up? You need an exit strategy, and del.icio.us provides exactly the right kind. A simple URL retrieves all your posts as an XML file. I now run a scheduled daily fetch of that URL, so that everything I add to del.icio.us is backed up locally.
A clean exit strategy is obviously desirable. Less obvious but equally crucial is a robust entry strategy. How easily can you import your own data into the service? The test case here was an XML file with hundreds of my blog entries. Thanks to the simplicity of del.icio.us’ API, which is similar to REST (representational state transfer), it passed the test with flying colors. After tagging the entries with keywords, I transformed the file into the set of URLs needed to populate my slice of the del.icio.us namespace. Suddenly, my blog entries and InfoWorld columns became navigable in a new and powerful way.
Of course, most blogging systems support categorized browsing. But I quit using my blog that way because I wasn’t interested in building a private taxonomy. A tag in del.icio.us is really a topic in a publish/subscribe network. When I assign a tag to an item, I’m routing the item to a topic. Anyone who subscribes to that topic using its RSS feed can monitor the items flowing to it.
If anyone can publish to a topic, won’t the signal-to-noise ratio degrade? Yes, but del.icio.us has another ace up its sleeve. For a given topic, you could subscribe to all items, but you might rather subscribe to postings only from people whose views on that topic you trust. On the topic of social software, for example, Clay Shirky and Sébastien Paquet are two observers who would make excellent filters.
In a March 2003 column, I wrote about the challenges of doing publish/subscribe at Internet scale. David Rosenblum, who was then CTO of messaging startup PreCache, had described to me an optimization procedure he called “filter merging.” The architecture of del.icio.us lends itself to just that kind of optimization. The combination of several trusted human filters, with respect to some topic of interest, yields a powerful merged filter.
Nothing about del.icio.us is rocket science. A competent developer could re-create the service in short order. And that’s one of its greatest strengths. We’re all becoming information routers, but we’re still discovering how the process needs to work. To do the experiment, we’ll need flexible and lightweight systems that are easy to implement, join, use, and build on. Joshua Schachter has shown how to build the right kind of laboratory. | http://www.infoworld.com/article/2666472/application-development/the-human-information-filter.html | CC-MAIN-2015-14 | en | refinedweb |
i have added the sout(ex); in catch field
and when i push the button it gives me this error :
java.io.IOException: could not create audio stream from input stream
i have added the sout(ex); in catch field
and when i push the button it gives me this error :
java.io.IOException: could not create audio stream from input stream
no ... there are no error messages ...
it should play the audio in background when i push the button ! .... but unfortunatly it dsnt do nythng :(
Hi friends
im new at this topic and i have tried all the things to make it work but without results .. :-<
can you please tell me whats not going in this simple code ??
/*
* To change...
i have done a thing like that .. but it doesnt works .. :(
public class DragMouseAdapter implements MouseListener{
JLabel templbl,lbl1,lbl2;
@Override
...
how can i do that ?
can you explain me with an example ?
sorry but im at start ...
i have tried to make a Drag and Drop for type of Puzzle. Its my school project and im the only one who is stuck with this trouble ...
it doesnt exchange me the Location of the Labels .. with...
ah ok ...
is it possible to drag a Label containing Image from a panel1 to another Label containing in panel2 ... making a type of exchange of Labels ? or its not possible due to the Panels ??
i tried to use img.addMouseListener()..etc
but the add methods for ImageIcon doesnt seem to exist :(
Hi friends ..
i have been trying to do a thing like exchange 2 JLabels Location using MouseListener or MouseLocation without any success :(
These 2 labels Contain ImageIcons. and have the same...
HI
i have been working on this project for a pair of days and i dont remember if i had this problem at the start as well or not ...
because the programme sometime works perfectly and sometime...
hi community! :p
im new at java and need so much help :D
so help me whenever you can ;)
thnx | http://www.javaprogrammingforums.com/search.php?s=4acbb94b779904a5425db9c0775c44da&searchid=1461229 | CC-MAIN-2015-14 | en | refinedweb |
...(concept_name, member)
The declaration of the concept is
template<class Sig, class T = _self> struct concept_name;
where
Sig is a function type giving the signature of the member function, and
T is the object type.
T may be const-qualified for const member functions.
concept_name<R(A...) const, T> is an alias for
concept_name<R(A...), const T>.
This macro can only be used at namespace scope.
Example:
namespace boost { BOOST_TYPE_ERASURE_MEMBER(push_back) } typedef boost::has_push_back<void(int)> push_back_concept;
The concept defined by this function may be specialized to provide a concept_map. The class object will be passed by reference as the first parameter.
template<> struct has_push_back<void(int), std::list<int> > { static void apply(std::list<int>& l, int i) { l.push_back(i); } };
In C++03, the macro can only be used in the global namespace and is defined as:
#define BOOST_TYPE_ERASURE_MEMBER(qualified_name, member, N)
Example:
BOOST_TYPE_ERASURE_MEMBER((boost)(has_push_back), push_back, 1) typedef boost::has_push_back<void(int), _self> push_back_concept;
For backwards compatibility, this form is always accepted. | https://www.boost.org/doc/libs/master/doc/html/BOOST_TYPE_ERASURE_MEMBER.html | CC-MAIN-2021-49 | en | refinedweb |
If we have a close look at LEGO® products, we can see that they are all made of the same building blocks. However, the composition of these blocks is the key differentiator for whether we are building a castle or space ship. It's pretty much the same for Podman, and its sibling projects Buildah, Skopeo, and CRI-O. However, instead of recycled plastic, the building blocks for our container tools are made of open source code. Sharing these building blocks allows us to provide rock-solid, enterprise-grade container tools. Features ship faster, bugs are fixed quicker, and the code is battle-tested. And, well, instead of bringing joy into playrooms, the container tools bring joy into data centers and workstations.
The most basic building block for our container tools is the containers/storage library, which locally stores and manages containers and container images. Going one level higher, we find the containers/image library. As the name suggests, this library deals with container images and is incredibly powerful. It allows us to pull and push images, manipulate images (e.g., change layer compression), inspect images along with their configuration and layers, and copy images between so-called image transports. A transport can refer to a local container storage, a container registry, a tar archive, and much more. Dan Walsh wrote a great blog post on the various transports that I highly recommend reading..
Managing container registries with registries.conf
The
registries.conf configuration is in play whenever we push or pull an image. Or, more generally speaking, whenever we contact a container registry. That's an easy rule of thumb. The systemwide location is
/etc/containers/registries.conf, but if you want to change that for a single user, you can create a new file at
$HOME/.config/containers/registries.conf.
So let's dive right into it. In the following sections, we will go through some examples that explain the various configuration options in the
registries.conf. The examples are real-world scenarios and may be a source of inspiration for tackling your individual use case.
Pulling by short names
Humans are lazy, and I am no exception to that. It is much more convenient to do a
podman pull ubi8 rather than
podman pull registry.access.redhat.com/ubi8:latest. I keep forgetting which image lives on which registry, and there are many images and a lot of registries out there. There is Docker Hub and Quay.io, plus registries for Amazon, Microsoft, Google, Red Hat, and many other Linux distributions.
Docker addressed our laziness by always resolving to the Docker Hub. A
docker pull alpine will resolve to
docker.io/library/alpine:latest, and
docker pull repo/image:tag will resolve to
docker.io/repo/image:tag (notice the specified repo). Podman and its sibling projects did not want to lock users into using one registry only, so short names can resolve to more than docker.io, and as you may expect, we can configure that in the
registries.conf as follows:
unqualified-search-registries = ['registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'docker.io']
The above snippet is taken directly from the
registries.conf in Fedora 33. It's a list of registries that are contacted in the specified order when pulling a short name image. If the image cannot be found on the first registry, Podman will attempt to pull from the second registry and so on. Buildah and CRI-O follow the same logic but note that Skopeo always normalizes to docker.io.
[ You might also like: What's the next Linux workload that you plan to containerize? ]
Searching images
Similar to the previous section on pulling, images are commonly searched by name. When doing a
podman search, I usually do not know or simply forgot on which registry the given image lives. When using Docker, you can only search on the Docker Hub. Podman gives more freedom to users and allows for searching images on any registry. And unsurprisingly,
registries.conf has a solution.
Similar to pulling, the
unqualified-search-registries are also used when using a short name with
podman search. A
podman search foo will look for images named foo in all unqualified-search registries.
Large corporations usually have on-premises container registries. Integrating such registries in your workflow is as simple as adding them to the list of unqualified-search registries.
Short-name aliases
Newer versions of Podman, Buildah, and CRI-O ship with a new way of resolving short names, primarily by using aliases. Aliases are a simple TOML table
[aliases] in the form
"name" = "value", similar to how Bash aliases work. We maintain a central list of aliases together with the community upstream at
github.com/containers/shortnames. If you own an image and want to have an alias, feel free to open a pull request or reach out to us.
Some distributions, like RHEL8, plan on shipping their own lists of short-names to help users and prevent them from accidentally pulling images from the wrong registry.
Explaining how short-name aliases work in detail would expand this blog post significantly, so if you are interested, please refer to an earlier blog post on short-name aliases.
Configuring a local container registry
Running a local container registry is quite common. I have one running all the time, so I can cache images and develop and test new features such as auto-updates in Podman. The bandwidth in my home office is limited, so I appreciate the fast pushes and pulls. Since everything is running locally, I don't need to worry about setting up TLS for the registry. That implies connecting to the registry via HTTP rather than via HTTPS. Podman allows you to do that by specifying
--tls-verify=false on the command line, which will skip TLS verification and allow insecure connections.
An alternative approach to skipping TLS verification via the command line is by using the
registries.conf. This may be more convenient, especially for automated scripts where we don't want to manually add command-line flags. Let's have a look at the config snippet below.
[[registry]] location="localhost:5000" insecure=true
The format of the registries.conf is TOML. The double braces of
[[registry]] indicate that we can specify a list (or table) of
[registry] objects. In this example, there is only one registry where the location (i.e., its address) is set to
localhost:5000. That is where a local registry is commonly running. Whenever the
containers/image library connects to a container registry with that location, it will look up its configuration and act accordingly. In this case, the registry is configured to be insecure, and TLS verification will be skipped. Podman and the other container tools can now talk to the local registry without getting the connections rejected.
Blocking a registry, namespace, or image
In case you want to prevent users or tools from pulling from a specific registry, you can do as follows.
[[registry]] location="registry.example.org" blocked=true
The
blocked=true prevents connections to this registry, or at least to blocks pulling data from it.
However, it's surprisingly common among users to block only specific namespaces or individual images but not the entire registry. Let's assume that we want to stop users from pulling images from the namespace
registry.example.org/namespace. The
registries.conf will now look like this:
[[registry]]] location="registry.example.org" prefix="registry.example.org/example" blocked=true
I just introduced a new config knob:
prefix. A prefix instructs only to select the specified configuration when we attempt to pull an image that is matched by the specific prefix. For example, if we would run a
podman pull registry.example.org/example/image:latest, the specified prefix would match, and Podman would be blocked from pulling the image. If you want to block a specific image, you can set it using the following:
prefix="registry.example.org/namespace/image"
Using a prefix is a very powerful tool to meet all kinds of use cases. It can be combined with all knobs of a
[registry]. Note that using a prefix is optional. If none is specified, the prefix will default to the (mandatory) location.
Mirroring registries
Let's assume that we are running our workload in an air-gapped environment. All our servers are disconnected from the internet. There are many reasons for that. We may be running on the edge or running in a highly security-sensitive environment that forbids us from connecting to the internet. In this case, we cannot connect to the original registry but need to run a registry that mirrors the local network's contents.
A registry mirror is a registry that will be contacted before attempting to pull from the original one. It's a common use case and one of the oldest feature requests in the container ecosystem.
[[registry]] location="registry.access.redhat.com" [[registry.mirror]] location="internal.registry.mirror"
With this configuration, when pulling the Universal Base Image via
podman pull ubi8, the image would be pulled from the mirror instead of Red Hat's container registry.
Note that we can specify multiple mirrors that will be contacted in the specified order. Let's have a quick look at what that means:
[[registry]] location="registry.example.com" [[registry.mirror]] location="mirror-1.com" [[registry.mirror]] location="mirror-2.com" [[registry.mirror]] location="mirror-3.com"
Let's assume we are attempting to pull the image
registry.example.com/myimage:latest. Mirrors are contacted in the specified order (i.e., top-down), which means that Podman would first try to pull the image from
mirror-1.com. If the image is not present or the pull fails for other reasons, Podman would contact
mirror-2.com and so forth. If all mirror pulls fail, Podman will contact the main
registry.example.com.
Note that mirrors also support the
insecure knob. If you want to skip TLS verification for a specific mirror, just add
insecure=true.
Remapping references
As we explored above, a
prefix is used to select a specific
[registry] in the
registries.conf. While prefixes are a powerful means to block specific namespaces or certain images from being pulled, they can also be used to remap entire images. Similar to mirrors, we can use a prefix to pull from a different registry and a different namespace.
To illustrate what I mean by remapping, let's consider that we run in an air-gapped environment. We cannot access container registries since we are disconnected from the internet. Our workload is using images from Quay.io, Docker Hub, and Red Hat's container registry. While we could have one network-local mirror per registry, we could also just use one with the following config.
[[registry]] prefix="quay.io" location="internal.registry.mirror/quay" [[registry]] prefix="docker.io" location="internal.registry.mirror/docker" [[registry]] prefix="registry.access.redhat.com" location="internal.registry.mirror/redhat"
A
podman pull quay.io/buildah/stable:latest will now instead pull
internal.registry.mirror/quay/buildah/stable:latest. However, the pulled image will remain
quay.io/buildah/stable:latest since the remapping and mirroring happen transparently to Podman and the other container tools.
As we can see in the snippet above,
internal.registry.mirror is our network-local mirror that we are using to pull images on behalf of Quay.io, Docker Hub, and Red Hat's container registry. Images of each registry reside on separate namespaces on the registry (i.e., "quay", "docker", "redhat")—simple yet powerful trick to remap images when pulling. You may ask yourself how we can pre-populate the internal mirror with the images from the three registries. I do not recommend doing that manually but to use
skopeo sync instead. With
skopeo sync, a sysadmin can easily load all images onto a USB drive, bring that to an air-gapped cluster, and preload the mirror.
There are countless use cases where such remapping may help. For instance, when using another registry during tests, it may come in handy to transparently pull from another (testing or staging) registry than in production. No code changes are needed.
Tom Sweeney and Ed Santiago used the remapping to develop a creative solution to address the rate limits of Docker Hub. In late November 2020, Docker Hub started to limit the number of pulls per user in a given timeframe. At first, we were concerned because large parts of our testing systems, and continuous integration used Docker Hub images. But with a simple change to the
registries.conf on our systems, Tom and Ed found a great solution. That spared us from the manual and tedious task of changing all images referring to docker.io in our tests.
Advanced configuration management via drop-on config files
Managing configurations is challenging. Our systems are updated all the time, and with the updates may come configuration changes. We may want to add new registries, configure new mirrors, correct previous settings or extend the default configuration of Fedora. There are many motivations, and for certain
registries.conf supports it via so-called drop-in configuration files.
When loading the configuration, the
containers/image library will first load the main configuration file at
/etc/containers/registries.conf and then all files in the
/etc/containers/registries.conf.d directory in alpha-numerical order.
Using such drop-in
registries.conf files is straight forward. Just place a
.conf file in the directory, and Podman will get the updated configuration. Note that tables in the config are merged while simple knobs are overridden. This means, in practice, that the
[[registry]] table can easily be extended with new registries. If a registry with the same prefix already exists, the registry setting will be overridden. The same applies to the
[aliases] table. Simple configuration knobs such as unqualified-search-registries are always overridden.
[ Getting started with containers? Check out this free course. Deploying containerized applications: A technical overview. ]
Conclusion
The
registries.conf is a core building block of our container tools. It allows for configuring all kinds of properties when talking to a container registry. If you are interested in studying the configuration in greater detail, you can either do a
man containers-registries.conf to the read the man page on your Linux machine or visit the upstream documentation. | https://www.redhat.com/sysadmin/manage-container-registries | CC-MAIN-2021-49 | en | refinedweb |
Problem Statement
“Iterative method to find ancestors of a given binary tree” problem states that you are given a binary tree and an integer representing a key. Create a function to print all the ancestors of the given key using iteration.
Example
Input
key = 6
5 2 1
Explanation: The path from the root to the node having value 6 is 1 2 5 6. thus the parent sequence from 6 to root is 5 2 1.
Input :
key = 10
9 7 1
Algorithm
1. Create a structure Node which contains an integer variable to hold data and two pointers left and right of type Node. 2. After that, create the parameterized constructor of the structure which accepts an integer value as it's parameter. Store the integer value in the data variable of Node and initialize left and right pointers as null. 3. Create a function to print the tree path which accepts an unordered map and an integer variable key as it's a parameter. While the key variable is equal to the map[key], print the key. 4. Similarly, create another function to set the parents for all the nodes in the given binary tree which accepts a pointer to the root node and an unordered map as it's parameter. 5. Create a stack data structure of Node* type. Push/insert the root in the stack. 6. After that, traverse while the size of the stack is not 0. Create a pointer curr of type Node and store the element at the top of the stack in it and pop/delete it from the stack. 7. If the right of the curr is present, update the map at index data of right of curr as the data of curr. Push/insert the right of curr in the stack. 8. If the left of the curr is present, update the map at index data of the left of curr as the data of curr. Push/insert the left of curr in the stack. 9. Similarly, create another function to print the ancestors. Check if the root is equal to null, return. 10. Create an unordered map and set the data of its root as 0. 11. Call the function to set the parents of all nodes. 12. Finally, call the function to print the tree path.
Here for this problem, we are given a binary tree and we need to print ancestors of a node. Just like we have children of a node, we have its ancestor except the root of the tree. If you consider the leaves of a tree, they don’t have children. Similarly, the root node does not have an ancestor. Here we store the parent of each node. So that when we need to print its ancestors. We will use the parents to find their ancestors. We have used an unordered_map/HashSet to perform this operation. Since the HashSet allows the insertion/searching/deletion in O(1). We are able to solve this problem in linear time complexity. In the solution below, we are using a graph searching like an approach to traverse the whole tree. And while traversing we are storing the parent of each node. Then we have created a function, while traverse over our map (which stores the parents) to print the ancestors. the whole solution runs in linear time complexity of O(N).
Code
C++ Program for Iterative method to find ancestors of a given binary tree
#include <bits/stdc++.h> using namespace std; struct Node{ int data; Node *left, *right; Node(int data){ this->data = data; this->left = this->right = nullptr; } }; void printTopToBottomPath(unordered_map<int, int> parent, int key){ while(key = parent[key]){ cout << key << " "; } cout << endl; } void setParent(Node* root, unordered_map<int, int> &parent){ stack<Node*> stack; stack.push(root); while(!stack.empty()){ Node* curr = stack.top(); stack.pop(); if(curr->right){ parent[curr->right->data] = curr->data; stack.push(curr->right); } if(curr->left){ parent[curr->left->data] = curr->data; stack.push(curr->left); } } } void printAncestors(Node* root, int node){ if(root == nullptr){ return; } unordered_map<int, int> parent; parent[root->data] = 0; setParent(root, parent); printTopToBottomPath(parent, node); } int main(){ Node* root = nullptr;); return 0; }
5 2 1
Java Program for Iterative method to find ancestors of a given binary tree
import java.util.*; class Node{ int data; Node left = null, right = null; Node(int data){ this.data = data; } } class Main{ public static void printTopToBottomPath(Map<Integer, Integer> parent, int key){ while((key = parent.get(key)) != 0){ System.out.print(key + " "); } } public static void setParent(Node root, Map<Integer, Integer> parent){ Deque<Node> stack = new ArrayDeque<>(); stack.add(root); while(!stack.isEmpty()){ Node curr = stack.pollLast(); if(curr.right != null){ parent.put(curr.right.data, curr.data); stack.add(curr.right); } if(curr.left != null){ parent.put(curr.left.data, curr.data); stack.add(curr.left); } } } public static void printAncestors(Node root, int node){ if(root == null){ return; } Map<Integer, Integer> parent = new HashMap<>(); parent.put(root.data, 0); setParent(root, parent); printTopToBottomPath(parent, node); } public static void main(String[] args){ Node); } }
5 2 1
Complexity Analysis
Time Complexity
O(N), where N is the number of nodes in the given tree. We have just traversed over the whole tree and once we have traversed over the map to print ancestors.
Space Complexity
O(N) because we used extra space in unordered_map and stack. | https://www.tutorialcup.com/interview/stack/iterative-method-to-find-ancestors-of-a-given-binary-tree.htm | CC-MAIN-2021-49 | en | refinedweb |
The problem Rank Transform of an Array Leetcode Solution provided us with an array of integers. The array or the given sequence is unsorted. We need to assign ranks to each integer in the given sequence. There are some restriction s for assigning the ranks.
- The ranks must start with 1.
- The larger the number, the higher the rank (larger in numeric terms).
- Ranks must be as small as possible for each integer.
So, let’s take a look at a few examples.
arr = [40,10,20,30]
[4,1,2,3]
Explanation: It will be easier to understand the example if we sort the given input. After sorting, the input becomes [10, 20, 30, 40]. Now if we follow the given rules. The ranks will be [1, 2, 3, 4] as per the new modified array. If we match the elements in the output. They are the same, confirming the correctness of the output.
[100,100,100]
[1, 1, 1]
Explanation: Since all the elements in the input are the same. Thus all must have the same rank that is 1. Hence the output contains three instances of 1.
Approach for Rank Transform of an Array Leetcode Solution
The problem Rank Transform of an Array Leetcode Solution asked us to assign ranks to the given sequence. The conditions that are to be met are already stated in the problem description. So, instead of describing them once again. We will go through the solution directly. So, as seen in the example. It’s easier to assign ranks to a sorted sequence. So, we use an ordered map to store the elements in the given input sequence. Using an ordered map makes sure that the elements are in sorted order.
Now, we must deal with the third condition. The third condition states that we must assign the smallest ranks as much as possible. So, we simply assign numbers from 1 to the keys present on the map. This takes care of all three imposed conditions. The rank of larger numbers is higher. The ranks start from 1. They are as small as possible.
Now, we simply traverse through the input sequence and assign the ranks stored in the map.
Code
C++ code for Rank Transform of an Array Leetcode Solution
#include <bits/stdc++.h> using namespace std; vector<int> arrayRankTransform(vector<int>& arr) { map<int, int> m; for(auto x: arr) m[x] = 1; int lst = 0; for(auto x: m){ m[x.first] = lst + 1; lst++; } for(int i=0;i<arr.size();i++) arr[i] = m[arr[i]]; return arr; } int main(){ vector<int> input = {40, 10, 30, 20}; vector<int> output = arrayRankTransform(input); for(auto x: input) cout<<x<<" "; }
4 1 3 2
Java code Rank Transform of an Array Leetcode Solution
import java.util.*; import java.lang.*; import java.io.*; class Main { public static int[] arrayRankTransform(int[] arr) { Map<Integer, Integer> m = new TreeMap<Integer, Integer>(); for(int x: arr) m.put(x, 1); int lst = 0; for(Integer x: m.keySet()){ m.put(x, lst + m.get(x)); lst = m.get(x); } for(int i=0;i<arr.length;i++) arr[i] = m.get(arr[i]); return arr; } public static void main (String[] args) throws java.lang.Exception { int[] input = {40, 10, 30, 20}; int[] output = arrayRankTransform(input); for(int x: output) System.out.print(x+" "); } }
4 1 3 2
Complexity Analysis
Time Complexity
O(NlogN), since we used an ordered map we have the logarithmic factor for insertion, deletion, and searching.
Space Complexity
O(N), because we use an ordered map to store the elements in the input. | https://www.tutorialcup.com/leetcode-solutions/rank-transform-of-an-array-leetcode-solution.htm | CC-MAIN-2021-49 | en | refinedweb |
TraceProcessor version 0.2.0 is now available on NuGet with the following package ID:
Microsoft.Windows.EventTracing.Processing.All
This release contains minor feature additions and bug fixes since version 0.1.0. (A full changelog is below).
There are a couple of project settings we recommend using with TraceProcessor:
- We recommend running exes as 64-bit. The Visual Studio default for a new C# console application is Any CPU with Prefer 32-bit checked. Trace processing can be memory-intensive, especially with larger traces, and we recommend changing Platform target to x64 (or unchecking Prefer 32-bit) in exes that use TraceProcessor. To change these settings, see the Build tab under Properties for the project. To change these settings for all configurations, ensure that the Configuration dropdown is set to All Configurations, rather than the default of the current configuration only.
- We also suggest using NuGet with the newer-style PackageReference mode rather than the older packages.config mode. To change the default for new projects, see Tools, NuGet Package Manager, Package Manager Settings, Package Management, Default package management format.
TraceProcessor supports loading symbols and getting stacks from several data sources. The following console application looks at CPU samples and outputs the estimated duration that a specific function was running (based on the trace’s statistical sampling of CPU usage):
using Microsoft.Windows.EventTracing; using Microsoft.Windows.EventTracing.Cpu; using Microsoft.Windows.EventTracing.Symbols; using System; using System.Collections.Generic; class Program { static void Main(string[] args) { if (args.Length != 3) { Console.Error.WriteLine("Usage: GetCpuSampleDuration.exe <trace.etl> <imageName> <functionName>"); return; } string tracePath = args[0]; string imageName = args[1]; string functionName = args[2]; Dictionary<string, Duration> matchDurationByCommandLine = new Dictionary<string, Duration>(); using (ITraceProcessor trace = TraceProcessor.Create(tracePath)) { IPendingResult<ISymbolDataSource> pendingSymbolData = trace.UseSymbols(); IPendingResult<ICpuSampleDataSource> pendingCpuSamplingData = trace.UseCpuSamplingData(); trace.Process(); ISymbolDataSource symbolData = pendingSymbolData.Result; ICpuSampleDataSource cpuSamplingData = pendingCpuSamplingData.Result; symbolData.LoadSymbolsForConsoleAsync(SymCachePath.Automatic, SymbolPath.Automatic).GetAwaiter().GetResult(); Console.WriteLine(); IThreadStackPattern pattern = AnalyzerThreadStackPattern.Parse($"{imageName}!{functionName}"); foreach (ICpuSample sample in cpuSamplingData.Samples) { if (sample.Stack != null && sample.Stack.Matches(pattern)) { string commandLine = sample.Process.CommandLine; if (!matchDurationByCommandLine.ContainsKey(commandLine)) { matchDurationByCommandLine.Add(commandLine, Duration.Zero); } matchDurationByCommandLine[commandLine] += sample.Weight; } } } foreach (string commandLine in matchDurationByCommandLine.Keys) { Console.WriteLine($"{commandLine}: {matchDurationByCommandLine[commandLine]}"); } } }
Running this program produces output similar to the following:
C:\GetCpuSampleDuration\bin\Debug\> GetCpuSampleDuration.exe C:\boot.etl user32.dll LoadImageInternal
0.0% (0 of 1165; 0 loaded)
<snip>
100.0% (1165 of 1165; 791 loaded)
wininit.exe: 15.99 ms
C:\Windows\Explorer.EXE: 5 ms
winlogon.exe: 20.15 ms
“C:\Users\AdminUAC\AppData\Local\Microsoft\OneDrive\OneDrive.exe” /background: 2.09 ms
(Output details will vary depending on the trace).
Internally, TraceProcessor uses the SymCache format, which is a cache of some of the data stored in a PDB. When loading symbols, TraceProcessor requires specifying a location to use for these SymCache files (a SymCache path) and supports optionally specifying a SymbolPath to access PDBs. When a SymbolPath is provided, TraceProcessor will create SymCache files out of PDB files as needed, and subsequent processing of the same data can use the SymCache files directly for better performance.
The full changelog for version 0.2.0 is as follows:
Breaking Changes
- Multiple Timestamp properties are now TraceTimestamp instead (which implicitly converts to the former Timestamp return type).
- When a trace containing lost events is processed and AllowLostEvents was not set to true, a new TraceLostEventsException is thrown.
New Data Exposed
- ISymbolDataSource now exposes Pdbs. This list contains every PDB that LoadSymbols is capable of loading for the trace.
- IDiskActivity now exposes StorportDriverDiskServiceDuration and IORateData.
- IMappedFileLifetime and IPageFileSectionLifetime now expose CreateStacks and DeleteStacks.
- UseContextSwitchData and trace.UseReadyThreadData are now available individually rather than only as part of trace.UseCpuSchedulingData.
- Last Branch Record (LBR) data has been added and is available via trace.UseLastBranchRecordData.
- EventContext now provides access to original trace timestamp values.
- IEnergyEstimationInterval now exposes ConsumerId.
Bug Fixes
- A NullReferenceException inside of ICpuThreadActivity.WaitingDuration has been fixed.
- An InvalidOperationException inside of Stack Tags has been fixed.
- An InvalidOperationException inside of multiple file and registry path related properties has been fixed.
- A handle leak inside of TraceProcessor.Create has been fixed.
- A hang inside of ISymbolDataSource.LoadSymbolsAsync has been fixed.
- Support for loading local PDB files and transcoding them into symcache files has been fixed.
- Disks that were not mounted when the trace was recorded will now result in an IDisk that will throw on access to most properties instead of returning zeroes. Use the new IDisk.HasData property to check this condition before accessing these properties. This pattern is similar to how IPartition already functions.
- A COMException in IDiskActivityDataSource.GetUsagePercentages has been fixed.
Other
- IImageWeakKey has been deprecated as it can contain inaccurate data. IImage.Timestamp and IImage.Size should be used instead.
- OriginalImageName has been deprecated as it can contain inaccurate data. IImage.OriginalFileName should be used instead.
- Most primitive data types (Timestamp, FrequencyValue, etc) now implement IComparable.
- A new setting, SuppressFirstTimeSetupMessage, has been added to TraceProcessorSettings. When set to true, the message regarding our first time setup process running will be suppressed.
- SymbolPath and SymCachePath now include static helpers for generating commonly used paths.
As before, if you find these packages useful, we would love to hear from you, and we welcome your feedback. For questions using this package, you can post on StackOverflow with the tag .net-traceprocessing, and feedback can also be sent via email to [email protected]. | https://blogs.windows.com/windowsdeveloper/2019/08/07/traceprocessor-0-2-0/ | CC-MAIN-2021-49 | en | refinedweb |
%Calendar.Hijri
persistent class %Calendar.Hijri extends %Library.Persistent, %XML.Adaptor
SQL Table Name: %Calendar.HijriA %Calendar.Hijri object contains a sequence of monthly lunar observations that can be installed into an InterSystems IRIS process so that the $ZDATE[TIME][H](x,20,...) and $ZDATE[TIME][H](x,21,...) functions use those observations when computing Hijri dates.
Evaluating ##class(%Calendar.Hijri).%New(Name,Year,DateH,Months) produces a new Hijri Observation Calendar object with the the calendar name specified by the argument Name. The argument Year is the Hijri year number for the year containing the first 12 lunar observations. The argument DateH contains a ObjectScript $HOROLOG date value specifying the first day of the first Hijri year containing observational data. The String Months contains the observations starting from the first month of Hijri year Year. If the length of Months string argument is not a multiple 12 then extra observations will be added to fill out to the end of the last year. These extra months will attempt to bring the last day of the Observed calendar years closer to the Tabular calendar. These supplied months can be changed in the future after the actual observations are available. A character in the string Months must be "0" if there is a lunar observation which ends the corresponding month on the 29th day. The character in the string Months must be "1" if there is no lunar observation on the 29th day of the month and that month instead has 30 days. No characters other than "0" and "1" are permitted in the Months string.
Note the description of the Delta property which keeps track of the difference between the Observational Calendar and the Tabular Calendar. When a %Calendar.Hijri is created (by the %New method) or is modified (by the AddObservation method), there is a restriction that the the difference between the Observed Calendar dates and the corresponding Tabular Calendar dates cannot be greater than 5 days apart. Such a difference between the Observed Calendar and the Tabular Calendar indicates an error in the observational data that is trying to become part of the %Calendar.Hijri object. An attempt to have a difference that is more than 5 days earlier or more than 5 days later means the new object will not be created or that an existing object will not be modified.
Users of the %Calendar.Hijri Class are encouraged to suggest improvements to InterSystems that could be added to this Class
Property Inventory
Method Inventory
Parameters
Properties
The character "0" indicates a 29 day month; the character "1" indicates a 30 day month.
Methods
If the new Observation is being made to a month before Month 12 then AddNewObservation method will attempt to modify an observation in a month following the new Observation so that the Observed - Tabular difference at the end of the year is not changed.
You must have installed all the observations for the months of the last year before you use AddObservation to add an observations for a new, additional ending year. After a new year is added to the table of observations, the only way to modify any preceding year to to call ##class(%Calendar.Hijri).%New(Name,Year,DateH,Months) to in create a new object containing the modified observations. (Hint: If you want to keep the same name as an existing object then you must delete the existing object before creating a new object with the same name.)
Note: AddObservation just modifies the in-process copy of the Calendar. You must use the %Save method to save the modification back to your namespace. Also, AddObservation does not "install" your change into the process. You must use the InstallCalendar method to have the $ZDATE[TIME][H](x,20,...) or $ZDATE[TIME][H](x,21,...) functions start using the new observation.
Indexes
Inherited Members
Inherited Methods
- %AddToSaveSet()
- %AddToSyncSet()
- %BMEBuilt()
- From()
- XMLDTD()
- XMLExport()
- XMLExportToStream()
- XMLExportToString()
- XMLNew()
- XMLSchema()
- XMLSchemaNamespace()
- XMLSchemaType()
Storage
Storage Model: Storage (%Calendar.Hijri) | https://docs.intersystems.com/healthconnectlatest/csp/documatic/%25CSP.Documatic.cls?LIBRARY=%25SYS&CLASSNAME=%25Calendar.Hijri | CC-MAIN-2021-49 | en | refinedweb |
First Look: Text Analytics with InterSystems Products
This First Look guide introduces you to InterSystems IRIS® data platform support for Natural Language Processing (NLP) text analytics, which provides semantic analysis of unstructured text data in a variety of natural languages. This enables you to discover useful information about the contents of a large number of text documents without any prior knowledge of the contents of the texts.
This First Look guide presents an introduction to InterSystems IRIS Natural Language Processing, and walks through some initial tasks associated with indexing text data for semantic text analysis. Once you’ve completed this exploration, you will have indexed a group of texts and performed analysis determining the most common entities in those texts, metrics about those entities, various kinds of associations found between entities, and viewing the appearances of an entity in the source texts. These activities are designed to use only the default settings and features, so that you can acquaint yourself with the fundamentals of NLP text analysis. For the full documentation on Text Analytics, see the InterSystems IRIS Natural Language Processing (NLP) Guide.
A related, but separate, tool for handling unstructured texts is InterSystems IRIS SQL Search. SQL Search allows you to search for these same entities, as well as single words, regular expressions and other constructs in multiple texts. Inherently, a search solution presupposes that you know what you are looking for. NLP text analytics is designed to help you discover content and connections between content entities without necessarily starting from an idea to look for.
To browse all of the First Looks, including those that can be performed on a free evaluation instance of InterSystems IRIS
, see InterSystems First Looks
.
Why NLP Text Analytics Is Important
Increasingly, organizations are amassing larger and larger quantities of unstructured text data, far in excess of their ability to read or catalog these texts. Frequently, an organization may have little or no idea what the contents of these text documents are. Conventional “top-down” text analysis based on pure search technologies makes assumptions about the contents of these texts, which may miss important content.
InterSystems IRIS Natural Language Processing (NLP) allows you to perform text analysis on these texts without any upfront knowledge of the subject matter. It does this by applying language-specific rules that identify semantic entities. Because these rules are specific to the language, not the content, NLP can provide insight into the contents of texts without the use of a dictionary or ontology.
How InterSystems IRIS Implements NLP Text Analytics
To prepare texts for NLP analytics you must load those texts into a domain, and then build the domain. Based on its analysis of the texts, NLP builds indices for the domain that NLP can use to rapidly analyze large quantities of text. Texts can be input from a variety of data locations, including SQL tables, text files, strings, globals, and RSS data.
NLP supports the following functionality:
Language models: identifying semantic relationships between words is language-specific. NLP contains semantic rules (language models) for ten natural languages that enable analysis of a text on any subject written in that language. If you specify more than one language, NLP performs automatic language identification by determining the best match between each sentence in each text and the specified languages. NLP analysis does not require the upfront creation or association of dictionaries or ontologies, although you can expand its functionality by adding them.
Entity analysis: NLP operates on semantic groups of one or more words known as entities. Entities are identified as either Concepts (which include nouns and noun phrases) or Relations (which include verbs and prepositions). Commonly, the most relevant entities to consider are Concepts, though it is also possible to analyze Relations. Sentence and word boundaries are always observed. Letter case is ignored.
Path analysis: NLP groups coherent sequences of Concepts and Relations into Paths. A sentence usually consists of a single Path. A Path reveals the connections between entities.
Attributes: NLP flags semantic attributes such as negation, so that you can differentiate text sequences that are positive (“evidence of structural damage”) from those that are negative (“no evidence of structural damage”).
Frequency, Spread, and Dominance: These are metrics calculated for an entity. Frequency is the number of times an entity appears in a group of texts. Spread is the number of texts that contain that entity. Dominance is a more nuanced metric generated by factoring in the entity frequency relative to the length of each text, the frequency of other entities having words in common, and other factors. Entities are commonly returned sorted in descending order by these metrics. These metrics provide insight into the content of texts, enabling you to perform deeper analysis of specific entities.
Similar Entities, Related Concepts, and Proximity Profile. Given an entity, these discover other relevant entities. For example, given a short entity, similar entities would include other, longer entities in the domain that contain the same words, thereby being more specific entities than the seed one. Given an entity, related entities are other entities in the same sentence that are associated to the specified entity by a single Relation. Given an entity, the Proximity metric calculates the proximity within paths between the specified entity and other entities.
Dictionaries: you can add optional dictionaries to identify synonyms for an entity.
Summarization: you can use NLP to generate a summary of a text, requesting the summary as a percentage of the whole. For example, a 50% summary would consist of half of the sentences in the original text, with NLP selecting those sentences that are calculated as most relevant to the overall source text.
Trying NLP Text Analytics for Yourself
It is easy to use InterSystems IRIS Text Analytics. This simple procedure walks you through the basic steps of generating NLP metrics.
This example is provided to give you some initial experience with InterSystems IRIS Natural Language Processing. You should not use this example as the basis for developing a real application. To use NLP in a real situation you should fully research the available choices provided by the software, then develop your application to create robust and efficient code.
Before You Begin
To use the procedure, you will need a running InterSystems IRIS instance. also need to obtain the Aviation.Event SQL table, which is available on GitHub at
. Follow the instructions provided in Downloading and Setting up the Sample Files):
Domain name: The name you assign to a domain must be unique for the current namespace (not just unique within its package class); domain names are not case-sensitive. For this example, specify MyTest.
Definition class name: the domain definition package name and class name, separated by a period. From the Domain name field press the Tab key to generate a default Definition class name: Samples.MyTest.
Allow Custom Updates: this check box enables adding data or dictionaries to this domain manually. For this example, do not check the box.
Click the Finish button to create the domain. This displays the Model Elements selection screen.
Within a domain you can define data locations and other model elements for the domain. To add or modify model elements, click on the expansion triangle next to one of the headings. Initially, no expansion occurs. Once you have defined some model elements, clicking the expansion triangle shows the model elements you have defined.
Click the Data Locations triangle to display the Details tab on the right side of the screen. The Details tab shows five Add Data options. Select Add data from table.
This option allows you to specify data stored in an SQL table. In this example we will specify the following fields:
Name: A name for the set of extracted data files. Use the default: Table_1.
Schema: From the drop-down list select Aviation.
Table Name: From the drop-down list select Event.
ID Field: From the drop-down list select ID.
Data Field: From the drop-down list select NarrativeFull.
The Domain Architect page heading is followed by an asterisk (*) if there are unsaved changes to the current domain definition. Click Save to save your changes.
Compile the Domain by pressing the Compile button.
Build the NLP indices for the data sources by pressing the Build button.
Explore the Data
Explore the data using the procedure that follows:
On the Domain Architect page, select the Tools tab on the right side of the screen, then click the Domain Explorer button.
The Domain Explorer initially displays a list of the most significant concepts in the source texts:
The frequency tab displays the Top Concepts in descending order by frequency. Each listed item is shown with its frequency count (number of occurrences) and spread count (number of sources containing that concept).
For example, the concept pilot has a frequency of 6206 and a spread of 1085; the concept student pilot has a frequency of 319 and a spread of 141.
The dominance tab displays the Dominant Concepts in descending order by dominance calculation.
For example, the concept pilot has a dominance of 351.6008; the concept student pilot has a dominance of 49.3625.
When you select one of these concepts the other Domain Explorer listings are displayed:
Similar Entities lists the selected concept, and any other concepts that contain that concept, each with its frequency and spread.
For example, selecting student pilot lists the Similar Entities including student pilot, student pilot certificate, student pilot's logbook, solo student pilot.
Related Concepts lists other concepts that are related to the selected concept, with the frequency and spread of the instances of those concepts in this related context.
For example, selecting student pilot lists Related Concepts including flight instructor and airplane.
Proximity Profile lists other concepts found in proximity to the selected concept, with calculated proximity score of the instances of those concepts when found in the same sentence as the selected concept.
For example, selecting student pilot lists a Proximity Profile including airplane with a proximity of 2702, and flight instructor with a proximity of 1662.
By selecting a concept in any of these lists, these listings are refreshed based on that concept. Alternatively, you can also type an entity (Concept or Relation) into the Domain Explorer Explore area and click the Explore! button.
By using these listings, you can determine what concepts appear in the source documents, how significant they are, and what other concepts are associated with them.
The lower portion of the Domain Explorer allows you to view how a selected concept appears in the source texts:
The Sources tab lists by source all of the sentences that contain the selected concept. The concept is highlighted and red text is used to indicate negation that involves the concept.
The Paths tab lists all of the paths that contain the selected concept. The path text is highlighted to show NLP indexing: the selected concept is highlighted in orange, other concepts in the path are highlighted in blue, path-relevant concepts (commonly pronouns) are highlighted in light blue. Relations are shown white. Red text is used to indicate negation that involves the concept.
By clicking the eye icon, you can display the complete text of the source, with the selected concept highlighted, and red text used to indicate negation.
The indexing toggle button displays the complete text of the source with highlighting showing NLP indexing. Because this is the source text, capitalization, punctuation, and non-relevant words are shown; these aspects of the text are not shown in the Paths listing.
The % option allows you to display a summary of the text. Specify a percentage. The total number of sentences in the text is reduced to that percentage. The sentences that NLP includes in the summary are determined by metrics calculating their significance to the full text.
You can add a skiplist to exclude undesired concepts. Often the list of top concepts begins with those that are too common or have little value in discovering useful information. These may be words or phrases that appear in all of the sources (such as ”accident report” or “conclusions”), general concepts (such as “airplane” or “pilot”), or concepts not relevant to your use of the data (such as a list of cities). You can use a_1). In the Entries box list entries (concepts) one concept per line; entries are not case-sensitive. In this example list the concepts: pilot, student pilot, co-pilot, passenger, instructor, flight instructor, certified flight instructor.
Save and Compile the domain. (You do not need to Build the domain to add, modify, or remove: | https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_textanalytics | CC-MAIN-2021-49 | en | refinedweb |
In this problem, we are given an array of points. This represents a list of x-coordinates and y-coordinates of some points that lie on an X-Y 2-D plane. We need to check if these points form a straight line. Note that there will be at least 2 points in the input and their absolute values will not exceed 10^4.
Example
Co-ordinates = {{1 , 2} , {2 , 3} , {3 , 4} , {4 , 5} , {5 , 6}}
true
Explanation The following diagram confirms that all points are collinear.
Co-ordinates = {{1 , 2} , {3 , 4}}
true
Explanation: Two connected points always form a straight line.
Approach
It is easy to observe that if there are only two points in the given list then they will always form a straight line and the result will be true. However, if there are three points, all of them may or may not be collinear. So, we can fix any two points, form a line passing through them, and check if all the other points lie on the same line. Mathematically, this can be done by checking the slope of the line formed between any two points. For example, let us consider we have three points: P1 = (x1 , y1) , P2 = (x2 , y2) and P3 = (x3 , y3).
Now, let’s form a line passing through P1 and P2. The slope of this line will be:
Slope = (y2 – y1) / (x2 – x1)
In order to ensure that P3 is collinear with P1 and P2, let us find the slope of the line formed by points P2 and P3 first. That is,
Slope(P2 and P3) = (y3 – y2) / (x3 – x2)
Now, the points are collinear only and only if: Slope of line formed by P1 and P2 = Slope of line formed by P2 and P3.
Therefore, if P1, P2 and P3 are collinear, we have
(y2 – y1) : (x2 – x1) :: (y3 – y2) : (x3 – x2) , or
(y2 – y1) * (x3 – x2) = (x2 – x1) * (y3 – y2)
Therefore, we can fix two points, say P1 and P2, and for every i > 2 in the input list, we can check if slope(1 , 2) is equal to Slope(1 , i) by cross-product check as we saw above.
Algorithm
- We use a boolean function checkStraightLine() to return whether an array of points passed to it forms a straight line
- If the array has only 2 points:
- return true
- Initialize x0 as x-coordinate of first point and y0 as y-coordinate of second point. Similarly, (x1 , y1) for coordinates of second point
- Since we need their difference for cross-product check, store their differences as:
- dx = x1 – x0
- dy = y1 – y0
- Now for every point in the array after the second point:
- Store x and y coordinates of this point in variables x and y
- Now, we check whether the slopes of the first two points and the slope of this and the first point are the same:
- If dy * (x – x0) is not equal to dx * (y – y0)
- return false
- Return true
- Print the result
Implementation of Check If It Is a Straight Line Leetcode Solution
C++ Program
#include <bits/stdc++.h> using namespace std; bool checkStraightLine(vector <vector <int> > &coordinates) { if(coordinates.size() == 2) return true; int x0 = coordinates[0][0] , x1 = coordinates[1][0]; int y0 = coordinates[0][1] , y1 = coordinates[1][1]; int dx = x1 - x0 , dy = y1 - y0; for(int i = 2 ; i < coordinates.size() ; i++) { int x = coordinates[i][0] , y = coordinates[i][1]; if(dy * (x - x0) != dx * (y - y0)) return false; } return true; } int main() { vector <vector <int> > coordinates = {{1 , 2} , {2 , 3} , {3 , 4} , {4 , 5} , {5 , 6}}; cout << (checkStraightLine(coordinates) ? "true\n" : "false\n"); }
Java Program
class check_straight_line { public static void main(String args[]) { int[][] coordinates = {{1 , 2} , {2 , 3} , {3 , 4} , {4 , 5} , {5 , 6}}; System.out.println(checkStraightLine(coordinates) ? "true" : "false"); } static boolean checkStraightLine(int[][] coordinates) { if(coordinates.length == 2) return true; int x0 = coordinates[0][0] , x1 = coordinates[1][0]; int y0 = coordinates[0][1] , y1 = coordinates[1][1]; int dx = x1 - x0 , dy = y1 - y0; for(int i = 2 ; i < coordinates.length ; i++) { int x = coordinates[i][0] , y = coordinates[i][1]; if(dy * (x - x0) != dx * (y - y0)) return false; } return true; } }
true
Complexity Analysis of Check If It Is a Straight Line Leetcode Solution
Time Complexity
O(N) where N = number of points in the array. We do a single pass of the array and all the operations performed in it take constant time.
Space Complexity
O(1) as we only use constant memory space. | https://www.tutorialcup.com/leetcode-solutions/check-if-it-is-a-straight-line-leetcode-solution.htm | CC-MAIN-2021-49 | en | refinedweb |
the actual gimp pkg version is 2.10.24 on Fbsd 13, kde plasma 5.22.4
gimp is basically working, but not the plugins, gmic is not even appearing in the menu, resynthesizer also not as well as other plugins for example some installed FUs are not there
I have py27 installed as default version for gimp and the py plugins should work, I know they are ok, because I have them working through the latest gimp versions since 2.10.something, only that they are linux machines
specialy I am interested in plugin-heal-selection.py because it is important for my work
eventually the point is this startup error
File "/usr/local/libexec/gimp/2.2/plug-ins/plugin-resynth-fill-pattern.py", line 33, in <module>
from gimpfu import *
ModuleNotFoundError: No module named 'gimpfu'
all failing plugins are claiming "No module named 'gimpfu'" but gimpfu is in the expected place, well, I guess it is
/usr/local/libexec/gimp/2.2/plug-ins/script-fu/script-fu*
I created sym links 2.0 and 2.1 to 2.2 because no effect
I hope you can help me here
thanks | https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=258868 | CC-MAIN-2021-49 | en | refinedweb |
C++ Program to Calculate the Sum of Natural Numbers
Grammarly
In this post you will learn how to Calculate the Sum of Natural Numbers.
This lesson will teach you how to calculate the sum of natural numbers, with a mathematical function using the C++ Language. Let’s look at the below source code.
How to Calculate the Sum of Natural Numbers?
Source Code
#include<iostream> using namespace std; int main() { int n, sum; cin>> n; cout<<" Enter a value : "<< n << endl; sum = n*(n+1)/2; cout<<"\n Sum of first "<< n <<" natural numbers is : "<<sum; return 0; }
Input
5
Output
Enter a value : 5 Sum of first 5 natural numbers is : 15
find the sum of natural numbers we use the following mathematical formula
- First we declare the variable n and sum as integers. We then collect the number from the user and store it in n using function
cin>>,display the value using
cout<<and the Insertion Operators'<<‘ , ‘>>’.
- Using the formula
sum = n*(n+1)/2and the value n, we calculate the sum of natural numbers and store it in the integer sum.
- Using the output statement
coutdisplay the answer.
- using for loop and a mathematical function. Let’s look at the below source code.
#include<iostream> using namespace std; int main() { int n, i, sum=0; cin >> n; cout << "Enter the n value: "<< n << endl; for (i = 1; i <= n; i++) { sum += i; } cout<<"\n Sum of first "<< n <<" natural numbers is : "<<sum; return 0; }
Input
5
Output
Enter the n value: 5 Sum of first 5 natural numbers is : 15
- In this source code after declaring the integers we will be using and getting the value form the user, we then use <= n; i++) { sum += i; }
- In the for loop first is the initialization of the variable
i = 1next the condition to be satisfied
i <= nwhich is followed by increment operator
i++.
- The next line in the for loop is the loop statement
sum += iwhich is executed after the for loop is run.
- The loop statement is expanded as
sum = sum + iwhere the initial value of sum is 0 and i is 1. Now when the for loop is executed, if the condition is satisfied, i.e i is less than or equal to n, the value i is in incremented by one and the mathematical loop statement is executed.
- Now the value of sum changes as per the loop statement function. The execution moves back up to the for loop and is executed repeatedly until the value i is not less than or equal to n. Now the function terminates and the last sum value is the Sum of n Natural Numbers.
- The rest of the statements are similar to find the Sum of Natural Numbers. | https://developerpublish.com/academy/courses/c-programming-examples-2/lessons/c-program-to-calculate-the-sum-of-natural-numbers/ | CC-MAIN-2021-49 | en | refinedweb |
lsh4slsh4s
UsageUsage
Add the dependency
libraryDependencies += "net.pishen" %% "lsh4s" % "0.6.0"
Add the resolver
resolvers += Resolver.bintrayRepo("pishen", "maven")
Hash the vectors (the whole hashing process will run in memory, you may need to enlarge your JVM's heap size.)
import lsh4s._ val lsh = LSH.hash("./input_vectors", numOfHashGroups = 10, bucketSize = 10000, outputPath = "mem") val neighbors: Seq[Long] = lsh.query(itemId, maxReturnSize = 30).get.map(_.id)
- The format of
./input_vectorsis
<item id> <vector>for each line, here is an example:
3 0.2 -1.5 0.3 5 0.4 0.01 -0.5 0 1.1 0.9 -0.1 2 1.2 0.8 0.2
- All the hash groups will be combined in the end to find the neighbors, larger
numOfHashGroupswill produce a more accurate model, but takes more memory when hashing.
- Larger
bucketSizewill produce a more accurate model as well, but takes more time when finding neighbors.
outputPath = "mem"is for memory mode, otherwise it will be the output file for LSH model (we recommend pointing this file to an empty directory, since we will create and delete several intermediate files around it.)
lsh4s uses slf4j, remember to add your own backend to see the log. For example, to print the log on screen, add
libraryDependencies += "org.slf4j" % "slf4j-simple" % "1.7.14" | https://index.scala-lang.org/pishen/lsh4s/lsh4s/0.6.0?target=_2.10 | CC-MAIN-2021-49 | en | refinedweb |
This article is translated with (free version) and reviewed by Mincong.
Introduction
During my visit to my family back home in April this year, I came across a lot of technical content in Chinese when I was looking for information, but I also found that not many of them have good quality. So I started to write Chinese articles in my blog, hoping to contribute to the Chinese developer community. More concretely, I wrote in two languages: Chinese and English. But in practice, I found that it is not a good experience for reader to see two languages in one blog.
After I started writing in Chinese, several times my colleagues found my popular English articles in Google Search and started reading them. This thing made me embarrassed, because since April, my articles are all in Chinese. Let me think about it from their point of view: what happens if after reading, they are curious to read more articles? They may click on the home page and then surprised to see a bunch of Chinese blogs: they may feel confused and feel like they are in the wrong place. For someone who doesn’t know another language, the experience can be very bad. The opposite also holds true: when a Chinese friend sees my blog and sees a bunch of English articles, it feels hard to get interested in reading them. If the articles are written in their native language, it will be much more user-friendly.
That’s why I want to do internationalization: I want to provide a comfortable reading experience for every reader. I want to have a clear distinction between the different languages in the blog, so that when people visit, they can read the content in the language they are familiar with, no matter which page they click on. Then the blog itself can also provide options for people to switch to another language.
This post will share with you the internationalization of my blog.
Proposals
There are serveral proposals for internationalization, and I’ll discuss their feasibility below.
- Provide translation feature. Embed translation button in the article and use third-party translators (Bing, Google, DeepL, etc.) to translate when user clicks the translation button.
- Interlink English and Chinese articles. Embed a link to English article in Chinese article and another link to Chinese article in English article.
- Introduce the concept of page key. Chinese and English articles share the same page key.
- Use two collections:
postsand
cn.
Final plan: Use two collections.
Proposal 1: Provide Translation Feature
Embed a translation button inside the article and use a third-party translator (Bing, Google, DeepL, etc.) to translate when the translation button is clicked. The rationale is that my blog is not very visited, with about 18,000 visitors per month. And I am not a professional writer, purely writing for fun and not making money. There is no need to be so serious. This feature is inspired by Chrome’s Translate button, which allows you to translate a page in a non-frequently used language by clicking the Translate button in the URL input field, or by right-clicking on the page content.
The advantages of this proposal are:
- It’s easy to implement
The disadvantages of this proposal are:
- There are no two articles, there is only one.
- No correction for translation results
- Without articles, we can’t attract readers through articles
Proposal 2: Interlinking English and Chinese Posts
Add a link to the corresponding article at the beginning of each article in order to switch languages. That is, embed a link to an English article in a Chinese article and a link to a Chinese article in an English article. On the web page, add a button or an icon to achieve language switching. This way, readers can access another language version of the article by clicking this button or this icon while reading.
For example, for the article “Implementing backward-compatible schema changes in MongoDB”, the switch between the English and Chinese versions of the article can be implemented in the following form.
English version.
--- layout: post title: Making Backward-Compatible Schema Changes in MongoDB date: 2021-02-27 17:07:27 +0100 categories: [java-serialization, reliability] tags: [java, mongodb, serialization, jackson, reliability] comments: true + lang: en + version: + en-CN: 2021-04-30-mongodb-schema-compatibility.md ... ---
Chinese version.
--- layout: post title: Is it really that easy to add and delete fields in MongoDB? date: 2021-04-30 23:09:38 +0800 categories: [java-serialization, reliability] tags: [java, mongodb, serialization, jackson, reliability] comments: true + lang: zh + version: + en-US: 2021-02-27-mongodb-schema-compatibility.md ... ---
The advantages of this proposal are:
- The links to existing articles remain unchanged and do not affect SEO
The disadvantages of this proposal are:
- It is not possible to know the link of another version of the other article through the article link.
- If you change the link to the article, you should remember to change the referer, i.e. in the other language page.
Proposal 3: Shared Page Key between Chinese and English
Introduce the concept of page key in each article. When a user accesses the article, the page URL contains both the language and the page key. More precisely, it follows the following expression.{lang}/{page-key}
English version (ideal state).
--- layout: post title: Making Backward-Compatible Schema Changes in MongoDB date: 2021-02-27 17:07:27 +0100 categories: [java-serialization, reliability] tags: [java, mongodb, serialization, jackson, reliability] comments: true + key: mongodb-schema-compatibility + lang: en ... ---
Chinese version (ideal state).
--- layout: post title: Is it really that easy to add and delete fields in MongoDB? date: 2021-04-30 23:09:38 +0800 categories: [java-serialization, reliability] tags: [java, mongodb, serialization, jackson, reliability] comments: true + key: mongodb-schema-compatibility + lang: zh ... ---
The advantage of this proposal are:
- disadvantages of this proposal are:
- May affect SEO
Originally this was my perferred solution. Unfortunately it is not possible to implement. Because not all variables are available as part of Jekyll’s permalink. For example, Jekyll does not support custom variable
lang as part of a link. See the official documentation Permalinks for variables supported by Permalinks.
Proposal 4: Use Two Collections
The first collection is the default
posts and the second collection is
cn.
The advantage of this is the same as proposal 3:
- downside of this are:
- The default plugin
jekyll-paginateonly supports pagination for the default collection posts. If you need to paginate another collection, you need to use the
jekyll-paginate-v2plugin. However, GitHub Pages does not officially support the
jekyll-paginate-v2plugin.
Other Considerations
- Whether the theme you are currently using has support for internationalization? For example, I use Jekyll TeXt Theme, which has some support for internationalization itself. The information in the header and footer of the browsing page can be automatically adjusted according to the language of the page. However, it does not translation for the page content directly.
- If you’re using GitHub Pages, consider whether GitHub Pages has support for the plugins that you use. Only some of the Jekyll plugins are officially supported by GitHub, and others won’t work even if you install them. This will affect you unless you don’t use the official site generation, you can generate pages locally yourself or generate them from your custom CI pipeline.
- Consider using another Jekyll internationalization plugin, such as jekyll-multiple-languages-plugin. I didn’t look into it at the time I wrote the proposals, and only found out about this plugin after the project was done… But this plugin is not supported by GitHub Pages neither.
Other Websites
How do other blogs work? Is there anything we can learn from them?
Elasticsearch Blog
Elastic’s blog is internationalized, and each article is available in multiple languages, such as the following article: How to design a scalable Elasticsearch data storage architecture.
It is listed below in three languages.
It is named in the following way.{post}{country}/blog/{post}
English blogs do not have EN prefix, other languages use country abbreviation as prefix, for example, CN for China, JP for Japan.
TeXt Theme
Jekyll TeXt Theme is a highly customizable Jekyll theme for personal or team websites, blogs, projects, documents and more. It references the iOS 11 style with big and prominent headers and rounded buttons and cards. It was written by Alibaba’s engineer Tian Qi (kitian616). This theme supports internationalization. In fact, the documentation of this theme itself is internationalized. If you don’t believe me, see this table:
It is named in the following way.{lang}/{post}
Whatever the language, the language abbreviation is used as the prefix, for example, zh for Chinese, en for English.
Final solution
The final solution is proposal 4: use two collections. The first collection is the default
posts and the second collection is
cn. The main goal is to modify the article links to the following format.{country}/{post}{country}/{page}
The two parts of the link here.
countryis the country, EN for English-speaking countries and CN for China. This expression was better than using locale en/zh because it’s not only a matter of language, but also the components loaded by the page: for example, the Chinese page will suggest WeChat but not the English version. In the future, I’ll also consider splitting the other components into two different versions: Chinese and English pages load different comment systems, different SEO scripts, etc.
- The
postor
pageis the ID of the blog post or the ID of another page.
Next, I want to share with you the specific tasks that need to be done when implementing internationalization.
Tasks
This section is a detailed explanation of the main tasks that need to be done. This section may be a bit long, it’s mainly for those who are interested in changing their blogs for real. If you don’t want to internationalize your site, I suggest avoid reading it into details.
Task 1: Modify Chinese Articles
Modify the article link to the following format.{post}
Since most of the Chinese articles were written after April this year, there is no need to keep the original links. At the beginning of each article, add two pieces of information: language and link redirection.
+ lang: zh date: 2021-04-20 11:21:16 +0800 categories: [java-core] tags: [java, akka] @@ -13,6 +14,8 @@ excerpt: > image: /assets/bg-ocean-ng-L0xOtAnv94Y-unsplash.jpg cover: /assets/bg-ocean-ng-L0xOtAnv94Y-unsplash.jpg + redirect_from: + - /2021/04/20/exponential-backoff-in-akka/ article_header:
Then create a new collection called
cn. Store it in the folder
_cn according to Jekyll naming requirements, then put all Chinese articles in that folder and remove the “year, month and day” part of the file name.
Changes in article links.
- Before:
- After:
In addition, in the global configuration file (
_config.yml), configure the information about the
cn collection, such as the permalink, whether to display the table of contents, etc. For details, see:
Task 2: Modify English Articles
I have 168 English articles on my blog, some of which have important page views. I don’t want them to lose any information because of the internationalization, such as comments and likes on Disqus. So my strategy for English articles is to not make any changes to existing articles and only change the new articles. For new articles, I use the new naming convention{post}. In the following paragraphs, let’s discuss it further.
For all existing articles, explicitly mark the article language as English in the front matter at the article level.
find _posts -type f -exec sed -i '' -E 's/date:/i lang: en' {} +
And after adding
permalink so that they are not interfered with by the global configuration.
#! /bin/bash paths=($(find "${HOME}/github/mincong-h.github.io/_posts" -type f -name "*.md" | tr '\n' ' ')) i=0 for path in "${paths[@]}" do filename="${path##*/}" year=$(echo $filename | sed -E 's/^([[:digit:]]+)-([[:digit:]]+)-([[:digit:]]+)-(. *)\.md/\1/') month=$(echo $filename | sed -E 's/^([[:digit:]]+)-([[:digit:]]+)-([[:digit:]]+)-(. *)\.md/\2/') day=$(echo $filename | sed -E 's/^([[:digit:]]+)-([[:digit:]]+)-([[:digit:]]+)-(. *)\.md/\3/') name=$(echo $filename | sed -E 's/^([[:digit:]]+)-([[:digit:]]+)-([[:digit:]]+)-(. *)\.md/\4/') permalink="/${year}/${month}/${day}/${name}/" echo "${i}: year=${year}, month=${month}, day=${day}, name=${name}, permalink=${permalink}" sed -i '' -E '/comments:/i\ permalink: PERMALINK ' "$path" sed -i '' "s|PERMALINK|${permalink}|" "$path" i=$((i + 1)) done
For new articles, use the new naming convention (
_config.yml).
- permalink: /:year/:month/:day/:title/ + permalink: /en/:title/
Also you need to modify the post generation script
newpost.sh to make it generate both Chinese and English posts. Here is an excerpt from the script: we generate the paths for both Chinese and English posts, confirm that they do not exist, and then add new content.
title="${*:1}" if [[ -z "$title" ]]; then echo 'usage: newpost.sh My New Blog' exit 1 fi bloghome=$(cd "$(dirname "$0")" || exit; pwd) url=$(echo "$title" | tr '[:upper:]' '[:lower:]' | tr ' ' '-') filename="$(date +"%Y-%m-%d")-$url.md" filepath_en="${bloghome}/_posts/${filename}" filepath_cn="${bloghome}/_cn/${filename}" if [[ -f "$filepath_en" ]]; then echo "${filepath_en} already exists." exit 1 fi if [[ -f "$filepath_cn" ]]; then echo "${filepath_cn} already exists." exit 1 fi append_metadata_en "$filepath_en" "$title" append_metadata_cn "$filepath_cn" "$title" # Not for EN, because EN post is translated. append_content "$filepath_cn" echo "Blog posts created!" echo " EN: ${filepath_en}" echo " CN: ${filepath_cn}"
For more details, see:
Task 3: Adding a Chinese Homepage
Adding a Chinese homepage sounds easy, as if all you need to do is copy
index.html from the blog home page to
cn/index.html and translate a few words. Actually, it is way more complex than that. I use the official Jekyll plugin jekyll-paginate (v1) for my home page. But this plugin only supports pagination for the default set
posts, not for other sets, such as
cn. So the real meaning of adding a Chinese homepage is to upgrade the plugin to jekyll-paginate-v2 to support pagination for the Chinese collection
cn.
Install and use the new plugin in the site configuration (
_config.yml) at
- paginate: 8 - paginate_path: /page:num # don't change this unless for special need + pagination: + enabled: true + per_page: 8 ## => Sources @@ -238,7 +240,7 @@ defaults: ############################## plugins: - jekyll-feed - - jekyll-paginate + - jekyll-paginate-v2
Modified the paginator of the TeXt Theme theme itself to avoid using
site.posts directly as a source for posts. And also add a specific prefix to the homepage, so that English and Chinese have their own homepage, i.e. and.
- {%- assign _post_count = site.posts | size -%} + {%- assign _post_count = paginator.total_posts -%} {%- assign _page_count = paginator.total_pages -%} <p>{{ _locale_statistics | replace: '[POST_COUNT]', _post_count | replace: '[PAGE_COUNT]', _page_count }}</p> <div class="pagination__menu"> @@ -51,7 +51,7 @@ </li> {%- elsif page == 1 -%} - {%- assign _home_path = site.paths.home | default: site.data.variables.default.paths.home -%} + {%- assign _home_path = site.paths.home | default: site.data.variables.default.paths.home | append: include.baseurl -%} {%- include snippets/prepend-baseurl.html path=_home_path -%}
There are actually some other modifications to consider, but I won’t expand on them due to the timing. Here is the final result for the home page, a comparison between English and Chinese:
For more details see:
Task 4: Modifying Build and Deployment
You can no longer use the old automatic deployment method because of jekyll-paginate-v2, a plugin that is not officially supported by GitHub. Now you need to deploy it manually or via the CI. That is, you no longer deploy from the
master branch. After the code is merged into
master, the new pages are generated manually or by CI (core command:
jekyll build). Then, the generated content, which is in the folder
_site, is uploaded to the
gh-pages branch for deployment.
To do it manually, the main steps are as follows: create a new, master-independent branch
gh-pages, add an empty commit as the start of the branch, then empty the local Jekyll generated files folder
_site and connect it to the new branch
gh-pages
git checkout --orphan gh-pages git commit --allow-empty -m "Initialize gh-pages" rm -rf _site git worktree add _site gh-pages # "jekyll build" or equivalent commands
To implement this task, you also need to change the branch to deploy from
master to
ph-pages in the GitHub project settings.
For more information, see: Sangsoo Nam, Using Git Worktree to Deploy GitHub Pages, 2019.
To do it via the CI (GitHub Actions in my case), you can use the following workflow:
name: Deploy to GitHub Pages on: push: branches: - master - docker # testing jobs: build-and-deploy: runs-on: ubuntu-latest env: JEKYLL_ENV: production steps: - name: Checkout source code uses: actions/checkout@v2 with: persist-credentials: false - name: Set up Ruby uses: ruby/setup-ruby@v1 with: ruby-version: 2.6 # Not needed with a .ruby-version file bundler-cache: true # runs 'bundle install' and caches installed gems automatically - name: Install dependencies in the Gemfile run: | bundler install --path vendor/bundle - name: Build Jekyll website run: | bundle exec jekyll build - name: Deploy GitHub Pages uses: JamesIves/github-pages-deploy-action@4.1.4 with: branch: gh-pages folder: _site
Task 5: Modifying More Pages
In the task above, we mainly mentioned modifications for Chinese articles and English articles. But a website has many other pages besides articles, such as categories, series, archives, about, etc. These pages also need to be modified before they can be used properly.
The main objective is to ensure a consistent user-experience for browsing. When navigating between pages, all links in English pages will lead to English pages, and all links in Chinese pages will lead to Chinese pages. This creates a comfortable reading experience for the user: because all pages are in a language they are familiar with. As for the pages that already exist, we need to redirect them to the new links. The following is a list of new pages and the redirection of existing pages.
Category pages:{category}/{category}/ ->{category} ->{category}/
Series pages:{serie}/ ->{serie} ->{serie}/
About page: ->
Archived pages: ->
For more information, see:
Task 6: Language Switching Button
Provide a language switch button in the website to enable languange switching. There are two main buttons here: one in the top right corner of the page, displayed as a flag, and another button in the title section of the article, highlighted in red for the current language and white for the optional other languages. The difference between these two buttons is that the top-right button will switch to the home page in another language when clicked, while the language button on the page will make the page jump directly to another version of the same article. I call them “global switching” and “article switching” feature.
For the global switching feature, the main tasks are to write the flag, link, and other information of another language in the configuration file of the page navigation, and then use these information when the page is generated.
Register information to the data file of the page navigation (
_data/navigation.yml).
site: ... # switch to the other langage urls2: en : /cn/ zh : / urls2_src: en : /assets/flag-CN.png zh : /assets/flag-US.png urls2_alt: en : "Switch to Chinese" zh : "Switch to English"
The header (
_includes/header.html) should include this element as well.
<li> <a href="{{ _site_root2 }}"> <img src="{{ _site_root2_src }}" alt="{{ _site_roo2_alt }}" class="navigation__lang_img"> </a> </li>
For the local toggle feature, the implementation is quite different. This is achieved by looking for articles with the same name in a collection of other languages. Here, articles in different languages must use the same filename, otherwise they cannot be found. Specifically, we first get the article ID, then extract the characters after the last slash
/ (with the slash
/), then take this information to traverse other collections and return the corresponding link:
{% assign _id = include.article.id %} {% assign _filename = _id | split:"/" | last %} {% assign _suffix = _filename | prepend: "/" %} {% assign _matched = include.collection | where_exp: "item", "item.id contains _suffix" | first %} {% if _matched %} {% assign __return = _matched.url %} {% else %} {% assign __return = nil %} {% endif %}
For more information, see.
-
-
Remaining Tasks
Having done this, the entire internationalization task is basically done. The following tasks can be addressed in the future to improve the situation:
- Implement both Chinese and English RSS feeds.
- Load more Chinese components on Chinese pages, such as loading WeChat’s SDK for sharing, loading Baidu’s SDK to improve search presence on Chinese search engine, replacing Disqus with another commenting system that can be loaded in mainland China without VPN, and links to other Chinese developer platforms.
- Automate the Chinese-to-English translation by scripting translation requests directly to third-party translation platforms, such as Google Translate, DeepL, etc.
- Fix the tag-cloud feature in the archive page. The tag-cloud currently uses
site.tagsfor tag-related statistics. However, all the tags of Chinese articles (under the
cncollection) are not taken into account.
- Fix the article category pages. Now the category page can show text in Chinese, but the actual article list is retrieved from English collection
postsrather than Chinese collection
cn.
If you have other suggestions, please feel free to leave a comment!
Going Further
How to go further from this article?
- If you’ve never heard of Jekyll, you can visit official website to learn about this great blogging engine.
- If you’ve never tried the free GitHub Pages, visit the official website and try to build your and host your own personal blog for free!
- If you haven’t tried Jekyll TeXt Theme by Qi Tian, maybe you would like to try it.
- If you want to learn more about jekyll-paginate-v2, you can visit their GitHub project.
Conclusion
In this article, we have seen the process of internationalizing of this site, an internationalization based on Jekyll and TeXt Theme. We compared the pros and the cons of the four proposals; we looked at other blogs’ implementations of internationalization; we listed the six of the more important tasks; and we looked further into the next steps for internationalization. Finally, I also share some resources for you to going further from this article. I hope this article has given you some insights. If you’re interested in more information and advice, please follow me on GitHub mincong-h. Thank you all!
References
- Elastic, “Elastic Blog”, Elastic, 2021.
- MrPowerScripts, “How to get around the jekyll-pagination-v2 limitation of GitHub pages with CircleCI”, MrPowerScripts, 2019.
- Sangsoo Nam, “Using Git Worktree to Deploy GitHub Pages”, Sangsoo Nam, 2019.- worktree-to-deploy-github-pages.html
- Jekyll, “Jekyll Documentation”, Jekyll, 2021.
- Sverrir Sigmundarson, “jekyll-paginate-v2”, GitHub, 2021.
- Tian Qi, “Internationalization”, TeXt Theme, 2021.
- Rahul Patil, “How to insert text after a certain string in a file?”, Unix & Linux - Stack Exchange, 2014.. stackexchange.com/a/121173/220624
- Taewoo Lee, “[Jekyll](EN) Make array and add element in liquid”, TWpower’s Tech Blog, 2020.- make-array-and-add-element-in-jekyll-liquid-en | https://mincong.io/en/jekyll-i18n/ | CC-MAIN-2021-49 | en | refinedweb |
Introduction
Recently, there are many live cameras/CCTVs (closed-circuit television) has been installed in many places such as offices, roads, and homes, in order to help human work. There are various purposes of installing CCTVs, such as security, surveillance, and data analysis. One of the challenges of installing CCTVs is about data management. Most CCTVs are set to take data/image near realtime conditions. If a CCTV takes data/image every minute and each data size is 50 KB, then we need 72MB of hard disk capacity per day, or 26.28 GB per year. If the purpose of using CCTV is to analyze data, we need to take data in a long span. In this scale we can say this is big data, therefore we need to compress data.
Compressing live camera data are commonly done in the following two phases: In the first phase, the CCTV compress the data which will be delivered into the data centre. This is commonly done by JPEG compression. In the second phase, we compress data in the data centre. What I want to explain in this article is a compression which is done in the second phase. In this phase, images are commonly compressed by image subtraction method, then the run-length enconding method. Here we increase the compression rate of the method above by calculating the similarity of images.
Required Application and Libraries
- Python 3.5
- OpenCV 3.1
- scikit-image (compare_ssim) 0.13.0
- PIL
Tested on macOS Sierra 10.12.2 and Linux (Ubuntu 16.04)
Implementation
1. Explanation about the compression method
We combine two methods; First, the image subtraction method, then the run-length compression method. The image subtraction method increases the number of redundant pixels by subtracting pixels from two similiar images, we then compress the redundant pixels by the run-length encoding method. The formula to subtract the image is below:
C = B - A
which is:
C is subtracted image (image member)
B is subtraction subject
A is subtraction object (image key)
The algorithm of image subtraction uses an iteration that compares the similarity between two images. If the similarity is less than the threshold then the image member will become image key, else, the image key will be the same until the iteration process finds an image member that has similarity bigger than threshold. The next step, the program compress image key and image member using run-length encoding compression.
Bellow is the pseudocode:
image key="" while image(n) < total image image member="" if (keyfile == "") image key = image(n) image member = image(n+1) else image key = image(n) endif if similarity(image key, image member) < thresshold if (image key == image(n)) compress(image key) endif compress (image member) image key = image member continue endif image subtracted = image member - image key compress (image subtracted) if (image key == image(n)) compress(image key) endif end
Based on that algorithm, the compression rate depends on the total number of image keys and image members. The more image member and less image key that are created, the higher rate compression that will be got.
2. How does image subtraction work?
The image subtraction method uses the OpenCV subtraction function. If there is a same pixel between two images, then the pixel will be colored black, or (0,0,0) in RGB form. Below is the sample of image subtraction using OpenCV and the result:
img1 = cv.imread("./img1.jpeg") img2 = cv.imread("./img2.jpeg") img3 = cv.subtract(img2, img1)
Here we will compare the original image and the subtracted image using OpenCV-im.load():
im = Image.open("./data/img1.jpeg") pix = im.load() w,h = im.size total_ori = 0 for i in range (0,w): for j in range (0,h): if "(0, 0, 0)" in pix[i,j]: total_ori = total_ori + 1 im2 = Image.open("./data/img3.jpeg") pix = im2.load() w,h = im2.size total_subtraction = 0 for i in range (0,w): for j in range (0,h): if "(0, 0, 0)" in pix[i,j]: total_subtraction = total_subtraction + 1 print ("original image 0 pixel: %d"% total_ori) print ("subtraction image 0 pixel: %d"% total_subtraction)
Result:
original image 0 pixel: 25 subtraction image 0 pixel: 112833
Based on the result above, it is clear that the image subtraction method can increase the number of consecutive pixels of identical color. Then the next step is to compress the subtracted images by run-length encoding compression.
3. How does the run-length encoding work?
Run-length encoding encodes images. There are many types of run-length encoding; here we use Packbits. Below is a sample code:
foo=Image.open(filein) foo.save(fileout, compression = 'packbits')
How does it actually work?
Let us explain the method using an example.
Here is a 16-byte image:
W W W W W B B W W B B W W W W W
Compression Result:
5W 2B 2W 2B 5W
The 16-byte image above will be written as a file with the following format/sequence: "W W W W W B B W W B B W W W W W". Based on the sequence, it is easy to understand that there are redundant pixels in the sequence. Packbits algorithm encodes the redundant pixels by storing series (run) of identical pixels color. In our case, the sequence "W W W W W B B W W B B W W W W W" will encode as "5W 2B 2W 2B 5W". Thus, the total number of bytes decreases from 16 bytes to 10 bytes.
Result
The best compression rate is obtained when the image similarity is 50%
Summary
The key point to get higher compression rate with this method is to increase the number of image members and to decrease the number of image keys. However, we have to maintain the similarity between image keys and image members. The more similar the images, the higher compression rate we will get from Packbits. | https://qiita.com/billy98/items/0845f43c079f36629bf9 | CC-MAIN-2021-49 | en | refinedweb |
This document describes the boot sequence for Fuchsia from the time the Zircon layer hands control over to the Garnet layer. This document is a work in progress that will need to be extended as we bring up more of the system.
appmgr's job is to host the environment tree and help create processes in these environments. Processes created by
appmgr have an
zx::channel back to their environment, which lets them create other processes in their environment and to create nested environments.
At startup,
appmgr creates an empty root environment and creates the initial apps listed in
/system/data/appmgr/initial.config in that environment. Typically, these applications create environments nested directly in the root environment. The default configuration contains one initial app:
bootstrap.
sysmgr's job is to create the boot environment and create a number of initial components in the boot environment.
The services that
sysmgr offers in the boot environment are not provided by bootstrap itself. Instead, when
sysmgr receives a request for a service for the first time,
sysmgr lazily creates the appropriate app to implement that service and routes the request to that app. The table of which components implement which services is contained in the
/system/data/bootstrap/services.config file. Subsequent requests for the same service are routed to the already running app. If the app terminates,
sysmgr will start it again the next time it receives a request for a service implemented by that app.
sysmgr also runs a number of components in the boot environment at startup. The list of components to run at startup is contained in the
/system/data/bootstrap/apps.config file.
basemgr's job is to setup the interactive flow for user login and user management.
It first gets access to the root view of the system, starts up Device Shell and draws the Device Shell UI in the root view starting the interactive flow. It also manages a user database that is exposed to Device Shell via the User Provider FIDL API.
This API allows the Device Shell to add a new user, delete an existing user, enumerate all existing users and login as an existing user or in incognito mode.
Adding a new user is done using an Account Manager service that can talk to an identity provider to get an id token to access the user's Ledger.
Logging-in as an existing user starts an instance of
sessionmgr with that user's id token and with a namespace that is mapped within and managed by
basemgr's namespace.
Logging-in as a guest user (in incognito mode) starts an instance of
sessionmgr but without an id token and a temporary namespace. | https://fuchsia.googlesource.com/docs/+/5ccd5842a7d2ca5d248dc07b0ed6ea8dd31840df/the-book/boot_sequence.md | CC-MAIN-2021-49 | en | refinedweb |
Problem bottom direction or in the right direction. by destination, here we mean the bottom-right cell. One condition is there, that you need to move along such a path which gives us a maximum average value.
Example
3 3 // number of rows and columns 1 1 2 10 1 100 10 10 1
22.6
Explanation
We moved from the top-left cell in the following manner: 1-> 10-> 1-> 100-> 1. So adding this up gives us a total sum of 113. The average thus becomes equal to 113/5 = 22.6
Approach
One can come up with a brute force approach, which is to generate all the possible ways to reach the bottom-right cell. Once you have generated all the paths. Now move on them and find the sum of integers that lie on the path. So, once you have all the sums. Find the sum which is maximum among these.
But using this approach is sure to go into TLE or will exceed the time limit. Because the generation of such paths will lead to exponential time complexity. So, to solve the problem under time constraints. We need to find an efficient solution. But, how can we do that? We can do that using Dynamic Programming. And the problem also resembles very much to the Maximum Path Sum problem. In that problem, we need to find the maximum path sum in a 2D array. The same is what we are going to do, But in the end, we will take the average.
So for the Dynamic Programming approach. We will create a 2D DP array where each cell in DP matric will denote the maximum sum which can be attained if we need to start from the top left corner to the current cell. So we can write a general recurrence relation.
// Base Cases
dp[0][0] = a[0][0]
dp[i][0] = dp[i-1][0] + a[i][0] // here i starts from 1st row until the last row of matrix
dp[0][i] = dp[0][i-1] + a[0][i] // here i starts from 1st column until the last column of matrix
// Recurrence Relation
dp[i][j] = max(dp[i-1][j], dp[i][j-1]) + a[i][j] // for all other cells
C++ code for Path with maximum average value
#include <bits/stdc++.h> using namespace std; typedef long long ll; int main(){ int n,m; // number of rows and columns in input matrix cin>>n>>m; int input[n][m]; for(int i=0;i<n;i++){ for(int j=0;j<m;j++) cin>>input[i][j]; } int dp] = max(dp[i-1][j], dp[i][j-1]) + input[i][j]; } // division by n+m-1, because you need to travel at least n+m-1 cells to reach bottom right cell cout<<(dp[n-1][m-1]/((double)n+m-1)); }
3 3 1 1 2 10 1 100 10 10 1
22.6
Java code for Path with maximum average value
import java.util.*; class Main{ public static void main(String[] args){ Scanner sc = new Scanner(System.in); int n = sc.nextInt(); int m = sc.nextInt(); // number of rows and columns in input matrix int input[][] = new int[n][m]; for(int i=0;i<n;i++){ for(int j=0;j<m;j++) input[i][j] = sc.nextInt(); } int dp[][] = new int] = Math.max(dp[i-1][j], dp[i][j-1]) + input[i][j]; } // division by n+m-1, because you need to travel at least n+m-1 cells to reach bottom right cell System.out.print(dp[n-1][m-1]/((double)n+m-1)); } }
3 3 1 1 2 10 1 100 10 10 1
22.6
Complexity Analysis
Time Complexity
O(N^2) since we have simply traversed over the input array. And the transition in our DP took only O(1) time.
Space Complexity
O(N^2) since we created a 2D DP array. | https://www.tutorialcup.com/interview/dynamic-programming/path-with-maximum-average-value.htm | CC-MAIN-2021-49 | en | refinedweb |
Twilio Connector
About the Twilio Connector
The Twilio platform serves APIs for text messaging, VoIP, and voice calls. The Anypoint Connector for Twilio provides connectivity to the Twilio text messaging API.
This connector provides an API for sending and receiving text messages. To get started with Twilio, follow the steps below to gain access to the free Twilio sandbox service to send SMS text messages. You can configure the Twilio connector in Anypoint Studio with your API credentials.
Note: To use the Twilio connector, you must have an active Twilio.com account, either as a Trial or Paid. To create an account, see Try Twilio.
The Anypoint Connector for Twilio provides connectivity to the Twilio platform, which serves APIs for text messaging, VoIP, and voice calls.
You can use this document to understand how to set up and configure a basic flow using the connector. You can track feature additions, compatibility, limitations, and API version updates with each release of the connector using the Connector Release Notes. Review the connector operations and functionality using the Technical Reference with the demo applications.
About Prerequisites
This document assumes that you are familiar with Mule, Anypoint Connectors, and Anypoint Studio. To increase your familiarity with Studio, consider completing a Anypoint Studio Tutorial. This page requires some basic knowledge of Mule Concepts, Elements in a Mule Flow, and Global Elements.
About Hardware and Software Requirements
For hardware and software requirements, visit the Hardware and Software Requirements page.
To Create a New Twilio Account
To create a new Twilio account:
Browse to Try Twilio.
After you log in to your developer account, you see the Twilio dashboard. Note the Account SID and Auth Token values, and copy the credentials to the Twilio connector configuration menu in Anypoint Studio.
With a free developer account, you need to verify your SMS-enabled phone before you can send text messages to it.
Click Numbers in the Twilio website navigation to go to the Manage Numbers portion of their website.
Click Verify Numbers, and then click Verify a number.
Enter your cell phone number, and click Call this number. Follow the instructions provided to validate your number. You receive an automated phone call and need to enter an authorization code.
Finally, to send SMS-messages, you need to use the Sandbox
fromnumber. This phone number needs to be entered into the Twilio connector configuration menu in Anypoint Studio. This Sandbox number is also located on the Twilio dashboard, with the Account SID and Auth Token.
Tip: As you copy fields from the Twilio website to the Anypoint Studio connector configuration, be sure to not copy in additional leading/trailing characters or spaces. It is a good idea to visually confirm that your copy and paste functions did not capture surrounding characters. Configure Global Properties in Studio
To use the Twilio connector in your Mule project, search for "twilio" and drag the connector to your Studio canvas. Click the green plus sign to the right of Connector Configuration and set the fields as shown in this example.
After setting the parameters, click Test Connection to ensure you can reach the Twilio.com API.
About Configuring Operations
You can set the following operations:
Get Message List
Get Message
Send Message
Redact Message
Delete Message
Get Media List
Get Media
Delete Media
These fields can accompany an operation:
About the Connector’s Namespace and Schema
When designing your application in Studio, the act of dragging the connector from the palette onto the Anypoint Studio canvas automatically populates the XML code with the connector namespace and schema location.
Namespace:
Schema Location:
<mule xmlns="" xmlns: <!-- put your global configuration elements and flows here --> </mule>
To Configure Use Cases
The following are common use cases for the Twilio connector:
Send and Redact Messages
In the following example, a Mule application sends a message to a phone number, and then redacts it.
Create a new Mule application and add the following properties to the
mule-app.propertiesfile:
Add an empty flow and drag an HTTP endpoint to the inbound part of the flow. Set its path to
/send/{toNumber}.
Drag a Transform Message at the flow and prepare the input for the Twilio connector:
%dw 1.0 %output application/java --- { body: "You are now subscribed!", from: "${fromNumber}", to: "+" ++ inboundProperties.'http.uri.params'.toNumber } as :object { class : "org.mule.modules.twilio.pojo.sendmessagerequest.MessageInput" }
Add a Twilio Connector after the Transform Message and apply the following settings:
Select the Send Message operation.
Set Account Sid to
${accountSid}, and Entity Reference to
#[payload].
Drag a Variable component and configure the following parameters:
Set Name to
messageSid.
Set Value to
#[payload.getSid()].
Add another Transform Message to create the input for the Redact Message operation:
%dw 1.0 %output application/java --- { body: "", from: payload.from, to: payload.'to' } as :object { class : "org.mule.modules.twilio.pojo.redactmessagerequest.MessageInput" }
Drag a Twilio Connector after the Transform Message and apply the following settings:
Select the Redact Message operation.
Set Account Sid to
${accountSid}.
Set Message Sid to
\#[messageSid](this is the variable we stored two steps above).
Set Entity Reference to
#[payload].
Put an Object to JSON transformer at the end of the flow.
Run the application and point your browser to{toNumber}, replacing the
toNumberwith a valid mobile phone number.
About Connector Performance
To define the pooling profile for the connector manually, access the Pooling Profile tab in the applicable global element for the connector. For background information on pooling, see Tuning Performance.
About Other Resources
Access the Twilio Connector Release Notes.
Visit Twilio’s official REST API Reference. | https://docs.mulesoft.com/mule-runtime/3.9/twilio-connector | CC-MAIN-2020-05 | en | refinedweb |
#include <time.h>time_t mktime(struct tm *time);
The mktime( ) function returns the calendar-time equivalent of the broken-down time found in the structure pointed to by time. The elements tm_wday and tm_yday are set by the function, so they need not be defined at the time of the call.
If mktime( ) cannot represent the information as a valid calendar time, –1 is returned.
Related functions are time( ), gmtime( ), asctime( ), and ctime( ). | https://flylib.com/books/en/3.13.1.299/1/ | CC-MAIN-2020-05 | en | refinedweb |
#include <Lobby2Message.h>
Should this message not be processed on the server if the requesting user disconnects before it completes? This should be true for functions that only return data. False for functions that affect other users, or change the database
Implements RakNet::Lobby2Message.
If data members can be validated for correctness in the server's main thread, override this function and do those checks here.
Reimplemented from RakNet::Lobby2Message.
Is this message something that should only be run by a system with admin privileges? Set admin privileges with Lobby2Server::AddAdminAddress()
Implements RakNet::Lobby2Message.
Does this function require logging into the server before it can be executed? If true, the user id and user handle will be automatically inferred by the last login by looking up the sender's system address. If false, the message should include the username so the database query can lookup which user is performing this operation.
Implements RakNet::Lobby2Message.
Is this message something that should only be run by a system with ranking upload priviledges? Set ranking privileges with Lobby2Server::AddRankingAddress()
Implements RakNet::Lobby2Message. | http://www.jenkinssoftware.com/raknet/manual/Doxygen/structRakNet_1_1System__CreateTitle.html | CC-MAIN-2020-05 | en | refinedweb |
Building a Download Monitor With Android and Kotlin
Building a Download Monitor With Android and Kotlin
Learn how to use Kotlin to build an Android app that shows a progress dialog with a download percentage.
Join the DZone community and get the full member experience.Join For Free
In this article, we will develop a simple app that will download a file and show a progress dialog with a percentage using Kotlin. Kotlin is getting popular nowadays and is used a lot in Android development. The objective of this article is to show how Kotlin can be used in Android development.
Environment Setup
Download Android Studio 3.0, if you have not installed it, from the Google site. This article uses Android Studio 3.0. The reason I am using Android Studio is that it supports Kotlin already so that you do not have to download it. For earlier versions of Android Studio, you will have to add Kotlin support manually.
Create an Android project with an Empty Activity. At this point, an Android project has been created but there is no support for Kotlin development. We are going to add it next.
Gradle Scripts
Go to your "build.gradle" (module:app) and add the following line in the beginning:
apply plugin: 'kotlin-android-extensions'
We have now added the Kotlin Android extension to the project.
Then, at the end of the script, in the dependencies section, add the following:
compile "org.jetbrains.kotlin:kotlin-stdlib-jre7:$kotlin_version" compile "org.jetbrains.kotlin:kotlin-reflect:$kotlin_version"
Sync with Gradle to build the project. Now you have set up the environment. Let us begin.
Download Tracker
Create a Kotlin class by right-clicking on the project and naming it DownloadActivity. It will create a file with the extension ".kt".
We are not going to spend much time on user interface design. Our user interface will have a single button called "Download" and this will trigger the download process. Please find the layout file, activity_main, below:
<?xml version="1.0" encoding="utf-8"?> <android.support.constraint.ConstraintLayout xmlns: <Button android: </android.support.constraint.ConstraintLayout>
Open the Download.kt file and add the following lines:
class TestActivity : AppCompatActivity() { override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) setContentView(R.layout.activity_main); } }
We are extending AppCompatActivty class in Kotlin. The ":" symbol shows that it is extending a class.
Next, we have overridden the onCreate lifecycle method by using the "override" and "fun" keywords. The third line is displaying the user interface with a single button in it.
Next, we are going to handle some UI operations. Upon clicking on the Download button, it will initiate a download process.
But how do we handle that? There are some synthetic functions in the Android Kotlin extension that will initialize the button id as a variable and can be used in the code, as shown below:
downloadBtn.setOnClickListener { }
It is pretty cool, isn't? You do not have to call findByViewId(R.id.......). The Kotlin extension for Android will do it for you. In order to achieve this, we need to import this synthetic API in the code. Add the following line:
import kotlinx.android.synthetic.main.activity_main.*
Now we can add the code when the button is clicked. The above code snippet is trying to achieve what we talked about just now.
downloadBtn.setOnClickListener { // we are going to call Download async task to begin our download val task = DownloadTask(ProgressDialog(applicationContext)); task.execute(""); }
You do not have to worry about null pointer exceptions. If the button element is not present, then the variable name is still available, but you may not able to call any operations on it. It will throw a compile time error saying "object reference not found" when we attempt to use any of the operations.
Download Task
This, as you guessed, is an AsyncTask that will download the file from the internet. At the moment, we have hardcoded the URL. The URL is a PDF file. You can change this value of your own choice. We are going to define the class below using an inner class of Kotlin. Let us do that now:
inner class DownloadTask (var dialog: ProgressDialog) : AsyncTask<String, Sting, String> { }
And we need to override methods like onPreExecute, doInBackground, onPostExecute, onProgressUpdate, etc.
override fun onPreExecute() { super.onPreExecute() } override fun onPostExecute(result: String?) { super.onPostExecute(result) } override fun onProgressUpdate(vararg values: String?) { super.onProgressUpdate(*values) }
Next, we need to display a progress dialog when we begin the download. When we defined an Async Task earlier, we passed an instance of ProgressDialog to its constructor. This type of constructor is called Primary Constructor because it does not contain any code. In the OnPreExecute method, we will set up this class, as shown below:
dialog.setTitle("Downloading file. Please wait.."); dialog.isIndeterminate = false; dialog.max = 100; dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL); dialog.setCancelable(true); dialog.show();
Please observe that we have not assigned the ProgressDialog instance to a variable, but still, it can be accessed from all of the methods of the class. This is because Kotlin treats it as a property implicitly and will be used for property initialization.
Next, we will code for downloading the file using a standard URLConnection API.
Download It Anyway
val url = URL(params[0]); val connection: URLConnection = url.openConnection(); connection.connectTimeout = 30000; connection.readTimeout = 30000; connection.connect(); val length: Int = connection.contentLength; val bis: BufferedInputStream = BufferedInputStream(url.openStream(), BUFFER_SIZE); val data = ByteArray(BUFFER_SIZE); var total: Long = 0; var size: Int? = 0;
The above code opens a URL connection to the site and constructs a buffered input stream object with a default value of BUFFER_SIZE. The BUFFER_SIZE is a final static variable here with a value of 8192. We will see how we can define a final static variable in Kotlin.
In Kotlin, we can use "Companion Object" to define a final static variable, as shown below:
companion object { // download buffer size const val BUFFER_SIZE:Int = 8192; }
We will now move on to the remaining part of the code.
while (true) { size = bis.read(data) ?: break; total += size.toLong(); val percent: Int = ((total * 100) / length).toInt(); publishProgress(percent.toString()); } out.flush();
The second line in the above code snippet reads the data from the stream until it becomes null. Then, it will exit from the loop. The real beauty of Kotlin. The remaining part of the code is calculating the percent it progressed while downloading and updating the progress dialog.
override fun onProgressUpdate(vararg values: String) { super.onProgressUpdate(*values) // we always make sure that the the below operation will not throw null pointer exception // other way is use null check like this // if(percent != null ) val percent = values[0].toInt(); dialog.progress = percent; }
The above code will update the progress bar while downloading. If all goes well, you can see the progress.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/download-tracker-with-android-and-kotlin?fromrel=true | CC-MAIN-2020-05 | en | refinedweb |
span8
span4
span8
span4
Hi!
I am trying to convert a ESRI shapefile to a GMLwhich will be input for a calculation model that only takes GML. While building the conversion I get stuck at 2 issues.
When you open the desired result and my current result in an Inspector, you will notice some difference leading to my issues:
FeatureCollectionCalculator
If you open the desired result in an Inspector and look at the single record (top printscreen below), you will notice a difference compared to my current result (bottom printscreen below). My own current result has every featureMember{}.owns in a new column, the desired result doesn’t. But instead has <missing value> in the column GML_parent_property. However, if I doubleclick on both (Feature Information screen opens) the result look largely the same.
RoadNetwork
Similar question as the FeatureCollectionCalculator but with 2 properties (element{}.owns and element{}.xlink_href).
SRM2Road
The printscreen below shows a random lineobject from the desired result. Among the attributes is a kind of list in list named ‘vehicles’ with 5 properties (StandardVehicle.maximumSpeed, StandardVehicle.stagnationFactor, StandardVehicle.strictEnforcement, StandardVehicle.vehiclesPerDay and StandardVehicle.vehicleType). The Vehicles{0} always has properties of Light Traffic, vehicles{1} always has properties of Normal freight etc.
Just like the other examples this appears to be kind of a list. However, if you give the properties the same name for every traffictype, one will overwrite the other. In other words, if I rename the property of both LIGHT_TRAFFIC and NORMAL_FREIGHT to vehicleType the attribute of NORMAL_FREIGHT vehicleType will overwrite the one of LIGHT_TRAFFIC as both ‘columns’ have the same name.
So it looks like I have to change the property name while being in a list somehow? Or is this solved while building the list?
I included some results, a script etc to help you. Inside the zip you will find:
AERIUS_Verkeersmodel_TEST_321_output_Totaal.GML my current result (script output)
AERIUS_TEST_GML.GML intermediate result (output of the FeatureWriter in my script)
NRU_Gebruiksfase_Plan_2025_(vru34)_02-04-2019.GML the desired result.
Invulsheet AERIUS.xls User input used for metadata. What’s the name of the project etc.
vru34_VRU34_2030B_20_milieu input shape, to be converted to GML.
IMAER.XSD the schemadefinition, used in the desired result.
AERIUS_Luchtkwaliteit_SHPtoGML.ZIP the FME Workbench (2016.1) and the most recent logfile (the script doesn’t throw any errors at the moment)
Any help is greatly appreciated!
Casper
Safe_community_package.zip
Answers Answers and Comments
12 People are following this question.
Building nested GEOJson 3 Answers
Complex KML: how convert them to vector formats? 2 Answers
Nested JSON array to SQL table 2 Answers
Nested Json templater 1 Answer
How to change elements(namespaces) in GML ? 1 Answer | https://knowledge.safe.com/questions/95116/nested-lists-in-gml.html | CC-MAIN-2020-05 | en | refinedweb |
I thought it would be fun to teach Java while I make a video game. This is a simple game, but it will teach a lot of the logic needed to make your own game.
This tutorial will teach you about arrays, class fields, class methods, how to set a default value for an array, and a ton of logic.
I hope you like it. If you missed the previous parts though you should check them out first here Java Video Tutorial.
I reference an article on how to install java libraries, in the video below. You can also find all of the code after the video.
If you like videos like this share it
LessonEight.java
import java.util.Arrays; import org.apache.commons.lang3.ArrayUtils; // /* The Goal of this tutorial is to make a game like this ------------------------------ |*||*||*||*||*||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| |*||M||F||*||*||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| |P||*||*||*||*||*||*||*||*||*| |*||*||*||*||D||*||*||*||*||*| |*||*||*||*||*||*||*||*||*||*| ------------------------------ [9,9] */ public class LessonEight { public static void main(String[] args) { MonsterTwo.buildBattleBoard(); char[][] tempBattleBoard = new char[10][10]; // ObjectName[] ArrayName = new ObjectName[4]; MonsterTwo[] Monsters = new MonsterTwo[4]; // MonsterTwo(int health, int attack, int movement, String name) Monsters[0] = new MonsterTwo(1000, 20, 1, "Frank"); Monsters[1] = new MonsterTwo(500, 40, 2, "Drac"); Monsters[2] = new MonsterTwo(1000, 20, 1, "Paul"); Monsters[3] = new MonsterTwo(1000, 20, 1, "George"); MonsterTwo.redrawBoard(); } }
MonsterTwo.java
import java.util.Arrays; // MonsterTwo{ static char[][] battleBoard = new char[10][10]; public static void buildBattleBoard(){ for(char[] row : battleBoard) { Arrays.fill(row, '*'); } } public static void redrawBoard() { int k = 1; while(k <= 30){ System.out.print('-'); k++; } System.out.println(); for (int i = 0; i < battleBoard.length; i++) { for(int j = 0; j < battleBoard[i].length; j++) { System.out.print("|" + battleBoard[i][j] + "|"); } System.out.println(); } k = 1; while(k <= 30){ System.out.print('-'); k++; } System.out.println(); } // Class Variables or Fields // You declare constants with final public final String TOMBSTONE = "Here Lies a Dead monster"; // private fields are not visible outside of the class private int health = 500; private int attack = 20; private int movement = 2; private boolean alive = true; // public variables are visible outside of the class // You should have as few as possible public fields public String name = "Big Monster"; public char nameChar1 = 'B'; public int xPosition = 0; public int yPosition = 0; public static int numOfMonsters = 0; // Class Methods // Accessor Methods are used to get and set the values of private fields public int getAttack() { return attack; } public int getMovement() { return movement; } public int getHealth() { return health; } public boolean getAlive() { return alive; } // MonsterTwo(int health, int attack, int movement, String name) { this.health = health; this.attack = attack; this.movement = movement; this.name = name; /* If the attributes had the same names as the class health, attack, movement * You could refer to the objects fields by proceeding them with this * this.health = health; * this.attack = attack; * objectFieldName = attributeFieldName; */ int maxXBoardSpace = battleBoard.length - 1; int maxYBoardSpace = battleBoard[0].length - 1; int randNumX, randNumY; do { randNumX = (int) (Math.random() * maxXBoardSpace); randNumY = (int) (Math.random() * maxYBoardSpace); } while(battleBoard[randNumX][randNumY] != '*'); this.xPosition = randNumX; this.yPosition = randNumY; this.nameChar1 = name.charAt(0); battleBoard[this.yPosition][this.xPosition] = this.nameChar1; numOfMonsters++; } // You can overload constructors like any other method // The following constructor is the one provided by default if you don't create any other constructors // Default Constructor public MonsterTwo() { numOfMonsters++; } public static void main(String[] args) { } }
First of all thanks alot for the awesome tutorial, probably the best one and you have my support. i completed upto this tutorial. i done exactly all the steps and i am getting the same output as of 20:45 but all the “-” and “|*|” are in next lines so i am not getting the same pattern.
Thanks
Sheron
I figure it out what was the mistake… i put println in the following two places.
System.out.print(“|” + battleBoard[i][j] + “|”);
while(k <= 30){ System.out.print('-'); k++; }
Thank you very much.. keep up your good work.
I’m glad you figured it out. Sorry, I couldn’t get you an answer quick enough. I’m glad you are enjoying the videos 🙂
Love the your tutorials! Is there a single file with the source code I could download?
Thanks
MB
I have all of the code attached to each individual part of the Java tutorial series. You can find every part of it on one page here Java Video Tutorial. I hope that helps
I got an error that says :
LessonEight.java:2: error: package org.apache.common.lang3 does not exist
import org.apache.common.lang3.ArrayUtils;
I use geany as my editor. Do I have to use eclipse or netbeans instead?
I show how to get that library here How to install java libraries
sorry this might seem like a stupid question, but i was just wondering
why do you do this
int maxYBoardSpace = battleBoard[0].length – 1; // i’m talking about the[0], i don’t see it’s function
for the x case you didn’t include it
int maxXBoardSpace = battleBoard.length – 1;
this is answered in tutorial 9.thanks and great tutorials really enjoying them
Thank you very much 🙂 You’re very welcome
There is no such thing as a stupid question. I created a multidimensional array named battleBoard. To get the length or size of the second part of the array I have to follow battleBoard with [0]. Does that make more sense?
Hi Derek, I copied and pasted your script into netbeans, removed all the line numbers and am left with 1 issue. At line 98
return alive;
from
public int getAlive()
{
return alive;
}
The error being flagged up is required int found boolean
I know private boolean alive = true; is declaring a boolean and do not quite understand how it is making an int out of alive as that is the boolean. Could you advise please. Thanks
Sorry about that. The return type should be boolean. I corrected the code. I don’t know how I missed that
1. In the LessonEight class I commented out the import org.apache.commons.lang3.ArrayUtils line because I don’t have it but the program runs fine without it. Have you actually used any of these utils yet? If so where? I’m not using an IDE just a text editor and a command line Window.
2. If one does use such a set of utils will the compiled program run in a standard environment ( without the apche stuff)?
3. Why does MonsterTwo class have a main function?
I like your videos but it takes time to fully digest some of them. Thanks!
It seems that you have your first constructor,
public MonsterTwo(int health, int attack, int movement, String name)
commented out. Is that a mistake?
Does everything from this.health = health; TO numOfMonsters++; fit under the first connstructor?
Reason I’m asking, is that I cannot for the life of me figure out why the grid is not printing properly. My grid keeps printing one long vertical of – then |*| with the |G| then the – again. I looked very carefully to compare your nested for loop to mine, but they both match exactly. hmmmmmrrrmmm :/
Got it figured out. I was doing println() instead of print() in the while loop while(k <= 30){ System.out.print('-'); k++; }
Woopsies daisies!!
I’m glad you got it fixed. Sorry I couldn’t help quicker
Hello Derek,
I would like to know how you used the object Monster in main method without creating it or can we use classes directly in place of objects?
Regards,
Mohit
MonsterTwo[] Monsters creates an array of MonsterTwo objects. It is named Monsters. Does that help?
Thanks, it helps 🙂
You’re very welcome 🙂
I didn’t quite get it. The MonsterTwo class is not defined as an array. How is that you are able to directly create a array of it?
It is an array filled with MonsterTwo objects. Feel free to skip parts 8 and 10 of this tutorial series. I was experimenting with an idea, but I don’t like how it came out. I’m very happy with the rest of the tutorial though.
Congrats you have now replaced the kind lady from India (Prop. Calc. videos) as my favorite internet personality. Just playing around with your code a bit, and I added the ability when using the default constructor MonsterTwo() to display the first letter of the name on the board. As is (at least in the video) this don’t be a happening. But a little of “The joy of Cut and Paste,” and BAM!! Show me the ‘B’ 🙂 Anyway thank you for the time and effort. I have been using my old textbook for a review, and as a guide in my plan to help my brother (he has 1301 in fall), and this is 10^(you pick) times better! I plan on watching all your videos over the next 90 days…It’s Java Boot Camp!!! …An array of objects? Who wouldn’t get fired up!!!
Thank you very much for the kind words 🙂 I’m very happy that you are enjoying the videos
Hi Derek,
Thanks for your help ! Enjoying a lot.
Trying to compilate and run this one, and the Eclipse tells me:
‘Selection does not contain a main type’
Sorry, I am a new.
Hi,
I’m glad you’re enjoying the videos 🙂
You normally get that error if you don’t save your file in the src folder. I made a tutorial to help with many of the other common Eclipse errors that may help called Install Eclipse for Java. I hope that helps
in the file lessonEight.java , you have defined a class lessonEight and you written a code ” MonsterTwo.buildBattleBoard();
where MonsterTwo is a object according to you. but how can the object have the same name as of its class. i mean to we should have defined an object like
MonsterTwo xyz = new Monstertwo();
and then called the function
xyz.buildBattleBoard();
You could definitely do it both ways. I considered parts 8 and 10 of this series to be bad. Feel free to skip them. The rest are good though. I was experimenting to much in these 2 videos
int maxXBoardSpace = battleBoard.length – 1;
int maxYBoardSpace = battleBoard[0].length – 1;
can u explian the second line
char[][] tempBattleBoard = new char[10][10];
what is the use of above array?
To represent the game board. Feel free to skip parts 8 and 10 of this tutorial series. I think I pushed this idea to soon. I reexamine logic in programming later in the tutorial.
I am having difficulty in understand how MonsterTwo.buildBattleboard(); works.
To my understanding, MonsterTwo is the blueprint for new MonsterTwo objects to be created. Once new MonsterTwo objects are created, then we can perform different operations on the new object. But when we write MonsterTwo.buildBattleboard(); before creating a new object, where is the computer performing this action? There is no object to work with. I am a newbie and this may be a very silly question. But if you please take some time to explain a bit, it would be helpful. I hope my query is clear to you.. 🙂
The best thing to do is to skip parts 8 and 10. I was trying to teach a topic that was to advanced using the limited amount of Java I had taught at that point. The rest of the tutorial is good though in my opinion. To improve I have to on occasion experiment with new ideas and parts 8 and 10 were failed experiments in my opinion.
I’ve imported the apache lang3 files as you showed in your video but my Eclipse still doesn’t recognize them for some reason. At the top where you import libraries it says that the import org.apache cannot be resolved and when I try to run it anyways I get this message:
Exception in thread “main” java.lang.Error: Unresolved compilation problem:
at LessonEight.main(LessonEight.java:29)
I’ve tried copying the commons-lang3-3.3.2.jar and commons-lan3.3.3.2.jar files (those are the ones I could find on their site, they should be the latest version) to the src folder. That didn’t help either.
I show how to install java libraries here. I hope it helps
Well hello Derek!
I have been following your java tutorial series heretofore and everything has been going on well. I must say, you happen to be the best java tutor I have followed so far… way2go Derek!
But it seems now I am having issues fixing this error I am getting after trying to run LessonEight.java. It says something funny like, “IOConsole Updater – An internal error has occurred.” When I click the ‘Details’ button I get “No more handles.” May you please help me troubleshoot this?
Hello, I’m very happy that you like the videos 🙂
That is an Eclipse error that occurs for complex reasons. Take a look at my tutorial on how to install Eclipse for Java. It will sort that out.
please i am using netbeans 8.0 and it does not recognize “org.apache.commons.lang3.ArrayUtils;” How do i install it on Netbeans?
1. Download the commons-lang-2.5-bin.zip
2. Extract it, and find the commons-lang-2.5.jar
3. Add that jar to the classpath of your project by:
4. right click your project > Properties > Libraries > Add Jar/Folder
Source :
Great tutorials Derek, thanks so much. Just to be clear on the x and y positions, typically when we draw a graph, x is the horizontal axis and y is the vertical access but I guess when we’re dealing with the battleBoard here as constructed, x is analogous to i, which actually is the rows of the char[][] and Y is the columns. So in other words, on the battleBoard, x actually is the vertical axis and Y is the horizontal. correct?
Thank you 🙂 Yes you are correct. Sorry about any confusion that caused.
Hi Derek.
First off tnx a bunch for your excellent tutorials. I have a question regarding the difference between the JRE System Library versions. I tried running it with JavaSE-1.8 and it didn’t work, After some foolin around I tried changing the JRE System Libarary to version 1.6 and it miraculously worked! Any idea as to why it doesn’t work with 1.8?
Eclipse doesn’t like Java 8. I don’t use it. Stick with 7
Hi Derek,
As always….great tutorial!!! I have a question….why did you make the buildBattleBoard() and redrawBoard() methods static?
Thanks
Hi Yusuf, I did that because building the battle bored isn’t something a monster would be able to do. It is a utility method. Feel free to skip parts 8 and 10 of this tutorial series. I cover this information much better later in the tutorial. | http://www.newthinktank.com/2012/01/java-video-tutorial-8/?replytocom=15515 | CC-MAIN-2020-05 | en | refinedweb |
I'm working on a fighting game, using two 360 controllers for 2 players. To get this to work, I've added xInputPureDotNet to my project. Works perfectly, only minor bug seems to be it detects controller 1 as 2 and vice versa, not a big problem to work around. So, In the sample unity project xInputTest which is included, there is a line accessing vibrate for each motor based on trigger input. Since I don't really fully understand how xinput works, I've been trying to use that example to add force feedback to my own project. The line for vibration is :
Gamepad.SetVibration(playerIndex, state.Triggers.left,state.Triggers.right);
By exposing the state.Triggers.left,state.Triggers.right in the inspector, they appear as normal axis input, maxing out at 1.
However, if I plug in a 1 through code, such as :
var vib1 = 1; var vib2 = 1;
Gamepad.SetVibration(playerIndex, vib1,vib2)
(the format is SetVibration(XInputDotNetPure.PlayerIndex,float,float))
Doesn't seem to matter what number I use, all I get is a faint murmur on one controller.
The xinput test only seems to take input from one controller (controller 2), so what I want to figure out is:
A: how can I set up the xinputdotnetpure.PlayerIndex for each controller, since the example only seems to work for one?
Note: the test detects both controllers, as evidenced in the debug log. Only seems to take input from one though.
All I need is to be able to refer to each controller as a PlayerIndex.
B: since the two latter arguments are floats, can't I just set them manually? Why doesn't this work?
In the test scene, (using the state.Trigger.right etc.) the vibrate fully works, at least on the one controller. But if I replace with my own floats, just a faint murmur.
If there's anyone out there who knows how to do this, an example would be extremely helpful..
Thanks in advance for any ideas!
Update:figured out the PlayerIndex, it's PlayerIndex.One, PlayerIndex.Two, etc.
so now I've got the vibrate basically working, just not as strong as I'd like it to be.
Here's the script I'm using, in case anyone's interested:
using UnityEngine;
using XInputDotNetPure;
public class XInputVibrateTest : MonoBehaviour
{
public float testA;
public float testB;
public float testC;
public float testD;
void Update()
{
GamePad.SetVibration(PlayerIndex.Two,testA,testB);
GamePad.SetVibration(PlayerIndex.One,testC, testD);
}
}
then I just set the floats when I want vibration (again, doesn't seem to matter much WHAT I set them to, that's the issue now.. I think it goes 0 to 1, but I can't get the full vibration as in the demo project no matter what I try.)
Answer by Seth-Bergman
·
Jan 06, 2013 at 03:34 AM
In response to this more recent post, I have just revisited this issue. It seems the problem was the part "PlayerIndex.One" "PlayerIndex.Two"... instead I believe simply replacing "0" and "1" would do it, as in:
GamePad.SetVibration(0,testA,testB);
GamePad.SetVibration(1,testC, testD);
(for controller 3 it would be "2", and for controller 4 it would be "3")
(well, I don't know if that was the issue, doesn't really make sense, but at any rate, all seems to work fine now..)
for a javascript example, see my answer in the link :)
Answer by JtheSpaceC
·
Dec 14, 2015 at 02:47 PM
The accepted answer didn't help me. I've failed to get this done about a dozen times until today. You need to download a 'using' namespace to get access to the GamePad function.
The download and instructions (scroll down) can be found here. Enjoy!
Thanks a bunch!
Answer by A0101A
·
Feb 20, 2014 at 02:48 PM
I also have enabled the RUMBLE function, for my FPS game dev, by using the XInputDotNetPure system and it works ok for game pad vibration, but - at the times - i get the permanent vibration that does not stop, after some heavy objects hit the Player, or his camera ... the player has a rigidbody, so, could that be related to the problem, anyone ? Let's shed some light on this, i would not like to think it's a unity bug - i don't think so,Input: vibration on more than 4 360 controller?
0
Answers
rigidbody vibration
0
Answers
Handheld.Vibrate() not working
1
Answer
Network Vibration Problem
1
Answer
Vibrating/Blur Character
1
Answer | https://answers.unity.com/questions/218084/xinput-how-do-i-access-vibration-on-360-controller.html?childToView=375451 | CC-MAIN-2020-05 | en | refinedweb |
.
Namespace and Syntax
XML namespace:
xmlns:bpm ""
XML Schema location:
Syntax:
<bpm:jbpm /> <bpm:process
Using jBPM with Mule consists of a few things:
Configuring jBPM
Configuring Hibernate and the database used to store process state
Declaring jBPM as the BPMS to use in your Mule configuration
Interacting with Mule from your process definition
jBPM Configuration
The default configuration file for jBPM is called
jbpm.cfg.xml. You need to include this file as part of your Mule application. If defaults are ok for you, then it could be as simple as the following.
jBPM Configuration (jbpm.cfg.xml)
<jbpm-configuration> <import resource="jbpm.default.cfg.xml" /> <import resource="jbpm.jpdl.cfg.xml" /> <import resource="jbpm.tx.hibernate.cfg.xml" /> <process-engine-context> <object class="org.mule.module.jbpm.MuleMessageService" /> (1) </process-engine-context> </jbpm-configuration>.
Derby settings
<property name="hibernate.dialect">org.hibernate.dialect.DerbyDialect</property> <property name="hibernate.connection.driver_class">org.apache.derby.jdbc.EmbeddedDriver</property> <property name="hibernate.connection.url">jdbc:derby:memory:muleEmbeddedDB</property> <property name="hibernate.hbm2ddl.auto">create-drop</property>
While an Oracle database uses these settings:
Oracle settings
<property name="hibernate.dialect">org.hibernate.dialect.OracleDialect</property> <property name="hibernate.connection.driver_class">oracle.jdbc.driver.OracleDriver</property> <property name="hibernate.connection.url">jdbc:oracle:thin:user/pass@server:1521:dbname</property> also deletes
<bpm:jbpm />
Custom config
<bpm:jbpm
Process Definition (jPDL)
For lack of a good standard in the BPM community, jBPM has traditionally used its own DSL for process definitions called jPDL. It
<mule ...cut... xmlns: (1) <bpm:jbpm (2) <flow name="ToBPMS"> <composite-source> <inbound-endpoint (3) <inbound-endpoint </composite-source> <bpm:process (4) </flow> ...cut... </mule>
Example jPDL Process Definition
<process name="LoanBroker" xmlns=""> <mule-receive <transition to="sendToCreditAgency" /> </mule-receive> (1) <mule-send <transition to="sendToBanks" /> </mule-send> (2) <decision name="sendToBanks"> (3) <transition to="sendToBigBank"> <condition expr="#{customerRequest.loanAmount >= 20000}" /> (4) </transition> <transition to="sendToMediumBank"> <condition expr="#{customerRequest.loanAmount >= 10000}" /> </transition> ...cut... </decision> ...cut... <end name="loanApproved" /> </process>
<mule ...cut... <bpm:jbpm <model> <service name="ToBPMS"> (1) <inbound> <inbound-endpoint <inbound-endpoint </invound> <bpm:process </service> ...cut... </model> </mule>
XML Schema
This module uses the schema from the BPM Module; it does not have its own schema.
Import the BPM schema as follows:
xmlns:bpm="" xsi:schemaLocation=""
Refer to BPM Module Reference for detailed information on the elements of the BPM schema. | https://docs.mulesoft.com/mule-runtime/3.7/jboss-jbpm-module-reference | CC-MAIN-2020-05 | en | refinedweb |
The Revit API discussion forum continues to reach ever new levels of depth and coverage.
Here are a couple of recent topics:
- Welcome to the top solution authors, Jim!
- Setting a parameter to regenerate the model
- Checking model for C4R versus local file
Welcome to the Top Solution Authors, Jim!
The breadth and depth of Revit API discussion forum solutions can only be achieved thanks to a growing amount of input from real developers – unlike myself and my developer support colleagues – providing advanced answers to hitherto unsolved problems.
They are honoured in the list of top solution authors:
Very many thanks as always to Rudi @Revitalizer Honke, Alexander @aignatovich Ignatovich and Frank @Fair59 Aarssen for sharing their professional experience and ideas that most of us others would never be able to come up with.
My Chinese colleague Jim Jia has also been participating in the forum for quite a while.
Now he made it into the list of top solution authors as well.
Congratulations, Jim, and thank you very much for all your work!
Setting a Parameter to Regenerate the Model
We have frequently discussed the need to regenerate the model or individual elements and various ways to achieve that efficiently.
Another aspect of this and a simple solution came up in the thread on new sloped roof not visible:
Question: I have a problem creating a new sloped roof using code.
I use the basic sample shown in the developer guide and The Building Coder article on creating a roof.
I create a new
FootPrintRoof, set
DefinesSlopes to true for each model curve, and assign a
SlopeAngle.
My macro doesn't have errors, but I can't see the new roof in any view.
I can find it only in a roof schedule, but I cannot see the 3D element.
I have tried to refresh the view in the code and I have noticed that the roof appears on the screen only for one second and then it disappears again.
I have tried using Basic roof and Sloped glazing, but it still doesn't work.
If I don't set the sloping, I can create a planar roof without any issue
What is the problem with the slope angle?
How can I solve that and make the sloped roof visible?
Answer: I solved the problem.
Now, I create my sloped glazing and then simply set one of its parameters using the API.
I've tried many different parameters, using either parameters connected to the UI and descriptive parameters: all of them are ok to regenerate the correct visualization of the sloped glazing.
This is my trick, I hope it is useful!
If anyone has a better way, please let me know.
Many thanks to @newbieng for raising the issue and sharing this simple and effective solution!
Checking Model for C4R versus Local File
A new issue was raised and solved in the long discussion on browsing model files in the cloud:
Question: Does anyone know how to check whether this file is C4R versus a local file?
Is it simply file extension, or a document property?
Answer: There's an internal property
IsModelInCloud on the
Document object in the Revit API that you can access using reflection:
public static bool GetIsModelInCloud( Document document ) { PropertyInfo p = typeof( Document ).GetProperty( "IsModelInCloud", BindingFlags.NonPublic | BindingFlags.Instance ); return (bool) p.GetValue( document ); }
Many thanks to Paul @pvella Vella for sharing this neat little secret! | https://thebuildingcoder.typepad.com/blog/2017/11/cloud-model-predicate-and-set-parameter-regenerates.html | CC-MAIN-2020-05 | en | refinedweb |
/* Thread and interpreter state structures and their interfaces */
#ifndef Py_PYSTATE_H
#define Py_PYSTATE_H
#ifdef __cplusplus
extern "C" {
#endif
/* State shared between threads */
struct _ts; /* Forward */
struct _is; /* Forward */
typedef struct _is {
struct _is *next;
struct _ts *tstate_head;
PyObject *modules;
PyObject *sysdict;
PyObject *builtins;
PyObject *modules_reloading;
PyObject *codec_search_path;
PyObject *codec_search_cache;
PyObject *codec_error_registry;
#ifdef HAVE_DLOPEN
int dlopenflags;
#endif
#ifdef WITH_TSC
int tscdump;
#endif
} PyInterpreterState;
/* State unique per thread */
struct _frame; /* Avoid including frameobject.h */
/* Py_tracefunc return -1 when raising an exception, or 0 for success. */
typedef int (*Py_tracefunc)(PyObject *, struct _frame *, int, PyObject *);
/* The following values are used for 'what' for tracefunc functions: */
#define PyTrace_CALL 0
#define PyTrace_EXCEPTION 1
#define PyTrace_LINE 2
#define PyTrace_RETURN 3
#define PyTrace_C_CALL 4
#define PyTrace_C_EXCEPTION 5
#define PyTrace_C_RETURN 6
typedef struct _ts {
/* See Python/ceval.c for comments explaining most fields */
struct _ts *next;
PyInterpreterState *interp;
struct _frame *frame;
int recursion_depth;
/* 'tracing' keeps track of the execution depth when tracing/profiling.
This is to prevent the actual trace/profile code from being recorded in
the trace/profile. */
int tracing;
int use_tracing;
Py_tracefunc c_profilefunc;
Py_tracefunc c_tracefunc;
PyObject *c_profileobj;
PyObject *c_traceobj;
PyObject *curexc_type;
PyObject *curexc_value;
PyObject *curexc_traceback;
PyObject *exc_type;
PyObject *exc_value;
PyObject *exc_traceback;
PyObject *dict; /* Stores per-thread state */
/*;
int gilstate_counter;
PyObject *async_exc; /* Asynchronous exception to raise */
long thread_id; /* Thread id where this tstate was created */
/* XXX signal handlers should also be here */
} PyThreadState;
PyAPI_FUNC(PyInterpreterState *) PyInterpreterState_New(void);
PyAPI_FUNC(void) PyInterpreterState_Clear(PyInterpreterState *);
PyAPI_FUNC(void) PyInterpreterState_Delete(PyInterpreterState *);
PyAPI_FUNC(PyThreadState *) PyThreadState_New(PyInterpreterState *);
PyAPI_FUNC(PyThreadState *) _PyThreadState_Prealloc(PyInterpreterState *);
PyAPI_FUNC(void) _PyThreadState_Init(PyThreadState *);
PyAPI_FUNC(void) PyThreadState_Clear(PyThreadState *);
PyAPI_FUNC(void) PyThreadState_Delete(PyThreadState *);
#ifdef WITH_THREAD
PyAPI_FUNC(void) PyThreadState_DeleteCurrent(void);
#endif
PyAPI_FUNC(PyThreadState *) PyThreadState_Get(void);
PyAPI_FUNC(PyThreadState *) PyThreadState_Swap(PyThreadState *);
PyAPI_FUNC(PyObject *) PyThreadState_GetDict(void);
PyAPI_FUNC(int) PyThreadState_SetAsyncExc(long, PyObject *);
/* Variable and macro for in-line access to current thread state */
PyAPI_DATA(PyThreadState *) _PyThreadState_Current;
#ifdef Py_DEBUG
#define PyThreadState_GET() PyThreadState_Get()
#else
#define PyThreadState_GET() (_PyThreadState_Current)
#endif
typedef
enum {PyGILState_LOCKED, PyGILState_UNLOCKED}
PyGILState_STATE;
/*_Ensure());
/*.
*/
PyAPI_FUNC(void) PyGILState_Release(PyGILState_STATE);
/* Helper/diagnostic function - get the current thread state for
this thread. May return NULL if no GILState API has been used
on the current thread. Note that the main thread always has such a
thread-state, even if no auto-thread-state call has been made
on the main thread.
*/
PyAPI_FUNC(PyThreadState *) PyGILState_GetThisThreadState(void);
/* The implementation of sys._current_frames() Returns a dict mapping
thread id to that thread's current frame.
*/
PyAPI_FUNC(PyObject *) _PyThread_CurrentFrames(void);
/* Routines for advanced debuggers, requested by David Beazley.
Don't use unless you know what you are doing! */
PyAPI_FUNC(PyInterpreterState *) PyInterpreterState_Head(void);
PyAPI_FUNC(PyInterpreterState *) PyInterpreterState_Next(PyInterpreterState *);
PyAPI_FUNC(PyThreadState *) PyInterpreterState_ThreadHead(PyInterpreterState *);
PyAPI_FUNC(PyThreadState *) PyThreadState_Next(PyThreadState *);
typedef struct _frame *(*PyThreadFrameGetter)(PyThreadState *self_);
/* hook for PyEval_GetFrame(), requested for Psyco */
PyAPI_DATA(PyThreadFrameGetter) _PyThreadState_GetFrame;
#ifdef __cplusplus
}
#endif
#endif /* !Py_PYSTATE) | https://www.codeproject.com/script/Articles/ViewDownloads.aspx?aid=515115&zep=native_python%2Fpython_singlethread%2Fpython%2Finclude%2Fpystate.h&rzp=%2FKB%2Fwindows-phone-7%2F515115%2F%2Fpython_example.zip | CC-MAIN-2018-26 | en | refinedweb |
finalize
import java.sql.*; import java.text.*; import java.util.*; /** * This class has an enforced life cycle: after destroy is * called, no useful method can be called on this object * without throwing an IllegalStateException. */ public final class DbConnection { public DbConnection () { //build a connection and assign it to a field //elided.. fConnection = ConnectionPool.getInstance().getConnection(); } /** * Ensure the resources of this object are cleaned up in an orderly manner. * * The user of this class must call destroy when finished with * the object. Calling destroy a second time is permitted, but is * a no-operation. */ public void destroy() throws SQLException { if (fIsDestroyed) { return; } else{ if (fConnection != null) fConnection.close(); fConnection = null; //flag that destory has been called, and that //no further calls on this object are valid fIsDestroyed = true; } } /** * Fetches something from the db. * * This is an example of a non-private method which must ensure that * <code>destroy</code> has not yet been called * before proceeding with execution. */ synchronized public Object fetchBlah(String aId) throws SQLException { validatePlaceInLifeCycle(); //..elided return null; } /** * If the user fails to call <code>destroy</code>, then implementing * finalize will act as a safety net, but this is not foolproof. */ protected void finalize() throws Throwable{ try{ destroy(); } finally{ super.finalize(); } } /** * Allow the user to determine if <code>destroy</code> has been called. */ public boolean isDestoyed() { return fIsDestroyed; } // PRIVATE /** * Connection which is constructed and managed by this object. * The user of this class must call destroy in order to release this * Connection resource. */ private Connection fConnection; /** * This object has a specific "life cycle", such that methods must be called * in the order: others + destroy. fIsDestroyed keeps track of the lifecycle, * and non-private methods must check this value at the start of execution. * If destroy is called more than once, a no-operation occurs. */ private boolean fIsDestroyed; /** * Once <code>destroy</code> has been called, the services of this class * are no longer available. * * @throws IllegalStateException if <code>destroy</code> has * already been called. */ private void validatePlaceInLifeCycle(){ if (fIsDestroyed) { String message = "Method cannot be called after destroy has been called."; throw new IllegalStateException(message); } } } | http://javapractices.com/topic/TopicAction.do;jsessionid=563F9C5BF204A55AAD5F02649BB718D9?Id=43 | CC-MAIN-2018-26 | en | refinedweb |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
TypeError: pop() takes at most 1 argument (2 given)
def create(self, cr, uid, vals, context=None): id = super(sale_order, self).create(cr, uid, vals, context) order_line_ids = self.pool.get('sale.order.line').search(cr, uid, [('order_id', '=', id)], context=context) for i in self.browse(cr,uid,order_line_ids, context=context): self.pool.get('sale.order.section').create(cr, uid,(0,0,{'order_line':i.id})) return id
I get the error TypeError: pop() takes at most 1 argument (2 given) when i want to create the sale order. It comes from the line :
self.pool.get('sale.order.section').create(cr, uid,(0,0,{'order_line':i.id}))
Where is the problem ?
Hi ,
try to divise your code in clear lines.
try with this code :
section_pool=self.pool.get('sale.order.section') order_line_pool=self.pool.get('sale.order.line') for order_line_id in order_line_ids order_line =order_line_pool.browse(cr,uid,order_line_id) section = { 'order_line':order_line.id, 'name' : order_line.name, 'number' : 0, #others fields } section_pool.create(cr, uid, section )
Thanks.
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
That's ok thanks a lot !!
I edit my answer . if it works thankyou to mark this question as resolved. | https://www.odoo.com/forum/help-1/question/typeerror-pop-takes-at-most-1-argument-2-given-16859 | CC-MAIN-2018-26 | en | refinedweb |
Summary
Lists the feature classes in the workspace, limited by name, feature type, and optional feature dataset.
Discussion
The workspace environment must be set first before using several of the List functions, including ListDatasets, ListFeatureClasses, ListFiles, ListRasters, ListTables, and ListWorkspaces.
Syntax
ListFeatureClasses ({wild_card}, {feature_type}, {feature_dataset})
Code sample
Copy shapefiles to a geodatabase.
import os import arcpy # Set the workspace for ListFeatureClasses arcpy.env.workspace = "c:/base" # Use the ListFeatureClasses function to return a list of # shapefiles. featureclasses = arcpy.ListFeatureClasses() # Copy shapefiles to a file geodatabase for fc in featureclasses: arcpy.CopyFeatures_management( fc, os.path.join("c:/base/output.gdb", os.path.splitext(fc)[0]))
List all feature classes in a geodatabase, including any within feature datasets.
import arcpy import os arcpy.env.workspace = "c:/base/gdb.gdb" datasets = arcpy.ListDatasets(feature_type='feature') datasets = [''] + datasets if datasets is not None else [] for ds in datasets: for fc in arcpy.ListFeatureClasses(feature_dataset=ds): path = os.path.join(arcpy.env.workspace, ds, fc) print(path) | http://pro.arcgis.com/en/pro-app/arcpy/functions/listfeatureclasses.htm | CC-MAIN-2018-26 | en | refinedweb |
Framework Design Guidelines
This section provides guidelines for designing libraries that extend and interact with the .NET Framework. The goal is to help library designers ensure API consistency and ease of use by providing a unified programming model that is independent of the programming language used for development. We recommend that you follow these design guidelines when developing classes and components that extend the .NET Framework. Inconsistent library design adversely affects developer productivity and discourages adoption.
The guidelines are organized as simple recommendations prefixed with the terms
Do,
Consider,
Avoid, and
Do not. These guidelines are intended to help class library designers understand the trade-offs between different solutions. There might be situations where good library design requires that you violate these design guidelines. Such cases should be rare, and it is important that you have a clear and compelling reason for your decision.
These guidelines are excerpted from the book Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries, 2nd Edition, by Krzysztof Cwalina and Brad Abrams.
In This Section
Naming Guidelines
Provides guidelines for naming assemblies, namespaces, types, and members in class libraries.
Type Design Guidelines
Provides guidelines for using static and abstract classes, interfaces, enumerations, structures, and other types.
Member Design Guidelines
Provides guidelines for designing and using properties, methods, constructors, fields, events, operators, and parameters.
Designing for Extensibility
Discusses extensibility mechanisms such as subclassing, using events, virtual members, and callbacks, and explains how to choose the mechanisms that best meet your framework's requirements.
Design Guidelines for Exceptions
Describes design guidelines for designing, throwing, and catching exceptions.
Usage Guidelines
Describes guidelines for using common types such as arrays, attributes, and collections, supporting serialization, and overloading equality operators.
Common Design Patterns
Provides guidelines for choosing and implementing dependency properties and the dispose
Overview
Roadmap for the .NET Framework
Development Guide | https://docs.microsoft.com/en-us/dotnet/standard/design-guidelines/ | CC-MAIN-2018-26 | en | refinedweb |
They're relatively easy to manage. The __init__.py file (to make a module into a package) is very elegant. And stuff can be put into the __init__.py file to create a kind of top-level or header module in a larger package of modules.
To a limit.
It took hours, but I found the edge of the envelope. The hard way.
We have a package with about 10 distinct Django apps. Each Django app is -- itself -- a package. Nothing surprising or difficult here.
At first, just one of those apps used a couple of fancy security-related functions to assure that only certain people could see certain things in the view. It turns out that merely being logged in (and a member of the right group) isn't enough. We have some additional context choices that you must make.
The view functions wind up with a structure that looks like this.
@login_required
def someView( request, object_id, context_from_URL ):
no_good = check_other_context( context_from_URL )
if no_good is not None: return no_good
still_no_good = check_session()
if still_no_good is not None: return still_no_good
# you get the idea
At first, just one app had this feature.
Then, it grew. Now several apps need to use check_session and check_other_context.
Where to Put The Common Code?
So, now we have the standard architectural problem of refactoring upwards. We need to move these functions somewhere accessible. It's above the original app, and into the package of apps.
The dumb, obvious choice is the package-level __init__.py file.
Why this is dumb isn't obvious -- at first. This file is implicitly imported. Doesn't seem like a bad thing. With one exception.
The settings.
If the settings file is in a package, and the package-level __init__.py file has any Django stuff in it -- any at all -- that stuff will be imported before your settings have finished being imported. Settings are loaded lazily -- as late as possible. However, in the process of loading settings, there are defaults, and Django may have to use those defaults in order to finish the import of your settings.
This leads to the weird situation that Django is clearly ignoring fundamental things like DATABASE_ENGINE and similar settings. You get the dummy database engine, Yet, a basic from django.conf import settings; print settings.DATABASE_ENGINE shows that you should have your expected database.
Moral Of the Story
Nothing with any Django imports can go into the package-level __init__.py files that may get brought in while importing settings. | http://slott-softwarearchitect.blogspot.com/2009/10/painful-python-import-lessons.html | CC-MAIN-2018-26 | en | refinedweb |
On Thu, Nov 10, 2011 at 1:03 PM, "Martin v. Löwis" <martin at v.loewis.de>wrote: > > Actually, scratch that part of my response. *Existing* namespace > > packages that work properly already have a single owner > > How so? The zope package certainly doesn't have a single owner. Instead, > it's spread over a large number of subpackages. > In distro packages (i.e. "system packages") there may be a namespace-defining package that provides an __init__.py. For example, I believe Debian (system) packages peak.util this way, even though there are many separately distributed peak.util.* (python) packages. >". > Nick is speaking again about system packages released by OS distributors. A naive system package built with setuptools of a namespace package will not contain an __init__.py, but only a .nspkg.pth file used to make the __init__.py unnecessary. (In this sense, the existing setuptools namespace package implementation for system-installed packages is actually a primitive partial implementation of PEP 402.) In summary: some system packages are built with an owning package, some aren't. Those with an owning package will need to drop the __init__.py (from that one package), and the others do not, because they don't have an __init__.py. In either case, PEP 402 leaves the directory layout alone. A version of setuptools intended for PEP 402 support would drop the nspkg.pth inclusion, and a version of "packaging" intended for PEP 402 would simply not add one. -------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/import-sig/2011-November/000370.html | CC-MAIN-2018-26 | en | refinedweb |
.system.controller.support;23 24 /**25 * Order.26 * 27 * @author <a HREF="adrian@jboss.com">Adrian Brock</a>28 * @version $Revision: 1.1 $29 */30 public class Order31 {32 private static int order = 0;33 34 public static synchronized int getOrder()35 {36 return ++order;37 }38 39 public static synchronized void reset()40 {41 order = 0;42 }43 }44
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/jboss/test/system/controller/support/Order.java.htm | CC-MAIN-2018-26 | en | refinedweb |
* DirectedSubgraph.java27 * ---------------------28 * (C) Copyright 2003-2006, by Barak Naveh and Contributors.29 *30 * Original Author: Barak Naveh31 * Contributor(s): Christian Hammer32 *33 * $Id: DirectedSubgraph.java 504 2006-07-03 02:37:26Z perfecthash $34 *35 * Changes36 * -------37 * 05-Aug-2003 : Initial revision (BN);38 * 11-Mar-2004 : Made generic (CH);39 * 28-May-2006 : Moved connectivity info from edge to graph (JVS);40 *41 */42 package org.jgrapht.graph;43 44 import java.util.*;45 46 import org.jgrapht.*;47 48 49 /**50 * A directed graph that is a subgraph on other graph.51 *52 * @see Subgraph53 */54 public class DirectedSubgraph<V, E>55 extends Subgraph<V, E>56 implements DirectedGraph<V, E>57 {58 59 //~ Static fields/initializers --------------------------------------------60 61 private static final long serialVersionUID = 3616445700507054133L;62 63 //~ Constructors ----------------------------------------------------------64 65 /**66 * Creates a new directed subgraph.67 *68 * @param base the base (backing) graph on which the subgraph will be based.69 * @param vertexSubset vertices to include in the subgraph. If <code>70 * null</code> then all vertices are included.71 * @param edgeSubset edges to in include in the subgraph. If <code>72 * null</code> then all the edges whose vertices found in73 * the graph are included.74 */75 public DirectedSubgraph(76 DirectedGraph<V, E> base,77 Set<V> vertexSubset,78 Set<E> edgeSubset)79 {80 super(base, vertexSubset, edgeSubset);81 }82 }83
Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ | | http://kickjava.com/src/org/jgrapht/graph/DirectedSubgraph.java.htm | CC-MAIN-2018-26 | en | refinedweb |
#include <LookoutEvents.h>
Definition at line 55 of file LookoutEvents.h.
List of all members.
Definition at line 61 of file LookoutEvents.h.
Referenced by clone().
Definition at line 62 of file LookoutEvents.h.
0
Definition at line 64 of file LookoutEvents.h.
Definition at line 67 of file LookoutEvents.h.
copy constructor (shallow copy)
Definition at line 72 LookoutPointAtEvent.
Definition at line 76 79 of file LookoutEvents.h.
Definition at line 60 of file LookoutEvents.h.
Definition at line 75 of file LookoutEvents.h.
Referenced by DualCoding::MapBuilder::processImage().
[private]
[static, protected]
causes class type id to automatically be regsitered with EventBase's FamilyFactory (getTypeRegistry())
Definition at line 82 of file LookoutEvents.h.
Referenced by getClassTypeID().
[protected]
Definition at line 57 of file LookoutEvents.h.
Referenced by getSketch(). | http://tekkotsu.org/dox/classLookoutSketchEvent.html | CC-MAIN-2018-26 | en | refinedweb |
Update 11/29/2012: This article is current as of the 6.3.173.1 version of the SQL MP.
Using RunAs accounts and profiles is an often poorly understood area of OpsMgr. As I began to investigate setting this up for the SQL MP, I quickly realized how little I understood it. Chatting with my PFE peers, I found that while everyone felt they had a “big picture” idea of how to configure it, nobody I talked to really understood all the options and impact.
This post will be a primer on the basics of using RunAs accounts and profiles, and an in depth example of applying them to the SQL Management pack under different typical scenarios..
In SOME cases, the application being monitored does not allow Local System (or a local admin/domain user account) to have any, or enough rights to the application. When this happens, we need to use a RunAs account and profile..
Following are the most common scenarios I run into, in the field:
Scenario #1. You use Local System as the default agent action account.
You accept the default SQL permissions, or modify them to ensure that Local System has the “SA” role to the SQL instance. In this case – the default agent action account has full rights to the Operating System and to SQL. No other configuration or use of Run-As accounts is necessary. The SQL MP will discover and monitor the SQL instances. This is not the most secure scenario, but likely the simplest to manage..
Scenario #3. You use Local System as the default action account.
However, the SQL team has restricted the NT AUTHORITY\SYSTEM (Local System) SQL login, and removed the “SA” right. In this case, the Local System account has full rights to monitor the server OS, however, does not have enough rights to discover and monitor the SQL application. In this case – we would use a Run-As account(s) to manage access for the SQL workflows only, to execute under this Run-As account. This account(s) can be created and fully managed by the SQL team. This is the MOST common scenario.
Scenario #4. You use a Domain User account as the default action account.
This account is a member of the Local Administrators group on the OS. It is used by the OpsMgr team as their agent account. However, the SQL team has restricted or deleted the BUILTIN\Administrators SQL login, thereby removing the “SA” right from local admins. The SQL team will not allow this account, which they do not control, to have any access to SQL. In this case, the default agent action domain user account has full rights to monitor the server OS, however, does not have enough rights to discover and monitor the SQL application. In this scenario – we would use a Run-As account(s) to manage access for the SQL workflows only, to execute under this Run-As account. This account(s) can be created and fully managed by the SQL team.
So – in summary – the most common scenarios are:
- Use a default agent action account that has Local Admin rights to the OS and SA rights to SQL
- Use a default agent action account that has no rights to SQL and therefore configure RunAs accounts and profiles to gain access to SQL.”
4. Type in the credentials for username, password, and domain..
I really don’t like the terminology we chose of “Less Secure”. I think they were trying to stress that using “more secure” is a better way to ensure that the tightest security model is upheld. Theoretically, someone could take an agent management machine they had access to, and hack the credential presented until they got the password. This model has completely changed from SP1, where were distributed the credential anywhere that needed it, automatically. This presented a risk, because a server admin who didn’t get access to the credential, could theoretically “fake” that he had an application which needed the credential by placing a dummy registry entry, having this class discovered, get the credential distributed, and start trying to hack the credential. "The new “more secure” absolutely controls the distribution of the Run As credential, and only OpsMgr admins have access to this.
Less Secure really isn't a valid option. The reason for this, is that the agent, as soon as it receives a RunAs credential, performs a series of tests to make sure that we can use the Run As credential. This includes testing for the “Log On Locally”. If you create a Run As account, and choose “Less Secure” you will immediately get a flood of alerts from all your Domain Controllers, Exchange servers, and any other servers that restrict the Log on Locally right. In enterprise server environments, this is very typical to remove “Domain Users” or the local “Users” group from this user right via group policy – or to deny “Log on Locally” for service accounts. This essentially makes “Less Secure” unusable for any practical purpose.
Therefore we WILL be using “More Secure”.
Now that we have that settled, this means we need to choose the Health Services to distribute the Run As credential to. Go ahead and finish creating the Run As Account using more secure, then open the properties of the newly created account. There is a distribution tab:” in almost all my SQL server names, so this is easy enough for me – I type in “sql” and his – if a new MonitoringHost.exe process will spin up,”. This warrants yet another deep dive discussion.
Normally – the MP Author should let you pick “All targeted objects” and you can be on your way. have to choose “All targeted objects” or “A selected class, group, or object”.
Since ALL workflows that leverage the SQL Server Monitoring Account profile are targeting a class that is a SQL specific discovered class, we can use “All Targeted Objects”.
4. Done!
Last – let’s examine the SQL Server Default Action Account profile., or DBEngine objects. The Windows Computer object Hosts/Contains all other application classes, that is why you can use this one for your groups. Alternatively – you can use Database Engine objects if you prefer…. as the DBEngine Hosts/Contains almost all the SQL classes in the MP that leverage a Run As profile.
-.
Here is how it will look when complete:.
Conclusion
I hope this helps provide more answers than it does questions. This is just one example of how to use the Run As in OpsMgr 2007 and 2012. Other strategies could be formed. Keep in mind – usually the simplest solution is best, so don’t over-complicate things. Make a strategy that is the easiest to maintain, while providing good security separation.
Additionally – this example above is not the “most secure” configuration…. because we assumed our Run As account would have SA rights to SQL. That is not technically required. The SQL MP guide does a good job of documenting the minimum instance and database rights needed to grant a Run As account so it can fully monitor SQL. That said – the Run As account and profile configuration (the focus of this article) will not be any different if you further restrict SQL rights – Run As will work exactly the same way.
***Thanks to many conversations with Jimmy Harper, Jesse Harris, Russ Slaten, Tim McFadden, and Bonnie Feinberg for assisting me with the data provided.
SQL MP Run As 6.1.314.36.xlsx
I have learned a lot from you!
Thanks!
Hi Kevin,
I am running SCOM 2007 R2 with SQL 2000/05/08 MPs deployed, and I am experiencing an issue where SCOM is only discovering half of my SQL virtual instance. For excample, I have a 2 node physical SQL cluster running six virtual SQL instance (i.e. vsqla; vsqlb; vsqlc; vsqld; vsqle; vsqlf) and when I go to the Discovered Inventory view and search for theseinstances I only find vsqla/b/e.
>All instances are running under the same SQL service domainaccount
>SCOM SQL Run-As profile is configured to use the default account (SCOM admin account) which is part of the local admin group of this cluster, I even granted this ID domain admin rights but that didn't do anything.
>SCOM agent is installed on both nodes and the PROXY setting is set to enable
>If I look under "Agentless Managed" I only see the said three virtual instance
Any idea what I am missing hereor have you seen something like this before with SQL Management Pack?
Thanks is advance.
Murad
Hi Kevin – I have maybe a little bit different problem. I would like to configure a custom RunAs profile for agent installation, and use a RunAs account associated with it. The reason for this is I'm an admin for SCOM, but not domain admin, and our domain security policy has things pretty tightly locked down. There is an installation account configured with Local Admin rights on the servers, and we are using a domain user service account for the agent action account. All the documentation I've found says the agent installation is only able to be performed by the Managemnt Server action account, or an optional account that doesn't save credentials. Is there any way around this that you know of?
@Vivak –
Since your SQL server is ON the RMS…. this wont use Local System. This will use the Management Server Action Account as the default action account for all monitoring workflows.
Therefore – your MSAA needs the rights to SQL…. OR you need to set up the Run-As account and profile to be used by the SQL MP for the RMS.
Hi Kevin, I have just started with SCOM . Could you please suggest me some links/docs that starts with scratch for beginners. Thanks
Thx Kevin for the detailed explanation.
Kevin, I am using the 2nd option for my SQL MP
"
I've verified each instance has "local administrator" "SA" right , but I am still having issues discovery all virtual instances. Anything else I can look at to see whySCOM is unable to see all VSQLs in a two node cluster?
Thanks, Murad
Cannot be resolved generally means you did not distribute the run-as account to that particular health service.
Yes – that will be a soon to come post – I am going to get some assistance and figure out a good way to handle dynamic run as account distribution using the SDK – since the UI leaves a lot to be desired. Stay tuned.
@Murad
Discovery is pretty basic…. you dont need a lot of rights to discover the items.
Are you discovering some, or none of the virtuals?
Do the virtual instances show up in the Windows Computer and Windows Server class by name?
Have you enabled Agent proxy for all cluster nodes, and have an agent on all nodes in the cluster?
Bounce the agent health service. Wait 5 minutes. Are there any discovery errors or 10801 or 33333 events on the agent OpsMgr event log, or the Management server it is assigned to?
One of the big problems with using the "More Secure" option for distributing Run-As accounts is that every time a new (SQL, in this example) server comes online, the SQL team needs to tell the SCOM team that they've added a new SQL box and to go in and add in the new server. Unfortunately, there's no way to distribute to a dynamic grouping easily (I suppose you could hack something in with powershell, maybe…)
Look in your event logs on the nodes – you will see discovery errors, or – look on the management server event logs that the agents report to – you will see 10801/33333 events about failing to insert discovery data.
Next – your account being a domain admin is irrellevant (and should NEVER be used). What is relevant – is doe the account be used for discovery (default agent action account) have enough rights on the machine to be able to discover the instance….. if some are discovering but not others on the same node – then look at the differences in SQL INSTANCE security.
Last – if the events seen on the agent are timeouts for discovery – increase the timeout via override.
@Udit –
What monitor or rule specifically do you feel is not working? There is no monitor for a decommissioned DB.
Hi Graham –
I didnt. I started to… got some SDK code in C#, but what I really want is for this to be included in the product, or it to be portable to PS script.
Kevin – Thanks for getting back so quickly: Yes I am receiving hundereds of 88888 & 10801 alerts on my Management Server
Log Name: Operations Manager
Source: DataAccessLayer
Date: 9/29/2010 2:38:14 PM
Event ID: 33333
Task Category: None
Level: Warning
Keywords: Classic
User: N/A
Computer: management server.domain.com
Description:
Data Access Layer rejected retry on SqlError:
Request: p_RelationshipDiscovered — (RelationshipId=624cbcb3-fe58-8090-b174-cb788533791f), (SourceEntityId=42a835c6-81f3-4473-7365-2dee46ac999c), (TargetEntityId=63e73ddc-5438-63a1-c50d-89d08e436b28), (RelationshipTypeId=0c18d101-8d27-4333-6d54-682fe19a5896), (DiscoverySourceId=e898cacb-3382-bf39-40b0-e7eb930cff14), (HealthServiceEntityId=3dc442a6-0a0d-c47b-d812-4cb242531eb0), (PerformHealthServiceCheck=True),
———————————-
Log Name: Operations Manager
Source: Health Service Modules
Date: 9/29/2010 2:38:14 PM
Event ID: 10801
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: cho3wse9sco03.corp.transunion.com following details should help to further diagnose:
DiscoveryId: a344fd27-6d81-6a3b-7b9f-e9e55778465e
HealthServiceId: 3dc442a6-0a0d-c47b-d812-4cb242531eb0
Invalid relationship source specified in the discovery data item.
RelationshipSourceBaseManagedEntityId: 42a835c6-81f3-4473-7365-2dee46ac999c
RuleId: a344fd27-6d81-6a3b-7b9f-e9e55778465e
Instance:
<?xml version="1.0" encoding="utf-16"?><RelationshipInstance TypeId="{0c18d101-8d27-4333-6d54-682fe19a5896}" SourceTypeId="{387eadfe-1fde-623d-015a-cd39a8b40d55}" TargetTypeId="{02c5162c-b89f-1d61-349a-9b0667fbd60e}"><Settings /><SourceRole><Settings><Setting><Name>f0721862-8f99-78b7-6ede-936326423438</Name><Value>custdev.transunion.com</Value></Setting></Settings></SourceRole><TargetRole><Settings><Setting><Name>f0721862-8f99-78b7-6ede-936326423438</Name><Value>custdev.transunion.com</Value></Setting><Setting><Name>7a7a1247-5661-8a5f-4262-dc42bbdcf5b9</Name><Value>CUSTDEVCHU2WSE1CDC03</Value></Setting></Settings></TargetRole></RelationshipInstance>.
Event Xml:
—>No MPs were recently deleted
—> Database server has plenty of space
—>Network connectivity problems – Not that I am away of.
—>Discovery data received is not valid = Whats does that mean?
What would you recommend the next steps should be?
Thanks
Murad
@John –
I am actually working on that this week – there are three blog posts floating around out there with examples, I am going to try and build out a comprehensive post which covers all of them.
"Wed, Sep 8 2010 4:56 PM: Yes – that will be a soon to come post – I am going to get some assistance and figure out a good way to handle dynamic run as account distribution using the SDK – since the UI leaves a lot to be desired. Stay tuned."
any update on that dynamic run as account distribution? adding each sql server manually is an absolutely fantastic solution in a lab with a half-dozen machines. adding each sql server manually in a production environment is, in a word, ridiculous.
Automatic distribution would indeed be great. Right now I've got a series of PS scripts that ops can run as a task to map the various RunAs profiles for SQL. It helps some, but is only as good as the underlying process. Nice detailed explanation, Kevin.
i'm still puzzled why a domain account with local admin rights and sa rights is generally considered more secure than the system account (scenario 1 vs 2 in your post). it seems to me the domain account has extra rights in the domain whereas the system account can't go beyond the local system.
But i guess that's a bit off topic to the blogpost 🙂
Thanks for another great article Kevin – I have notied that this rule also fires an alert against the SQL Server if the SQL Server hosts databases that are set to auto-close. Workaround of creating a group of databases and over-riding the rule (enable = false) for the group seems to work.
I am trying to figure out how to modify a runas accounts distribution list using the SDK.
Do you have any idea which class, property or methods to use ?
The class MonitoringSecureData represents the runas account, but I can't find anything about distribution list or less/more secure.
Thanks,
Jan
Very helpfull! thanks…
Dear Kevin,
Fantastic explaination
Here is my issue:
I have implemented Scenario #1. You use Local System as the default agent action account.
SQL 2008 SCOM 2007 R2 , The OPSMGR DB is on the RMS itself
I was getting alert based on follwoing rule :
Run As Account does not exist on the target system or does not have enough permissions
After checked i saw Local System ahs SA rights on DB instanctance
Still i got the same alert for all the DB's
I checked the Action Accounts Listed in Console and saw 2 action accounts ,
1. Local System Action Account
2. Managemnet Server action account that was created during setup
As checked the Management server action account doesnt have SA privilages , it just has Public rights on the DB
I gave it SA rights and the event which caused the alert stopped.
Any reason why that must have to be done ?
Hi Kevin
Did you ever get a chance to look at this in more depth?
"Yes – that will be a soon to come post – I am going to get some assistance and figure out a good way to handle dynamic run as account distribution using the SDK – since the UI leaves a lot to be desired. Stay tuned. "
Cheers
Graham
I went through your directions.. Now I just got dozens of alerts saying "An account specified in the Run As profile "Microsoft.SQLServer.SQLDefaultAccount" cannot be resolved."
What is causing that?
And how do I distribute it?
You wrote: "Our goal is to make the initial discoveries – which target the “Windows Server” class, run under the default agent action account. THEN – ALL subsequent discoveries should run under the SQL Discovery Profile/Run As account. Therefore – we should add the “SQL DB Engine” class."
Most of my sql servers can be monitored using the default action account (local system), but I have some sql servers that need to be monitored using another account (domain account). How do I target this runas account? I can not use the SQL DB Engine class because then all sql servers will use this run as account.
Regards,
Chris
Hi Murad.. I Have the same scenario.. Did you find out anything more regarding this. In advance thank you. Regard Nicklas
Hi Kevin, thanks for this information. My concern is if you are using the Local System account for the “Default Agent Action Account”, what would stop someone from making the agent run scripts that could potentally do damage to the system?
Kevin,
Thanks for your work here. I am having a problem recently where a SQL instance was recently encrypted and as a result the SCOM agent is not able to query the MSCluster namespace via WMI. I am getting repeating event 5605 in the app log of the cluster node.
Here is an article that appears to describe the problem I am having.
support.microsoft.com/…/2590230
As a nice bonus for me, this seemed to cause all SQL and cluster monitors on that SQLcluster to thrash offline and online causing an impressive state storm filling up our Ops DB and taking the RMS offline. Temporarily placing the SQL/cluster objects in
maintenanace mode addressed the state storm and allowed me to free space and make sure the RMS was back online again. However now I am not monitoring the SQL or clustered objects on that cluster so I need to find a way to address the root cause.
I am not sure, but could perhaps you could confirm if setting up a Run as account with higher level permissions to execute the SCOM SQL/Cluster workloads on these encrypted instances would address this issue?
I am concerned that it won’t because the eventid suggests that the resolution has to do with changing the authentication level to Pkt_Privacy.
Any thoughts?
Thanks,
Keith
Great and very interesting blog. I think it’s also an informative. Thanks for sharing.
Please, also remember that, as stated by the MP documentation, the use of run as profiles – Low Privilege Environment – is not supported in clustered environment.
By the way : why isn't it supported ?
Kevin,
Awesome post! I am almost ashamed I have only recently been able to implement RunAs for SQL in our environment. But I do not understand why RunAs setup seems to double the work. Distributing the creds to certain systems from the account, then setting up the profile to apply the account to those same systems seems duplicitous. I do not understand but am glad it works to optimize our monitoring capabilities. We are in a decentralized environment where SQL team is not responsible for every SQL server in our enterprise, distribution of specific RunAs creds to specific SQL servers. I was able to point the profile config to a group but you can't push RunAs account creds to a group. But one of the options for adding systems to RunAs cred distribution is 'Show suggested computers'. I picked that and hit Search and the agents for each of the systems from the group were listed. A quick way to add all the correct agents to get those creds! Thanks very much!
Hi Kevin, Do you have any update about a way to do a dynamic run as account distribution. Thanks so much in advance
Hi Kevin, I have SCOM 2007 R2 in my environment & I am using a SQL Action account as specified above for the SQL monitoring. I have noticed that we are not getting any alerts from the DB, we tested this by decommissiong the DB also, but no alert from the host server on SCOM. We feel, there is no monitoring happening for the SQL DB. How can I troubleshoot this issue?
Pingback from SCOM QUICK Install | config.re!
Very good explanation!!!! I’m already making use of scenarios in my LAB.
One of the best explanation on Run As Account & Profile. I was struggling to understand the concept for a while as, I am new to OM. Now, I understood what it mean and how to relate to profile. Thank you very much.
Hi Kevin,
It means that SCOM MP only discover and monitor SQL Servers and MP doesn’t change and modyfying any settings data in SQL?
Thank you!
I wrote a post explaining Run As accounts a while back here:
Kevin,
First I would like to thank you for one of the best explanations on how to implement this.
Second, I see that if my SQL Server has the proxy checked I get "An account specified in the Run As profile "Microsoft.SQLServer.SQLDiscoveryAccount" cannot be resolved." error on machines that look to that SQL Server. So if I add these servers to the Distribution
list under the Account it goes away. Is this right?
Again Thank you
David D.
@David –
1. I don’t think this has anything to do with whether proxy is enabled or not. I turn proxy on as a default setting so all new agents inherit proxy enabled, for EVERYONE. Dealing with that setting is pointless IMHO.
2. Whenever a profile associates an account to a class hosted by an agent – you will see these "cannot be resolve" errors, which simply means the RunAs account has not been distributed yet. Distributing the account will resolve those. For automation – see:
Hi Kevin,
I get "An account specified in the Run As profile "Microsoft.SQLServer.SQLDiscoveryAccount" cannot be resolved." error on machines that look to that SQL Server. The error still even after I distributed the Run As accounts to all the computers where the error
is thrown.
Any idea?
@Srini –
Cannot be resolved will be expected on machines that either are not distributed, or are not in a trusted domain.
Thanks for the reply Kevin. But I have distributed the accounts to this server by editing the distribution list manually still the error persists. The server is also in the same domain as the Management servers. Really strange. Infact this appears on all
the SQL servers though the account has been distributed to all of them.
Any other reason?
Hi Kevin. quick help needed please. I am installing SCOM2012R2 and just configured the first management server. However I am getting the below error and MS is showing in greyed out state:
“The Health Service could not log on the RunAs account (database read account) for management group (SCOMMG) because it has not been granted the "Allow log on locally" right.”
Not sure why it is asking Read account to have log on access locally but still I added that account as administrator on MS, Operation DB server and DWH and checked local profile setting that it allows administrators to logon locally. But no help. Could you
please suggest a solution here.
@ Avijit –
The Datawarehouse Read account opens up a process on all management servers to run operations associated with the DW. This account should be granted local administrator on the SCOM management servers. This is documented in our deployment guide. To make things
simple, I recommend having a global group for SCOM admins, and add all the service accounts to this. This use this group as member of each SCOM server’s local administrators group, and as a SCOM Administrator role. If you are still getting failed to log on
locally, then this account is NOT a local administrator, or your organization has implemented explicity policies for log on locally and it must be added there as well as an advanced user right.
I an a SQl DBA got an alert on daily basis on different servers"Run As Account does not exist on the target system or does not have enough permissionst".So my question is what things i have to check on server to resolve this and how can i resolve this
issue.
Excellent explanation. I did not understand that RunAs Profiles in essence are optional, as long as the Default Action Account for that particular server/service has the necessary permissions.
that’s a bingo
Hi Kevin,
Hope you are doing good.
Is there any powershell commandlet or SQL Query to pull the list of Rules which are associated / mapped to a specific Run as Profile ?
I have a SQL Query which can give the output for Monitors, But was not able to find one for Runes.
select distinct
managementpackview.Name as ‘Management Pack Name’,
monitorname as ‘Monitor Name’,
SecureReferenceView.displayname as ‘RunasProfile Name’
from dbo.Monitor
inner join SecureReferenceView on monitor.runas = SecureReferenceView.id
inner join managementpackview on managementpackview.id = SecureReferenceView.ManagementPackId
where SecureReferenceView.displayname like ‘%YOUR RUNAS PROFILE NAME%’
Most of this information is provided in the Management pack Guide, But incase if we do not have a guide or lost it then it will be useful so asked,
Are groups consisting of windows computer objects sufficient when associating the always on discovery and monitoring profiles?
Should I be including availability groups?
Yes – they are – because Windows Computer objects contain all the other hosted instances on the agents.
Hi Kevin ,
I have SCOM 2012 R2 UR9 running in my environment.
SQL 2008,2012,2014,2016 are running
Have created the Accounts and Distributed it to the SQL Servers.(Accounts have admin level access to Servers and Sysadmin level access to SQL)
Have Associated the Accounts with the below Classes and groups in the profiles
Class::
SQL Server 2*** Agent
SQL Server 2*** Agent Job
SQL Server 2*** DB
SQL Server 2*** DB Engine
Group::
SQL Server 2*** DB Engine Group
In many Servers i get this below error for profiles
”
Microsoft.SQLServer.SQLDiscoveryAccount
Microsoft.SQLServer.SQLProbeAccount ”
Log Name: Operations Manager
Source: HealthService
Date: 12/19/2016 12:03:37 AM
Event ID: 1108
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer:
Description:
An Account specified in the Run As Profile “Microsoft.SQLServer.SQLProbeAccount” cannot be resolved. Specifically, the account is used in the Secure Reference Override “SecureOverride7abf93bb_3f52_39b9_1229_9233b8a1b14c”..
Management Group:
Run As Profile: Microsoft.SQLServer.SQLProbeAccount
SecureReferenceOverride name: SecureOverride7abf93bb_3f52_39b9_1229_9233b8a1b14c
SecureReferenceOverride ID: {F7391A0D-BBDB-AFAC-5FDD-665FA0228630}
Object name: MsDtsServer110
Object ID: {B8568913-7DD5-79D0-8660-EA20A2623BF9}
Account SSID: 00F665CC36675DFA25A72DA4777F6273E2AED2E2B400000000000000000000000000000000000000
What could be the reason.?
how to fix the same?
@Dinesh kumar
Account not resolved simply means it is not distributed.
Why not skip all this and use Service SID’s?
Hi Kevin ,
Saw your Run as Account Addendum Management Pack.
Champion Idea.
There are more than 800 DB servers running with Profile/Run as Account/distribution setup in my environment.
How can i implement SSID method , Without any impact to SQL monitoring
Any thoughts?
Morning, thank you for all this great info! Our SQL environment is one of those “low privileged” environemnts. I inherited them and have been trying to wrap my head around why there are these additional run as accounts for SQL and how they work!! These accounts (one for each of the three tiers in our environments) are “assigned to ALL of the SQL servers….
A question: would the same procedure/process apply to a run as account/run as profile configuration for Domain Controllers in a similar environment?
Thank you again so very mch!
Tony
Hi Kevin,
I have stared working on SCOM 2012 SP1 and I am geeing maximum alert regarding “RUN AS ACCOUNT” issue. i have run a query on operation Shell “cannot bind parameter’Process’. Cannot convert the “Domain\Account name” value of type “System.String” to type “System.Management.Automation.Scrpitblock”.
So please let me know what should i need to do in this scenario.
Hi kevin,
I have started working on SCOM SP1. And i am getting maximum error of “RUN AS ACCOUNT”. And i run a script on operation manger shell. And i got something like that. there are many account on which i found this error.
“cannot bind parameter’Process’. Cannot convert the “Domain\Account name” value of type “System.String” to type “System.Management.Automation.Scrpitblock”.
Please let me know what should i need to do in this scenario.
Thanks,
Gourav
Hi Kevin, I working on an MP form Netapp.
I got 4 groups with run as account used in the discovery DS.
This is an sample from the Mp
Discovery
$MPElement$
$MPElement[Name=”DataONTAP.Cluster.AdminVserversGroup”]$
$MPElement[Name=”DataONTAP.Cluster.AdminVserver”]$
$MPElement[Name=”SCIG!Microsoft.SystemCenter.InstanceGroupContainsEntities”]$
The group was not populate.
If i build a custom group with same without the runas It’s working.
I am not sure to understang why they used a runas in a group discovery.
I try to distribute the runas on my RMS it’s working.
Can you Explain pls
Sorry Bad Copy paste.
Discovery
$MPElement$
$MPElement[Name=”DataONTAP.Cluster.AdminVserversGroup”]$
$MPElement[Name=”DataONTAP.Cluster.AdminVserver”]$
$MPElement[Name=”SCIG!Microsoft.SystemCenter.InstanceGroupContainsEntities”]$
Hi Kevin,
I have upgraded SCOM 2012 R2 to UR12 yesterday in our production environment. I have observed the below alerts in all SQL Agent servers (1000 servers). Password for this account is never expires option selected. I re-entered the password in Run as account but still the problem persists. Can you help me with this.
Description:
The password for RunAs account Domain\SQLAccount for management group MG is expiring on Thursday, January 01, 1970. If the password is not updated by then, the health service will not be able to monitor or perform actions using this RunAs account. There are 0 days left before expiration.
Thanks,
Ramu Chittiprolu | https://blogs.technet.microsoft.com/kevinholman/2010/09/08/configuring-run-as-accounts-and-profiles-in-opsmgr-a-sql-management-pack-example/ | CC-MAIN-2018-26 | en | refinedweb |
Hello All, I am looking to plot the wavefunction (both magnitude and phase on separate plots). The kwant.plotter.map method doesn't give a very appealing look as it plots only in large sized triangles. The kwant.plotter.plot method seems better but I get stuck in one part. There is a piece of code that reads:
Advertising
def family_shape(i): site = sys.site(i) return ('p', 3, 180) if site.family == a else ('p', 3, 0) However, it gives the following error: AttributeError: 'FiniteSystem' object has no attribute 'site' This snippet was taken from online kwant tutorials itself. I am unable to get to the bottom of this error message because the tutorial uses the exact same code. Is there an update to some library of kwant that solves this? Or is there any other way to get better plots for wavefunctions on a system without using the sys.site(i) command? I would like the wave functions to look smooth and continuous. Any help would be appreciated. Best Regards, Shivang -- *Shivang Agarwal* Junior Undergraduate Discipline of Electrical Engineering IIT Gandhinagar Contact: +91-9869321451 | https://www.mail-archive.com/kwant-discuss@kwant-project.org/msg01635.html | CC-MAIN-2018-26 | en | refinedweb |
For many years, I've found myself frustrated with the tools of various
programming languages, primarily IDEs, previously with Java, currently
with Scala.
"In the beginning" I used to use a simple text-editor, like Emacs and later JEdit. Finally in 2004 I converted to Eclipse for Java development, an uneasy relationship that fell apart in 2008 when I moved to IntelliJ after trying out NetBeans for a while. In the last year, my relationship with IntelliJ has fallen apart when it comes to Scala development, as it seems IntelliJ is unable to keep up with syntax that is more functional in nature in general, and use of Scalaz in particular.
Being curious, I've also done quite a bit of Clojure development in the
last year, and also started to dabble more seriously with Haskell.
These two wonderful languages have taught me one thing: if the language
is good enough, an IDE is strictly not needed as long as you have good
support for syntax highlighting and parens matching in the case of
clojure, or indentation in the case of Haskell.
In the case of Scala, I don't think an IDE is required either, provided you can reign in your bad habits in terms of source organization from Java.
IDE's as Code Navigators
In Java-land, IDEs are absolutely required for a number of reasons, not least because Java is verbose, has a high "tap the keyboard-to-though"-ratio and is generally clunky. But furthermore, because a combination of shortcomings in the Java compiler and Javas OO nature, we end up with lots and lots of small files for every interface and class in our system. On any less than trivial Java system, development quickly turns into a game of code- and file-system nagivation rather than programming and code editing. This nature of Java development requires IDE's to become navigation tools above all.
In functional programming, without Java's code organizational patterns however, there is nothing forcing us to scatter our functionality across hundreds and thousands of tiny files. We can think in terms of namespaces instead, and there is absolutely nothing wrong with having quite a few funtions and data types in a single namespace as long as they seem to be relatively related. If we think about our FP code in this manner instead, the requirement for a 200mb download of a code navigator like Eclipse or IDEA becomes a lot less important. All of a sudden, it becomes very easy "to get by" with something like Emacs, which is still a superior text- and code editor to anything out there.
What About the Other Things IDEs Do?
Of course there are many other things that IDE's do, like allow for running individual tests, refactor, code complete etc. However, with a good REPL and a good build tool, these things are not only not needed, they quickly become the inferior tools. What's quicker, running an individual Scala test in IDEA or running continuous compilation/testing in SBT? SBT will win every time. What's better, experimenting in your IDE or on a REPL? A REPL will win hands down every time.
Conclusion: A Need For IDEs Is a Language Smell.
Edit: Defining IDE vs Editor
It might be worth definining "IDE vs editor": what I primarily turn
against in this post is languages and the complexity in them that
necessitate the use massive monolothic IDE's like IntelliJ IDEA, Eclipse
and Visual Studio.
I don't include editors such as Emacs, Vi/Vim, Sublime Text 2 etc in the "IDE" - I think the approach taken by these editors in their "language awareness" is good in that they piggyback on existing infrastructure such as REPL's, build systems etc, instead of trying to re-implement large parts of it, or supplement the lack thereof as is the case with IDEs in Javaland.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/ide-bad-programming-language | CC-MAIN-2016-07 | en | refinedweb |
(For more resources on this topic, see here.)
What is ORM?
ORM connects business objects and database tables to create a domain model where logic and data are presented in one wrapping.
In addition, the ORM classes wrap our database tables to provide a set of class-level methods that perform table-level operations. For example, we might need to find the Employee with a particular ID. This is implemented as a class method that returns the corresponding Employee object. In Ruby code, this will look like:
employee= Employee.find(1)
This code will return an employee object whose ID is 1.
Exploring Rhom
Rhom is a mini Object Relational Mapper (ORM) for Rhodes. It is similar to another ORM, Active Record in Rails but with limited features. Interaction with the database is simplified, as we don't need to worry about which database is being used by the phone. iPhone uses SQLite and Blackberry uses HSQL and SQLite depending on the device.
Now we will create a new model and see how Rhom interacts with the database.
Time for action – Creating a company model
We will create a model company. In addition to a default attribute ID that is created by Rhodes, we will have one attribute name that will store the name of the company.
Now, we will go to the application directory and run the following command:
$ rhogen model company name
which will generate the following:
[ADDED] app/Company/index.erb
[ADDED] app/Company/edit.erb
[ADDED] app/Company/new.erb
[ADDED] app/Company/show.erb
[ADDED] app/Company/index.bb.erb
[ADDED] app/Company/edit.bb.erb
[ADDED] app/Company/new.bb.erb
[ADDED] app/Company/show.bb.erb
[ADDED] app/Company/company_controller.rb
[ADDED] app/Company/company.rb
[ADDED] app/test/company_spec.rb
We can notice the number of files generated by the Rhogen command.
Now, we will add a link on the index page so that we can browse it from our homepage.
Add a link in the index.erb file for all the phones except Blackberry. If the target phone is a Blackberry, add this link to the index.bb.erb file inside the app folder. We will have different views for Blackberry.
<li>
<a href="<%= url_for :controller => :Company %>"><span class
="title"> Company</span><span class="disclosure_indicator"/></a>
</li>
We can see from the image that a Company link is created on the homepage of our application. Now, we can build our application to add some dummy data.
You can see that we have added three companies Google, Apple, and Microsoft.
What just happened?
We just created a model company with an attribute name, made a link to access it from our homepage, and added some dummy data to it. We will add a few companies' names because it will help us in the next section.
Association
Associations are connections between two models, which make common operations simpler and easier for your code. So we will create an association between the Employee model and the Company model.
Time for action – Creating an association between employee and company
The relationship between an employee and a company can be defined as "An employee can be in only one company but one company may have many employees". So now we will be adding an association between an employee and the company model. After we make entries for the company in the company model, we would be able to see the company select box populated in the employee form.
The relationship between the two models is defined in the employee.rb file as:
belongs_to :company_id, 'Company'
Here, Company corresponds to the model name and company_id corresponds to the foreign key.
Since at present we have the company field instead of company_id in the employee model, we will rename company to company_id.
To retrieve all the companies, which are stored in the Company model, we need to add this line in the new action of the employee_controller:
@companies = Company.find(:all)
The find command is provided by Rhom, which is used to form a query and retrieve results from the database. Company.find(:all) will return all the values stored in the Company model in the form of an array of objects.
Now, we will edit the new.erb and edit.erb files present inside the Employee folder.
<h4 class="groupTitle">Company</h4>
<ul>
<li>
<select name="employee[company_id]">
<% @companies.each do |company|%>
<option value="<%= company.object%>"
<%= "selected" if company.object == @employee.
company_id%>
>
<%=company.name %></option>
<%end%>
</li>
</ul>
If you observe in the code, we have created a select box for selecting a company. Here we have defined a variable @companies that is an array of objects. And in each object we have two fields named company name and its ID. We have created a loop and shown all the companies that are there in the @companies object.
In the above image the companies are populated in the select box, which we added before and it is displayed in the employee form.
What just happened?
We just created an association between the employee and company model and used this association to populate the company select box present in the employee form.
As of now, Rhom has fewer features then other ORMs like Active Record. As of now there is very little support for database associations.
(For more resources on this topic, see here.)
Exploring methods available for Rhom
Now, we will learn various methods available in Rhom for CRUD operation. Generally, we need to Create, Read, Update, and Delete an object of a model. Rhom provides various helper methods to carry out these operations:
- delete_all: deletes all the rows that satisfy the given conditions.
Employee.delete_all(:conditions => {gender=>'Male'})The above command will delete all the male employees.
- destroy: this destroys the Rhom object that is selected.
@employee = Employee.find(:all).firstThis will delete the first object of employees, which is stored in @employee variable.
@employee.destroy
- find: this returns Rhom object(s) based on arguments.
We can pass the following arguments:
- :all: returns all objects from the model
- :first: returns the first object
- :conditions: this is optional and is a hash of attribute/values to match with (i.e. {'name' => 'John'})
- :order: it is an optional attribute that is used to order the list
- :orderdir: it is an optional attribute that is used to order the list in the desired manner ('ASC' - default, 'DESC' )
- :select: it is an optional value which is an array of strings that are needed to be returned with the object
- :per_page: it is an optional value that specifies the maximum number of items that can be returned
- :offset: it is an optional attribute that specifies the offset from the beginning of the list
- Example:
@employees = Employee.find(:all, :order => 'name', :orderdir => 'DESC')This will return an array of employee objects that are ordered by name in the descending order:
Employee. find( :all,:conditions =>["age > 40"], :select => [name,company] )It will return the name and company of all the employees whose age is greater than 40.
- new : Creates a new Rhom object based on the provided attributes, or initializes an empty Rhom object.
@company = Company.new({'name'=>'ABC Inc.')It will only create an object of Company class and will save to the database only on explicitly saving it.
- save : Saves the current Rhom object to the database.
@company.saveIt will save company object to the database and returns true or false depending on the success of the operation.
- Create : Creates a new Rhom object and saves it to the database. This is the fastest way to insert an item to a database.
@company = Company.create({'name' => 'Google'})It will insert a row with the name "Google" in the database.
- Paginate: It is used to display a fixed number of records on each page.
paginate(:page => 1, :per_page => 20)It will return records numbered from 21 to 40.
- update_attributes(attributes): Updates the specified attributes of the current Rhom object and saves it to the database.
@employee = Employee.find(:all).firstThe age of the first employee stored in the database is updated to 23.
@employee. update_attributes({'age' => 23})
We have now understood all the basic helper methods available in Rhom that will help us to perform all the basic operations on the database. Now we will create a page in our application and then use the find method to show the filtered result.
Time for action – Filtering record by company and gender
We will create a page that will allow us to filter all the records based on company and gender, and then use the find command to show the filtered results on the next page.
We will follow these steps to create the page:
- Create a link for filter page on the home page i.e. index.erb in the app folder:
<li>
<a href="<%= url_for :controller => :Employee, :action => :
filter_employee_form %>"><span class="title"> Filter Employee </
span><span class="disclosure_indicator"/></a>
</li>
We can see in the screenshot that a Filter Employee link is created on the home page.
- Create an action filter_employee_form in employee_controller.rb:
def filter_employee_formWe have used the find helper provided by Rhom that will retrieve all the companies and store them in @companies.
@companies = Company.find(:all)
end
- Create a page filter_employee_form.erb in the app/Employee folder and write the following code:
As we can see, this page is divided into three sections: toolbar, title, and content. If we see the content section, we have created radio buttons for Gender and a select box to list all the companies. We can select either Male or Female from the radio button and one company from the list of dynamically populated companies.
<div class="pageTitle">
<h1>Filter Page</h1>
</div>
<div class="toolbar">
<div class="leftItem backButton">
<a class="cancel" href="<%= url_for :action => :index
>">Cancel</a>
</div>
</div>
<div class="content">
<form method="POST" action="<%= url_for :controller =>
:Employee, :action => :filter_employee_result %>">
<h4 class="groupTitle">Gender</h4>
<ul>
<li><label for="gender">Male</label>
<input type="radio" name="gender" value="Male"/>
</li>
<li><label for="gender">Female</label>
<input type="radio" name="gender" value="Female"/>
</li>
</ul>
<h4 class="groupTitle">Company</h4>
<ul>
<li>
<select name="company_id">
<% @companies.each do |company|%>
<option value="<%= company.object%>"
>
<%=company.name %></option>
<%end%>
</li>
</ul>
<input type="submit" class="standardButton" value="Filter" />
</form>
</div>
- Create an action filter_employee_result in employee_controller.rb:
The conditions symbol in the find statement is used to specify the condition for the database query and @params is a hash that contains the selections made by the user in the filter form. @params['gender'] and @params['company'] contains the gender and company_id is selected on the filter page.
def filter_employee_result
@employees = Employee.find(:all, :conditions =>{'gender' => @
params['gender'],'company_id'=> @params['company_id']})
end
- Create a file called filter_employee_result.erb and place it in the app/Employee folder.
<div class="pageTitle">
<h1>Filter by Company and Gender</h1>
</div>
<div class="toolbar">
<div class="leftItem regularButton">
<a href="<%= Rho::RhoConfig.start_path %>">Home</a>
</div>
<div class="rightItem regularButton">
<a class="button" href="<%= url_for :action => :new %>">New</a>
</div>
</div>
<div class="content">
<ul>
<% @employees.each do |employee| %>
<li>
<a href="<%= url_for :action => :show,
:id => employee.object %>">
<span class="title"><%= employee.name %>
</span><span class="disclosure_indicator"></span>
</a>
</li>
<% end %>
</ul>
</div>
This result page is again divided into three sections: toolbar, title, and content. All the employees filtered on the basis of specified selections made on the filter page are stored in @employees and displayed inside the content section of this page.
What just happened?
We created a filter page to filter all the employees on the basis of their gender and company. Then, by using the find method of Rhom, we filtered employees for the specified gender and company and displayed the results on a new page.
(For more resources on this topic, see here.)
Have a go hero – find (*args) Advanced proposal
We have learnt in this section to use find helper to write only simple queries. To write advanced queries, Rhom provides find (*args) Advanced.
A normal query would look like this:
@employees = Employee.find(:all, :conditions =>{'gender' =>
@params['gender'],'company_id'=> @params['company_id']})
This can also be written as:
@employees = Employee.find(:all, :conditions =>{
{:name =>'gender' ,:op =>"like"} => @params['gender'],
{:name =>'company_id', :op => 'like'}=> @params['company_id']},
:op => 'AND'
)
The advantage of using the latter form is that we can write advanced options with our query.
Let's say we want to create a hash condition for the following SQL:
find( :all,
:conditions =>["LOWER(description) like ? or
LOWER(title) like ?", query, query],
:select => ['title','description'] )
It can be written in this way:
find( :all,
:conditions => { {:func=>'LOWER', :name=>'description',
:op=>'LIKE'}=>query,
{:func=>'LOWER', :name=>'title', :op=>'LIKE'}=>query}, :op => 'OR',
:select => ['title','description'])
How Rhodes stores data
As we have already discussed, iPhone and Android use SQLite. And for Blackberry it uses SQLite on a device it supports, otherwise it will use HSQL database. But the question is how does Rhom store data and how can we handle migration?
Rhodes provides two ways to store data in a phone:
- Property Bag
- Fixed Schema
Property Bag
Property Bag is the default option available for our models. In Property Bag, the entire data is stored in a single table with a fixed number of columns.
The table contains the following columns:
- Source_id
- attribute
- object
- value
- update_type
When you use the Property Bag model, you don't have to track schema changes (adding or removing attributes). However, Rhodes uses the Property Bag schema to store app data in a SQL database. If the internal Property Bag schema changes after an application is updated or reloaded, the database will be (re)created and all existing data would be erased. See rhodes\lib\rhodes.rb and rhodes\lib\framework\rhodes.rb for the internal database schema version:
DBVERSION = '2.0.3'
On the first launch of the application after it is installed/updated/reloaded, a database will be (re)created if app_db_version in the rhoconfig.txt is different from what it was before. If the database version is changed and the database is recreated, then all data in the database will be erased.
Since Rhodes 2.2.4, the Rhosync session is kept in the database, so SyncEngine.logged_in will return true. At the start of the application we can check if the database is empty and the user is still logged in and then run sync without interactive login.
Application db version in rhoconfig.txt:
app_db_version = '1.0'
We can list a few advantages and disadvantages of the fixed schema:
Advantages
- It is simple to use, attributes are not required to be specified before use
- We don't need to migrate data if we add/remove attributes
Disadvantages
- Size is three times bigger than the fixed schema
- Slow while synchronizing
Fixed Schema model
While using the Fixed Schema model, the developer is entirely responsible for the structure of the SQL schema. So when you add or delete some properties, or just change app logic you may need to perform data migration or database reset. To track schema changes, use the schema_version parameter in the model:
class Employee
include Rhom::FixedSchema
set :schema_version, '1.1'
end
We can see that we have set the schema version to 1.1. Now, if we change the schema then we have to change the version in the model.
We can list a few advantages and disadvantages of the fixed schema:
Advantages
- Smaller size, you can specify index for only required attributes.
- Faster sync time than Property bag.
Disadvantage
- You have to support all schema changes.
This is how the model looks in a Fixed Schema:
To add a column in the table we have to add a property and then reset the Device. It is important to note that we have to reset the database to reflect the changes.
To create an index in fixed schema we will write following line in model:
index :by_name_tag, [:name, :tag] #will create index for name and tag columns
To create a primary key in the fixed schema we will write following line in model:
unique_index :by_phone, [:phone]
Choosing between Property Bag or Fixed Schema depends on your requirement. If your schema is not fixed and keeps changing then use Property Bag and if we have huge data and non-changing schema then we use Fixed Schema.
Summary
We have the covered following topics in this article:
- What is ORM
- What is Rhom
- What is an Association
- We have explored various commands provided by Rhom
- Difference between Property Bag and Fixed Schema.
Further resources on this subject:
- Rhomobile FAQs [Article]
- An Introduction to Rhomobile [Article]
- Getting Started with Internet Explorer Mobile [Article]
- jQuery Mobile: Organizing Information with List Views [Article]
- jQuery Mobile: Collapsible Blocks and Theming Content [Article]
- Fundamentals of XHTML MP in Mobile Web Development [Article] | https://www.packtpub.com/books/content/how-interact-database-using-rhom | CC-MAIN-2016-07 | en | refinedweb |
Details
Description
The current thrift version is virtually useless because it's not getting updated when backward compatibility is broken (it's always 20080411-exported on all the snapshots,) I can't just tell people to use the trunk and not breaking things due things like namespace changes etc. Many projects maintain a reasonable versioning scheme even when system is in alpha state.
Thrift overall is stable enough to warrant a working versioning scheme. Can we at least start to discuss a version scheme, e.g., major.minor.patch or major.minor.micro.patch, where patch number changes are bug fixes and minor or micro are backward compatible changes and major changes indicate breaking backward compatibility or just marketing hype.
I propose that we call the current thrift version 1.0.0.0 after all the namespace changes and stick to a reasonable scheme instead of using suffixes (like beta<n>/rc<n> etc.) as it's much friendlier to other components that uses thrift.
Issue Links
- is depended upon by
THRIFT-1259 Automate versioning
- Closed
- is related to
-
Activity
- All
- Work Log
- History
- Activity
- Transitions | https://issues.apache.org/jira/browse/THRIFT-274?page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel | CC-MAIN-2016-07 | en | refinedweb |
netmagTutorial
Build an OpenEMI music app with Echo Nest
The Echo Nest offers a great API for creating apps using content licensed via the OpenEMI initiative. Marc George explains how to use it to build an app to play Gorillaz tracks
This article first appeared in issue 233 of .net magazine – the world's best-selling magazine for web designers and developers.
Last November, EMI Music teamed up with music intelligence platform The Echo Nest to provide developers with access to an incredible collection of music-related assets. Including material from high profile and well-loved artists such as Gorillaz, Pet Shop Boys, Tinie Tempah, Professor Green and many others. This collaboration represents the most extensive collection of licensed music, video and images to ever be made available in this way.
The Echo Nest provides a fantastic developer API that will not only serve up the assets but can also analyse audio tracks and provide detailed metadata about them. If you’re into music, The Echo Nest API has some amazing tools to build mashups and apps with.
The first decision that you’ll need to make is over which of the huge treasure chest of assets you want to make use of in your app. Assets are organised into ‘sandboxes’: individual collections grouped by artist, label or genre. You can get an overview of a sandbox’s contents by browsing the available ones – these range from the Blue Note label’s jazz, to an electronic music retrospective and Robbie Williams’s back catalogue, so quite a few bases are covered!
In order to get access, you’ll need to sign up to the Echo Nest developer programme. When you’ve registered, log in and go to the Sandboxes tab. Click on any of the sandboxes that might take your fancy, then read and agree to the terms, and press the Sign Up button. You might have to wait a little while, but you’ll be sent an email notification when your sandbox access has been approved and then you can start using the API.Client libraries
The API responds to HTTP requests with JSON or XML responses. There are many Echo Nest client libraries available that will construct API requests for you. You’ll find lots on the Downloads page. One thing to watch is that not all of the libraries listed there have implemented the Sandbox API. In this tutorial we’re going to be using PHP, with a fork of Brent Shaffer’s client, which includes the Sandbox methods. The code is hosted on GitHub; if you have Git installed you can clone the repository with:
git clone github.com/theflyingbrush/php-echonest-api.git
If you don’t happen to have Git yet, then download it from the zip link on the same GitHub URL. The PHP client library depends upon cURL support, so you’ll need your PHP install to have been compiled with it. You’re also going to want to run the code examples below in your local L/M/WAMP set-up.Using the API
Create a new project folder on your development server and Git clone the repository into it. If you’ve downloaded the zip, unzip it to your project and rename it php-echonest-api.
To authenticate with the API you’ll need the following pieces of information from your Echo Nest developer account: your API Key, Consumer Key and Shared Secret. All of these can be found on your Account page at the Echo Nest developer centre.
You’ll also need to know the name of the sandbox that you have been given access to. This is found on the Sandboxes tab of your developer account, in the middle column. As an example the Gorillaz sandbox has a key of emi_gorillaz.
The Sandbox API has two different methods: sandbox/list and sandbox/access. The sandbox/list method returns a paginated list of assets and doesn’t require any special authentication (other than your API key, and approved Sandbox access). However, sandbox/access requires additional authentication with OAuth.
Here’s how to instantiate the Sandbox API and authenticate it:
<?php require("php-echonest-api/lib/EchoNest/Autoloader.php"); EchoNest_Autoloader::register(); define("API_KEY", "{YOUR API KEY}"); define("SANDBOX_KEY", "{YOUR SANDBOX KEY}"); define("CONSUMER_KEY", "{YOUR CONSUMER KEY}"); define("SHARED_SECRET", "{YOUR SHARED SECRET}"); $echonest = new EchoNest_Client(); $echonest->authenticate( API_KEY ); $echonest->setOAuthCredentials( CONSUMER_KEY, SHARED_SECRET ); $sandbox = $echonest->getSandboxApi(array("sandbox" => SANDBOX_KEY));?>
Now the $sandbox variable is an authenticated instance of the API, and you can get a list of the contents of the sandbox like this:
<?php $assets = $sandbox->getList(); var_dump($assets);?>
The response from a getList() call is a PHP associative array, with the following keys: status (an array of information about the request), start (the index of the first asset returned), total (the number of assets in the sandbox) and assets (an array of asset descriptions). The assets key points to a subarray, each item describing an available asset. A typical asset looks like this:
Array ( [echonest_ids] => Array ( [0] => Array ( [foreign_id] => emi_artists:track:EMIDD0779716 [id] => TRGHIRC1334BAE7AD6 ) ) [title] => Spitting Out The Demons [filename] => emi_gorillaz/EMIDD0779716.mp3 [release] => D-Sides (Special Edition) [type] => audio [id] => 6aff61700721ca5b9262a06ef8cea717 )
Without parameters, getList() will return up to 15 assets per call, starting from the first one. You can pass an array of options to getList() and modify the default behaviour, which looks like this:
<?php $options = array("results" => 100, "start" => 100); $assets = $sandbox->getList($options); var_dump($assets);?>
Providing there are enough assets in the sandbox, the previous request would return 100 assets starting at the 100th. If there aren’t enough (or indeed any at all), then you’ll end up with a smaller or empty assets array.Retrieving assets
So, now you have some assets to work with, how do you get hold of them? Here, sandbox/access is the method to use, and it accepts one parameter: a unique asset ID. Many assets have only one simple ID, but audio assets can also have echonest_ids that associate assets with tracks in the wider Echo Nest API, and provide interoperability with other sandboxes via Project Rosetta Stone namespacing; read more on this in the documentation online.
So, taking the Spitting Out The Demons asset above, we’d access it with the following code:
<?php $asset = $sandbox->access("6aff61700721ca5b9262a06ef8cea717"); var_dump($asset);?>
The response from this request is an associative array, organised in the same way as a call to getList(), except that the items in assets use only URL and ID keys, URL being the one of interest as it is a link to the actual asset file. Asset links are timestamped, so cannot be cached or used after expiry.Indexing a sandbox
Although the Sandbox API enables us to page through assets using the results and start parameters of getList() it doesn’t provide any facility to search for specific items. If you want to locate a particular asset you’re going to have to make repeated getList() calls until you find it – which is inconvenient, slow, and will devour your API allowance. The approach to take, of course, is to create your own inventory and cache it so you can refer to it later.
There are many ways you can go about accomplishing this, but probably the easiest method is to store your getList() results in a database. After you’ve created a new database, you’ll need to loop over getList() calls, paging through the sandbox and storing the results as you go. In the example below, I’m making use of an existing MySQL connection, database, and an assets table:
<?php $page = 0; $num_assets = 0; do { $list = $sandbox->getList(array("results" => 100, "start" => $page * 100)); foreach($list["assets"] as $asset){ if(!empty($asset["echonest_ids"])){ $track_id = $asset["echonest_ids"][0]["id"]; } else { $track_id = ""; } if(!empty($asset['title'])){ $title = mysql_real_escape_string($asset['title']); } else { $title = "Untitled"; } $sandbox_id = $asset['id']; $type = $asset['type']; $filename = $asset['filename']; $query = "INSERT INTO 'assets' SET 'title' = '" . $title . "', 'sandbox_id' = '" . $sandbox_id . "', 'type' = '" . $type . "', 'filename' = '" . $filename . "', 'track_id' = '" . $track_id . "'"; $success = mysql_query($query) or die(mysql_error()); if($success === true){ $num_assets += 1; } } $page += 1; } while(!empty($list["assets"])); echo "Complete. Retrieved $num_assets assets.";?>
All being well, you should have a full table of assets that you can query against filename, asset type, title or Echo Nest ID. Having a searchable index like this is the cornerstone of building your own apps.
As mentioned earlier, The Echo Nest API can analyse audio and describe it in detail. The Echo Nest Analyzer is described as the world’s only ‘listening’ API: it uses machine listening techniques to simulate how people perceive music.
When you’re working with Sandbox API audio assets, the methods of most use are song/search (part of the Song API), playlist/static (from the Playlist API), and track/profile (part of the Track API). Songs and tracks are conceptually separate: the Song API returns broadly discographic information, where the Track API deals with specific audio data and analysis.
The song/search method enables you to query the Echo Nest database for songs by title or artist, but also those that meet any listening criteria that you may have, including categories like ‘danceability’, ‘energy’, or ‘mood’.
To use the Song API with sandboxes typically involves code like this:
<?php $song_api = $echonest->getSongApi(); $params = array("bucket" => array("id:emi_artists","tracks"), "artist" => "Gorillaz", "limit" => true, "min_danceability" => 0.5); $results = $song_api->search($params); var_dump($results);?>
The critical parameters in the call above are bucket and limit. The sandbox index that you made earlier will have track IDs, rather than song IDs in it, so you’ll need to instruct the Song API to return track IDs. You can do this by passing a two element array to bucket, containing tracks and a Rosetta namespace.
The namespace that you use depends on which sandbox you have access to. So for instance, if you’re working with the Bluenote sandbox, you would use id:emi_bluenote here. (You can check the Project Rosetta Stone entry in the online documentation at the developer centre for further information on this.)
You also need to make sure that results are limited to the namespace, so pass true to the limit parameter also. There’s a wealth of track profile parameters to use, for instance min_danceability as I’ve used here. Again, refer to the documentation for the whole shebang.
Each result returned from a song/search request will contain a track subarray that lists track IDs. Use these IDs to match the track_id in your index database. In this way you can search for a song, look up the track ID in your index and then retrieve the asset using the unique asset ID: playlist/static is interchangeable with song/search, except the songs returned are random and non-repeating.
The track/profile method works in reverse by accepting a track ID and returning not only the high-level information, but also the URL of a beat-bybeat analysis of the entire track. These complete analyses are incredibly detailed and have been used as the basis of some highly impressive Echo Nest hacks.
I’ve put together an example app that employs all of the methods described above. It uses the Gorillaz sandbox, but you’re welcome to download it and then configure it with your own keys if you want to try it out against a different one.
You’ll need an Amazon Product Advertising API developer account for this, because it draws in artwork from there. For fun, I’m visualising the data with three.js, the WebGL library – and it’s only compatible with Google Chrome at the moment.
First the app makes a playlist/static call for Gorillaz tracks in the id:emi_ artists namespace. It then makes a title search against the Amazon API and the iTunes Search API, and tries to find album artwork for each Echo Nest song.
When the data is loaded I use three.js to make a 3D carousel. When a carousel item is clicked, the app looks up the track is in the local index and then makes two calls – the first to track/profile for the high-level audio analysis (danceability, energy, speechiness and tempo), which is then shown as a simple 3D bar graph.
Finally we make a call to sandbox/access to download the associated mp3 and play it. You can preview the app and get the source from GitHub.
Discover 15 stunning examples of CSS3 animation at our sister site, Creative Bloq. | http://www.creativebloq.com/netmag/build-openemi-music-app-echo-nest-4135677 | CC-MAIN-2016-07 | en | refinedweb |
In this Tutorial we are going to discuss HQL with example.
Hibernate Query Language :
Hibernate Query Language(HQL) is introduced by Hibernate. It is just like SQL
except it is Object oriented query language.
As SQL operate on tables and columns as it is HQL interact with persistent objects and their properties.
Hibernate Queries is case insensitive ,except for names of Java classes and properties. It is like a bridge between database and application.
Reason to use HQL :
Example :
In this example we are using HQL.Student is persistent class mapped to student table. We are selecting student such student whose roll number is greater than 2.
Our HQL is -
String hql = "FROM Student stud WHERE stud.roll > 2";
Main Class code :
package net.roseindia.main; import net.roseindia.table.Student; import net.roseindia.util.HibernateUtil; import java.util.*; import org.hibernate.HibernateException; import org.hibernate.Query; import org.hibernate.Session; public class MainClazz { public static void main(String[] args) { Session session = HibernateUtil.getSessionFactory().openSession(); try { String hql = "FROM Student stud WHERE stud.roll > 2"; Query query = session.createQuery(hql); List list = query.list(); Iterator iterator = list.iterator(); while (iterator.hasNext()) { Student student = (Student) iterator.next(); System.out.println(student.getName()); } session.flush(); } catch (HibernateException e) { e.printStackTrace(); } finally { session.close(); } } }
Output
Hibernate: select student0_.roll_no as roll1_0_, student0_.course as course0_, student0_.name as name0_ from student student0_ where student0_.roll_no>2 Ron Roxi Ram
Download complete source code
Advertisements
Posted on: July Query Language
Post your Comment | http://www.roseindia.net/hibernate/HibernateHQL.shtml | CC-MAIN-2016-07 | en | refinedweb |
I am having trouble using a function statement in order to count vowels in an input statement here is a detailed description on what the program is intended to do:
1) Get line from user (ex. hi, this is a sentence.)
2) Program must recognize every vowel in the input sentence and count.
3) Must be done using some kind of loop and a function statement.
+++(I was thinking of a for loop)++++
4) Must total up every vowel found in the input sentence and out put data
Here is what I have so far, any suggestions will be greatly appreciated. Thank You
#include <iostream> #include <string> using namespace std; char vowels(char a, e, i, o, u); int main() { string input; int i; char ch; cout << "Please enter text: "; getline(cin, input); cout << "You entered: " << input << endl; cout << "The string is " << input.length() << " characters long." << endl; for(i = 0; i < input.length(); i++) cout << input[i]; cout << endl << endl; return 0; } char vowels(char a, e, i, o, u) { char ch; int vowelCounter = 0; if (ch = vowels; vowelCounter++) cout << input[i]; return vowelCounter; } | http://www.dreamincode.net/forums/topic/194820-counting-vowels-from-line-with-a-function-statement-no-pointer/page__p__1139633 | CC-MAIN-2016-07 | en | refinedweb |
So I was a sick for some time and missed a few uni lectures and I was at a complete loss when we were supposed to do the mock test. It was based on these resources I found on our module webpage. Obviously, it's not just a single thing I missed out on. I know the very basics of Java, I know a little bit about creating and using multiple classes with objects and I know a little bit about constructors. What do I need to catch up on to understand any of this code?
public class BankAccount { private double balance; public BankAccount () { balance = 0; } public BankAccount (double b) { balance = b; } public void setBalance (double b) { balance = b; } public double getBalance () { return balance; } public String toString() { return "\n\tbalance: " + balance; } } //end class BankAccount
/** Base class for clerk and customers: name and age. */ public class Person { public static final String NONAME = "No name yet"; private String name; private int age; //in years public Person () { name = NONAME; age = 0; } public Person (String initialName, int initialAge) { /* name = initialName; if (initialAge >= 0) age = initialAge; else { System.out.println("Error: Negative age or weight."); System.exit(0); } */ this.set(initialName); this.set(initialAge); } public void set (String newName) { name = newName; //age is unchanged. } public void set (int newAge) //name is unchanged. { if (newAge >= 0) { age = newAge; } else { System.out.println("Error: Negative age."); System.exit(0); } } public String getName ( ) { return name; } public int getAge ( ) { return age; } public String toString( ) { return (name + " is " + age + " years old\n"); } } //end class Person
public class Clerk extends Person { private String socialSecurityNumber; // constructor public Clerk( String initialName, int initialAge, String ssn ) { /* name = initialName; age = initialAge; */ super (initialName, initialAge); socialSecurityNumber = ssn; } public Clerk () { super(); socialSecurityNumber = ""; } // sets social security number public void setSocialSecurityNumber( String number ) { socialSecurityNumber = number; // should validate as } // returns social security number public String getSocialSecurityNumber() { return socialSecurityNumber; } // returns String representation of Clerk object public String toString() { return getName() + " (" + getAge() + " years old)" + "\n\tsocial security number: " + getSocialSecurityNumber(); } } // end class Clerk
public class Customer extends Person { private BankAccount ba; // constructors public Customer () { super(); ba = new BankAccount(); } public Customer ( String initialName, int initialAge, double b ) { super (initialName, initialAge); ba = new BankAccount (b); } // returns String representation of Clerk object public String toString() { return getName() + " (" + getAge() + " years old)" + "\n\tbank account balance: " + ba.getBalance(); } } // end class Customer | http://www.javaprogrammingforums.com/java-theory-questions/34478-what-do-i-need-learn-understand-code.html | CC-MAIN-2016-07 | en | refinedweb |
XML to JavaScript object parser based on libxmljs).
var parser = require'libxml-to-js';var xml = 'xml string';parserxmlif errorconsole.errorerror;elseconsole.logresult;;
With XPath query:
parserxml '//xpath/query'if errorconsole.errorerror;elseconsole.logresult;;
Due to the fact that libxmljs does not have any method for returning the namespace attributes of a specific element, the returned namespaces aren't returned as expected:
Example from the WordPress RSS 2 feed:
<!-- the rest of the doc -->
is parsed as:
'@':version: '2.0'xmlns:atom: ''sy: ''dc: ''content: ''wfw: ''slash: ''// the rest of the doc | https://www.npmjs.com/package/libxml-to-js | CC-MAIN-2016-07 | en | refinedweb |
A commonly asked question via ADN support and on our public forums is how to read or write DWG files from a standalone executable without having to install AutoCAD on the same machine.
This can be done by licensing the Autodesk RealDWG SDK. This SDK allows you to build DWG capability into your own application without having to install AutoCAD on the same machine and automate it from your executable. RealDWG is essentially the DatabaseServices part of the AutoCAD .NET API (or AcDb part of ObjectARX), along with supporting namespaces.
RealDWG doesn’t include AutoCAD ‘editor’ APIs, and so you can’t easily use it for viewing and plotting DWG files (unless you do a lot of work implementing your own graphics/plotting engine). If your customer won’t buy AutoCAD for that, but they need viewing and plotting with the same fidelity that AutoCAD provides, then consider AutoCAD OEM. AutoCAD OEM is a customizable AutoCAD that you can ‘brand’ as your own application, and from which you can expose a subset of the full AutoCAD functionality, and also add your own additional functionality. AutoCAD LT and DWG TrueView are examples of Autodesk products built using AutoCAD OEM.
Both RealDWG and AutoCAD OEM are licensed technologies. You can find out more from the Tech Soft 3D website. (Tech Soft 3D are our global distributor for RealDWG and AutoCAD OEM).
Here’s a video on RealDWG programming basics, recorded by DevTech’s Adam Nagy. | http://adndevblog.typepad.com/autocad/2012/03/readingwriting-dwg-files-from-a-standalone-application.html | CC-MAIN-2016-07 | en | refinedweb |
Online presence of Mridul Blog for mridul 2011-10-22T01:08:00+00:00 Apache Roller Last day at sun mridul 2007-10-25T17:58:37+00:00 2007-10-26T00:58:37+00:00 <br> At the last day of my first innings at Sun - 6 years and 3 months after I joined right out of college.<br> From the word go, I got the opportunity to work with some really brilliant engineers on really cool ideas, technologies & products - and that was one thing which continued to be so throughout my career here.<br> <br> I am joining Yahoo! research engineering in Bangalore on monday ...<br> I can be reached at<br> mail : mridulm80 <at> yahoo.com<br> xmpp: mridul <at> gmail.com<br> ymsgr: mridulm80<br> aim: mridulm80<br> <br> ... thats all folks and thanks to all !<br> <br> File transfer to an MUC mridul 2007-08-08T04:39:02+00:00 2007-08-08T11:39:02+00:00 We recently submitted a <a href="">proposal</a> to <a href="">XSF</a> to standardize what was a proprietary implementation for message moderation ... which we have had for a couple of years now. The other effort which we should probably kick off soon is file transfer to <a href="">multi-user chat room</a> - this is another custom implementation which we have been carrying on for a few releases now ... <br/> ! <br/> To give a basic idea of what we do:<br/> Currently, our client supports <a href="">IBB</a> -. <br /> As all the participants who can understand this are Sun IM clients - we just use IBB for now. <br /> The caveats are obvious - potentially very heavy server traffic due to the IBB. But the main advantages are: <ol> <li> The sender transmits only a single copy of the file - in comparison to a p2p model (imagine streaming a 512 KB file to a muc with 100 participants from a moderately low b/w connection).</li> <li> It will always work for all cases - irrespective of firewall, nat, etc. IBB is the lowest common denominator.</li> <li> Ability to analyze the content before sending it on to the recipients - including the possibility of archiving it (for participants who might join the room 'later'). </li> </ol> There are ofcourse other means of achieving the same while satisfying the requirements above - including other means of file transfer which are yet to be standardized (custom, future additions).<br/>. XKCD on lisp mridul 2007-07-31T22:50:07+00:00 2007-08-01T05:50:07+00:00 This, is absolutely inspired <img src="" class="smiley" alt=":-)" title=":-)" /><br /> <br /> <a href=""><img src="" /></a><br /> <br /> <a href="">xkcd</a> is as brilliant as ever ! Message moderation specs mridul 2007-07-28T11:22:25+00:00 2007-07-29T04:44:53+00:00 We recently submitted two proposals for message moderation - <a href="">Message Moderated Conferences</a> and <a href="">Managing message moderators in MUC rooms</a> - both of which are extensions to <a href="">Multi-User Chat</a> (muc) spec.<br> Taken together, they allow a message moderation system to be put in place. It has been a conscious decision not to introduce another set of acl's for this, but to reuse the affiliations and roles specified in muc. I will try to go through the basic intent and design decisions behind the specs. <h4>Message Moderated Conferences</h4> This proposal specifies how a conference which has message moderation enabled would interact with participants - particularly, participants who dont have 'voice' or the ability to post message to the room.<br> If message moderation is enabled, these occupants would be able to request the room for approving the messages they want to post - and on approval, room would publish them to all occupants of the room. Simple scenarios would be a celebrity chat, moderated webcast/presentation, etc.<br> The spec does not deal with how the message moderation happens - just the interaction between a occupant and the room - how they interact, state changes, notifications, etc. So in effect, you could have any number of backend moderation implementations - but the interface between the occupant and the room will remain the same.<br> The basic flow is as follows : <ol start="1"> <li>Occupant sends message to room for approval.</li> <li>Room assigns a message id to the submitted message (which is then used to identify the submitted message for all further interaction, in this and other moderation related specs) and returns that to the user with message in pending state.</li> <li>After the backend moderation system decides on the message, room will inform the submitted about the 'decision' - approved, rejected (or error).</li> <li>If approved, room will then multicast it to all the occupants.</li> </ol> That is it !<br> The actual moderation might be quite involved, with multiple moderators/moderation modules chained, clustered/distributed rooms, etc in complex deployments - but the occupant will have a simple and clean interface to communicate with the room (room's bare jid actually).<br><br> The way we designed it, hopefully this will someday get rolled into XEP 45 itself <img src="" class="smiley" alt=":-)" title=":-)" /> <h4>Managing message moderators in MUC rooms</h4> This spec defines one possible way in which message moderation can actually be implemented - where the moderation is done by participants who have sufficient rights (owner's and moderators of the room).<br> It defines how a room moderator/owner can become a message moderator (and other state changes), how they notify room of the moderation decision, and how the room is expected to manage these moderators and submitted messages for moderation.<br> <br><br> Both of these specs are based on our implementation - though what we support currently is quite different from what we have submitted.<br><br> <i>As a sidenote : It should be noted that, in both the specs we have tried to make sure that the interface between the various entities in question remains as simple as possible - while not precluding more complicated scenarios .. like usecases which are not specifically related to users chatting over a moderated conference (for example an approval workflow) - there could be custom configurations with extensions to room configurations which allow for more complicated scenarios while keeping the actual client interfaces (at both submitter and moderator side) constant.<br> </i> S2S connection availability & state recovery mridul 2007-03-26T15:50:00+00:00 2007-03-27T00:28:49+00:00 some thoughts on xmpp s2s availability and failover : and lack of protocol support for it. Server to server connections in xmpp is peculiar when compared to client to server connections.<br> In the latter case, it is clearly understood that on termination of the connection, the user is considered as logged out - but there is no such behaviour specified for s2s connections.<br> Actually, the xmpp spec is silent about these connections entirely (other than how to establish a new one and constraints on the stream).<br> <br> Let us consider a simple example to illustrate what I am trying to get to :<br> <br> 1) ServerA hosting domainA has userA@domainA.<br> 2) ServerB hosting domainB has userB@domainB.<br> 3) Assume they have 'both' subscriptions to each other - so userA & userB can see each others presence.<br> 4) Further, assume that both are online as userA@domainA/resA and userB@domainB/resB.<br> <br> So after step 4 we have -<br> a) ServerA has a outbound s2s to ServerB over which it pushed userA's presence (and probed for userB).<br> b) ServerB has a outbound s2s to ServerA over which it pushed userB's presence (and probed for userA).<br> It should be noted that for s2s, all outbound stanza's go in its own socket connection - so (a) and (b) are two different socket connection with xml streams in opposite directions.<br> <br> Now comes the interesting part - what happens when the connections break ? (one or both).<br> The spec is,silent about this and leaves that as an implementation detail.<br> What makes it interesting is a follow up of scenario above like this :<br> <br> 5.ServerA & ServerB break connection (assume both for simplicity).<br> After some time -<br> 6.a) userA logs out.<br> 6.b) ServerA crashes.<br> 6.c) ServerA is temporarily unavailable (network issues, etc).<br> <br> 6.a will result in ServerA reopening connection to ServerB and sending the user status update - the happy path.<br> <br> The other two - 6.b & 6.c have no easy solution - and actually each server implementation handles it in its own way (and usually not very appropriately if I am not wrong).<br> So, it brings up to the weird situation where a remote user is shown as online while he might not be ... or is not reachable.<br> This question is particularly relevant since most xmpp server's have the 'feature' to terminate outbound/inbound connections after some 'time' (usually if connection is idle for a period of time).<br> Which mean, step (5) is going to happen sooner or later.<br> <br> Hence the simplistic logic which is used for client sessions - if broken, consider unavailable - can definitetly not be used.<br> Simplistic workarounds like - when outbound is reestablished : send probe is also extremely expensive. (if 6.b/6.c did not happen - then 6.a will take care of keeping server's in sync !)<br> Presence is usually the bulk of xmpp traffic - and this problem directly pertains to that.<br> <br> Hence - when does a server refresh the status of a remote user ? (sending a presence probe)<br> How often does it send it ?<br> Is there any better 'solution' to this problem ?<br> <br> Interesting thoughts with implementation specific logic in place currently - which can potentially be a strain on open federation for admins worried about s2s traffic, load and the like.<br> In order delivery in xmpp servers mridul 2006-12-16T11:59:58+00:00 2006-12-18T06:31:59+00:00 Discusses some rationale behind thoughts on why strict in order delivery maynot be a 'good thing'. Looks like I opened a can of worms by asking about <a href="">ibb</a> (inband data transfer) in the standards list <img src="" class="smiley" alt=":-)" title=":-)" /><br> We always interpreted the <a href="">xmpp specs</a> to mean that the processing between any two entities at the server MUST be in order, but the spec does not mandate about actual delivery. It is indeed quite simple for client (not just a chat client) implementors if this assumption is mandated: the default binding to tcp encourages the assumption that this is the intention.<br><br> <b>Why is it that I believe this so tough to enforce for a server ? or is it really that tough ? And why do I mention ibb in this context ?</b><br> I cannot answer the second question for sure, but few of my previous attempts to enforce this when subjected to very high loads have lead to inferior performance. But yes, it is indeed possible that we might be able to get it working at marginally lower throughput at a later point in time.<br><br> <b>To the first, is it really that tough ?</b><br> Let us consider the case of a simple server which uses one thread per client. A request comes in, server processes it, writes/queues response (if any) and goes back to picking up another request from client.<br> Things cant get simpler than this (I hope) and for such a server, in order delivery is built into the design.<br> Would it scale ? Dont immediately write this approach off : you could have a tree of multiplexors which progressively combine input into fatter and fatter pipes, and assuming no latency for reads/writes to multiplexors from server, it could be made to scale for quite a bit.<br> <i>(Note: I am not intending to make the above look simple: just from the basic server design point of view - rest of the xmpp requirements : the spec requirements, xml parser, xmpp extensions, server management, etc would soon make any competent server product non trivially tough.)</i><br> But you will hit the limits to this approach soon enough.<br><br> Towards the other end there are various approaches using asynchronous IO, multiple stages and pipelines, thread pools, queues, buffering IO as much as possible (usually necessiated by async io anyway), etc : all carefully tuned considering the load characterstics and deployment considerations along with the behaviour of xmpp : large number clients persistently connected for a long time, but (typically) small xml stanza's and not sustained load of high IO traffic per client. (ofcourse like us, most implementations can and do handle sustained high load too, but that is more uncommon user scenario).<br> In spite of these, there are limits to how much you can vertically scale a single node: you are bound to hit the limits - be it cpu, memory, IO, etc. Hence, like our product, other xmpp server also support the concept of a pool of servers which collectively behave as a single server.<br> Horizontal scalability allows you to just add more nodes to the pool as the usage of the server increases (Ofcourse, this is not the only reason for a server pool: fault tolerance, lower latency to client, etc are other reasons but we will focus on this for now).<br><br> Now why mention all this ? As the number of pipelines increase as a server implementation/deployment scales: potentially spanning multiple nodes for the purpose of delivery, it becomes progressively tougher to enforce in order delivery. In order processing is much more simpler, the recepient of the stanza processes the input, generates the output to be delivered and it is done with it. The actual delivery could span multiple nodes within a logical server, and potentially get pushed out to a different server altogether.<br><br> Does that mean that there will be absolutely no inorder delivery possible at all ? Ofcourse not, atleast in our server, the server always makes a best case effort to deliver in order: but there are a lot of corner cases where a few stanza's could get delivered out of order.<br> Is this always a bad thing ? Not really - like if two unrelated xml stanza's get 'mixed up', big deal: neither will it be noticable, nor will it have any impact. But there are usecases where this will have impact.<br> A common enough usecase where it will make it more harder for clients (chat and others) is out of order delivery of messages: like if the recepient client gets "How are you ?" before "Hi", typically it means something is amiss. (There are means to handle corner cases this too btw, but would make clients slightly more complex).<br> Will this happen very commonly ? Atleast in our server, this can happen when a restricted set of conditions are satisfied: one of which is when a very high number and size of stanzas are transferred between two entities usually hosted on different nodes.<br><br> <b>IBB to the problem</b><br> And yes, you guessed it, file transfer using ibb for a large enough file which results in transfer of large number of 'big' stanzas : sizes which are an order of magnitude higher than typical xmpp stanzas, could hit this problem when clients push it fast enough.<br> Does it mean we do not support ibb ? Ofcourse not !<br> Even with the way the spec stands today (strict enforcement of inorder delivery), there are a bunch of things which are done to handle the higher sustained load of ibb file transfers and to ensure that all ibb stanza's within a transfer are always delivered in order.<br> Is this a good solution ? Definitely not. Other than the fact that this approach leads to all sorts of special casing and small overhead within some of the critical server codepaths, it is also an approach which does not scale at all. A high enough number of fast clients can make the expierence for all users in a server node a bit sluggish: not to mention additional resource overhead if remote servers cannot drain the stanza's fast enough.<br> It was this which lead me to query the list: each stanza is marked with a sequence id, use that to 'fill in the gaps' if stanza's are delivered out of order for a small enough window - but the very idea of out of order delivery was percieved as a bit too radical I guess <img src="" class="smiley" alt=":-)" title=":-)" /><br><br> A best case effort at inorder delivery is different from gaurentee'ed inorder delivery - the latter is much more stringent and has some very severe repurcussions on server scaling. When you position xmpp not just as a means to chat, but as a presence enabled, authenticated, realtime messaging middleware it becomes extremely useful as a messaging infrastructure for applications. As an example try doing async (gamma) notifications using webservices and through xmpp : the former will have all sorts of webservice callbacks, saml assertions, etc and the latter will be a simple message to the reciepent JID. Applications typically 'ramp up' the throughput per session remarkedly (it is not a user typing at 60 words per min anymore) and if current expierences with ibb is anything to go by, it is going to be a very interesting challenge ahead for xmpp server implementations if strict inorder delivery semantics is to be mandated <img src="" class="smiley" alt=":-)" title=":-)" /> server pool and expirimental stuff mridul 2006-11-28T09:03:18+00:00 2006-11-28T19:02:14+00:00 One of the things which we introduced with the previous interim release was the concept of server pool.The admin can have a combination of disparate set of boxes - solaris sparc/x86, linux and logically combine them to create a single XMPP deployment.Ofcourse, one deployment could support multiple hosted domains - so typically it is a single deployment : not really a single domain which is hosted on a pool.<br> <br> For the purpose of minimising internode communication, we have also introduced to concept of an xmpp aware load balancer called redirect server.The actual redirection mechanism is an spi and can be customized, but there are a bunch of out of the box methods - including ones which allow for roster based logical grouping of contacts to nodes.The server pool will redistribute any new load across itself in the face of failures ... So, introducing redundency into the pool not only helps in performance of a single node under normal operation, but allows the pool to operate smoothly when you are faced with network, power, etc outages.<br> <br> Taken together this results in some interesting usecases.<br> There is minimal network overhead and yet high degree of failover and scalability can be achieved - we have not really tested to reach the limits of how 'large' a server pool can be .... This essentially means that a service provider can not only support a large number of hosted domains, but also provide high availability for all of them with a single deployment.<br> As far as the end user is concerned, there is no difference whether he is talking to a server pool or to a single server: the behaviour is uniform and spec compliant.<br> If you consider the distributed nature of features like pubsub, muc, etc in the server pool - it essentially means that you do not have the concept of failure: unless every node in the pool goes down that is <img src="" class="smiley" alt=":-)" title=":-)" /><br> So essentially, you can use the server as a near-realtime messaging middleware with very high availability and scalability.<br> <br> And yes, the next release we have does include expirimental support for <a href="">caps</a> and <a href="">pep</a> at the server and in client api ... might not be the latest version of the spec though. But developers wishing to hack at it are welcome : this impl is not very rigourously tested - so I am not going to popularise it <img src="" class="smiley" alt=":-)" title=":-)" />, but we have used it for avataar and seen it working beautifully !<br> The <a href="">api which is hosted</a> at <a href="">collab project</a> of netbeans has the client side support for this (:pserver:anoncvs-AT-cvs.netbeans-DOT-org:/cvs , collab/service/src/org/netbeans)<br> Btw, did I mention that we have always supported <a href="">privacy lists</a> ? File transfer to a multi user chat mridul 2006-11-17T17:17:21+00:00 2006-11-18T02:01:31+00:00 File transfer to a Multi user chat is not defined in the current set of xmpp extensions. Though this does not really describe what our server/client does today, this post discusses a simple means to achieve this required functionality. <br> <h5>Background.</h5> When a user wants to chat with his contact, typically he does so with one to one messages - that is, he sends a <message /> with the to set explictly to the reciepent. Now, suppose he invites another contact into this chat : it ceases to be a one to one chat - there is an <a href="">xmpp</a> protocol extension for <a href="">multi user chat</a> (muc) defined for this.<br> In most clients, like ours, there is no visible difference in both usecases : the client upgrades the one to one chat to a private conference behind the scenes and things continue from there.<br> Actually in our client, there is no visible difference between a private one to one chat, a private conference or a public conference - the UI is consistent across all these usecases.<br> <br> <h5>The problem</h5> Unfortunately, functionality is not strictly the same ... and one of the key areas where this affects is in file transfer : something which users are used to. Currently, there are a <a href="">number of ways</a> defined on how to negotiate file transfer between two entities - how to negotiate the parameters, how to do the actual transfer - peer to peer/inband, etc.<br> But when it comes to file transfer in a muc, there is no standard extension or proposal.<br> This does not mean that file transfer is not supported at all - it just means that there is no standard way to support this.<br> For more than a year now, our client and server has been supporting filetransfer even in the case of conferences.<br> The problem is that, this works only with our client and server - as there is no standard in this space, none of the other clients can interoperate with our clients when it comes to this feature.<br> <br> <h5>Requirements</h5> Let us look at a minimal set of requirements :<br> <ol> <li>Different clients used by the participants in a conference might support different set of file transfer profiles. Hence, there need not be a intersection of transfer profile supported by all.</li> <li>If a single client is to multicast the file to all the participants, the bandwidth requirements could choke it.</li> <li>The server might want to decide who gets the files in a conference, and might want means to enforce this. Ditto for who can send a file, file properties, etc.</li> </ol> <h5>Potential solution ?</h5> I am not discribing what we support, but thinking of a really simple solution.<br> What could be done is :<br> <ol> <li>XMPP Server has an affiliated 'hosting service' - say a http, ftp, etc server : on which it has reasonably high control. Let us assume HTTP service in this discussion.<br> </li> <li>This service is not browsable, so you cannot get listing of files - similarly, it must not be possible to easily guess/find arbitrary files from this store - unless the server has divulged the URI to you. Hence URI's generated should also be opqaue and <i>non-guessable</i> to a reasonable degree.<br> </li> <li>When client wants to send a file to a conference, it will negotiate with the room - not with the participants.</li> <li>If transfer is to be allowed, the room will assign the client a URI to <a href="">PUT</a> the file to. Also, it might give it some sort of <a href="">authorization credentials</a> to be used at the HTTP service.<br> </li> <li>Once the client has transferred the file to the URI, it will notify the room - which in turn will multicast the location of the file at this URI to required participants (along with creds to fetch it, if any) : the sender would be specified as the initial user.<br> </li> <li>The participant clients will use GET to pick up the file from this service - while using the passed on creds.This way, even if you guess the URI, you will need to also guess the creds to retrieve it.</li> <li>After successful transfer, participant clients would notify the room of completion. A file could get purged when all participants have downloaded it, or if some room configured timeout expires - need not be a pro-active cleanup, but possibility of cleanup after both could be high (impl detail actually).</li> </ol> <br> The caveats are obvious :<br> <ol> <li>We are introducing a new form of file transfer profile - though we are using HTTP above, it is still a new mechanism !</li> <li>You are going to have N participants per conference hitting this service for the file - so the service should be able to scale well, in terms of client connections and bandwidth. Might not really be a concern for modern webservers though.</li> <li>Associating URI based credentials might not be that simple : especially if you consider the tie-in required with the XMPP server for achieving this at runtime (PUT, GET creds) : can it be done ? Ofcourse it can be - out of the box support exists ? I am not very sure if it does.</li> <li>You might need some mechanism to clean/purge stale files from the hosting service.<br> </li> </ol> <br> Comments, suggestions ?<br> Corner cases of presence probes in xmpp mridul 2006-11-14T20:23:54+00:00 2006-11-15T04:25:32+00:00 Some random thoughts on presence probes in rfc 3921 and the xmpp bis specs ... just random rambling, I could be grossly wrong ! Presence probes is an interesting idea in XMPP - you probe the presence of a contact.<br> Ofcourse, the presence is divulged only if the policies of the contact's server (handled by the hosting server : not client) allow user to query for the presence : but the basic idea is, you can do a pull of the presence - not just the customery push from the server.<br> The caveat is that, this is <a href="">expected to be used</a> by the user's server - not directly by the user. <br>There are two interesting things w.r.t presence probes which I am not very comfortable with.<br><br> The <a href="">initial presence section</a> in rfc 3921 has a slightly interesting snippet.<br> It says <i>):"</i><br> Now, what is interesting here is that, you cannot really say if a directed presence over S2S is a response to a presence probes, or a directed presence from a remote entity.<br> Which essentially means that, there can be no reliable caching of presence info on the local server for a remote user even if there are other resources of the same user are online on the server ... <br> That is, when a specific resource user comes online - it becomes mandatory for correct implementations to send presence probes to all remote contacts in roster with a both/to subscription : irrespective of whether there are other resources of the user online.<br> <br> Now let us come to the bis verions of the XMPP spec <a href="">here</a>. If you look at the definition of <a href="">presence probe</a>, you see see that it is expected to be used by the server on behalf of the user - same as is in <a href="">rfc 3921</a>.<br> Now, let us take a look at <a href="">directed presence</a>.<br> It talks about using probes <i>"A server SHOULD respond to presence probes from entities to which a user has sent directed presence" ...</i> as a means to find out the current availability of a contact.<br> The examples below do not really use <i>previous post</a> along with some clarifications and corrections. My <a href=''>previous post</a> outlined a brief idea, I will try to flesh it out a bit more in this post.<br> The main areas of focus would be :<br> <ol> <li>What are these 'named list' ? How do you use them ? What are its required properties ?</li> <li>When would it be a good idea to use this proposal ? When not ?</li> <li>What all can be optimised this way ?</li> <li>Some examples on how to implement it goddamnit !</li> </ol> <br> Hopefully this post will address my thoughts in the same order in which I list them above.<br> <h4>Named lists</h4> Named lists are a construct loosely based on <a href="">XEP 33</a>.<br> That is, their idea came to me through that spec - but they dont really share much with it except that I believe that XEP 33 could logically include this as an extension.<br> In this proposal, the main properties of named lists (in this context) are :<br> <ol> <li>Each named list has a 'sender jid' and a list of 'n' reciepent jids.</li> <li>Named lists can be removed at any point of time by the hosting server - so, remote servers cannot make assumptions about a lists existance.</li> <li>For an xml stanza recieved for a named list, the hosting server generates 'n' packets with 'from' replaced by the 'sender jid' and 'to' being the i<small>th</small> reciepent in the list : and then processes each of these stanza's as though they were recieved individually from remote server.</li> </ol> Now, given these we have the following cases :<br> <ul> <li>serverA sends stanza to named list 'routing_list' in serverB, list exists : so stanza's delivered -- Ideal case, maximum savings !<br> </li> <li>serverA sends stanza to named list 'routing_list' in serverB, list does not exist : error returned -- either serverA recreates a new list, or fallsback on more traditional approach of stanza delivery (suppose it finds that serverB is a bit too aggressive in list cleanup and decides against using lists, etc).</li> </ul> In both of these cases, serverA should recieve some sort of acknowledgement that the stanza was either recieved or error was encountered.<br> In XMPP, request-response pattern is modelled using 'iq' stanza's, so my current thoughts are to use that for this purpose.<br> The general structure will be :<br> <br> <ul> <li>serverA sending stanza to serverB.<br> </li> </ul> <iq from='domainA' to='named_list@domainB' type='set' id='someId'><br> <A single xml stanza to be delivered to the list without from and to attributes. /><br> </iq><br> <br> <ul> <li>Failure at serverB to deliver this stanza.</li> </ul> If this fails for whatever reason, we will have :<br> <br> <iq from='named_list@domainB' to='domainA' type='error' id='someId' /><br> <br> The reason is not important, what matters is that it failed and that the sending server should retry.<br> It could be 'cos the list was removed, or there was some other error - bottomline, cant deliver - so retry.<br> Typically, this will trigger serverA to either create a new list and try against that list name or stop using lists itself and fallback on 'traditional' methods (impl detail).<br> <br> Note: Here, I have removed the constrain for error responses that I placed yesterday : namely that the response MUST contain the actual stanza to be delivered within it.<br> This means that, until the iq response comes back from serverB, serverA will need to hold on to that stanza for retransmission purposes.<br> The reason why I removed this MUST requirement was 'cos we cant enforce constraints on the contained xml stanza returned in the error response : neither will we be able to validate or enforce it in case remote server is misbehaving for whatever reason.<br> This will also reduce the S2S traffic payload.<br> <ul> <li>Successful delivery </li> </ul> <iq from='named_list@domainB' to='domainA' type='result' id='someId' /><br> <br> This indicates that the delivery was successful.<br> Note that, this is delivery to the list that was successful - that is, names list existed when stanza was recieved and serverB could process the stanza for that list.<br> There might be errors while processing the actual generated stanza's later on - we are not concerned about that now.<br> <br> <br> <ul> <li>Creating a named list.<br> </li> </ul> Now that we have established 'how' serverA uses a named list on serverB - let us look into how it can create it.<br> As stated initially, the list creation is a simple enough stanza.<br> <br> <iq from='domainA' to='domainB' type='set' id='someId'><br> <create_list xmlns='sun:im:namedlist' sender='userA00@domainA'>> That is, we specify the sender JID and the list of jid's who form the list.<br> The response will either be a success (<iq from='domainB' to=domainA' type='result' id='someId' name='named_list@domainB'/> ) or error (<iq from='domainB' to=domainA' type='error' id='someId' /> ) - if error, dont retry but fallback on 'traditional methods' for delivery : like current XEP 33 defined methods or directed delivery.<br> Note: <br> <ol> <li>The name of the list is assigned by serverB - not serverA and no meaning MUST be associated with this at serverA other than as a jid (that is, no attempts to encode/decode info from node/resource, etc : both are opaque to serverA)..</li> <li>serverB CANNOT control the participants of a list - it MUST either create a list with all jids specified by serverA or return error (like invalid jid, access denied to a jid, other policy constraints, etc).<br> </li> </ol> <br> <ul> <li>Removing a named list.</li> </ul> In case the endpoint for whom serverA was maintaining this routing info on serverB does not need it anymore, then serverA could request serverB to remove this list.<br> <br> <iq from='domainA' to='domainB' type='set' id='someId'><br> <remove_list xmlns='sun:im:namedlist' sender='userA00@domainA' /><br> </iq><br> <br> The response to this stanza is not really relevent to serverA : it will be an error if list was already removed or result in case removal succeeds - in either case, there is nothing serverA can do or must attempt to do - both responses essentially mean that the named list is no longer present on serverB.<br> It is a MUST requirement that serverB periodically remove 'old' lists after some internal timeout - so even if serverA 'forgets' to remove a list, serverB MUST do its cleanup.<br> <br> Note that, even if serverA does not request a explict list removal - serverB is free to kick a named list out at any point of time without notifying serverA (as part of its cleanup).<br> It is expected that the lists are removed only after a reasonable timeout - but it is still purely the discreation of the list hosting server.<br> Similarly, serverB could refuse creation of a list without any reason - serverA MUST have alternate mechanism to deliver stanza's (the current - traditional approach).<br> <br> <ul> <li>How do you advertise this ?</li> </ul> <br> As of now, my thoughts are to advertise this as a stream feature.<br> The way I look at it, this is a basic enhancement to stream routing - so servers will exhibit this as a stream feature and named list MUST be enabled only if this stream feature is sucessfully enabled.<br> Ofcourse, it need not be enabled in both directions - so serverB might expose and allow it (so serverA can use it) - but not vice versa.<br> <br> <h4>When to use ? </h4> A few things are obvious :<br> <ul> <li>Use this approach when number of reciepents on serverB is above some minimum (Implementation detail of serverA - but obviously more than 1 <img src="" class="smiley" alt=":-)" title=":-)" /> ).</li> <li>When you are expecting to use the list frequently enough - or atleast enough number of times to justify the cost of list creation .<br> </li> <li>Presence broadcasts at start of a session would be a good usecase : you have atleast two stanza's to be sent - one directed presence, and a probe : so the cost is 'recovered'.</li> <li>Lists can, and MUST go out of scope - list hosting implementations MUST NOT depend on list creators to remove a list explictly : and removals MUST NOT be notified to the list creator.<br> </li> </ul> <h4>What all can be optimised ?</h4> A rough list would be :<br> <ol> <li>Presence information - both broadcasts and probes.</li> <li>Multicasting messages : in xmpp, this would typically mean <a href="">MUC</a>.<br> </li> <li>All other usecases mentioned in XEP 33 which can recur.</li> </ol> The MUC usecase can become tricky and is implementation dependent - but the basic idea would be that number of messages sent should be higher than list (re)creation (when users in a remote server join or leave). It also requires a higher amount of coupling between the server and the MUC component. <h4>An example.</h4> Let us consider the same example as yesterday - but this time, we look at the packets too !<br> <br> When userA00 comes online, serverA does not have a named list associated with userA00's contacts on serverB who should recieve his presence updates.<br> Hence, server creates that first.<br> <br> serverA:><br> <br> <iq from='domainA' to='domainB' type='set' id='someId1'><br> <create_list xmlns='sun:im:namedlist' sender='userA00@domainA/resource'>> serverB:><br> <iq from='domainB' to='domainA' type='result' id='someId1' name='named_list@domainB' /><br> <br> Now serverA will send the directed presence and probe to serverB.<br> <br> serverA:><br> <iq from='domainA' to='named_list@domainB' type='set' id='someId2'><br> <presence /><br> </iq><br> <br> <iq from='domainA' to='named_list@domainB' type='set' id='someId3'><br> <presence type='probe'/><br> </iq><br> <br> <br> serverB:><br> <iq from='named_list@domainB' to='domainA' type='result' id='someId2' /><br> <iq from='named_list@domainB' to='domainA' type='result' id='someId3' /><br> <br> Here I am assuming that the list of users who are subscribed to userA00's presence and to whom userA00 has subscribed to are the same - which was a constraint in our scenario.<br> In case both are not the same (like in case there are privacy rules applied, etc) , you will end up creating two lists.<br> <br> For each of the stanza's dispatched to the list, serverB ends up creating these stanza's and processes them as though serverA directly sent it across.<br> <br> <presence from='userA00@domainA/resource' to='userB00@domainB'/><br> <presence from='userA00@domainA/resource' to='userB01@domainB'/><br> <presence from='userA00@domainA/resource' to='userB02@domainB'/><br> <presence from='userA00@domainA/resource' to='userB03@domainB'/><br> <presence from='userA00@domainA/resource' to='userB04@domainB'/><br> <presence from='userA00@domainA/resource' to='userB05@domainB'/><br> <presence from='userA00@domainA/resource' to='userB06@domainB'/><br> <presence from='userA00@domainA/resource' to='userB07@domainB'/><br> <presence from='userA00@domainA/resource' to='userB08@domainB'/><br> <presence from='userA00@domainA/resource' to='userB09@domainB'/><br> <br> Similarly for probe.<br> serverB now responds back to serverA for the probe requests as though it was individually sent by the server.<br> <br> Let us consider a subsequent presence push by which time serverB has already removed the list.<br> <br> serverA:><br> <iq from='domainA' to='named_list@domainB' type='set' id='someId4'><br> <presence xml:<br> <show>away</show><br> <status>be right back</status><br> </presence><br> </iq><br> <br> (Note again - no from or to !).<br> <br> server:B><br> <iq from='named_list@domainB' to='domainA' type='error' id='someId4' /><br> <br> serverA can not either fallback on current approach sending out the stanza individually to the reciepents (serverA always knows who the reciepents (participants in the list) are !).<br> or it can recreate the list as above and retry.<br> <br> <br> Hope this clarifies the proposal a bit more ....<br> <span style="font-style: italic;">Updates:<br> <ol start="1"> <li>The careful reader will notice that the way I am encapsulating a stanza to be sent to a list can result in a schema violation. To solve it ? Have a wrapper element 'x' in a custom namespace 'ns' - this element just gets discarded and is present to be conforment with the schema. The mashup above is illustrative, not normative or formal <img src="" class="smiley" alt=":-)" title=":-)" /></li> <li>I do mention it in this post, but let me put it explictly here - if the presence-out and presence-in lists are different (privacy policy , assymetrical rosters, etc) you just end up creating different lists : and if the overhead is deemed high, just dont create a list ! There is nothing forcing server to use this approach in all cases ! It should be noted though that, these are slightly towards the corner usecases ... so the benifit to the server hosting a large number of users using it in a 'normal' way will be high enough.</li> </ol> Minimising S2S traffic in XMPP mridul 2006-11-08T08:30:44+00:00 2006-11-09T19:02:07+00:00 Discusses a potential proposal we are considering to implement which will minimise the interserver traffic tremendously without compromising on security, responsibility and accountability. <span style="font-style: italic;">Please refer to <a href=''>next post</a> for more clarification and correction to this proposal</span><br><br> <a href="">XMPP</a> has federation built into the protocol - not an afterthought retrofitted in.<br> Even though it allows to build a mesh of trusted federated network, it does have issues of scale in terms of handling traffic.<br> Let us look at a typical issue and how we are thinking of solving it (since there is no standard way to 'fix' this right now) ...<br> Ofcourse, this would be an extension as of now and so would work only on/between Sun servers if/when we end up supporting it.<br> <h3>A simple problem statement</h3> <h4>The scenario :</h4> The scenario is slightly contrived <img src="" class="smiley" alt=":-)" title=":-)" /> But is intentionally so to illustrate the problem.<br> Consider the following 'configuration'.</b> <ul> <li>serverA and serverB serving domainA and domainB respectively.</li> <li>serverA has users userA00 to userA09 - similar user set userB\* for serverB : twenty users in all.</li> <li>Additionally, all users in serverA are subscribed to each other and to users in serverB - similarly for serverB users : that is, every user is subscribed to every other user.</li> </ul> Now consider this runtime situation : <ol start='1'> <li>Users userA08, userA09 (domainA on serverA) and users userB08, userB09 (domainB on serverB) are online - all others are offline.</li> <li>User userA00 connects.</li> </ol> Let us analyze the situation when userA00 <a href="">sends its available presence intially</a> to serverA (a single xml stanza). In the current situation, this result in the following traffic between the various entities. <h5>Local traffic</h5> <ul> <li>serverA sends the presence of userA00 to userA08 and userA09.</li> <li>serverA sends the presence of all local users (userA\* - userA00) to userA00 : 9 stanza's in all.</li> </ul> As you can see, it is as optimal as it can get .. only info to required subscribers is sent - and only info about subscribers who are available are sent. Now let us consider the traffic over the federated link (over S2S). <h5>S2S traffic</h5> <ul> <li>For each user hosted on serverB which is in userA00's roster, serverA sends the directed presence on behalf of userA00 over the s2s link to the 'barejid' of contact on serverB : that is userB00 to userB09.</li> <li>For each user hosted on serverB which is in userA00's roster, serverA sends a presence probe on behalf of userA00 over the s2s link to query for the presence : that is userB00 to userB09.</li> <li>serverB responds to the presence probe with status of the userB00 to userB09 - 10 stanza's in all.</li> <li>serverB sends the presence of userA00 to userB08 and userB09 (not really relevant to this discussion).</li> </ul> If you compare the traffic between serverA to serverB connection and that of userA to serverA, it is immediately evident that it is highly skewed - the traffic between both server is suboptimal as compared to the traffic within a single server.<br> You have 30 stanza's over S2S, while the same thing has only 10 for user case - that is 2 \* 'number of contacts' more !<br> <br> Now, to be fair, the reasons for this should be evident.<br> serverA cannot trust serverB to 'do the right thing' - it cannot trust that the user's roster in serverA is in sync in serverB's users roster, etc.<br> The issues are many and boils down essentially to, who is responsible and accountable to whom : so serverA cannot offload his responsibility to serverB - that is a given.<br> <br> How can we 'solve' this issue with these restriction ?<br> That is : <ol start='1'> <li>serverA is accountable for its users and should make best case effort to make sure that there are no presence leaks and only contacts who are subscribed to user's presence according to serverA's roster get the update</li> <li>Minimize traffic as much as possible</li> </ol> <h3>A potential solution ?</h3> What we are thinking of doing to solve this problem is the following :<br> Make following extensions to <a href="">Extended Stanza Addressing</a>, namely : <ul> <li>We will not be using the XEP for a mailing list type functionality - purely for creating adhoc routing lists.</li> <li>We will be extending it such that, we can create a list on a server and reference it with a 'name'.</li> <li>This named list could go out of scope at any time (depending on how long the server keeps it around)</li> <li>In this usecase, if list goes out of scope - serverA will recrete it.</li> </ul> So how does this help ?<br> Let us consider how we will use these extensions to solve our 'problems': <h5>How it will work now : S2S traffic</h5> <ul> <li>serverA will create a named list with owner set to userA00 on serverB with list of 'to' set to all subscribers of userA00 on serverB.</li> <li>named lists MUST contain only local users when created by a foreign entity (prevent abuse)</li> <li>serverA will send a single directed presence to this named list.</li> <li>serverA will send a single presence probe to this named list.</li> <li>For each user in this named list, serverB will handle the packet as if it was a) with 'from' set to the JID of the list owner, b) destined to the user in the list.</li> <li>serverB will respond back to serverA with 'to' set to that list owner.</li> </ul> In this case, we will have - 'list creation overhead' + 2 stanzas (presence and probe) + response for each jid in the list.<br> So we have the traffic coming down to optimal case + constant overhead !<br> Note : <ol start='1'> <li>The actual list name would be assigned by serverB - and would be totally random to prevent collusions : no meaning can be derived out of it.</li> <li>The list can 'expire' at any time - so if serverA sends to a list and finds that it does not exist (serverB returns error), it MUST recreate it and send the stanza again to this new list : serverB MUST include the original stanza to nonexistant list in the error response.</li> <li>Other than the creator of the list - serverA and host of the list serverB, no one else knows the list details : and there should be no way for any other entity to query for this information.</li> </ol> No progress for a loooong time mridul 2006-09-12T09:30:00+00:00 2006-09-12T16:34:14+00:00 Yep , been ages since I did anything at <a href="">xmpp-im-client</a> : and all blame to be laid on me.<br>One thing let to another and I was neck deep in fixing the <a href="">server</a> up quite a bit !<br>The next release when it does come out is going to be quite interesting <img src="" class="smiley" alt=":-)" title=":-)" /><br>Some very interesting usecases are going to be possible ... patience for a couple of months more !<br><br>So , I will need to tie up loose ends for a couple of more weeks : and then hopefully I will be able to get enough of my life back to actually start working on the project.<br> Closures in JDK7 ? mridul 2006-09-03T15:18:34+00:00 2006-09-03T22:18:34+00:00 A very clear and concise post by Neal Gafter on the use , differences and benifits of closures <a href="">here</a>.<br>When I read the initial proposal , it was slightly confusing.<br>I felt that the return syntax smelt like goto's , ability to access non-final local variables was worrisome , etc.<br>The ideas on the last two posts (<a href="">here </a>and <a href="">here</a>) are very instructive and clarify things quite a bit.<br>The synchronous usecase is something I have tried time and again to solve within my code - usually inefficiently with hacked up callbacks : closures would be a much more cleaner fix for this problem.<br><br> AJAX and HTTP - transport management : JEP 124 ? mridul 2006-08-27T22:57:06+00:00 2006-08-28T06:01:54+00:00 As more and more websites and applications start moving to using the AJAX paradigm , people are going to hit this problem of how to use HTTP as a transport 'reliably'.<br>What I mean is , HTTP is essentially similar to a one way datagram approach to communication.<br>You send request A , wait for response A - move on.<br>Since this need not be a direct connection from client to server (even if it was) , the request could fail to be delivered , response could be 'lost' , connection errors , etc could intervene.<br>If they are <a href="">idempotent</a> methods being made , then you can reissue them in case the request fails for whatever reason.<br>If not , things are slightly trickier - but there is some sort of support in most frameworks on how to handle this currently.<br>Add HTTP/1.1 persistent connections to this mix and things get really interesting - one socket fails : and a whole bunch of requests are lost.<br>Also, AJAX typically issues POST in most cases - a non-idempotent method.<br>Add to it the requirement to 'poll' the backend for data or updates and you have a interesting problem : you are trying to simulate a HTTP client into being something similar to a bidirectional stream : simulate is a key word here , mind you !<br><br>So you have requirements to 'manage HTTP as a transport' coming both at the client side and at the server side.<br>In the XMPP world , when support for HTTP was initially added the first approach was a <a href="">pure polling</a> solution.<br>As can be guessed , they later developed the HTTP Binding solution under <a href="">JEP 124</a>.<br><br>This is a very very interesting extension and I believe is the key to solving most of the problems above.<br>Though defined for XMPP , it is a solution to essentially act as a transport for any xml based protocol : only requirement, one or more "full xml stanza's" are exchanged - not partial xml data.<br>It solves the problems of retransmission , session management (no need for cookie's , url rewriting , etc) , support against replay attacks , etc.<br>Heck , we have XMPP - a purely xml based streaming protocol working on top of this successfully !<br><br>How does this help AJAX ?<br>All the browser based XMPP clients which support this JEP have some variants or other javascript library in place to actually do all this.<br>So , potentially you could have :<br><your application> --> <jep 124 client side js library> --- HTTP ---> <HTTP Bind gateway> <-- Custom protocol --> <Your server><br>With suitable modification (IF your app is not XMPP that is <img src="" class="smiley" alt=":-)" title=":-)" /> ... why not move to it? Read prev <a href="">post</a> on some advantages) , you can use the ideas from JEP 124 to solve a whole bunch of problems with using HTTP as a reliable transport.<br>Add to it the fact that you already have server side and client side opensource component's available to support this - and you have the perfect solution just waiting to be used.<br><br>Happy coding ! XMPP as an infrastructure mridul 2006-08-24T12:40:00+00:00 2006-08-24T21:28:51+00:00 In this post , I briefly try to push how XMPP can be used as a infrastructure - and not just as 'yet another chat protocol'.<br>It is not as descriptive or authoritative as I would like it to be ... but here goes nothing.<br><br>The abstract from <a href="">RFC 3920</a> says : <br>"<span style="font-style: italic;".</span>"<br><br>For the purpose of this discussion , let us take a small subset of what XMPP provides which allows us this :<br><ol><li>As specified in section 4 of <a href="">3920</a> , an xml stream between two entities which allows for exchange of structured xml stanza's at realtime.</li><li>A binding to tcp and defines how to <a href="">secure</a> and <a href="">authenticate</a> the stream.</li><li>Defines the top level stanzas allowed in this xml stream. These model request-response , broadcast and one way meps.</li><li>Server federation , routing rules , multiple resources per node built into the protocol.</li></ol>Let us consider the implications of these.<br>We have an authenticated and secure stream between two entities - so this asserts the client identity.<br>This explictly identifies an entity within , not only its server - but if federated , across the XMPP universe of federated servers ! This guarantees that any stanza orginating anywhere in this grand federation of XMPP servers will be unambigously routed to this specific client endpoint<br>Defines an xml stream ... most data transfer protocols have already moved , or are moving towards standardizing on usage of xml : so XMPP provides a native transport mechanism for these.<br>The MEP's supported allows us both synchronous communication and asynchronous notification - in either direction: hence both push and pull models are natively supported.<br>The protocol itself allows for presence notifications through a publish subscribe model.<br><br>Now , the XMPP community has come up with a variety of protocol extensions which have been standardized as <a href="">JEP's</a>.<br>A few notable ones which are relevent for this discussion are :<br><ul><li>A flexible <a href="">discovery</a> mechanism : allowing various clients/servers to interoperate. It helps either side to discover the supported subset and allows protocol extension without breaking compatibility.</li><li><a href="">Multi User Chat</a> - though it defines how to set up a 'chat room' : it can be used or modeled to multicast message's to a subset of interested entities.<br></li><li>A powerful <a href="">publish subscribe</a> extension for XMPP.</li><li><a href="">Advanced message processing</a> allows specifying additional delivery semantics for messages.</li><li><a href="">HTTP Binding</a> - allowing XMPP to be bound to HTTP. (More on this later)<span style="text-decoration: underline;"></span></li></ul>All this taken together allows for providing solutions for some challenging problems.<br>We could use xmpp as a middleware for realtime communication and use it for both synchronous and asynchronous notifications. Some examples :<br><ol><li>Pushing information to interested entities - like stock quotes to customers or brokers , etc.</li><li>A whole bunch of publish subscribe usecases can be solved.</li><li>Multicast stanza's as messages to interested entities.<br></li><li>Allows us to write presence aware applications - the entity in question need not really be another 'person' ! Also , use of <a href="">CAPS</a> and <a href="">PeP</a> will definitely make this world of applications much more interesting !<br></li></ol>Now , add <a href="">JEP 124</a> (httpbind) into this picture and things become really interesting.<br>Httpbind essentially can be considered as way to tunnel a xml stream based protocol on top of http : no , not the old CONNECT hack that most implementations use , nor a dumb frequest poll mechanism.<br>As an analogy , you can consider it to be similar to how you would simulate a persistent stream on top of UDP - only it is HTTP here instead of UDP.<br>The extension takes care of retransmissions , packet loss , request/response id , etc and makes allowances for HTTP characteristics .<br><br>Hence , all the power of XMPP is now available over HTTP.<br>So be it a rich client application or a rich internet application - both can talk XMPP and harness its power !<br>And with it , allows all usecases of using XMPP as a realtime messaging middleware available to all developers/users on most platforms - be it a RIA or a thick client.<br><br>XMPP - coming to a browser near you <img src="" class="smiley" alt=":-)" title=":-)" /><br> "Sun recoups server market share" mridul 2006-08-23T07:30:00+00:00 2006-08-23T14:33:22+00:00 <a href="">Here</a> is a report from CNET News talking about Sun regaining market share - and more importantly , gaining revenue while all other major vendors lost !<br> XMPP Interop event mridul 2006-07-27T00:10:08+00:00 2006-07-27T07:10:08+00:00 Just got back yesterday from Portland after the <a href="">XMPP interop event</a> ... it was a really cool event.<br>We had 6 completely different servers interoperating without too much of a problem - including when some funky multi-byte testcases were tried in S2S which <a href="">stpeter</a> dreamed up <img src="" class="smiley" alt=":-)" title=":-)" /> There were some hiccups when TLS was enabled with SASL External / dialback ... but nothing too serious which could not be fixed.<br>The achievments of the event are pretty incredible - there is no other protocol on instant messaging which can possibly boast of achieving something like this - most dont even have an implementation of size more than 1 !<br><br>While the testing was going on , in parallel we had some very productive discussions , including to clarify on aspects of the <a href="">RFC's</a> and <a href="">JEP's</a> : what was debated upon would get translated into changes or discussions at <a href="">standards-jig</a> pretty soon !<br>Portland looked "pretty" - from what little I saw , and the event venue itself allowed for a real nice view of the downtown.<br>I could not attend OSCON unfortunately ...<br> Explaining the auth code ... mridul 2006-06-23T10:21:30+00:00 2006-06-25T18:10:26+00:00 Now to explain what is checked in ... the main purpose of the whole project <img src="" class="smiley" alt=":-)" title=":-)" /><br><br>Essentially a simple framework for authentication has been checked in. Let us take a look at the Classes involved.<br>There is a AuthenticationManager in net.java.dev.xmpp.client.core , which handles the actual session creation and management.<br><br>In the collab api , you create a collab session using a CollaborationSessionFactory - which delegates the actual session creation to an instance of CollaborationSessionProvider.<br>The default CollaborationSessionProvider is org.netbeans.lib.collab.xmpp.XMPPSessionProvider - this allows us to create a direct socket based session to a XMPP server.<br>Other CollaborationSessionProvider that can be used are :<br><ol><li>org.netbeans.lib.collab.xmpp.XMPPSecureSessionProvider - This provider allows you to use legacy SSL to the server. This is a deprecated jabber protocol where the server always listens on SSL mode : like what you have in http/https case.</li><li>org.netbeans.lib.collab.xmpp.XMPPComponentSessionProvider - This allows you to create a 'trusted' component session : please refer to <a href="">JEP 0114</a> for technical info.</li><li>org.netbeans.lib.collab.xmpp.ProxySessionProvider - This provider allows you to create a XMPP session tunneling through HTTPS or SOCKS proxy. The actual proxy to talk to is picked up from the service URL.</li><li>org.netbeans.lib.collab.xmpp.XMPPSecureComponentSessionProvider - allows creation of component session when server is in legacy SSL mode.</li><li>org.netbeans.lib.collab.xmpp.httpbind.HTTPBindSessionProvider - allows creation of a HTTP based XMPP session as per <a href="">JEP 0124</a>. The connection related info is picked up from the service URL.<br></li></ol><br>To use a non-default provider , you will just need to specify the classname as the parameter to CollaborationSessionFactory constructor (in our case , constructor of AuthenticationManager) ! Rest of the details are taken care of by the collab api - and it exposes a uniform behaviour irrespective of the underlying transport : even when the underlying transport is not a dedicated socket stream like in httpbind case !<br>Ok , almost ... there are a couple of little details a user still has to take care of .... let us finish those of too.<br><br>If in the process of session establishment , an untrusted certificate is provided as part of TLS , the decision whether to accept the certificate or not depends on what the provided CollaborationSessionListener says.<br>That is , if the listener is an instance of org.netbeans.lib.collab.SecureSessionListener , then the corresponding "onX509Certificate" method is invoked ... else the certificate is treated as untrusted and session terminated.<br><br>If the service URL points to the form "host:port" , the collab api uses that directly as the destination XMPP server/port.<br>In case it is of the form "domain" , it tries two things one after the other.<br><ol><li>It tries to do a dns query for the specified "domain" to try to find out if there is a xmpp service registered for it. If yes , it uses that as the destination server. This way , the actual xmpp server hosting a domain could be decoupled from the domain itself. Refer <a href="">here</a> for more details.</li><li>If (1) is not the case , the api treats "domain" as XMPP host with default port of "5222".</li></ol>Well , not always - service URL is overloaded to mean something specific in case of two providers ... as explained below.<br><br>ProxySessionProvider and HTTPBindSessionProvider use the service URL as a way to obtain required parameters.<br>In all the other providers , the service URL is just the destination "host : port" (or domain in case you want to do a dynamic lookup of the XMPP host servicing that domain).<br>In these two providers , the semantics are different , let us see how :<br><br><ul><li>ProxySessionProvider</li></ul>As of now , the service URL is expected to be in the form :<br><protocol>://<proxy host>:<proxy port>?(name=value)(&name=value)\*<br><br>The protocol can be one of "socks" or "http"/"https" : socks will tunnel through a socks proxy , while http or https will tunnel using http CONNECT.<br>The query names currently handled are :<br><ol><li>"keepalive" : After authentication succeeds , the client will send whitespace pings to the server after each keepalive interval : this is so that intermediate proxies do not terminate the connection assuming it is inactive.</li><li>"authname" , "password" : The authentication credentials to talk to the proxy.</li><li>"service" : The 'actual service url' !</li><li>"usessl" : Whether to use legacy SSL to talk to the XMPP server after tunneling through the proxy.</li></ol>So an example service URL would be of the form :<br><br><br><br><ul><li>HTTPBindSessionProvider</li></ul>The service URL in the case of this provider is the actual URL used to get in touch with the httpbind gateway ... except for the fact that the query parameters are removed.<br>The actual query parameters are defined in org.netbeans.lib.collab.xmpp.httpbind.HTTPBindConstants ... and allows for a wide range of customisations. The only mandatory query parameter is the "to" parameter. <br>This specifies the domain the user wants to log into : and should be serviced by the specified httpbind connection manager.<br><br>Another important point related to httpbind is related to proxy support.<br>The default two ConnectionProvider's for httpbind <span style="font-weight: bold;">do not</span> support proxies directly.<br><span style="font-weight: bold;">But</span> , they use URLConnection to get to the httpbind gateway , so you can specify the appropriate proxy using the corresponding java properties and expect them to be used.<br>The main reason why they do not support it is 'cos the 1.4 java.net.\* code does not provide a way to explictly specify a proxy... as and when the api moves completely to 1.5 , this implementation will get revised.<br>Ofcourse , you can always write and provide a custom ConnectionProvider in the service url which picks up the proxies and uses it !<br>Please note that "proxytype" and "proxyhostport" is <span style="font-weight: bold;">expected</span> to be provided even if you explictly set the java proxy vairables ... this is so that is the collab api's move to 1.5 and start using the java.net.Proxy class , the client code should not need to be modified.<br><br>So a sample HTTPBindSessionProvider service URL would be :<br><br><br>Other than specific customisation's , there is nothing much else that needs to be taken care of !<br><br><br>[Technorati Tag: <a href="">XMPP</a>]<br>[Technorati Tag: <a href="">Sun IM api</a>]<br>[Technorati Tag: <a href="">xmpp-im-client</a>]<br> Checkins finally ! mridul 2006-06-21T16:43:43+00:00 2006-06-25T18:10:46+00:00 First off , I took much more time than I anticipated before I could finally make some checkin's to <a href="">xmpp-im-client</a>.<br>Without giving excuses , I will just mention work as the reason <img src="" class="smiley" alt=":-)" title=":-)" /><br><br>Ok , now that we have cleared that , what has been checked in ?<br><ol><li>Very basic skeleton of the project - how it would be structured , how it will use external api , etc.</li><li>The first few pieces of code ! (More on this below).</li></ol><br>The basic skeleton of the package structure , build structure , etc has been checked in - if you are part of the project (if you are not , what are you waiting for !) , feel free to comment about it in case you have any reservations or suggestions.<br><br>A few classes have been checked in :<br><ol><li>A session manager which takes care of creating session's and handles reconnections as needed. This is envisioned to be common across UI modules.</li><li>A basic simple CLI based UI has been checked in - as of now , it does nothing other than create a XMPP session using the collab api (using the wrapper described in prev bullet).</li></ol><br>So session creation is done .... almost.<br>What else can be done ?<br><ol><li>Change the sessionprovider passed to AuthenticationManager to try out different providers : legacy ssl , httpbind , proxy support , etc !</li><li>If required , implement org.netbeans.lib.collab.AuthenticationListener also in CliSessionListener - this will allow you to select and control SASL based authentication.</li><li>For more advanced SASL support - like custom client SASL modules , along with (2) above use <AuthenticationManager>.getFactory().getCollaborationSessionProvider().registerProvider( <SASLClientProviderFactory> ) !<br></li></ol><br>Though the code checked in looks deceptively simple , it handles almost everything required for session creation - including reconnection when connection gets dropped.<br>The next functionality that we target will be just reusimg the CollaborationSession that we created using the above to build on to add more complex functionality !<br><br>Check out the <a href="">discussion forum</a> where I have added some blurb on how to test this.<br><br>So here to happy IM'ing !!<br><br><br><br>[Technorati Tag: <a href="">XMPP</a>]<br>[Technorati Tag: <a href="">Sun IM api</a>]<br>[Technorati Tag: <a href="">xmpp-im-client</a>] Discussions at xmpp-im-client mridul 2006-05-17T14:00:00+00:00 2006-06-25T18:11:10+00:00 Initial idea was that I will just forge ahead with design/coding and add stuff/refactor things as they develop.<br>When I saw that a small number of other developers have joined in the <a href="">effort</a> , decided to slow things a bit and atleast discuss the design before going ahead. The smattering of snippets on my workspace will remain there until we come up with some basic agreement.<br>This is the right time to join in case you want to influence the project from the start !<br>The unexpected delays have been , as I explained before , due to some deadlines at work - which have been cleared : full speed ahead !<br><br>I have also created a new Category "XMPP_IM_Client" in my blog under which I will post discussions and posts on this topic.<br><br><br>[Technorati Tag: <a href="">XMPP</a>]<br>[Technorati Tag: <a href="">Sun IM api</a>]<br>[Technorati Tag: <a href="">xmpp-im-client</a>] No updates mridul 2006-05-09T17:21:06+00:00 2006-05-10T00:21:06+00:00 Unfortunately , caught up with beta build for JES5 ... so that means not much time to blog and even less to work on <a href="">xmpp-im-client</a> <img src="" class="smiley" alt=":-(" title=":-(" /><br>I checked in a barebones structure there - no code , just a build file which pulls the required dependency.<br><br>Will get to posting some actual code this weekend ... so will keep all posted !<br>In the meantime , feel free to join the project like others have done already <img src="" class="smiley" alt=":-)" title=":-)" /><br> Matisse for eclipse mridul 2006-05-01T13:01:39+00:00 2006-05-01T20:01:39+00:00 Have you seen <a href="">Matisse</a> ? Most probably you have ! It is an award winning GUI builder which comes with <a href="">netbeans</a>.<br>It is one kickass GUI builder ... and even a person like me can create a UI using it <img src="" class="smiley" alt=":-)" title=":-)" /><br><br>Now , eclipse folks have long since been clamouring for an equivalent ... and looks like they are finally getting their wish !<br><a href="">Matisse4MyEclipse</a> is out ... hope it is as good as the netbeans version.<br><br> xmpp-im-client mridul 2006-04-27T11:38:18+00:00 2006-06-25T18:11:32+00:00 As previously promised , I plan to start off writing a client.<br>The focus will be more on exposing the api exposed and (very) less of the UI.<br>So , either this code can be used directly to write a more fancy UI layer on top , or it can be used as a tutorial on how to understand and use the api.<br>Primary focus will be on the latter : hence code will be as clean and readable as possible with copious meaningful comments <img src="" class="smiley" alt=":-)" title=":-)" /> (Normally , both of these would be false for my code <img src="" class="smiley" alt=":-P" title=":-P" /> )<br>Where is the code going to be ? It will be <a href="">here</a> at <a href="">dev.java.net</a> - feel free to join the project !<br>I have just got the project approved and am a bit busy right now ... so the project is empty as of now - but expect action pretty soon !<br>The rate of progress would typically be not very fast and this blog will be in sync with it : as a new feature gets implemented , the related code and api would be discussed here.<br><br>Next entry : what are the dependencies of the api , how and where do we get them from , how to get and setup the api.<br><br><br>[Technorati Tag: <a href="">XMPP</a>]<br>[Technorati Tag: <a href="">Sun IM api</a>]<br>[Technorati Tag: <a href="">xmpp-im-client</a>] Writing an IM client mridul 2006-04-21T02:54:45+00:00 2006-04-21T09:54:45+00:00 I was thinking up how to illustrate the various aspects of the IM api ... and rather than write up posts about each facet with info about them , my 'current' idea is this :<br> We will author a client - you and me together.<br> It wont be a fancy swing client : I am horrible at UI anyway <img src="" class="smiley" alt=":-)" title=":-)" /><br> Instead of just writing dry posts with some code snippet , what I will do is actually build a new text based client from scratch using the IM api.<br> We will incrementally add functionality to it ... showcasing each aspect of it as we proceed.<br> It will not be exhaustive , obviously ... and you will be able to write far more fancier and powerful clients than what I will end up authoring here.<br> But the basic idea of how to use the IM api should be evident.<br> <br> As I illustrated in the previous post , the client as such is going to be agnostic to the underlying communication mechanism ... so you can easily switch to using different session provider to use httpbind , tunnel through socks/https proxy , use legacy SSL mode , etc : just a couple of line changes.<br> The rest of the client would remain the same.<br> <br> Most probably , I will create a new project in dev.java.net with view-all permission.<br> So let the series begin from next post !<br> Counterstrike Source mridul 2006-04-20T03:10:00+00:00 2006-04-20T10:18:43+00:00 When I had bought <a href="">Halflife 2</a> about year or so back , the bundle had come along with <a href="">Counterstrike Source</a>.<br> Initial pains with the <a href="">steam platform</a> which <a href="">Valve</a> uses almost made me uninstall the whole thing ... 5 cd's for insallation and then another 5 hours to update (/decrypt/whatever they call it) ?!<br> Thankfully , I stuck with it and continued on with HL2.<br> Never got around to completing HL2 ... the stress on puzzle solving was irritatingly high and pretty soon I got bored and turned my attention to Counterstrike.<br> I have never looked back since - though the bots typically suck , the online gameplay is pretty awesome !<br> <br> Considering my connection speeds at home , I am really impressed at the low latency and the gameplay expierence. You do get to meet lot of 'interesting characters' online - including damn good players , people who are pretty passionate about the game and the normal bunch of cheating losers , so-so players like me , and n00bs <img src="" class="smiley" alt=":-P" title=":-P" /> ... it is nice fun all in all !<br> So now , when I am not doing anything interesting , I am on CSS <img src="" class="smiley" alt=":-)" title=":-)" /><br> A simple client mridul 2006-04-17T10:00:00+00:00 2006-04-17T18:07:29+00:00 Let us consider a simple client which will use IM api and talk XMPP directly to the server.<br> The only modification to this client to use HTTP would be in the session creation phase , so I will restrict the client to that part of the code.<br> If you want to take a look at a more 'full fledged' client , take a look at <a href="">org.netbeans.lib.collab.tools.Shell</a>.<br> <br> <div style="margin-left: 40px;">CollaborationSessionFactory _factory = new CollaborationSessionFactory();<br> </div> <div style="margin-left: 40px;">CollaborationSession _session = _factory.getSession(server , user, password, new CustomCollaborationSessionListener());<br> </div> <br> The behavior of this code is as follows :<br> <ul><li>Lookup the system property "org.netbeans.lib.collab.CollaborationSessionFactory" if specified , use that as the session provider factory , else.</li><li>Fallback onto the default CollaborationSessionFactory delegates to a direct socket based XMPP stream : org.netbeans.lib.collab.xmpp.XMPPSessionProvider.</li></ul> The parameters are :<br> <ol><li>Server url - this is an overloaded parameter. For a direct stream based XMPP connection , this will be of the form "host:port". For our HTTP case , it is slightly different - and I will detail it below.</li><li>The user and password specified are used to authenticate the user. In a later post , I will write about how to use SASL.</li><li>The CollaborationSessionListener specified is the default listener to dispatch events to. More interesting things can be done with subinterfaces of CollaborationSessionListener - more on that too later <img src="" class="smiley" alt=":-)" title=":-)" /><br> </li></ol> <br> For a direct xmpp case , the above will look like this :<br> <br> <div style="margin-left: 40px;">CollaborationSessionFactory _factory = new CollaborationSessionFactory();<br> </div> <div style="margin-left: 40px;">CollaborationSession _session = _factory.getSession("share.java.net:5222", "dummyuser", "dummypassword", new CustomCollaborationSessionListener());<br> </div> A bare bone CustomCollaborationSessionListener will just implement the "public void onError(CollaborationException e);" without taking any action about them.<br> <div style="margin-left: 40px;">public class CustomCollaborationSessionListener implements CollaborationSessionListener {<br> <div style="margin-left: 40px;">public void onError(CollaborationException e) { System.err.println("Collaboration exception : " + e); }<br> </div> }<br> </div> <br> Thats it , your basic code will work now (the server specified above is the netbeans collab server - create a valid userid and give the code above a whirl !).<br> <br> How to make this HTTP enabled ?<br> Just two changes :<br> <ul><li>Specify the HTTP session provider class explicitly using :</li></ul> <ol><li>CollaborationSessionFactory _factory = new CollaborationSessionFactory("com.sun.im.service.xmpp.httpbind.HTTPBindSessionProvider");</li><li>Or , set the env variable "org.netbeans.lib.collab.CollaborationSessionFactory" to "com.sun.im.service.xmpp.httpbind.HTTPBindSessionProvider" before creating the session factory.<br></li></ol> <ul><li>Change the server to point to a httpbind connection manager which is JEP124 compliant. Additional parameters can be specified to this url as query params : refer to <a href="">HTTPBindConstants</a> for list of actual parameters.<br> </li></ul> <div style="margin-left: 40px;"> </div> So , a JEP124 version of the initial code would be :<br> <br> <div style="margin-left: 40px;">CollaborationSessionFactory _factory = new CollaborationSessionFactory("com.sun.im.service.xmpp.httpbind.HTTPBindSessionProvider");<br> </div> <div style="margin-left: 40px;">CollaborationSession _session = _factory.getSession("", "dummyuser", "dummypassword", new CustomCollaborationSessionListener());<br> </div> <br> It is mandatory to specify the domain you want to talk to using the 'to' parameter. I am using two other parameters just to illustrate how you would go about customising more.<br> The rest of the code which manipulates and uses the returned _session is the same irrespective of whether it is direct XMPP or through HTTP.<br> <br> Some caveats about the HTTP provider in collab codebase from the top of my head (ignore these if required) :<br> <ol><li>It does not support proxies directly , but just relies on URLConnection. So to use proxies , the user will need to explicitly set the "http.proxyHost" and "http.proxyPort" system properties. (\*More on this below.)<br> </li><li>It does not honor the timeout's set as part of the initial handshake. (\*\* More below).<br> </li></ol> <br> \* This limitation exists since the im module is also used from java 1.4 VM's. The ability to specify a per connection proxy was introduced only in java 1.5 ... <br> How to fix this ? <br> It is very easy for a developer to write a custom impl of <a href="">ConnectionProvider</a> to pick up the "proxytype" and "proxyhostport" param from the service url and use it to connect to the httpbind gateway (If you want to go with default impl itself , just extend it and override the openConnection(URL) method).<br> <br> \*\* As part of the initial handshake with the gateway, the client and ht<font size="3">tpbind </font>gateway arrive at timeout's for the request.<br> These are not honored at the client side 'cos of the same limitation above. Custom implementation can override and use the setReadTimeout() and setConnectionTimeout() methods are appropriate.<br> <br> <br> If there is a need , I could always write a simple implementation of ConnectionProvider for java 1.5 which gets over the above limitations.<br> What to expect next ?<br> What about how to use the basic client and start using TLS and SASL ?!<br> Yep , that should be fun - so watch out for next update soon !<br> <br> JEP 124 - enabling XMPP through HTTP mridul 2006-04-12T01:40:00+00:00 2006-06-25T18:11:49+00:00 In the recent IFR release of <a href="">Sun Java System Instant Messaging</a> , we shipped our first implementation of <a href="">JEP 124</a> compliant client and gateway.<br> <br> What does this provide ?<br> <ul> <li>Access to the Sun's XMPP server through HTTP.</li> <li>The IM client that we bundle with the product can now talk HTTP through any compliant connection manager to allow XMPP access.</li> </ul> <br> The httpbind gateway (as we call it) is a servlet deployed in a webcontainer which provides HTTP access to clients. You can consider it as a protocol gateway between compliant HTTP clients and our pure XMPP server - thats right , the server \*does not\* know anything about http : everything is encapsulated and managed at the gateway itself.<br> <br> The default client that we ship is built on top of our IM client api which is hosted in netbeans under the <a href="">collab</a> project. The actual protocol is abstracted away at the api level and the client does not deal with XMPP (or whatever is the underlying protocol). The api implementation which provides this client side access to XMPP through HTTP is available opensource , give it a whirl ! (Any and every help will be provided by me - drop me a note here <img src="" class="smiley" alt=":-)" title=":-)" /> )<br> <br> What all does this allow for :<br> <br> <ul> <li>Obvious first case : use our client and our httpbind gateway and talk over http through http proxies (not persistent CONNECT , but POST requests <img src="" class="smiley" alt=":-)" title=":-)" /> )<br> </li> </ul> <ul> <li>Use a third party JEP 124 compliant client - either a AJAX client like JWChat , or more heavier cousins and talk via HTTP to httpbind enable XMPP access.<br> </li> </ul> <ul> <li>Use the api and develop your own client - like what the collab folks have done with the collab UI within netbeans : and how do you make this HTTP enabled ? Just change the session provider - thats it ! (I know it works 'cos that is all I did to support this in our client <img src="" class="smiley" alt=":-)" title=":-)" /> ) <br> </li> </ul> <ul> <li>Use constrained clients - like from a mobile phone , etc : talk http and still be 'online' (subcase of 2 actually) ... internally for demo's , we do have a J2ME midlet which does just this.</li> </ul> <br> Ofcourse , this was just one of the main feature additions to IFR release - the most notable among them being server pooling for high availability and enhanced security through starttls and sasl support (finally 1.0 compliant !) among many others.<br> <br> Next blog entry , I will try to give a rough idea about how to go about using this new code - some snippets are being promised <img src="" class="smiley" alt=":-)" title=":-)" /> You will see how simple it really is to dish out your own client , all the protocol heavylifting is already handled at the api level - so let loose your UI skills !<br><br><br><br>[Technorati Tag: <a href="">XMPP</a>]<br>[Technorati Tag: <a href="">Sun IM api</a>]<br>[Technorati Tag: <a href="">xmpp-im-client</a>] Extending Netbeans mridul 2006-03-01T19:53:03+00:00 2006-03-02T03:53:03+00:00 A friend of mine forwarded <a href="">this excellent preso</a> by <a href="">Roumen </a>, give it a whirl !<br> Sun IM details at Jabber.org mridul 2005-11-13T02:30:50+00:00 2005-11-13T10:30:50+00:00 Check out the server details page for <a href="">Sun Java System Instant Messaging</a> at jabber.org.<br> The feature score <a href="">here</a> will definitely be improved in coming releases <img src="" class="smiley" alt=":-)" title=":-)" /> | http://blogs.oracle.com/mridul/feed/entries/atom | CC-MAIN-2016-07 | en | refinedweb |
A documentation-block parser.
tunic
A documentation-block parser. Generates a DocTree abstract syntax tree using a customizable regular-expression grammar. Defaults to parsing C-style comment blocks, so it supports C, C++, Java, JavaScript, PHP, and even CSS right out of the box.
Documentation blocks follow the conventions of other standard tools such as Javadoc, JSDoc, Google Closure, PHPDoc, etc. The primary difference is that nothing is inferred from the code. If you want it documented, you must document it. This is why you can use
tunic to parse inline documentation out of almost any language that supports multi-line comments.
Tags are parsed greedily. If it looks like a tag, it's a tag. What you do with them is completely up to you. Render something human-readable, perhaps?
$ npm install --save tunic
var tunic = require'tunic';// parse javadoc-style commentsvar jsDocAst = tunicparse'/** ... */';// parse Mustache and Handlebars commentsvar hbDocAst = tunicparse'{{!--- ... --}}'blockIndent: /^[\t !]/gmblockParse: /^[\t ]*\{\{!---\s*--\}\}/mblockSplit: //mnamedTags: 'element' 'attribute';
Or with ES6:
import parse from 'tunic';// parse perlpod-style commentsconst perlDocAst = parse'=pod\n ... \n=cut'blockParse: /^=pod\n\n=cut$/mblockSplit: //mtagSplit: false;
tunic.parse(code[, grammar]) : DocTree
code
{String}- Block of code containing comments to parse.
grammar
{?Object}- Optional grammar definition.
blockIndent
{RegExp}- Matches any valid leading indentation characters, such as whitespace or asterisks. Used for unwrapping comment blocks.
blockParse
{RegExp}- Matches the content of a comment block, where the first capturing group is the content without the start and end comment characters. Used for normalization.
blockSplit
{RegExp}- Splits code and docblocks into alternating chunks.
tagParse
{RegExp}- Matches the various parts of a tag where parts are captured in the following order:
tag
type
name
description
tagSplit
{RegExp}- Matches characters used to split description and tags from each other.
namedTags
{Array.<String>}- Which tags should be considered "named" tags. Non-named tags will have their name prepended to the description and set to
undefined.
Parses a given string and returns the resulting DocTree AST object. Defaults to parsing C-style comment blocks.
Several pre-defined grammars are available. To use, import the desired grammar and pass it to the parser.
var parse = require'tunic'parse;var grammar = require'tunic/grammars/css';var cssDocAst = parse'/** ... */' grammar; // -> ast object
Or with ES6:
import parse from 'tunic';import * as grammar from 'tunic/grammars/css';const cssDocAst = parse'/** ... */' grammar; // -> ast object
$ npm test
Standards for this project, including tests, code coverage, and semantics are enforced with a build tool. Pull requests must include passing tests with 100% code coverage and no linting errors.
© 2016 Shannon Moeller me@shannonmoeller.com | https://www.npmjs.com/package/tunic | CC-MAIN-2016-07 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.