On 2005-11-03, Steven D'Aprano <steve at REMOVETHIScyber.com.au> wrote:
> On Thu, 03 Nov 2005 13:35:35 +0000, Antoon Pardon wrote:
>
>> Suppose I have code like this:
>>
>>     for i in xrange(1,11):
>>         b.a = b.a + i
>>
>> Now the b.a on the right hand side refers to A.a the first time through
>> the loop but not the next times. I don't think it is sane that which
>> object is refered to depends on how many times you already went through
>> the loop.
>
> Well, then you must think this code is *completely* insane too:
>
> py> x = 0
> py> for i in range(1, 5):
> ...     x += i
> ...     print id(x)
> ...
> 140838200
> 140840184
> 140843160
> 140847128
>
> Look at that: the object which is referred to depends on how many times
> you've already been through the loop. How nuts is that?

It is each time the 'x' from the same namespace. In the code above the
'a' is not each time from the same namespace. I also think you knew very
well what I meant.

--
Antoon Pardon

Source: https://mail.python.org/pipermail/python-list/2005-November/298404.html
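For readers puzzling over what Antoon means, the situation can be reproduced with a short sketch. The class A below is a guess at his setup (the thread never shows it): the first read of b.a resolves to the class attribute, but the assignment always creates an instance attribute, so later iterations read from a different namespace.

```python
class A:
    a = 0          # class attribute

b = A()
for i in range(1, 11):
    # The read on the right-hand side finds A.a the first time through,
    # but the assignment creates an *instance* attribute b.a, so every
    # later iteration reads from the instance's namespace instead.
    b.a = b.a + i

print(b.a)   # 55 -- the sum of 1..10 on top of the initial class value 0
print(A.a)   # 0  -- the class attribute is never modified
```

After the loop, `'a' in b.__dict__` is True, confirming the instance now shadows the class attribute.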
This post discusses HTTP URLs in Java and how to avoid data loss due to encoding/escaping issues. Special mention is made of the query part, since it is frequently used to store data.
Thursday, 17 December 2009
Java: safe character handling and URL building
Thursday, 26 November 2009
Java: application to check binary class versions
Here's a simple application based on the code from a previous post. You can run it on .class files, .jar files or directories (it'll recursively search them for .class files). It will tell you what version of Java the contained code was compiled for.
Download: javaClassVersionLib_1.1.zip
Tuesday, 17 November 2009
Java: how to use an IllegalArgumentException
Calling my web log Illegal Argument Exception seemed like a clever idea at the time. It is probably just a recipe for confusing Java neophytes searching for their program errors. I should've listened to what my granny used to tell me about clarity, precision, and terseness when choosing identifiers.
To make up for it, here's a short post about IllegalArgumentException (the exception type).
Friday, 9 October 2009
JSF: working with component identifiers (id/clientId)
This.
Friday, 18 September 2009
Java: character inspector application
Saturday, 1 August 2009
JSF: a custom format panel control for localising component layout
This post describes a custom JavaServer Faces component for controlling the flow layout of controls based on localised strings.
Wednesday, 22 July 2009
Java: finding class versions
The JVM versions your Java classes will run on is often determined by how you compile them. Failure to take care with your classes and dependencies can lead to an UnsupportedClassVersionError. This post demonstrates how to check your class files.
Thursday, 28 May 2009
Java: using XPath with namespaces and implementing NamespaceContext
XPath is a handy expression language for running queries on XML. This post is about how to use it with XML namespaces in Java (javax.xml.xpath).
Wednesday, 27 May 2009
JSF: using component IDs in a data table (h:dataTable vs clientId)
Updated 2009/11/24
This post is obsolete; go read this one instead: JSF: working with component identifiers. The approach described in this post may fail if the component identifiers are not unique within the view.
Tuesday, 26 May 2009
Java: dynamically loading scripting engines (Groovy, JRuby and Jython)
Java.
Tuesday, 19 May 2009
JSF: IDs and clientIds in Facelets
Updated 2009/11/24
This post is obsolete; go read this one instead: JSF: working with component identifiers. The approach described in this post may fail if the component identifiers are not unique within the view.
Friday, 1 May 2009
Java: a rough guide to character encoding
It can be tricky figuring out the difference between character handling code that works and code that just appears to work because testing did not encounter cases that exposed bugs. This is a post about some of the pitfalls of character handling in Java.
Friday, 10 April 2009
Java: Unicode on the Windows command line
By.
Thursday, 9 April 2009
I18N: Unicode at the Windows command prompt (C++; .Net; Java)
Strange things can happen when working with characters. It is important to understand why problems occur and what can be done about them. This post is about getting Unicode to work at the Windows command prompt (cmd.exe).
Wednesday, 11 March 2009
Java: using JPDA to write a debugger
The.
Saturday, 21 February 2009
Java: finding the validation mechanism for an arbitrary XML document
Unfortunately,.
Monday, 16 February 2009
JSF: working with component IDs (id vs clientId)
Updated 2009/10/17
WARNING: this post is obsolete; go read this one instead.
Tuesday, 3 February 2009
I18N: a non-technical software bug
Can you spot the problem with the following dialog?
I expect the developers of the Steam installer are making trade-offs for the benefit of younger users - providing visuals to help match them with their language. However, using flags in software products is generally a bad idea.
| http://illegalargumentexception.blogspot.co.uk/2009/ | CC-MAIN-2018-22 | refinedweb | 663 | 51.68 |
I need to write some programs, filters I think they are called, to change and reformat the output from some land surveying data recorders. These are ASCII files of records, and I want to rearrange, reformat, and do some calculations with each record.
I did this many years ago using Turbo Pascal and wonder what an equivalent tool to use these days would be. I have pretty much forgotten how to program since those Turbo Pascal days. I have looked at using VBA in Excel, but that seems such a tedious learning process.
Can anyone suggest an easy-to-learn tool here? I would prefer to have an executable file for this task, and it doesn't need fancy Windows controls - a pure text-based system is all that is needed.
6 Replies
Jan 28, 2011 at 3:02 UTC
You may want to consider Python. The included csv tools are robust and easy to work with.
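To make the suggestion concrete, here is a rough sketch of such a filter using the csv module. The field names and the distance calculation are invented for illustration, not taken from the poster's actual data recorder format:

```python
import csv
import io

def filter_records(infile, outfile):
    """Read surveying records, reformat fields, and add a computed column."""
    reader = csv.DictReader(infile)
    writer = csv.DictWriter(
        outfile, fieldnames=["point", "easting", "northing", "distance"]
    )
    writer.writeheader()
    for row in reader:
        e, n = float(row["easting"]), float(row["northing"])
        writer.writerow({
            "point": row["point"].upper(),              # reformat a field
            "easting": f"{e:.3f}",
            "northing": f"{n:.3f}",
            "distance": f"{(e**2 + n**2) ** 0.5:.3f}",  # example calculation
        })

# Small demonstration with an in-memory file:
raw = "point,easting,northing\np1,3,4\n"
out = io.StringIO()
filter_records(io.StringIO(raw), out)
print(out.getvalue())
```

Swapping io.StringIO for open() file handles turns this into a command-line filter, and it can be packaged as a standalone executable with a tool like py2exe.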
Jan 28, 2011 at 3:24 UTC
It may be different from what you've done in the past, but the Microsoft .NET languages - I'm partial to C# myself - have a million simple tools for file IO and string parsing. You can make a simple console application like you're talking about very rapidly.
You can get Visual C# Express 2010 for free here: http:/
You'll want to check out the System.IO namespace for file input/output, which is very simple.
You can use the built-in string functions for most basic parsing.
Jan 28, 2011 at 4:22 UTC
Final note, if you decide to go with C#, I could also easily/happily supply examples of the basic functionality you need. You'd probably be able to use my examples and the built-in IntelliSense to figure your way through things fairly quickly.
Jan 28, 2011 at 5:12 UTC
You may want to consider Python. The included csv tools are robust and easy to work with.
Ditto. Python is considered the standard for low-performance, scientific processing which it sounds like is perfect for your scenario.
Jan 29, 2011 at 12:27 UTC
You could do it in powershell, but python would probably be easier to learn.
Jan 29, 2011 at 6:32 UTC
I like using AutoIT, which is very easy to learn and is easily compiled into an exe. There is also a tool available for Python for creating exes, called py2exe.
Both Python and AutoIT are interpreted languages. | https://community.spiceworks.com/topic/126534-simple-programming-task-what-tool-to-use | CC-MAIN-2016-44 | refinedweb | 421 | 71.34 |
std::trunc, std::truncf, std::truncl
Computes the nearest integer not greater in magnitude than arg (in other words, arg rounded toward zero).
The largest representable floating-point values are exact integers in all standard floating-point formats, so this function never overflows on its own; however the result may overflow any integer type (including std::intmax_t) when stored in an integer variable.
The implicit conversion from floating-point to integral types also rounds towards zero, but is limited to the values that can be represented by the target type.
Example
#include <cmath>
#include <iostream>
#include <initializer_list>

int main()
{
    const auto data = std::initializer_list<double>{
        +2.7, -2.9, +0.7, -0.9, +0.0, 0.0, -INFINITY, +INFINITY, -NAN, +NAN
    };
    std::cout << std::showpos;
    for (double const x : data)
    {
        std::cout << "trunc(" << x << ") == " << std::trunc(x) << '\n';
    }
}
Possible output:
trunc(+2.7) == +2 trunc(-2.9) == -2 trunc(+0.7) == +0 trunc(-0.9) == -0 trunc(+0) == +0 trunc(+0) == +0 trunc(-inf) == -inf trunc(+inf) == +inf trunc(-nan) == -nan trunc(+nan) == +nan | https://en.cppreference.com/w/cpp/numeric/math/trunc | CC-MAIN-2022-21 | refinedweb | 127 | 52.26 |
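For a quick cross-check from another language, Python's math.trunc exhibits the same round-toward-zero behaviour, as does int() applied to a float. Note one difference from the C++ function: Python's math.trunc returns an int rather than a floating-point value.

```python
import math

for x in (2.7, -2.9, 0.7, -0.9):
    print(f"trunc({x}) == {math.trunc(x)}")

# int() on a float also truncates toward zero, matching the implicit
# floating-to-integral conversion described above.
assert int(-2.9) == math.trunc(-2.9) == -2
assert int(0.7) == math.trunc(0.7) == 0
```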
Failover Clustering and Network Load Balancing Team Blog
Hi NLB Fans,
NLB provides users with various methods to manage clusters. In Windows Server 2008, there are 3 ways to manage an NLB cluster:
1. Network Load balancing Manager GUI (nlbmgr.exe)
2. NLB command line tool (Nlb.exe)
3. NLB WMI Provider (root\MicrosoftNLB namespace)
In Windows Server 2008 R2, the NLB team has introduced a PowerShell interface for configuring, managing and debugging NLB. This awesome new feature makes it very easy to administer systems in an automated way.
In this blog post we will explore NLB's support for PowerShell. We will elaborate on the original post PowerShell for NLB, providing more details on naming mechanism, samples and CMDlet discovery.
This blog post contains the following sections:
· PowerShell Naming convention
· Exploring NLB CMDlets
o Using Get-Command
o Using command Auto-completion
o Using Argument auto completion
o Getting examples to use
Future blog posts in this series will discuss:
· NLB common scenarios
· Basics of Debugging NLB with PowerShell
NLB PowerShell follows the PowerShell CMDlet guidelines in naming and execution of the NLB CMDlets. Here we will explore the general naming conventions that will make it easy to further understand and explore NLB CMDlets.
A CMDlet is made up of two parts a Noun and a Verb. These two parts of speech are combined together with a hyphen in between. A NLB example would be:
PS > Get-NlbCluster
The ‘Get’ example above is split into 2 parts, the verb (Get) and the noun (NlbCluster), and these 2 words are separated by a hyphen. As rule of thumb, the verb defines the action to be performed on the noun. In the above example, we want to "Get" all instances of "NlbCluster".
To view all the NLB CMDlets, run PS > Get-Command –module NetworkLoadBalancingClusters
A list of all the NLB supported verbs can be seen below:
A list of all the NLB supported nouns can be seen below:
PowerShell makes it quite easy to use CMDlets, even if you have no prior knowledge of the NLB CMDlets. PowerShell provides two main features that help with exploring/learning CMDlets.
You can use Get-Command to explore existing CMDlets that are available. This CMDlet, in conjunction with the knowledge of Verb-Noun pairing is a powerful way to getting to the CMDlet of interest.
> Get-command -module NetworkLoadBalancingClusters [-Noun | -Verb <String>]
> Get-command <CommandFilter> -commandtype <commandtype>
Let say we want to delete a node from the current cluster. We know our end goal, but don’t know how to achieve it via PowerShell. Using the above syntax we can try to reach our goal. So the action we want to perform is "delete", and the noun that we want to act on is "NLB Cluster Node".
First we try to find all commands that start with "delete" verb and are of type CMDlet, by running > Get-Command delete-* -commandType cmdlet, but do not find any results.
Instead of "delete" let’s try “Remove". Below we see that we found the CMDlet we are looking for.
We could have approached this in a different way. We could have searched for the noun "Node" and filtered further on the exact verb.
As we can see with the above examples, we can intuitively guess the Verb-Noun pair for the NLB operation we want to perform, and use the Get-Command CMDlet to get the exact CMDlet.
The list below shows the usage of Get-Command to list out all the supported NLB CMDlets:
Another way to find out what CMDlets exist is to use the command auto-completion key <TAB>.
PowerShell provides a feature where the arguments of a CMDlet autocomplete.
1. Open PowerShell window with the NLB modules loaded.
2. Type Add-NLBCluster<Tab>
This will automatically complete the above CMDlet, and display "Add-NLBClusterNode" on the screen.
This is another handy way to see what all CMDlets are supported. Another example would be:
Start-NLB<TAB> would display Start-NLBCluster
Hitting <TAB> again, would display Start-NLBClusterNode
Now that we know how to find the CMDlet of interest, let’s see how we further use this information to formulate the exact command that we need to execute. PowerShell supports automatic expansion of the command arguments. Once you have typed in a CMDlet you can type a hyphen (-) and hit <TAB> key to automatically expand the available arguments for the given CMDlet.
1. Open a PowerShell Window with the NLB Module loaded
2. Enter > Get-NLBCluster-<TAB>
3. You will see that the “HostName” parameter will be auto-completed
4. Hit <TAB> again and you will see the text “InterfaceName” appear by the text prompt.
Using the <TAB> you can cycle through all the available arguments that the give CMDlet supports. If you went past an argument while hitting <TAB>, you can go back to it using the <SHIFT+TAB> key sequence.
This auto-completion can be further “filtered” by typing the first few characters of the argument you are interested in. For example, if I want to look for a parameter “InterfaceName”, you can try the following:
2. Type “Get-NLBCluster -i“ <TAB>
3. This will directly show you all the arguments that begin with the letter “I”, in this case “InterfaceName”
As you may know from the ‘Help Documentation’ section of the earlier NLB blog post, the Get-Help CMDlet is incredibly powerful.
The final thing that I would like to bring up in this section is the use of the –example argument for the help. As the name suggests, you can quickly see the examples of a given CMDlet via the “-example” argument.
Another awesome support option is the “-Online” option. This will launch the web browser with online content that is up-to-date with the latest information regarding the CMDlet (of course, this may not work if you are using a Server Core installation which cannot access Internet Explorer).
Example:
> Get-Help –Online New-NlbCluster
Rohan Mutagi & Gary Jackman
Clustering & High-Availability Test Team
Microsoft
Thanks for sharing , really helpful.
Does this cmdlet apply to PS version 2.0 or only 3.0 and 4.0?
I experience the following error message when I run get-nlbcluster: The term 'Get-NlbCluster' is not recognized as the name of a cmdlet, function, script file, or operable program. How do I install the nlb module. I am using Windows server 2008 PS 3.0 | http://blogs.msdn.com/b/clustering/archive/2009/10/28/9913877.aspx | CC-MAIN-2014-35 | refinedweb | 1,072 | 61.97 |
The application I’m working on right now has a search box that makes suggestions as the user types and does quick, inline searches to provide extra-fast results. Yesterday, I talked about how we improve our timing with debouncing. Today I’ll dive into the technical details of how we built the autocomplete behavior using React–Redux and Apollo.
Implementation
We’re working with a React-Redux front end connected to a GraphQL server via Apollo, in TypeScript. If you’re not familiar with TypeScript, the below should still be fairly readable; if you’re not familiar with the other technologies mentioned, it probably won’t be. If all of the above sounded like a nerdy word-salad, check out my friend Drew’s post on the TypeScript/GraphQL/ReactJS stack.
The application is made up of React components, which take advantage of react-apollo to drive their props from the result of a GQL call on render. React components re-render when their props change, so react-apollo connected components automatically rerun their queries every time they get new props.
Our search box component needs to make a GQL query, but an input component gets new props with every key entered. As mentioned above, we don’t want to run the autosuggestion query tens of times in a second, and we certainly don’t want to render the autosuggestion text once for every new result that comes back.
To debounce the query calls effectively, we need more fine-grained control of when the query is made. An early attempt at this involved tinkering with the shouldComponentUpdate function of the component, but because it needs to update some things on every keystroke, this got hairy quickly. So, we departed from our usual react-apollo pattern that automatically makes queries on render, and built a function that we could debounce.
The Query
To start, we have an AutoSuggestingSearchBox component that takes in its props, among other things, a handle to our ApolloClient. It creates a lambda, getAutoSuggestion, that closes over that client and makes the autosuggestion query for a term in the search box. It looks something like this:
function AutoSuggestingSearchBox(
  props: { client: ApolloClient } & OtherProps
) {
  const getAutoSuggestion = async (
    term: string
  ): Promise<string | undefined> => {
    const query = require("./autosuggest.graphql");
    const results = await props.client.query<AutoSuggestQuery>({
      query,
      variables: { term }
    });
    return results.data.searchSuggestion;
  };

  return (
    <SearchBox getAutoSuggestion={getAutoSuggestion} {...props} />
  );
}
Okay, so we have a function that gets an autosuggestion for a given term. Now, when do we call it? You'll see above that the SearchBox component is taking that getAutoSuggestion function that we just made. Let's see what it does with that.
Actions and State
We keep the current autoSuggestion string in part of our Redux store:
export interface SearchboxState {
  enteredText: string;
  suggestion?: string;
  [...]
}
So, in Redux style, we want to dispatch an action that makes the query and updates that state when we get a keypress. The autoSuggestion query is an async function, so this action needs to be asynchronous.
We use thunk for this, but there’s more than one way to skin that cat. The async action we’re going to dispatch looks like this:
export function queryUpdated(
  text: string,
  getAutoSuggestion: (text: string) => Promise<string | undefined>
) {
  return async (dispatch: Dispatch<any>) => {
    let suggestion = undefined;
    try {
      if (text) {
        suggestion = await getAutoSuggestion(text);
      }
    } catch (e) {
      suggestion = undefined;
    }
    dispatch(updateSuggestion(suggestion)); //* See below
  };
}
* We’re using the action builder pattern described here. Dispatch an action however you like to do so. Our updateSuggestion action updates the state of the SearchBox with a new suggestion string.
Debouncin’ It
You’ll note that we still haven’t gotten to the debouncing part! We’ll do this inside our mapDispatchToProps function on the SearchBox component. We’ll create a function that closes over our dispatch and dispatches our new thunk, and debounce /that/.
Here’s what it looks like:
interface ExternalProps {
  [...]
  getAutoSuggestion: (searchText: string) => Promise<string | undefined>;
}

[...]

function mapDispatchToProps(
  dispatch: Dispatch<any>,
  ownProps: ExternalProps
): DispatchProps {
  const updateAutoSuggestion = debounce((text: string) => {
    dispatch(
      Actions.queryUpdated(
        text,
        ownProps.getAutoSuggestion
      )
    );
  }, 100);

  return {
    onSearchTextChanged: updateAutoSuggestion,
    [...]
  };
}
The operative lesson here is that we’re debouncing the dispatching of the action that makes the query, not the query itself. This protects us from accidentally tinkering with the state when we end up throwing out a query, and it means that the presentational component for the SearchBox now has a handle to a single function to call every time the text is updated—nice and tidy.
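Although the post relies on an existing debounce helper (e.g. Lodash's), the trailing-edge mechanics can be sketched in Python to show what "only the last call in a burst runs" means. The names here are purely illustrative:

```python
import threading
import time

def debounce(wait):
    """Decorator: run fn only after `wait` seconds pass with no new call."""
    def decorator(fn):
        timer = None
        lock = threading.Lock()
        def debounced(*args, **kwargs):
            nonlocal timer
            with lock:
                if timer is not None:
                    timer.cancel()  # a newer call supersedes the pending one
                timer = threading.Timer(wait, fn, args, kwargs)
                timer.start()
        return debounced
    return decorator

calls = []

@debounce(0.05)
def update_suggestion(text):
    calls.append(text)

for ch in "hello":    # five rapid "keystrokes"
    update_suggestion(ch)

time.sleep(0.25)      # let the trailing call fire
print(calls)          # only the last keystroke dispatches
```

Each call cancels the pending timer and starts a new one, so during a burst of keystrokes nothing fires; once the typing pauses longer than the wait, the final call goes through, which is exactly the behaviour we want for dispatching the query action.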
3 Comments
I don’t usually write comments after reading a guide/tutorial post. In fact, this is my first time, and I’m doing so because this is, by far, the most technically sophisticated tutorial I could find out there. Combined with good writing and code clarity, this is great stuff (kudos Rachel!).
Too many guides elsewhere don’t suit my needs, but this one has perfectly captured mine. I am currently developing a React app with Redux, Apollo, and TypeScript, and Lodash (and other stacks, obviously).
Thank you! *on to reading other articles*
Thanks, Ionwyn! I’m so thrilled to hear it was helpful. One of my biggest concerns writing this post was whether there was even anyone on earth who might happen to need to do something exactly like this, in this exact stack. It’s good to know that there is! It’d be interesting to trade notes on what working with React/Apollo/TypeScript has been like for you- we’ve been using this stack extensively at Atomic and it’s been working out really well for us.
Hey, Rachael.
Haha, I think I should be the one taking notes from you, as I’m still relatively new to the stack!
Adopting TypeScript to React was not much of a problem having studied C++ (though TypeScript looks more like C# to me), and now I believe it has improved my development process overall. Combined with this awesome cheatsheet (), no problem.
I still use Redux with Apollo because I’m working on what I think is a medium-sized web app where local data management and debugging can get complicated. I’m still unsure why Apollo decided to remove Redux in 2.0, while I started with Apollo 2.0, it seemed promising to be able to access Apollo data through Redux. That said, I’m putting my money on apollo-link-state in the future. At the moment, there’s absolutely no way I’m using apollo-link-state until it matures.
I think the most painful part was getting AWS Lambda to work with Apollo. But then again, most documentations for AWS Services are counterintuitive :)
I hope to see more interesting articles like this from you and the team at Atomic! | https://spin.atomicobject.com/2018/06/05/autocomplete-react-redux-apollo/ | CC-MAIN-2018-43 | refinedweb | 1,137 | 53.81 |
Opened 6 years ago
Closed 6 years ago
#3476 closed enhancement (wontfix)
Create new ticket with known ID
Description
As part of my desire to integrate Trac with a CRM, I'd like the ability to specify the ID of the ticket that is created.
I think a simple update to ticket.create to add an optional parameter would work. Keeps backwards compatibility as well.
Currently:
def create(self, req, summary, description, attributes = {}, notify=False):
    """ Create a new ticket, returning the ticket ID. """
    t = model.Ticket(self.env)
    ...
could be changed to:
def create(self, req, summary, description, attributes = {}, notify=False, tkt_id=None):
    """ Create a new ticket, returning the ticket ID. """
    t = model.Ticket(self.env, tkt_id) # tkt_id is allowed in trac.ticket.model
    ...
or something similar. A nicer way might be to have it as an attribute or something.
Thanks, Cameron.
Attachments (0)
Change History (1)
comment:1 Changed 6 years ago by athomas
- Resolution set to wontfix
- Status changed from new to closed
Note: See TracTickets for help on using tickets.
model.Ticket(env, tkt_id) is used to fetch an existing ticket. There is no way to create a ticket with a specific ID. | http://trac-hacks.org/ticket/3476 | CC-MAIN-2014-23 | refinedweb | 196 | 56.45 |
* Martin v. Loewis
|
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns:doc xmlns:<ns:doc/>
|
| (or, alternatively, the element could just be empty). Is that the
| XML that would produce above sequence of SAX events?

Nope, it's not. No XML document could produce that particular sequence
of events.

| It seems to me that this XML is ill-formed, the namespace prefix ns
| is not defined here. Is that analysis correct?

Not entirely. The XML is perfectly well-formed, but it's not
namespace-compliant.

| Furthermore, the test checks whether the generator produces
|
| <?xml version="1.0" encoding="iso-8859-1"?>
| <ns1:doc xmlns:</ns1:doc>
|
| It appears that the expected output is bogus; I'd rather expect to get
| the original document back.

What original document? :-)

| My proposal would be to correct the test case to pass "ns1:doc" as
| the qname,

I see that as being the best fix, and have now committed it.

| and to correct the generator to output the qname if that was
| provided by the reader.

We could do that, but the namespace name and the qname are supposed to
be equivalent in any case, so I don't see any reason to change it.

One problem with making that change is that it no longer becomes
possible to roundtrip XML -> pyexpat -> SAX -> xmlgen -> XML because
pyexpat does not provide qnames.

--Lars M.
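A small Python experiment shows what the SAX layer delivers for a namespaced element today (the namespace URI below is made up for the example). The namespace name arrives as a (uri, localname) pair, while the qname argument depends on the underlying parser, since pyexpat does not provide qnames by default:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler, feature_namespaces

class Capture(ContentHandler):
    """Record the namespace-aware element events as they arrive."""
    def __init__(self):
        super().__init__()
        self.events = []

    def startElementNS(self, name, qname, attrs):
        self.events.append(("start", name, qname))

    def endElementNS(self, name, qname):
        self.events.append(("end", name, qname))

parser = xml.sax.make_parser()
parser.setFeature(feature_namespaces, True)   # turn on namespace processing
capture = Capture()
parser.setContentHandler(capture)
parser.parse(io.BytesIO(b'<ns:doc xmlns:ns="http://example.org/ns"/>'))

# The namespace name is a (uri, localname) tuple:
print(capture.events[0][1])   # ('http://example.org/ns', 'doc')
```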
Provided by: libncarg-dev_6.3.0-6build1_amd64
NAME
LINE3 - Draws the straight-line segment joining the projections of two points in 3-space.
SYNOPSIS
CALL LINE3 (UA,VA,WA,UB,VB,WB)
C-BINDING SYNOPSIS
#include <ncarg/ncargC.h> void c_line3 (float ua, float va, float wa, float ub, float vb, float wb)
DESCRIPTION
UA,VA,WA (input expressions of type REAL) are the coordinates of the first point in 3-space. UB,VB,WB (input expressions of type REAL) are the coordinates of the second point in 3-space. The statement "CALL LINE3 (UA,VA,WA,UB,VB,WB)" is equivalent to the three statements "CALL FRST3 (UA,VA,WA)", "CALL VECT3 (UB,VB,WB)", and "CALL PLOTIF (0.,0.,2)", but is slightly more efficient. To approximate a curve defined by three or more points, though, it is not efficient to use LINE3, because the first point of each line segment after the first will be a repeat of the second point of the previous line segment and will therefore be repeated in the metafile. Thus, to approximate a curve, you should use FRST3 and VECT3 or CURVE3. For an example of straight-line segments drawn by LINE3, see fthex02.
ACCESS
To use LINE3 or c_line3, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order.
SEE ALSO
Online: threed, curve3, fence3, frst3, perim3, point3, psym3, pwrz, pwrzt, set3, threed, tick3, tick43, vect3, ncarg_cbind. Hardcopy: NCAR Graphics Fundamentals, UNIX Version
Copyright (C) 1987-2009 University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement. | http://manpages.ubuntu.com/manpages/xenial/man3/line3.3NCARG.html | CC-MAIN-2019-47 | refinedweb | 260 | 62.17 |
On Wed, 2006-02-22 at 15:18 -0500, Jakub Jelinek wrote:
> On Wed, Feb 22, 2006 at 11:40:20AM -0800, Nicholas Miell wrote:
> > On Wed, 2006-02-22 at 07:09 -0500, Jakub Jelinek wrote:
> > > .
> > >
> > Oh great, they're breaking the ABI again!? When will those idiots
> > learn...
> >
> > How about Fedora stops including all new versions of libstdc++ until the
> > maintainers manage to wrap their tiny little minds around the concept of
> > a "stable ABI"?
>
> Please don't insult people who know about C++ ABI stuff far more than
> you apparently do.

Yeah, that was kind of harsh. This is one of my pet peeves.

> There are several C++ ABI bugs both on the compiler
> side (things you currently get with -fabi-version=0, PR22488 etc.)
> and several things in the STL land are simply not doable without
> ABI break, backwards compatibility in C++ world is a few orders of magnitude
> harder in C++ than in C. G++ 3.4/4.0/4.1 have C++ ABI compatibility
> and nobody said yet to my knowledge if 4.2 will or won't be compatible,
> libstdc++so7 is simply a playground for things that aren't doable
> without ABI break (similarly to -fabi-version=0). If/when it is decided
> to do an ABI break, all the stuff in there can be used.
>
> From my own experience when I was doing the long double switch also in
> libstdc++, aiming for libstdc++.so.6 compatibility with both DFmode
> and TFmode long double, maintaining C++ ABI compatibility is a nightmare,
> you basically can't use symbol versioning and due to inheritance, inlining
> and ODR things are really hard to do.
>
> We included this in Fedora, because SCIM IM modules are written in C++
> and as they need to be loaded into arbitrary Gtk (and KDE) applications,
> there is a big problem if the application uses some older libstdc++.so.
> This can be solved only with dlmopen (but then, SCIM IM modules use
> like 40 or 50 shared library dependencies, so dlmopen isn't really a
> good idea), or by namespace versioning, which is one of the things
> libstdc++so7 has.

Nice to see that somebody is finally trying to solve the problem,
instead of just letting the users suffer. Maybe C++ will actually be a
viable language for library development one day soon.

(Also re: dlmopen & namespace versioning -- couldn't the same thing be
accomplished using linker groups? -- assuming glibc supported them)

--
Nicholas Miell <nmiell comcast net>
This post describes how I went about the "Lowest Common Ancestor" problem.
The first thing we want to do is examine the two values we are sent to find. If the values are the same, they are obviously their own common ancestor – as long as the node actually exists in the tree.
If the nodes aren’t the same, we will “order” them by simply finding which node is less and which is greater. Why do we care? Because, it will simplify our search to be O(log n) by allowing us to drive down the tree in the BST fashion.
What we can do at each node, is take advantage of the ordering to tell us where to go. If the current node’s value is < the smaller value, we know both nodes are on the right (if they exist in the tree). Similarly, if the current node’s value is > the larger value, we know both nodes are on the left (if they exist in the tree).
If neither of these conditions are true, one of two things are true: either a) one of the nodes equals the current node or b) the smaller is on the left and the larger is on the right. Either way, we have a candidate for the LCA as long as both nodes are in the tree. So, once we know this is the point that the LCA would be, we simply find the smaller and larger in this subtree.
public class Trees
{
    // This is the public kick-off method, orders the first/second
    public Node<int> FindLowestCommonAncestor(int first, int second, Node<int> root)
    {
        // if first and second happen to be same, it's simply a find
        if (first == second)
        {
            return Find(first, root);
        }

        // otherwise, order the first and second by value
        return first < second
            ? FindLowestCommonAncestorTraversal(first, second, root)
            : FindLowestCommonAncestorTraversal(second, first, root);
    }

    // a helper method that simply finds a node in the BST
    public Node<int> Find(int value, Node<int> current)
    {
        if (current != null)
        {
            if (current.Value == value)
            {
                return current;
            }

            return value < current.Value
                ? Find(value, current.Left)
                : Find(value, current.Right);
        }

        return null;
    }

    // the actual traversal
    private Node<int> FindLowestCommonAncestorTraversal(int smaller, int larger, Node<int> current)
    {
        if (current != null)
        {
            // if larger < value then smaller is also by definition, both left
            if (larger < current.Value)
            {
                return FindLowestCommonAncestorTraversal(smaller, larger, current.Left);
            }

            // if smaller is > value, then larger is also by definition, both right
            if (smaller > current.Value)
            {
                return FindLowestCommonAncestorTraversal(smaller, larger, current.Right);
            }

            // otherwise, we found divergent point, make sure nodes actually exist
            if (Find(smaller, current) != null && Find(larger, current) != null)
            {
                return current;
            }
        }

        return null;
    }
}
This algorithm ends up being O(log n) – assuming a well-balanced BST – which is fairly optimal. At worst, we’d find the LCA at the root, which would mean we’d do two finds, both of which are O(log n).
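For readers who want to experiment with the traversal outside of C#, here is a sketch of the same idea in Python. The `Node` class here is a minimal stand-in, not a type from the post:

```python
class Node:
    # minimal BST node: a value plus optional left/right children
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def find(node, value):
    # standard BST search; returns the node or None
    while node is not None and node.value != value:
        node = node.left if value < node.value else node.right
    return node

def lca(root, a, b):
    lo, hi = min(a, b), max(a, b)
    node = root
    while node is not None:
        if hi < node.value:        # both targets are in the left subtree
            node = node.left
        elif lo > node.value:      # both targets are in the right subtree
            node = node.right
        else:                      # divergence point: confirm both values exist
            return node if find(node, lo) and find(node, hi) else None
    return None

#         8
#       /   \
#      3     10
#     / \      \
#    1   6      14
root = Node(8, Node(3, Node(1), Node(6)), Node(10, None, Node(14)))
print(lca(root, 1, 6).value)    # 3
print(lca(root, 10, 14).value)  # 10
print(lca(root, 4, 6))          # None: 4 is not in the tree
```

Note that, as in the C# version, a node can be its own ancestor (the LCA of 10 and 14 is 10), and the final `find` checks guard against values that are not in the tree.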
Posted on Thursday, August 27, 2015 12:52 AM
Filed Under [My Blog, C#, Software, .NET, Little Puzzlers, Technology]
ISSN 1183-1057 Department of Economics
Discussion Papers 02-3
Oh No! I Got The Wrong
Sign! What Should I Do?
P. Kennedy
2002

SIMON FRASER UNIVERSITY

Oh No! I Got the Wrong Sign! What Should I Do?
Peter Kennedy
Professor of Economics,
Dept. of Economics
Simon Fraser University
Burnaby, BC
Canada V5A 1S6
Tel. 604-291-4516
Abstract
Getting a “wrong” sign in empirical work is a common phenomenon. Remarkably,
econometrics textbooks provide very little information to practitioners on how this
problem can arise. This paper exposits a long list of ways in which a “wrong” sign can
occur, and how it might be corrected. Oh No! I Got the Wrong Sign! What Should I Do?
We have all experienced, far too frequently, the frustration caused by finding that the
estimated sign on our favorite variable is the opposite of what we anticipated it would be.
This is probably the most alarming thing "that gives rise to that almost inevitable
disappointment one feels when confronted with a straightforward estimation of one's
preferred structural model." (Smith and Brainard, 1976, p.1299). To address this problem,
we might naturally seek help from applied econometrics texts, looking for a section
entitled "How to deal with the wrong sign." Remarkably, a perusal of existing texts does
not turn up sections devoted to this common problem. Most texts mention this
phenomenon, but provide few examples of different ways in which it might occur.1 This
is unfortunate, because expositing examples of how this problem can arise, and what to
do about it, can be an eye-opener for students, as well as a great help to practitioners
struggling with this problem. The purpose of this paper is to fill this void in our textbook
literature by gathering together several possible reasons for obtaining the "wrong" sign,
and suggesting how corrections might be undertaken.
A wrong sign can be considered a blessing, not a disaster. Getting a wrong sign is
a friendly message that some detective work needs to be done – there is undoubtedly
some shortcoming in the researcher’s theory, data, specification, or estimation
procedure. If the “correct” signs had been obtained, odds are that the analysis would
not be double-checked. The following examples provide a checklist for this double-checking task, many illustrating substantive improvements in specification.

1. Bad Economic Theory. Suppose you are regressing the demand for Ceylonese
tea on income, the price of Ceylonese tea and the price of Brazilian coffee. To
your surprise you get a positive sign on the price of Ceylonese tea. This dilemma
is resolved by recognizing that it is the price of other tea, such as Indian tea, that
is the relevant substitute here. Rao and Miller (1971, p.38-9) provide this
example. Gylfason (1981) refers to many studies which obtained “wrong” signs
because they used the nominal rather than real interest rate when explaining
consumption spending.
1 Wooldridge (2000) is an exception; several examples of wrong signs are scattered throughout this text.

2. Omitted Variable. Suppose you are running an hedonic
prices on a variety of auto characteristics such as horsepower, automatic
transmission, and fuel economy, but keep discovering that the estimated sign on
fuel economy is negative. Ceteris paribus, people should be willing to pay more,
not less, for a car that has higher fuel economy, so this is a “wrong” sign. An
omitted explanatory variable may be the culprit. In this case, we should look for
an omitted characteristic that is likely to have a positive coefficient in the hedonic
regression, but which is negatively correlated with fuel economy. Curbweight is a
possibility, for example. (Alternatively, we could look for an omitted
characteristic which has a negative coefficient in the hedonic regression and is
positively correlated with fuel economy.) Here is another example, in the context
of a probit regression. Suppose you are using a sample of females who have been
asked whether they smoke, and then are resampled twenty years later. You run a
probit on whether they are still alive after twenty years, using the smoking
dummy as the explanatory variable, and find to your surprise that the smokers are
more likely to be alive! This could happen if the non-smokers in the sample were
mostly older, and the smokers mostly younger, reflecting Simpson’s paradox.
Adding age as an explanatory variable solves this problem, as noted by Appleton,
French, and Vanderpump (1996).
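The mechanics of this bias are easy to reproduce in a toy simulation (the numbers below are invented for illustration; they come from neither study): the true coefficient on x1 is +1, but omitting a positively correlated regressor x2 that carries a negative coefficient flips the estimated sign.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + 0.6 * rng.standard_normal(n)   # correlated with x1
y = 1.0 * x1 - 2.0 * x2 + rng.standard_normal(n)

def ols(y, cols):
    # OLS via least squares; returns [intercept, slopes...]
    X = np.column_stack([np.ones(len(y))] + cols)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_long = ols(y, [x1, x2])   # correct specification: slope on x1 ~ +1.0
b_short = ols(y, [x1])      # x2 omitted: slope on x1 ~ -0.6, the "wrong" sign
print(b_long[1], b_short[1])
```

The short-regression slope converges to 1 - 2(0.8) = -0.6, which is exactly the textbook omitted-variable-bias formula at work.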
3. High Variances. Suppose you are estimating a demand curve by regressing
quantity of coffee on the price of coffee and the price of tea, using time series
data, and to your surprise find that the estimated coefficient on the price of coffee
is positive. This could happen because over time the prices of coffee and tea are
highly collinear, resulting in estimated coefficients with high variances – their
sampling distributions will be widely spread, and may straddle zero, implying that
it is quite possible that a draw from this distribution will produce a “wrong” sign.
Indeed, one of the casual indicators of multicollinearity is the presence of
“wrong” signs. In this example, a reasonable solution to this problem is to
introduce additional information by using the ratio of the two prices as the
explanatory variable, rather than their levels. This example is one in which the
wrong sign problem is solved by incorporating additional information to reduce high variances. Multicollinearity is not the only source of high variances,
however; they could result from a small sample size, or minimal variation in the
explanatory variables. Leamer (1978, p.8) presents another example of how
additional information can solve a wrong sign problem. Suppose you regress
household demand for oranges on total expenditure E, the price po of oranges, and
the price pg of grapefruit (all variables logged), and are surprised to find wrong
signs on the two price variables. Impose homogeneity, so that if prices and
expenditure double, the quantity of oranges purchased should not change; this
implies that the sum of the coefficients of E, po, and pg is zero. This extra
information reverses the price signs.
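How severe the collinearity is can be quantified with the variance inflation factor, VIF = 1/(1 - r^2), where r is the correlation between the two regressors. A toy computation with invented, nearly collinear prices:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
p_tea = rng.standard_normal(n)
p_coffee = p_tea + 0.05 * rng.standard_normal(n)  # nearly collinear with p_tea

r = np.corrcoef(p_coffee, p_tea)[0, 1]
vif = 1.0 / (1.0 - r**2)                          # variance inflation factor
print(round(r, 4), round(vif, 1))
```

With r around 0.999 the coefficient variance is inflated by a factor of several hundred, so a draw with the "wrong" sign is unsurprising; re-expressing the two prices as a ratio removes the collinearity.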
4. Selection Bias. Suppose you are regressing academic performance, as measured
by SAT scores (the scholastic aptitude test is taken by many students to enhance
their chances of admission to the college of their choice) on per student
expenditures on education, using aggregate data on states, and discover that the
more money the government spends, the less students learn! This “wrong” sign
may be due to the fact that the observations included in the data were not obtained
randomly – not all students took the SAT. In states with high education
expenditures, a larger fraction of students may take the test. A consequence of this
is that the overall ability of the students taking the test may not be as high as in
states with lower education expenditure and a lower fraction of students taking the
test. Some kind of correction for this selection bias is necessary. In this example,
putting in the fraction of students taking the test as an extra explanatory variable
should work. This example is taken from Guber (1999). Currie and Cole (1993)
exposit another good example of selection bias. Suppose you are regressing the
birthweight of children on several family and background characteristics,
including a dummy for participation in AFDC (aid for families with dependent
children), hoping to show that the AFDC program is successful in reducing low
birthweights. To your consternation the slope estimate on the AFDC dummy is
negative! This probably happened because mothers self-selected themselves into
this program – mothers believing they were at risk for delivering a low
birthweight child may have been more likely to participate in AFDC. This could be dealt with by using the Heckman two-stage correction for selection bias or an
appropriate maximum likelihood procedure. A possible alternative solution is to
confine the sample to mothers with two children, for only one of which the
mother participated in the AFDC program. A panel data method such as fixed
effects (or differences) could be used to control for the unobservables that are
causing the problem.
5. Data Definitions/Measurement Error. Suppose you are regressing stock price
changes on a dummy for bad weather, in the belief that bad weather depresses
traders and they tend to sell, so you expect a negative sign. But you get a positive
sign. Rethinking this, you change your definition of bad weather from 100 percent
cloud cover plus relative humidity above 70 percent, to cloud cover more than
80% or relative humidity outside the range 25 to 75 percent. Magically, the
estimated sign changes. This example illustrates more than the role of variable
definitions/measurement in affecting coefficient signs – it illustrates the dangers
of data mining and underlines the need for sensitivity analysis. This example
appears in Kramer and Runde (1997). This is not the only way in which
measurement problems can contribute to generating a wrong sign. It is not
uncommon to regress the crime rate on the per capita number of police and obtain
a positive coefficient, suggesting that more police engender more crime. One
possible reason for this is that having extra police causes more crime to be
reported. Another reason for how measurement error can cause a wrong sign is
exposited by Bound, Brown, and Mathiowetz (2001). They document that often
measurement errors are correlated with the true value of the variable being
measured (contrary to the usual econometric assumption) and show how this can
create extra bias sufficient to change a coefficient’s sign.
6. Outliers. Suppose you are regressing infant mortality on doctors per thousand
population, using data on the 50 US states plus the District of Columbia, but find
that the sign on doctors is positive. This could happen because the District of
Columbia is an outlier – relative to other observations, it has large numbers of
doctors, and pockets of extreme poverty. Removing the outlier should solve the sign dilemma. This example appears in Wooldridge (2000, p.303-4). Rowthorn
(1975) points out that a nice OECD cross-section regression confirming Kaldor’s
law resulted from a random scatter of points and an outlier, Japan.
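The leverage of a single outlier is easy to reproduce in a small simulation (invented numbers, not the actual state data): fifty points with a genuinely negative relationship, plus one extreme point, yield a positive fitted slope.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1, 3, 50)                 # e.g. doctors per thousand
y = 10 - 2 * x + rng.normal(0, 0.5, 50)   # true relationship is negative

x_all = np.append(x, 8.0)                 # one D.C.-style outlier
y_all = np.append(y, 25.0)

slope_without = np.polyfit(x, y, 1)[0]        # ~ -2: the "right" sign
slope_with = np.polyfit(x_all, y_all, 1)[0]   # positive: flipped by one point
print(slope_without, slope_with)
```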
7. Simultaneity/Lack of Identification. Suppose you are regressing quantity of an
agricultural product on price, hoping to get a positive coefficient because you are
interpreting it as a supply curve. Historically, such regressions produced negative
coefficients and were interpreted as demand curves – the exogenous variable
“weather” affected supply but not demand, rendering this regression an identified
demand curve. Estimating an unidentified equation would produce estimates of an
arbitrary combination of the supply and demand equation coefficients, and so
could be of arbitrary sign. The lesson here is check for identification. A classic
example here is Moore (1914) who regressed quantity of pig iron on price,
obtained a positive coefficient and announced a new economic discovery – an
upward-sloping demand curve. He was quickly rebuked for confusing supply and
demand curves. Morgan (1990, chapter 5) discusses historical confusion on this
issue. The generic problem here is simultaneity. More policemen may serve to
reduce crime, for example, but higher crime will cause municipalities to increase
their police force, so when crime is regressed on police, it is possible to get a
positive coefficient estimate. Identification is achieved by finding a suitable
instrumental variable. This suggests yet another reason for a wrong sign – using a
bad instrument.
8. Bad Instruments. Instrumental variable (IV) estimation is usually employed to
alleviate the bias caused by correlation between an explanatory variable and the
equation error. Suppose you are regressing incidence of violent crime on
percentage of population owning guns, using data on U.S. cities. Because you
believe that gun ownership is endogenous (i.e., higher crime causes people to
obtain guns), you use gun magazine subscriptions as an instrumental variable for
gun ownership and estimate using two-stage least squares. You have been careful
to ensure identification, and check that the correlation between gun ownership and
gun magazine subscriptions is substantive, so are very surprised to find that the IV slope estimate is negative, the reverse of the sign obtained using ordinary least
squares. This was caused by negative correlation between gun subscriptions and
crime. The instrumental variable gun subscriptions was representing gun
ownership which is culturally patterned, linked with a rural hunting subculture,
and so did not represent gun ownership by individuals residing in urban areas,
who own guns primarily for self-protection.2 Another problem with IV estimation
is that if the IV is only weakly correlated with the endogenous variable for which
it is serving as an instrument, the IV estimate is not reliable and so a wrong sign
could result.
9. Specification Error. Suppose you have student scores on a pretest and a posttest
and are regressing their learning, measured as the difference in these scores, on
the pretest score (as a measure of student ability), a treatment dummy (for some
students having had an innovative teaching program) and other student
characteristics. To your surprise the coefficient on pretest is negative, suggesting
that better students learn less! Becker and Salemi (1977) spell out several ways in
which specification bias could cause this. One example is that. Measurement error could also be playing a role
here. A positive measurement error in pretest appears negatively in the score
difference, creating a negative correlation between the pretest explanatory
variable and the equation error term, creating bias.
10. Ceteris Paribus Confusion. Suppose you have regressed house price on square
feet, number of bathrooms, number of bedrooms, and a dummy for a family room,
and are surprised to find the family room coefficient has a negative sign. The
coefficient on the family room dummy tells us the change in the house price if a
family room is added, holding constant the other regressor values, in particular
holding constant square feet. So adding a family room under this constraint must
2 I am indebted to Tomislav Kovandzic for this example.
entail a reduction in square footage elsewhere, such as smaller bedrooms or loss
of a dining room, which will entail a loss in house value. In this case the net effect
on price is negative. This problem is solved by asking what will happen to price
if, for example, a 600 square foot family room is added, so that the proper
calculation of the value of the family room involves a contribution from both the
square feet regressor coefficient and the family room dummy coefficient. As
another example, suppose you are regressing yearling (racehorse) auction prices
on various characteristics of the yearling, plus information on its sire (father) and
dam (mother). To your surprise you find that although the estimated coefficient
on dam dollar winnings is positive, the coefficient on number of dam wins is
negative, suggesting that yearlings from dams with more race wins are worth less.
This wrong sign problem is resolved by recognizing that the sign is
misinterpreted. In this case, the negative sign means that holding dam dollar
winnings constant, a yearling is worth less if its dam required more wins to earn
those dollars. Although proper interpretation solves the sign dilemma, in this case
an adjustment to the specification seems appropriate: replace the two dam
variables with a new variable, earnings per win. This example is taken from
Robbins and Kennedy (2001).
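The family-room resolution above is simple arithmetic once the coefficients are in hand: adding a 600-square-foot family room changes both the square-feet regressor and the dummy. With invented hedonic coefficients:

```python
# invented coefficients: dollars per square foot, and the family-room dummy
b_sqft, b_famroom = 120.0, -8000.0

# adding a 600 sq ft family room moves BOTH regressors, not just the dummy
value_added = 600 * b_sqft + b_famroom
print(value_added)  # 64000.0: positive, despite the negative dummy coefficient
```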
11. Interaction Terms. Suppose you are regressing economics exam scores on grade
point average (GPA) and an interaction term which is the product of GPA and
ATTEND, percentage of classes attended. The interaction term is included to
capture your belief that attendance benefits better students more than poorer
students. Although the estimated coefficient on the interaction term is positive, as
you expected, to your surprise the estimated coefficient on GPA is negative,
suggesting that students with higher ability, as measured by GPA, have lower
exam scores. This dilemma is easily explained – the partial derivative of exam
scores with respect to GPA is the coefficient on GPA plus the coefficient on the
interaction term times ATTEND. The second term probably outweighs the first
for all ATTEND observations in the data, so the influence of GPA on exam scores
is positive, as expected. Wooldridge (2000, p.190-1) presents this example.

12. Regression to the Mean. Suppose you are testing the convergence hypothesis by
regressing average annual growth over the period 1950-1979 on GDP per work
hour in 1950. Now suppose there is substantive measurement error in GDP. Large
underestimates of GDP in 1950 will result in low GDP per work hour, and at the
same time produce a higher annual growth rate over the subsequent period
(because the 1979 GDP measure will likely not have a similar large
underestimate). Large overestimates will have an opposite effect. As a
consequence, your regression is likely to find convergence, even when none
exists. This is a type of “wrong” sign, in this case produced by the regression to
the mean phenomenon. For more on this example, see Friedman (1992). A similar
example is identified by Hotelling (1933). Suppose you have selected a set of
firms with high business-to-sales ratios and have regressed this measure against
time, finding a negative relationship i.e., over time the average ratio declines. This
result is likely due to the reversion to the mean phenomenon – the firms chosen
probably had high ratios by chance, and in subsequent years reverted to a more
normal ratio.
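The convergence artifact can be simulated directly (invented numbers): true growth is generated independently of the true 1950 level, yet measurement error in the 1950 figure alone delivers a negative slope.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
log_gdp_1950 = rng.normal(3.0, 0.5, n)      # true 1950 (log) level
growth = rng.normal(0.02, 0.01, n)          # true growth, independent of level
log_gdp_1979 = log_gdp_1950 + 29 * growth

u = rng.normal(0.0, 0.3, n)                 # measurement error in 1950 only
measured_1950 = log_gdp_1950 + u
measured_growth = (log_gdp_1979 - measured_1950) / 29

slope = np.polyfit(measured_1950, measured_growth, 1)[0]
print(slope)  # negative: spurious "convergence" from measurement error alone
```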
13. Nonstationarity. Regressing a random walk on an independent random walk
should produce a slope coefficient insignificantly different from zero, but far too
frequently does not, as is now well-known. This spurious correlation represents a
“wrong” sign – the sign should not be significantly positive or negative. This is a
very old problem, identified by Yule (1926) in an article entitled “Why do we
sometimes get nonsense correlations between time series?”
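Yule's point is easy to demonstrate by simulation: regress one random walk on another, independently generated one, and the conventional t-test rejects "no relationship" far more often than its nominal 5% rate.

```python
import numpy as np

rng = np.random.default_rng(3)
T, reps, rejections = 500, 200, 0
for _ in range(reps):
    x = np.cumsum(rng.standard_normal(T))  # two independent random walks
    y = np.cumsum(rng.standard_normal(T))
    X = np.column_stack([np.ones(T), x])
    b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
    se = np.sqrt(res[0] / (T - 2) * np.linalg.inv(X.T @ X)[1, 1])
    rejections += abs(b[1] / se) > 1.96    # nominal 5% two-sided test

print(rejections / reps)  # typically far above 0.05
```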
14. Common Trends. A common trend could swamp what would otherwise be a
negative relationship between two variables; omitting the common trend would
give rise to the wrong sign.
15. Functional Form Approximation. Suppose you are running an hedonic
regression of house prices on several characteristics of houses, including number
of rooms and the square of the number of rooms. Although you get a positive
coefficient on the square of number of rooms, to your surprise you get a negative coefficient on number of rooms, suggesting that for a small number of rooms
more rooms decreases price. This could happen because in your data there are no
(or few) observations with a small number of rooms, so the quadratic term
dominates the linear term throughout the range of the data. The negative sign on
the linear term comes about because it provides the best approximation to the
data. Wooldridge (2000, p.188) provides this example.
16. Dynamic Confusion. Suppose you have regressed income on lagged income and
investment spending. You are interpreting the coefficient on investment as the
multiplier and are surprised to find that it is less than unity, a type of “wrong
sign.” Calculating the long-run impact on income this implies, however, resolves
this dilemma. This example appears in Rao and Miller (1971, p.44-5). Suppose
you have panel data on the US states and are estimating the impact of public
capital stock (in addition to private capital stock and labor input) on state output.
You estimate using fixed effects and to your surprise obtain a negative sign on the
public capital stock coefficient estimate. Baltagi and Pinnoi (1995) note that this
could be because fixed effects estimates the short-run reaction; pooled OLS, the
“between” estimator, and random effects all produce the expected positive sign,
suggesting that the long-run impact is positive. Suppose you believe that x affects
y positively but there is a lag involved. You regress y_t on x_t and x_{t-1} and are
surprised to find a negative coefficient on x_{t-1}. The explanation for this is that the
long-run impact of x is smaller than its short-run impact.
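The Rao and Miller resolution is one line of arithmetic: in y_t = a*y_{t-1} + b*I_t, the coefficient b is only the short-run impact, while the long-run multiplier is b/(1 - a). With illustrative values:

```python
a, b = 0.5, 0.8           # illustrative lagged-income and investment coefficients
long_run = b / (1 - a)    # long-run multiplier
print(long_run)           # 1.6: above unity even though b < 1
```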
17. Reversed Measure. Suppose you are regressing consumption on a consumer
confidence measure, among other variables, and unexpectedly obtain a negative
sign. This could happen because you didn’t realize that small numbers for the
consumer confidence measure correspond to high consumer confidence. It has
been known3 for an economist to present an entire seminar trying to explain a
wrong sign only to discover afterwards that it resulted from his software reversing
the coding on his logit analysis. 3 I am indebted to Marie Rekkas for this anecdote. 18. Heteroskedasticity. Suppose you are estimating a probit model, with the latent
equation a linear function of x, namely y* = α + βx + ε, but the error ε is
heteroskedastic, with variance σ2 proportional to the square of x. Probit estimates
β/σ, not β, because the likelihood function is based on the cumulative standard
normal density. So the operative latent equation is proportional to α/x + β, in
which the influence of x is reversed in sign. See Wooldridge (2001, p.479) for
discussion.
19. Underestimated Variances. If the variance of a coefficient estimate is
underestimated, an irrelevant variable could be statistically “significant,” of either
sign. The Poisson model assumes that the variance of the counts is equal to its
expected value. Because of this Poisson estimation produces marked
underestimates of coefficient estimates’ variances in the typical case in which
there is overdispersion (the count variance is larger than its expected value).
Researchers often rely on asymptotic properties of test statistics which could be
misleading in small samples. A classic example appears in Laitinen (1978) who
showed that failure to use small-sample adjustments explained why demand
homogeneity had been rejected so frequently in the literature.

What should be done if your double-checking can turn up no reasonable
explanation for the “wrong” sign? Try and get it published. Wrong sign puzzles, such
as the Leontief paradox, are a major stimulus to the development of our discipline.
For example, recent evidence suggests that there is a positive relationship between
import tariffs and growth across countries in the late 19th century, a “wrong” sign in
many economists’ view. Irwin (2002) extends the relevant economic theory to offer
an explanation for this.
There is no definitive list of ways in which “wrong” signs can be generated. In
general, any theoretical oversight, specification error, data problem, or inappropriate
estimating technique could give rise to a “wrong” sign. Observant readers might have
noted that many could be classified under a single heading: Researcher Foolishness. This serves to underline the importance of the first of Kennedy’s (2002) ten commandments of
applied econometrics: Use Common Sense.

REFERENCES
Appleton, D. R., J. M. French, and M. P. J. Vanderpump. 1996. Ignoring a Covariate: An Example of Simpson’s Paradox. American Statistician 50: 340-1.

Baltagi, B. H. and N. Pinnoi. 1995. Public Capital Stock and State Productivity Growth: Further Evidence from an Error Components Model. Empirical Economics 20: 351-9.

Becker, W. E. and M. K. Salemi. 1977. The Learning and Cost Effectiveness of AVT Supplemented Instruction: Specification of Learning Models. Journal of Economic Education 8: 77-92.

Bound, J., C. Brown and N. Mathiowetz. 2001. Measurement Error in Survey Data. In J. J. Heckman and E. E. Leamer (eds), Handbook of Econometrics, vol. V. Amsterdam: North Holland, 3705-3843.

Currie, J. and N. Cole. 1993. Welfare and Child Health: The Link between AFDC Participation and Birth Weight. American Economic Review 83: 971-83.

Friedman, M. 1992. Do Old Fallacies Ever Die? Journal of Economic Literature 30: 2129-32.

Guber, D. C. 1999. Getting What You Pay For: The Debate over Equity in Public School Expenditures. Journal of Statistics Education 7(2). [Online] Available at:

Gylfason, H. 1981. Interest Rates, Inflation, and the Aggregate Consumption Function. Review of Economics and Statistics 63: 233-45.

Hotelling, H. 1933. Review of The Triumph of Mediocrity in Business, by Horace Secrist. Journal of the American Statistical Association 28: 463-5.

Irwin, D. A. 2002. Interpreting the Tariff-Growth Correlation of the Late 19th Century. American Economic Review, Papers and Proceedings 92: 165-9.

Kennedy, P. E. 2002. Sinning in the Basement: What are the Rules? The Ten Commandments of Applied Econometrics. Journal of Economic Surveys 16:

Kramer, W. and R. Runde. 1997. Stocks and the Weather: An Exercise in Data Mining or Yet Another Capital Market Anomaly? Empirical Economics 22: 637-41.

Laitinen, K. 1978. Why is Demand Homogeneity So Often Rejected? Economics Letters 1: 187-91.

Leamer, E. E. 1978. Specification Searches: Ad Hoc Inference with Nonexperimental Data. New York: John Wiley.

Morgan, M. S. 1990. The History of Econometric Ideas. Cambridge: Cambridge University Press.

Moore, H. L. 1914. Economic Cycles – Their Law and Cause. New York: Macmillan.

Rao, P. and R. Miller. 1971. Applied Econometrics. Belmont, CA: Wadsworth.

Robbins, M. and P. E. Kennedy. 2001. Buyer Behaviour in a Regional Thoroughbred Yearling Market. Applied Economics 33: 969-77.

Rowthorn, R. E. 1975. What Remains of Kaldor’s Law? Economic Journal 85: 10-19.

Smith, G. and W. Brainard. 1976. The Value of A Priori Information in Estimating a Financial Model. Journal of Finance 31: 1299-322.

Wooldridge, J. M. 2000. Introductory Econometrics. Cincinnati: South-Western.

Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data. Cambridge, Mass.: MIT Press.

Yule, G. U. 1926. Why Do We Sometimes Get Nonsense Correlations Between Time-Series? Journal of the Royal Statistical Society 89: 1-64.
This note was uploaded on 02/29/2012 for the course EC 413 taught by Professor Staff during the Spring '08 term at Alabama.
hoi :)On Fri, Nov 26, 2004 at 10:13:57PM +0100, Christian Mayrhuber wrote:> Regarding namespace unification + XPath:> For files: cat /etc/passwd/[. = "joe"] should work like in XPath.> But what to do with directories?> Would 'cat /etc/[. = "passwd"]' output the contents of the passwd file> or does it mean to output the file '[. = "passwd"]'?> If the first is the case then you have to prohibit filenames looking > like '[foo bar]'.perhaps we should create a XML/XPath shell and a replacement for thetextutils package instead of implementing all these utilities inside thekernel.Then convert /etc/passwd to /etc/passwd.xml and all is well.-- Martin Waitz[unhandled content-type:application/pgp-signature] | http://lkml.org/lkml/2004/11/30/105 | CC-MAIN-2013-48 | refinedweb | 113 | 61.43 |
We may face a situation where Excel files are located in many different folders and we need to merge them into one table and do some analysis. In this blog post, I’d like to share my experience of how Python machine learning client for SAP HANA can do this job in a very convenient way.
In my case, I have sensor data for different devices recorded day by day and stored in many folders.
There are 1010 files, so it’s impossible to do this manually; we need to write a few scripts. I recommend using Python machine learning client for SAP HANA. It does the table type conversion automatically and supports pandas input. Firstly, let’s get the file name list and store it in a variable called
files.
import os

path = './'
files = []
# r=root, d=directories, f=files
for r, d, f in os.walk(path):
    for file in f:
        if '.xlsx' in file:
            files.append(os.path.join(r, file))
In my case, we only care about the columns "DEVNUM", "RTIME", "FTPTFSPN", and "FTPTFFMI". For "FTPTFSPN", we need to store it as VARCHAR(100). We will store all the data in the SAP HANA table called "PDMS". To use append mode, we set drop_exist_tab=False in the create_dataframe_from_pandas() function.
import pandas as pd
from hana_ml.dataframe import create_dataframe_from_pandas

# `conn` is an existing hana_ml ConnectionContext
for file in files:
    df = pd.read_excel(file, header=1)[["DEVNUM", "RTIME", "FTPTFSPN", "FTPTFFMI"]]
    hana_df = create_dataframe_from_pandas(conn,
                                           pandas_df=df,
                                           table_name="PDMS",
                                           drop_exist_tab=False,
                                           table_structure={"FTPTFSPN": "VARCHAR(100)"},
                                           batch_size=50000)
Now, I can use the dataframe hana_df to do the analysis. In my case, I use the distribution_fit() and cdf() functions to plot a survival curve based on the data.
We can also fetch the data from the SAP HANA table into a single CSV file.
hana_df.collect().to_csv("my_data.csv")
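If you also want the merged table locally (for example, before a HANA connection is available), plain pandas can do the merge with pd.concat. A sketch with small in-memory stand-ins for the per-file frames:

```python
import pandas as pd

# stand-ins for two pd.read_excel(...) results from different folders
df_a = pd.DataFrame({"DEVNUM": [1, 1], "FTPTFFMI": [0.2, 0.3]})
df_b = pd.DataFrame({"DEVNUM": [2], "FTPTFFMI": [0.5]})

merged = pd.concat([df_a, df_b], ignore_index=True)  # one table, fresh index
print(len(merged))  # 3
```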
Python machine learning client for SAP HANA not only provides a user-friendly machine learning interface but also useful functions for importing data into SAP HANA tables.
Timeline
09/07/12:
- 12:24 Ticket #7229 (Detecting if a process was killed by a signal is impossible) created by
- Currently there is no good way of detecting if a process was terminated by …
- 09:28 Ticket #7228 (ghc-pkg prints an awful lot of usage information) created by
- If you misspell a command with ghc-pkg, you are rewarded with 112 lines of …
- 04:51 Ticket #7227 (cannot build ghc-7.6.1 because haddock seg-faults) created by
- […] Can I try it somehow without haddock?
- 03:48 Ticket #5405 (Strange closure type crash when using Template Haskell on OS X Lion) closed by
- worksforme: Thanks - this bug has been open for 13 months with no further info and …
09/06/12:
- 14:54 Milestone 7.6.1 completed
-
- 14:53 Ticket #7226 (bytestring changes in 7.6 branch) created by
- There have been some bytestring changes in the 7.6 branch since the 7.6.1 …
- 09:42 Ticket #7225 ("ghc -C" failed) created by
- --- source file --- module Main where main = return () --- command line …
- 09:24 Ticket #7224 (Polymorphic kind annotations on type classes don't always work as expected) created by
- Consider the following code for defining Atkey-style parameterised monads: …
- 07:43 Ticket #7210 (Bang in front of type name crashes GHC) closed by
- fixed: Thanks for the patch.
- 06:33 Ticket #7215 (miscompilation due to broken interface hash) closed by
- fixed: Merged as 1aa031e7013caf59f3297d29e81ed573eb306356.
- 03:10 Ticket #7223 (Unregisterised and/or via-C compilation broken) created by
- The new codegen broke unregisterised and/or via-C compilation. It should …
- 02:14 Ticket #7185 (Compiled program crashes) closed by
- fixed: Merged as 13a833e51c141165d927325fa0d1bce9ccdab1de.
- 02:04 Ticket #7218 (No type level distinction between BroadcastTChan and TChan) closed by
- fixed: […]
- 01:55 Ticket #6160 (support sub-second resolutions for file timestamps) closed by
- fixed: Thank you for the patch. Applied as: […]
- 01:07 Ticket #7222 (The text "Possible fix: add an instance declaration for ..." is redundant ...) created by
- The current state of affairs: Given a typical type error, for example …
09/05/12:
- 21:01 Ticket #7221 (DataKinds with recursive data and type synonym causing GHC to crash) created by
- When working on an answer to a stackoverflow …
- 10:35 Ticket #7220 (Confusing error message in type checking related to type family, fundep, ...) created by
- (This is related to, but different from, the message which I posted to …
- 07:32 Commentary/Compiler/NewCodeGen/Cleanup edited by
- (diff)
- 07:30 Ticket #7219 (Reinstate constant propagation in some form) created by
- The new codegen doesn't have a constant propagation pass. This used to be …
- 05:53 Commentary/Compiler/NewCodeGen/Cleanup edited by
- (diff)
- 03:59 Ticket #7218 (No type level distinction between BroadcastTChan and TChan) created by
- There is no type level distinction between BroadcastTChan(added in STM …
- 02:15 Ticket #7212 (GHCi segmentation fault) closed by
- wontfix: This problem is caused by (we think) having an XCode that is too old. See …
- 01:44 Commentary/Compiler/NewCodeGen edited by
- remove old irrelevant stuff (diff)
- 01:40 Commentary/Compiler/NewCodeGen/Cleanup created by
-
- 01:27 Ticket #7217 (Unification of type variables in constraints) closed by
- wontfix: On second thought, it is perfect as it is.
09/04/12:
- 16:32 Ticket #7217 (Unification of type variables in constraints) created by
- The following code works: […] But this doesn't: […] With the …
- 05:49 Ticket #3202 (Make XNoMonomorphismRestriction the default in GHCi) closed by
- fixed: As this is a feature request, I don't think we should merge to 7.6 at this …
09/03/12:
- 21:03 Ticket #7216 (Compositional blocking on file descriptors) created by
- The GHC.Event.Thread module provides threadWaitRead, threadWaitWrite :: Fd …
- 19:24 Ticket #7215 (miscompilation due to broken interface hash) created by
- The following script should print 'MyFalse MyTrue' but it prints …
- 18:06 Ticket #7214 (Missing Typeable instances) closed by
- invalid: OK, two seconds after submitting this report, I found out how to do it. …
- 17:49 Ticket #7214 (Missing Typeable instances) created by
- Data.Typeable defines Typeable instances for tuples of length up to 7. My …
- 08:16 Ticket #7213 (Test codeGen/should_compile/massive_array failing on 32-bits) created by
- ezyang identified this problem with -fnew-codegen a while ago and made a …
- 07:48 SharedLibraries edited by
- fix link for PE format part 2 (diff)
- 06:29 Ticket #7212 (GHCi segmentation fault) created by
- Using OS X 10.6.8 on a 2.5 GHz Intel Core i5 machine. XCode 4.0.2 …
- 03:47 Ticket #7193 (darcs 2.8 fails to compile with ghc 7.6) closed by
- fixed: The following patch fixes this ticket, #7193 (NOT #7196 as claimed): …
- 02:36 Ticket #6042 (GHC is bloated) closed by
- invalid: Closing, as there doesn't seem to be anything wrong here, just more code …
- 01:18 Ticket #7211 (Huge space leak on a program that shouldn't leak) created by
- I have a program that works in a small amount of memory on a computer …
09/02/12:
- 17:30 Ticket #7210 (Bang in front of type name crashes GHC) created by
- When adding a bang to a type constructor applied to a type, I forgot to …
09/01/12:
- 18:09 Ticket #7208 (ghci panic: nameModule show{tv a9W}) closed by
- duplicate: It's already fixed then.
- 08:56 Ticket #7196 (Desugarer needs an extra case for casts in coercions) closed by
- fixed: Merged as 3f79e2cf55ac7e002a6fa083821876184f6fe4c9.
- 08:56 Ticket #7177 (Flag -rtsopts not obeyed in hs_init()) closed by
- fixed: Merged as d266a3020038403555f2d2deb903bab4ed1238a6.
- 08:56 Ticket #7175 (Panic when wrongly using a type family as return types for GADTs) closed by
- fixed: Merged as 0d45533cd54ef08fa1e8f432c3f1192c76556504.
- 08:55 Ticket #7173 (Unnecessary constraints in inferred type) closed by
- fixed: Merged as ce721cdc0bc98361fd20defc5f919bb12abe1634.
- 08:55 Ticket #7165 ("match_co bailing out" messages and compiler crash) closed by
- fixed: Merged as cbedd1ce1a96eb330ad938219f0b52801ce862dc.
- 08:54 Ticket #7164 (Confusing "not a (visible) method" warning when method name clashes with ...) closed by
- fixed: Merged as 87511d1ca0f4be6df208287c2a6c84aa85f45b70.
- 08:54 Ticket #7149 (Heap profiling restricted with retainers (+RTS -hrfoo -hc) segfaults) closed by
- fixed: Merged as 66cb7e7293709d573c0d5e320507214e64127fde.
- 08:54 Ticket #7101 (Specialise broken for implicit parameters) closed by
- fixed: Merged as 20b25bc688b7a6257cb466d9c70c214dafa369c6.
- 08:53 Ticket #7092 (Spurious shadowing warnings for names generated with newName) closed by
- fixed: Merged as 2caef4d67eaa3a14d2873df0a31f6afba69a308c.
- 08:53 Ticket #7090 (Panic "mkCoVarLCo" with ConstraintKinds and type-level equality) closed by
- fixed: Merged as 428bee9c31d1f9ea37e72885dd41baba6c016811.
- 08:52 Ticket #7170 (Foreign.Concurrent finalizer called twice in some cases) closed by
- fixed: Merged as 7a6acb111f6013edafcd8761d496fa06c64d7b75.
- 08:51 Ticket #7160 (C finalizers are reversed during GC) closed by
- fixed: Merged as 4709d3e1c493536e6e3058ae15de0d86c01e417a.
- 08:51 Ticket #6156 (Optimiser bug on linux-powerpc) closed by
- fixed: Merged as ef4218994742e8400a48b4d6e1ae7e6b67650dc4.
- 08:50 Ticket #5205 (Control.Monad.forever leaks space) closed by
- fixed: Merged as ef4218994742e8400a48b4d6e1ae7e6b67650dc4.
- 08:49 Ticket #7178 (Panic in coVarsOfTcCo) closed by
- fixed: Merged as ef4218994742e8400a48b4d6e1ae7e6b67650dc4.
- 08:49 Ticket #7172 (GHCi :issafe command doesn't work) closed by
- fixed: Merged as 46e88e6ef397d16c034fc2348867ec2054114bd0 and …
- 08:48 Ticket #7167 (Make it a warning (not error) to hide an import that isn't exported) closed by
- fixed: Merged as 68fd5dcd5118816e03d6c5e23533faa298d34834.
- 08:47 Ticket #7040 (linker failures with foreign global data) closed by
- fixed: Merged as 29ec96c89d19c1b40a8990466424ff35be096780.
08/31/12:
- 08:56 Ticket #7209 (haddock fails with "internal error: spliceURL UnhelpfulSpan") created by
- This bug has already been submitted on the haddock trac system. …
- 07:12 Ticket #7208 (ghci panic: nameModule show{tv a9W}) created by
- I was Editing some modules of my project, then I just tried > ghci …
- 05:32 Ticket #7207 (linker fails to load package with binding to foreign library (win64)) created by
- GHCI is unable to load some packages on Win64, the examples are given for …
- 05:11 Ticket #7202 (Linux bindists don't work on new distros) closed by
- fixed: Our build machines are all on Ubuntu 12.04 now, which has libgmp.so.10, …
- 05:08 Ticket #7201 (ghc assumes that ld can understand --hash-size [regression]) closed by
- duplicate: Thanks for the report - we already have a ticket for this at #6063
- 05:06 Ticket #7206 (Implement cheap build) created by
- We sometimes see stuff like this: […] You might think the (++) would …
- 04:34 Ticket #7205 (Re-introduce left/right coercion decomposition) created by
- Suppose we have […] You might think this would obviously be OK, but …
- 03:54 Ticket #7204 (Use a class to control FFI marshalling) created by
- There has been a string of tickets concerning argument/result types for …
08/30/12:
- 19:10 Ticket #7203 (Add scanl') created by
- The presence of foldl' and foldl1' suggests the addition of scanl' (and …
08/29/12:
- 11:36 Ticket #7202 (Linux bindists don't work on new distros) created by
- All of the binary distributions are built on systems that have …
- 11:33 Ticket #7201 (ghc assumes that ld can understand --hash-size [regression]) created by
- On my Fedora 17 box, I'm using gold as the default linker, and I cannot …
- 08:19 Ticket #7200 (template-haskell-2.7.0.0 fails to build with GHC 7.0.4 due to missing ...) created by
- It looks like there's a missing pragma: […]
- 07:04 Ticket #7199 (Standalone deriving Show at GHCi prompt causes divergence when printing) closed by
- invalid: This code […] does not derive a Show instance. To do that you need …
- 06:01 Ticket #7199 (Standalone deriving Show at GHCi prompt causes divergence when printing) created by
- Deriving a show instance for a data type (defined either with the standard …
- 05:44 Ticket #7198 (New codegen more than doubles compile time of T3294) created by
- I did some preliminary investigation, and there seem to be a couple of …
- 04:03 Ticket #7197 (ghc panic: Irrefutable pattern failed) closed by
- duplicate: Thanks. Always worth searching Trac first... this is just a dup of #7093, …
- 03:45 Ticket #7197 (ghc panic: Irrefutable pattern failed) created by
- I get following error, when trying to compile following snippet: […] …
- 00:21 Commentary/GSoCMultipleInstances edited by
- (diff)
08/28/12:
- 23:55 Ticket #7196 (Desugarer needs an extra case for casts in coercions) created by
- Ganesh (via Darcs) found the code below crashes GHC 7.6rc1, thus: […] …
- 07:21 Ticket #7195 (Add edit warning to Parser.y.pp) created by
- Hi, I lost some (very little, in this case) work because I hacked on …
- 05:29 Commentary/Compiler/Demand edited by
- (diff)
- 04:52 Ticket #7194 (Typechecker allows a skolem to escapt) created by
- This program make GHC 7.4 and GHC 7.6 rc1 give a Lint error, because a …
- 04:45 Ticket #7193 (darcs 2.8 fails to compile with ghc 7.6) created by
- Ganesh Sittampalam reports the following failure when building darcs 2.8 …
- 04:25 Commentary/Compiler/Demand edited by
- (diff)
- 01:56 Ticket #7192 (Bug in -fregs-graph with -fnew-codegen) created by
- This is triggered by running the test dph-diophantine-opt with …
- 01:08 Commentary edited by
- (diff)
- 01:00 Status/Oct12 created by
-
Note: See TracTimeline for information about the timeline view. | http://hackage.haskell.org/trac/ghc/timeline?from=2012-09-27T04%3A40%3A45-0700&precision=second | CC-MAIN-2013-20 | refinedweb | 1,840 | 53.95 |
thank you, I will attempt to start it :D
Type: Posts; User: Leeds_Champion
thank you, I will attempt to start it :D
Convert the C program you wrote in Assignment 1 that decides whether three integers inputted representing a triangle are invalid, equilateral, scalene or isosceles into Java. You should now adapt the...
thanks helloworld! Thats exactly what I needed :D
I'm still at a loss with Arrays, I have no idea how to even start writing this program, it doesn't even seem possible :/
:confused:
um, I'm not sure lol...
this is what I have so far
import B102.*;
ok, I have to write a code for this question
Create a program that reads a list of vowels (a, e, i, o, u) and stores them in an array. The maximum number of vowels to be read should be obtained... | http://www.javaprogrammingforums.com/search.php?s=a5903ad9f73a4e716692532aecb3b96f&searchid=477230 | CC-MAIN-2013-20 | refinedweb | 143 | 64.75 |
A python utils ubirch for ubirch anchoring services.
Project description
ubirch library for ubirch anchoring services
This library contains several useful tools used to connect to a SQS server and anchor messages retrieved from a queue to the IOTA Tangle or the Ethereum Blockchain.
Usage
Configuration, connection to a SQS server and retrieving queues.
To set up the different arguments needed to connect to the SQS Server and to access the ETH Wallet.
from ubirch.anchoring import * args = set_arguments(servicetype='ethereum') # Or 'iota" #To access the SQS Queue url = args.url region = args.region aws_secret_access_key = args.accesskey aws_access_key_id = args.keyid #To unlock your wallet password = args.pwd keyfile = args.keyfile queue1 = getQueue('queue1', url, region, aws_secret_access_key, aws_access_key_id)
Polling a queue. | https://pypi.org/project/ubirch-python-utils/1.0.2/ | CC-MAIN-2019-51 | refinedweb | 120 | 61.33 |
I worked on a tiny piece of the Embrace sculpture at Burning Man 2014.
Inside were 2 hearts, one made here in Portland by Lostmachine Andy & others at Flat Rat Studios. I made electronics to gradually fade 4 incandescent light bulbs in heart beating patterns.
These wonderful photos where taken by Sarah Taylor.
Inside the enormous sculpture were two hearts. The blue on was built by a group in Vancouver, B.C., Canada, and of course this one was built here in Portland, Oregon, USA.
Andy wanted this heart to have a very warm, gentle atmosphere, with warn incandescent bulbs slowly fading to create the heart beat. These effect turned out quite well. Andy really knows his stuff!
Here’s a great time-lapse video where you can see the slow, gradual incandescent light fading as a rapid heart beat. Skip forward to about 0:36 in this video.
The light fading was done using a Teensy-based 4-channel AC dimmer board, on this 4 by 3.5 inch circuit board.
Here’s a quick video, from the first test of the light controller.
Four BT139X Triacs that actually switch the AC voltage are mounted on the bottom side to a heatsink that’s meant to dissipate any heat to the metal case. Originally Andy believed the lights might be 500 watts each, so I was concerned about heat. In the end, four 60 watt bulbs were used and the Triacs did not get noticeably warm.
Here is a parts placement diagram for building the circuit board. Two boards were built, the one that ran the project and a spare… just in case!
The PCB cad files are attached below, if anyone wants to make more of these boards.
The AC switching circuitry was basically Fairchild Semiconductor’s recommended circuit for the MOC3023 optical isolator, which allows a Teensy 2.0 board to safely control the AC voltage. Four copies of this circuit were built on the board.
This circuit requires the Teensy 2.0 to know the AC voltage timing, so it can trigger the Triac at the right moment. Triggering early in the AC waveform causes the Triac to conduct near the full AC voltage for maximum brightness. Triggering later reduces the brightness.
To get the AC timing, I built this special power supply onto the board.
The Teensy 2.0 receives pulses on pins 5 and 6 as the AC waveform cycles positive and negative.
One caveat is this approach depends on the AC voltage being a sine wave. The AC voltage was one of the first questions I asked Andy, and he was told Burning Man would supply a true sine wave AC voltage. When he got out there, it turned out the power was actually a “modified sine wave”, which really isn’t anything like a sine wave. This circuit didn’t work well. Fortunately, they were able to run the lighting from a small generator that produced a true sine wave.
With the AC timing arriving on pins 5 and 6, and 4 pins able to trigger Triacs, and 3 pins connected to analog voltages for changing speed, brightness and pattern, the only other major piece of this technology puzzle is the software.
In this code, loop() tracks the changes in the waveform on pins 5 & 6, and it fires the Triacs at their programmed times. 120 times per second (each AC half cycle), the recompute_levels() function runs, which reads the analog controls and changes the Triac time targets, which loop() uses to actually control the voltage outputs.
Here’s all the code:
void setup() { pinMode(0, INPUT_PULLUP); // unused pinMode(1, INPUT_PULLUP); // unused pinMode(2, INPUT_PULLUP); // unused pinMode(3, INPUT_PULLUP); // unused pinMode(4, INPUT_PULLUP); // unused pinMode(5, INPUT); // Phase A pinMode(6, INPUT); // Phase B pinMode(7, INPUT_PULLUP); // unused pinMode(8, INPUT_PULLUP); // unused pinMode(9, INPUT_PULLUP); // unused pinMode(10, INPUT_PULLUP); // unused digitalWrite(11, LOW); pinMode(11, OUTPUT); // LED digitalWrite(12, HIGH); pinMode(12, OUTPUT); // trigger4, low=trigger digitalWrite(13, HIGH); pinMode(13, OUTPUT); // trigger3, low=trigger digitalWrite(14, HIGH); pinMode(14, OUTPUT); // trigger2, low=trigger digitalWrite(15, HIGH); pinMode(15, OUTPUT); // trigger1, low=trigger pinMode(16, INPUT_PULLUP); // unused pinMode(17, INPUT_PULLUP); // unused pinMode(18, INPUT_PULLUP); // unused analogRead(19); // pot #3 analogRead(20); // pot #2 analogRead(21); // pot #1 pinMode(22, INPUT_PULLUP); // unused pinMode(23, INPUT_PULLUP); // unused pinMode(24, INPUT_PULLUP); // unused } uint8_t pot1=0, pot2=0, pot3=0; uint8_t level1=100, level2=128, level3=0, level4=250; uint8_t phase_to_level(uint16_t phase) { uint16_t amplitude; // 10923 = 32768 / 3 // 0 to 10922 = increasing: 0 -> 32767 // 10923 to 21845 = decreasing: 32767 -> 0 // 21846 to 32768 = increasing: 0 -> 32767 // 32769 to 43691 = decreasing: 32767 -> 0 // 43692 to 65535 = resting: 0 if (phase < 10923) { amplitude = phase * 3; } else if (phase < 21845) { phase = phase - 10923; phase = 10922 - phase; amplitude = phase * 3; } else if (phase < 32768) { phase = phase - 21846; amplitude = phase * 3; } else if (phase < 43691) { phase = phase - 32769; phase = 10922 - phase; amplitude = phase * 3; } else { amplitude = 0; } //amplitude = (phase < 32768) ? phase : 65535 - phase; amplitude >>= 6; // range 0 to 511 amplitude *= (pot2 + 84) / 6; // amplitude += 6000 + pot2 * 8; // minimum brightness return (amplitude < 32768) ? 
amplitude >> 7 : 255; } void recompute_levels() { static uint16_t phase=0; static uint8_t n=0; analog_update(); //Serial.print("pot: "); //Serial.print(pot1); //Serial.print(", "); //Serial.print(pot2); //Serial.print(", "); //Serial.print(pot3); phase += (((uint16_t)pot1 * 83) >> 5) + 170; //Serial.print(", phase: "); //Serial.print(phase); if (pot3 < 128) { level1 = phase_to_level(phase); level2 = level1; level3 = phase_to_level(phase + pot3 * 52); level4 = level3; } else { uint16_t n = (pot3 - 127) * 26; level1 = phase_to_level(phase); level2 = phase_to_level(phase + 6604 - n); level3 = phase_to_level(phase + 6604); level4 = phase_to_level(phase + 6604 + n); } //Serial.print(", levels: "); //Serial.print(level1); //Serial.print(", "); //Serial.print(level2); //Serial.print(", "); //Serial.print(level3); //Serial.print(", "); //Serial.print(level4); //Serial.println(); } void loop() { uint8_t a, b, prev_a=0, prev_b=0, state=255, triggered=0; uint32_t usec, abegin, bbegin, alen, blen; uint16_t atrig1, atrig2, atrig3, atrig4; uint16_t btrig1, btrig2, btrig3, btrig4; bool any; while (1) { // read the phase voltage and keep track of AC waveform timing a = digitalRead(5); b = digitalRead(6); if (a && !prev_a) { // begin phase A usec = micros(); if (state == 0) { state = 1; abegin = usec; triggered = 0; Serial.print("A"); Serial.println(usec); } else if (state == 255) { state = 11; abegin = usec; } else { state = 255; } } if (!a && prev_a) { // end phase A usec = micros(); if (state == 1) { state = 2; alen = usec - abegin; Serial.print("a"); Serial.print(usec); Serial.print(","); Serial.println(alen); if (alen < 12000) { // compute trigger offsets for next A phase recompute_levels(); atrig1 = level1 ? ((256 - level1) * alen) >> 8 : 30000; atrig2 = level2 ? ((256 - level2) * alen) >> 8 : 30000; atrig3 = level3 ? ((256 - level3) * alen) >> 8 : 30000; atrig4 = level4 ? 
((256 - level4) * alen) >> 8 : 30000; } else { state = 255; } } else if (state == 11) { state = 12; alen = usec - abegin; } else { state = 255; } } if (b && !prev_b) { // begin phase B usec = micros(); if (state == 2) { state = 3; bbegin = usec; triggered = 0; Serial.print("B"); Serial.println(usec); } else if (state == 12) { state = 13; bbegin = usec; } else { state = 255; } } if (!b && prev_b) { // end phase B usec = micros(); if (state == 3) { state = 0; blen = usec - bbegin; Serial.print("b"); Serial.print(usec); Serial.print(","); Serial.println(blen); if (blen < 12000) { // compute trigger offsets for next B phase recompute_levels(); btrig1 = level1 ? ((256 - level1) * blen) >> 8 : 30000; btrig2 = level2 ? ((256 - level2) * blen) >> 8 : 30000; btrig3 = level3 ? ((256 - level3) * blen) >> 8 : 30000; btrig4 = level4 ? ((256 - level4) * blen) >> 8 : 30000; } else { state = 255; } } else if (state == 13) { state = 0; blen = usec - bbegin; } else { state = 255; } } prev_a = a; prev_b = b; // trigger triacs at the right moments if (state == 1) { usec = micros(); any = false; if (!(triggered & 1) && usec - abegin >= atrig1) { digitalWrite(15, LOW); triggered |= 1; any = true; //Serial.println("trig1(a)"); } if (!(triggered & 2) && usec - abegin >= atrig2) { digitalWrite(14, LOW); triggered |= 2; any = true; //Serial.println("trig2(a)"); } if (!(triggered & 4) && usec - abegin >= atrig3) { digitalWrite(13, LOW); triggered |= 4; any = true; //Serial.println("trig3(a)"); } if (!(triggered & 8) && usec - abegin >= atrig4) { digitalWrite(12, LOW); triggered |= 8; any = true; //Serial.println("trig4(a)"); } if (any) { delayMicroseconds(25); digitalWrite(15, HIGH); digitalWrite(14, HIGH); digitalWrite(13, HIGH); digitalWrite(12, HIGH); } } else if (state == 3) { usec = micros(); any = false; if (!(triggered & 1) && usec - bbegin >= btrig1) { digitalWrite(15, LOW); triggered |= 1; any = true; //Serial.println("trig1(b)"); } if (!(triggered & 2) && usec - bbegin >= btrig2) { 
digitalWrite(14, LOW); triggered |= 2; any = true; //Serial.println("trig2(b)"); } if (!(triggered & 4) && usec - bbegin >= btrig3) { digitalWrite(13, LOW); triggered |= 4; any = true; //Serial.println("trig3(b)"); } if (!(triggered & 8) && usec - bbegin >= btrig4) { digitalWrite(12, LOW); triggered |= 8; any = true; //Serial.println("trig4(b)"); } if (any) { delayMicroseconds(25); digitalWrite(15, HIGH); digitalWrite(14, HIGH); digitalWrite(13, HIGH); digitalWrite(12, HIGH); } } } } #define ADMUX_POT1 0x60 #define ADMUX_POT2 0x61 #define ADMUX_POT3 0x64 void analog_update() { static uint8_t count=0; switch (count) { case 0: // start conversion on pot #1 ADMUX = ADMUX_POT1; ADCSRA |= (1<<ADSC); count = 1; return; case 1: // read conversion on pot #1 if (ADCSRA & (1<<ADSC)) return; pot1 = ADCH; ADMUX = ADMUX_POT2; count = 2; return; case 2: // start conversion on pot #2 ADMUX = ADMUX_POT2; ADCSRA |= (1<<ADSC); count = 3; return; case 3: // read conversion on pot #2 if (ADCSRA & (1<<ADSC)) return; pot2 = ADCH; ADMUX = ADMUX_POT3; count = 4; return; case 4: // start conversion on pot #3 ADMUX = ADMUX_POT3; ADCSRA |= (1<<ADSC); count = 5; return; case 5: // read conversion on pot #3 if (ADCSRA & (1<<ADSC)) return; pot3 = ADCH; ADMUX = ADMUX_POT1; count = 0; return; default: count = 0; } }
This article was originally published on the DorkbotPDX website, on September 3, 2014. In late 2018, DorkbotPDX removed its blog section. An archive of the original article is still available on the Internet Archive. I am republishing this article here, in the hope it may continue to be found and used by anyone interested in the Embrace art installation or any other project needing sequenced AC light dimming effects.
These comments where written on the old site:
From Brandon:
Once we determined that the AC source was modified sine, I knew there wasn’t anything I could do to help the situation easily in the middle of the desert 🙂
As someone who looked over the board and what not, nice job! Worked really well and looked amazing in operation (on the right power source).
Rev two, perhaps rectified DC drive of the incandescent lights to avoid the modified sine issue? So many folks use those types of inverters to cut costs on big artwork solar installations.
Cheers and thanks for contributing!
From Anonymous:
The embrace structure was prettty cool, I got a chance to explore it at Burninman this year. Wish I got to see it burn down, the videos looked amazing. I assume you guys removed the heart materials before that happened. | https://www.pjrc.com/embrace-heart-lighting/ | CC-MAIN-2019-09 | refinedweb | 1,762 | 60.75 |
Hi,
I am developing an UI app, in WPF, for the Log window, I have xaml code
So, I have a grid view with 2 headers Message_Name and DateTime, and I want to assign the data to the 2 columns... I am not able to get the data displayed by using
lstViewLogWindow.Items.Add(msg);
Please help me in doing this.Thanks
Ramm
use this way
protected void LinkButton2_Click(object sender, EventArgs e)
{
((TextBox)gvIngredient.HeaderRow.FindControl("lblMsg")).Text = "message here";
((TextBox)gvIngredient.HeaderRow.FindControl("lblDate")).Text = "date here";
}
HI A K...
I couldnt get gvIngredient.HeaderRow ..
I cant add element to the grid view this way.. \
actually I will receive a msg from some other function i have to assign that msg to the first column of the grid view.. so the second column will be date time.
Please help me
Thanks
Hi, you could bind the ItemsSource property to an ObservableCollection<Data> object where the Data is something like this:
public class Data
{
public string Message_Name { get; set; }
public DateTime DateTime { get; set; }
}
You could then add items to the ObservableCollection object instead of adding directly to the Items property.
you can add the datas to datatable or dataset , and set the datatable as the datacontext for listview as
DataTable dt = new DataTable();
dt.Columns.Add("Message_Name", typeof(System.String));
dt.Columns.Add("DateTime", typeof(System.DateTime));
DataRow dr = dt.NewRow();
dr[0] = "test";
dr[1] = DateTime.Now;
dt.Rows.Add(dr);
DataRow dr1 = dt.NewRow();
dr1[0] = "test 2";
dr1[1] = DateTime.Now.AddDays(1);
dt.Rows.Add(dr1);
DataRow dr2 = dt.NewRow();
dr2[0] = "test 3";
dr2[1] = DateTime.Now.AddDays(2);
dt.Rows.Add(dr2);
lstViewLogWindow.DataContext = dt;
System.Data.
dt.Columns.Add(
dr[0] = msg;
dr[1] =
dt.Rows.Add(dr);
lstViewLogWindow.DataContext = dt;
As I see only one column, DateTime.now is not visible in the second column...Also, it shows only one log?? this will be called manytimes...
Please help me..
<GridViewColumn Header="DateTime" Width="Auto" DisplayMemberBinding="{Binding Path= DateTime}" />
in above XAML, you are given binding column name as 'DateTime'.
but in your datatable you are named as 'LogDate'.
change the column name as 'DateTime'.
And , you have added only one row to datatable, so it will show only one row.
HI Vasanth..
I have changed the Header name..but it doesnt take it..
Actually I should list all the msg's in the log window.. so I should nt clear it everytime right(by creating new data table...)
My req is.. if I select 1. downloading of data.. then on click of button.. the log window should have the msg.. "downloading started.." 08/09/09 ... 2. downloading stopped... then log window msg should append to the first msg(previously printed....) " downloading stopped 08/09/09
downloading started.. 08/09/09
downloading stopped 08/09/09
But as of now i see only downloading started msg.. not the other msgs... | http://www.nullskull.com/q/10117182/how-to-add-text-to-the-gridview-in-wpf.aspx | CC-MAIN-2014-41 | refinedweb | 489 | 61.53 |
This patch adds a python method to get openGL bind code of material's texture according to the texture slot.
Example:
import bge
cont = bge.logic.getCurrentController()
own = cont.owner
bindId = own.meshes[0].materials[0].getTextureBindcode(0)
Test file:
This can be used to play with texture in openGL, for example, remove mipmap on the texture or play with all wrapping or filtering options.
And this can be used to learn openGL with Blender.
I don't know in what case existing "bindId" attribute can be used. But I nevertheless replaced "OpenGL Bind Name" with "OpenGL Bind code/Id/Number" (I know that we don't have to make "unrelated" patch modifs but it's a small modification).
The patch can be used to learn how to use textures with openGL in Blender. I try to learn openGL and this is the main reason why I propose this patch. I think Blender Game Engine is not only a tool to make games, but can be a great tool to learn python, GLSL, openGL.
How is this property different from bindId? Taking a quick look at the source code, it looks like this is meant to be an OpenGL name. However, it looks like there are some cases where it has not been set. Could we just make sure m_actTex gets set in more cases?
I changed the place of the code (moved from Texture.cpp to KX_BlenderMaterial.cpp) because of the discussion I had with Kupoman:
Sorry, a mistake in the previous patch.
This looks better :)
That's great!
It's normal to specify the argument type here ?
Same thing here for all prints and doc.
inline comments done
Looks good to me.
We have a similar option for VideoTexture images if I remember correctly. There we have a direct access to the bind id without using a get method. Shouldn't we unify this?
See
@Dalai Felinto (dfelinto): This was my first intention to replace existing attribute (bindId) that returns m_actTex (dynamic texture generated in Texture.cpp) openGL binId with the original texture bind Id, because I don't understand in what case m_actTex can be useful.
But: | https://developer.blender.org/D1804?id=6124 | CC-MAIN-2020-50 | refinedweb | 360 | 75.71 |
Improving Our Use of PHP Namespaces
We were right to do it wrong
Let's step back and think about why we use namespaces, and how to realize their full advantages. I suspect there's a lingering hesitance to embrace their usefulness. For years we've built the sensible habit of naming our symbols with appropriate specificity to avoid naming collisions. I'm talking about naming your class something like MyCompany_Loader, or something similarly specific to your context. This informal namespacing was a great stop-gap. But once we make the switch to formal namespacing, we should reconsider what impact this should have on our symbol names.
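That shift can be sketched in a few lines of PHP. Both declarations live in one file here, using bracketed namespace syntax, purely so the contrast is visible side by side:

```php
<?php

// Informal namespacing: the collision-avoiding context is baked
// into the class name itself (global namespace).
namespace {
    class MyCompany_Loader {}
}

// Formal namespacing: the namespace carries that context, so the
// class name can drop the redundant prefix.
namespace MyCompany {
    class Loader {}
}
```

With the formal version, calling code can reference `\MyCompany\Loader` directly, or shorten it with `use MyCompany\Loader;` — the prefix no longer has to travel with the class name.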
How to properly use PHP namespaces
The specific notion on my mind is that I'm seeing a redundant mix of formal and informal namespacing. Let's start a conversation about this, while it's still somewhat early in the game. At this point (I'm *always* willing to be convinced otherwise), I contend that we should completely eradicate the informal namespacing conventions if we are going to adopt formal namespacing in our apps. Here's a simple example that illustrates the general direction I think we should move in (derived from the Drupal Extension project, a Behat Mink component).
Example (Let's move away from this):
namespace MyCompany;

use Drupal\DrupalExtension\Context\DrupalContext;

class MyCompanyContext extends DrupalContext {}
Proposal (Let's move toward this):
namespace MyCompany;

class Context extends \Drupal\Extension\Context {}
Notice a few specific things:
- In the first example's use statement, the word Drupal occurs 3 times; 2 can be removed
- In the first example's use statement, the word Context occurs twice; 1 can be removed.
- In the first example, both "localized" class names had redundant occurrences of the company name. This is specifically what formal namespaces exist to eliminate.
These are essentially equivalent, but the latter removes several redundant components and delivers what namespaces are supposed to bring us. Notice also that I removed a `use` statement and instead used the fully-qualified class name in the code. This is an important part of what I'm proposing. I think it's too simplistic to advocate for moving all use of namespaces to the top of a file and only using "short names" for symbols in other namespaces. This is particularly important when you are extending a symbol. Code readability is inversely related to how often I need to scroll to the top of the file to figure out which specific version of a symbol is being used.
To Summarize
- Don't use redundant name components
- Don't be afraid to use fully-qualified names in-context, if appropriate.
The cases where it's not really necessary to use fully-qualified names in-context are when you are only "consuming" a symbol, and not extending it. For example, if you are going to use a Widgetizer library, but not extend it, feel free to use the `use` statement and simplify the use of the symbol throughout the file. The important distinction is that you don't have potentially ambiguous names if you are only pulling in one namespace with that symbol.
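Here's what that consuming case might look like. "Widgetizer" is the hypothetical library named above; it's stubbed inline here so the file runs on its own:

```php
<?php

// Stub of the hypothetical third-party Widgetizer library, defined
// inline so this example is self-contained.
namespace Widgetizer {
    class HtmlWrapper {
        public function wrap(string $content): string {
            return '<div>' . $content . '</div>';
        }
    }
}

namespace MyCompany {
    // We only *consume* HtmlWrapper (never extend it), and no local
    // class shares its name -- so a plain `use` keeps every call
    // site short without any ambiguity.
    use Widgetizer\HtmlWrapper;

    function render(string $content): string {
        return (new HtmlWrapper())->wrap($content);
    }
}
```

Because `HtmlWrapper` appears at several call sites but is never subclassed, the single `use` line improves readability rather than hiding anything important.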
Class aliasing
Instead of extending parent or base classes by referencing the fully-qualified namespace, I would personally suggest using the aliasing feature of "use" statements. In your example:
namespace MyCompany;
use Drupal\DrupalExtension\Context as BaseContext;
class Context extends BaseContext {}
Wed, 11/07/2012 - 19:59
In reply to Class aliasing by tstoeckler
`use` vs in-context fully-qualified names
Thanks for the comment, tstoeckler. I think the alias feature of namespaces is useful, and you're right that it is underrepresented in the post. As I mull over my thoughts on its application I'm trying to come to a rule of thumb which explains my attitude. I think it has to do with in-context readability (think line-level context) and the plurality of the use of the name.
For instance when we are extending a class, we are only likely to use that name exactly once in our entire file. Personally, I think it improves understandability to use the fully-qualified name in-context in that case. I have similar disposition toward other single-use cases.
The value of `use` (and then the added value of the alias) comes when you have a plural use of a name, and that name is not likely to have cognitive interference with similar names in your file. For example, if you are using another library's "Decorator" class in several places, and you don't have your own "Decorator" class in your local (or semi-local) namespace, this is a fantastic time to use `use`. Further, if you are using several variants of "Decorator", or perhaps you *do* have a (semi-)local "Decorator" class, aliases are absolutely a great tool to improve readability across the entire file.
use ACME\Framework\Components\Decorator\HtmlWrapper as ACMEHtmlWrapper;
use MyCompany\Decorator\HtmlWrapper as MyCompanyHtmlWrapper;
Wed, 11/07/2012 - 19:45
Editorial Note
My use of language is always improved when Jaymz Rhime takes a look at my work. Thanks for the input!
[removed use of "leverage" as a verb.]
good proposal
+2 from me, the second case is so much friendlier and easier to read for newer developers as well.
Is there an issue in the queue about this? perhaps?
Wed, 11/07/2012 - 21:05
In reply to good proposal by kscheirer
Thanks, started conversation
Thanks!
I've opened the conversation on that page, hopefully we'll get a good conversation on the topic.
Thu, 11/08/2012 - 06:43
In reply to Thanks, started conversation by Chris Trahey
Don't comment on handbook pages
They get deleted. Instead, open an issue in the issue queue and tag it "coding standards". Note: There's a metric ton of prior discussion on this subject for Drupal 8, and while that doesn't mean we can't change things anymore it does mean you're likely to run into a fair bit of "we've been over this a hundred times already!" resistance.
One of the main impetuses for the current standard (always "use") was simplicity. There was a lot of pushback against \ characters in code when PHP 5.3 came out, so we decided to just avoid them anywhere but in the file headers so as to not confuse people. It also means we have an automatic index of all dependencies at the top of the file. That's nothing to sneeze at.
Thu, 11/08/2012 - 17:52
In reply to Don't comment on handbook pages by Larry Garfield
Thanks
Thanks for the tip.
I do consider the 'use' thing to be a secondary concern and purely about style, so I'm not too concerned with trying to change the community's mind about that. The more compelling change, in my opinion, is the removal of redundant identifiers in names, as I think it reduces the value of segmented namespaces in general by reducing the semantic value of each component.
Thu, 11/08/2012 - 19:22
In reply to Thanks by Chris Trahey
Already discussed
We already had that discussion, actually. If you find a class with "Drupal" in its name anywhere but the first namespace component, 98.6% chance you should file a bug about it. The ending guideline was "don't be redundant, unless the class shortname is unacceptably ambiguous otherwise, then put in just the little bit you need." And "Drupal" is *almost* never appropriate in a class name.
I wish you'd been around for these discussions in the issue queue over the past year...
Fri, 11/09/2012 - 18:03
In reply to Already discussed by Larry Garfield
Issue created
Thanks, Larry! Now that I'm here at Metal Toad, my participation in the community will be well supported.
I've created an issue for the DrupalExtension in particular, but note in that ticket my reformed opinion on \Drupal\DrupalExtension (due to the project name not being simply 'extension'... brings up interesting other thoughts).
However, I still found a handful of redundancies in the project to discuss. Perhaps I'll patch if the maintainers are positive about the idea.
I haven't scoured Drupal yet, but I did notice that the "Exception" namespace is guilty of several redundancies.
tstoeckler
Wed, 11/07/2012 - 22:00
Coding standards, schmoding standards
While I agree that use can be more or less useful depending on circumstance, the Drupal (8) coding standards dictate to always use namespaced classes at the top of the file, even if they are used only once... The point, as far as I know, is that in such a case consistency wins over the added clarity you describe in that particular case.
Wed, 11/07/2012 - 22:31
In reply to Coding standards, schmoding standards by tstoeckler
Hopefully we are early in the trend
I agree about consistency/clarity so long as it's not a stark contrast between the two (in this case it really isn't).
However, I hope we are early enough in the collective conversation about how we (the PHP community, not just the Drupal community) will use namespaces.
boombatower (J…
Wed, 11/07/2012 - 22:10
Agree
I agree with removing essentially redundant namespacing as it makes things cleaner, uses namespaces properly, and is easier to read. I've been using this approach in some of my side projects and have been pleased with the results.
As for aliasing: if you're just extending and never referencing the name again, it makes more sense not to bother with the alias, as you wrote as well.
I work with some code that does not follow this convention and the names get absurdly long and are no better for it + harder to read.
Ken
Wed, 11/07/2012 - 22:16
Nice article, Chris. Was a
Nice article, Chris. Was a good read.
Wed, 11/07/2012 - 22:17
Quite a no-brainer
Am I the only one who thought this was a no-brainer?
I mean, namespaces are there to avoid name conflicts, this allows us to use the best names for the classes without worrying about conflicts.
I am definitely in favor of moving in the way you're describing there. It's a no-brainer.
Thanks for the article! I've
Thanks for the article! I've only used namespaces in a limited fashion in other languages, so it's helpful to see that what I'm encountering in the core isn't a) the only pattern or b) unusual. I look forward to the changes in D8 and would love to follow along with a core issue to remove the redundancy in identifiers.
larowlan
Fri, 11/09/2012 - 21:25
on a lighter note
The term for the first example is 'Smurf naming convention'; see
;-)
Michael Butler
Fri, 10/06/2017 - 16:16
How strict to follow?
Commenting on a 5 year old article, not sure if anyone will see this...
The argument in this article makes sense, repeating "Drupal" multiple times is not a good usage of namespace naming. However, take the example of an Exceptions package. Would you want to have:
"MyProject\Exception\Database\DuplicateKeyException" -- Notice how the word "Exception" is repeated in the classname, and the corresponding filename (if using PSR-4) would be "DuplicateKeyException.php".
Or would you want to have, as this article suggests:
"MyProject\Exception\Database\DuplicateKey" with the corresponding filename (if using PSR-4) DuplicateKey.php? In the latter, it's not immediately clear that DuplicateKey is an exception... you'd have to traverse up the namespace stack to understand that.
throw new DuplicateKey() looks weird when you read it.
tstoeckler
Wed, 11/07/2012 - 19:36 | https://www.metaltoad.com/blog/improving-our-use-php-namespaces | CC-MAIN-2020-16 | refinedweb | 1,954 | 60.45 |
Python’s Innards: pystate
2010/05/26 § 11 Comments
We started our series discussing the basics of Python’s object system (Objects 101 and 102), and it’s time to move on. Though we’re not done with objects by any stretch of the imagination, when I think of Python’s implementation I visualize this big machine with a conveyor belt feeding opcodes into a hulking processing plant with cranes and cooling towers sticking out, and I just have to friggin’ peer inside already. So whether our knowledge of objects is complete or not, we’ll put them aside for a bit and look into the Interpreter State and the Thread State structures (both implemented in ./Python/pystate.c). I may be naïve, but I chose these structures because I’d like this post to be a simple basis for our understanding of actual bytecode evaluation that we’ll begin in the next few posts. Soon enough we’ll pry open Pandora boxes like frames, namespaces and code objects, and before we tackle these concepts I’d like us see the broad-picture of how the data structures that bind them together are laid out. Finally, note that this post is a tad heavier on OS terminology, I assume at least passing familiarity with it (kernel, process, thread, etc).
As you probably know, in many operating systems user-space code is executed by an abstraction called threads that run inside another abstraction called processes (this includes most Unices/Unix-likes and the decent members of the Windows family). The kernel is in charge of setting up and tearing down these processes and execution threads, as well as deciding which thread will run on which logical CPU at any given time. When a process invokes Py_Initialize another abstraction comes into play, and that is the interpreter. Any Python code that runs in a process is tied to an interpreter, you can think of the interpreter as the root of all other concepts we’ll discuss. Python’s code base supports initializing two (or more) completely separate interpreters that share little state with one another. This is rather rarely done (never in the vanilla executable), because too much subtly shared state of the interpreter core and of C extensions exists between these ‘insulated’ interpreters. Still, since support exists for it and for completeness’ sake, I’ll try anyway to write this post from the standpoint of a multi-interpreter world. Anyhow, we said all execution of code occurs in a thread (or threads), and Python’s Virtual Machine is no exception. However, Python’s Virtual Machine itself is something which supports the notion of threading, so Python has its own abstraction to represent Python threads. This abstraction’s implementation is fully reliant on the kernel’s threading mechanisms, so both the kernel and Python are aware of each Python thread and Python threads execute as separate kernel-managed threads, running in parallel with all other threads in the system. Uhm, almost.
There’s an elephant in that particular living room, and that is the Global Interpreter Lock or the GIL. Due to many reasons which we’ll cover briefly today and revisit at length some other time, many aspects of Python’s CPython implementation are not thread safe. This has some benefits, like simplifying the implementation of easy-to-screw-up pieces of code and guaranteed atomicity of many Python operations, but it also means that a mechanism must be put in place to prevent two (or more) Pythonic threads from executing in parallel, lest they corrupt each other’s data. The GIL is a process-wide lock which must be held by a thread if it wants to do anything Pythonic – effectively limiting all such work to a single thread running on a single logical CPU at a time. Threads in Python multitask cooperatively by relinquishing the GIL voluntarily so other threads can do Pythonic work; this cooperation is built-in to the evaluation loop, so ordinarily authors of Python code and some extensions don’t need to do something special to make cooperation work (from their point of view, they are preempted). Do note that while a thread doesn’t use any of Python’s APIs it can (and many threads do) run in parallel to another Pythonic thread. We will discuss the GIL again briefly later in this post and at length at a later time, but for the time being I can refer the interested readers to this excellent PyCon lecture by David Beazley for additional information about how the GIL works (how the GIL works is not the main subject of the lecture, but the explanation of how it works is very good).
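As a small illustration (using modern CPython 3.x APIs, which postdate this post's original timeframe): Python threads really are distinct kernel threads, yet only the GIL holder executes bytecode at any instant, and the interpreter periodically offers to release the GIL so threads can take turns.

```python
import sys
import threading

# How often the eval loop offers to drop the GIL (seconds);
# in modern CPython the default is typically 0.005.
print(sys.getswitchinterval())

go = threading.Event()
ids = []

def work():
    go.wait()  # keep all three kernel threads alive at the same time
    ids.append(threading.get_ident())  # list.append is atomic under the GIL

threads = [threading.Thread(target=work) for _ in range(3)]
for t in threads:
    t.start()
go.set()
for t in threads:
    t.join()

print(len(set(ids)))  # 3 distinct OS-level thread identities
```

Even though three kernel threads exist in parallel here, only the one holding the GIL evaluates Python bytecode at any given moment.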
With the concepts of a process (OS abstraction), interpreter(s) (Python abstraction) and threads (an OS abstraction and a Python abstraction) in mind, let’s go inside-out by zooming out from a single opcode outwards to the whole process. This should give us a good overview, since so far we mainly went inwards from the implementation of some object-centric opcodes to the actual implementation of how they operate on objects. Let’s look again at the disassembly of the bytecode generated for the simple statement spam = eggs - 1:
# what's 'diss'? see 'tools' under 'metablogging' above!
>>> diss("spam = eggs - 1")
  1           0 LOAD_NAME                0 (eggs)
              3 LOAD_CONST               0 (1)
              6 BINARY_SUBTRACT
              7 STORE_NAME               1 (spam)
             10 LOAD_CONST               1 (None)
             13 RETURN_VALUE
>>>
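(If you don't have my diss helper handy, the standard library's dis module produces the same kind of listing; note that exact opcode names drift between CPython versions, e.g. recent releases replace BINARY_SUBTRACT with a generic BINARY_OP.)

```python
import dis

# Compile the statement as module-level code and disassemble it; the
# listing will include the LOAD_NAME/STORE_NAME steps discussed here.
code = compile("spam = eggs - 1", "<example>", "exec")
dis.dis(code)
```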
In addition to the actual ‘do work’ opcode BINARY_SUBTRACT, we see opcodes like LOAD_NAME (eggs) and STORE_NAME (spam). It seems obvious that evaluating such opcodes requires some storage room: eggs has to be loaded from somewhere, spam has to be stored somewhere. The inner-most data structures in which evaluation occurs are the frame object and the code object, and they point to this storage room. When you’re “running” Python code, you’re actually evaluating frames (recall ceval.c: PyEval_EvalFrameEx). For now we’re happy to lump frame objects and code objects together; in reality they are rather distinct, but we’ll explore that some other time. In this code-structure-oriented post, the main thing we care about is the f_back field of the frame object (though many others exist). In frame n this field points to frame n-1, i.e., the frame that called us (the first frame that was called in any particular thread, the top frame, points to NULL).
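The f_back chain is visible from pure Python, too: CPython exposes the current frame via sys._getframe (an implementation detail, not a portable API), and each frame object carries the same f_back pointer described above. A small sketch:

```python
import sys

def outer():
    return inner()

def inner():
    # Walk from the innermost (most recently called) frame back toward
    # the top frame of this thread, whose f_back is NULL/None.
    names = []
    frame = sys._getframe()
    while frame is not None:
        names.append(frame.f_code.co_name)
        frame = frame.f_back
    return names

print(outer())  # innermost first, e.g. ['inner', 'outer', '<module>']
```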
This stack of frames is unique to every thread and is anchored to the thread-specific structure ./Include/pystate.h: PyThreadState, which includes a pointer to the currently executing frame in that thread (the most recently called frame, the bottom of the stack). PyThreadState is allocated and initialized for every Python thread in a process by _PyThreadState_Prealloc just before new thread creation is actually requested from the underlying OS (see ./Modules/_threadmodule.c: thread_PyThread_start_new_thread and >>> from _thread import start_new_thread). Threads can be created which will not be under the interpreter’s control; these threads won’t have a PyThreadState structure and must never call a Python API. This isn’t so common in a Python application but is more common when Python is embedded into another application. It is possible to ‘Pythonize’ such foreign threads that weren’t originally created by Python code in order to allow them to run Python code (PyThreadState will have to be allocated for them). APIs exist that can do such a migration so long as only one interpreter exists; it is also possible though harder to do it manually in a multi-interpreter environment. I hope to revisit these APIs and their operation in a later post, possibly one about embedding. Finally, a bit like all frames are tied together in a backward-going stack of previous-frame pointers, so are all thread states tied together in a linked list of PyThreadState *next pointers.
The list of thread states is anchored to the interpreter state structure which owns these threads. The interpreter state structure is defined at ./Include/pystate.h: PyInterpreterState, and it is created when you call Py_Initialize to initialize the Python VM in a process or Py_NewInterpreter to create a new interpreter state for multi-interpreter processes. Mostly as an exercise to sharpen your understanding, note carefully that Py_NewInterpreter does not return an interpreter state – it returns a (newly created) PyThreadState for the single automatically created thread of the newly created interpreter. There’s no sense in creating a new interpreter state without at least one thread in it, much like there’s no sense in creating a new process with no threads in it. Similarly to the list of threads anchored to its interpreter, so does the interpreter structure have a next field which forms a list by linking the interpreters to one another.
This pretty much sums up our zooming out from the resolution of a single opcode to the whole process: opcodes belong to currently evaluating code objects (currently evaluating is specified as opposed to code objects which are just lying around as data, waiting for the opportunity to be called), which belong to currently evaluating frames, which belong to Pythonic threads, which belong to interpreters. The anchor which holds the root of this structure is the static variable ./Python/pystate.c: interp_head, which points to the first interpreter state (through that all interpreters are reachable, through each of them all thread states are reachable, and so forth). The mutex head_mutex protects interp_head and the lists it points to so they won’t be corrupted by concurrent modifications from multiple threads (I want it to be clear that this lock is not the GIL, it’s just the mutex for interpreter and thread states). The macros HEAD_LOCK and HEAD_UNLOCK control this lock. interp_head is typically used when one wishes to add/remove interpreters or threads and for special purposes. That’s because accessing an interpreter or a thread through the head variable would get you an interpreter state rather than the interpreter state owning the currently running thread (just in case there’s more than one interpreter state).
A more useful variable similar to interp_head is ./Python/pystate.c: _PyThreadState_Current which points to the currently running thread state (important terms and conditions apply, see soon). This is how code typically accesses the correct interpreter state for itself: first find your own thread’s thread state, then dereference its interp field to get to your interpreter. There are a couple of functions that let you access this variable (get its current value or swap it with a new one while retaining the old one) and they require that you hold the GIL to be used. This is important, and serves as an example of CPython’s lack of thread safety (a rather simple one, others are hairier). If two threads are running and there was no GIL, to which thread would this variable point? “The thread that holds the GIL” is an easy answer, and indeed, the one that’s used. _PyThreadState_Current is set during Python’s initialization or during a new thread’s creation to the thread state structure that was just created. When a Pythonic thread is bootstrapped and starts running for the very first time it can assume two things: (a) it holds the GIL and (b) it will find a correct value in _PyThreadState_Current. As of that moment the Pythonic thread should not relinquish the GIL and let other threads run without first storing _PyThreadState_Current somewhere, and should immediately re-acquire the GIL and restore _PyThreadState_Current to its old value when it wants to resume running Pythonic code. This behaviour is what keeps _PyThreadState_Current correct for GIL-holding threads and is so common that macros exist to do the save-release/acquire-restore idioms (Py_BEGIN_ALLOW_THREADS and Py_END_ALLOW_THREADS). There’s much more to say about the GIL and additional APIs to handle it and it’s probably also interesting to contrast it with other Python implementations (Jython and IronPython are thread safe and do run Pythonic threads concurrently). But we’ll leave all that to a later post.
We now have all pieces in place, so here’s a little diagram I quickly jotted1 which shows the relation between the state structures within a single process hosting Python as described so far. We have in this example two interpreters with two threads each, you can see each of these threads points to its own call stack of frames.
Lovely, isn’t it. Anyway, something we didn’t discuss at all is why these structures are needed. I mean, what’s in them? The reason we didn’t discuss the contents so far and will only briefly mention it now is that I wanted the structure to be clear more than the features. We will discuss the roles of each of these objects as we discuss the feature that relies on that role; for example, interpreter states contain several fields dealing with imported modules of that particular interpreter, so we can talk about that when we talk about importing. That said, I wouldn’t want to leave you completely in the dark, so we’ll briefly add that in addition to managing imports they hold a bunch of pointers related to handling Unicode codecs, a field to do with dynamic linking flags and a field to do with TSC usage for profiling (see last bullet here); I didn’t look into it much.
Thread states have more fields but to me they were more easily understood. Not too surprisingly, they have fields that deal with things that relate to the execution flow of a particular thread and are of too broad a scope to fit particular frame. Take for example the fields recursion_depth, overflow and recursion_critical, which are meant to trap and raise a RuntimeError during overly deep recursions before the stack of the underlying platform is exhausted and the whole process crashes. In addition to these fields, this structure accommodates fields related to profiling and tracing, exception handling (exceptions can be thrown across frames), a general purpose per-thread dictionary for extensions to store arbitrary stuff in and counters to do with deciding when a thread ran too much and should voluntarily relinquish the GIL to let other threads run.
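These recursion fields are observable from Python code: sys.getrecursionlimit and sys.setrecursionlimit expose the ceiling that recursion_depth is checked against, and exceeding it raises the promised exception (spelled RecursionError since Python 3.5; it remains a RuntimeError subclass, so older except clauses still catch it).

```python
import sys

def recurse(n=0):
    return recurse(n + 1)

print(sys.getrecursionlimit())  # the ceiling checked against recursion_depth;
                                # 1000 by default

try:
    recurse()
except RecursionError as exc:   # a RuntimeError subclass since Python 3.5
    print(type(exc).__name__)   # RecursionError
```

The point of the check is visible here: the interpreter traps the runaway recursion cleanly instead of letting the underlying platform stack overflow and crash the whole process.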
I think this pretty much sums up what I have to say about the general layout of a Python process. I hope it was indeed simple enough to follow, I plan on getting into rough waters soon and wanted this post to sink in well with you (and me!). The next two (maybe three) posts will start chasing a real heavyweight, the frame object, which will force us to discuss namespaces and code objects. Engage.
I would like to thank Antoine Pitrou and Nick Coghlan for reviewing this article; any mistakes that slipped through are my own.
You say ‘note that theoretically threads can be created which will not be under the interpreter’s control; these threads won’t have a PyThreadState structure and must never call a Python API; this is not very common’.
I’d suggest that in embedded systems this is more common than you think. It is quite likely to be the case in an embedded system that you will have foreign threads created outside of the interpreter which will be calling into the interpreter as opposed to always having Python threads that call out.
For the main interpreter, so long as they use the PyGILState_Ensure()/PyGILState_Release() functions appropriately around the call into Python API, the thread state will be created on demand.
It gets more complicated for sub interpreters as the PyGILState_???() functions only work for main interpreter. In sub interpreters you have to create the thread state objects manually through other APIs and this can get tricky as you need to avoid having multiple thread states for same thread against same sub interpreter. To ensure that thread local storage persists across calls into Python API, you should reuse the thread state rather than destroying it after each use.
For an example of thread state usage and differences between main interpreter/sub interpreter usage, suggest you have a look at code for mod_wsgi.
You should also perhaps look more at what PyGILState_Ensure()/PyGILState_Release() do in the context of the main interpreter and how they allow foreign threads with no existing thread state to call into Python API.
Thanks for the detailed comment, Graham!
re. non-Python threads and embedded systems: This is the obvious use-case for such threads. I didn’t do a survey, but I tend to think there is far more CPython code running unembedded than otherwise. I might be wrong with that assumption, but when I said ‘not very common’, I was thinking about that.
re. PyGILState_*, in writing each article I often tiptoe around terms which I don’t want to introduce, so as not to confuse my readers (or myself, I’m new here!). This article isn’t about the GIL, it’s about pystate.c. I know the topics are related, but then again, if I were to chase every related topic I’d never publish! When I said ‘There’s much more to say about the GIL and additional APIs to handle it’, these APIs were exactly what I was talking about, but, alas, we’ll have to get there some other day indeed.
On a side-ish note, I’m honored to see the owner of mod_wsgi here! Perhaps you’d care to read a bit about Labour, a WSGI server ‘durability benchmark’ following Ian Bicking’s suggestion, I’d greatly appreciate your comment on that. Labour doesn’t support mod_wsgi (yet?), but I sorta despaired of the effort as I felt not many people are interested and I wasn’t certain I’m heading in the right direction.
The PyGILState_* functions are a bit deceptively named as the prime reasons for their existence is the creation of a thread state when none exists against the thread already and the subsequent acquiring and release of that thread state object without actually having a reference to it. The GIL is really a side issue and they could have implemented the PyGILState_Ensure() function with a prerequisite that you have already separately acquired the GIL and it would still have had a purpose. It was likely just more convenient to have it also acquire the GIL as well to avoid a separate function call.
As to durability testing of WSGI servers, there are people interested in it, including myself, but I just don’t have the time to do much these days on anything let alone that area. I imagine it is the same for others who care as well, other things just take priority.
Hmm. Yes, I understand what you’re saying, I’ll see how I can update the post accordingly.
re. durability testing, I wasn’t expecting active participation (not that I’d object :), but more some comment regarding if the effort is in the right direction. That said, I’m aware that comments take time to write, too. It can wait.
Re: your CSS question, the reason your right border is chopped is because:
1. The image + border is greater than 460px, the size of the #content div (in the markup).
2. The CSS style of your content div is overflow: hidden, which means when a block element of greater width than it appears, it will actually be “hidden” inside the content area.
So, your three options: remove overflow: hidden from #content in your CSS style, make #content width bigger relative to your sidebar, or make the image + border less than 460px to fit inside your content area.
Damn, the Internet is a cool thing. :)
Thanks for the help, your solution did the trick (and made me understand a WordPress.com setting).
About your plea for help with diagramming on Linux. I had the exactly same problem, having tried all these tools you’ve mentioned to easily create diagrams. At work I’m using Visio all the time to easily draw any diagram I have enough creativity to envision.
Eventually I broke down and bought a discounted version of Visio 2007 via work, and this is what I use now, happily (but guiltily) ever after. I run it on Windows, of course, but I guess it can also be done through something like Wine (or worst things worst, a Windows installation on top of VirtualBox)
How depressing. I too need a good Linux based diagramming tool. I can’t believe we don’t have one. Inkscape really should do the job but every time I try to learn it I give up in frustration.
Tracy, Inkscape is a wonderful piece of software, but it is not a diagramming tool! Trying to use it as such can only bring you misery. Inkscape is great for drawings and shapes, which are the atoms of diagrams, but not for connecting those.
[…] Python’s Innards: Pystate […]
diagrams: you could use the tikz LaTeX package (google “tikz diagrams” to see lots of examples). Sphinx (the Python documentation module) has an extension to make tikz diagrams.
import "upper.io/db.v3/internal/cache"
Hash returns a hash of the given struct.
Cache holds a map of volatile key -> values.
NewCache initializes a new caching space with default settings.
NewCacheWithCapacity initializes a new caching space with the given capacity.
Clear generates a new memory space, leaving the old memory unreferenced, so it can be claimed by the garbage collector.
Read attempts to retrieve a cached value as a string; if the value does not exist it returns an empty string and false.
ReadRaw attempts to retrieve a cached value as an interface{}; if the value does not exist it returns nil and false.
Write stores a value in memory. If the value already exists, it's overwritten.
HasOnPurge type is (optionally) implemented by cache objects to clean after themselves.
Hashable types must implement a method that returns a key. This key will be associated with a cached value.
String returns a Hashable that produces a hash equal to the given string.
Package cache imports 6 packages and is imported by 7 packages. Updated 2018-10-02.
Needless to say, processing of any sort of information is of utmost importance in software. Much of this “information” is stored in databases in the form of rows and tables. To process that data, developers use relatively sophisticated query and data manipulation mechanisms. Yet not all data is stored in databases. I would even argue that today, most data is not stored in databases. Much of it is also stored in places like XML files, HTML pages, e-mails, and the like. The ability to query this sort of information is currently much less developed than for databases. Furthermore, data is not useful just stored in databases or XML files. Instead, applications bring data into memory to process, and once data leaves its original place of storage, the fundamental need to handle and manipulate that data does not change, yet in current versions of .NET (as well as many, but not all, other programming languages), the ability to handle data at that point is relatively poor. It is easy to retrieve a list of customers joined with their invoice information from SQL Server, but it is not easy to use customer information in-memory in .NET and join it with the customer’s e-mails.
LINQ solves this problem.
In fact, LINQ solves this problem and many others as well. This makes LINQ one of the features at the very top of my “technologies I want today” list. Unfortunately, Microsoft has only made LINQ available as a CTP (Community Technology Preview), which means that it isn’t even in beta yet. Ultimately, the expectation is that LINQ will ship with Visual Studio “Orcas.” You can install the LINQ CTP bits on top of Visual Studio 2005, which provides a number of additional assemblies as well as new versions of the C# and VB.NET compilers. Using this constellation, you can use and compile the new LINQ syntax Visual Studio. (Note: IntelliSense and syntax coloring are not always appropriate for the new features since the Visual Studio editor is not yet aware of the new LINQ features).
A First Example
In SQL Server, queries are pretty simple. For instance, you can easily query all records from a Customer table in the following fashion:
SELECT * FROM Customer
The return value is a “result set” (which really behaves and appears very much like a table) that contains all fields from the Customer table. The overall situation is relatively simple and predictable for the compiler (or interpreter) that has to process this statement. “Customer” is always a table (or equivalent construct, such as a view, which is really also a table in terms of behavior and functionality). Inside the Customer table you’ll find rows of data, and each row is composed of a number of fields, all of which you expect to be part of the result set.
Using LINQ you can perform a similar operation right in C# or VB.NET. The main difference is that C# doesn’t deal with tables but objects, and in particular, lists of objects (be it collections or arrays or any similar construct). To start out with a simple example, I will use one of the simplest data sources LINQ can use: an array of strings. Here it is in C#:
string[] names = {"Markus", "Ellen", "Franz", "Erna" };
Or, the VB.NET equivalent:
Dim names As String() = {"Markus", "Ellen", "Franz", "Erna" }
Using the new LINQ syntax your code could query from this “data source” very similar to querying from a table in T-SQL. Here is a simple VB.NET query that retrieves all “records” from that array:
Select name From name in names
This is somewhat similar to the T-SQL statement above. The main difference is the “from x in y” syntax that you might find a little confusing at first. Let’s take a look at what really happens here. Fundamentally, you’ll retrieve data from a list of objects called “names.” This list is an array-object which is the equivalent of the Customer table in my previous T-SQL statement. The main difference is that while it is completely clear that a Customer table contains rows, it is not at all clear what objects are in collections of other objects. Therefore, you also need to specify what you expect inside that collection. In my case, I’ve stated that I want to refer to each object inside the names array as “name.” You might compare this to a for-each loop:
ForEach name As String in names ' name.xxx EndFor
You must remember to name each element you expect inside the collection to subsequently use that named reference to specify the expected result (among other things). In the above example, I stated that I want to select the entire string (called “name”) as my result set, since that is really all there is to select in this simple example.
Of course, since VB.NET is an object-oriented environment, the result set must also be a list of objects. I therefore need to assign the SELECT statement (or the result of the SELECT statement) to a variable reference:
Dim result As IEnumerable(Of String) = _ Select name From name in names
SELECT statements return a list of type IEnumerable<T>. In other words, the result set is a list typed as the generic version of IEnumerable. In my case, the elements in that generic IEnumerable list are strings, since each “name” in the SELECT statement is a string.
Of course, C# also supports LINQ natively. Consider this C# version.
IEnumerable<string> result = from name in names select name;
The main difference here is that C# always puts the “select” part as the last part of the command. This looks a bit odd at first, but I could argue that it makes more sense. For instance, a few paragraphs above where I described what the VB.NET example does, I had to start out my explanation with the “from” part. Also, it is more convenient for IntelliSense. Once you’ve typed in the “from name in names” part, IntelliSense can display a sensible list of possible “selectable” members, while the VB version can not do so by the time you’re likely to enter the select part. Ultimately, this comes down to personal preference since the functionality is exactly identical. (This statement seems to be true for the majority of features in C# and VB.NET.)
A More Useful Example
My example so far was perfectly functional, but it was also completely useless because the result set is identical to the source version. This example makes more sense.
IEnumerable<string> result = from name in names orderby name select name;
In this case my result set is ordered by the name. I can print these names in the following fashion:
foreach (string s in result) { Console.WriteLine(s); }
This prints the following result to the Output window:
Ellen Erna Franz Markus
I can, of course, also add a WHERE clause to my query.
IEnumerable<string> result = from name in names orderby name where name.StartsWith("E") select name;
And my result looks like this.
Ellen Erna
LINQ will bring practically all standard query operators (GROUP BY, SUM, JOIN, UNION,…) to the core C# and VB.NET languages.
Note: The C# flavor is LINQ is currently documented in a much more complete fashion. For this reason, I am using mostly C# examples. Nevertheless, VB.NET supports LINQ just as well as C# does. Some would even argue that VB.NET supports LINQ better than C#.
The Magic of Objects
Everything in .NET is objects, and therefore, LINQ is based exclusively on objects. This little fact turns the LINQ query language into something that is a lot more powerful than a query language that just deals with data. To understand why, I must show you a few examples.
Listing 1 shows a Customer class, which I will use for some examples, as well as a helper method that instantiates a list of customers. Once you have this list of customer objects in memory, I can query from that list like so:
List<Customer> customers = Helper.GetCustomerList(); IEnumerable<Customer> result = from c in customers orderby c.CompanyName where c.ContactName.StartsWith("A") select c;
This returns a list of customers where the contact person’s name starts with an “A.” My example also sorts the result by company name. Note that the result set is an enumerable list of Customer objects, since I selected “c”, and “c” is the name I assigned to each customer in the list. Of course, the result could have also been something entirely different, such as a single property of that Customer object.
IEnumerable<string> result = from c in customers orderby c.CompanyName where c.ContactName.StartsWith("A") select c.Country;
In this example, the result is a list of country names (strings) for the same customers.
Note that not just the SELECT clause changed, but the declaration of the result type as well. In the previous example I used IEnumerable<Customer>, while the current example results in IEnumerable<string>. The result type is dictated by the SELECT part of the command and can not be altered in any other way. Therefore, one could argue that it is redundant and developers should not have to declare the type of the result variable. As it turns out, Anders Hejlsberg (the “father” of C#) agrees with that viewpoint and has added a new feature to C# 3.0 known as “type inference.” Using this feature, I could also define the last example in the following fashion:
var result = from c in customers orderby c.CompanyName where c.ContactName.StartsWith("A") select c.Country;
The declaration of “result” as “var” simply indicates to the compiler that it is to infer the real type based on the expression. The compiler can analyze the SELECT statement and therefore figure out that “var” really needs to be “IEnumerable<string>” (in this example). Don’t confuse “var” with “variant” in which scripting languages use. Instead, “var” is still a strongly typed statement. You just leave it up to the compiler to figure out what the type should be.
You can also use type inference in other instances. Look at these perfectly fine and strongly typed C# 3.0 statements:
var name = "Markus"; var frm = new System.Windows.Forms.Form();
But, I digress. There still are many unexplored things you can do with objects as data sources. In the examples so far, I’ve shown you how to perform simple queries that use features available on .NET standard types such as strings. Selecting names starting with “A” is the equivalent of the following SQL Server statement:
SELECT * FROM Customers WHERE ContactName like 'A%'
SQL Server knows a number of standard types (such as strings) and can thus apply certain operations, such as an “=” or “like” operator. In LINQ, on the other hand, data sources could be any type of objects, and the features and abilities of those objects are only limited by your imagination. The Customer class I’ve used in my examples has such custom functionality. Here is another example. You could use the IsOddCustomer() method, which tells you whether or not the customer number is odd (or even). You could use this method in LINQ queries:
var result = from cust in customers where cust.IsOddCustomer() select cust;
This has significant implications since it means that you have complete control over the behavior of the WHERE clause (or any other part of the statement for that matter). For instance, it is possible to include a significant amount of business logic in the code called by the WHERE clause, which would not be feasible in the same way in SQL Server. For instance, the method called could in turn call Web services or invoke other objects. (Note that it is fine to do this in terms of architecture, because this code is likely to run in the middle tier).
Another aspect of using objects instead of data in a query language is that the result set can be any type of objects. In the following example, I’ll assume an array of objects of type ShortCustomer. This is a list of Customer objects where each object has two properties: Country and PrimaryKey. I can use a LINQ query to SELECT all primary keys for customers from a certain country, but instead of returning that primary key directly, I can use it to instantiate new Customer objects.
var result = from c in customerList where c.Country == "USA" select new Customer(c.PrimaryKey);
This means that for each selected primary key, the code must instantiate a new Customer object. (Presumably, the Customer class loads the complete customer information into the object when launched this way, but this is completely up to that class). The result of this query is a list of Customer objects. This is interesting, because in essence, the result set is a list of objects that was in no way contained in the original query source in any way other than the objects being identified by their primary key.
Now I’ll spin this example a bit further as well and do this:
var result = from c in customerList where c.Country == "USA" select new CustomerEditForm(c.PrimaryKey);
This returns a list of edit forms for each customer from the US. The only problem at this point is that those forms are not displayed yet, so we still need to make them visible.
foreach (Form frm in result) frm.Show();
Of course, this query may end up opening a very large number of windows, so you probably don’t want to use it in real-life applications. However, this example demonstrates how you could apply the LINQ query language to anything .NET has to offer and not just data. Of course, this also works the other way around. For instance, you could query all controls on a form that have certain content (or no content, or…) and then join the result with data from a different data source and union it together with… well, you get the idea.
Anonymous Types and Object Initialization
When you run a query, you often expect a result set that contains a limited selection of information contained in the data source. Consider this SQL Server example:
SELECT CompanyName, ContactName FROM Customer
This returns two fields from the much larger list of fields in the Customer table. Of course, you might want to do this in C# and VB.NET. However, since every result must be an object (or a list of objects), this requires some object type with exactly these two properties. Chances are that you don’t have such a class, and creating such a class for each query result would be cumbersome and seriously take away from the power of the query language. Therefore, new features are needed and C# 3.0 will offer them! Two in particular are important for this scenario: object initialization and anonymous types.
Object initialization deals (surprise!) with the initialization of public members (properties and fields). Consider this conventional example:
Customer cust = new Customer(); cust.CompanyName = "EPS Software Corp."; cust.ContactName = "Markus Egger";
Using object initialization I could also write this example in a single source line (single statement):
Customer cust = new Customer() { CompanyName = "EPS Software Corp.", ContactName = "Markus Egger"};
Note: Due to column width constraints in the magazine, this statement ends up as three lines, but it is only a single line of source code as far as the compiler is concerned.
This feature is particularly useful in LINQ queries:
var result = from n in names select new Customer() {ContactName = n.FirstName + " " + n.FirstName};
The second important C# 3.0 feature, anonymous types, allows you to create a new object type simply based on necessity and requirements derived from the type’s usage. Here’s an example:
new {CompanyName = "EPS Software Corp.", ContactName = "Markus Egger"};
While similar to my previous example, the new operator does not specify the name of the class that is to be instantiated. In fact, you don’t need to instantiate a defined class. Instead, the compiler realizes that you need a type with two properties based on the fact that the code attempts to initialize them. Therefore, the compiler creates a class behind the scenes that has the required properties (and fields) and uses it on the spot. Note that the only way to use such a type is through type inference (see above).
var customer = new { CompanyName = "EPS Software Corp.", ContactName = "Markus Egger"};
You can use object initialization and anonymous types features in queries. For instance, you can query from a list of Customer objects (Listing 1) and return brand new objects with two properties.
var result = from cust in customers select new { Name = cust.CompanyName, Contact = cust.ContactName};
Note that this example also would not be possible without type inference since there would be no way to define the type of the “result” variable, since the name of that type is unknown.
Object Syntax
Purists may have noted that the LINQ syntax is similar to T-SQL, but it is not entirely C#-like. In other words: almost everything else in C# is expressed as objects, while LINQ introduces the longest C# command sequence ever. As it turns out, the SELECT syntax is only window dressing. Behind the scenes, the compiler actually turns every SELECT statement into pure object syntax. Consider this example:
var result = from c in customers where c.ContactName == "Egger" select c.Country;
You could also write this statement like so:
var result = customers.Where( c => c.ContactName == "Egger" ) .Select( c => c.Country );
This will be normal C# 3.0 syntax. However, not a lot of people are using C# 3.0 yet so I need to explain a few details. I’ve already discussed the new “var” keyword used by type inference (see above). I need to explain a new feature of C# 3.0 called lambda expression which you see as the passed parameter. Lambda expressions are an evolution of C# 2.0’s anonymous methods. Using lambda expressions you can pass code instead of data as method parameters. Basically, the Where() and Select() method accept a delegate as their parameters, and the lambda expression provides the code for the delegate to execute.
The expression itself appears a bit unusual at first but is easy to understand. It starts out with input parameters (“c” in this case) followed by the “=>” operator, followed by the return value (or alternatively, a complete method body). You could also express the lambda expression c => c.ContactName == "Egger" as a complete method.
public var MyMethod(var c) { return (c.ContactName == "Egger"); }
Note that this example uses type inference in the lambda expression to determine the parameter as well as the return type. You could also explicitly type parameters for lambda expressions.
(Customer c) => c.ContactName == "Egger"
Lambda expressions are very powerful and can do everything delegates and anonymous methods can do, plus a few extra tricks I will show you below. Unfortunately, a complete exploration of features provided by lambda expressions is beyond the scope of this article.
The remaining mystery is the puzzling appearance of the Where() and Select() methods. For the object-syntax version to work, every object in .NET would have to have these methods. And in fact, with LINQ, they do! The reason is a mechanism that is also new in C# 3.0 called extension methods. These are special static methods defined on a static class. Whenever such as class is in scope (either because it is in the current namespace or by way of a using statement), then the extension method gets added to all objects that are currently in scope who do not already have a method of identical signature.
You should only use this somewhat radical feature when you really need to. However, in some scenarios, it is extremely useful. As an example, consider the string type and the possible need to add new methods to that class. In many scenarios you can do this through subclassing, but if you want to add a method to all strings, you cannot do that with subclassing. Using extension methods, this isn’t technically possible either, but through a little compiler magic, you can at least create the illusion of an added method. Consider the following example which shows a “ToXml()” extension method:
public static class EM { public static string ToXml( this object extendedObject) { return "<value>" + extendedObject.ToString() + "</value>"; } }
Note that this is a method that is only different from standard static methods in that it uses the “this” modifier with the first parameter. The “this” modifier indicates that the first parameter is a reference to the object that is extended (extension methods must always have at least one parameter, which is a reference to the object it extends).
With your extension method created you can use it on all objects as long as the “EM” class is in scope (either because it is in the current namespace, or because it is in scope due to a USING statement). Therefore, the following statement is now valid:
string name = "Markus"; string xmlName = name.ToXml();
Note that the parameter does not appear in this version. Instead, the parameter is the object the method is seemingly used on. Behind the scenes, the compiler changes this example to standard object notation.
string name = "Markus"; string xmlName = EM.ToXml(name);
Extension methods create the illusion of added methods, and LINQ uses this ability extensively to add methods such as Select(), Where(), and Join(). Note that you can only add extension methods to objects that do not already have methods of identical name and signature.
Extension methods have a number of side effects that turn out to be quite useful. For one, developers can use individual pieces of functionality LINQ provides, without having to use other, possibly unwanted LINQ functionality. For instance, this example takes the contents of an array and returns them grouped by string length; a feature that is not available on arrays without LINQ.
string[] names = {"Markus", "Ellen", "Franz", "Erna" }; var result = names.GroupBy(s => s.Length);
Considering the different features available through LINQ (sorting, grouping, calculations, joins, unions,…), this certainly is a rather interesting side-effect.
Another side effect of extension methods is that extension methods are only added on objects that do not already have a method of that name and signature. This means that developers can purposely implement such methods to explicitly replace how certain features of LINQ work. For instance, if you do not like how LINQ calculates averages using the Average() function (or “average” keyword), then you can implement your own Average() method which you can then use on your object. (Standard LINQ functionality keeps being used on all other objects).
DLINQ
The LINQ functionality I have introduced so far introduces query features to an object-oriented environment. Nevertheless, you’d find it useful if you could use the same functionality and feature-set seamlessly against “real” databases. DLINQ provides this type of functionality. DLINQ is a set of special classes provided in addition to regular LINQ features. DLINQ will provide object-oriented representations of database objects such as tables and fields. Listing 2 shows a DLINQ representation of the Customers table in the Northwind SQL Server demo database. Note that the class itself is just a standard C# class, but the attributes are DLINQ attributes. You can use them to map relational data onto objects and to express information that is not available in C# (such as a field being of type “nchar(5)”).
Once database objects are represented in object-notation, you can use LINQ to query data directly from the database. To do so, you first open a connection to the database. In DLINQ you’ll do this through a DataContext object, which is conceptually very similar to a database connection. Once the context is established, the table-mapping class has to be instantiated. DLINQ does this through a Table<> generic that is typed as the special mapping class. With these objects in place you can use standard LINQ syntax. The following example queries all customers from the SQL Server Northwind database whose company name starts with an “A”.
DataContext context = new DataContext("Initial Catalog=Northwind;” + "Integrated Security=sspi"); Table<CustomerTable> customers = context.GetTable<CustomerTable>(); var result = from c in customers where c.CompanyName.StartsWith("A") select c;
Note that the syntax used in the actual query is standard C# LINQ syntax. SQL Server does not support methods on field names, nor does it support “StartsWith()” in any way. Nevertheless, this works perfectly fine. The WHERE clause in this statement is internally handled as a lambda expression by the C# compiler (see above). One of the advanced features of lambda expressions is the ability to compile them either as IL (Intermediate Language) that can be executed on the CLR, or to compile them as a pure data representation of itself known as an expression tree. Whether the compiler creates IL or expression trees depends on your exact use of the lambda expression. In all previous demos I’ve used in this article, the compiler would have compiled the lambda as IL. In this example, the compiler will create an expression tree.
Expression trees are language neutral since they are only data. This allows DLINQ to translate the expression into something the database can understand and thus, you can execute the example above on SQL Server.
To see how expression trees are represented in memory, you could take a lambda expression and assign it to an expression tree delegate, which forces the compiler to create an expression tree instead of IL code. You can then explore individual pieces of data within the expression tree.
Expression<Func<int,bool>> expr = para1 => para1 < 10; BinaryExpression body = (BinaryExpression)expr.Body; ParameterExpression left = (ParameterExpression)body.Left; ConstantExpression right = (ConstantExpression)body.Right; MessageBox.Show("Expression: " + left.Name + " " + body.NodeType + " " + right.Value);
The output of this code snippet is this:
Para1 LT 10
Some readers might be wondering whether that can really work with all possible expressions. Actually, yes. In a pure .NET environment, the list of possible expressions is unlimited, and some such expressions can be so complex that translations would be impossible. However, when running queries against a database you have a limited and well-defined list of possible expressions. After all, since you can only map SQL Server character fields to .NET strings, that limits the list of possible expressions to the features available for strings and character fields. The only exception is the ability to define custom field types in SQL Server 2005, but in that case, you’d create the custom fields using a .NET language, so no translation is needed.
XLINQ
What DLINQ is to data, XLINQ is to XML, and more. XLINQ provides a handful of extra objects that provide the ability to run standard LINQ queries against XML data sources. Consider the following XML string:
< customers > <customer> <companyName>Alfred’s Futterkiste</companyName> <contactName>Maria Anders</contactName> </customer> <customer> ... More customer records ... </customer> </ customers >
You could load this XML string into an XElement object which you can then use as a LINQ-query data source just like any other object-based data source.
XElement names = XElement.Parse(xmlString); var result = from n in names.Descendants("customer") where n.Descendants("companyName") .Value.StartsWith("A") select n.Descendants("contactName").Value;
This query returns a list of strings that contains the names of all contacts for customers whose company name starts with “A”. As you can see, XLINQ provides objects that represent alternate ways of parsing XML. You can also use these objects outside of LINQ.
XLINQ also provides another interesting feature: The ability to create XML. This is also done using XElement and other XLINQ objects. Consider this XML snippet:
< root > <sub>Test</sub> </ root >
You can create this snippet using XLINQ objects.
XElement xml = new XElement("root", new XElement("sub","Test"); Console.Write(xml.ToString());
This example creates a new XElement instance by passing the name of the element as the constructor parameter. The content of that element is another XElement, which is also passed to the constructor of the first element. The second element is also instantiated with an element name. The second parameter provides the content, which in this case is the string “Test”.
You can use this in queries as well. The next example queries all customers from the USA and returns it as an XElement structure, which in turn can be converted to a string.
XElement customerXml = new XElement("Customers", from c in customers where c.Country == "USA" select new XElement("Customer", new XAttribute("ID",c.CustomerId), new XElement("Company",c.CompanyName)));
The result of this example is this:
< Customers > <Customer ID="ALFKI"> <Company>Alfred’s Futterkiste</Company> </Customer> <Customer ID="XXX"> ... More customer records ... </Customer> </ Customers >
The VB.NET version of LINQ takes this idea even a step further. VB.NET incorporates XML directly into the core language, allowing developers to create the same result in the following fashion:
Dim x = _ <Customers> Select <Customer ID=(c.CustomerId)> <Company>(c.CompanyName</Company> </Customer> _ From c In customers _ Where c.Country = "USA" _ </Customers>
The support of XML natively in the VB.NET language has been the source of numerous discussions. Some feel that it makes the language “messy”. Personally, I like that developers have the choice between the purist, object-oriented approach of C#, and the straightforward, productivity-driven approach of VB.NET.
Conclusion
LINQ is powerful. Much more so than can be expressed in a single article. Unfortunately, LINQ also isn’t available yet. It is one of those technologies that is so immediately applicable that it is hard to wait for the release version. That release version should also include other functionality such as insert, update, and delete commands. I look forward to all of these things, just as I am looking forward to more information being available for the VB.NET version of LINQ, which appears to be at least as promising as the C# version. | https://www.codemag.com/article/0603021 | CC-MAIN-2020-24 | refinedweb | 5,063 | 64.1 |
Added patch
ufraw fails to build with clang-3.4 due to additional diagnostics. It indicated an error in ufraw.h which is fixed by proper namespacing:
--- ufraw.h.orig 2014-01-11 11:04:08.000000000 -0800
+++ ufraw.h 2014-01-11 11:04:54.000000000 -0800
@@ -41,6 +41,10 @@
/ An impossible value for conf float values /
#define NULLF -10000.0
+#ifdef cplusplus
+extern "C" {
+#endif // cplusplus
+
/ Options, like auto-adjust buttons can be in 3 states. Enabled and disabled
* are obvious. Apply means that the option was selected and some function
* has to act accourdingly, before changing to one of the first two states /
@@ -78,10 +82,6 @@ extern UFName ufRawImage;
extern UFName ufRawResources;
extern UFName ufCommandLine;
-#ifdef cplusplus
-extern "C" {
-#endif // cplusplus
-
UFObject ufraw_image_new();
#ifdef HAVE_LENSFUN
UFObject ufraw_lensfun_new();
Jeremy Huddleston Sequoia
2014-01-11
Added patch
Niels Kristian Bech Jensen
2014-01-12
Thanks for the report and the patch. I have fixed this in the cvs repository.
Regards,
Niels Kristian
Niels Kristian Bech Jensen
2014-01-12 | http://sourceforge.net/p/ufraw/bugs/365/ | CC-MAIN-2016-07 | refinedweb | 172 | 68.67 |
Didier, See below... legajid wrote: > Hi, > My purpose is to get data from the keyboard and return a maybe value, > saying "nothing" if the data entered is invalid. > I get a message "could'nt match expected IO Integer against inferred > type maybe integer" on line x<- getdata > Why should x be Maybe ? getdatanum is, but getdata is not. > > Is it possible to write only one function rather than 2 distinct ones ? > > Thanks for your help, > Didier. > > > Here's my code > > getdata :: IO Integer > getdata=do x<-getLine > let xn=read x ::Integer > return xn > > getdatanum :: Maybe Integer > getdatanum = do > x <- getdata > {- > if x < 5 > then do > return (Just x) > else do > return Nothing > -} > return (Just x) The type of getdatanum should be IO (Maybe Integer). Maybe and IO are both monads, and that's why the error messages can be a bit confusing sometimes. Yes, you certainly can write this as one function. Steve | http://www.haskell.org/pipermail/beginners/2009-December/003087.html | CC-MAIN-2014-10 | refinedweb | 154 | 63.29 |
#include <nsFileStreams.h>
Definition at line 77 of file nsFileStreams.h.
Definition at line 90 of file nsFileStreams.h.
    : nsFileStream()
{
    mLineBuffer = nsnull;
    mBehaviorFlags = 0;
}
Definition at line 95 of file nsFileStreams.h.
Reimplemented in nsSafeFileOutputStream.
Definition at line 121 of file nsFileStreams.cpp.
{
    nsresult rv = NS_OK;
    if (mFD) {
        if (mCloseFD)
            if (PR_Close(mFD) == PR_FAILURE)
                rv = NS_BASE_STREAM_OSERROR;
        mFD = nsnull;
    }
    return rv;
}
Close the stream.
Definition at line 106 of file nsFileStreams.cpp.
{
    NS_ENSURE_TRUE(mFD == nsnull, NS_ERROR_ALREADY_INITIALIZED);

    //
    // this file stream is dependent on its parent to keep the
    // file descriptor valid. an owning reference to the parent
    // prevents the file descriptor from going away prematurely.
    //
    mFD = fd;
    mCloseFD = PR_FALSE;
    mParent = parent;
    return NS_OK;
}
Internal, called to open a file.
Parameters are the same as their Init() analogues.
Definition at line 224 of file nsFileStreams.cpp.
{
    nsresult rv = NS_OK;

    // If the previous file is open, close it
    if (mFD) {
        rv = Close();
        if (NS_FAILED(rv)) return rv;
    }

    // Open the file
    nsCOMPtr<nsILocalFile> localFile = do_QueryInterface(aFile, &rv);
    if (NS_FAILED(rv)) return rv;

    if (aIOFlags == -1)
        aIOFlags = PR_RDONLY;
    if (aPerm == -1)
        aPerm = 0;

    PRFileDesc* fd;
    rv = localFile->OpenNSPRFileDesc(aIOFlags, aPerm, &fd);
    if (NS_FAILED(rv)) return rv;

    mFD = fd;

    if (mBehaviorFlags & DELETE_ON_CLOSE) {
        // POSIX compatible filesystems allow a file to be unlinked while a
        // file descriptor is still referencing the file. since we've already
        // opened the file descriptor, we'll try to remove the file. if that
        // fails, then we'll just remember the nsIFile and remove it after we
        // close the file descriptor.
        rv = aFile->Remove(PR_FALSE);
        if (NS_FAILED(rv) && !(mBehaviorFlags & REOPEN_ON_REWIND)) {
            // If REOPEN_ON_REWIND is not happenin', we haven't saved the file yet
            mFile = aFile;
        }
    }
    return NS_OK;
}
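The comment inside `Open()` relies on a POSIX behavior worth seeing in isolation: removing a file while a stream still references it leaves the data readable through that stream until it is closed. The sketch below is not Mozilla code; `readAfterUnlink` and the file name are hypothetical, and the behavior shown is POSIX-specific.

```cpp
#include <cstdio>
#include <string>

// Demonstrates the POSIX behavior Open() relies on for DELETE_ON_CLOSE:
// removing a file while a stream still references it leaves the data
// readable through that stream; storage is reclaimed only on close.
static std::string readAfterUnlink(const char* path)
{
    // Create the file with known content.
    FILE* out = std::fopen(path, "w");
    if (!out)
        return "";
    std::fputs("still here", out);
    std::fclose(out);

    // Reopen, then remove the directory entry while the stream is open.
    FILE* in = std::fopen(path, "r");
    if (!in)
        return "";
    std::remove(path); // entry gone; data remains reachable via 'in'

    char buf[32] = {0};
    std::fgets(buf, sizeof buf, in);
    std::fclose(in); // storage actually reclaimed here
    return buf;
}
```

When the unlink fails (e.g. on filesystems without this semantic), the class instead remembers the `nsIFile` in `mFile` and removes it after closing, as the code above shows.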
Read data from the stream.
Read a single line from the stream, where a line is a possibly zero length sequence of 8bit chars terminated by a CR, LF, CRLF, LFCR, or eof.
The line terminator is not returned.).
seek
This method moves the stream offset of the steam implementing this interface.
Definition at line 367 of file nsFileStreams.cpp.
{ PR_FREEIF(mLineBuffer); // this invalidates the line buffer if (!mFD) { if (mBehaviorFlags & REOPEN_ON_REWIND) { nsresult rv = Reopen(); if (NS_FAILED(rv)) { return rv; } } else { return NS_BASE_STREAM_CLOSED; } } return nsFileStream::Seek(aWhence, aOffset); }
setEOF
This method truncates the stream at the current offset.
tell
This method reports the current offset, in bytes, from the start of the stream.
If this is set, the file will close automatically when the end of the file is reached.
Definition at line 73 of file nsIFileStreams.idl..
Definition at line 67 of file nsIFileStreams.idl.
Flags describing our behavior.
See the IDL file for possible values.
Definition at line 124 of file nsFileStreams.h.
Definition at line 72 of file nsFileStreams.h.
Definition at line 69 of file nsFileStreams.h.
The file being opened.
Only stored when DELETE_ON_CLOSE or REOPEN_ON_REWIND are true.
Definition at line 110 of file nsFileStreams.h.
The IO flags passed to Init() for the file open.
Only set for REOPEN_ON_REWIND.
Definition at line 115 of file nsFileStreams.h.
Definition at line 104 of file nsFileStreams.h.
Definition at line 70 of file nsFileStreams.h.
The permissions passed to Init() for the file open.
Only set for REOPEN_ON_REWIND.
Definition at line 120 of file nsFileStreams.h.
Definition at line 62 of file nsISeekableStream.idl.
Definition at line 68 of file nsISeekableStream.idl.
Definition at line 56 of file nsISeekableStream.idl.
If this is set, the file will be reopened whenever Seek(0) occurs.
If the file is already open and the seek occurs, it will happen naturally. (The file will only be reopened if it is closed for some reason.)
Definition at line 80 of file nsIFileStreams.idl. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/classns_file_input_stream.html | CC-MAIN-2017-51 | refinedweb | 607 | 68.67 |
Via this blog we have discussed the fundamentals of Exchange Autodiscover, and also issues around the Set-AutodiscoverVirtualDirectory cmdlet.
At this point the message should be out there with regards to how Outlook functions internally and externally to locate Autodiscover and the difference that having the workstation domain joined makes. Lync on the other hand is a different beastie!
Both the Outlook Client and the Lync client want to get to the Exchange Autodiscover endpoint, but they differ in how to get to Sesame Street. **
Same But Different
At one of my recent engagements the customer experienced a situation around Lync 2010 and Exchange 2010 integration. Exchange was successfully upgraded to Exchange 2010, and OCS was still in use. When piloting Lync 2010 and the Lync 2010 client they noted errors in the Lync client. There were a couple of reasons for this. The required configuration on the load balancer was not in place, and the device’s firmware was not at the required build level.
When we investigated what Lync and Exchange Autodiscover were doing, we noted that Lync was not locating the Exchange Autodiscover endpoint. Hmm. That’s a bit strange, innit? Outlook was running perfectly, and all the domain joined clients were always able to located Autodiscover by querying for the SCP. The Lync client on the other hand does not leverage SCP when locating Exchange Autodiscover.
Dave Howe’s whitepaper Understanding and Troubleshooting Microsoft Exchange Server Integration discusses this in more detail and is a great read! The one line that distils the important message is:
Unlike Outlook, which uses an SCP object to locate the Autodiscover URL, UC clients and devices will only use the DNS-based discovery method.
There is also a flow diagram in the whitepaper showing the DNS records used.
Note that nowhere in Dave’s article does he change or view the properties of the Autodiscover virtual directory. The same is also true in Prerequisites for Integrating Microsoft Lync Server 2013 and Microsoft Exchange Server 2013.
There are some differences between Exchange 2007 and 2010 with regards to how the requests get serviced. Exchange 2007 only does POX (Plain Old Xml) whereas newer Exchange does SOAP (Simple Object Access Protocol) in addition. Lync can leverage SOAP, Outlook kicks it old School with POX.
Letting Lync Play Nicely With Exchange Autodiscover
The customer above had deployed Exchange, but had not created any internal DNS records for Autodiscover.domain.com. Technically this was not needed for their Exchange + Outlook design, as they have an environment with HA load balancers and multiple CAS servers behind each load balancer. Their Autodiscover namespace had been set as the load balancer FQDN. As such the FQDN Autodoscover.domain.com was not on any of the Exchange CAS Certificates. And as mentioned in the busting Autodiscover myth post on Set-AutodiscoverVirtualDirectory their Autodiscover URI was previously configured by running:
Set-ClientAccessServer –AutoDiscoverServiceInternalUri “”
In order to change this they:
- Request and install new certificates that included the Autodiscover.domain.com namespace
- Update the service bindings on Exchange to use the new certificate
- Update the configuration on the load balancers
- Create internal DNS entries for the Autodiscover.domain.com namespace
- Test
- Update build documentation
- Update DR documentation
Cheers,
Rhoderick
** - That 8 foot tall yellow bird still freaks me out!!
>>>
HI
Great article.
Where multiple sip domains and associated smtp domains exist does this pose a challenge. For instance exchange 2010 and lync 2010 may support multiple sip and smtp domains like @a.com and @b.com. In this example do you need multiple autodiscover a records for each lync sip domain and multiople virtual directories on exchange along with the other items you mention like certifcate entries etc.
That's my understanding. There is an option to use DNS SRV records, but there is currently a known issue with this for the Lync 2013 client
office.microsoft.com/…/lync-2013-known-issues-HA102919641.aspx
I haven't check to see a fix for that was checked into the last update bundle.
Cheers,
Rhoderick
Hi Rhoderick
Thanks for the rapid reply.
I think the EMS powershell set command to set the url and virtual diorectories on exchange only allow you to set one external and internal virtual directory etc – thats my understanding but i could be wrong.
When you mention the alternative using SRV records are you proposing this method to reduce the complexity over using the autodiscover A record method. For instance for each sip domain you have an SRV record in the case of a.com the SRV record _autodiscover._tcp.a.com which has a target of mail.a.com and in the case of b.com the SRV record _autodiscover._tcp.b.com which has a target of mail.a.com. This means that there is an SRV record per domain but it points to the same target mail.a.com thereby reducing the number of virtual directories and SAN names in certificates etc. I am not sure if the SRV standard supports targets outside of its zone but certainly Windows DNS manager allows targets to be set outside of the SRV zone so it may be possible and supported?
Many Thanks
That’s correct – in Exchange we set a single URL onto the various directories. Using ADSI edit to fnangle and add extral URLs onto them is not supported.
For Exchange we can do this with SRV redirect, but this requires an update to the Lync 2013 client. See link above.
We can target a machine out of the zone, and it would be up to the app to decide if that is OK. For example my fried Ed wrote this post here:
social.technet.microsoft.com/…/6818.exchange-2010-multi-tenant-autodiscover-service.aspx
In this example, the Autodiscover service does the following when the client tries to contact the Autodiscover service:
1. Autodiscover posts to
testorg1.org/…/Autodiscover.xml. This fails.
2. Autodiscover posts to
autodiscover.testorg1.org/…/Autodiscover.xml. This fails.
3. Autodiscover posts to
autodiscover.testorg1.org/…/Autodiscover.xml This fails.
4. Autodiscover performs the following redirect check using the looking for SRV record:
5. Autodiscover uses DNS SRV lookup for _autodiscover._tcp.testorg1.com, and then "mail.contoso.com" is returned.
6. Outlook asks permission from the user to continue with Autodiscover to post to
mail.contoso.com/…/autodiscover.xml.
7. Autodiscover’s POST request is successfully posted to
mail.contoso.com/…/autodiscover.xml.
Cheers,
Rhoderick
Hi Rhoderick,
Thanks for the Excellent Post!!!
Based on the above customer scenario, Per you description post you added the Autodiscover.domain.com DNS entry , did the AutodiscoverInernalURI which was set to use the Loadbalancer FQDN changed to the autodiscover.domain.com, if so then shall we point the Autodiscover.domain.com DNS entry to the same Loadbalaner IP and when we do this do we achieve the same experience as it was before for the client connection to get loadbalanced as the SCP is getting updated to use the autodiscover.domain.com and this eventually points the LB and the client experience is same and with this change we also full fill the Lync requirement to work better with Exchange.
Review and correct me if the above said configurations are correct or any changes needed for my better understanding.
Hi Shakthi
In my customer’s case they were using a different namespace for Autodiscover in each of the many AD sites. For example:
NA-Mail.contoso.com
EUR-Mail.contoso.com
APAC-Mail.contoso.com
All of those names were on the relevant certificate. Since Autodiscover.contoso.com was not on the cert, they had to re-issue the cert with that additional name and enable that on the LB device.
As to pointing to the same LB VIP or to use a separate one that is going to depend on the LB that you have. Are there any issues/challenges with doing that. I don’t know, that’s up to the LB.
We continued to have the same client experience, as domain joined Outlook clients on the internal network look for SCP first. Only if the SCP lookups fail will they drop to DNS.
Cheers,
Rhoderick
Hi Rhoderick,
Is there a technical reason why Lync cannot use SCP? It would be good to have the option at least, in the case where the DNS zone for the primary email is not something easy to update, and the customer domain is all internal with domain joined devices.
Cheers,
David
Hi David,
I think about the simplest denominator — simple phones and other UC devices. They are not domain joined and resort to using DNS to locate AutoD.
If customer has internal only DNS zone, that would be a challenge for internet located devices. What sort of situations do you encounter where you are unable to update the zone for the primary SMTP namespace? That sounds interesting – anything you can share on that?
Cheers,
Rhoderick
Thanks for the update!!!
I have a small query i have multiple SCP records in my environment and have set autodiscover internal uri to all them as and also have DNS entry created for autodiscover.domain.com which points to a LB device and has all the CAS servers added
to the LB and set the site scope accordingly and mine is a Hybrid Exchange environment and with respect to onpremise environment the client selects one SCP and makes the connection without issues but for the outlook client with a cloud user who are domain
joined I can see multiple SCP lookup performed and failed ( as it would) with each of the available SCP in the site before it makes appropriate connection to cloud and my concern is why there are many scp lookup performed can’t it pickup just one SCP and once
its get to know it fails and fall back to the DNS and then do the rest of the redirection . I need to know is this the default behavior of the domain joined cloud user outlook. I do have a similar requirement for Lync like the above customer and also need
to know will these SCP failures cause any issues with Lync EWS or its just the DNS entry which Lync requires to talk to Exchange via EWS which I already made available for my environment and pointed to the LB which contains the CAS servers. Kindly clarify
on the same
is there a reason why lync fails to access DNS after DNS is well configured?
ShakthiRavi — that is a great question and one that I’m digging into for other reasons as well. I do all of this blogging etc. in my own time so I never get to do as much as I really want to 🙁
Cheers,
Rhoderick
Ronald – is this referring to an A record or SRV please?
Cheers,
Rhoderick
Hi Rohderick,
well, you are correct. The SCP problem with Lync is that it doesn’t make sense for Lync querying SCP, if e.g. Exchange is not installed. So DNS is enough and DNS is also the choice for Lync finding all required resources.
I would be happy if you could also check my blog article here, Thanks so much
Thomas
Hi Thomas,
I did take a peek – the AutoD URLS are present in the Exchange 2013 schema, and are not listed on Exchange 2013 TechNet page. Your post has that reversed.
I’d also suggest removing the internal and external URLs from the set-autodiscovervirtualdirectory command. Setting this to -ExternalURL ‘‘
-InternalURL ‘‘ is also a tad confusing and we do not want customers to confuse the AutoD namespace with the EWS namespace.
Cheers,
Rhoderick
Rhoderick,
could you please comment if there are still issues with use of Exchange autodiscover DNS SRV records for Lync (client or server)?
Thanks,
Markus
Just as a clarification to my previous post: I’m referring to Exchange 2013 and Lync 2013 (server and client) with all available CUs/updates.
Hi Markus,
Are you past that build?
Cheers,
Rhoderick
Thanks, Rhoderick, for he quick pointer to this KB article. This shall help us to verify client installations.
Regards,
Markus
Rhoderick, you said, "What sort of situations do you encounter where you are unable to update the zone for the primary SMTP namespace? That sounds interesting – anything you can share on that?"
I work for a subsidiary of a very large corporation, where we all have the same email domain, user@domain.com, and the main corporate office manages the inbound/outbound email flow of 20 or so separate Forests and Exchange environments using AD FS GalSync and
email aliases like user@child.forest.domain.com. They also control the domain.com DNS namespace. We set our SIP address to the email address for simplicity. So when Lync (or non-domain joined Outlook clients) try to find the autodiscover they go to autdiscover.domain.com,
when they really need to go to autodiscover.forest.domain.com, or the dns namespace of whatever the child is. So the end result is those clients can never find the right autodiscover FQDN using DNS or srv records because they use the email address to look
that up, instead of looking at the dns namespace the client host is hitting.
MO
Hi Rhoderick,
For Exchange we can do this with SRV redirect, but this requires an update to the Lync 2013 client. See link above.
Where can I find this update? | https://blogs.technet.microsoft.com/rmilne/2013/09/11/exchange-autodiscover-lync/ | CC-MAIN-2018-05 | refinedweb | 2,238 | 62.78 |
Pattern Summaries: Cache Management
This article is part of an ongoing series in which I summarize patterns from my Patterns in Java article is the last of a group of articles that focus on patterns related to the low-level structure of the classes in an application. This article discusses a pattern called Cache Management, which comes from Volume 1 of Patterns in Java.
Cache ManagementSuppose you are writing a program that allows people to fetch information about products in a catalog. Fetching all of a product's information can take a few seconds because it may have to be gathered from multiple sources. Keeping a product's information in the program's memory allows the next request for the product's information to be satisfied more quickly, since it is not necessary to spend the time to gather the information.
Keeping information in memory that takes a relatively long time to fetch into memory for quick access the next time it is needed is called caching. The large number of products in the catalog makes it infeasible to cache information for all of the products in memory. What can be done is to keep information for as many products as feasible in memory, selecting products guessed to be the most likely to be used are in memory so they are there when needed. Deciding which and how many objects to keep in memory is called cache management.
Figure 1 shows how cache management would work for the product information example.
Figure 1. Product cache management collabortion.
Figure 2 show the general structure of the Cache Management pattern.
Figure 2. Cache Management Pattern.
Here are descriptions of the classes that participate in the Cache Management pattern and the roles that they play:
Sometimes applications of the Cache Management pattern are added to the design of a program after the need for a performance optimization has been discovered. This is usually not a problem because the impact of the Cache Management pattern on the rest of a program is minimal. If the CacheManager class is implemented as a subclass of the ObjectFetcher class then, using the Proxy pattern, an implementation of the Cache Management pattern can be inserted into a working program with minimal modification to existing code.
The primary consequence of using the Cache Management pattern is that a program spends less time fetching objects from expensive sources. The simplest way of measuring the effectiveness of caching is by computing a statistic called its hit rate. The hit rate is the percentage of object fetch requests that the cache manager satisfies with objects stored in the cache. If every request is satisfied with an object from the cache, then the hit rate is 100%. If no request is satisfied, then the hit rate is 0%. The hit rate depends largely on how well the implementation of the Cache Management pattern matches the way that objects are requested. You can find a more detailed analysis of the performance implications of caching in Patterns in Java.
When objects are created with data from an external source, another consequence of using the Cache Management pattern is that the cache may become inconsistent with the original data source. The consistency problem breaks down into two separate problems that can be solved independently of each other. Those problems are read consistency and write consistency.
Read consistency means that the cache always reflects updates to information in the original object source. If the objects being cached are stock prices, then the prices in the object source can change while the prices in the cache will no longer be current. Write consistency means that the original object source always reflects updates to the cache. To achieve absolute read or write consistency for objects in a cache with the original object source requires that you implement a mechanism that keeps them synchronized. Such mechanisms can be complicated to implement and add considerable execution time. They generally involve techniques such as locking and optimistic concurrency, which are beyond the scope of this article.
As an example, we will now consider a problem that can benefit from an application of the Cache Managment pattern. Suppose you are writing software for an employee timekeeping system. The system consists of timekeeping terminals and a timekeeping server. The terminals are small boxes mounted on the walls of a place of business. When an employee arrives at work or leaves work, the employee notifies the timekeeping system by running his or her ID card through a timekeeping terminal. The terminal reads the employee's ID on the card and acknowledges the card by displaying the employee's name and options. The employee then selects an option to indicate that he or she is starting work, ending work, going on break or other options. The timekeeping terminals transmit these timekeeping events to the timekeeping server. At the end of each pay period, the business's payroll system gets the number of hours each employee worked from the timekeeping system and prepares paychecks.
The exact details of what an employee sees will depend on an employee profile that a terminal receives from the timekeeping server. The employee profile will include the employee's name, the language in which to display prompts for the employee and what special options apply to the employee.
Most businesses assign their employees a fixed location in the business place to do their work. Employees with a fixed work location will normally use the timekeeping terminal nearest to their work location. To avoid long lines in front of timekeeping terminals, it is recommended that the terminals be positioned so that fewer than 70 employees with fixed work locations will use the same timekeeping terminal.
A substantial portion of the cost of the timekeeping system will be the cost of the terminals. To keep their cost down, the timekeeping terminals will have a minimal amount of memory. However, to keep response time down, we will want the terminals to cache employees profiles so that most of the time they will be able to respond immediately when presented with an employee's ID card. That means that you will have to impose a maximum cache size that is rather modest. A reasonable basis for an initial maximum cache size is the recommendation that the terminals be positioned so that no more than 70 employees with fixed work locations use the same terminal. Based on that, we come up with an initial cache size of up to 80 employee profiles.
The reason for picking a number larger than 70 is that under some situations more than 70 employees may use the same timekeeping terminal. Sometimes one part of a business will borrow employees from another part of a business when they experience a peak workload. Also, there will be employees, such as maintenance staff, that float from one location to another.
Figure 3 is a class diagram that shows how the Cache Management pattern is applied to this problem.
Figure 3. Timekeeping Cache Management
Lets look at some code that implements the design in Figure 3. First, here is the code for the EmployeeProfileManager class:
class EmployeeProfileManager { private EmployeeCache cache = new EmployeeCache(); private EmployeeProfileFetcher server = new EmployeeProfileFetcher(); /** * Fetch an employee profile for the given employee id from the * internal cache or timekeeping server if not in the internal cache. * @return employee's profile or null if employee profile not found. */ EmployeeProfile fetchEmployee(EmployeeID id) { EmployeeProfile profile = cache.fetchEmployee(id); if (profile == null) { // if profile not in cache try server profile = server.fetchEmployee(id); if (profile != null) { // Got the profile from the server // put profile in the cache cache.addEmployee(profile); } // if != null } // if == null return profile; } // fetchEmployee(EmployeeID) } // class EmployeeProfileManager
The logic in the EmployeeProfileManager class is straightforward conditional logic. The logic of the EmployeeCache class is more intricate, since it has to manipulate a data structure to determine which employee profile to remove from the cache when adding an employee profile to a full cache.
class EmployeeCache {
/**
* We use a linked list to determine the least recently used employee
* profile. The cache that itself is implemented by a Hashtable
* object. The Hashtable values are linked list objects that refer
* to the actual EmployeeProfile object.
*/
private Hashtable cache = new Hashtable();
/**
* This is the head of the linked list that refers to the most
* recently used EmployeeProfile.
*/
LinkedList mru = null;
/**
* this is the end of the linked list that referes to the least
* recently used EmployeeProfile.
*/
LinkedList lru = null;
/**
* Maximum number of EmployeeProfile objects that may be in the cache.
*/
private final int MAX_CACHE_SIZE = 80;
/**
* The number of EmployeeProfile objects currently in the cache.
*/
private int currentCacheSize = 0;
/**
* Objects are passed to this method for addition to the cache.
* However, this method is not required to actually add an object
* to the cache if that is contrary to its policy for what object
* should be added. This method may also remove objects already in
* the cache in order to make room for new objects.
*/
public void addEmployee(EmployeeProfile emp) {
EmployeeID id = emp.getID();
if (cache.get(id) == null) { // if profile not in cache
// Add profile to cache, making it the most recently used.
if (currentCacheSize == 0) {
// treate empty cache as a special case
lru = mru = new LinkedList();
mru.profile = emp;
} else { // currentCacheSize > 0
LinkedList newLink;
if (currentCacheSize >= MAX_CACHE_SIZE) {
// remove least recently used EmployeeProfile from the cache
newLink = lru;
lru = newLink.previous;
cache.remove(newLink);
lru.next = null;
} else {
newLink = new LinkedList();
} // if >= MAX_CACHE_SIZE
newLink.profile = emp;
newLink.next = mru;
newLink.previous = null;
mru = newLink;
} // if 0
// put the now most recently used profile in the cache
cache.put(id, mru);
currentCacheSize++;
} else { // profile already in cache
// addEmployee shouldn't be called when the object is already
// in the cache. Since that has happened, do a fetch so
// that so object becomes the most recently used.
fetchEmployee(id);
} // if cache.get(id)
} // addEmployee(EmployeeProfile)
/**
* Return the EmployeeProfile associated with the given EmployeeID or
* null if no EmployeeProfile is associated with the given EmployeeID.
*/
public EmployeeProfile fetchEmployee(EmployeeID id) {
LinkedList foundLink = (LinkedList)cache.get(id);
if (foundLink == null)
return null;
if (mru != foundLink) {
if (foundLink.previous != null)
foundLink.previous.next = foundLink.next;
if (foundLink.next != null)
foundLink.next.previous = foundLink.previous;
foundLink.previous = null;
foundLink.next = mru;
mru = foundLink;
} // if currentCacheSize > 1
return foundLink.profile;
} // fetchEmployee(EmployeeID)
/**
* private doublely linked list class for managing list of most
* recently used employee profiles.
*/
private class LinkedList {
EmployeeProfile profile;
LinkedList previous;
LinkedList next;
} // class LinkedList
} // class EmployeeCache
Finally, here are the EmployeeProfile and EmployeeID classes:
class EmployeeProfile { private EmployeeID id; // Employee Id private Locale locale; // Language Preference private boolean supervisor; private String name; // Employee name public EmployeeProfile(EmployeeID id, Locale locale, boolean supervisor, String name) { this.id = id; this.locale = locale; this.supervisor = supervisor; this.name = name; } // Constructor(EmployeeID, Locale, boolean, String) public EmployeeID getID() { return id; } public Locale getLocale() { return locale; } public boolean isSupervisor() { return supervisor; } } // class EmployeeProfile
class EmployeeID { private String id; /** * constructor * @param id A string containing the employee ID. */ public EmployeeID(String id) { this.id = id; } // constructor(String) /** * Returns a hash code value for this object. */ public int hashCode() { return id.hashCode(); } /** * Return true if the given object is an EmployeeId equal to this one. */ public boolean equals(Object obj) { return ( obj instanceof EmployeeID && id.equals(((EmployeeID)obj).id) ); } // equals(Object) /** * Return the string representation of this EmployeeID. */ public String toString() { return id; } } // class EmployeeID
About the Author
Mark Grand is the author of a series of books titled "Patterns in Java." He is a consultant who specializes in object-oriented design and Java. He is currently working on a framework to create integrated enterprise applications.
This article was originally published on October 2, 2003 | https://www.developer.com/design/article.php/630481/Pattern-Summaries-Cache-Management.htm | CC-MAIN-2021-10 | refinedweb | 1,948 | 54.12 |
MCC Universal Library Python API for Windows
Project description
About
The mcculw package contains an API (Application Programming Interface) for interacting with the I/O Library for Measurement Computing Data Acquisition products, Universal Library. This package was created and is supported by MCC. The package is implemented in Python as a wrapper around the Universal Library C API using the ctypes Python Library.
mcculw is supported for Universal Library 6.55 and later. Some functions in the mcculw package may be unavailable with earlier versions of Universal Library. Visit to upgrade your version of UL.
mcculw supports only the Windows operating system.
mcculw supports CPython 2.7 and 3.4+.
The mcculw package is available on GitHub and PyPI.
Installation
Install Python version 2.7, 3.4, or later from .
Install the latest version of InstaCal from .
Install the the MCC UL Python API for Windows (mcculw) and any dependencies using pip:
Open the Windows command prompt: press Win+R, type cmd.exe and press Enter.
Upgrade pip to the latest version by entering the following command:
pip install --upgrade pip
Install the mcculw library by entering the following command:
pip install mcculw
Note: If you get a message like “pip is not recognized as an internal or external command…”, or if you have multiple Python installations, enter the full path to the pip executable, such as C:\Python27\Scripts\pip install –upgrade pip or C:\Python27\Scripts\pip install mcculw. The pip command is in the Scripts subdirectory of your Python install location.
Examples
Download the examples zip file from the mcculw GitHub repository.
Unzip the examples to a known location, such as:
C:\Users\Public\Documents\Measurement Computing\DAQ\Python
Refer to the knowledgebase article Importing Python for Windows example programs into an IDE for detailed instructions on how to import examples into popular IDEs such as Eclipse and Visual Studio.
Usage
The following is a basic example of using the Universal Library to perform analog input. Further examples may be found on GitHub.
from mcculw import ul from mcculw.enums import ULRange from mcculw.ul import ULError board_num = 0 channel = 0 ai_range = ULRange.BIP5VOLTS try: # Get a value from the device value = ul.a_in(board_num, channel, ai_range) # Convert the raw value to engineering units eng_units_value = ul.to_eng_units(board_num, ai_range, value) # Display the raw value print("Raw Value: " + str(value)) # Display the engineering value print("Engineering Value: " + '{:.3f}'.format(eng_units_value)) except ULError as e: # Display the error print("A UL error occurred. Code: " + str(e.errorcode) + " Message: " + e.message)
Support/Feedback
The mcculw package is supported by MCC. For support for mcculw, contact technical support through . Please include version information for Python, Universal Library and the mcculw packages used as well as detailed steps on how to reproduce the problem in your request.
Bugs/Feature Requests
To report a bug or submit a feature request, please use the mcculw GitHub Issues page.
Documentation
Documentation is available in the Universal Library Help.
License
mcculw is licensed under an MIT-style license. Other incorporated projects may be licensed under different licenses. All licenses allow for non-commercial and commercial use.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/mcculw/ | CC-MAIN-2018-43 | refinedweb | 543 | 50.23 |
Hello everyone,
I am currently following this tutorial on YouTube describing how to create a SnapChat-like menu for your applications. I am running into some difficulty trying to center my nested UIViewController within my UIScrollView.
Within the first couple of minutes of the video, the creator shows how you can create a UIViewController, change its .xib file background to Green, and then add it as a subview of the UIScrollView. Only after adding a few lines of code and then running the application, you can see how his underlying UIViewController is centered within the UIScrollView.
I have followed his exact instructions and end up with a nested UIViewController that is off-centered and shifted about 25% up the screen. Here is a screenshot:
Here is also the code I have written thus far:
using System; using CoreGraphics; using UIKit; namespace Test2.iOS { public partial class ViewController : UIViewController { public ViewController(IntPtr handle) : base(handle) { } public override void ViewDidLoad() { base.ViewDidLoad(); // Perform any additional setup after loading the view, typically from a nib. var m = new Menu(); this.AddChildViewController(m); this.mainScrollView.AddSubview(m.View); m.DidMoveToParentViewController(this); } public override void DidReceiveMemoryWarning() { base.DidReceiveMemoryWarning(); // Release any cached data, images, etc that aren't in use. } } }
Menu is a custom UIViewController class that I have created with the only change being a color of Green applied to its .xib file background.
Any ideas as to how I can center my UIViewController? Can someone offer me some intuition as to why this may be occurring?
Thank you to all:
Answers
I am assuming you are using auto layout since that is in your Tag. Your code is not setting up any constraints. You could put that subview wherever you want if you setup the constraints properly. Do it right after you add the sub view.
I'm sorry but I'm not quite sure what you mean. If I set up my own Auto Layout constraints within the designer, are they supposed to show up in my code?
@RafaelNegron:
@JGoldberger
I understand what you mean now. I know I have to make at least 4 different constraints for my subview which are:
Any idea on how I can accomplish this quickly using the suggested article? Just asking in case you already know how to do this off the top of your head.
Hard to say without seeing your xib. This code would seem to at least be setting the centers to be the same:
m.View.Center = this.View.Center;
so there may be some intrinsic size happening where the Menu view controller's view is set to the full size of the screen.
Also the ContentSize property of a scroll view should be set to the full size of the view that is the direct child of the scroll view, so this would seem more appropriate: | https://forums.xamarin.com/discussion/91132/centering-a-uiviewcontroller-within-a-uiscrollview | CC-MAIN-2017-30 | refinedweb | 475 | 64.1 |
Term. If no size is specified, the service defaults to ten.
Examples
Before providing examples, we need to provide a context. Assume there is a class with a string property, a list of strings property, and a property of a complex type that also has a string property.
public class Book { public string Title { get; set; } public string Author { get; set; } public List<string> Tags { get; set; } } public class Author { public string Name { get; set; } }: You use the Take method to exclude actual search results, since you are not interested in them in this scenario. After getting the search results, you can extract the terms facet for the tags.
var tagCounts = searchResults .TermsFacetFor(x => x.Tags).Terms; foreach(var tagCount in tagCounts) { string tag = tagCount.Term; int count = tagCount.Count; Console.WriteLine(tag + ": " + count); }
The above code would print:
fiction: 2
crime: 1
science: 1
scifi: 1
Grouping authors
If you modify the code and instead retrieve a terms facet for author name, the code would look like this:
var searchResults = client.Search<Book>() .TermsFacetFor(x => x.Author.Name) .Take(0) .GetResult(); var authorCounts = searchResults .TermsFacetFor(x => x.Author.Name).Terms; foreach(var authorCount in authorCounts) { string authorName = authorCount.Term; int count = authorCount.Count; Console.WriteLine(authorName + ": " + count); }
The above code would print:
Agatha Christie: 1
Charles Darwin: 1
Charles Dickens: 1
Grouping by single words
In contrast, if you used the TermsFacetForWord method (instead of the TermsFacetFor method) in the above code, it would print:
charles: 2
agatha: 1
christie: 1
darwin: 1
dickens: 1 | https://world.episerver.com/documentation/Items/Developers-Guide/EPiServer-Find/9/DotNET-Client-API/Searching/Facets/Terms-facets/ | CC-MAIN-2018-51 | refinedweb | 256 | 61.87 |
<!-- 6.2 - patch 20210109 [ncurses.git] / doc / html / hackguide.html -->
<!--
  $Id: hackguide.html,v 1.33 2020/02/02 23:34:34 tom Exp $
  ****************************************************************************
  * Copyright 2019,2020 Thomas E. Dickey                                     *
  * Copyright 2000-2013,2017                                                 *
  ****************************************************************************
-->
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN">

<html>
<head>
  <meta name="generator" content=
  "HTML Tidy for HTML5 for Linux version 5.2.0">

  <title>A Hacker's Guide to Ncurses Internals</title>
  <link rel="author" href="mailto:bugs-ncurses@gnu.org">
  <meta http-equiv="Content-Type" content=
  "text/html; charset=us-ascii">
  <!--
  This document is self-contained, *except* that there is one relative link to
  the ncurses-intro.html document, expected to be in the same directory with
  this one.
  -->
</head>

<body>
  <h1>A Hacker's Guide to NCURSES</h1>

  <h1>Contents</h1>

  <ul>
    <li><a href="#abstract">Abstract</a></li>

    <li>
      <a href="#objective">Objective of the Package</a>

      <ul>
        <li><a href="#whysvr4">Why System V Curses?</a></li>

        <li><a href="#extensions">How to Design Extensions</a></li>
      </ul>
    </li>

    <li><a href="#portability">Portability and Configuration</a></li>

    <li><a href="#documentation">Documentation Conventions</a></li>

    <li><a href="#bugtrack">How to Report Bugs</a></li>

    <li>
      <a href="#ncurslib">A Tour of the Ncurses Library</a>

      <ul>
        <li><a href="#loverview">Library Overview</a></li>

        <li><a href="#engine">The Engine Room</a></li>

        <li><a href="#input">Keyboard Input</a></li>

        <li><a href="#mouse">Mouse Events</a></li>

        <li><a href="#output">Output and Screen Updating</a></li>
      </ul>
    </li>

    <li><a href="#fmnote">The Forms and Menu Libraries</a></li>

    <li>
      <a href="#tic">A Tour of the Terminfo Compiler</a>

      <ul>
        <li><a href="#nonuse">Translation of
        Non-<strong>use</strong> Capabilities</a></li>

        <li><a href="#uses">Use Capability Resolution</a></li>

        <li><a href="#translation">Source-Form Translation</a></li>
      </ul>
    </li>

    <li><a href="#utils">Other Utilities</a></li>

    <li><a href="#style">Style Tips for Developers</a></li>

    <li><a href="#port">Porting Hints</a></li>
  </ul>

  <h1><a name="abstract" id="abstract">Abstract</a></h1>

  <p>This document is a hacker's tour of the
  <strong>ncurses</strong> library and utilities. It discusses
  design philosophy, implementation methods, and the conventions
  used for coding and documentation. It is recommended reading for
  anyone who is interested in porting, extending or improving the
  package.</p>

  <h1><a name="objective" id="objective">Objective of the
  Package</a></h1>

  <p>The objective of the <strong>ncurses</strong> package is to
  provide a free software API for character-cell terminals and
  terminal emulators with the following characteristics:</p>

  <ul>
    <li>Source-compatible with historical curses implementations
    (including the original BSD curses and System V curses).</li>

    <li>Conformant with the XSI Curses standard issued as part of
    XPG4 by X/Open.</li>

    <li>High-quality — stable and reliable code, wide
    portability, good packaging, superior documentation.</li>

    <li>Featureful — should eliminate as much of the drudgery
    of C interface programming as possible, freeing programmers to
    think at a higher level of design.</li>
  </ul>

  <p>These objectives are in priority order. So, for example,
  source compatibility with older versions must trump
  featurefulness — we cannot add features if it means
  breaking the portion of the API corresponding to historical
  curses versions.</p>

  <h2><a name="whysvr4" id="whysvr4">Why System V Curses?</a></h2>

  <p>We used System V curses as a model, reverse-engineering their
  API, in order to fulfill the first two objectives.</p>

  <p>System V curses implementations can support BSD curses
  programs with just a recompilation, so by capturing the System V
  API we also capture BSD's.</p>

  <p>More importantly for the future, the XSI Curses standard
  issued by X/Open is explicitly and closely modeled on System V.
  So conformance with System V took us most of the way to
  base-level XSI conformance.</p>

  <h2><a name="extensions" id="extensions">How to Design
  Extensions</a></h2>

  <p>The second objective (standards conformance) requires that it
  be easy to condition source code using <strong>ncurses</strong>
  so that the absence of nonstandard extensions does not break the
  code.</p>

  <p>Accordingly, we have a policy of associating with each
  nonstandard extension a feature macro, so that ncurses client
  code can use this macro to condition in or out the code that
  requires the <strong>ncurses</strong> extension.</p>

  <p>For example, there is a macro
  <code>NCURSES_MOUSE_VERSION</code> which XSI Curses does not
  define, but which is defined in the <strong>ncurses</strong>
  library header. You can use this to condition the calls to the
  mouse API.</p>

  <h1><a name="portability" id="portability">Portability and
  Configuration</a></h1>

  <p>Code written for <strong>ncurses</strong> may assume an
  ANSI-standard C compiler and POSIX-compatible OS interface.
  It may also assume the presence of a System-V-compatible
  <em>select(2)</em> call.</p>

  <p>We encourage (but do not require) developers to make the code
  friendly to less-capable UNIX environments wherever possible.</p>

  <p>We encourage developers to support OS-specific optimizations
  and methods not available under POSIX/ANSI, provided only
  that:</p>

  <ul>
    <li>All such code is properly conditioned so the build process
    does not attempt to compile it under a plain ANSI/POSIX
    environment.</li>

    <li>Adding such implementation methods does not introduce
    incompatibilities in the <strong>ncurses</strong> API between
    platforms.</li>
  </ul>

  <p>We use GNU <code>autoconf(1)</code> as a tool to deal with
  portability issues. The right way to leverage an OS-specific
  feature is to modify the autoconf specification files
  (configure.in and aclocal.m4) to set up a new feature macro,
  which you then use to condition your code.</p>

  <h1><a name="documentation" id="documentation">Documentation
  Conventions</a></h1>

  <p>There are three kinds of documentation associated with this
  package. Each has a different preferred format:</p>

  <ul>
    <li>Package-internal files (README, INSTALL, TO-DO etc.)</li>

    <li>Manual pages.</li>

    <li>Everything else (i.e., narrative documentation).</li>
  </ul>

  <p>Our conventions are simple:</p>

  <ol>
    <li><strong>Maintain package-internal files in plain
    text.</strong> The expected viewer for them is
    <em>more(1)</em> or an editor window; there is no point in
    elaborate mark-up.</li>

    <li><strong>Mark up manual pages in the man macros.</strong>
    These have to be viewable through traditional <em>man(1)</em>
    programs.</li>

    <li><strong>Write everything else in HTML.</strong></li>
  </ol>

  <p>When in doubt, HTMLize a master and use <em>lynx(1)</em> to
  generate plain ASCII (as we do for the announcement
  document).</p>

  <p>The reason for choosing HTML is that it is (a) well-adapted
  for on-line browsing through viewers that are everywhere; (b)
  more easily readable as plain text than most other mark-ups, if
  you do not have a viewer; and (c) carries enough information that
  you can generate a nice-looking printed version from it. Also, of
  course, it makes exporting things like the announcement document
  to the Web pretty trivial.</p>

  <h1><a name="bugtrack" id="bugtrack">How to Report Bugs</a></h1>

  <p>The <a name="bugreport" id="bugreport">reporting address for
  bugs</a> is <a href=
  "mailto:bug-ncurses@gnu.org">bug-ncurses@gnu.org</a>. This is a
  majordomo list; to join, write to
  <code>bug-ncurses-request@gnu.org</code> with a message
  containing the line:</p>

  <pre>
     subscribe &lt;name&gt;@&lt;host.domain&gt;
  </pre>

  <p>The <code>ncurses</code> code is maintained by a small group
  of volunteers. While we try our best to fix bugs promptly, we
  simply do not have a lot of hours to spend on elementary
  hand-holding. We rely on intelligent cooperation from our users.
  If you think you have found a bug in <code>ncurses</code>, there
  are some steps you can take before contacting us that will help
  get the bug fixed quickly.</p>

  <p>In order to use our bug-fixing time efficiently, we put people
  who show us they have taken these steps at the head of our queue.
  This means that if you do not, you will probably end up at the
  tail end and have to wait a while.</p>

  <ol>
    <li>Develop a recipe to reproduce the bug.

      <p>Bugs we can reproduce are likely to be fixed very quickly,
      often within days. The most effective single thing you can do
      to get a quick fix is develop a way we can duplicate the bad
      behavior — ideally, by giving us source for a small,
      portable test program that breaks the library. (Even better
      is a keystroke recipe using one of the test programs provided
      with the distribution.)</p>
    </li>

    <li>Try to reproduce the bug on a different terminal type.

      <p>In our experience, most of the behaviors people report as
      library bugs are actually due to subtle problems in terminal
      descriptions. This is especially likely to be true if you are
      using a traditional asynchronous terminal or PC-based
      terminal emulator, rather than xterm or a UNIX console
      entry.</p>

      <p>It is therefore extremely helpful if you can tell us
      whether or not your problem reproduces on other terminal
      types. Usually you will have both a console type and xterm
      available; please tell us whether or not your bug reproduces
      on both.</p>

      <p>If you have xterm available, it is also good to collect
      xterm reports for different window sizes. This is especially
      true if you normally use an unusual xterm window size —
      a surprising number of the bugs we have seen are either
      triggered or masked by these.</p>
    </li>

    <li>Generate and examine a trace file for the broken behavior.

      <p>Recompile your program with the debugging versions of the
      libraries. Insert a <code>trace()</code> call with the
      argument set to <code>TRACE_UPDATE</code>. (See <a href=
      "ncurses-intro.html#debugging">"Writing Programs with
      NCURSES"</a> for details on trace levels.) Reproduce your
      bug, then look at the trace file to see what the library was
      actually doing.</p>

      <p>Another frequent cause of apparent bugs is application
      coding errors that cause the wrong things to be put on the
      virtual screen. Looking at the virtual-screen dumps in the
      trace file will tell you immediately if this is happening,
      and save you from the possible embarrassment of being told
      that the bug is in your code and is your problem rather than
      ours.</p>

      <p>If the virtual-screen dumps look correct but the bug
      persists, it is possible to crank up the trace level to give
      more and more information about the library's update actions
      and the control sequences it issues to perform them. The test
      directory of the distribution contains a tool for digesting
      these logs to make them less tedious to wade through.</p>

      <p>Often you will find terminfo problems at this stage by
      noticing that the escape sequences put out for various
      capabilities are wrong. If not, you are likely to learn
      enough to be able to characterize any bug in the
      screen-update logic quite exactly.</p>
    </li>

    <li>Report details and symptoms, not just interpretations.

      <p>If you do the preceding two steps, it is very likely that
      you will discover the nature of the problem yourself and be
      able to send us a fix. This will create happy feelings all
      around and earn you good karma for the first time you run
      into a bug you really cannot characterize and fix
      yourself.</p>

      <p>If you are still stuck, at least you will know what to
      tell us. Remember, we need details. If you guess about what
      is safe to leave out, you are too likely to be wrong.</p>

      <p>If your bug produces a bad update, include a trace file.
      Try to make the trace at the <em>least</em> voluminous level
      that pins down the bug. Logs that have been through
      tracemunch are OK; it does not throw away any information
      (actually they are better than un-munched ones because they
      are easier to read).</p>

      <p>If your bug produces a core-dump, please include a
      symbolic stack trace generated by gdb(1) or your local
      equivalent.</p>

      <p>Tell us about every terminal on which you have reproduced
      the bug — and every terminal on which you cannot.
      Ideally, send us terminfo sources for all of these (yours
      might differ from ours).</p>

      <p>Include your ncurses version and your OS/machine type, of
      course! You can find your ncurses version in the
      <code>curses.h</code> file.</p>
    </li>
  </ol>

  <p>If your problem smells like a logic error in cursor movement
  or scrolling, or a bad capability, there are a couple of tiny
  test frames for the library algorithms in the progs directory
  that may help you isolate it. These are not part of the normal
  build, but do have their own make productions.</p>

  <p>The most important of these is <code>mvcur</code>, a test
  frame for the cursor-movement optimization code. With this
  program, you can see directly what control sequences will be
  emitted for any given cursor movement or scroll/insert/delete
  operations. If you think you have got a bad capability
  identified, you can disable it and test again. The program is
  command-driven and has on-line help.</p>

  <p>If you think the vertical-scroll optimization is broken, or
  just want to understand how it works better, build
  <code>hashmap</code> and read the header comments of
  <code>hardscroll.c</code> and <code>hashmap.c</code>; then try it
  out. You can also test the hardware-scrolling optimization
  separately with <code>hardscroll</code>.</p>

  <h1><a name="ncurslib" id="ncurslib">A Tour of the Ncurses
  Library</a></h1>

  <h2><a name="loverview" id="loverview">Library Overview</a></h2>

  <p>Most of the library is superstructure — fairly trivial
  convenience interfaces to a small set of basic functions and data
  structures used to manipulate the virtual screen (in particular,
  none of this code does any I/O except through calls to more
  fundamental modules described below). The files</p>

  <blockquote>
    <code>lib_addch.c lib_bkgd.c lib_box.c lib_chgat.c lib_clear.c
    lib_clearok.c lib_clrbot.c lib_clreol.c lib_colorset.c
    lib_data.c lib_delch.c lib_delwin.c lib_echo.c lib_erase.c
    lib_gen.c lib_getstr.c lib_hline.c lib_immedok.c lib_inchstr.c
    lib_insch.c lib_insdel.c lib_insstr.c lib_instr.c
    lib_isendwin.c lib_keyname.c lib_leaveok.c lib_move.c
    lib_mvwin.c lib_overlay.c lib_pad.c lib_printw.c lib_redrawln.c
    lib_scanw.c lib_screen.c lib_scroll.c lib_scrollok.c
    lib_scrreg.c lib_set_term.c lib_slk.c lib_slkatr_set.c
    lib_slkatrof.c lib_slkatron.c lib_slkatrset.c lib_slkattr.c
    lib_slkclear.c lib_slkcolor.c lib_slkinit.c lib_slklab.c
    lib_slkrefr.c lib_slkset.c lib_slktouch.c lib_touch.c
    lib_unctrl.c lib_vline.c lib_wattroff.c lib_wattron.c
    lib_window.c</code>
  </blockquote>

  <p>are all in this category. They are very unlikely to need
  change, barring bugs or some fundamental reorganization in the
  underlying data structures.</p>

  <p>These files are used only for debugging support:</p>

  <blockquote>
    <code>lib_trace.c lib_traceatr.c lib_tracebits.c lib_tracechr.c
    lib_tracedmp.c lib_tracemse.c trace_buf.c</code>
  </blockquote>

  <p>It is rather unlikely you will ever need to change these,
  unless you want to introduce a new debug trace level for some
  reason.</p>

  <p>There is another group of files that do direct I/O via
  <em>tputs()</em>, computations on the terminal capabilities, or
  queries to the OS environment, but nevertheless have only fairly
  low complexity. These include:</p>

  <blockquote>
    <code>lib_acs.c lib_beep.c lib_color.c lib_endwin.c
    lib_initscr.c lib_longname.c lib_newterm.c lib_options.c
    lib_termcap.c lib_ti.c lib_tparm.c lib_tputs.c lib_vidattr.c
    read_entry.c</code>
  </blockquote>

  <p>They are likely to need revision only if ncurses is being
  ported to an environment without an underlying terminfo
  capability representation.</p>

  <p>These files have serious hooks into the tty driver and signal
  facilities:</p>

  <blockquote>
    <code>lib_kernel.c lib_baudrate.c lib_raw.c lib_tstp.c
    lib_twait.c</code>
  </blockquote>

  <p>If you run into porting snafus moving the package to another
  UNIX, the problem is likely to be in one of these files. The file
  <code>lib_print.c</code> uses sleep(2) and also falls in this
  category.</p>

  <p>Almost all of the real work is done in the files</p>

  <blockquote>
    <code>hardscroll.c hashmap.c lib_addch.c lib_doupdate.c
    lib_getch.c lib_mouse.c lib_mvcur.c lib_refresh.c lib_setup.c
    lib_vidattr.c</code>
  </blockquote>

  <p>Most of the algorithmic complexity in the library lives in
  these files. If there is a real bug in <strong>ncurses</strong>
  itself, it is probably here. We will tour some of these files in
  detail below (see <a href="#engine">The Engine Room</a>).</p>

  <p>Finally, there is a group of files that is actually most of
  the terminfo compiler. The reason this code lives in the
  <strong>ncurses</strong> library is to support fallback to
  /etc/termcap. These files include</p>

  <blockquote>
    <code>alloc_entry.c captoinfo.c comp_captab.c comp_error.c
    comp_hash.c comp_parse.c comp_scan.c parse_entry.c
    read_termcap.c write_entry.c</code>
  </blockquote>

  <p>We will discuss these in the compiler tour.</p>

  <h2><a name="engine" id="engine">The Engine Room</a></h2>

  <h3><a name="input" id="input">Keyboard Input</a></h3>

  <p>All <code>ncurses</code> input funnels through the function
  <code>wgetch()</code>, defined in <code>lib_getch.c</code>. This
  function is tricky; it has to poll for keyboard and mouse events
  and do a running match of incoming input against the set of
  defined special keys.</p>

  <p>The central data structure in this module is a FIFO queue,
  used to match multiple-character input sequences against
  special-key capabilities; also to implement pushback via
  <code>ungetch()</code>.</p>

  <p>The <code>wgetch()</code> code distinguishes between function
  key sequences and the same sequences typed manually by doing a
  timed wait after each input character that could lead a function
  key sequence. If the entire sequence takes less than 1 second, it
  is assumed to have been generated by a function key press.</p>

  <p>Hackers bruised by previous encounters with variant
  <code>select(2)</code> calls may find the code in
  <code>lib_twait.c</code> interesting. It deals with the problem
  that some BSD selects do not return a reliable time-left value.
  The function <code>timed_wait()</code> effectively simulates a
  System V select.</p>

  <h3><a name="mouse" id="mouse">Mouse Events</a></h3>

  <p>If the mouse interface is active, <code>wgetch()</code> polls
  for mouse events each call, before it goes to the keyboard for
  input. It is up to <code>lib_mouse.c</code> how the polling is
  accomplished; it may vary for different devices.</p>

  <p>Under xterm, however, mouse event notifications come in via
  the keyboard input stream. They are recognized by having the
  <strong>kmous</strong> capability as a prefix. This is kind of
  klugey, but trying to wire in recognition of a mouse key prefix
  without going through the function-key machinery would be just
  too painful, and this turns out to imply having the prefix
  somewhere in the function-key capabilities at terminal-type
  initialization.</p>

  <p>This kluge only works because <strong>kmous</strong> is not
  actually used by any historic terminal type or curses
  implementation we know of. Best guess is it is a relic of some
  forgotten experiment in-house at Bell Labs that did not leave any
  traces in the publicly-distributed System V terminfo files. If
  System V or XPG4 ever gets serious about using it again, this
  kluge may have to change.</p>

  <p>Here are some more details about mouse event handling:</p>

  <p>The <code>lib_mouse()</code> code is logically split into a
  lower level that accepts event reports in a device-dependent
  format and an upper level that parses mouse gestures and filters
  events. The mediating data structure is a circular queue of event
  structures.</p>

  <p>Functionally, the lower level's job is to pick up primitive
  events and put them on the circular queue.
  This can happen in one of two ways: either (a)
  <code>_nc_mouse_event()</code> detects a series of incoming mouse
  reports and queues them, or (b) code in
  <code>lib_getch.c</code> detects the <strong>kmous</strong>
  prefix in the keyboard input stream and calls
  <code>_nc_mouse_inline()</code> to queue up a series of adjacent
  mouse reports.</p>

  <p>In either case, <code>_nc_mouse_parse()</code> should be
  called after the series is accepted to parse the digested mouse
  reports (low-level events) into a gesture (a high-level or
  composite event).</p>

  <h3><a name="output" id="output">Output and Screen Updating</a></h3>

  <p>With the single exception of character echoes during a
  <code>wgetnstr()</code> call (which simulates cooked-mode line
  editing in an ncurses window), the library normally does all its
  output at refresh time.</p>

  <p>The main job is to go from the current state of the screen (as
  represented in the <code>curscr</code> window structure) to the
  desired new state (as represented in the <code>newscr</code>
  window structure), while doing as little I/O as possible.</p>

  <p>The brains of this operation are the modules
  <code>hashmap.c</code>, <code>hardscroll.c</code> and
  <code>lib_doupdate.c</code>; the latter two use
  <code>lib_mvcur.c</code>. Essentially, what happens looks like
  this:</p>

  <ul>
    <li>
      <p>The <code>hashmap.c</code> module tries to detect vertical
      motion changes between the real and virtual screens. This
      information is represented by the oldindex members in the
      newscr structure. These are modified by vertical-motion and
      clear operations, and both are re-initialized after each
      update. To this change-journalling information, the hashmap
      code adds deductions made using a modified Heckel algorithm
      on hash values generated from the line contents.</p>
    </li>

    <li>
      <p>The <code>hardscroll.c</code> module computes an optimum
      set of scroll, insertion, and deletion operations to make the
      indices match. It calls <code>_nc_mvcur_scrolln()</code> in
      <code>lib_mvcur.c</code> to do those motions.</p>
    </li>

    <li>
      <p>Then <code>lib_doupdate.c</code> goes to work. Its job is
      to do line-by-line transformations of <code>curscr</code>
      lines to <code>newscr</code> lines. Its main tool is the
      routine <code>mvcur()</code> in <code>lib_mvcur.c</code>.
      This routine does cursor-movement optimization, attempting to
      get from given screen location A to given location B in the
      fewest output characters possible.</p>
    </li>
  </ul>

  <p>If you want to work on screen optimizations, you should use
  the fact that (in the trace-enabled version of the library)
  enabling the <code>TRACE_TIMES</code> trace level causes a report
  to be emitted after each screen update giving the elapsed time
  and a count of characters emitted during the update. You can use
  this to tell when an update optimization improves efficiency.</p>

  <p>In the trace-enabled version of the library, it is also
  possible to disable and re-enable various optimizations at
  runtime by tweaking the variable
  <code>_nc_optimize_enable</code>. See the file
  <code>include/curses.h.in</code> for mask values, near the
  end.</p>

  <h1><a name="fmnote" id="fmnote">The Forms and Menu Libraries</a></h1>

  <p>The forms and menu libraries should work reliably in any
  environment you can port ncurses to. The only portability issue
  anywhere in them is what flavor of regular expressions the
  built-in form field type TYPE_REGEXP will recognize.</p>

  <p>The configuration code prefers the POSIX regex facility,
  modeled on System V's, but will settle for BSD regexps if the
  former is not available.</p>

  <p>Historical note: the panels code was written primarily to
  assist in porting u386mon 2.0 (comp.sources.misc v14i001-4) to
  systems lacking panels support; u386mon 2.10 and beyond use it.
  This version has been slightly cleaned up for
  <code>ncurses</code>.</p>

  <h1><a name="tic" id="tic">A Tour of the Terminfo Compiler</a></h1>

  <p>The <strong>ncurses</strong> implementation of
  <strong>tic</strong> is rather complex internally; it has to do a
  trying combination of missions. This starts with the fact that,
  in addition to its normal duty of compiling terminfo sources into
  loadable terminfo binaries, it has to be able to handle termcap
  syntax and compile that too into terminfo entries.</p>

  <p>The implementation therefore starts with a table-driven,
  dual-mode lexical analyzer (in <code>comp_scan.c</code>). The
  lexer chooses its mode (termcap or terminfo) based on the first
  &ldquo;,&rdquo; or &ldquo;:&rdquo; it finds in each entry. The
  lexer does all the work of recognizing capability names and
  values; the grammar above it is trivial, just "parse entries till
  you run out of file".</p>

  <h2><a name="nonuse" id="nonuse">Translation of
  Non-<strong>use</strong> Capabilities</a></h2>

  <p>Translation of most things besides <strong>use</strong>
  capabilities is pretty straightforward. The lexical analyzer's
  tokenizer hands each capability name to a hash function, which
  drives a table lookup.
  The table entry yields an index which is used to look up the
  token type in another table, and controls interpretation of the
  value.</p>

  <p>One possibly interesting aspect of the implementation is the
  way the compiler tables are initialized. All the tables are
  generated by various awk/sed/sh scripts from a master table
  <code>include/Caps</code>; these scripts actually write C
  initializers which are linked to the compiler. Furthermore, the
  hash table is generated in the same way, so it doesn't have to be
  generated at compiler startup time (another benefit of this
  organization is that the hash table can be in shareable text
  space).</p>

  <p>Thus, adding a new capability is usually pretty trivial, just
  a matter of adding one line to the <code>include/Caps</code>
  file. We will have more to say about this in the section on
  <a href="#translation">Source-Form Translation</a>.</p>

  <h2><a name="uses" id="uses">Use Capability Resolution</a></h2>

  <p>The background problem that makes <strong>tic</strong> tricky
  is not the capability translation itself, it is the resolution of
  <strong>use</strong> capabilities. Older versions would not
  handle forward <strong>use</strong> references for this reason
  (that is, a using terminal always had to follow its use target in
  the source file). By doing this, they got away with a simple
  implementation tactic: compile everything as it blows by, then
  resolve uses from compiled entries.</p>

  <p>This will not do for <strong>ncurses</strong>. The problem is
  that the whole compilation process has to be embeddable in the
  <strong>ncurses</strong> library so that it can be called by the
  startup code to translate termcap entries on the fly. The
  embedded version cannot go promiscuously writing everything it
  translates out to disk — for one thing, it will typically
  be running with non-root permissions.</p>

  <p>So our <strong>tic</strong> is designed to parse an entire
  terminfo file into a doubly-linked circular list of entry
  structures in-core, and then do <strong>use</strong> resolution
  in-memory before writing everything out. This design has other
  advantages: it makes forward and back use-references equally easy
  (so we get the latter for free), and it makes checking for name
  collisions before they are written out easy to do.</p>

  <p>And this is exactly how the embedded version works. But the
  stand-alone user-accessible version of <strong>tic</strong>
  partly reverts to the historical strategy; it writes to disk (not
  keeping in core) any entry with no <strong>use</strong>
  references.</p>

  <p>This is strictly a core-economy kluge, implemented because the
  terminfo master file is large enough that some core-poor systems
  swap like crazy when you compile it all in memory... there have
  been reports of this process taking <strong>three hours</strong>,
  rather than the twenty seconds or less typical on the author's
  development box.</p>

  <p>So. The executable <strong>tic</strong> passes the
  entry-parser a hook that <em>immediately</em> writes out the
  referenced entry if it has no use capabilities. The compiler main
  loop refrains from adding the entry to the in-core list when this
  hook fires. If some other entry later needs to reference an entry
  that got written immediately, that is OK; the resolution code
  will fetch it off disk when it cannot find it in core.</p>

  <p>Name collisions will still be detected, just not as cleanly.
  The <code>write_entry()</code> code complains before overwriting
  an entry that postdates the time of <strong>tic</strong>'s first
  call to <code>write_entry()</code>. Thus it will complain about
  overwriting entries newly made during the <strong>tic</strong>
  run, but not about overwriting ones that predate it.</p>

  <h2><a name="translation" id="translation">Source-Form
  Translation</a></h2>

  <p>Another use of <strong>tic</strong> is to do source
  translation between various termcap and terminfo formats. There
  are more variants out there than you might think; the ones we
  know about are described in the <strong>captoinfo(1)</strong>
  manual page.</p>

  <p>The translation output code (<code>dump_entry()</code> in
  <code>ncurses/dump_entry.c</code>) is shared with the
  <strong>infocmp(1)</strong> utility. It takes the same internal
  representation used to generate the binary form and dumps it to
  standard output in a specified format.</p>

  <p>The <code>include/Caps</code> file has a header comment
  describing ways you can specify source translations for
  nonstandard capabilities just by altering the master table. It is
  possible to set up capability aliasing or tell the compiler to
  simply ignore a given capability without writing any C code at
  all.</p>

  <p>For circumstances where you need to do algorithmic
  translation, there are functions in <code>parse_entry.c</code>
  called after the parse of each entry that are specifically
  intended to encapsulate such translations. This, for example, is
  where the AIX <strong>box1</strong> capability gets translated to
  an <strong>acsc</strong> string.</p>

  <h1><a name="utils" id="utils">Other Utilities</a></h1>

  <p>The <strong>infocmp</strong> utility is just a wrapper around
  the same entry-dumping code used by <strong>tic</strong> for
  source translation.
Perhaps the one interesting aspect of the 781 code is the use of a predicate function passed in to 782 <code>dump_entry()</code> to control which capabilities are 783 dumped. This is necessary in order to handle both the ordinary 784 De-compilation case and entry difference reporting.</p> 785 786 <p>The <strong>tput</strong> and <strong>clear</strong> utilities 787 just do an entry load followed by a <code>tputs()</code> of a 788 selected capability.</p> 789 790 <h1><a name="style" id="style">Style Tips for Developers</a></h1> 791 792 <p>See the TO-DO file in the top-level directory of the source 793 distribution for additions that would be particularly useful.</p> 794 795 <p>The prefix <code>_nc_</code> should be used on library public 796 functions that are not part of the curses API in order to prevent 797 pollution of the application namespace. If you have to add to or 798 modify the function prototypes in curses.h.in, read 799 ncurses/MKlib_gen.sh first so you can avoid breaking XSI 800 conformance. Please join the ncurses mailing list. See the 801 INSTALL file in the top level of the distribution for details on 802 the list.</p> 803 804 <p>Look for the string <code>FIXME</code> in source files to tag 805 minor bugs and potential problems that could use fixing.</p> 806 807 <p>Do not try to auto-detect OS features in the main body of the 808 C code. That is the job of the configuration system.</p> 809 810 <p>To hold down complexity, do make your code data-driven. 811 Especially, if you can drive logic from a table filtered out of 812 <code>include/Caps</code>, do it. 
If you find you need to augment 813 the data in that file in order to generate the proper table, that 814 is still preferable to ad-hoc code — that is why the fifth 815 field (flags) is there.</p> 816 817 <p>Have fun!</p> 818 819 <h1><a name="port" id="port">Porting Hints</a></h1> 820 821 <p>The following notes are intended to be a first step towards 822 DOS and Macintosh ports of the ncurses libraries.</p> 823 824 <p>The following library modules are “pure curses”; 825 they operate only on the curses internal structures, do all 826 output through other curses calls (not including 827 <code>tputs()</code> and <code>putp()</code>) and do not call any 828 other UNIX routines such as signal(2) or the stdio library. Thus, 829 they should not need to be modified for single-terminal 830 ports.</p> 831 832 <blockquote> 833 <code>lib_addch.c lib_addstr.c lib_bkgd.c lib_box.c lib_clear.c 834 lib_clrbot.c lib_clreol.c lib_delch.c lib_delwin.c lib_erase.c 835 lib_inchstr.c lib_insch.c lib_insdel.c lib_insstr.c 836 lib_keyname.c lib_move.c lib_mvwin.c lib_newwin.c lib_overlay.c 837 lib_pad.c lib_printw.c lib_refresh.c lib_scanw.c lib_scroll.c 838 lib_scrreg.c lib_set_term.c lib_touch.c lib_tparm.c lib_tputs.c 839 lib_unctrl.c lib_window.c panel.c</code> 840 </blockquote> 841 842 <p>This module is pure curses, but calls outstr():</p> 843 844 <blockquote> 845 <code>lib_getstr.c</code> 846 </blockquote> 847 848 <p>These modules are pure curses, except that they use 849 <code>tputs()</code> and <code>putp()</code>:</p> 850 851 <blockquote> 852 <code>lib_beep.c lib_color.c lib_endwin.c lib_options.c 853 lib_slk.c lib_vidattr.c</code> 854 </blockquote> 855 856 <p>This modules assist in POSIX emulation on non-POSIX 857 systems:</p> 858 859 <dl> 860 <dt>sigaction.c</dt> 861 862 <dd>signal calls</dd> 863 </dl> 864 865 <p>The following source files will not be needed for a 866 single-terminal-type port.</p> 867 868 <blockquote> 869 <code>alloc_entry.c captoinfo.c clear.c comp_captab.c 
870 comp_error.c comp_hash.c comp_main.c comp_parse.c comp_scan.c 871 dump_entry.c infocmp.c parse_entry.c read_entry.c tput.c 872 write_entry.c</code> 873 </blockquote> 874 875 <p>The following modules will use 876 open()/read()/write()/close()/lseek() on files, but no other OS 877 calls.</p> 878 879 <dl> 880 <dt>lib_screen.c</dt> 881 882 <dd>used to read/write screen dumps</dd> 883 884 <dt>lib_trace.c</dt> 885 886 <dd>used to write trace data to the logfile</dd> 887 </dl> 888 889 <p>Modules that would have to be modified for a port start 890 here:</p> 891 892 <p>The following modules are “pure curses” but 893 contain assumptions inappropriate for a memory-mapped port.</p> 894 895 <dl> 896 <dt>lib_longname.c</dt> 897 898 <dd>assumes there may be multiple terminals</dd> 899 900 <dt>lib_acs.c</dt> 901 902 <dd>assumes acs_map as a double indirection</dd> 903 904 <dt>lib_mvcur.c</dt> 905 906 <dd>assumes cursor moves have variable cost</dd> 907 908 <dt>lib_termcap.c</dt> 909 910 <dd>assumes there may be multiple terminals</dd> 911 912 <dt>lib_ti.c</dt> 913 914 <dd>assumes there may be multiple terminals</dd> 915 </dl> 916 917 <p>The following modules use UNIX-specific calls:</p> 918 919 <dl> 920 <dt>lib_doupdate.c</dt> 921 922 <dd>input checking</dd> 923 924 <dt>lib_getch.c</dt> 925 926 <dd>read()</dd> 927 928 <dt>lib_initscr.c</dt> 929 930 <dd>getenv()</dd> 931 932 <dt>lib_newterm.c</dt> 933 934 <dt>lib_baudrate.c</dt> 935 936 <dt>lib_kernel.c</dt> 937 938 <dd>various tty-manipulation and system calls</dd> 939 940 <dt>lib_raw.c</dt> 941 942 <dd>various tty-manipulation calls</dd> 943 944 <dt>lib_setup.c</dt> 945 946 <dd>various tty-manipulation calls</dd> 947 948 <dt>lib_restart.c</dt> 949 950 <dd>various tty-manipulation calls</dd> 951 952 <dt>lib_tstp.c</dt> 953 954 <dd>signal-manipulation calls</dd> 955 956 <dt>lib_twait.c</dt> 957 958 <dd>gettimeofday(), select().</dd> 959 </dl> 960 961 <hr> 962 963 <address> 964 Eric S. 
Raymond <esr@snark.thyrsus.com> 965 </address> 966 (Note: This is <em>not</em> the <a href="#bugtrack">bug 967 address</a>!) 968 </body> 969 </html> ncurses, with patches starting at ncurses-5.6; new users should use RSS Atom | https://ncurses.scripts.mit.edu/?p=ncurses.git;a=blob;f=doc/html/hackguide.html;h=71312a565f4c5c9d68cae44f7b9e70e95a59adba;hb=152c5a605234b7ea36ba3a03ec07e124bb6aac75 | CC-MAIN-2022-40 | refinedweb | 6,851 | 52.8 |
If.
Table of Contents
Prerequisites
This tutorial assumes you have Elixir 1.5 or higher, and
mix installed already.
We’ll start by creating a new OTP project, with a supervision tree.
$ mix new example --sup $ cd example
We need our Elixir app to include a supervision tree because we will use a Supervisor to start up and run our Cowboy2 server.
Dependencies
Adding dependencies is a breeze with mix.
To use Plug as an adapter interface for the Cowboy2 webserver, we need to install the
PlugCowboy package:
Add the following to your
mix.exs file:
def deps do [ {:plug_cowboy, "~> 2.0"}, ] end
At the command line, run the following mix task to pull in these new dependencies:
$ mix deps.get
The Plug Specification
In order to begin creating Plugs, we need to know, and adhere to, the Plug spec.
Thankfully for us, there are only two functions necessary:
init/1 and
call/2.
Here’s a simple Plug that returns “Hello World!”:
defmodule Example.HelloWorldPlug do import Plug.Conn def init(options), do: options def call(conn, _opts) do conn |> put_resp_content_type("text/plain") |> send_resp(200, "Hello World!\n") end end
Save the file to
lib/example/hello_world_plug.ex.
The
init/1 function is used to initialize our Plug’s options.
It is called by a supervision tree, which is explained in the next section.
For now, it’ll be an empty List that is ignored.
The value returned from
init/1 will eventually be passed to
call/2 as its second argument.
The
call/2 function is called for every new request that comes in from the web server, Cowboy.
It receives a
%Plug.Conn{} connection struct as its first argument and is expected to return a
%Plug.Conn{} connection struct.
Configuring the Project’s Application Module
We need.
Our
lib/example/application.ex file should implement the child spec in its
start/2 function:
defmodule Example.Application do use Application require Logger def start(_type, _args) do children = [ {Plug.Cowboy, scheme: :http, plug: Example.HelloWorldPlug, options: [port: 8080]} ] opts = [strategy: :one_for_one, name: Example.Supervisor] Logger.info("Starting application...") Supervisor.start_link(children, opts) end end
Note: We do not have to call
child_spec here, this function will be called by the supervisor starting this process.
We simply pass a tuple with the module that we want the child spec built for and then the three options needed.
This starts up a Cowboy2 server under our app’s supervision tree.
It starts Cowboy running under the HTTP scheme (you can also specify HTTPS), on the given port,
8080, specifying the plug,
Example.HelloWorldPlug, as the interface for any incoming web requests.
Now we’re ready to run our app and send it some web requests! Notice that, because we generated an OTP app with the
--sup flag, our
Example application will start up automatically thanks to the
application function.
In
mix.exs you should see the following:
def application do [ extra_applications: [:logger], mod: {Example.Application, []} ] end
We’re ready to try out this minimalistic, plug-based web server. On the command line, run:
$ mix run --no-halt
Once everything is finished compiling, and
[info] Starting application... appears, open a web
browser to.
It should display:
Hello World!
Plug.Router
For most applications, like a web site or REST API, you’ll want a router to route requests for different paths and HTTP verbs to different handlers.
Plug provides a router to do that.
As we are about to see, we don’t need a framework like Sinatra in Elixir since we get that for free with Plug.
To start let’s create a file at
lib/example/router.ex and copy the following into it:
defmodule Example.Router do use Plug.Router plug :match plug :dispatch get "/" do send_resp(conn, 200, "Welcome") end match _ do send_resp(conn, 404, "Oops!") end end
This is a bare minimum Router but the code should be pretty self-explanatory.
We’ve included some macros through
use Plug.Router and then set up two of the built-in Plugs:
:match and
:dispatch.
There are two defined routes, one for handling GET requests to the root and the second for matching all other requests so we can return a 404 message.
Back in
lib/example/application.ex, we need to add
Example.Router into the web server supervisor tree.
Swap out the
Example.HelloWorldPlug plug with the new router:
def start(_type, _args) do children = [ {Plug.Cowboy, scheme: :http, plug: Example.Router, options: [port: 8080]} ] opts = [strategy: :one_for_one, name: Example.Supervisor] Logger.info("Starting application...") Supervisor.start_link(children, opts) end
Start the server again, stopping the previous one if it’s running (press
Ctrl+C twice).
Now in a web browser, go to.
It should output
Then, go to, or any other path.
It should output
Oops! with a 404 response.
Adding Another Plug
It is common to use more than one plug in a given web application, each of which is dedicated to its own responsibility. For example, we might have a plug that handles routing, a plug that validates incoming web requests, a plug that authenticates incoming requests, etc. In this section, we’ll define a plug to verify incoming requests parameters and we’ll teach our application to use both of our plugs–the router and the validation plug.
We want to create a Plug that verifies whether or not the request has some set of required parameters.
By implementing our validation in a Plug we can be assured that only valid requests will make it through to our application.
We will expect our Plug to be initialized with two options:
:paths and
:fields.
These will represent the paths we apply our logic to and which fields to require.
Note: Plugs are applied to all requests which is why we will handle filtering requests and applying our logic to only a subset of them. To ignore a request we simply pass the connection through.
We’ll start by looking at our finished Plug and then discuss how it works.
We’ll create it at
lib/example/plug/verify_request.ex:
defmodule Example.Plug.VerifyRequest do defmodule IncompleteRequestError do @moduledoc """ Error raised when a required field is missing. """ defexception message: "" end def init(options), do: options def call(%Plug.Conn{request_path: path} = conn, opts) do if path in opts[:paths], do: verify_request!(conn.params, opts[:fields]) conn end defp verify_request!(params, fields) do verified = params |> Map.keys() |> contains_fields?(fields) unless verified, do: raise(IncompleteRequestError) end defp contains_fields?(keys, fields), do: Enum.all?(fields, &(&1 in keys)) end
The first thing to note is we have defined a new exception
IncompleteRequestError which we’ll raise in the event of an invalid request.
The second portion of our Plug is the
call/2 function.
This is where we decide whether or not to apply our verification logic.
Only when the request’s path is contained in our
:paths option will we call
verify_request!/2.
The last portion of our plug is the private function
verify_request!/2 which verifies whether the required
:fields are all present.
In the event that some are missing, we raise
IncompleteRequestError.
We’ve set up our Plug to verify that all requests to
/upload include both
"content" and
"mimetype".
Only then will the route code be executed.
Next, we need to tell the router about the new Plug.
Edit
lib/example/router.ex and make the following changes:
defmodule Example.Router do use Plug.Router end
With this code, we are telling our application to send incoming requests through the
VerifyRequest plug before running through the code in the router.
Via the function call:
plug VerifyRequest, fields: ["content", "mimetype"], paths: ["/upload"]
We automatically invoke
VerifyRequest.init(fields: ["content", "mimetype"], paths: ["/upload"]).
This in turn passes the given options to the
VerifyRequest.call(conn, opts) function.
Let’s take a look at this plug in action! Go ahead and crash your local server (remember, that’s done by pressing
ctrl + c twice).
Then restart the server (
mix run --no-halt).
Now go to in your browser and you’ll see that the page simply isn’t working. You’ll just see a default error page provided by your browser.
Now let’s add our required params by going to. Now we should see our ‘Uploaded’ message.
It’s not great that when we raise an error, we don’t get any page. We’ll look at how to handle errors with plugs later.
Making The HTTP Port Configurable
Back when we defined the
Example module and application, the HTTP port was hard-coded in the module.
It’s considered good practice to make the port configurable by putting it in a configuration file.
We’ll set an application environment variable in
config/config.exs
use Mix.Config config :example, cowboy_port: 8080
Next we need to update
lib/example/application.ex read the port configuration value, and pass it to Cowboy.
We’ll define a private function to wrap up that responsibility
defmodule Example.Application do use Application require Logger def start(_type, _args) do children = [ {Plug.Cowboy, scheme: :http, plug: Example.Router, options: [port: cowboy_port()]} ] opts = [strategy: :one_for_one, name: Example.Supervisor] Logger.info("Starting application...") Supervisor.start_link(children, opts) end defp cowboy_port, do: Application.get_env(:example, :cowboy_port, 8080) end
The third argument of
Application.get_env is the default value, for when the configuration directive is undefined.
Now to run our application we can use:
$ mix run --no-halt
Testing a Plug
Testing Plugs is pretty straightforward thanks to
Plug.Test.
It includes a number of convenience functions to make testing easy.
Write the following test to
test/example/router_test.exs:
defmodule Example.RouterTest do use ExUnit.Case use Plug.Test alias Example.Router @content "<html><body>Hi!</body></html>" @mimetype "text/html" @opts Router.init([]) test "returns welcome" do conn = :get |> conn("/", "") |> Router.call(@opts) assert conn.state == :sent assert conn.status == 200 end test "returns uploaded" do conn = :get |> conn("/upload?content=#{@content}&mimetype=#{@mimetype}") |> Router.call(@opts) assert conn.state == :sent assert conn.status == 201 end test "returns 404" do conn = :get |> conn("/missing", "") |> Router.call(@opts) assert conn.state == :sent assert conn.status == 404 end end
Run it with this:
$ mix test test/example/router_test.exs
Plug.ErrorHandler
We noticed earlier that when we went to without the expected parameters, we didn’t get a friendly error page or a sensible HTTP status - just our browser’s default error page with a
500 Internal Server Error.
Let’s fix that now by adding in
Plug.ErrorHandler.
First, open up
lib/example/router.ex and then write the following to that file.
defmodule Example.Router do use Plug.Router use Plug.ErrorHandler defp handle_errors(conn, %{kind: kind, reason: reason, stack: stack}) do IO.inspect(kind, label: :kind) IO.inspect(reason, label: :reason) IO.inspect(stack, label: :stack) send_resp(conn, conn.status, "Something went wrong") end end
You’ll notice that at the top, we are now adding
use Plug.ErrorHandler.
This plug catches any error, and then looks for a function
handle_errors/2 to call in order to handle it.
handle_errors/2 just needs to accept the
conn as the first argument and then a map with three items (
:kind,
:reason, and
:stack) as the second.
You can see we’ve defined a very simple
handle_errors/2 function to see what’s going on. Let’s stop and restart our app again to see how this works!
Now, when you navigate to, you’ll see a friendly error message.
If you look in your terminal, you’ll see something like the following:
kind: :error reason: %Example.Plug.VerifyRequest.IncompleteRequestError{message: ""} stack: [ {Example.Plug.VerifyRequest, :verify_request!, 2, [file: 'lib/example/plug/verify_request.ex', line: 23]}, {Example.Plug.VerifyRequest, :call, 2, [file: 'lib/example/plug/verify_request.ex', line: 13]}, {Example.Router, :plug_builder_call, 2, [file: 'lib/example/router.ex', line: 1]}, {Example.Router, :call, 2, [file: 'lib/plug/error_handler.ex', line: 64]}, {Plug.Cowboy.Handler, :init, 2, [file: 'lib/plug/cowboy/handler.ex', line: 12]}, {:cowboy_handler, :execute, 2, [ file: '/path/to/project/example/deps/cowboy/src/cowboy_handler.erl', line: 41 ]}, {:cowboy_stream_h, :execute, 3, [ file: '/path/to/project/example/deps/cowboy/src/cowboy_stream_h.erl', line: 293 ]}, {:cowboy_stream_h, :request_process, 3, [ file: '/path/to/project/example/deps/cowboy/src/cowboy_stream_h.erl', line: 271 ]} ]
At the moment, we’re still sending a
500 Internal Server Error back. We can customise the status code by adding a
:plug_status field to our exception. Open up
lib/example/plug/verify_request.ex and add the following:
defmodule IncompleteRequestError do defexception message: "", plug_status: 400 end
Restart your server and refresh, and now you’ll get back a
400 Bad Request.
This plug makes it really easy to catch the useful information needed for developers to fix issues, while being able to also give our end user a nice page so it doesn’t look like our app totally blew up!
Available Plugs
There are a number of Plugs available out-of-the-box. The complete list can be found in the Plug docs here.
Caught a mistake or want to contribute to the lesson? Edit this page on GitHub! | https://elixirschool.com/en/lessons/specifics/plug/ | CC-MAIN-2021-25 | refinedweb | 2,186 | 60.61 |
Advice on Migrating to Sencha Cmd
Advice on Migrating to Sencha Cmd
I am migrating an existing ExtJS 4 (4.1.3) app to use Sencha Cmd (GA).
The app is a single page app and uses a custom structure. I red all the guides for Sencha Cmd and
managed to migrate the app structure and produce a custom build for our app.
This is the test script we use to build all application files (excluding ext files).
Code:
sencha compile --classpath=lib/extjs/src,js ^ --debug=true ^ exclude -not -namespace Fms and ^ save AppOnly and ^ concat build/fms-all-debug.js and ^ --debug=false ^ restore AppOnly and ^ concat -yui build/fms-all.js
Code:
- js (folder) - app (folder) app.js (Ext.application();)
this is the result:
Code:
...class definitions... Ext.application({...}); ...more class definitions
Is it possible to enforce the file app.js to be appended to the build after the build process ?
I tried excluding the file from the build but then I don't know how to append it afterwards.
Any help is appreciated.
Ext JS Development Team Lead | http://www.sencha.com/forum/showthread.php?248524-Advice-on-Migrating-to-Sencha-Cmd | CC-MAIN-2014-42 | refinedweb | 182 | 68.16 |
Search the Community
Showing results for tags 'help'.
How to chain timelines with masterTl.add(tl1).add(tl2)...
skyslide posted a topic in GSAP

How can I chain timelines so that each one waits for the previous one to finish? Do I have to manually calculate the duration of the last timeline added with tl.add(tl1) (tl1.duration())? I would like to play the added timelines as "chained clips", with the same behavior as tl.to() or tl.from(), but for timelines instead of individual animations.

const tl1 = getMy10sAnimation1() // gsap.timeline() ...to().to().to()...
const tl2 = getMy2sAnimation1()  // gsap.timeline()
const tl3 = getMy7sAnimation1()  // gsap.timeline()

const masterTl = gsap.timeline({ repeat: -1 })
  .add(tl1)
  .add(tl2)
  .add(tl3)

// expected result:
// -----------
// --
// ---------
// repeat

// but i get:
// -----------
// --
// -------
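For reference, add() with no position parameter already appends each child at the current end of the master timeline, so sequential "clips" should not need any manual duration math. A minimal sketch of that default behavior (the makeClip factory and the injected gsap argument are illustrative choices, not from the post):

```javascript
// Sketch: timeline.add() with no position parameter appends each child
// at the current end of the master timeline, so children play
// back-to-back. gsap is passed in as a parameter only so the function
// is easy to test; in a page you would use the global gsap directly.
function buildMaster(gsap, makeClip) {
  return gsap.timeline({ repeat: -1 })
    .add(makeClip(10)) // occupies 0s-10s
    .add(makeClip(2))  // occupies 10s-12s
    .add(makeClip(7)); // occupies 12s-19s
}
```

If the clips still overlap, check that nothing passes an absolute position parameter (e.g. add(tl2, 0)), which overrides the default end-of-timeline placement.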
What's the right way to animate this hover effect?
zinjo posted a topic in GSAP

Hi! I'm new to JavaScript and found GSAP while trying to animate some objects. I was trying to animate this sequence of squares to create a gallery effect, but the result is very unpredictable. Sometimes it all works fine, but when I hover from one square to another, some squares don't reverse the animation and end up in the wrong positions. It's as if, when I play one timeline right after another and both timelines animate the same object, the first timeline overrides the second, and the objects end up in the wrong position. How can I avoid this? Another issue I'm having is that I wanted the "scale" and "transform" properties to change the object simultaneously (I believe the animation would be smoother that way), but the way it is now, the object first scales and then transforms. Does anyone know what I am missing? Thanks a lot!
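One robust pattern for this symptom is to build a single paused timeline per square up front and only play()/reverse() it on hover, instead of creating fresh tweens on every event. A sketch assuming a hypothetical .square class (the gsap parameter is there only for testability):

```javascript
// Sketch: one reusable timeline per element. Because the same timeline
// is played forward and backward, a newly created tween can never fight
// a running one and leave the square in a wrong position. Note that
// scale and x are both transforms, so a single tween animates them
// simultaneously.
function setupHover(gsap, el) {
  const tl = gsap.timeline({ paused: true })
    .to(el, { scale: 1.2, x: 40, duration: 0.3, ease: "power2.out" });
  el.addEventListener("mouseenter", () => tl.play());
  el.addEventListener("mouseleave", () => tl.reverse());
  return tl;
}
// In the page: gsap.utils.toArray(".square").forEach(el => setupHover(gsap, el));
```

If creating new tweens per event is unavoidable, overwrite: true (or "auto") tells GSAP to kill conflicting tweens of the same properties instead of letting them fight.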
Please help me with GSAP Timeline Progress() bug ( ReactJS )
Mila A posted a topic in GSAP

I've been struggling with this issue for 3 days, rewriting and refactoring the code a few times. Please help me if possible, guys. I use ReactJS and GSAP to create different computed animations (overlays over a video). What happens is that when I seek to a specific completed percentage, for example 0.19 of a 49s total timeline length, it seeks into the first 1s part of the animation timeline cycle and doesn't show the animation at the stage expected from the progress percentage. I couldn't upload the project to CodeSandbox because 1) it is under NDA and 2) it says it has exceeded the 500-module item limit; I'm really sorry for that. Could someone please help me? I can share the source code or give access to my GitHub repository. Thanks in advance, everyone!

import gsap from 'gsap'; import RightTitleStyles from '../../../../styles/right-title.module.css'; import React from 'react'; interface RightTitleProps { range: Object; name: string; currentTime: number; isPreview: boolean; type: 'smaller' | 'bigger'; isVisible: boolean; style: any; subtitle: string; title: string; } const RightTitle = React.memo( ({ videoRef, setStyle, range, name, currentTime, isPreview, type, isVisible, style, title, subtitle, }: RightTitleProps) => { const titleRef = React.useRef(); const { current: tl } = React.useRef(gsap.timeline({ paused: true })); const [ rangeIntervals, setRangeIntervals ] = React.useState< Array< number > >( range.timeIntervals ); const connectTitleRef = ( el : HTMLElement ) => { if (titleRef.current || !el || !videoRef || isPreview ) { if ( isPreview || !el || rangeIntervals === range.timeIntervals ) { return; } else { tl.killAll(); // just clearing out some tweens for repeated recreation } } tl.progress(1 - (range.timeIntervals[1] - currentTime) / (range.timeIntervals[1] - range.timeIntervals[0])); titleRef.current = el; console.log( titleRef.current.id, videoRef, ); console.log('configuring...'); tl.fromTo(videoRef, { width: '100%' }, {
duration: 1, width: '63%' }).to(videoRef, { duration: range.timeIntervals[1] - range.timeIntervals[0] - 1 - 1, width: '63%' }).to(videoRef, { duration: 1, width: '100%' }); console.log( 'video configured', ); tl.fromTo( el, { x: name === 'Right Title' ? 150 : -150 }, { duration: 1, x: 0 }, ) .to(el, { x: 0, duration: range.timeIntervals[1] - range.timeIntervals[0] - 1 - 1, }) .to(`#${ el.id }`, { duration: 1, x: name === 'Right Title' ? 150 : -150, }); console.log(range.timeIntervals[1] - range.timeIntervals[0] - 1 - 1); }; // console.log( style, ); React.useEffect(() => { if (!titleRef.current || isPreview) return; console.log( 'styles applied to titleRef', titleRef.current._gsTransform ); console.log( 'these are tweens', tl.getChildren().map( child => child.vars.x || child.vars.width ) ); console.log( 'these are tweens', tl.getChildren().map( child => child.vars ) ); if (!(range.timeIntervals[0] <= currentTime && currentTime <= range.timeIntervals[1])) { console.log( 'current timing doesn`t fit the intervals' ); setStyle({}); tl.progress(0); return; } setStyle({ marginLeft: name === 'Left Title' ? 'auto' : 'unset', marginRight: name === 'Right Title' ? 'auto' : 'unset', }); tl.progress(1 - (range.timeIntervals[1] - currentTime) / (range.timeIntervals[1] - range.timeIntervals[0])); console.log(range.timeIntervals[1] - range.timeIntervals[0] - 1 - 1) console.log( currentTime, range.timeIntervals, 1 - (range.timeIntervals[1] - currentTime) / (range.timeIntervals[1] - range.timeIntervals[0]), ); }, [range.timeIntervals, currentTime]); const show = isVisible; if ( isPreview ) { return <div style={{ top: type === 'smaller' && 0, height: type === 'smaller' && '100%', ...style }} className={RightTitleStyles.aligningWrapper} > <div style={{ transform: isPreview && 'scale(0.55)' }}> <h1> {title} </h1> <p> {subtitle} </p>{' '} </div> </div> } return ( <div ref={ connectTitleRef } id={`${isPreview ? 
'previewMode' : 'notPreviewMode'}3${range.color.slice(1)}`} style={{ visibility : !( currentTime + 1 >= range.timeIntervals[0] && currentTime - 1 <= range.timeIntervals[1] ) ? 'hidden' : 'visible', top: type === 'smaller' && 0, height: type === 'smaller' && '100%', ...style }} className={RightTitleStyles.aligningWrapper} > <div style={{ transform: isPreview && 'scale(0.55)' }}> <h1> {title} </h1> <p> {subtitle} </p>{' '} </div> </div> ); } ); export default RightTitle; Title.tsx animation.tsx
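The seek arithmetic in the component above can be factored out and clamped; a small sketch (the function name is mine, and this is the same math as the post's 1 - (end - t) / (end - start)):

```javascript
// Sketch: map a video currentTime into a 0-1 progress value for a
// paused timeline. Clamping keeps times outside [start, end] from
// producing progress values below 0 or above 1.
function intervalProgress(t, [start, end]) {
  const p = (t - start) / (end - start);
  return Math.min(1, Math.max(0, p));
}
// In the component: tl.progress(intervalProgress(currentTime, range.timeIntervals));
```

Two details worth checking in the component: an out-of-range value (possible whenever currentTime falls outside the interval with the unclamped formula) pins the playhead to one end of the timeline, which can look like being stuck in the first second; and killAll() is not a GSAP 3 Timeline method - clear() is the documented way to empty a timeline before rebuilding it.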
[HELP] Scrolltrigger Text Animation with the same class
takachan posted a topic in GSAP

Good day, I am trying to figure out a way to have the animation run individually for each element with the same class. I read the page about the common mistake with loops but still couldn't figure it out. I know I'm supposed to use an array, but my code did not run. Can someone help me, please?
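The usual fix from the "common mistakes" article is to loop over the elements and give each one its own tween and ScrollTrigger, with the element itself as the trigger. A sketch using a hypothetical .split-text class (gsap is injected only for testability):

```javascript
// Sketch: one independent animation + ScrollTrigger per element.
// A single tween on a shared class selector would animate every
// element as soon as the first trigger fires; this loop keeps each
// one separate.
function animateEach(gsap, selector) {
  const targets = gsap.utils.toArray(selector);
  targets.forEach((el) => {
    gsap.from(el, {
      yPercent: 100,
      opacity: 0,
      duration: 0.8,
      scrollTrigger: { trigger: el, start: "top 50%" }, // fire at mid-viewport
    });
  });
  return targets.length;
}
// In the page: animateEach(gsap, ".split-text");
```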
[HELP] GSAP Scrolltrigger Text Animation
takachan posted a topic in GSAP

Good day, I want to create a text reveal animation, as shown in the codepen, driven by ScrollTrigger: the animation should start when the trigger reaches the middle of the screen on scroll. Can someone help me?
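Starting an animation when the trigger reaches the middle of the screen is a one-liner in ScrollTrigger's start syntax. A sketch with the settings split into a plain function so they are easy to inspect (the .reveal-title class and the tween values are assumptions):

```javascript
// Sketch: "top center" means "when the trigger element's top edge hits
// the vertical center of the viewport". toggleActions plays the reveal
// on enter and reverses it when scrolling back above the trigger.
function revealConfig(selector) {
  return [selector, {
    yPercent: 100, // slide up from an overflow-hidden wrapper (CSS assumed)
    duration: 1,
    ease: "power3.out",
    scrollTrigger: {
      trigger: selector,
      start: "top center",
      toggleActions: "play none none reverse",
    },
  }];
}
// In the page: gsap.from(...revealConfig(".reveal-title"));
```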
Scroll Trigger: Two separate motions on a single SVG
Keipen123 posted a topic in GSAP

Hello, I am having a problem making motions with ScrollTrigger. I am trying to make an SVG appear when triggered, and disappear after a few scrolls. I tried making two separate timelines, one for each motion, which did not work at all, and I have also tried using .reverse( ) or .fromTo( ) to no avail. Would any of you teach me how to make my SVG appear and later disappear using ScrollTrigger? To give you a precise image of the motion I am trying to make: I want this appearing and disappearing to be triggered through scrolling. I am sorry, I cannot provide a codepen of my project (as I do not have the pro plan to upload the SVG assets); instead I will clip a video of my current situation. I am trying to make the forest-like thing sink out of view after another stroke of scrolls. Please help me out here. Sincerely,
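One way to get "rise in, hold, sink out" from scrolling is a single scrubbed timeline: a from() to bring the SVG up, an empty spacer tween to hold it, then a to() that sends it back down. The #forest id, distances, and durations below are illustrative (gsap is injected for testability):

```javascript
// Sketch: with scrub: true the timeline's playhead is tied to scroll
// position, so the relative durations (1 : 2 : 1) divide the scroll
// distance into "rise", "hold", and "sink" phases.
function buildForestScene(gsap) {
  return gsap.timeline({
    scrollTrigger: {
      trigger: "#forest",
      start: "top 80%",
      end: "+=1500", // total scroll distance covered by the sequence
      scrub: true,
    },
  })
    .from("#forest", { yPercent: 100, duration: 1 }) // rise into view
    .to({}, { duration: 2 })                         // spacer: hold in view
    .to("#forest", { yPercent: 100, duration: 1 });  // sink back out
}
```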
Looking for HELP for my Portfolio
mr_a posted a topic in Jobs & Freelance. I am currently renewing my portfolio, and I notice that I don't have enough skills for what I want to implement. It's only about the home page of my portfolio; I would continue the rest myself. I need help with the following:

- Fullscreen slider (with my works) with text animation, image/video liquid animation on hover
- Page transition for the selected work in the slider (without content, only the transition)
- Simple page transition for other pages (like about, ...)
- Preloader animation
- Hover animation of the navigation points
- Draggable/scrollable endless grid (with images and, on hover, an optional gif/video), with a little liquid effect on scrolling/dragging

I have a page here showing which direction it should go. Here is the link for an example (slider show, page transition, hover animation, and, at the bottom left when clicking on the layer icon, the endless grid). I would provide the designs as well as the necessary animation. I hope someone can support me there (and can also implement this as in the example) and make my work a little easier. For info only: I'm not a company or agency, I'm a UX/UI designer. Please send a message or answer this topic (with the days you need for it and what you want for it). Cheers, Mr

…%', ease: 'power3', paused: true, }); I've tried starting it as paused, and when the `menuOpen` is true it animates the width fine. When it's `false` it's not reversing back to its original position as I would expect — clearly I'm missing something. I've tried a bunch of different approaches using state and ref to set the original width and trying to access that, but I've had no joy so far. I'm sure it's something simple I'm missing. Thanks for any help, Chrish
Need help to animate a analog clock
noineter posted a topic in GSAP. Hello guys, GreenSock and its plugins are very new to me and I have trouble understanding them; I hope the answers on this post will help me out! I need help animating an analog clock; the animations have to be done with GreenSock. Basically, I already have the analog clock coded. This link leads to my repo on GitHub. I was searching on the internet, and I just need two animations added with the GreenSock plugin: I want the background color to change gradually with the current time, so, for example, from dark at night to bright during the day. I want the clock to slide in from the left when the page loads.
help Stagger issue, 2 elements animating without delay
olhapi posted a topic in GSAP. As you can see, I have 3 elements with the same class. At the start of the animation, two of them start instantaneously, and only the third one starts after the desired delay. How do I fix that?
The Dining Philosophers Problem in Java
Last modified: June 14, 2017 by baeldung

1. Introduction

The Dining Philosophers problem is one of the classic problems used to describe synchronization issues in a multi-threaded environment and to illustrate techniques for solving them. Dijkstra first formulated this problem and presented it in terms of computers accessing tape drive peripherals. The present formulation was given by Tony Hoare, who is also known for inventing the quicksort sorting algorithm. In this article, we analyze this well-known problem and code a popular solution.

2. The Problem

The diagram above represents the problem. There are five silent philosophers (P1 – P5) sitting around a circular table, spending their lives eating and thinking. There are five forks for them to share (1 – 5), and to be able to eat, a philosopher needs forks in both his hands. After eating, he puts both of them down, and then they can be picked up by another philosopher, who repeats the same cycle. The goal is to come up with a scheme/protocol that helps the philosophers achieve their goal of eating and thinking without getting starved to death.

3. A Solution

An initial solution would be to make each of the philosophers follow the following protocol:

while(true) {
    // Initially, thinking about life, universe, and everything
    think();

    // Take a break from thinking, hungry now
    pick_up_left_fork();
    pick_up_right_fork();
    eat();
    put_down_right_fork();
    put_down_left_fork();

    // Not hungry anymore. Back to thinking!
}

As the above pseudo code describes, each philosopher is initially thinking. After a certain amount of time, the philosopher gets hungry and wishes to eat. At this point, he reaches for the forks on either side of him and, once he has got both of them, proceeds to eat. Once the eating is done, the philosopher puts the forks down, so that they're available for his neighbor.

4. Implementation

We model each of our philosophers as classes that implement the Runnable interface so that we can run them as separate threads. Each Philosopher has access to two forks on his left and right sides:

public class Philosopher implements Runnable {

    // The forks on either side of this Philosopher
    private Object leftFork;
    private Object rightFork;

    public Philosopher(Object leftFork, Object rightFork) {
        this.leftFork = leftFork;
        this.rightFork = rightFork;
    }

    @Override
    public void run() {
        // Yet to populate this method
    }
}

We also have a method that instructs a Philosopher to perform an action – eat, think, or acquire forks in preparation for eating:

public class Philosopher implements Runnable {

    // Member variables, standard constructor

    private void doAction(String action) throws InterruptedException {
        System.out.println(Thread.currentThread().getName() + " " + action);
        Thread.sleep(((int) (Math.random() * 100)));
    }

    // Rest of the methods written earlier
}

As shown in the code above, each action is simulated by suspending the invoking thread for a random amount of time, so that the execution order isn't enforced by time alone.

Now, let's implement the core logic of a Philosopher. To simulate acquiring a fork, we need to lock it so that no two Philosopher threads acquire it at the same time. To achieve this, we use the synchronized keyword to acquire the internal monitor of the fork object and prevent other threads from doing the same. A guide to the synchronized keyword in Java can be found here.
We proceed with implementing the run() method in the Philosopher class now:

public class Philosopher implements Runnable {

    // Member variables, methods defined earlier

    @Override
    public void run() {
        try {
            while (true) {
                // thinking
                doAction(System.nanoTime() + ": Thinking");
                synchronized (leftFork) {
                    doAction(System.nanoTime() + ": Picked up left fork");
                    synchronized (rightFork) {
                        // eating
                        doAction(System.nanoTime() + ": Picked up right fork - eating");
                        doAction(System.nanoTime() + ": Put down right fork");
                    }
                    // Back to thinking
                    doAction(System.nanoTime() + ": Put down left fork. Back to thinking");
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return;
        }
    }
}

This scheme exactly implements the one described earlier: a Philosopher thinks for a while and then decides to eat. After this, he acquires the forks to his left and right and starts eating. When done, he places the forks down. We also add timestamps to each action, which helps us understand the order in which events occur.

To kick-start the whole process, we write a client that creates five Philosophers as threads and starts all of them:

public class DiningPhilosophers {

    public static void main(String[] args) throws Exception {

        Philosopher[] philosophers = new Philosopher[5];
        Object[] forks = new Object[philosophers.length];

        for (int i = 0; i < forks.length; i++) {
            forks[i] = new Object();
        }

        for (int i = 0; i < philosophers.length; i++) {
            Object leftFork = forks[i];
            Object rightFork = forks[(i + 1) % forks.length];

            philosophers[i] = new Philosopher(leftFork, rightFork);

            Thread t = new Thread(philosophers[i], "Philosopher " + (i + 1));
            t.start();
        }
    }
}

We model each of the forks as generic Java objects and make as many of them as there are philosophers. We pass each Philosopher his left and right forks, which he attempts to lock using the synchronized keyword. Running this code results in an output similar to the following.
Your output will most likely differ from the one given below, mostly because the sleep() method is invoked for a different interval:

Philosopher 1 8038014601251: Thinking
Philosopher 2 8038014828862: Thinking
Philosopher 3 8038015066722: Thinking
Philosopher 4 8038015284511: Thinking
Philosopher 5 8038015468564: Thinking
Philosopher 1 8038016857288: Picked up left fork
Philosopher 1 8038022332758: Picked up right fork - eating
Philosopher 3 8038028886069: Picked up left fork
Philosopher 4 8038063952219: Picked up left fork
Philosopher 1 8038067505168: Put down right fork
Philosopher 2 8038089505264: Picked up left fork
Philosopher 1 8038089505264: Put down left fork. Back to thinking
Philosopher 5 8038111040317: Picked up left fork

All the Philosophers initially start off thinking, and we see that Philosopher 1 proceeds to pick up the left and right forks, then eats and proceeds to place both of them down, after which Philosopher 5 picks it up.

5. The Problem with the Solution: Deadlock

Though it seems that the above solution is correct, there's the issue of a deadlock arising. A deadlock is a situation where the progress of a system is halted, as each process is waiting to acquire a resource held by some other process. We can confirm the same by running the above code a few times and checking that sometimes the code just hangs. Here's a sample output that demonstrates the above issue:

Philosopher 1 8487540546530: Thinking
Philosopher 2 8487542012975: Thinking
Philosopher 3 8487543057508: Thinking
Philosopher 4 8487543318428: Thinking
Philosopher 5 8487544590144: Thinking
Philosopher 3 8487589069046: Picked up left fork
Philosopher 1 8487596641267: Picked up left fork
Philosopher 5 8487597646086: Picked up left fork
Philosopher 4 8487617680958: Picked up left fork
Philosopher 2 8487631148853: Picked up left fork

In this situation, each of the Philosophers has acquired his left fork but can't acquire his right fork, because his neighbor has already acquired it.
This situation is commonly known as the circular wait and is one of the conditions that result in a deadlock and prevent the progress of the system.

6. Resolving the Deadlock

As we saw above, the primary reason for a deadlock is the circular wait condition, where each process waits upon a resource that's being held by some other process. Hence, to avoid a deadlock situation, we need to make sure that the circular wait condition is broken. There are several ways to achieve this, the simplest one being the following: All Philosophers reach for their left fork first, except one, who first reaches for his right fork.

We implement this in our existing code by making a relatively minor change:

public class DiningPhilosophers {

    public static void main(String[] args) throws Exception {

        final Philosopher[] philosophers = new Philosopher[5];
        Object[] forks = new Object[philosophers.length];

        for (int i = 0; i < forks.length; i++) {
            forks[i] = new Object();
        }

        for (int i = 0; i < philosophers.length; i++) {
            Object leftFork = forks[i];
            Object rightFork = forks[(i + 1) % forks.length];

            if (i == philosophers.length - 1) {
                // The last philosopher picks up the right fork first
                philosophers[i] = new Philosopher(rightFork, leftFork);
            } else {
                philosophers[i] = new Philosopher(leftFork, rightFork);
            }

            Thread t = new Thread(philosophers[i], "Philosopher " + (i + 1));
            t.start();
        }
    }
}

The change is the if/else inside the loop, where we introduce the condition that makes the last philosopher reach for his right fork first instead of the left. This breaks the circular wait condition, and we can avert the deadlock.
The following output shows one of the cases where all the Philosophers get their chance to think and eat without causing a deadlock:

Philosopher 1 88519839556188: Thinking
Philosopher 2 88519840186495: Thinking
Philosopher 3 88519840647695: Thinking
Philosopher 4 88519840870182: Thinking
Philosopher 5 88519840956443: Thinking
Philosopher 3 88519864404195: Picked up left fork
Philosopher 5 88519871990082: Picked up left fork
Philosopher 4 88519874059504: Picked up left fork
Philosopher 5 88519876989405: Picked up right fork - eating
Philosopher 2 88519935045524: Picked up left fork
Philosopher 5 88519951109805: Put down right fork
Philosopher 4 88519997119634: Picked up right fork - eating
Philosopher 5 88519997113229: Put down left fork. Back to thinking
Philosopher 5 88520011135846: Thinking
Philosopher 1 88520011129013: Picked up left fork
Philosopher 4 88520028194269: Put down right fork
Philosopher 4 88520057160194: Put down left fork. Back to thinking
Philosopher 3 88520067162257: Picked up right fork - eating
Philosopher 4 88520067158414: Thinking
Philosopher 3 88520160247801: Put down right fork
Philosopher 4 88520249049308: Picked up left fork
Philosopher 3 88520249119769: Put down left fork. Back to thinking

It can be verified, by running the code several times, that the system is free from the deadlock situation that occurred before.

7. Conclusion

In this article, we explored the famous Dining Philosophers problem and the concepts of circular wait and deadlock. We coded a simple solution that caused a deadlock and made a simple change to break the circular wait and avoid a deadlock. This is just a start, and more sophisticated solutions do exist. The code for this article can be found over on GitHub.
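The fix above is a special case of a more general rule: if every thread acquires its locks in one fixed global order, a circular wait can never form. Here is a minimal sketch of that rule in Python (my illustration, not code from the article):

```python
import threading

def philosopher(forks, left, right, meals, idx, rounds=50):
    # Acquire both forks in global (index) order: this breaks the circular wait.
    first, second = sorted((left, right))
    for _ in range(rounds):
        with forks[first]:
            with forks[second]:
                meals[idx] += 1  # "eating" while holding both forks

forks = [threading.Lock() for _ in range(5)]
meals = [0] * 5
threads = [threading.Thread(target=philosopher,
                            args=(forks, i, (i + 1) % 5, meals, i))
           for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # [50, 50, 50, 50, 50]
```

Sorting the pair means the philosopher whose right fork has the lower index (the last one at the table) takes it first, which is exactly the reversal the Java version hard-codes.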
When you record a test, Selenium remembers which html tag you invoked. For instance, it can just invoke a specific id on a page.
However, when using a portal system like say JSF under Liferay, the id values are generated on the fly, so you’d record one test, then never be able to run it successfully again.
One really nice feature of Selenium is that you can invoke an HTML XPath, so in the Liferay example your code would still find the tag it needed to click. Let's say I record myself logging into the page below…
Now, because this page is generated with Liferay, I can see the input text id for the form is…
<input aria-
As JSF under Liferay will create a new id quite regularly for this textbox (each time the server is restarted I believe, although it may be even more frequent), this means we can’t just get the id and hook into that, as the tests will only ever run once.
What we can do, however, is hook into Liferay by using the html tag directly, as this won't be different each time Liferay loads the JSF. I noticed I had to use this same technique for every page in Liferay, as nearly all the html rendered through JSF had a different id each time the page was accessed.
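The underlying idea, locating an element by stable structure and attributes instead of a generated id, can be demonstrated with the standard library alone. The sketch below uses xml.etree rather than Selenium, and the markup is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Invented fragment: the id is machine-generated and changes across restarts,
# while the tag name and type attribute stay stable.
page = """
<form>
  <input id="_58_login_8234" type="text" name="login"/>
  <input id="_58_pwd_8234" type="password" name="password"/>
</form>
"""

root = ET.fromstring(page)

# Brittle: tied to an id that will be different tomorrow.
by_id = root.find(".//input[@id='_58_login_8234']")

# Robust: an XPath-style query keyed on an attribute that does not change.
by_type = root.find(".//input[@type='text']")

print(by_type.get("name"))  # login
```

In Selenium itself the same move is an XPath locator such as //input[@type='text'] in place of an id-based one.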
We can then export this to a JUnit class from the file menu (File | Export Test Case As… | Java / JUnit 4 / WebDriver), which would give us the following class to run and test.
import static org.junit.Assert.fail;

public class TestExample {
}
An Overview of Basic Calculation for Calls
For all long positions, the basic calculations are very straightforward. To review, there are various terms used to describe an outcome, including "return on investment," "yield," and numerous other wordings. The expression net return is useful because it is simple, but it qualifies the return. By "net," this expression means actual dollar values realized and expressed as a percentage.
The long position calculation is simple compared to calculations for short positions. In the short position, you sell first and realize a profit when one of three events occurs: (1) the position is closed when you enter a buy; (2) the option is exercised; or (3) the option expires worthless.
Example: The Safety Net. You purchase a call for 0.75 ($75) and also pay a brokerage fee of $12.50. Your basis in the long position is $87.50. Two months later, you sell for 1.5 ($150). Your brokerage firm deducts another $12.50 from the proceeds and credits your account with $137.50. To calculate net return, first calculate the net profit:
$137.50 - $87.50 = $50.00

Next, divide the net profit by the net sale proceeds:
$50.00 ÷ $137.50 = 36.4%

If you also want to annualize this return (to compare it to other long positions), you divide the percentage by the holding period (2 months) and multiply by 12 months:
36.4% ÷ 2 × 12 = 218.4%

As with all other instances of annualizing returns, this should not be used to set a standard for outcomes in future long positions. It is useful only for comparisons between similar risk levels of trades.
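The two calculations above are easy to script. This sketch is mine, not the author's; it reproduces the example's figures, including rounding the percentage before annualizing:

```python
def net_return(open_cost, close_proceeds):
    """Net return as the text computes it: net profit divided by closing proceeds."""
    profit = close_proceeds - open_cost
    return round(profit / close_proceeds * 100, 1)

def annualized(net_pct, holding_months):
    return round(net_pct / holding_months * 12, 1)

basis = 75.00 + 12.50         # premium paid plus brokerage fee
proceeds = 150.00 - 12.50     # sale price minus brokerage fee
pct = net_return(basis, proceeds)
print(pct)                    # 36.4
print(annualized(pct, 2))     # 218.4
```

Note the convention: dividing by the closing proceeds rather than the opening basis matches the arithmetic shown here, though other texts divide by the amount originally invested.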
Short-position calculations for calls are complicated by three factors:
Smart Investor Tip: Calculating net return for long positions is simple because the levels of risk, capital requirements, and outcomes are well understood. The same argument is not true for short position net returns.
- Different levels of capital have to be kept in your brokerage account for uncovered calls or puts.
- When you write covered calls, you have to consider your basis in the stock as part of your capital at risk.
- If a covered call is exercised, you have to consider gain on the option and gain or loss on the stock, as well as deciding whether to include dividends in your net return.
If you own stock and sell covered calls, you can perform one type of net return calculation separate and apart from the value or profit on stock. Assuming your purpose in selling the calls is to increase current income and not to force exercise, you may consider only call premium and calculate the return in one of four ways:
- You close the position and calculate option-based net return. When you close a covered call, either to take a profit or to avoid or defer exercise, you can calculate net return in the same way you do for trading long options. The difference between sell and buy is divided by the net buy price, and the resulting percentage is your preannualized net profit or loss.
Example: Upside-Down Return. You sold a covered call four months ago and received a premium of 5 ($500). Net proceeds came to $487.50. Last week, you entered a buy to close order at 2 ($200). Net cost was $212.50. Your net return considering only the option transaction was $275 ($487.50 - $212.50). That was 129.4 percent based on the closing buy price. This is a somewhat unrealistic form of return, because the transaction occurs in reverse. You cannot, however, calculate the return based on the initial sales price of the option. This format may be useful for comparative purposes, but it does not give you a full view of how net return worked in this example.
- You close the position and calculate net return based on the entire position. A different point of view for covered call returns will be based on outcome for the whole position, including option premium, capital gain or loss on the stock, and dividend income during the holding period. The calculation includes everything, and there is a justification for performing the calculation in this manner: Your selection of one striking price over another affects the outcome in case of exercise. Consider the difference between making a two-point capital gain or accepting a three-point capital loss. This comparison is valid if you choose between two striking prices when current value of the stock resides in between.
Example: A Striking Proposal. You may base potential profit or loss on the striking price of the option, regardless of your actual basis in the stock. You own 100 shares of stock you originally purchased at $32 per share. Today, the stock's value is at $42.50. You want to write a covered call, and you have reviewed both 40 and 45 striking prices. The 40 call provides a higher premium, but the 45 is also attractive and out of the money. So you calculate the total net return including dividends you will earn between now and expiration date, capital gain or loss (based on current value rather than original price), and the option premium.

Both situations are somewhat distorted because option profit or loss is combined with the stock capital gain. However, net returns aside, it is clear that the comparison has to be made in order to judge the viability of one striking price over the other.
Example: Your Basic Basis. Given the same facts as in the previous example, you may consider striking prices of 40 and 45, given the current value of stock at $42.50. However, in making the comparison, you use your original cost per share of $32. This enables you to judge the relative value of one option over the other in deciding whether to write the covered call.
- The covered call is exercised and you calculate option and stock profits separately. The solution to the dilemma of mixing option, stock, and dividend sources of net return is to perform analysis separately. So you use the stock basis to consider whether or not you actually want to create one level of profit or another. But when it comes to judging the results, you separate the stock and option profits.
Example: Separate but Equal. You are considering writing a covered call on stock you originally bought at $28 per share. Today, you can write calls with striking prices of 25 or 30, and both are attractively priced. However, in a separate analysis of each, you abandon the 25 striking price because, if exercised, it would create a capital loss in the stock of three points. The 25 call is available for 4.50 today. The combined income from stock and option would only be $150, whereas exercise of the 30 call would include two points of capital gain in the stock plus two points in the option. You calculate the potential profit or loss separately, but you use the comparison to eliminate the in-the-money call.
- Any covered call outcome is computed strictly on the basis of capital on deposit. Yet another method for calculating option profits is based on the use of margin rather than actual basis in either stock or option. This return on capital employed (ROCE) is a cash-for-cash calculation when applied to options trading. Those investing solely in stocks often use this leveraged approach to analysis. For example, if you buy stock at $50 per share, you are required to have only $2,500 in your account, and the rest is loaned to you by your brokerage firm. The calculation of net return has to include the interest charged during your holding period; but the return is potentially higher because you have less cash committed to the position. If you net a five-point gain, that is 10 percent based on the stock's growth in value (from $50 to $55 per share). But if you have only $2,500 at risk, a five-point gain is 20 percent net return (assuming the return is net of interest expense).
The same approach can be used when you buy or sell options or when you write covered calls. You might have only half the stock's value on deposit with the balance on margin; you may also be required to leave only a portion of an option's value in your account in order to open an option position. The calculation of net return in this case is not going to be based on the movement of a number of points, but rather on the change in your actual cash position. It requires that you add together the stock capital gains, option profits, and dividend income, and deduct any losses as well as transaction and interest charged by your brokerage firm. The net income is not based on the prices of stock and option but on the amount of cash you had on deposit.
The leveraged approach will produce much higher percentage gains, but it also involves greater risk. When you suffer net losses, you will be required to make up the difference in cash. For example, if you have $2,500 at risk on a $50 stock and it falls five points (10 percent), you will lose $500, or 20 percent of your cash on deposit. The same doubling effect applies to options as well. For example, if you deposit $200 to buy an option priced at 4 ($400) and it expires worthless, you not only lose your $200 on deposit; you also have to pay your brokerage firm another $200 plus transaction fees and interest. In that example, your net loss will exceed 100 percent.
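Setting aside interest and transaction fees, the doubling effect described above is plain arithmetic. A quick sketch using the $50-stock figures from the text:

```python
def pct(change, cash_at_risk):
    return round(change / cash_at_risk * 100, 1)

shares, price = 100, 50
full_cost = shares * price        # $5,000 fully paid
margin_deposit = full_cost / 2    # $2,500 with 50% on margin

move = 5 * shares                 # a five-point move, in either direction

print(pct(move, full_cost))       # 10.0 -> percent of a fully paid position
print(pct(move, margin_deposit))  # 20.0 -> same move against half the cash
```

The same division explains the option case: a $400 loss against a $200 deposit is a loss of more than 100 percent of the cash at risk.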
Smart Investor Tip: When you base net return calculations on cash actually at risk, you have two variables. First is the higher risk of trading on margin, and second is the greater potential gained from leverage. These are two aspects of the same advantage/problem.
By Michael C. Thomsett | http://www.investorguide.com/article/12624/an-overview-of-basic-calculation-for-calls-wo/ | CC-MAIN-2017-09 | refinedweb | 1,660 | 60.85 |
Another comment from Fumiaki Yoshimatsu, noting that I seemed to have ended up in more or less the same place, asked what I think of the XSD guidelines that Kohsuke Kawaguchi posted several years ago. I remember reading and dismissing his guidance at the time because of his assertion that you shouldn't seek to understand XSD. In my experience, if you are going to argue against the conventional wisdom with any hope of success, you need to ground your argument in a deep understanding of the topic, not a claim that it would take too long to understand.
Take, for example, the argument he makes in favor of avoiding local element decls (LED) because they aren't namespace qualified, making it harder to write instances. The lack of LED namespace qualification is just a default and you can change it by marking your schema elementFormDefault="qualified". In short, his argument is easily overcome if you understand schema. There are arguments to be made against the use of local element decls: they can't be used to start a validation episode and you can define two local elements with the same qualified name but totally different semantics.
All that said, yes, I reached more or less the same conclusions, and I now agree with many of Kawaguchi's conclusions, if not his arguments. In retrospect, part of me wishes I (and many others) had just taken his (and many others') arguments on faith without deep understanding, but that's not in my nature. Maybe, if enough people had, the industry wouldn't have so blindly embraced XSD. But I was caught up in the value of nominal complex types and object mapping; it took me a long time to see that there is no future there and to realize that the best I could hope for was to salvage something workable from XSD. On the upside, I've now spent enough time with XSD to balance what it offers against what it costs, and I have been able to get to a place that makes me reasonably happy.
Is there a better way to randomly shuffle two related lists without breaking their correspondence? I've found related questions about numpy.array and C#, but not exactly what I need. My current approach uses zip:
import random
a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
b = [2, 4, 6, 8, 10]
c = zip(a, b)
random.shuffle(c)
a = [e[0] for e in c]
b = [e[1] for e in c]
print a
print b
[[1, 2], [7, 8], [3, 4], [5, 6], [9, 10]]
[2, 8, 4, 6, 10]
Given the relationship demonstrated in the question, I'm going to assume the lists are the same length and that list1[i] corresponds to list2[i] for any index i. With that assumption in place, shuffling the lists is as simple as shuffling the indices:
from random import shuffle

# Given list1 and list2
list1_shuf = []
list2_shuf = []
index_shuf = range(len(list1))
shuffle(index_shuf)
for i in index_shuf:
    list1_shuf.append(list1[i])
    list2_shuf.append(list2[i])
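The same index trick works in Python 3, where range() is lazy and must be materialized before shuffle() can act on it. The helper below is my sketch, not part of the original answer:

```python
import random

def shuffle_together(list1, list2, seed=None):
    """Apply one random permutation to two same-length lists, keeping pairs aligned."""
    rng = random.Random(seed)
    indices = list(range(len(list1)))  # Python 3: materialize the lazy range
    rng.shuffle(indices)
    return [list1[i] for i in indices], [list2[i] for i in indices]

a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
b = [2, 4, 6, 8, 10]
a_shuf, b_shuf = shuffle_together(a, b, seed=7)
print(a_shuf)
print(b_shuf)
```

Because one permutation drives both lists, the invariant b_shuf[i] == a_shuf[i][1] from the question's data still holds after shuffling.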
Dear type
them over again but I don't know how to do it. I'd appreciate it very much if you
could help me with this.
Warmest regards, Kim
Dear Kim,
What you need to do, is first export the contacts from your old Outlook and then
import it to your new Outlook.
First, to export:
In the Outlook window, click on "File" and then "Import & Export".
In the import & export wizard window, click "Export to a File" and then click on "Next".
For the type of file you want to export to, choose "Comma Separated Values" and click "Next".
You will see a list of your Outlook folders; choose your contacts folder and click "Next".
The next window prompts you to enter a file name and location to save the file to.
Click the "Browse" button to select a folder. "My Documents" is a great place to
save it. Now type a name for your file. I always name it addbk. Click "Next", and
then "Finish".
Once the export has run, go and check your "My Documents" folder to see if the file
is there. It should be named "addbk.csv" and will probably be small enough to fit
on a floppy.
Transfer the file to your new computer, and follow these directions to import the
addresses.
In the import & export wizard window, click "Import from another program or file"
and then click on "Next".
For the file type, choose "comma separated values" and click "Next".
Click the Browse button and find the file you exported.
Now, you need to select the folder you want to import to, this would be your contacts
folder.
Click "Next" and then "Finish". Wait while the computer runs the import, and you
will have your contacts in your new computer.
Elizabeth | http://www.asktcl.com/page79.html | crawl-001 | refinedweb | 295 | 82.04 |
You Shouldn't Follow Rules… Blindly
Some resources on the Internet are written in a very imperative style – you must do that in this way. And beware those that don't follow the rule! They remind me of a French military joke (or, more precisely, a joke about the military), but I guess other countries probably have their own version regarding military rules. They are quite simple and can be summarized in two articles:
Art. 1: It’s mandatory to obey the orders of a superior.
Art. 2: When the superior is obviously wrong, refer to article 1.
What applies to the military domain, however, doesn't apply to the software domain. I've been arguing for a long time that good practices have to be put in context, so that a specific practice may be right in one context but plain wrong in another. The reason is that in the latter case, the disadvantages outweigh the advantages. Of course, some practices have a wider scope than others. I mistakenly thought that some even have an all-encompassing scope, meaning they give so many benefits that they apply in all contexts. I have been proven wrong this week regarding two such practices I use:
- Use JavaConfig over XML for Spring configuration
- Use constructor injection over attribute injection for Dependency Injection
The use-case is the development of Spring CGLIB-based aspects (the codebase is legacy and interfaces may or may not exist) to collect memory metrics. I must admit this context is very specific, but that doesn’t change that it’s still a context.
First things first: Spring aspects are not yet completely compatible with JavaConfig, and in any case, the Spring version is also legacy (3.x), so JavaConfig is out of the question. But at least annotations? In this case, two annotations may come into play:
@Aspect for the class and
@Around for the method that has to be used. The first is used in a very straightforward way, while the second needs to be passed the pointcut… as a String argument.
@Aspect
public class MetricsCollectorAspect {

    @Around("execution(...)") // This spans many many lines
    public Object collectMetrics(ProceedingJoinPoint joinPoint) {
        ...
    }
}
The corresponding XML is the following:
<bean id="metricsCollectorAspect" class="ch.frankel.blog.MetricsCollectorAspect" />

<aop:config>
    <aop:aspect ref="metricsCollectorAspect">
        <aop:pointcut id="metricsPointcut" expression="execution(...)" />
        <aop:around method="collectMetrics" pointcut-ref="metricsPointcut" />
    </aop:aspect>
</aop:config>
Benefits of using annotations over XML? None. Besides, the platform's product we use does not embed Spring configuration fragments, so it's quite easy to update the configuration and check the results in a deployed environment – pointcut included. XML: 1, annotations: 0.
Another fun fact: I've been an ardent defender of constructor injection. It has some advantages, including highlighting dependencies, less boilerplate code, and immutability. However, the 3.x version of Spring uses a version of CGLIB that cannot create proxies when there's no no-args constructor on the proxied class. The paradox is that "good" design prevents proxying, while "bad" design – attribute injection with a no-args constructor – allows it. Sure, there are a couple of solutions: add interfaces to allow pure Spring proxies, add a no-args constructor, or filter out the unproxyable classes, but none of them is without impact.
Moral: rules are meant to help you, not hinder you. If you cannot follow them for a good reason (like a prohibitive cost), just ignore them. Just write down in comments the reason why you didn't, for your code's future maintainers.
Postgres: are set_config() / current_setting() a private and reliable store for application variables?
In my application, I have triggers that need access to things like user ID. I save this information with
set_config('PRIVATE.user_id', '221', false)
then when I do operations that alter the database, triggers can do:
user_id = current_setting('PRIVATE.user_id');
It works fine. My database activity is mostly from Python via psycopg2: as soon as I get a connection, I call set_config() as my first operation and then go about my business on the database. Is this good practice, or could data leak from one session to another? I used to do things like this with the SD and GD dictionaries in PL/Python, but that language was too heavyweight for what I was trying to do, so I switched to PL/pgSQL.
While that's not exactly what they are for, you can use GUCs as session variables. They can also be associated with transactions, using SET LOCAL or the equivalent set_config variant.

As long as you do not allow the user to run arbitrary SQL, they are a reasonable choice, and session-local GUCs are not shared with other sessions. They are not intended for secure local storage, but they are convenient places to store things like the current application user if you are not using SET ROLE or SET SESSION AUTHORIZATION for that.
Remember that the user can define them via environment variables if you allow them to run a libpq-based client, e.g.:

    $ PGOPTIONS="-c myapp.user_id=fred" psql -c "SHOW myapp.user_id;"
     myapp.user_id
    ---------------
     fred
    (1 row)
Also, on older versions of PostgreSQL, you had to declare the namespace in postgresql.conf before you could use it.
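To make the pattern concrete, here is a minimal sketch; the table, column, and trigger names are illustrative, not taken from the question:

```sql
-- Once per connection, right after connecting (e.g. the first
-- cursor.execute() the application issues via psycopg2).
-- The third argument (false) gives the setting session scope
-- rather than transaction scope:
SELECT set_config('private.user_id', '221', false);

-- A trigger function that stamps rows with the application user:
CREATE OR REPLACE FUNCTION stamp_user() RETURNS trigger AS $$
BEGIN
    NEW.modified_by := current_setting('private.user_id');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_user_trg
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE stamp_user();
```

Note that if the setting has not been initialized in the current session, current_setting() raises an error (PostgreSQL 9.6 and later offer a two-argument current_setting(name, missing_ok) form), so setting it immediately after connecting, as described above, is the safe approach.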
Someone who is pro-"using namespace std" said this:
Quote:
    Code is much more readable if the namespace is not specified with each call but simply specified at the beginning of the file. As you mentioned, there is a benefit to specifying the namespace with each use. However, the cost is higher than the benefit.

Does anyone have any thoughts as to how I might convince them that

Code:
    std::cout << x;

is better than

Code:
    using namespace std;
    cout << x;

??
I guess I shouldn't say that they are pro-"using namespace std". They're just in favour of making code as readable as possible. | http://cboard.cprogramming.com/cplusplus-programming/107356-thoughts-about-using-namespace-std-printable-thread.html | CC-MAIN-2015-18 | refinedweb | 118 | 69.31 |
Standard C++ Library Copyright 1998, Rogue Wave Software, Inc.
basic_ios, ios, wios - A base class that includes the common functions required by all streams.
#include <ios>

template<class charT, class traits = char_traits<charT> >
class basic_ios : public ios_base;
The class basic_ios is a base class that includes the common functions required by all streams. It maintains state information that reflects the integrity of the stream and stream buffer. It also maintains the link between the stream classes and the stream buffer classes via the rdbuf member functions. Classes derived from basic_ios specialize operations for input and output.
template<class charT, class traits = char_traits<charT> >
class basic_ios : public ios_base {
public:
    typedef basic_ios<charT, traits>        ios_type;
    typedef basic_streambuf<charT, traits>  streambuf_type;
    typedef basic_ostream<charT, traits>    ostream_type;
    typedef typename traits::char_type      char_type;
    typedef traits                          traits_type;
    typedef typename traits::int_type       int_type;
    typedef typename traits::off_type       off_type;
    typedef typename traits::pos_type       pos_type;

    operator void*() const;
    bool operator!() const;
    iostate rdstate() const;
    void clear(iostate state = goodbit);
    void setstate(iostate state);
    bool good() const;
    bool eof() const;
    bool fail() const;
    bool bad() const;
    void exceptions(iostate except);
    iostate exceptions() const;
    explicit basic_ios(basic_streambuf<charT, traits> *sb_arg);
    virtual ~basic_ios();
    ostream_type *tie() const;
    ostream_type *tie(ostream_type *tie_arg);
    streambuf_type *rdbuf() const;
    streambuf_type *rdbuf(streambuf_type *sb);
    ios_type& copyfmt(const ios_type& rhs);
    char_type fill() const;
    char_type fill(char_type ch);
    locale imbue(const locale& loc);
    char narrow(charT, char) const;
    charT widen(char) const;

protected:
    basic_ios();
    void init(basic_streambuf<charT, traits> *sb);
};
char_type
       The type char_type is a synonym of type traits::char_type.

ios
       The type ios is an instantiation of basic_ios on char:
       typedef basic_ios<char> ios;

int_type
       The type int_type is a synonym of type traits::int_type.

ios_type
       The type ios_type is a synonym for basic_ios<charT, traits>.

off_type
       The type off_type is a synonym of type traits::off_type.

ostream_type
       The type ostream_type is a synonym for basic_ostream<charT, traits>.

pos_type
       The type pos_type is a synonym of type traits::pos_type.

streambuf_type
       The type streambuf_type is a synonym for basic_streambuf<charT, traits>.

traits_type
       The type traits_type is a synonym for the template parameter traits.

wios
       The type wios is an instantiation of basic_ios on wchar_t:
       typedef basic_ios<wchar_t> wios;
explicit basic_ios(basic_streambuf<charT, traits>* sb);
       Constructs an object of class basic_ios, assigning initial values
       to its member objects by calling init(sb). If sb is a null
       pointer, the stream is put in error state by setting its badbit.

basic_ios();
       Constructs an object of class basic_ios, leaving its member
       objects uninitialized. The object must be initialized by calling
       the init member function before it is used.
virtual ~basic_ios(); Destroys an object of class basic_ios.
bool bad() const;
       Returns true if badbit is set in rdstate().

void clear(iostate state = goodbit);
       If (state & exceptions()) == 0, returns. Otherwise, the function
       throws an object of class ios_base::failure. After the call,
       rdstate() == state if rdbuf() != 0; otherwise
       rdstate() == state | ios_base::badbit.

basic_ios& copyfmt(const basic_ios& rhs);
       Assigns to the member objects of *this the corresponding member
       objects of rhs, except the following:
       - rdstate() and rdbuf() are left unchanged
       - ios_base::copyfmt is called
       - exceptions() is altered last by calling
         exceptions(rhs.exceptions()).

bool eof() const;
       Returns true if eofbit is set in rdstate().

iostate exceptions() const;
       Returns a mask that determines which elements set in rdstate()
       cause exceptions to be thrown.

void exceptions(iostate except);
       Sets the exception mask to except, then calls clear(rdstate()).

bool fail() const;
       Returns true if failbit or badbit is set in rdstate().

char_type fill() const;
       Returns the character used to pad (fill) an output conversion to
       the specified field width.

char_type fill(char_type fillch);
       Saves the fill character, then replaces it by fillch and returns
       the previously saved value.

bool good() const;
       Returns rdstate() == 0.

locale imbue(const locale& loc);
       Saves the value returned by getloc(), then assigns loc to a
       private variable. If rdbuf() != 0, calls rdbuf()->pubimbue(loc).
       Returns the previously saved value.

void init(basic_streambuf<charT,traits>* sb);
       Performs the following initialization:

       rdbuf()        sb
       tie()          0
       rdstate()      goodbit if sb is not null, otherwise badbit
       exceptions()   goodbit
       flags()        skipws | dec
       width()        0
       precision()    6
       fill()         the space character
       getloc()       a copy of the locale returned by locale::locale()

char narrow(charT c, char dfault) const;
       Uses the stream's locale to convert the wide character c to a
       tiny character, and then returns it. If no conversion exists, it
       returns the character dfault.

bool operator!() const;
       Returns fail() ? 1 : 0;

streambuf_type* rdbuf() const;
       Returns a pointer to the stream buffer associated with the
       stream.

streambuf_type* rdbuf(streambuf_type* sb);
       Associates a stream buffer with the stream. All input and output
       is directed to this stream buffer. If sb is a null pointer, the
       stream is put in error state by setting its badbit.

iostate rdstate() const;
       Returns the control state of the stream.

void setstate(iostate state);
       Calls clear(rdstate() | state).

ostream_type* tie() const;
       Returns an output sequence that is tied to (synchronized with)
       the sequence controlled by the stream buffer.

ostream_type* tie(ostream_type* tiestr);
       Saves the tie() value, then replaces it by tiestr and returns
       the value previously saved.

operator void*() const;
       Returns fail() ? 0 : 1;

charT widen(char c) const;
       Uses the stream's locale to convert the tiny character c to a
       wide character, then returns it.
ios_base(3C++), basic_istream(3C++), basic_ostream(3C++), basic_streambuf(3C++), char_traits(3C++) Working Paper for Draft Proposed International Standard for Information Systems--Programming Language C++, section 27.4.5.
ANSI X3J16/ISO WG21 Joint C++ Committee | http://docs.oracle.com/cd/E19205-01/820-4180/man3c++/ios.3.html | CC-MAIN-2014-52 | refinedweb | 916 | 54.32 |
Hi -
I have hacked together a prototype web version of my transformer
diagnostics app, which is 100% Python. For four days (April 1-4), I ran the
web app on a laptop with a video projector at a big electric power industry
trade show to get reactions from current and prospective customers. I am
happy to say that the response from practically everybody was "I want it as
soon as it is available."
Because newbie Webware app developers may be interested, I will mention
some of the things I have done or figured out so far. Apologies to the
Webware gurus if this seems elementary, but it may be amusing to see how
your beautiful tool is actually being used.
My setup: 400-MHz Pentium II, 128 MB RAM, Windows 98, Apache 1.3.14, BeOpen
Python 2.0, MSIE 4.0. Laptop is the same except 200 MHz.
The core application code is in 6 python packages, each one a subdirectory
of the application's main directory c:/program files/toa4a2, which is
called the "top code directory" below. Each module in each package imports
classes from other packages like this: "from db.spam import Spam". One of
those packages, called "web," contains the base class modules (e.g.
BaseSidebarPage.py) for the web interface.
Another directory, c:/program files/webtoa, contains the servlet class
modules which are directly involved in the web interface. I call that
directory the "servlet directory." The servlet classes are subclasses of
the base classes defined in the "web" package, and they do their imports
like this: "from web.BaseSidebarPage import BaseSidebarPage." One of the
servlets is index.py, containing a class "index" which is basically the
front page of the web application.
Webware is installed in C:/Python20/Webware. In
Webware/WebKit/Configs/Application.config under Contexts, I inserted the
name of my app and the path of the servlet directory. Having the code
directory separate from the servlet directory is important so that users
can only see the servlets.
I modified OneShot.cgi and WebKit.cgi, naming them as webtoa1.cgi and
webtoa.cgi, and put them into the Apache cgi-bin directory. In Windows, the
top line of each cgi script should say #!c:/python20/python.exe. The
modifications clean up sys.path and insert the code directory into sys.path
along with WebwareDir. I think I will probably also do
os.environ['CODEDIR'] = 'c:/program files/toa4a1' in the cgi scripts so
that some of the core modules will be able to find their INI files and
other resources.
In the application code directory I put a modified copy of WebKit's
Launch.py for launching the adapter. It does the same sys.path
modifications that the cgi scripts do, because AsyncThreadedHTTPServer does
not run a cgi script. On the other hand, the one-shot cgi script does not
use an adapter. So for now the path initialization code has to be in two
places.
My BaseSidebarPage class and BasePage class (no sidebar) are derived from
WebKit's SidebarPage and Page. The writeHeader() method of each inserts a
reference to a style sheet for the app, like this:
    def writeHeader(self):
        uri = self.request().uriWebKitRoot() + 'WebTOA'
        self.writeln('<head>')
        self.writeln('  <title>%s</title>' % self.title())
        self.writeln('  <link rel="stylesheet" type="text/css" href="%s">'
                     % (uri + 'webtoa.css',))
        self.writeln('</head>')
        self.writeln('<body>')
All the HTML-generating code in the derived page classes uses the styles.
This helps to reduce the tonnage of the page text and makes it easy to
change layout details.
The web app displays an assortment of tables of transformer diagnostic
information. On two of the pages, the transformer ID in each row of the
table is a link which feeds information from that row to another page which
displays detail information for that particular equipment. That turned out
to be surprisingly easy to do. The detail page uses
self.request().field('equipmentid') to find out what to display. Note that
self.request() does not work in __init__(), but should be used in awake()
after the parent class awake() method has been called. It also dawned on me
that the session object is a place where I can keep user-specific
information such as "which transformer is the user currently fiddling with".
Soon I will use UserKit to add logins.
Webware is really great stuff, and big thanks to Chuck and the other
developers for doing such a good job.
Jim Dukarm
DELTA-X RESEARCH
Victoria BC Canada | http://sourceforge.net/p/webware/mailman/webware-discuss/thread/3.0.1.32.20010406160739.00dedfb8@pop.islandnet.com/ | CC-MAIN-2015-48 | refinedweb | 754 | 67.96 |
NAME
::clig::Float - declare an option with parameters of type float
SYNOPSIS
package require clig
namespace import ::clig::*

setSpec db Float -opt varname usage [-c min max] {[-d default ...] | [-m]} [-r rmin rmax]
DESCRIPTION
The Float command declares -opt to have zero or more floating point parameters. A default supplied with -d may lie outside the range declared with -r; in this case, if option is not on the command line, the clig parser will set variable varname for its caller to the out-of-range value. (Maybe this is a feature?) Example use of Float:

       Float -p prob {list of probabilities adding up to one} \
             -r 0.0 1.0 -d 0.1 0.2 0.7 \
             -c 1 oo

Please note that the additional constraint requiring the values to sum to 1.0 cannot be checked by the parser. If the option may take more than one value, the generated variable has type float*; otherwise it has type float.
SEE ALSO
clig(1), clig_Commandline(7), clig_Description(7), clig_Double(7), clig_Flag(7), clig_Int(7), clig_Long(7), clig_Name(7), clig_Rest(7), clig_String(7), clig_Usage(7), clig_Version(7), clig_parseCmdline(7) | http://manpages.ubuntu.com/manpages/hardy/man7/clig_Float.7.html | CC-MAIN-2013-20 | refinedweb | 165 | 65.73 |
This excerpt is from Enterprise Development with Flex. If you want to use Adobe Flex to build production-quality Rich Internet Applications for the enterprise, this groundbreaking book shows you exactly what's required. You'll learn efficient techniques and best practices, and compare several frameworks and tools available for RIA development -- well beyond anything you'll find in Flex tutorials and product documentation. Through many practical examples, the authors impart their considerable experience to help you overcome challenges during your project's life cycle.
“Excuse me, where can I find For Sale signs?”
“Probably they are in the Hardware section.”
“Why there?”
“If we don’t know where to shelve an item, we put it in
Hardware.”
For a successful project, you need the right mix of team members,
tools, and techniques. This chapter covers a variety of topics that are
important for development managers and enterprise and application architects
who take care of the ecosystem in which Flex teams operate. The fact that
Flex exists in a variety of platforms and that BlazeDS and LCDS can be
deployed under any Java servlet container sounds great. But when you
consider that today’s enterprise development team often consists of people
located all around the globe, such flexibility can make your project
difficult to manage.
This chapter is not as technical as the others. It’s rather a grab bag
of little things that may seem unrelated, but when combined will make your
development process smoother and the results of your development cycle more
predictable.
Specifically, you’ll learn about:
Staffing enterprise Flex projects
Working with the version control repository
Stress testing
Creating build and deployment scripts
Continuous integration
Logging and tracing
Open source Flex component libraries
Integration with Spring and Hibernate
The chapter’s goal is to give you a taste of your options and help
make your Flex team more productive. Without further ado, let’s start
building a Flex team.
Any project has to be staffed first. Developers of a typical
enterprise RIA project can be easily separated into two groups: those who
work on the client tier and those who work on the server-side components.
You can further divide this latter group into those who develop the middle
tier with business logic and those who take care of the data. In all
cases, however, how does a project manager find the right
people?
The number of formally trained Flex programmers is increasing daily,
but the pool of Flex developers is still relatively small compared to the
multimillion legions of Java and .NET professionals.
The main concern of any project manager is whether enough people with Flex skills can be found to staff the project. But what does the title of "Flex developer" mean? In some projects, you need to develop a small number of
developer” mean? In some projects, you need to develop a small number of
Flex views, but they have very serious requirements for the communication
layer. In other projects, you need to develop lots of UI views (a.k.a.
screens) supported by standard LCDS or BlazeDS features. Any of these
projects, however, require the following Flex personnel:
UI developers
Component developers
Architects
For the sake of simplicity, this discussion assumes that the
project’s user interface design is done by a professional user
experience designer.
The better you understand these roles, the better you can staff your
project.
GUI developers create the view portion of an
RIA. This is the easiest skill to acquire if you already have some
programming language under your belt. The hard work of the Adobe
marketing force and technical evangelists did a good job in creating the
impression that working with Flex is easy: just drag and drop UI
components on the what-you-see-is-what-you-get (WYSIWYG) area in Flash Builder, align them
nicely, and write the functions to process button clicks or row
selections in the data grid—sort of a Visual Basic for the
Web.
The GUI development skillset is low-hanging fruit that many people
can master pretty quickly. Savvy project managers either outsource this
job to third-party vendors or send their own developers to a one-week
training class. There is rarely a staffing problem here.
GUI developers interact with user experience
designers who create wireframes of your application in
Photoshop, some third-party tool, or even in Flex itself. But even in
the Flex case, GUI developers should not start implementing screens
until approved by a Flex component developer or an architect.
In addition to having the skills of GUI developers, Flex
component developers are well versed in object-oriented and
event-driven programming.
They analyze each view created by a web designer to decide which
Flex components should be developed for this view and how these
components will interact with each other (see Figure 2.4, “An abstract UI design that includes eight custom
components”). Most likely they
will be applying a mediator pattern (described in Chapter 2, Selected Design Patterns) to the initial wireframe.
Experienced Flex component developers know that even though the
syntax of ActionScript 3 looks very similar to Java, it has provisions
for dynamic programming and often they can use this to avoid creating
well-defined Java Bean–ish objects.
Flex architects know everything the GUI and
component designers know, plus they can see the big picture. Flex
architects perform the following duties:
Decide which frameworks, component libraries, and utilities
should be used on the project
Decide on communication protocols to be used for communication
with the server tier
Enhance the application protocols if need be
Decide how to modularize the application
Arrange for the unit, functional, and stress tests
Make decisions on application security issues, such as how to
integrate with external authentication/authorization mechanisms
available in the organization
Act as a technical lead on the project, providing technical
guidance to GUI and component developers
Coordinate interaction between the Flex team and the
server-side developers
Promote the use of coding best practices and perform code
reviews
Conduct technical job interviews and give recommendations on
hiring GUI and component developers
These skills can’t be obtained in a week of training. Flex
architects are seasoned professionals with years of experience in
RIA development. The goal of any project manager is to find the best Flex architect
possible. The success of your project heavily depends on this
person.
Not every Flex developer can be profiled as a member of one of
these three groups. In smaller teams, one person may wear two hats:
component developer and architect.
RIAs require new skills to develop what was previously known as
boring-looking enterprise applications. In the past, development of the
user interface was done by software developers to the best of their
design abilities. A couple of buttons here, a grid there, a gray
background—done. The users were happy because they did not see anything
better. The application delivered the data. What else was there to wish
for? Enterprise business users were not spoiled and would work with
whatever was available; they needed to take care of their business. It
was what it was.
But is it still? Not anymore. We’ve seen excellent (from the UI
perspective) functional specs for financial applications made by
professional designers. Business users are slowly but surely becoming
first-class citizens!
The trend is clear: developer art does not cut it anymore. You
need to hire a professional user experience designer for your
next-generation web application.
The vendors of the tools for RIA development recognize this trend
and are trying to bring designers and developers closer to each other.
But the main RIA tool vendors, Adobe and Microsoft, face different
issues.
Adobe is a well-known name among creative people (Photoshop,
Illustrator, Flash); during the last two years, it has managed to
convince enterprise developers that it has something for them, too
(Flex, AIR). Adobe is trying to win developers’ hearts, but it does not
want to scare designers either. In addition to various designer-only
tools, Adobe’s Flash Catalyst tool allows designers create the Flex UI
of an application without knowing how to program.
Today, a designer creates artwork in Illustrator or Photoshop, and
then developers have to somehow mimic all the images, color gradients,
fonts, and styles in Flash Builder. But this process will become a lot
more transparent.
A web designer will import his Illustrator/Photoshop creations
into Flash Catalyst, then select areas to be turned into Flex components
and save the artwork as a new project: a file with extension .fxp. Adobe did a good job of maintaining
menus and property panes in Flash Catalyst, similar to what designers
are accustomed to in Illustrator and Photoshop. The learning curve for
designers is not steep at all.
Designers will definitely appreciate the ability to work with Flex
view states without the need to write even a line of code. Creating two
views for master/detail scenarios becomes a trivial operation.
Flash Catalyst is a handy tool not only for people trained in
creating artwork but also for those who need to create wireframe mockups
of their application using built-in controls including some dummy
data.
Working with Flash Catalyst requires UI designers to use Flash
Creative Studio version 4 or later for creation of original artworks.
This is needed, because Flash Catalyst internally uses the new .fxg format for storing just the graphic part
of the Flex controls.
Flash Catalyst will become a valuable addition to the toolbox of a
web designer working in the Flex RIA space.
Microsoft comes from quite the opposite side: it has legions of
faithful .NET developers, and released Silverlight, which includes great
tools for designers creating UI for RIA. Microsoft Expression Design and
Expression Blend IDEs take the artwork and automatically generate code
for .NET developers and help animate the UI to make it more rich and
engaging.
Adobe invests heavily in making the designer/developer workflow as
easy and smooth as possible. Adobe’s Catalyst generates Flex code based
on the artwork created with tools from Creative Studio 4 and later. Most
of the work on the application design is done using Adobe Photoshop,
Illustrator, or Fireworks, and application interactions you can create
in Flash Catalyst. During conversion, the selected piece of the artwork
becomes the skin of a Flex component. Figure 4.1, “Converting artwork into Flex components” shows how you can
convert an area in the artwork into a Flex TextInput component.
Figure 4.1. Converting artwork into Flex components
Flash Catalyst allows you to create animated transitions between
states and, using the timeline, adjust the length and timing of the
effects. It allows developers and designers to work on the same project.
Designers create the interface of the RIA, and developers add business
logic and program communication with the server.
In an effort to foster understanding between the developers and
designers, Adobe consults with professors from different colleges and
universities on their visual design and software engineering
disciplines. The intent is to help designers understand programming
better and help software developers get better at designing a user
experience. It’s a complex and not easily achievable goal, breeding
these new creatures called “designopers” and “devigners.”
If you are staffing an RIA project and need to make a decision
about the position of web designer, you’re better off hiring two
different talents: a creative person and a web developer. Make sure that
each party is aware of decisions made by the other. Invite designers to
decision-making meetings. If the project budget is tight, however, you
have no choice but to bring on board either a designoper or
devigner.
With the right staff on board, you’re ready to dig into your
project. Even though the Flex SDK includes a command-line compiler and a
debugger and you can write code in any plain-text editor of your choice,
this is not the most productive approach. You need an IDE—an integrated
development environment—and in the next section, you’ll get familiar
with IDE choices.
While configuring developers’ workstations, ensure that each of them
has at least 2 GB of RAM; otherwise, compilation by your IDE may take a
large portion of your working day. As to what that IDE is, the choice is
yours.
At the time of this writing, enterprise Flex developers can work
with one of the following IDEs:
Flash Builder 3 or 4 Beta (Adobe)
RAD 7.5 (IBM)
IntelliJ IDEA 9 (JetBrains)
Tofino 2 (Ensemble)
You can install Flash Builder either as a standalone IDE or as an
Eclipse plug-in. The latter is the preferred choice for those projects
that use Java as a server-side platform. Savvy Java developers install
Eclipse JEE version or MyEclipse from Genuitec; both come with useful
plug-ins that simplify development of the Java-based web applications.
Today, Flash Builder is the most popular IDE among Flex enterprise
developers. It comes in two versions: Standard and Professional. The
latter includes the data visualization package (charting support,
AdvancedDataGrid, and Online
Analytical Processing [OLAP] components). Besides offering a convenient
environment for developers, Flash Builder has room for improvement in
compilation speed and refactoring.
IBM’s RAD 7.5 is a commercial IDE built on the Eclipse platform.
RAD feels heavier when compared to Flash Builder. It can substantially
slow down your developers if they have desktops with less than 2 GB of
RAM.
For many years IntelliJ IDEA was one of the best Java IDEs.
IntelliJ IDEA supports Flex development and is more responsive and
convenient for Flex/Java developers than Flash Builder. The current
version of IDEA, however, does not allow the creation of Flex views in
design mode, which is clearly a drawback. It does not include the Flex
profiler, which is an important tool for performance tuning of your
applications. On the other hand, if you prefer Maven for building
projects, you will appreciate the fact that IDEA includes a Maven
module.
Tofino is a free plug-in for Microsoft Visual Studio that allows
development of a Flex frontend for .NET applications.
At the time of this writing, Flash Builder is the richest IDE
available for Flex developers. Flash Builder 4 is going to be released
in early 2010. Besides multiple changes in the code of the Flex SDK,
it’ll have a number of improvements in the tooling department: for
example, a wizard for generation of the Flex code for remote data
services, project templates, autogeneration of event handlers,
integration with Flash Catalyst, a FlexUnit code generator, a Network
Monitoring view, better refactoring support, and more.
In some enterprises, developers are forced to use a specific IDE and
application server for Flex development, such as RAD and WebSphere from
IBM. We believe that developers should be able to select the tools that
they are comfortable with. Some are more productive with the Flash
Builder/Tomcat duo; others prefer RAD/Resin. During development, no such
combinations should be prohibited, even if the production server for
your application is WebLogic.
Likewise, members of a Flex application group may be physically
located in different parts of the world. Third-party consultants may be
working in different operational environments, too. They may even
install the Flex framework on different disk drives
(C:, D:, etc.).
All this freedom can lead to issues in using version control
repositories, because Flash Builder stores the names of physical drives
and directories in the property files of the Flash Builder project. Say
Developer A has the Flex framework installed in a particular directory
on disk drive D:. He creates a project pointing at
Tomcat and checks it into a source code repository. Developer B checks
out the latest changes from the repository and runs into issues, because
either her Flex framework was installed on the disk drive
C: or her project was configured to use WebSphere.
In addition to this issue, developers will be reusing specific shared
libraries, and each of the Flex modules may depend on other shared
libraries as well as the server-side BlazeDS or LCDS
components.
To simplify the process of configuring the build path and compile
options of the Flex projects (developers may have different deployment
directories), use soft links rather than hardcoded
names of the drives and directories (this is the equivalent of what’s
known as symbolic links in the Unix/Linux
OS).
For implementing soft links in the Windows environment, use the
junction utility, which is available for download
at.
This utility is a small executable file that allows the mapping of a
soft link (a nickname) to an actual directory on disk.
For example, run the following in the command window:
junction c:\serverroot "c:\Tomcat 6.0\webapps\myflex"
It’ll create a soft link C:\serverroot that
can be treated as a directory on your filesystem. In the example,
c:\serverroot points at the application deployment
directory under the Apache Tomcat servlet container. Similarly, another
member of your team can map C:\serverroot to the
deployment directory of WebSphere or any other JEE server.
From now on, all references in the build path and compiler options
will start with C:\serverroot\ regardless of
what physical server, disk drive, and directory are being used. By
following these conventions, all Flash Builder projects will be stored
in the source control repositories with the same reference to
C:\serverroot.
Using soft links simplifies the development of the Ant build
scripts, too.
We recommend at least two soft links:
C:\serverroot and C:\flexsdk,
where the former is mapped to a document root of the servlet container
and the latter is mapped to the installation directory of the Flex SDK.
An example of creating a soft link C:\flexsdk is
shown here:
C:\>junction C:\flexsdk "C:\Program Files\Adobe\Flash Builder 3 Plug-in\sdks\3.0.0"
When Flex SDK 4.1 or even 5.0 becomes available, this should have
minimal effect on your build scripts and Flash Builder projects: just
rerun the junction utility to point C:\flexsdk to the
newly installed Flex framework.
By now, your team has selected the IDE, come to an agreement on
the use of soft links, and considered various recommendations regarding
Flex code, such as embedding into HTML, testing, build automation, and
logging.
Flash Builder generates an HTML wrapper for your application from the
template index.template.html located in the project’s html-template
folder. The wrapper embeds the compiled .swf from the bin-debug directory (or bin-release
for release builds) and includes history-management support based on a
hidden iFrame. An alternative to this generated wrapper is the open
source script SWFObject, which embeds the .swf via a standards-friendly <object>
tag.
A simple example contrasts the standard Flash Builder approach and
SWFObject. Say you have an application called HelloSWFObject.mxml;
Flash Builder compiles it into HelloSWFObject.swf and generates the
HTML wrapper HelloSWFObject.html.
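The listing of HelloSWFObject.mxml is not reproduced in this extract; a minimal stand-in (the label text is our own) might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- A minimal stand-in for HelloSWFObject.mxml; the label text
     is hypothetical -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Label text="Hello, SWFObject!"/>
</mx:Application>
```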
Now try the solution offered by SWFObject. First, download the file
swfobject_2_2.zip from. Unzip it into some folder and copy HelloSWFObject.swf there,
too.
To generate an HTML wrapper, download swfobject_generator_1_2_air.zip, a handy AIR
utility from SWFObject’s site. After unzipping, run the application
swfobject_generator (Figure 4.2, “SWFObject’s HTML generator”).
Figure 4.2. SWFObject’s HTML generator.
In the lower portion of the window, you’ll find HTML that looks
like Example 4.1, “HTML wrapper generated by SWFObject”.
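The listing of Example 4.1 is not included in this extract; the generator’s output is typically along these lines (the div id and the dimensions are assumptions):

```html
<!-- Hedged sketch of a SWFObject 2.2 wrapper; the div id and
     dimensions are hypothetical -->
<div id="myContent">
  <p>Alternative content for users without Flash Player</p>
</div>
<script type="text/javascript" src="swfobject.js"></script>
<script type="text/javascript">
  // Replace the div content with HelloSWFObject.swf if
  // Flash Player 9 or later is detected
  swfobject.embedSWF("HelloSWFObject.swf", "myContent",
                     "300", "120", "9.0.0");
</script>
```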
In large enterprises, usually you don’t start a new Enterprise Flex
project from scratch without worrying about existing web applications
written in JSP, ASP, AJAX, and the like.
More often, enterprise architects gradually introduce Flex into the
existing web fabric of their organizations. Often, they start with adding
a new Flex widget into an existing web page written in HTML and
JavaScript, and they need to establish interaction between JavaScript and
ActionScript code from the SWF widget.
Flex can communicate with JavaScript using an ActionScript class
called ExternalInterface. This class allows you to
map ActionScript and JavaScript functions and invoke these functions
either from ActionScript or from JavaScript. The use of the class
ExternalInterface requires coding in
both languages.
For example, to allow JavaScript’s function jsIsCalling() to invoke a function asToCall(), you write in
ActionScript:
ExternalInterface.addCallback("jsIsCalling", asToCall);
Then, you use the ID of the embedded .swf (e.g., mySwfId set in the HTML object) followed by a
JavaScript call like this:
if(navigator.appName.indexOf("Microsoft") != -1){
window["mySwfId"].asToCall();
} else {
document.getElementById("mySwfId").asToCall();
}
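Calls can also go in the opposite direction, from ActionScript to JavaScript, via ExternalInterface.call(); in this sketch the JavaScript function name jsToCall and its argument are assumptions:

```actionscript
// Invoking a JavaScript function defined on the enclosing HTML page.
// The function name jsToCall and its argument are hypothetical.
if (ExternalInterface.available) {
    var result:Object = ExternalInterface.call("jsToCall", "some text");
}
```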
For the applications that are written by teams of AJAX developers,
there is another option for JavaScript/ActionScript interaction. Flex
SDK comes with a small library called Flex AJAX Bridge
(FABridge).
Say you already have an AJAX application, but want to delegate
some input/output (I/O)
functionality to Flex or implement some components for the web page
(media players, charts, and the like) in Flex. FABridge allows your AJAX
developers to continue coding in JavaScript and call the API from within
Flex components without the need to learn Flex programming.
With FABridge, you can register an event listener in JavaScript
that will react to the events that are happening inside the .swf file. For instance, a user clicks the
button inside a Flex portlet or some Flex remote call returns the data.
Using FABridge may simplify getting notifications about such events (and
data) from Flex components into existing AJAX portlets.
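As a sketch of this approach (the bridge name flash and the button property okButton are assumptions; the accessor naming follows FABridge’s get/set convention):

```html
<script type="text/javascript" src="FABridge.js"></script>
<script type="text/javascript">
  // Wait until the bridge to the .swf is initialized; "flash" must
  // match the bridgeName declared inside the Flex application.
  FABridge.addInitializationCallback("flash", function() {
      var appRoot = FABridge.flash.root();  // the Flex Application object
      // Subscribe to clicks on a Flex button exposed as okButton
      appRoot.getOkButton().addEventListener("click", function(event) {
          alert("The Flex button was clicked");
      });
  });
</script>
```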
You can find a detailed description of how and when to use
FABridge versus ExternalInterface at.
A third mechanism of passing data to a .swf from the enclosing HTML page is to use
the flashVars variable.
Consider an assignment: write a Flex application that can run
against different servers—development, user acceptance
testing (UAT), and production—without the need to recompile the
.swf file. It does not take a
rocket scientist to figure out that the URL of the server should be
passed to the .swf file as a
parameter, and you can do this by using a special variable, flashVars, in an HTML wrapper.
While embedding a .swf in
HTML, Flash Builder includes flashVars parameters in the tags Object and Embed. ActionScript code can read them using
Application.application.parameters,
as shown in the next example.
The script portion of Example 4.2, “Reading flashVars values in Flex” gets the values of the
parameters serverURL and port (defined by us) using the Flex Application object. The goal is to add the
values of these parameters to the HTML file via flashVars. In a Flex application, these values
are bound to the Label as a part of
the text string.
Example 4.2. Reading flashVars values in Flex
<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                creationComplete="initApp()">
    <mx:Label text="Connecting to the server {serverURL} on port {port}"/>
<mx:Script>
<![CDATA[
[Bindable]
var serverURL:String;
[Bindable]
var port:String;
function initApp():void{
serverURL=Application.application.parameters.serverURL;
port=Application.application.parameters.port;
}
]]>
</mx:Script>
</mx:Application>
Open the generated HTML file, and you’ll find the JavaScript
function AC_FL_RunContent
that includes flashVars parameters in
the form of key/value pairs. For example, in my sample application it
looks like this:
"flashvars",'historyUrl=history.htm%3F&lconid=' + lc_id +''
If you used SWFObject to embed the SWF, use SWFObject’s own syntax
for passing flashVars to the SWF.
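A hedged sketch of SWFObject’s flashvars syntax (the file name, element id, and dimensions are assumptions):

```html
<script type="text/javascript">
  // Passing serverURL and port through SWFObject's flashvars argument
  var flashvars = { serverURL: "MyDevelopmentServer", port: "8181" };
  var params = {};
  var attributes = {};
  swfobject.embedSWF("BindingWithString.swf", "myContent",
                     "640", "480", "9.0.0", false,
                     flashvars, params, attributes);
</script>
```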
Add the parameters serverURL
and port to this string to make it
look as follows:
"flashvars",'serverURL=MyDevelopmentServer&port=8181&historyUrl=history.htm%3F&lconid=' + lc_id
Run the application, and it’ll display the URL of the server it
connects to, as shown in Figure 4.3, “Running the flashVars sample—BindingWithString.mxml”. If you’d like to deploy this
application on the UAT server, just change the values of the flashVars parameters in the HTML file.
Figure 4.3. Running the flashVars sample—BindingWithString.mxml
There’s one last little wrinkle to iron out: if you manually
change the content of the generated HTML file, the next time you clean
the project in Flash Builder, its content will be overwritten and you’ll
lose the added flashVars parameters.
There’s a simple solution: instead of adding flashVars parameters to the generated HTML,
add them to the file index.template.html from the html-template directory.
Of course, this little example does not connect to any server, but
it shows how to pass the server URL (or any other value) as a parameter
to Flash Player, and how to assemble the URL from a mix of text and
bindings.
The sooner you start testing your application, the shorter the
development cycle will be. It seems obvious, but many IT teams haven’t
adopted agile testing methodologies, which costs them dearly. ActionScript
supports dynamic types, which means that its compiler won’t be as helpful
in identifying errors as it is in Java. To put it simply, Flex
applications have to be tested more thoroughly.
To switch to an agile test-driven development, start with accepting
the notion of embedding testing into your development process rather than
scheduling testing after the development cycle is complete. The basic
types of testing are:
Unit
Integration
Functional
Load
The sections that follow examine the differences between these
testing strategies, as well as point out tools that will help you to
automate the process.
Unit testing is performed by a developer and
is targeted at small pieces of code to ensure, for example, that if you
call a function with particular arguments, it will return the expected
result.
Test-driven development principles suggest that you write test
code even before you write the application code. For example, if you are
about to start programming a class with some business logic, ask
yourself, “How can I ensure that this function works fine?” After you
know the answer, write a test ActionScript class that calls this
function to assert that the business logic gives
the expected result. Only after the test is written, start programming
the business logic. Say you are in a business of shipping goods. Create
a Shipment class that implements
business logic and a ShipmentTest
class to test this logic. You may write a test that will assert that the
shipping address is not null if the order quantity is greater than
zero.
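Such a test, written before the business logic exists, might look like this FlexUnit 4 sketch (the Shipment API names are our assumptions):

```actionscript
import org.flexunit.Assert;

// Hedged sketch of a test-first ShipmentTest; the Shipment properties
// orderQuantity and shippingAddress are hypothetical.
public class ShipmentTest {
    [Test]
    public function shippingAddressRequiredForPositiveQuantity():void {
        var shipment:Shipment = new Shipment();
        shipment.orderQuantity = 5;
        Assert.assertNotNull("A shipment with ordered items must have "
            + "a shipping address", shipment.shippingAddress);
    }
}
```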
In addition to business logic, Flex RIAs should be tested for
proper rendering of UI components, changing view states, dispatching,
and handling events. Integration testing is a
process in which a developer combines several unit tests to ensure that
they work properly with each other. Both unit and integration tests have
to be written by application developers.
Several tools can help you write unit and integration
tests.
FlexUnit4 is a unit testing framework for Flex and ActionScript
3.0 applications and libraries. With FlexUnit4 and Flash Builder, you
can generate individual unit tests and combine them into test suites.
Flash Builder 4 allows automatic creation of test cases (see New
→ TestCase Class in the menus). Just
enter the name of the class to test, and Flash Builder will generate a
test application and a test case class in a separate
package.
For each method of your class, say calculateMonthlyPayment(), Flash Builder
will generate a test method, for example testCalculateMonthlyPayment(). You just need
to implement it:
public function testCalculateMonthlyPayment():void {
    // A $200K mortgage at 7% for 30 years should have
    // a monthly payment of $1199.10
    Assert.assertEquals(
        MortgageCalculator.calculateMonthlyPayment(200000, 7, 30), 1199.1);
}
After the test case class is ready, ask Flash Builder to
generate the test suite for you (see New → Test Suite Class). To execute your test
suite, right-click on the project in Flash Builder and select Execute
FlexUnit Tests.
Unit testing of visual components is not as straightforward as
unit testing of business logic in ActionScript classes. The Flex
framework makes lots of internal function calls to properly display
your component on the Flash Player’s stage. And if you need to get a
hold of a particular UI component to ensure that it’s properly
created, laid out, and populated, use the Application.application object in your
tests.
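For instance (the component id custGrid is an assumption), a hedged FlexUnit sketch:

```actionscript
// Reaching a UI component from a test through Application.application;
// the component id custGrid is hypothetical.
[Test]
public function customerGridIsCreatedAndPopulated():void {
    var app:Object = Application.application;
    Assert.assertNotNull("The customer grid must be created", app.custGrid);
    Assert.assertTrue("The grid must have rows",
        app.custGrid.dataProvider.length > 0);
}
```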
A free tool from Gorilla Logic, FlexMonkey is a unit testing
framework for Flex applications that also automates testing of Flex UI
functionality. FlexMonkey can record and play back UI interactions.
For example, Figure 4.4, “Recording command list in FlexMonkey”
illustrates the command list that results from the user entering the
name of the manager and selecting a date.
Figure 4.4. Recording command list in FlexMonkey
FlexMonkey not only creates a command list, but also generates
ActionScript testing scripts for FlexUnit (Figure 4.5, “Test-generated matching command list”) that you can easily
include within a continuous integration process.
Figure 4.5. Test-generated matching command list
Technically, if FlexMonkey generated its test scripts in a
programming language simpler than ActionScript, you could
consider it both a unit and functional testing framework. In the small
IT shops where developers have to perform all kinds of testing, you
may use FlexMonkey in this double mode. Even in larger organizations
it may be beneficial if a developer runs these prefunctional tests to
minimize the number of errors reported by the QA team. For more
information on FlexMonkey, see.
An open source framework for testing the visual appearance of
components, Visual Flex Unit also introduces visual
assertions, which assert that a component’s appearance is identical
to a stored baseline image file. Developers can instantiate and
initialize UI components, define view states and styles, and test that
these components look the same as presaved images of those components. For
output, you’ll get a report on how many pixels differ. You can run
tests in Ant mode and send notifications about the test results. At
the time of this writing, Visual Flex Unit is still in alpha version,
but you can find more information at
Functional testing (a.k.a. black-box, QA, or
acceptance testing) is aimed at finding out whether the application
properly implements business logic. For example, if the user clicks on a
row in the customer data grid, the program should display a form view
with specific details about the selected customer. In functional testing
business users should define what has to be tested, unlike unit or
integration testing where tests are created by software
developers.
Functional tests can be performed manually, in which a real person
clicks through each and every view of the RIA, confirming that it
operates properly or reporting discrepancies with the functional
specifications. A better approach, however, is to engage specialized
software that allows you to prerecord the sequence of clicks (similar to
what FlexMonkey does) and replay these scripts whenever the application
has been modified to verify that the functionality has not been broken
by the last code changes.
Writing scripts for testing may sound like an annoying process,
but this up-front investment can save you a lot of grief and long
overtime hours during the project life cycle. Larger organizations have
dedicated Quality Assurance teams who write these tests. In smaller IT
shops, Flex developers write these tests, but this is a less efficient
approach, as developers may not have the correct vision of the entire
business workflow of the application and their tests won’t cover the
whole functionality of the system.
Automated test scripts should be integrated with the build process
of your application and run continuously. There are several commercial
(and expensive) offerings for automation of functional testing:
During the recording phase, QTP creates a script in the
VBScript language in which each line represents an action of the
user. The checkpoints included in the script are used for
comparison of the current value with expected values of the
specified properties of application objects. Flex 3 Professional
includes the libraries (.swc)
required for automated testing with QTP, and your Flex application
has to be compiled with these libraries. In addition, the QA
testers need to have a commercial license for the QTP itself. The
process of installing QTP for testing Flex applications is
described at.
Rational Functional Tester supports functional and
regression testing of Flex applications. You can see the demo and
download a trial version of this product at.
Flex Vulnerability Tests
IBM’s Rational AppScan helps test your web application against
the threat of SQL injection attacks and data breaches. Starting from
version 7.8, AppScan supports a wide array of Flash Player–based
applications, including Adobe Flex and Adobe AIR. For more
information, visit.
RIATest (Figure 4.6, “RIATest: Visual creation of verification code”) is a
commercial testing tool for QA teams working with Flex
applications. It includes an Action Recorder, a scripting language
(RIAScript, similar to ActionScript), a
script debugger, and synchronization capabilities.
Because of the event-driven nature of Flex, UI testing tools
need to be smart enough to understand that some events take time
to execute and your tests can run only after a certain period of
time. RIATest allows you to not only rely on this tool to make
such synchronization decisions, but also to specify various wait
conditions manually. For example, if a click on the button
requires an asynchronous remote call to populate a data grid,
RIATest offers you the script command waitfor, which won’t perform the data
verification until the data grid is populated. The Action Recorder
creates human-readable scripts. To download a demo, go to.
Figure 4.6. RIATest: Visual creation of verification code
While rearchitecting an old-fashioned HTML-based application with
RIA, you should not forget that besides looking good, the new
application should be at least as scalable as the one you are replacing.
Ideally, it should be more scalable than the old one if faster data
communication protocols such as AMF and Real Time Messaging Protocol
(RTMP) are being used. How many concurrent users can work with your
application without bringing your server to its knees? Even if the
server is capable of serving a thousand users, will performance suffer?
If yes, how bad is it going to be?
It all comes down to two factors: availability and response time.
These requirements for your application should be well defined in the
service level agreement (SLA),
which should clearly state what’s acceptable from the user’s
perspective. For example, the SLA can include a clause stating that the
initial download of your application shouldn’t take longer than 30
seconds for users with a slow connection (500 kbps). The SLA can state
that the query to display a list of customers shouldn’t run for more
than five seconds, and the application should be operational 99.9
percent of the time.
To avoid surprises after going live with your new mission-critical
RIA, don’t forget to include in your project plan a set of heavy stress
tests, and do this well in advance before it goes live. Luckily, you
don’t need to hire 1,000 interns to find out whether your application
will meet the SLA requirements. The automated load (a.k.a. stress or
performance testing software) allows you to emulate required number of
users, set up the throttling to emulate a slower connection, and
configure the ramp-up speed. For example, you can simulate a situation
where the number of users logged on to your system grows at the speed of
50 users every 10 seconds. Stress testing software also allows you to
prerecord the action of the business users, and then you can run these
scripts emulating a heavy load.
Good stress-testing software allows simulating the load close to
the real-world usage patterns. You should be able to create and run
mixed scripts simulating a situation in which some users are logging on
to your application while others are retrieving the data and performing
data modifications. Each of the following tools understands AMF protocol
and can be used for stress testing of Flex applications:
NeoLoad is a commercial stress-testing tool. It offers
analysis of web applications using performance monitors without
the need to do manual scripting. You start with recording and
configuring a test scenario, then you run the tests creating
multiple virtual users, and finally, you monitor the client
operating system load and web and application server components.
As you’ll learn in Chapter 6, Open Source Networking Solutions, we at Farata
Systems have been using a scalable stress-test solution based on
BlazeDS installed under a Jetty server. For more information on
NeoLoad, go to.
A commercial stress-testing software, WebLOAD 8.3 offers
similar functionality to NeoLoad. It includes analysis and
reporting, and a workflow wizard that helps with building scripts.
It also supports AJAX. WebLOAD also allows you to enter SLA
requirements right into the tests. To learn more, visit.
The commercial Borland test suite includes Borland
SilkPerformer, stress-testing software for optimizing performance
of business applications, and the functional testing tool Borland
SilkTest, among other tools.
SilkPerformer allows you to create thousands of users with
its visual scenario modeling tools. It supports Flex clients and
the AMF 3 protocol.
SilkTest automates the functional testing process, and
supports regression, cross-platform, and localization testing. For
more details, see.
An open source load-testing tool, Data Services Stress
Testing Framework helps developers with stress testing of
LiveCycle Data Services ES. This is a tool for putting load on the
server and is not meant for stress testing an individual Flex/LCDS
application running in the Flash Player. This framework is not
compatible with BlazeDS. To download it or learn more, visit.
For testing BlazeDS, consider using JMeter as described at the
JTeam blog.
Even if you are using testing tools, can you be sure that you have
tested each and every scenario that may arise in your
application?
Code coverage describes the degree to which
your code has been tested. It’s also known as white-box
testing, which is an attempt to analyze the code and test
each possible path your application may go through. In large projects
with hundreds of if statements, it’s
often difficult to cover each and every branch of execution, and
automated tools will help you with this.
An open source project, Flexcover is a code coverage tool for Flex
and AIR applications. This project provides code coverage
instrumentation, data collection, and reporting tools. It incorporates a
modified version of the ActionScript 3 compiler. For more information, go to.
The document “Flex SDK coding conventions and best practices”
lays out the coding standards for writing open source components in
ActionScript 3, but you can use
it as a guideline for writing code in your business application, too.
This document is available at the following URL:.
FlexPMD
is a tool that helps to improve code quality by auditing any AS3/Flex
source directories and detecting common bad practices, such as unused
code (functions, variables, constants, etc.), inefficient code (misuse
of dynamic filters, heavy constructors, etc.), overly long code (classes, methods, etc.), incorrect
use of the Flex component life cycle (commitProperties,
etc.), and more.
The code coverage tools will ensure that you’ve tested all
application code, and the coding conventions document will help you in
adhering to commonly accepted practices, but yet another question to be
answered is, “How should you split the code of a large application into
a smaller and more manageable modules?” This becomes the subject of the
brief discussion that comes next.
Even a relatively small Flex application has to be modularized. More
often than not, a Flex application consists of more than one Flash Builder
project. You’ll learn more about modularization in Chapter 7, Modules, Libraries, Applications, and
Portals; for now, a brief
overview will expose you to the main concepts that each Flex
developer/architect should keep in mind.
Your main Flash Builder project will be compiled into a main
.swf application, and the size of
this .swf should be kept as small as
possible. Include only must-have pieces of the application that have to be
delivered to the client’s computer on the initial application load. The
time of the initial application load is crucial and has to be kept as
short as possible.
Modularization of the Flex application is achieved by splitting up
the code into Flex libraries (.swc
files) and Flex modules (.swf files).
Initially, the application should load only the main .swf and a set of shared libraries that contain
objects required by other application modules. Flex modules are .swf files that have <mx:Module> as a root tag. They can be
loaded and unloaded at runtime using Flex’s ModuleLoader. If the ability to unload
the code during the runtime is important to your Flex application, use
modules. If this feature is not important, use Flex libraries, which are
loaded in the same application domain and allow direct referencing of the
loaded objects in the code with the strong type checking.
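Loading and unloading a module at runtime can be sketched like this (the module file name ReportModule.swf is an assumption):

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hedged sketch: loading and unloading a Flex module at runtime.
     The module file name ReportModule.swf is hypothetical. -->
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:ModuleLoader id="loader"/>
    <mx:Button label="Load module"
               click="loader.loadModule('ReportModule.swf')"/>
    <mx:Button label="Unload module" click="loader.unloadModule()"/>
</mx:Application>
```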
While application .swf files are created
by the mxmlc compiler, Flex libraries are compiled
into .swc files by the
compc compiler. Flex libraries can be linked to an
application in one of three ways:
Merged into code
Externally
Via Runtime Shared Libraries (RSLs)
The linkage type has to be selected based on the needs of the
specific application.
Chapter 8, Performance Improvement: Selected
Topics
describes pros and cons of each type of linkage, as well as a technique
that allows you to create so-called self-initialized libraries that can be reused
in Flex applications in a loosely coupled fashion.
self-initialized libraries
Application fonts and styles are good candidates for being compiled
into a separate .swf file that is
precompiled and is loaded during the application startup. This will
improve the compilation speed of the Flash Builder’s projects, because
compiling fonts and styles is a lengthy process.
Modularizing the application also simplifies work separation between
Flex developers, as each small team can work on a different module. Flex
3.2 has introduced so-called subapplications, which
are nothing but Flex application .swf
files that can be compiled in different versions of Flex. SWFLoader can load such a subapplication either in
its own or in a separate security sandbox.
A modularized Flex application consists of several Flash Builder
projects. Each of the individual projects contains the build.xml file that performs the build and
deployment of this project. Additionally, one extra file should be created
to run individual project builds in an appropriate order and to deploy the
entire application in some predefined directory, for example,
C:\serverroot, as described in the section called “Flex Developer’s Workstation”.
Such a main build file should account for dependencies that may
exist in your project. For example, the application that produces the main
.swf file can depend on some
libraries that are shared by all modules of your application. Hence the
main Ant build file needs to have multiple targets that control the order
of individual project builds.
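A main build file of this kind might look like the following sketch (the project directories and target names are assumptions; C:\flexsdk is the soft link suggested earlier in this chapter):

```xml
<?xml version="1.0"?>
<!-- Hedged sketch of a main Ant build that enforces build order.
     Directory and target names are hypothetical. -->
<project name="main-build" default="build-all">
    <property name="FLEX_HOME" value="C:\flexsdk"/>
    <!-- The Flex Ant tasks (mxmlc, compc) ship with the Flex SDK -->
    <taskdef resource="flexTasks.tasks"
             classpath="${FLEX_HOME}/ant/lib/flexTasks.jar"/>

    <!-- Shared libraries must be built before the main application -->
    <target name="build-libs">
        <ant dir="my.library.shared" antfile="build.xml"/>
    </target>

    <target name="build-main" depends="build-libs">
        <ant dir="my.main.app" antfile="build.xml"/>
    </target>

    <target name="build-all" depends="build-main"/>
</project>
```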
In some cases, for auditing purposes, if a build task depends on
other builds—i.e., .swc libraries—all
dependent builds should be rerun even if the compiled version of .swc already exists.
Apache Ant is a popular Java-based tool for automating the
software build process. You can run Ant builds of the project either
from Flash Builder or from a command line. To run the build script from
Flash Builder, right-click on the name of the build file, such as
build.xml, and choose the Ant Build
from the pop-up menu. The build will start and you’ll see Ant’s output
in the Flash Builder console. To build your application from a command
line you can use a standalone Ant
utility. To be able to run Ant from any directory, add the
bin directory of Ant’s install to the PATH environment variable on your
computer.
bin
PATH
Ant uses the tools.jar file
that comes with the Java SDK. Modify your environment variable
CLASSPATH to include the location
of tools.jar on your PC. For
example, if you did a standard install of Java 6 under MS Windows, add
the following to the CLASSPATH
variable: C:\Program
Files\Java\jdk1.6.0_02\lib\tools.jar.
tools.jar
CLASSPATH
To run the Ant build from a command line, open a command window,
change directory to the project you are planning to build, and enter
ant, as in:
ant
C:\myworkspace> cd my.module.met1
C:\myworkspace\my.module.met1> ant
cd my.module.met1
In addition to the developer’s workstation, all build scripts need
to be deployed under a dedicated server, and developers should run test
builds first on their local workstation and then under this
server.
Writing Ant build scripts manually is a time-consuming process. To
help you, we created Fx2Ant (it comes as a part of
Clear Toolkit; see). After
installing the Clear Toolkit Eclipse plug-in, just right-click on “Flash
Builder project” and select the menu Generate Build Files, and within a
couple of seconds you’ll get an Ant build script that reflects all
current settings of your Flash Builder project.
There is also an open source project called Antennae that provides
templates for building Flex projects with Ant. Antennae can also
generate scripts for FlexUnit. It’s available at.
Maven is a more advanced build tool than Ant. Maven supports
builds of modules and creation of applications that use the Flex
framework RSL. It works with FlexUnit and ASDoc. If your organization
uses Maven, get flex-mojos at. This is a collection of
Maven plug-ins to allow Maven to build and optimize Flex and AIR
.swf and .swc files.
You can find an example of configuring a
Flex/Maven/Hibernate/Spring/BlazeDS project at.
If you use the IntelliJ IDEA IDE, you’ll have even more
convenient integration of Flex and Maven projects.
Introduced by Martin Fowler and Matthew Foemmel, the theory of
continuous integration recommends
creating scripts and running automated builds of your application at
least once a day. This allows you to identify issues in the code a lot
sooner.
continuous integration
You can read more about the continuous integration practice at.
We are successfully using an open source framework called CruiseControl for
establishing a continuous build process. When you use CruiseControl, you
can create scripts that run either at a specified time interval or on
each check-in of the new code into the source code repository. You may
also force the build whenever you like.
CruiseControl has a web-based application to monitor or manually
start builds (Figure 4.7, “Controlling CruiseControl from the Web”). Reports on the
results of each build are automatically emailed to the designated
members of the application group. At Farata Systems, we use it to ensure
continuous builds of the internal projects and components for Clear
Toolkit.
Figure 4.7. Controlling CruiseControl from the Web
IT shops that have adopted test-driven development can make the
build process even more bulletproof by including test scripts in the
continuous integration build process. If unit, integration, and
functional test scripts (which automatically run after each successful
build process) don’t produce any issues, you can rest assured that the
latest code changes did not break the application logic.
Hudson is yet another
popular open source continuous integration server.
When you develop distributed applications, you can’t overestimate
the importance of a good logging facility.
Imagine life without one: the user pressed a button and…nothing
happened. Do you know if the client’s request reached the server-side
component? If so, what did the server send back? Add to this the inability
to use debuggers while processing GUI events like focus change, and you
may need to spend hours, if not days, trying to spot some sophisticated
errors.
That’s why a reliable logger is a must if you work with an
application that is spread over the network and is written in different
languages, such as Adobe Flex and Java.
At Farata Systems, we created a Flash Builder plug-in for Log4Fx,
which is available as a part of the open source project Clear Toolkit.
This is an advanced yet simple-to-use component for Flex applications. You
can set up the logging on the client or the server side (Java), redirect
the output of the log messages to local log windows, or make the log
output easily available to the production support teams located
remotely.
Think of a production situation where a particular client complains
that the application runs slowly. Log4Fx allows you to turn on the logging
just for this client and you can do it remotely with web browser access to
the log output.
Log4Fx comes with several convenient and easy-to-use display panels
with log messages. In addition, it automatically inserts the logging code
into your ActionScript classes with hot keys (Figure 4.8, “Log4Fx hot keys to insert log statements into
ActionScript”).
Figure 4.8. Log4Fx hot keys to insert log statements into
ActionScript
For example, place the cursor in the script section of your
application and press Ctrl-R followed by M to insert the following lines
into your program:
import mx.logging.Log;
import mx.logging.ILogger;
private var logger:ILogger = Log.getLogger("MyStockPortfolio");
Say you are considering adding this trace statement into the
function getPriceQuetes():
getPriceQuetes()
trace("Entered the method getPriceQuotes");
Instead of doing this, you can place the cursor in the function
getPriceQuotes() and press Ctrl-R
followed by D. The following line will be added at your cursor
location:
getPriceQuotes()
if (Log.isDebug()) logger.debug("");
Enter the text Entered the method
getPriceQuotes() between the double quotes, and if you’ve set
the level of logging to Debug, this message will be sent to a destination
you specified with the Logging Manager.
Entered the method
getPriceQuotes()
If a user calls production support complaining about some unexpected
behavior, ask her to press Ctrl-Shift-Backspace; the Logging Manager will
pop up on top of her application window (Figure 4.9, “A user enables logging”).
Figure 4.9. A user enables logging
The users select checkboxes to enable the required level of logging,
and the stream of log messages is directed to the selected target. You can
change the logging level at any time while your application is running.
This feature is crucial for mission-critical production applications where
you can’t ask the user to stop the application (e.g., financial trading
systems) but need to obtain the logging information to help the customer
on the live system.
You can select a local or remote target or send the log messages to
the Java application running on the server side, as shown in Figure 4.10, “Logging in the Local panel”.
Figure 4.10. Logging in the Local panel
Log4Fx adds a new application, RemoteLogReceiver.mxml, to your Flex project,
which can be used by a remote production support crew if need
be.
RemoteLogReceiver.mxml
Say the user’s application is deployed at the URL. By
pressing Ctrl-Shift-Backspace, the user opens the Logging Manager and
selects the target Remote Logging (Figure 4.11, “Specifying the remote destination for logging”).
Figure 4.11. Specifying the remote destination for logging
The destination RemoteLogging
is selected automatically, and the user needs to input a password, which
the user will share with the production support engineer.
RemoteLogging
Because RemoteLogReceiver.mxml is an application that
sits right next to your main application in Flash Builder’s project, it
gets compiled into a .swf file, the
HTML wrapper is generated, and it is deployed in the web server along
with your main application. The end users won’t even know that it
exists, but a production engineer can enter its URL
()
in his browser when needed.
Think of an undercover informant who lives quietly in the
neighborhood, but when engaged, immediately starts sending information
out. After entering the password provided by the user and pressing the
Connect button, the production support engineer will start receiving log
messages sent by the user’s application (Figure 4.12, “Monitoring log output from the remote machine”).
Log4Fx is available as a part of the open source project Clear
Toolkit at.
Troubleshooting with Charles
Although lots of programs allow you to trace HTTP traffic, Flex
developers need to be able to trace not just HTTP requests, but also
AMF calls made by Flash Player to the server. At Farata Systems, we’ve
been successfully using a program called Charles, which is a very
handy tool on any Flex project.
Charles is an HTTP proxy and monitor that allows developers to
view all of the HTTP traffic between their web browser and the
Internet. This includes requests, responses, and HTTP headers (which
contain cookies and caching information). Charles allows viewing
Secure Sockets Layer (SSL) communication in plain text. Because some
users of your application may work over slow Internet connections,
Charles simulates various modem speeds by throttling your bandwidth
and introducing latency—an invaluable feature.
Charles is not a free tool, but it’s very inexpensive. It can be
downloaded at.
Figure 4.12. Monitoring log output from the remote machine
Regardless of your decision about using Flex frameworks, you should
be aware of a number open source libraries of components. The Flex
community includes passionate and skillful developers that are willing to
enhance and share components that come with the Flex SDK. For example, you
may find an open source implementation of the horizontal accordion,
autocomplete component, tree grid control, JSON serializer, and much
more.
Following you’ll find references to some of the component libraries
that in many cases will spare you from reinventing the wheel during the
business application development cycle:
The FlexLib project is a community effort to create open
source user interface components for Adobe Flex 2 and 3. Some of its
most useful components are: AdvancedForm, EnhancedButtonSkin,
CanvasButton, ConvertibleTreeList, Highlighter, IconLoader,
ImageMap, PromptingTextInput, Scrollable Menu Controls, Horizontal
Accordion, TreeGrid, Docking ToolBar, and Flex Scheduling Framework.
as3corelib is an open source library of ActionScript 3 classes
and utilities. It includes image encoders; a JSON library for
serialization; general String, Number and Date APIs; as well as HTTP
and XML utilities. Most of the classes don’t even use the Flex
framework. AS3corelib also includes AIR-specific classes.
FlexServerLib includes several useful server-side components:
MailAdapter is a Flex Messaging Adapter for sending email from a
Flex/AIR application. SpringJmsAdapter is an adapter for sending and
receiving messages through a Spring-configured Java Message Service
(JMS) destination. EJBAdapter is an adapter allowing the invocation of EJB methods
via remote object calls.
asSQL is an ActionScript 3 MySQL driver that allows you to
connect to this popular DBMS directly from AIR
applications.
The Facebook ActionScript API allows you to write Flex
applications that communicate with Facebook using the
REpresentational State Transfer (REST) protocol.
These libraries allow you to access the Twitter API from
ActionScript.
Geographical mapping libraries are quite handy if you’d like
your RIA to have the ability to map the location of your business,
branches, dealers, and the like. These libraries may be free for
personal use, but may require a commercial license to be used in
enterprise applications. Please consult the product documentation of
the mapping engine of your choice.
The Astra Web API gives your Flex application access to Yahoo!
Maps, Yahoo! Answers, Yahoo! Weather, Yahoo! Search, and a social
events calendar. The Google
Maps API for Flash lets Flex developers embed Google Maps in their
application. The MapQuest Platform has similar functionality.
as3syndicationlib parses the Atom format and all versions of
Really Simple Syndication (RSS). It hides the differences between
the formats of the feeds.
Away3D is a real-time 3D engine for Flash.
Papervision3D is a real-time 3D engine for Flash.
The YouTube API is a library for integrating your application
with this popular video portal.
as3flickrlib is an ActionScript API for Flickr, a popular
portal for sharing photographs.
Text Layout Framework is a library that supports advanced
typographic and text layout features. This library is requires Flash
Player 10. It’s included in Flex 4, but can be used with Flex 3.2 as
well.
To stay current with internal and third-party Flex components and
libraries, download and install the AIR application called Tour de Flex. It contains
easy-to-follow code samples on use of various components. It’s also a
place where commercial and noncommercial developers can showcase their
work (Figure 4.13, “Component explorer Tour de Flex”).
Figure 4.13. Component explorer Tour de Flex
Although most of the previous components cater to frontend
developers, because Flex RIAs are distributed applications, some of the
components and popular frameworks will live on the server side. The next
two sections will give you an overview of how to introduce such server
frameworks as Spring and Hibernate.
The Java Spring framework is a popular server-side container that
has its own mechanism of instantiating Java classes—it implements a design
pattern called Inversion of Control. To put it
simply, if an object Employee has a
property of type Bonus, instead of
explicit creation of the bonus instance
in the class employee, the framework
would create this instance and inject it into the variable bonus.
Employee
Bonus
bonus
employee
BlazeDS (and LCDS) knows how to instantiate Java classes configured
in remoting-config.xml, but this is
not what’s required by the Spring framework.
remoting-config.xml
In the past, a solution based on the Class Factory design pattern
was your only option. Both BlazeDS and LCDS allow you to specify not the
name of the class to create, but the name of the class
factory that will be creating instances of this class. An
implementation of such a solution
was available in the Flex-Spring library making Spring framework
responsible for creating instances of such Java classes (a.k.a.
Spring beans).
Today, there is a cleaner solution developed jointly by Adobe and
SpringSource. It allows you to configure Spring beans in Extensible Markup
Language (XML) files, which can be used by the BlazeDS component on the
Java EE server of your choice.
James Ward and Jon Rose have published a reference card with code
samples on Flex/Spring integration at.
At the time of this writing, the project on the integration of
BlazeDS and the Spring framework is a work in progress, and we suggest
you to follow the blog of
Adobe’s Christophe Coenraets, who publishes up-to-date information about
this project.
These days, writing SQL manually is out of style, and lots of
software developers prefer using object-relational
mapping (ORM) tools for data persistence.
With ORM, an instance of an object is mapped to a database table.
Selecting a row from a database is equivalent to creating an instance of
the object in memory. On the same note, deleting the object instance will
cause deletion of the corresponding row in a database table.
In the Java community, Hibernate is the most popular open source ORM
tool. Hibernate supports lazy loading, caching, and object versioning. It
can either create the entire database from scratch based on the provided
Java objects, or just create Java objects based on the existing
database.
Mapping of Java objects to the database tables and setting their
relationships (one-to-many, one-to-one, many-to-one) can be done either
externally in XML configuration files or by using annotations right inside
the Java classes, a.k.a. entity beans. From a Flex
remoting perspective, nothing changes: Flex still sends and receives DTOs
from a destination specified in remoting-config.xml.
After downloading and installing the Hibernate framework under the
server with BlazeDS, the integration
steps are:
Create a server-side entity bean Employee that uses annotations to map
appropriate values to database tables and specify queries:
@Entity
@Table(name = "employees")
@NamedQueries( {
@NamedQuery(name = "employeess.findAll", query = "from Employee"),
@NamedQuery(name = "employees.byId", query = "select c from Employee e where
e.employeeId= :employeeId") })
public class Employee {
@Id
@GeneratedValue(strategy = GenerationType.AUTO)
@Column(name = "employeeId", nullable = false)
private Long employeeId;
@Column(name = "firstName", nullable = true, unique = false)
private String firstName;
Create a file called persistence.xml under the META-INF directory of your BlazeDS project.
In this file, define the database location and connectivity
credentials.
persistence.xml
META-INF
Write a Java class EmployeeService with method getEmployees() that retrieves and updates
the data using Hibernate—for example:
EmployeeService
getEmployees()
public List<Employee> getEmployees() {
EntityManagerFactory entityManagerFactory =
Persistence.createEntityManagerFactory(PERSISTENCE_UNIT);
EntityManager em = entityManagerFactory.createEntityManager();
Query findAllQuery = em.createNamedQuery("employees.findAll");
List<Empoyee> employeess = findAllQuery.getResultList();
return employees;
}
Define a destination in the BlazeDS remoting-config.xml file that points at the
class EmployeeService:
<destination id="myEmployee">
<properties>
<source>com.farata.EmployeeService</source>
</properties>
</destination>
The rest of the process is the same as in any Flex remoting
scenario.
The only issue with this approach is that it has problems supporting
lazy loading. BlazeDS uses the Java
adapter to serialize Java objects, along with all related objects
regardless of whether you want them to be lazy-loaded.
The entire process of the integration of Flex, BlazeDS, Hibernate,
and MySQL Server is described in detail in an article published at the
Adobe Developer’s Connection website. You can find it at.
If your Flex application uses LCDS, this issue is solved by applying
special Hibernate adapter for Data Management Services. Digital Primates’
dpHibernate is a custom Flex library and a custom BlazeDS Hibernate
adapter that work together to give you support for lazy loading of
Hibernate objects from inside your Flex applications. You can get
dpHibernate at.
There is one more open source product that supports Hibernate.
It’s called Granite Data Services and is an alternative to
BlazeDS.
Programmers don’t like writing comments. They know how their code
works. At least, they think they do. Six months down the road, they will
be wondering, “Man, did I actually write this myself? What was I planning
to do here?”
Program documentation is as important as the code itself. If you are
managing the project, make sure that you encourage and enforce proper
documentation. Some developers will tell you that their code is
self-explanatory. Don’t buy this. Tomorrow, these developers won’t be
around, for whatever reason, and someone else will have to read their
code.
Flex comes with ASDoc, a tool that works
similarly to JavaDoc, which is well known in the Java community. ASDoc
reads the comments placed between the symbols /** and */;
reads the names of the classes, interfaces, methods, styles, and
properties from the code; and generates easily viewable help
files.
/**
*/
The source code of the Flex framework itself is available, too.
Just Ctrl-click on any class name in Flash Builder, and you’ll see the
source code of this ActionScript class or MXML object. Example 4.3, “A fragment of the Button source code” is the beginning of
the source code of the Flex Button
component.
Button
Example 4.3. A fragment of the Button source code
package mx.controls
{
import flash.display.DisplayObject;
import flash.events.Event;
...
/**
* The Button control is a commonly used rectangular button.
* Button controls look like they can be pressed.
* They can have a text label, an icon, or both on their face.
*
* Buttons typically use event listeners to perform an action
* when the user selects the control. When a user clicks the mouse
* on a Button control, and the Button control is enabled,
* it dispatches a click event and a buttonDown event.
* A button always dispatches events such as the mouseMove,
* mouseOver, mouseOut, rollOver,rollOut, mouseDown, and
* mouseUp events whether enabled or disabled.
*
* You can customize the look of a Button control
* and change its functionality from a push button to a toggle button.
* You can change the button appearance by using a skin
* for each of the button's states.
*/
public class Button extends UIComponent
implements IDataRenderer, IDropInListItemRenderer,
IFocusManagerComponent, IListItemRenderer,
IFontContextComponent, IButton
{
include "../core/Version.as";
/**
* @private
* Placeholder for mixin by ButtonAccImpl.
*/
mx_internal static var createAccessibilityImplementation:Function;
/**
* Constructor.
*/
public function Button(){
super();
//.)
mouseChildren = false;
//.)
Beside the /** and */ symbols, you have a small number of the
markup elements that ASDoc understands (@see,
@param, @example).
@see
@param
@example
The beginning of the Help screen created by the ASDoc utility
based on the source code of the Button class looks like Figure 4.14, “A fragment of the Help screen for Button”.
Figure 4.14. A fragment of the Help screen for Button
Detailed information on how to use ASDoc is available at.
Documenting MXML with ASDoc has not been implemented yet, but is
planned to be released with Flex 4. The functional design specifications
of the new ASDoc are already published at the Adobe
open source site.
Unified Modeling Language (UML) diagrams are convenient for
representing relationships among the components of your application.
There are a number of tools that turn the creation of diagrams into a
simple drag-and-drop process. After creating a class diagram, these
tools allow you to generate code in a number of programming
languages.
In a perfect world, any change in the class definition would be
done in the UML tool first, followed by the code generation. Future
manual additions to these classes wouldn’t get overwritten by subsequent
code generations if the model changes.
UML tools are also handy in situations where you need to become
familiar with poorly commented code written by someone else. In this
case, the process of reverse engineering will allow you to create a UML
diagram of all the classes and their relationships from the existing
code.
There are a number of free UML tools that understand ActionScript
3 (UMLet, VASGen, Cairngen) with
limited abilities for code generation.
Commercial tools offer more features and are modestly priced.
Figure 4.15, “Enterprise Architect: a UML class diagram” shows a
class diagram created by Enterprise Architect from Sparx Systems. This diagram
was created by autoreverse engineering of the existing ActionScript classes.
Figure 4.15. Enterprise Architect: a UML class diagram
The process is pretty straightforward: create a new project and a
new class diagram, then right-click anywhere on the background, select
the menu item “Import from source files,” and point at the directory
where your ActionScript classes are located. The tool supports
ActionScript, Java, C#, C++, PHP, and other languages.
Some users can’t see, hear, or move, or have difficulties in
reading, recognizing colors, or other disabilities. The World Wide Web
Consortium has published a document called Web Content Accessibility Guidelines
1.0, which contains guidelines for making web content available
for people with disabilities.
Microsoft Active Accessibility (MSAA) technology and its successor,
the UI Automation (UIA) interface, are also aimed at helping such users.
Adobe Flex components were designed to help developers in creating
accessible applications.
Did you know that blind users of your RIA mostly use the keyboard as
opposed to the mouse? They may interact with your application using
special screen readers (e.g., JAWS from Freedom Scientific) or need to
hear special audio signals that help them in application
navigation.
A screen reader is a software application that tries to identify
what’s being displayed on the screen, and then reads it to the user either
by text-to-speech converters or via a Braille output device.
The computer mouse is unpopular not only among blind people, but
also among people with mobility impairments. Are all of the Flex
components used in your application accessible by the keyboard?
If your application includes audio, hearing-impaired people would
greatly appreciate captions. This does not mean that from now on every
user should be forced to watch captions during audio or hear loud
announcements of the components that are being displayed on the monitor.
But you should provide a way to switch your Flex application into
accessibility mode. The Flex compiler offers a special
option—compiler.accessible—to build an
accessible .swf.
You can find more materials about Flex accessibility at.
For testing accessibility of your RIA by visually impaired people,
use aDesigner, a disability simulator from IBM. aDesigner supports Flash
content and is available at.
This chapter was a grab bag of various recommendations and
suggestions that each Flex development manager or architect may find of
use over the course of the project. We sincerely hope that materials and
leads from this chapter will ensure that your next Flex project is as
smooth and productive as possible.
We hope that the variety of commercial and open source tools
reviewed in this chapter represent Adobe Flex as a mature and evolving
ecosystem, well suited to your next RIA project.
This chapter talked about tools that help in building and testing
both the client and server portions of Flex RIA; the next chapter will
concentrate on using powerful server-side technology from Adobe, called
LiveCycle Data Services.
If you enjoyed this excerpt, buy a copy of Enterprise Development with Flex.
© 2012, O’Reilly Media, Inc.
(707) 827-7019
(800) 889-8969
All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners. | http://oreilly.com/flex/excerpts/enterprise-development-with-flex/equipping-enterprise-flex-projects.html | crawl-003 | refinedweb | 11,468 | 53.92 |
MediaPlayer with HLS doesn't switch resolution on OSX....
Hello,
Doing some tests with HLS and MediaPlayer.
Here is a small test:
import QtQuick 2.8
import QtQuick.Window 2.2
import QtMultimedia 5.9

Window {
    visible: true
    width: 1280
    height: 720
    title: qsTr("test")

    MediaPlayer {
        id: mediaplayer
        source: ""
    }

    VideoOutput {
        anchors.fill: parent
        source: mediaplayer
    }

    MouseArea {
        id: playArea
        anchors.fill: parent
        onPressed: mediaplayer.play();
    }
}
The player stays at the same resolution on OSX (the smaller one).
On iOS it does switch, but goes above the native screen resolution of the device (on an iPhone 6 I reach 1920x1080).
Am I doing something wrong ?
Thanks for your help.
Best Regards.
Scoob'
same here +1
Are there any changes in Qt 5.10?
Sometimes it is required to quickly determine details like the kernel name, version, hostname, etc. of the Linux box you are using.
Even though you can find all these details in the respective files present under the /proc filesystem, it is easier to use the uname utility to get this information quickly.
The basic syntax of the uname command is :
uname [OPTION]...
Now lets look at some examples that demonstrate the usage of ‘uname’ command.
uname without any option
When the ‘uname’ command is run without any option then it prints just the kernel name. So the output below shows that its the ‘Linux’ kernel that is used by this system.
$ uname
Linux
Alternatively, you can use uname -s, which also displays the kernel name.
$ uname -s
Linux
Get the network node host name using -n option
Use uname -n option to fetch the network node host name of your Linux box.
$ uname -n
dev-server
The output above will be the same as the output of the hostname command.
Get kernel release using -r option
uname command can also be used to fetch the kernel release information. The option -r can be used for this purpose.
$ uname -r
2.6.32-100.28.5.el6.x86_64
Get the kernel version using -v option
uname command can also be used to fetch the kernel version information. The option -v can be used for this purpose.
$ uname -v
#1 SMP Wed Feb 2 18:40:23 EST 2011
Get the machine hardware name using -m option
uname command can also be used to fetch the machine hardware name. The option -m can be used for this purpose. The output x86_64 below indicates that this is a 64-bit system.

$ uname -m
x86_64
Get the processor type using -p option
uname command can also be used to fetch the processor type information. The option -p can be used for this purpose. If the uname command is not able to fetch the processor type information then it produces ‘unknown’ in the output.
$ uname -p
x86_64
Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the information on processor type.
Get the hardware platform using -i option
uname command can also be used to fetch the hardware platform information. The option -i can be used for this purpose. If the uname command is not able to fetch the hardware platform information then it produces ‘unknown’ in the output.
$ uname -i
x86_64
Sometimes you might see ‘unknown’ as the output of this command, if uname was not able to fetch the information about the platform.
Get the operating system name using the -o option
uname command can also be used to fetch the operating system name. The option -o can be used for this purpose.
For example:
$ uname -o
GNU/Linux
Get all the information using uname -a option
All the information that we have so far learned to access using different flags can be fetched in one go. The option -a can be used for this purpose.
$ uname -a
Linux dev-server 2.6.32-100.28.5.el6.x86_64 #1 SMP Wed Feb 2 18:40:23 EST 2011 x86_64 x86_64 x86_64 GNU/Linux
Unknown value in the uname output
While writing this article, I was a bit curious as to why the uname utility was returning 'unknown' for the processor type (-p) and hardware platform (-i) on my laptop running Ubuntu. I researched this issue a bit.
One explanation I found was that the uname command uses the uname() function (man 2 uname), which reads all information from the following kernel structure:

struct utsname {
    char sysname[];    /* Operating system name (e.g., "Linux") */
    char nodename[];   /* Name within the network */
    char release[];    /* Operating system release (e.g., "2.6.32") */
    char version[];    /* Operating system version */
    char machine[];    /* Hardware identifier */
#ifdef _GNU_SOURCE
    char domainname[]; /* NIS or YP domain name */
#endif
};
Since information on the processor type and hardware platform is not present in this structure, the uname command returns 'unknown' for them.
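As a quick sanity check (this snippet is my addition, not part of the original article), Python's os.uname() wraps the same uname(2) system call and exposes exactly the five standard fields of struct utsname — notice that there is no field backing -p or -i:

```python
import os

# os.uname() wraps the uname(2) system call and returns the
# five standard struct utsname fields; there is no field
# backing uname's -p (processor) or -i (hardware platform) flags.
u = os.uname()
print("sysname :", u.sysname)   # uname -s
print("nodename:", u.nodename)  # uname -n
print("release :", u.release)   # uname -r
print("version :", u.version)   # uname -v
print("machine :", u.machine)   # uname -m
```

Running this on a Linux box prints the same values as the corresponding uname flags shown earlier.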
The other explanation that I found was that inside uname.c the handling of the -p option is like:

...
char const *element = unknown;

#if HAVE_SYSINFO && defined SI_ARCHITECTURE
  {
    static char processor[257];
    if (0 <= sysinfo (SI_ARCHITECTURE, processor, sizeof processor))
      element = processor;
  }
#endif
...
The macros HAVE_SYSINFO and SI_ARCHITECTURE are not defined anywhere in the kernel, and hence unknown is returned. The same could be true for the -i option.
I am not sure about the exact problem but we can safely assume that the -p and -i options are not standard and merely extensions and hence should be avoided while using uname command in a script.
Get the Linux Sysadmin Course Now!
{ 3 comments… read them below or add one }
good. also leave a question to other users.
Hi,
Thanks a lot….
Hi Ramesh,
First of all, thank you for all your lessons, they are great.
I have a question if I may about Linux kernel and processor,
If the output of uname -imp are all related to the hardware, then what is the best way to tell if the kernel is 32bit or 64bit?
Most linux forums indicate that uname -imp outputs kernel info, but the manpage for uname suggests they are hardware info.
Thank you in advance.
Regards,
Ali Hussain
Sydney. | http://www.thegeekstuff.com/2012/09/uname-command-examples/ | CC-MAIN-2014-52 | refinedweb | 835 | 62.88 |
zzip_disk_munmap (3) - Linux Man Pages
zzip_disk_munmap: turn a filehandle into a mmapped zip disk archive handle
NAME
zzip_disk_mmap, zzip_disk_init, zzip_disk_new, zzip_disk_munmap, zzip_disk_open, zzip_disk_buffer, zzip_disk_close - turn a filehandle into a mmapped zip disk archive handle
SYNOPSIS
#include <zzip/mmapped.h>
- zzip__new__ ZZIP_DISK * zzip_disk_mmap((int fd));
- int zzip_disk_init((ZZIP_DISK
* disk, void *buffer, zzip_size_t buflen));
- zzip__new__ ZZIP_DISK * zzip_disk_new((void));
- int zzip_disk_munmap((ZZIP_DISK
* disk));
- zzip__new__ ZZIP_DISK * zzip_disk_open((char
*filename));
- zzip__new__ ZZIP_DISK * zzip_disk_buffer((void
*buffer, size_t buflen));
- int zzip_disk_close((ZZIP_DISK
* disk));
DESCRIPTION_buffer function will attach a buffer with a zip image that was acquired from another source than a file. Note that if zzip_disk_mmap fails then zzip_disk_open will fall back and try to read the full file to memory wrapping a ZZIP_DISK around the memory buffer just as the zzip_disk_buffer function will do. Note that the zzip_disk_buffer function will not own the buffer, it will neither be written nor free()d.
The zzip_disk_close function will release all data needed to access a (mmapped) zip archive, including any malloc()ed blocks, sharedmem mappings and it dumps the handle struct as well.
AUTHOR
- • Guido Draheim <guidod [at] gmx.de>
Copyright (c) 2003,2004,2006 Guido Draheim All rights reserved, use under the restrictions of the Lesser GNU General Public License or alternatively the restrictions of the Mozilla Public License 1.1 | https://www.systutorials.com/docs/linux/man/3-zzip_disk_munmap/ | CC-MAIN-2020-45 | refinedweb | 214 | 51.11 |
why i^=j^=i^=j isn't equal to *i^=*j^=*i^=*j
In C, when there are variables (assume both as int) i less than j, we can use the equation
i^=j^=i^=j
to exchange the value of the two variables. For example, let int i = 3, j = 5; after computed i^=j^=i^=j, I have i = 5, j = 3.
However, if I use two int pointers to re-do this, with *i^=*j^=*i^=*j, using the example above, what I have will be i = 0 and j = 3.
In C
1
int i=3, j=5; i^=j^=i^=j; // after this i = 5, j=3
2
int i = 3, j= 5; int *pi = &i, *pj = &j; *pi^=*pj^=*pi^=*pj; // after this, $pi = 0, *pj = 5
In JavaScript
var i=3, j=5; i^=j^=i^=j; // after this, i = 0, j= 3
the result in JavaScript makes this more interesting to me
my sample code , on ubuntu server 11.0 & gcc
#include <stdio.h> int main(){ int i=7, j=9; int *pi=&i, *pj=&j; i^=j^=i^=j; printf("i=%d j=%d\n", i, j); i=7, j=9; *pi^=*pj^=*pi^=*pj printf("i=%d j=%d\n", *pi, *pj); }
undefined behavior in c
Will the undefined behavior in c be the real reason leads to this question?
1
code compiled use visual studio 2005 on windows 7 produce the expected result ( Output i = 7, j = 9 twice.)
2
code compiled use gcc on ubuntu ( gcc test.c ) produce the unexpected result ( Output i = 7, j = 9 then i = 0, j = 9 )
3
code compiled use gcc on ubuntu ( gcc -O test.c ) produce the expected result ( Output i = 7,j = 9 twice. )
Answers
i^=j^=i^=j is undefined behavior in C.
You are violating sequence points rules by modifying i two times between two sequence points.
It means the implementation is free to assign any value or even make your program crash.
For the same reason, *i^=*j^=*i^=*j is also undefined behavior.
(C99, 6.5p2) "Between the previous and next sequence point an object shall have its stored value modified at most once by the evaluation of an expression."
Need Your Help
How to remove black edge on UIImageView with rounded corners and a border width?
ios objective-c uiimageview quartz-graphicsI have the following code to make the UIImageView in each of my UITableView's cells have rounded corners: | http://unixresources.net/faq/12331518.shtml | CC-MAIN-2019-04 | refinedweb | 416 | 67.49 |
Sort characters in a string in alphabetical order
We have to sort the characters of the string in an alphabetical order. Note that unlike our traditional alphabets, in computer’s memory the characters are recognised by their ASCII values. Hence, ‘A’ and ‘a’ are different. Also, the ASCII value of ‘A’ is smaller than ‘a’ hence, ‘A’ will be occupying first position followed by ‘a’ if they are just the constituent characters of the string. Special characters like ‘?’ ,”, ‘#’ etc and numbers also come prior to the alphabets in order of their ASCII values.
Refer ASCII table for the same.
APPROACH 1: Without using string functions:
- We scan the input from the user.
- Since we are using getline function, it returns the number of blocks read which is one greater than the length of string ; as the last character is newline character.
- If using an alternate method to scan input then you have to use a loop to increment a counter till the ‘\0’ character is encountered. Refer length of string program for various techniques.
- Two loops are used, one for the ith character and the next for the i+1 th character as we have to compare the adjacent characters in order to decide to swap them or not.
- We use a variable of type character here – temp , in order to store the character at ith place if a swap has to be performed in order to sort string alphabetically.
Code:
#include <stdio.h> #include <string.h> int main () { char temp, *str; int i, j, l, size = 100; printf("Enter the string to be sorted: "); str = (char*)malloc(size); l = getline(&str, &size, stdin); l--; //length of string is no of blocks read - 1 printf("String before sorting: %s \n", str); for (i = 0; i < l-1; i++) { for (j = i+1; j < l; j++) { if (str[i] > str[j]) { temp = str[i]; str[i] = str[j]; str[j] = temp; } } } printf("String after sorting: %s \n", str); return 0; }
Output:
Enter the string to be sorted: How you doing ? (mixture of uppercase, lowercase and special characters) String before sorting: How you doing ? String after sorting: ?Hdginooouwy Enter the string to be sorted: BACDE (all are uppercase characters) String before sorting: BACDE String after sorting: ABCDE
Report Error/ Suggestion | https://www.studymite.com/c-programming-language/examples/program-to-sort-characters-in-a-string-in-alphabetical-order/?utm_source=related_posts&utm_medium=related_posts | CC-MAIN-2020-05 | refinedweb | 378 | 69.21 |
Is is possible to do this;
for i in range(some_number):
#do something
Off the top of my head, no.
I think the best you could do is something like this:
def loop(f,n): for i in xrange(n): f() loop(lambda: <insert expression here>, 5)
But I think you can just live with the extra
i variable.
Here is the option to use the
_ variable, which in reality, is just another variable.
for _ in range(n): do_something()
Note that
_ is assigned the last result that returned in an interactive python session:
>>> 1+2 3 >>> _ 3
For this reason, I would not use it in this manner. I am unaware of any idiom as mentioned by Ryan. It can mess up your interpreter.
>>> for _ in xrange(10): pass ... >>> _ 9 >>> 1+2 3 >>> _ 9
And according to python grammar, it is an acceptable variable name:
identifier ::= (letter|"_") (letter | digit | "_")* | https://codedump.io/share/QhQooNCKwaE6/1/is-it-possible-to-implement-a-python-for-range-loop-without-an-iterator-variable | CC-MAIN-2017-26 | refinedweb | 158 | 78.08 |
Help:Navigating
There are several ways to explore Wikibooks:
- Follow links from page to page.
- Search for something specific using the search box that appears on every page.
- Browse through our books by subject, by completion status, reading level, or by title.
- Explore a random page, or from the navigation bar a random book.
- View what recent changes people have been making, or only to pages you're watching.
- From any page you can find out what links there.
- From any page you can click Special pages to access Newly created pages and All pages by title
- Use keyboard shortcuts to move around even more quickly.
Searching
Wikibooks has an extremely powerful search engine built in, which can be used to locate material on Wikibooks more easily and more precisely than well known external web search engines such as Google and Yahoo!.
The search box is located at the top right on every page on the standard Wikibooks skin (Vector). It will take you to the page which matches your query, otherwise it displays the search results. To display the full search results, click on the last item in drop-down list (which says «⧼vector-simplesearch-containing⧽»), or perform an empty search. The direct link for the standard interface is Special:Search and this for advanced search.
Search results page
The default search only applies to the Mainspace (where most books are stored), Wikijunior (a collection of books for children), and Cookbook (a large collection of recipes). Other types of content pages can be searched by selecting an option from the grey search types box below the search input box.
If Multimedia is selected, you can search images, videos, and audio stored on Wikibooks or our shared media repository, Wikimedia Commons. This option will search their file names and descriptions.
If Help and Project pages is selected, you can search the "Help" and "Wikibooks" namespaces. These namespaces contain help pages, Wikibooks guidelines and policies, and all pages used for administration and maintenance of the site.
If Everything is selected, you can search all namespaces.
To search in any subset of namespaces, click Advanced on the search form. A quicker way to search a single namespace is to type the namespace, a colon, then the search term in the search box, for example Wikibooks:Categories returns search results for catetgories in the Wikibooks namespace.
Registered users can modify the default namespace to search in "My Preferences". They can also choose how much context and how many hits per page to display when viewing search results.
Navigation pages will attempt to guide you to the correct book. You may encounter two types of these pages when searching for a topic.
- Shelves
- You may type in something like Languages, that takes you to a list of the many things that you could mean by it. This type of page is called a shelf, and it's there to make things easier for you. Such a page prevents you from having to guess the exact phrase used to identify each book.
- Redirection
- Some things can be referred to by many names. The United Kingdom History, for instance, could be called the History of the United Kingdom, the History of the UK, or many similar things. a book. In such a case, you may look through the search results for an appropriate topic, try searching for an alternative spelling or name for the term, or try something related like History. Once you have found what you were looking for, consider adding redirect pages for the expressions that you tried that did not lead to a book or shelf because chances are that you are not the only one thinking about the topic in this way. Thus, in doing so you will make life easier for those who later search for the term.
Search engine features
The internal search engine can search for parts of page titles or page title prefixes, and in specific categories and namespaces. It can also limit a search to pages with specific words in the title or located in specific categories or namespaces. It can handle parameters an order of magnitude more sophisticated than most external search engines, including user-specified words with variable endings and similar spellings. When presenting results, the internal search understands and will link to relevant sections of a page (although to a limited degree some other search engines may do this as well).
The internal search is also able to search all pages for project purposes, whereas external search engines cannot be used on any talk page, a large part of project space, and any page tagged with __NOINDEX__ (usually used on user pages.
The source text (what one sees in the edit box) is searched. This distinction is relevant for piped links, for interlanguage links (to find links to Chinese books, search for zh, not for Zhongwen), special characters (if ê is coded as ê it is found searching for ecirc), etc.
Upper and lower case as well as some umlauts and accents are disregarded in search. For example, a search for citroen will find pages containing the word Citroën (c = C, e = ë). Some ligatures match the separate letters. For example, a search for aeroskobing will find pages containing Ærøskøbing (ae = Æ).
The following features can be used to refine searches:
- Phrases in double quotes - A phrase can be searched by enclosing it in double quotes. For example, "holly dolly" returns fewer few results as opposed to holly dolly (two standalone words). This technique also allows one to find all occurrences of a word or name across Wikibooks rather than being directed automatically to a book or a shelf.
- Boolean search - By default logical AND is applied to all search terms, just as on all major search engines. Parentheses and "OR" can also be used. For example windows OR system and combined: microsoft (windows OR system) (note the uppercase OR).
- Exclusion - Terms can be excluded with -, for example windows -system (note there is no space between "-" and the excluded term)
- Wildcard search - Wildcards (characters taking the place of any other character or string that is not known or specified) can be prefixed and suffixed, for example, the query "*stan" would match books like Kazakhstan and Afghanistan.
- Fuzzy search - Adding a tilde (~) at the end of a search word matches words with similar spelling. For example, searching for james~ watt~ would identify James Watt, James Wyatt, and James Watts as the first three search results.
- intitle: - using the intitle: parameter, query results can be narrowed by title. The search word(s) given to intitle: can be anywhere in the title. Example searches using intitle:
- prefix: - use the prefix: parameter to limit the results to book titles starting with the given characters. If a namespace is also given to prefix:, that page name will override any and all other namespace searches. Prefix: should be the last parameter in the query. Example searches using prefix:
Using the search to directly get to a page
When using the search to directly get to a page, it doesn't matter whether you enter capitals or lower case letters (unless there are two book titles which differ only in capitalization). Umlauts and accents are also disregarded, but ligatures do not match the separate letters.
Specialized uses of the search to directly get to a page include the following:
- To navigate to a section of a page using anchor notation. For example, Poland#History.
- To navigate to a special page, including one with a parameter following a slash. For example, Special:Log/Example.
- To navigate directly to a page on another language Wikipedia or Wikimedia project, using the appropriate interwiki prefix; some other prefixes work too. For example, enter fr:France to go to the book "France" on French Wikibooks, or wikt:help to see the Wiktionary entry for the word "help".
- To go quickly to the user contributions of an IP address – just enter the address. For example, 123.45.56.89.
What links here
Within the Toolbox section on the left-hand side of every page is a link labeled "What links here". This is used to see a list of the pages that link to (or redirect to, or are included on the current page. These are sometimes referred to as backlinks.
To use the tool, click Special:WhatLinksHere and type in the page title.
Overview
The "What links here" facility lists the pages on the same site (English Wikibooks) which link to (or redirect to, or transclude) a given page. It is possible to limit the search to pages in a specified namespace. To see this information, click the "What links here" link while looking at any page. The list is sorted by date of creation of the page.
Pages redirected to the given page are marked "redirect". Pages transcluding the given page are marked "transclusion"; for these pages it is not shown whether they also link to the given page. For image and other file pages, the pages using the image or file appear on the list and are marked "image link".
The list of links to a page is useful in a number of ways:
- The number of incoming links gives a rough indication of how important or popular a page is.
- Where the intended subject material of a page is unclear, the list of pages linking to it might provide useful context.
The function works even for a page title that does not exist (recording redlinks to that title). The "What links here" link appears on the edit page on which one arrives when following a broken link.
To invoke a "What links here" list directly (in the search box, browser address bar, or wikilinks) use the syntax Special:WhatLinksHere/John Smith (replacing "John Smith" with the desired target page title).
It is also possible to make a wikilink to the "What links here" list for a particular page; to do this type
[[Special:WhatLinksHere/Page name]], replacing Page name with the title of the target page.
Limitations
The following links are not listed at "What links here":
- automatically generated links from categories to their subcategories and member pages (and vice versa)
- automatically generated links from subpages to their parent pages
- links in edit summaries
In the case of links to sections, the precise target is not shown. "What links here" cannot list the backlinks of a specific section only. (It may be possible to work around this by making a new title that redirects to a particular section, and encouraging people to make links to the redirect rather than the section.
Also note that if a page's links change due to a change in a template, the backlinks for that page are not updated immediately.
Redirects
The backlinks feature shows which backlinks are redirects. The backlinks of the redirect are also shown, and if they include a redirect, the backlinks of that also (not more). This makes it a useful tool for finding double redirects, which should generally be replaced by redirects to the final target. There are options to hide redirects, or to show only redirects (by hiding links and transclusions).
Transclusions
The backlinks list includes pages that include the current page using {{SOMEPAGE}}
It also includes links which exist on certain pages due to the transclusion of other pages (usually templates). For example, if page A contains a transclusion of template B, and B contains a link to C, then the link to C will appear on page A, and A will be listed among the backlinks of C.
Keyboard shortcuts
The Vector skin, which is the default layout for Wikibooks, contains many keyboard shortcuts. You can use them to access certain features of Wikibooks more quickly.
Depending on your browser, you should:
- Mozilla Firefox 1.5: hold Alt, press access key
- Mozilla Firefox 2 & 3 on Windows and Linux: hold Alt+⇧ Shift, press access key
- Mozilla Firefox on Mac OS X: hold Ctrl, press access key
- Internet Explorer 6: hold Alt, press access key
- Internet Explorer 7: hold Alt+⇧ Shift, press access key
- Internet Explorer 8: hold Alt, press access key (to follow a link, then press ↵ Enter)
- Opera (all platforms): press ⇧ Shift+Esc, then press access key (Shift-Esc will display the list of choices)
- Google Chrome: hold Alt, press access key
- Safari on Windows: hold Alt, press access key | https://en.wikibooks.org/wiki/Wikibooks:KS | CC-MAIN-2020-50 | refinedweb | 2,062 | 59.13 |
By this point you should understand the concept of Prefabs at a fundamental level. They are a collection of predefined GameObjects & Components that are re-usable throughout your game. If you don’t know what a Prefab is, we recommend you read the Prefabs page for a more basic introduction.
Prefabs come in very handy when you want to instantiate complicated GameObjects at runtime. The alternative to instantiating Prefabs is to create GameObjects from scratch using code. Instantiating Prefabs has many advantages over the alternative approach:
To illustrate the strength of Prefabs, let’s consider some basic situations where they would come in handy:
This explanation will illustrate the advantages of using a Prefab vs creating objects from code.
First, lets build a brick wall from code:
// JavaScript function Start () { for (var y = 0; y < 5; y++) { for (var x = 0; x < 5; x++) { var cube = GameObject.CreatePrimitive(PrimitiveType.Cube); cube.AddComponent.<Rigidbody>(); cube.transform.position = Vector3 (x, y, 0); } } } // C# public class Instantiation : MonoBehaviour { void Start() { for (int y = 0; y < 5; y++) { for (int x = 0; x < 5; x++) { GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube); cube.AddComponent<Rigidbody>(); cube.transform.position = new Vector3(x, y, 0); } } } }
If you execute that code, you will see an entire brick wall is created when you enter Play mode. There are two lines relevant to the functionality of each individual brick: the
CreatePrimitive line, and the
AddComponent line. Not so bad right now, but each of our bricks is un-textured.. This relieves you from maintaining and changing a lot of code when you decide you want to make changes. With a Prefab, you just make your changes and Play. No code alterations required.
If you’re using a Prefab for each individual brick, this is the code you need to create the wall.
// JavaScript var brick : Transform; function Start () { for (var y = 0; y < 5; y++) { for (var x = 0; x < 5; x++) { Instantiate(brick, Vector3 (x, y, 0), Quaternion.identity); } } } // C# public Transform brick; void Start() { for (int y = 0; y < 5; y++) { for (int x = 0; x < 5; x++) { Instantiate(brick, new Vector3(x, y, 0), Quaternion.identity); } } }
This is not only very clean but also very reusable. There is nothing saying we are instantiating a cube or that it must contain a rigidbody. All of this is defined in the Prefab and can be quickly created in the Editor.
Now we only need to create the Prefab, which we do in the Editor. Here’s how:
We’ve created our Brick Prefab, so now we have to attach it to the brick variable in our script. When you select the empty GameObject that contains the script, the Brick variable will be visible in the inspector.
Now drag the “Brick” Prefab from the Project View onto the brick variable in the Inspector. Press Play and you’ll see the wall built using the Prefab.
This is a workflow pattern that can be used over and over again in Unity. In the beginning you might wonder why this is so much better, because the script creating the cube from code is only 2 lines longer.
But because you are using a Prefab now, you can adjust the Prefab in seconds. Want to change the mass of all those instances? Adjust the Rigidbody in the Prefab only once. Want to use a different Material for all the instances? Drag the Material onto the Prefab only once. Want to change friction? Use a different Physic Material in the Prefab’s collider. Want to add a Particle System to all those boxes? Add a child to the Prefab only once.
Here’s how Prefabs fit into this scenario:
While it would be possible to build a rocket GameObject completely from code, adding Components manually and setting properties, it is far easier to instantiate a Prefab. You can instantiate the rocket in just one line of code, no matter how complex the rocket’s Prefab is. After instantiating the Prefab you can also modify any properties of the instantiated object (e.g. you can set the velocity of the rocket’s Rigidbody).
Aside from being easier to use, you can update the prefab later on. So if you are building a rocket, you don’t immediately have to add a Particle trail to it. You can do that later. As soon as you add the trail as a child GameObject to the Prefab, all your instantiated rockets will have particle trails. And lastly, you can quickly tweak the properties of the rocket Prefab in the Inspector, making it far easier to fine-tune your game.
This script shows how to launch a rocket using the Instantiate() function.
// JavaScript // Require the rocket to be a rigidbody. // This way we the user can not assign a prefab without rigidbody var rocket : Rigidbody; var speed = 10.0; function FireRocket () { var rocketClone : Rigidbody = Instantiate(rocket, transform.position, transform.rotation); rocketClone.velocity = transform.forward * speed; // You can also acccess other components / scripts of the clone rocketClone.GetComponent.<MyRocketScript>().DoSomething(); } // Calls the fire method when holding down ctrl or mouse function Update () { if (Input.GetButtonDown("Fire1")) { FireRocket(); } } // C# // Require the rocket to be a rigidbody. // This way we the user can not assign a prefab without rigidbody public Rigidbody rocket; public float speed = 10f; void FireRocket () { Rigidbody rocketClone = (Rigidbody) Instantiate(rocket, transform.position, transform.rotation); rocketClone.velocity = transform.forward * speed; // You can also acccess other components / scripts of the clone rocketClone.GetComponent<MyRocketScript>().DoSomething(); } // Calls the fire method when holding down ctrl or mouse void Update () { if (Input.GetButtonDown("Fire1")) { FireRocket(); } }
Let’s say you have a fully rigged enemy character who dies. You could simply play a death animation on the character and disable all scripts that usually handle the enemy logic. You probably have to take care of removing several scripts, adding some custom logic to make sure that no one will continue attacking the dead enemy anymore, and other cleanup tasks.
A far better approach is to immediately delete the entire character and replace it with an instantiated wrecked prefab. This gives you a lot of flexibility. You could use a different material for the dead character, attach completely different scripts, spawn a Prefab containing the object broken into many pieces to simulate a shattered enemy, or simply instantiate a Prefab containing a version of the character.
Any of these options can be achieved with a single call to Instantiate(), you just have to hook it up to the right prefab and you’re set!
The important part to remember is that the wreck which you Instantiate() can be made of completely different objects than the original. For example, if you have an airplane, you would model two versions. One where the plane consists of a single GameObject with Mesh Renderer and scripts for airplane physics. By keeping the model in just one GameObject, your game will run faster since you will be able to make the model with less triangles and since it consists of fewer objects it will render faster than using many small parts. Also while your plane is happily flying around there is no reason to have it in separate parts.
To build a wrecked airplane Prefab, the typical steps are:
The following example shows how these steps are modelled in code.
// JavaScript var wreck : GameObject; // As an example, we turn the game object into a wreck after 3 seconds automatically function Start () { yield WaitForSeconds(3); KillSelf(); } // Calls the fire method when holding down ctrl or mouse function KillSelf () { // Instantiate the wreck game object at the same position we are at var wreckClone = Instantiate(wreck, transform.position, transform.rotation); // Sometimes we need to carry over some variables from this object // to the wreck wreckClone.GetComponent.<MyScript>().someVariable = GetComponent.<MyScript>().someVariable; // Kill ourselves Destroy(gameObject); // C# public GameObject wreck; // As an example, we turn the game object into a wreck after 3 seconds automatically IEnumerator Start() { yield return new WaitForSeconds(3); KillSelf(); } // Calls the fire method when holding down ctrl or mouse void KillSelf () { // Instantiate the wreck game object at the same position we are at GameObject wreckClone = (GameObject) Instantiate(wreck, transform.position, transform.rotation); // Sometimes we need to carry over some variables from this object // to the wreck wreckClone.GetComponent<MyScript>().someVariable = GetComponent<MyScript>().someVariable; // Kill ourselves Destroy(gameObject); } }
Lets say you want to place a bunch of objects in a grid or circle pattern. Traditionally this would be done by either:
So use Instantiate() with a Prefab instead! We think you get the idea of why Prefabs are so useful in these scenarios. Here’s the code necessary for these scenarios:
// JavaScript // Instantiates a prefab in a circle var prefab : GameObject; var numberOfObjects = 20; var radius = 5; function Start () { for (var i = 0; i < numberOfObjects; i++) { var angle = i * Mathf.PI * 2 / numberOfObjects; var pos = Vector3 (Mathf.Cos(angle), 0, Mathf.Sin(angle)) * radius; Instantiate(prefab, pos, Quaternion.identity); } } // C# // Instantiates a prefab in a circle public GameObject prefab; public int numberOfObjects = 20; public float radius = 5f; void Start() { for (int i = 0; i < numberOfObjects; i++) { float angle = i * Mathf.PI * 2 / numberOfObjects; Vector3 pos = new Vector3(Mathf.Cos(angle), 0, Mathf.Sin(angle)) * radius; Instantiate(prefab, pos, Quaternion.identity); } }
// JavaScript // Instantiates a prefab in a grid var prefab : GameObject; var gridX = 5; var gridY = 5; var spacing = 2.0; function Start () { for (var y = 0; y < gridY; y++) { for (var x=0;x<gridX;x++) { var pos = Vector3 (x, 0, y) * spacing; Instantiate(prefab, pos, Quaternion.identity); } } } // C# // Instantiates a prefab in a grid public GameObject prefab; public float gridX = 5f; public float gridY = 5f; public float spacing = 2f; void Start() { for (int y = 0; y < gridY; y++) { for (int x = 0; x < gridX; x++) { Vector3 pos = new Vector3(x, 0, y) * spacing; Instantiate(prefab, pos, Quaternion.identity); } } } | https://docs.unity3d.com/Manual/InstantiatingPrefabs.html | CC-MAIN-2017-17 | refinedweb | 1,637 | 53.41 |
There is no supported way of changing OWA settings using WebDAV for Exchange 2007. Under Exchange 2000 and 2003, there were properties on the mailbox root which could be changed via code. However OWA 2007 is a much different animal.
Under Exchange 2000 and 2003, there are properties which you could get at with WebDAV using the namespace. While you may be still able to modify these using WebDAV under Exchange 2007, you will find that they won't affect OWA. Some examples of these are:
There are 'Set' and 'Get' Exchange Web Services (EWS) which do most of what developers want under Exchange 2007. These web services are for User availability and OOF setttings. Note though that there is no API which is for modifying OWA properties - so there will be some limitations on what can be done with Web Services.
GetUserAvailability Operation
WorkingPeriod
WorkingHours
GetUserOofSettings Operation
SetUserOofSettings Operation
PowerShell (Exchange 2007) or CDOEXM are more mailbox/store oriented and are not geared to modifying item/folder/OWA properties. CDOEX in most cases can get to the same properties as WebDAV.
Now, if you want to do some general OWA Customization (ex: tweak the UI) or add-in your own custom forms for your own message types, this is possible for OWA 2007 SP1. This ability is not in the RTM version of OWA 2007.
Whats new in Exchange server 2007 SP1 -
Customizing Outlook Web Access
Introduction to OWA customization
OWA forms registry
Outlook Web Access User Interface Customization XML Elements
I’ve put together a list of articles which cover common questions on Exchange Web Services (EWS). These
schemas.microsoft.com/exchange
An error has occurred while accessing above link. Requesting you to update.
Thanks!
Hello Sudhir,
That's not a link/URL. Its a URI (like a namespace). Its not much different than System.Windows.Forms or System.Net in .NET.
Thanks,
Dan | https://blogs.msdn.microsoft.com/webdav_101/2008/01/31/how-to-access-or-change-owa-settings-for-exchange-200720032000/ | CC-MAIN-2018-47 | refinedweb | 316 | 63.8 |
Developing a 2D game in Java ME - Part 3
Revision as of 05:45, 15 November 2011
This article shows how to use Java ME's low-level interface classes to create the game screen and access key presses. This is the third custom paint method.
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Graphics;
public class MyCanvas extends Canvas{
int width;
int height;
public MyCanvas() {
}
protected void paint(Graphics g) {
// stores width and height
width = getWidth();
height = getHeight();
// set background color
g.setColor(0,0,0);
// clear screen
g.fillRect(0, 0, width, height);
// draw a red circle that represents a ball
g.setColor(255,0,0);
g.drawArc(100, 100, 5, 5, 0, 360);
// draws a blue rectangle for the pad
g.setColor(0,0,255);
g.fillRect(100, 200, 15, 15);
}
}
To activate the Canvas, create it in the MIDlet class and display it on the CommandAction method.
public Displayable initGameCanvas() {
if (gameCanvas == null){
gameCanvas = new MyCanvas();
// add a back Command to return to the menu screen
gameCanvas.addCommand(initBackCommand());
// set the listener to our actions
gameCanvas.setCommandListener(this);
}
return gameCanvas;
}
After you add this code, you, how fast the game screen should be refreshed). To put it in coding terms:
public class MyCanvas extends GameCanvas implements Runnable{
…
public void start() {
run = true;
Thread t = new Thread(this);
t.start();
}
public void stop() {
run = false;
}
public void run(){
init();
while (run){
// update game elements, positions,*2, 0, 360);
}
/***
* Updates the ball position.
*/
public void update() {
// update position
oldX=x;
oldY=y;
x += speedX;
y += speedY;
}
}
For the pad:
public class Pad extends Entity{
int minLimit = 0;
int maxLimit = 1;
public Pad(int width, int height) {
this.width = width;
this.height = height;
}
public void paint(Graphics g) {
g.setColor(0,0,255);
g.fillRect(x, y, width, height);
}
public void update() {
// change x position according the speed
x += speedX;
// check if world bounds are reached
if (x < minLimit) {
x = minLimit;
}
if (x+width > maxLimit){
x = maxLimit - width;
}
}
}
And finally for the bricks:
Now create and configure all these classes on the canvas class. Create an init() method on the Canvas class.
public void init(){
// resets lifes
lifes = 3;
// resets score
score = 0;
// resets time
time = 0;
// bricks hit
bricksHit = 0;
// create a pad
pad = new Pad(getWidth()/10,getWidth()/10/4);
pad.x = (this.getWidth()-pad.width) / 2;
pad.y = this.getHeight() - (2*pad.height);
pad.maxLimit = getWidth();
pad.minLimit = 0;
// create;
// create bricks
Brick brick;
bricks = new Vector();
for (int i=0; (i*(BRICK_WIDTH+2))<getWidth(); i++){
brick = new Brick(Util.setColor(255,0,0));
brick.width = BRICK_WIDTH;
brick.height = BRICK_HEIGHT;
brick.x = (i*(brick.width+2));
brick.y = 20;
bricks.addElement(brick);
}
}
After all objects have been created, update their state and paint them. Add some code to the updateGameState() and updateGameScreen() methods:
// draws elements to the screen
protected void updateGameScreen(Graphics g) {
// stores width and height
width = getWidth();
height = getHeight();
// set background color
g.setColor(0,0,0);
// clear screen
g.fillRect(0, 0, width, height);
// draw score
g.setColor(255,255,255);
g.drawString("Score:"+score+" Lifes:"+lifes+" Time: "+time, 0, 0, Graphics.TOP|Graphics.LEFT);
// draw game elements
pad.paint(g);
ball.paint(g);
// draw bricks stored in the Vector bricks
for (int i=0; i < bricks.size(); i++){
Brick brick = (Brick)(bricks.elementAt(i));
brick.paint(g);
}
}
//
public void checkUserInput() {
int state = getKeyStates();
if ( (state & GameCanvas.LEFT_PRESSED) > 0) {
// move left
pad.speedX=-1;
} else if ( (state & GameCanvas.RIGHT_PRESSED) > 0) {
// move right
pad.speedX=1;
} else {
// don't move
pad.speedX=0;
}
}
Now if you run the game in the emulator you should have a real game screen, where you can play your own game: Move the pad, hit the ball, and erase all the bricks.
The next article explains how to use images to get a better looking game. | http://developer.nokia.com/community/wiki/index.php?title=Developing_a_2D_game_in_Java_ME_-_Part_3&diff=117471&oldid=117445 | CC-MAIN-2014-10 | refinedweb | 638 | 66.74 |
In the previous post, we talked about lazy evaluation in Scala. At the end of that post, we asked an interesting question: Does a
Lazy value hold an state?
In order to answer that question, we’ll try to define a type that could represent the Lazy values:
trait Lazy[T] { val evalF : () => T val value: Option[T] = None } object Lazy{ def apply[T](f: => T): Lazy[T] = new Lazy[T]{ val evalF = () => f } }
As you can see, our
Lazy type is parameterized by some
T type that represents the actual value type(
Lazy[Int] would be the representation for a lazy integer).
Besides that, we can see that it’s composed of the two main Lazy type features:
- evalF : Zero-parameter function that, when its ‘apply’ method is invoked, it evaluates the contained T expression.
- value : The result value of the interpretation of the
evalFfunction. This concrete part denotes the state in the
Lazytype, and it only admit two possible values:
None(not evaluated) or
Some(t)(if it has been already evaluated and the result itself).
We’ve also added a companion object that defines the Lazy instance constructor that receives a by-name parameter that is returned as result of the
evalF function.
Now the question is, how do we join both the evaluation function and the value that it returns so we can make
Lazy an stateful type? We define the ‘eval’ function this way:
trait Lazy[T] { lzy => val evalF : () => T val value: Option[T] = None def eval: (T, Lazy[T]) = { val evaluated = evalF.apply() evaluated -> new Lazy[T]{ mutated => val evalF = lzy.evalF override val value = Some(evaluated) override def eval: (T, Lazy[T]) = evaluated -> mutated } } }
The ‘eval’ function returns a two-element tuple:
- The value result of evaluating the expression that stands for the lazy value.
- a new
Lazyvalue version that contains the new state: the T evaluation result.
If you take a closer look, what ‘eval’ method does in first place is to invoke the evalF function so it can retrieved the T value that remained until that point not-evaluated.
Once done, we return it as well as the new Lazy value version. This new version (let’s call it
mutated version) will have in its ‘value’ attribute the result of having invoked the
evalF function. In the same way, we change its
eval method, so in future invocations the Lazy instance itself is returned instead of creating new instances (because it actually won’t change its state, like Scala’s lazy definitions work).
The interesting question that comes next is: is this an isolated case? Could anything else be defined as stateful? Let’s perform an abstraction exercise.
Looking for generics: stateful stuff
Let’s think about a simple stack:
sealed trait Stack[+T] case object Empty extends Stack[Nothing] case class NonEmpty[T](head: T, tail: Stack[T]) extends Stack
The implementation is really simple. But let’s focus in the
Stack trait and in a hypothetical
pop method that pops an element from the stack so it is returned as well as the rest of the stack:
sealed trait Stack[+T]{ def pop(): (Option[T], Stack[T]) }
Does it sound familiar to you? It is mysteriously similar to
trait Lazy[T]{ def eval: (T, Lazy[T]) }
isn’t it?
If we try to re-factor for getting a common trait between
Lazy and
Stack, we could define a much more abstract type called
State:
trait State[S,T] { def apply(s: S): (T, S) }
Simple but pretty: the
State trait is parameterized by two types: S (state type) and T (info or additional element that is returned in the specified state mutation). Though it’s simple, it’s also a ver common pattern when designing Scala systems. There’s always something that holds certain state. And everything that has an state, it mutates. And if something mutates in a fancy and smart way…oh man.
That already exists…
All this story that seems to be created from a post-modern essay, has already been subject of study for people…that study stuff. Without going into greater detail, in ScalaZ library you can find the
State monad that, apart from what was previously pointed, is fully-equipped with composability and everything that being a monad means (semigroup, monoid, …).
If we define our Lazy type with the State monad, we’ll get something similar to:
import scalaz.State type Lazy[T] = (() => T, Option[T]) def Lazy[T](f: => T) = (() => f, None) def eval[T] = State[Lazy[T], T]{ case ((f, None)) => { val evaluated = f.apply() ((f, Some(evaluated)), evaluated) } case s@((_, Some(evaluated))) => (s, evaluated) }
When decrypting the egyptian hieroglyph, given the
State[S,T] monad, we have that our S state will be a tuple composed of what exactly represents a lazy expression (that we also previously described):
type Lazy[T] = (() => T, Option[T])
- A Function0 that represents the lazy evaluation of T
- The T value that might have been evaluated or not
For building a Lazy value, we generate a tuple with a function that stands for the expression pointed with the by-name parameter of the
Lazy method; and the None value (because the Lazy guy hasn’t been evaluated yet):
def Lazy[T](f: => T) = (() => f, None)
Last, but not least (it’s actually the most important part), we define the only state transition that is possible in this type: the evaluation. This is the key when designing any State type builder: how to model what out S type stands for and the possible state transitions that we might consider.
In the case of the Lazy type, we have two possible situations: the expression hasn’t been evaluated yet (in that case, we’ll evaluate it and we’ll return the same function and the result) or the expression has been already evaluated (in that case we won’t change the state at all and we’ll return the evaluation result):
def eval[T] = State[Lazy[T], T]{ case ((f, None)) => { val evaluated = f.apply() ((f, Some(evaluated)), evaluated) } case s@((_, Some(evaluated))) => (s, evaluated) }
In order to check that we can still count on the initial features we described for the Lazy type (it can only be evaluated once, only when necessary, …) we check the following assertions:
var sideEffectDetector: Int = 0 val two = Lazy { sideEffectDetector += 1 2 } require(sideEffectDetector==0) val (_, (evaluated, evaluated2)) = (for { evaluated <- eval[Int] evaluated2 <- eval[Int] } yield (evaluated, evaluated2)).apply(two) require(sideEffectDetector == 1) require(evaluated == 2) require(evaluated2 == 2)
Please, do notice that, as we mentioned before, what is defined inside the for-comprehension are the same transitions or steps that the state we decide will face. That means that we define the mutations that any S state will suffer. Once the recipe is defined, we apply it to the initial state we want.
In this particular case, we define as initial state a lazy integer that will hold the 2 value. For checking the amount of times that our Lazy guy is evaluated, we just add a very dummy var that will be used as a counter. After that, we define inside our recipe that the state must mutate twice by ussing the
eval operation. Afterwards we’ll check that the expression of the Lazy block has only been evaluated once and that the returning value is the expected one.
I wish you the best tea for digesting all this crazy story 🙂
Please, feel free to add comments/menaces at the end of this post or even at our gitter channel.
See you on next post.
Peace out! | https://scalerablog.wordpress.com/tag/lazy/ | CC-MAIN-2017-39 | refinedweb | 1,272 | 52.83 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
convert Many2one field into Many2many field without losing any data?
I have a Many2one field already works in a lot of records in a point of sale , now is needed that this field to convert Many2many , is possible just change the field type from Many2one to Many2many without losing any data?
If yes, what's the correct way to do it? if not, how to do it?
thanks
Better to create a new field for m2m and then move the data from m2o to m2m using script.
Ex:
my_m2m_ids = fields.Many2many(......)
# Script code to move the m2o data to m2m
@api.multi
def move_m2o_to_m2m(self):
for rec in self.search([('my_m2o_field_id', '!=', False)]): # Search all the records that have value in m2o field
rec.write({'my_m2m_ids': [(6, 0, [rec.my_m2o_field_id.id])]}) # Move data from m2o to m2m
This way you can move all the data for an object from m2o to m2m field.
from openerp import models, fields, api,_
import openerp.addons.decimal_precision as dp
from math import *
class traceur(models.Model):
_inherit = ['account.analytic.account']
traceur_ids = fields.One2many('contrat_traceur', 'contract_id', 'Traceurs')
contrat_traceur_ids = fields.Many2many('contrat_traceur')
# Script code to move the m2o data to m2m
@api.multi
def convert_o2m(self):
for rec in self.search([('traceur_ids', '!=', False)]):
rec.write({'contrat_traceur_ids': [(6, 0, [rec.traceur_ids.id])]})
traceur()
**************************
def compute_tva(self):
^
IndentationError: unexpected indent
def convert_o2m(self):
This should be exactly align with the @api.multi
In python everything is indented by 1 tab (4 spaces) instead of braces.
please i need to print the date of a field one2many but still they display empty content
contrat_traceur_ids = fields.One2many('contrat_traceur', 'contract_id', 'Traceurs')
in template Qweb :
<t t-
<td class="text-center"><span t-</td>
<td class="text-center"><span t-</td>
<td class="text-center"><span t-</td>
<td class="text-center"><span t-</td>
</t>
thank you
thank you so much
please i also need to convert one2many to many2many
if possible ?
i can access database and print "qweb" from field one2many is their fields
You still need to have a o2m object to use it in m2m.
The script will be similar to move the fields to m2m. You just need to access the o2m field in loop. | https://www.odoo.com/forum/help-1/question/convert-many2one-field-into-many2many-field-without-losing-any-data-144019 | CC-MAIN-2019-04 | refinedweb | 397 | 67.45 |
This is a very simple project to build a temperature humidity controller with 2 relays to control a hot source and a humidifier. In this project i used an Arduino Uno and a DHT11 sensor combined with a lcd, potentiometer, button and relays. The button used have only the function to turn on the backlight of lcd while pressing it.
The potentiometer, regolate the setting of temperature (in Celsius degrees) and humidity (in percentage) from 0 to 99.
When the temperature or humidity reaches the set value the associated relay close the contact and turn off the associated device.
Step 1: The Supplies
To make this project we need:
- 1x Arduino Uno rev3
- 1x DHT11 sensor
- 1x LCD 16x2
- 2x 2.2KOhm Resistor
- 1x 10KOhm Resistor
- 2x 22pF Ceramic Condensator
- 1x 220uF Electrolitic Condensator
- 2x 100nF Poliester Condensator
- 1x 47uF Electrolitic Condensator
- 2x BC337 Transistor
- 3x 1N4007 Diode
- 1x 16Mhz Oscillator
- 3x 10KOhm Potentiometer
- 1x TL7805C Voltage Regolator
- 2x G5V2 Omron or Axicom Relay
-2x 3 PIN Terminal blocks
-1x 2 PIN Terminal blocks
- 2x 3 PinHead
- 1x Panel Switch
- 1x Panel momentary Button or PCB momentary Button
- 1x 100x75 mm Copper Clad Board Single Layer for PCB
- 1x A4 Glossy paper sheet
- Laser Printer
- Ferric Chloride
- 1x Plywood sheet 420x210x4mm Or a 3D Printer for the box
- Hot glue
- Wire
- Solder Iron
- Acrylic Trasparent (or your favourite color) spray paint
- 4x M4x20mm Bolt with nut
- 2x M4x15mm Bolt with nut
Step 2: The Code
Let's star from the code.
This is the simple code i used in this project.
You need the specific library of the sensor to use with Arduino. In my case i used those librarys from adafruit:
#include <DHT.h> #include<DHT_U.h> #include<Adafruit_Sensor.h> #include <LyquidCrystal.h> #define DHTPIN 8 // 8 Pin sensor #define DHTTYPE DHT11 // dht11 type of sensor used DHT dht(DHTPIN, DHTTYPE); //); //potentiometer const int tempP = A2; const int umidP = A3; //relays const int releT = A0; const int releU = A1; // costant int PMin = 0; int PMax = 1023; int statotsu = 0; int statotgiu = 0; int statohsu = 0; int statohgiu = 0; int tempimpostata = 0; int umidimpostata = 0;void setup() { //LCD initialize lcd.begin(16, 2); lcd.print("TEMP"); lcd.setCursor(9, 0); lcd.print("UMID"); lcd.setCursor(1, 1); lcd.print("IMP"); lcd.setCursor(10, 1); lcd.print("IMP"); //Out Pin pinMode(releT, OUTPUT); pinMode(releU, OUTPUT); } void loop() { statotsu = analogRead (tempP); //Read potentiometer value tempimpostata = map(statotsu, PMin, PMax, 0, 99); //Set the value from 0 to 99 based on potentiometer value statohsu = analogRead (umidP);//Read potentiometer value umidimpostata = map(statohsu, PMin, PMax, 0, 99);//Set the value from 0 to 99 based on potentiometer value int t = dht.readTemperature() - 2; //read and adjust the temperature value in Celsius from sensor int h = dht.readHumidity(); //read the humidity from sensor //Print the temperature value on the LCD lcd.setCursor(5, 0); lcd.print(t); //Print the temperature value setted by potentiometer lcd.setCursor(5, 1); if (tempimpostata < 10) { lcd.print("0"); lcd.print(tempimpostata); } else { lcd.print(tempimpostata); } //Print the humidity value on the LCD lcd.setCursor(14, 0); lcd.print(h); //Print the humidity value setted by potentiometer lcd.setCursor(14, 1); if (umidimpostata < 10) { lcd.print("0"); lcd.print(umidimpostata); } else { lcd.print(umidimpostata); } //Turn On or Off the Temperature Relay if ( t < tempimpostata ) { digitalWrite ( releT, HIGH ); } else { digitalWrite (releT, LOW); } //Turn On or Off the Humidity Relay if ( h < umidimpostata ) { digitalWrite ( releU, HIGH ); } else { digitalWrite (releU, LOW); 
} }
Step 3: The PCB
It's time to make the pcb.
First we need a circuit draw.
I made with Eagle and retouched with Illustrator
Next step is to make the pcb.
I used the toner transfer method to make a pcb. You can find a lot of this metod here
With this metod you don't need a UV Lamp and a photoresist. This metod it's simple and fast and i have good results with a not very thin lines.
Step 4: The Box
I designed the box with SolidWorks and printed the shape to cut and assemble.
I transfered the shape to the playwood, cut it and glued.
Next I painted the various parts in black and printed the front panel
here the pdf for plywood cut and for the 3D printer
Step 5: The Circuit Assembly
Now we can solder the componet to the pcb and place in to the box.
This is the fun part for me.
At the and of soldering. i Spryed the back souface of the pcb with the acrylic trasparent paint to protect the circuit from the oxidation, next i glued the pcb on the back panel
Step 6: Boxing and Testing
When i finished to soldering and positioning the component, I've closed the box and tuned on using a 12V DC 1A power supply.
Now let's connect the hot source and the humidifier, set the value of temperature (in celsius degrees) and Humidity (in percentage) and go.
2 Discussions
1 year ago
I'd love to put something like that in our greenhouse :)
Reply 1 year ago
This project it's simple. You can try and put everything in a plastic case IP67 so you have a waterproof case ;) | https://www.instructables.com/id/TempUino-a-TemperatureHumidity-Controller-Arduino-/ | CC-MAIN-2018-51 | refinedweb | 870 | 60.95 |
Hidden Features of PHP?
My list.. most of them fall more under the "hidden features" than the "favorite features" (I hope!), and not all are useful, but .. yeah.
// swap values. any number of vars works, obviously
list($a, $b) = array($b, $a);

// nested list() calls "fill" variables from multidim arrays:
$arr = array( array('aaaa', 'bbb'), array('cc', 'd') );
list(list($a, $b), list($c, $d)) = $arr;
echo "$a $b $c $d"; // -> aaaa bbb cc d

// list() values to arrays
while (list($arr1[], $arr2[], $arr3[]) = mysql_fetch_row($res)) { .. }

// or get columns from a matrix
foreach($data as $row)
    list($col_1[], $col_2[], $col_3[]) = $row;

// abusing the ternary operator to set other variables as a side effect:
$foo = $condition
    ? 'Yes' . (($bar = 'right') && false)
    : 'No'  . (($bar = 'left') && false);
// boolean False cast to string for concatenation becomes an empty string ''.

// you can also use list() but that's so boring ;-)
list($foo, $bar) = $condition ? array('Yes', 'right') : array('No', 'left');
You can nest ternary operators too, comes in handy sometimes.
// the strings' "Complex syntax" allows for *weird* stuff.
// given $i = 3, if $custom is true, set $foo to $P['size3'], else to $C['size3']:
$foo = ${$custom?'P':'C'}['size'.$i];
$foo = $custom?$P['size'.$i]:$C['size'.$i]; // does the same, but it's too long ;-)

// similarly, splitting an array $all_rows into two arrays $data0 and $data1 based
// on some field 'active' in the sub-arrays:
foreach ($all_rows as $row)
    ${'data'.($row['active']?1:0)}[] = $row;

// slight adaption from another answer here, I had to try out what else you could
// abuse as variable names.. turns out, way too much...
$string = 'f.> <!-? o+';
${$string} = 'asdfasf';
echo ${$string};              // -> 'asdfasf'
echo $GLOBALS['f.> <!-? o+']; // -> 'asdfasf'
// (don't do this. srsly.)

${''} = 456;
echo ${''};        // -> 456
echo $GLOBALS['']; // -> 456
// I have no idea.
Right, I'll stop for now :-)
Hmm, it's been a while..
// just discovered you can comment the hell out of php:
$q/* snarf */=/* quux */$_GET/* foo */[/* bar */'q'/* bazz */]/* yadda */;
So, just discovered you can pass any string as a method name IF you enclose it with curly brackets. You can't define any string as a method alas, but you can catch them with __call(), and process them further as needed. Hmmm....
class foo {
    function __call($func, $args) {
        eval ($func);
    }
}

$x = new foo;
$x->{'foreach(range(1, 10) as $i) {echo $i."\n";}'}();
Found this little gem in Reddit comments:
$foo = 'abcde';
$strlen = 'strlen';
echo "$foo is {$strlen($foo)} characters long."; // "abcde is 5 characters long."
You can't call functions inside {} directly like this, but you can use variables-holding-the-function-name and call those! (*and* you can use variable variables on it, too)
I know this sounds like a point-whoring question but let me explain where I'm coming from.
Out of college I got a job at a PHP shop. I worked there for a year and a half and thought that I had learned all there was to learn about programming.
Then I got a job as a one-man internal development shop at a sizable corporation where all the work was in C#. In my commitment to the position I started reading a ton of blogs and books and quickly realized how wrong I was to think I knew everything. I learned about unit testing, dependency injection and decorator patterns, the design principle of loose coupling, the composition over inheritance debate, and so on and on and on - I am still very much absorbing it all. Needless to say my programming style has changed entirely in the last year.
Now I find myself picking up a php project doing some coding for a friend's start-up and I feel completely constrained as opposed to programming in C#. It really bothers me that all variables at a class scope have to be referred to by prepending '$this->'. It annoys me that none of the IDEs that I've tried have very good intellisense and that my SimpleTest unit test methods have to start with the word 'test'. It drives me crazy that dynamic typing keeps me from specifying explicitly which parameter type a method expects, and that you have to write a switch statement to do method overloads. I can't stand that you can't have nested namespaces and have to use the :: operator to call the base class's constructor.
Now I have no intention of starting a PHP vs C# debate, rather what I mean to say is that I'm sure there are some PHP features that I either don't know about or know about yet fail to use properly. I am set in my C# universe and having trouble seeing outside the glass bowl.
So I'm asking, what are your favorite features of PHP? What are things you can do in it that you can't or are more difficult in the .Net languages?
Probably not many know that it is possible to specify constant "variables" as default values for function parameters:
function myFunc($param1, $param2 = MY_CONST) {
    //code...
}
Strings can be used as if they were arrays:
$str = 'hell o World';
echo $str; //outputs: "hell o World"

$str[0] = 'H';
echo $str; //outputs: "Hell o World"

$str[4] = null;
echo $str; //outputs: "Hello World"
One not so well known feature of PHP is
extract(), a function that unpacks an associative array into the local namespace. This probably exists for the autoglobal abomination but is very useful for templating:
function render_template($template_name, $context, $as_string=false)
{
    extract($context);
    if ($as_string)
        ob_start();
    include TEMPLATE_DIR . '/' . $template_name;
    if ($as_string)
        return ob_get_clean();
}
Now you can use
render_template('index.html', array('foo' => 'bar')) and only
$foo with the value
"bar" appears in the template.
The single most useful thing about PHP code is that if I don't quite understand a function I see I can look it up by using a browser and typing:
Last month I saw the "range" function in some code. It's one of the hundreds of functions I'd managed to never use but turn out to be really useful:
That url is an alias for. That simple idea, of mapping functions and keywords to urls, is awesome.
I wish other languages, frameworks, databases, operating systems has as simple a mechanism for looking up documentation.
You can use minus character in variable names like this:
class style {
    ....
    function set_bg_colour($c) {
        $this->{'background-color'} = $c;
    }
}
Why use it? No idea: maybe for a CSS model? Or some weird JSON you need to output. It's an odd feature :)
PHP enabled webspace is usually less expensive than something with (asp).net. You might call that a feature ;-)
I'm a little surprised no-one has mentioned it yet, but one of my favourite tricks with arrays is using the plus operator. It is a little bit like
array_merge() but a little simpler. I've found it's usually what I want. In effect, $a + $b appends the entries of the RHS whose keys are not already present in the LHS; where a key exists in both, the LHS entry wins (i.e. it's non-commutative, and the opposite of array_merge(), where the last array wins). Very useful for supplying some real values and falling back to a "default" array for values not provided: put the overrides on the left and the defaults on the right.
Code sample requested:
// Set the normal defaults.
$control_defaults = array( 'type' => 'text', 'size' => 30 );

// ... many lines later ...

$control_5 = array( 'name' => 'surname', 'size' => 40 ) + $control_defaults;

// This is the same as:
// $control_5 = array( 'name' => 'surname', 'size' => 40, 'type' => 'text' );
Variable variables and functions without a doubt!
$foo = 'bar';
$bar = 'foobar';
echo $$foo; //This outputs foobar

function bar() {
    echo 'Hello world!';
}

function foobar() {
    echo 'What a wonderful world!';
}

$foo();  //This outputs Hello world!
$$foo(); //This outputs What a wonderful world!
The same concept applies to object parameters ($some_object->$some_variable);
Very, very nice. Makes coding with loops and patterns very easy, and it's faster and more under control than eval (Thanx @Ross & @Joshi Spawnbrood!).
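A minimal sketch of that object case (the class, property and method names here are invented for illustration):

```php
<?php
// Dynamic property and method access via variable variables.
class User {
    public $name = 'Ada';
    public function greet() {
        return "Hi, {$this->name}";
    }
}

$user   = new User();
$prop   = 'name';
$method = 'greet';

echo $user->$prop . "\n";     // Ada
echo $user->$method() . "\n"; // Hi, Ada
```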
__autoload() (class-) files aided by
set_include_path().
In PHP5 it is now unnecessary to specify long lists of "include_once" statements when doing decent OOP.
Just define a small set of directories in which class-library files are sanely structured, and set the auto include path:
set_include_path(get_include_path() . PATH_SEPARATOR . '../libs/');
Now the
__autoload() routine:
function __autoload($classname) {
    // every class is stored in a file "libs/classname.class.php"
    // note: temporary alter error_reporting to prevent WARNINGS
    // Do not suppress errors with a @ - syntax errors will fail silently!
    include_once($classname . '.class.php');
}
Now PHP will automagically include the needed files on-demand, conserving parsing time and memory.
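On PHP 5.1+, spl_autoload_register() is generally preferred over defining a single magic __autoload(), since several autoloaders can be stacked and tried in turn (the closure form shown here needs PHP 5.3+):

```php
<?php
// Register an autoloader instead of defining __autoload().
// Multiple loaders can coexist; PHP tries each until one loads the class.
spl_autoload_register(function ($classname) {
    $file = $classname . '.class.php';
    if (is_file($file)) {
        include_once $file;
    }
});
```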
HEREDOC syntax is my favourite hidden feature. Always difficult to find as you can't Google for <<< but it stops you having to escape large chunks of HTML and still allows you to drop variables into the stream.
echo <<<EOM
  <div id="someblock">
    <img src="{$file}" />
  </div>
EOM;
Here's one, I like how setting default values on function parameters that aren't supplied is much easier:
function MyMethod($VarICareAbout, $VarIDontCareAbout = 'yippie') { }
Arrays. Judging from the answers to this question I don't think people fully appreciate just how easy and useful Arrays in PHP are. PHP Arrays act as lists, maps, stacks and generic data structures all at the same time. Arrays are implemented in the language core and are used all over the place which results in good CPU cache locality. Perl and Python both use separate language constructs for lists and maps resulting in more copying and potentially confusing transformations.
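To make that concrete, the one array type covers all of these roles:

```php
<?php
// One PHP type, several classic data structures.
$list = array('a', 'b', 'c');      // list
$map  = array('x' => 1, 'y' => 2); // map / hash table

// stack: push/pop at the end
$stack = array();
array_push($stack, 'first');
array_push($stack, 'second');
$top = array_pop($stack);          // 'second'

// queue: push at the end, shift from the front
$queue = array('job1', 'job2');
$next = array_shift($queue);       // 'job1'
```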
The standard class is a neat container. I only learned about it recently.
Instead of using an array to hold several attributes
$person = array();
$person['name'] = 'bob';
$person['age'] = 5;
You can use a standard class
$person = new stdClass();
$person->name = 'bob';
$person->age = 5;
This is particularly helpful when accessing these variables in a string
$string = $person['name'] . ' is ' . $person['age'] . ' years old.';
// vs
$string = "$person->name is $person->age years old.";
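Relatedly, you can convert between the two representations with a simple cast — an associative array becomes a stdClass instance, and back:

```php
<?php
// Cast an associative array to a stdClass object...
$person = (object) array('name' => 'bob', 'age' => 5);
echo "$person->name is $person->age years old.\n"; // bob is 5 years old.

// ...and back to an array.
$back = (array) $person;
```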
Fast block comments
/*
die('You shall not pass!');
//*/

//*
die('You shall not pass!');
//*/
These comments allow you to toggle if a code block is commented with one character.
Range() isn't hidden per se, but I still see a lot of people iterating with:
for ($i=0; $i < $x; $i++) {
    // code...
}
when they could be using:
foreach (range(0, 12) as $number) {
    // ...
}
And you can do simple things like
foreach (range(date("Y"), date("Y")+20) as $i) {
    print "\t<option value=\"{$i}\">{$i}</option>\n";
}
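range() also accepts a step argument and character endpoints, and counts down when the start is larger than the end:

```php
<?php
$evens   = range(0, 10, 2); // 0, 2, 4, 6, 8, 10
$letters = range('a', 'e'); // a, b, c, d, e
$reverse = range(5, 1);     // 5, 4, 3, 2, 1
```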
Stream Handlers allow you to extend the "FileSystem" with logic that as far as I know is quite difficult to do in most other languages.
For example with the MS-Excel Stream handler you can create a MS Excel file in the following way:
$fp = fopen("xlsfile://tmp/test.xls", "wb");
if (!is_resource($fp)) {
    die("Cannot open excel file");
}

$data = array(
    array("Name" => "Bob Loblaw", "Age" => 50),
    array("Name" => "Popo Jijo", "Age" => 75),
    array("Name" => "Tiny Tim", "Age" => 90)
);

fwrite($fp, serialize($data));
fclose($fp);
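The MS-Excel wrapper is third-party code, but the mechanism behind it is plain PHP: register a class with stream_wrapper_register() and any fopen()/fwrite() on your scheme is routed through it. A minimal in-memory sketch (the var:// scheme and the VarStream class are invented for illustration; a full wrapper would implement more of the stream methods):

```php
<?php
// Minimal custom stream wrapper: writes go to an in-memory string buffer.
class VarStream {
    public  $context;      // populated by PHP for stateful wrappers
    private $buffer = '';
    private $pos = 0;

    public function stream_open($path, $mode, $options, &$opened_path) {
        return true;
    }
    public function stream_write($data) {
        $this->buffer .= $data;
        return strlen($data);
    }
    public function stream_read($count) {
        $chunk = substr($this->buffer, $this->pos, $count);
        $this->pos += strlen($chunk);
        return $chunk;
    }
    public function stream_eof() {
        return $this->pos >= strlen($this->buffer);
    }
    public function stream_flush() {
        return true;
    }
    public function stream_close() {
    }
}

stream_wrapper_register('var', 'VarStream');

$fp = fopen('var://scratch', 'w');
$written = fwrite($fp, 'hello wrapper');
fclose($fp);
```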
Include files can have a return value you can assign to a variable.
// config.php
return array(
    'db' => array(
        'host' => 'example.org',
        'user' => 'usr',
        // ...
    ),
    // ...
);

// index.php
$config = include 'config.php';
echo $config['db']['host']; // example.org
The
static keyword is useful outside of an OOP standpoint. You can quickly and easily implement 'memoization' or function caching with something as simple as:
<?php
function foo($arg1) {
    static $cache;
    if( !isset($cache[md5($arg1)]) ) {
        // Do the work here
        $cache[md5($arg1)] = $results;
    }
    return $cache[md5($arg1)];
}
?>
The
static keyword creates a variable that persists between executions, but is visible only within the scope of that function. This technique is great for functions that hit the database like
get_all_books_by_id(...) or
get_all_categories(...) that you would call more than once during a page load.
Caveat: Make sure you find out the best way to make a key for your hash, in just about every circumstance the
md5(...) above is NOT a good decision (speed and output length issues), I used it for illustrative purposes.
sprintf('%u', crc32(...)) or
spl_object_hash(...) may be much better depending on the context.
Documentation. The documentation gets my vote. I haven't encountered a more thorough online documentation for a programming language - everything else I have to piece together from various websites and man pages.
Quick and dirty is the default.
The language is filled with useful shortcuts, This makes PHP the perfect candidate for (small) projects that have a short time-to-market. Not that clean PHP code is impossible, it just takes some extra effort and experience.
But I love PHP because it lets me express what I want without typing an essay.
PHP:
if (preg_match("/cat/","one cat")) { // do something }
JAVA:
import java.util.regex.*; Pattern p = Pattern.compile("cat"); Matcher m = p.matcher("one cat") if (m.find()) { // do something }
And yes, that includes not typing Int.
You can take advantage of the fact that the
or operator has lower precedence than
= to do this:
$page = (int) @$_GET['page'] or $page = 1;
If the value of the first assignment evaluates to
true, the second assignment is ignored. Another example:
$record = get_record($id) or throw new Exception("...");
You can use functions with a undefined number of arguments using the
func_get_args().
<?php function test() { $args = func_get_args(); echo $args[2]; // will print 'd' echo $args[1]; // will print 3 } test(1,3,'d',4); ?>
One nice feature of PHP is the CLI. It's not so "promoted" in the documentation but if you need routine scripts / console apps, using cron + php cli is really fast to develop!
I'm a bit like you, I've coded PHP for over 8 years. I had to take a .NET/C# course about a year ago and I really enjoyed the C# language (hated ASP.NET) but it made me a better PHP developer.
PHP as a language is pretty poor, but, I'm extremely quick with it and the LAMP stack is awesome. The end product far outweighs the sum of the parts.
That said, in answer to your question:
I love the SPL, the collection class in C# was something that I liked as soon as I started with it. Now I can have my cake and eat it.
Andrew | https://code.i-harness.com/en/q/efd9 | CC-MAIN-2019-39 | refinedweb | 2,315 | 72.16 |
NAME
ng_sscop - netgraph SSCOP node type
SYNOPSIS
#include <netnatm/saal/sscopdef.h> #include <netgraph/atm/ng_sscop.h>
DESCRIPTION
The sscop netgraph node type implements the ITU-T standard Q.2110. This standard describes the so called Service Specific Connection Oriented Protocol (SSCOP) that is used to carry signalling messages over the private and public UNIs and the public NNI. This protocol is a transport protocol with selective acknowledgements, and can be tailored to the environment. This implementation is a full implementation of that standard. After creation of the node, the SSCOP instance must be created by sending an “enable” message to the node. If the node is enabled, the SSCOP parameters can be retrieved and modified and the protocol can be started. The node is shut down either by a NGM_SHUTDOWN message, or when all hooks are disconnected.
HOOKS
Each sscop node has three hooks with fixed names: lower This hook must be connected to a node that ensures transport of packets to and from the remote peer node. Normally this is a ng_atm(4) node with an AAL5 hook, but the sscop node is able to work on any packet-transporting layer, like, for example, IP or UDP. The node handles flow control messages received on this hook: if it receives a NGM_HIGH_WATER_PASSED message, it declares the “lower layer busy” state. If a NGM_LOW_WATER_PASSED message is received, the busy state is cleared. Note that the node does not look at the message contents of these flow control messages. upper This is the interface to the SSCOP user. This interface uses the following message format: struct sscop_arg { */ SSCOP_RETRIEVE_COMPL_indication,/* -> */ }; The arrows in the comment show the direction of the signal, whether it is a signal that comes out of the node (‘->’), or is sent by the node user to the node (‘<-’). The arg field contains the argument to some of the signals: it is either a PDU sequence number, or the CLEAR-BUFFER flag. There are a number of special sequence numbers for some operations: SSCOP_MAXSEQNO maximum legal sequence number SSCOP_RETRIEVE_UNKNOWN retrieve transmission queue SSCOP_RETRIEVE_TOTAL retrieve transmission buffer and queue For signals that carry user data (as, for example, SSCOP_DATA_request) these two fields are followed by the variable sized user data. If the upper hook is disconnected and the SSCOP instance is not in the idle state, and the lower hook is still connected, an SSCOP_RELEASE_request is executed to release the SSCOP connection. manage This is the management interface defined in the standard. 
The data structure used here is: struct sscop_marg { */ }; The flags field contains the following flags influencing SSCOP operation: SSCOP_ROBUST enable atmf/97-0216 robustness enhancement SSCOP_POLLREX send POLL after each retransmission The bitmap has the following bits: SSCOP_SET_TCC set timer_cc SSCOP_SET_TPOLL set timer_poll SSCOP_SET_TKA set timer_keep_alive SSCOP_SET_TNR set timer_no_response SSCOP_SET_TIDLE set timer_idle SSCOP_SET_MAXK set maxk SSCOP_SET_MAXJ set maxj SSCOP_SET_MAXCC set maxcc SSCOP_SET_MAXPD set maxpd SSCOP_SET_MAXSTAT set maxstat SSCOP_SET_MR set the initial window SSCOP_SET_ROBUST set or clear SSCOP_ROBUST SSCOP_SET_POLLREX set or clear SSCOP_POLLREX The node responds to the NGM_SSCOP_SETPARAM message with the following response: struct ng_sscop_setparam_resp { uint32_t mask; int32_t error; }; Here mask contains a bitmask of the parameters that the user requested to set, but that could not be set and error is an errno(2) code describing why the parameter could not be set. NGM_SSCOP_GETPARAM This message returns the current operational parameters of the SSCOP instance in a sscop_param structure. NGM_SSCOP_ENABLE This message creates the actual SSCOP instance and initializes it. Until this is done, parameters may neither be retrieved nor set, and all messages received on any hook are discarded. NGM_SSCOP_DISABLE Destroy the SSCOP instance. After this, all messages on any hooks are discarded. NGM_SSCOP_SETDEBUG Set debugging flags. The argument is a uint32_t. NGM_SSCOP_GETDEBUG Retrieve the actual debugging flags. Needs no arguments and responds with a uint32_t. NGM_SSCOP_GETSTATE Responds with the current state of the SSCOP instance in a uint32_t. If the node is not enabled, the retrieved state is 0.
FLOW CONTROL
Flow control works on the upper and on the lower layer interface. At the lower layer interface, the two messages, NGM_HIGH_WATER_PASSED and NGM_LOW_WATER_PASSED, are used to declare or clear the “lower layer busy” state of the protocol. At the upper layer interface, the sscop node handles three types of flow control messages: NGM_HIGH_WATER_PASSED If this message is received, the SSCOP stops moving the receive window. Each time a data message is handed over to the upper layer, the receive window is moved by one message. Stopping these updates means that the window will start to close and if the peer has sent all messages allowed by the current window, it stops transmission. This means that the upper layer must be able to still receive a full window amount of messages. NGM_LOW_WATER_PASSED This will re-enable the automatic window updates, and if the space indicated in the message is larger than the current window, the window will be opened by that amount. The space is computed as the difference of the max_queuelen_packets and current members of the ngm_queue_state structure. NGM_SYNC_QUEUE_STATE If the upper layer buffer filling state, as indicated by current, is equal to or greater than high_watermark then the message is ignored. If this is not the case, the amount of receiver space is computed as the difference of max_queuelen_packets and current if automatic window updates are currently allowed, and as the difference of high_water_mark and current if window updates are disabled. If the resulting value is larger than the current window, the current window is opened up to this value. Automatic window updates are enabled if they were disabled.
SEE ALSO
netgraph(4), ng_atm(4), ng_sscfu(4), ngctl(8)
AUTHORS
Harti Brandt 〈harti@FreeBSD.org〉 | http://manpages.ubuntu.com/manpages/lucid/man4/ng_sscop.4freebsd.html | CC-MAIN-2014-15 | refinedweb | 942 | 50.67 |
FXGL: A JavaFX library for game developers — Basic game example
.
I assume that you have completed Setting up FXGL one way or another and have a Java project in your IDE that has access to the latest version of the FXGL library. If you get lost at any point during the tutorial, there’s a link at the end of the page to the full source code.
Game requirements
First and foremost, let’s define some requirements for our simple game:
- A 600×600 window.
- There is a player on the screen, represented by a blue rectangle.
- The user can move the player by pressing W, S, A or D on the keyboard.
- UI is represented by a single line of text.
- When the player moves, the UI text updates to show how many pixels the player has moved during his lifetime.
At the end of this tutorial you should have something like this:
Although it may not sound (or look) like a game, it will help you understand the basic features of FXGL. After you have finished this tutorial, you should be able to build a variety of simple games.
Preparation
Now that we have a rough idea of what we are expecting from the game, we can go back to the IDE and create a package for our game. (Note: the directory structure is similar to the Maven directory structure, however, if you don’t know what this is, don’t worry. We will cover the structure at a later stage. At this point having “src” as the main source directory is sufficient.) I’m going to use “com.myname.mygame” as the package name, where myname can optionally be replaced with your username and mygame with the game name.
- Create package “com.myname.mygame” in your IDE.
- Create a Java class with the name BasicGameApp in that package.
It is quite common to append “App” to the class where your
main() is. This allows other developers to easily identify where the main entry point to your game / application is. Also just to make your next steps a lot easier I suggest that you open your BasicGameApp class and add in these imports:
import com.almasb.fxgl.app.GameApplication; import com.almasb.fxgl.entity.Entities; import com.almasb.fxgl.entity.GameEntity; import com.almasb.fxgl.input.Input; import com.almasb.fxgl.input.UserAction; import com.almasb.fxgl.settings.GameSettings; import javafx.beans.property.IntegerProperty; import javafx.beans.property.SimpleIntegerProperty; import javafx.scene.input.KeyCode; import javafx.scene.paint.Color; import javafx.scene.shape.Rectangle; import javafx.scene.text.Text;
We are now ready to do some coding.
Coding stage
In order to use the FXGL library you need to extend GameApplication and override its abstract methods. The most straightforward way is to make your main class (BasicGameApp that we created) extend it as follows:
public class BasicGameApp extends GameApplication { @Override protected void initSettings(GameSettings settings) {} @Override protected void initInput() {} @Override protected void initAssets() {} @Override protected void initGame() {} @Override protected void initPhysics() {} @Override protected void initUI() {} @Override protected void onUpdate(double tpf) {} }
Most IDEs will generate the overridden methods automatically, as soon as you extend GameApplication. Now we want to be able to start the application. To do that simply add the following:
public static void main(String[] args) { launch(args); }
If you’ve done any JavaFX programming before, then you’ll notice that it is the exact same signature that we use to start a JavaFX application. In a nutshell, FXGL is a JavaFX application with game development features, nothing more.
Requirement 1 (Window)
At this point you should already be able to run your game, but first let’s tweak some settings.
@Override protected void initSettings(GameSettings settings) { settings.setWidth(600); settings.setHeight(600); settings.setTitle("Basic Game App"); settings.setVersion("0.1"); settings.setIntroEnabled(false); // turn off intro settings.setMenuEnabled(false); // turn off menus }
As you can see all the settings are changed within
initSettings(). Once they are set, the settings cannot be changed during runtime (we’ll talk about them in more detail some other time). Since this is supposed to be a very basic game, we are going to turn off intro and menus. Ok, you can now click run in your IDE, which should start the game with a 600×600 window and Basic Game App as a title.
So we now achieved our requirement 1. Was easy, right?
Requirement 2 (Player)
Next step is to add a player and show him on the screen. We are going to do this in
initGame(). In short, this is where you set up all the stuff that needs to be ready before the game starts.
private GameEntity player; @Override protected void initGame() { player = Entities.builder() .at(300, 300) .viewFromNode(new Rectangle(25, 25, Color.BLUE)) .buildAndAttach(getGameWorld()); }
If you are not familiar with fluent API, then this might be quite a lot to take in one go. So we’ll start slowly. There is an instance level field named player of type GameEntity. A game entity is basically a game object. This is everything you need to know about it for now. Entities class contains a collection of convenience static methods to simplify dealing with entities. We first call
.builder(), which gives us a new entity builder. By calling
.at() we position the entity where we want. In this case it’s x = 300, y = 300, so the center of the screen.
(Note: a position of an object in FXGL is it’s top-left point, akin to the JavaFX convention.)
We then tell the builder to create view of the object by using whatever UI node we pass in as the parameter. Here it’s a standard JavaFX Rectangle with width = 25, height = 25 and color blue.
(Note: you can use any JavaFX node based object, which is pretty cool.)
Finally, we call
.buildAndAttach() and pass
getGameWorld(). The method
getGameWorld()simply returns the reference to our game world. By calling build we can obtain the reference to the game entity that we were building. As for the “attach” part, it conveniently allows to attach the built entity straight to the game world. If run the application you should now see a blue rectangle near the center of the screen.
Great, we just hit requirement number 2!
Requirement 3 (Input)
We will now proceed with the requirement related to user input. We put the input handling code in
initInput().
@Override protected void initInput() { Input input = getInput(); // get input service input.addAction(new UserAction("Move Right") { @Override protected void onAction() { player.getPositionComponent().translateX(5); // move right 5 pixels } }, KeyCode.D); }
Let’s go through this snippet line by line. We first get the input service, alternatively you can just call
getInput() directly. Then we add an action, followed by a key code. Again, if you’ve used JavaFX before then you’ll know that these are exactly the same key codes used in event handlers. Anyway, we are basically saying: when ‘D’ is pressed do the action I’ve created. Now let’s look at the action itself. When we create an action we also give it a name – “Move Right”. It is important, as this is fed directly to the controls and menu systems where the user can change them anytime. So the name must be meaningful to the user and also unique. Once we’ve created the action, we override one of its methods (
onAction() this time), and supply some code. That code will be called when the action happens. From the requirements we remember that we want to move the player, so when ‘D’ is pressed we want to move the player to the right. We call
player.getPositionComponent().translateX(5), (ignore the component bit for now, it just means we are interested in its position), which translates its X coordinate by 5 pixels.
(Note: translate is a terminology used in computer graphics and basically means move.)
This results in the player object moving 5 pixels to the right. I think you can guess what the rest of the input code will look like, but just in case, here it is for ‘W’, ‘S’ and ‘A’.
@Override protected void initInput() { Input input = getInput(); // get input service input.addAction(new UserAction("Move Right") { @Override protected void onAction() { player.getPositionComponent().translateX(5); // move right 5 pixels } }, KeyCode.D); input.addAction(new UserAction("Move Left") { @Override protected void onAction() { player.getPositionComponent().translateX(-5); // move left 5 pixels } }, KeyCode.A); input.addAction(new UserAction("Move Up") { @Override protected void onAction() { player.getPositionComponent().translateY(-5); // move up 5 pixels } }, KeyCode.W); input.addAction(new UserAction("Move Down") { @Override protected void onAction() { player.getPositionComponent().translateY(5); // move down 5 pixels } }, KeyCode.S); }
Requirement 3 – done and dusted. We are more than halfway through, well done!
Requirement 4 (UI)
We now move on to the next bit – UI, which we handle in, you’ve guessed it,
initUI().
@Override protected void initUI() { Text textPixels = new Text(); textPixels.setTranslateX(50); // x = 50 textPixels.setTranslateY(100); // y = 100 getGameScene().addUINode(textPixels); // add to the scene graph }
For most UI objects we simply use JavaFX objects, since there is no need to re-invent the wheel. The only thing you should note is that when we added a game entity to the world, the game scene picked up the fact that the entity had a view associated with it and so the game scene magically added the entity to the scene graph. With UI objects we are responsible for their additions to the scene graph and we can do so by simply calling
getGameScene().addUINode().
That’s it for the requirement 4.
Requirement 5 (Gameplay)
In order to complete the last one we are going to use some JavaFX data binding.
(Note: data binding in this case refers to view being bound to some data, so that to change the view, we change the data instead. In most cases this is the preferred way and it is very convenient too.)
We start by creating an integer property named pixelsMoved:
private IntegerProperty pixelsMoved;
JavaFX properties are very much like normal wrapped primitive data types, except they can be bound to other properties and other properties can bind to them. We now initialize it in `initGame()’ right after our player creation.
... player creation code pixelsMoved = new SimpleIntegerProperty();
(Note: it is important for saving/loading systems that we don’t initialize instance level fields on declaration but do it in ‘initGame()’.)
Then we need to let it know how far the player has moved. We can do this in the input handling section.
input.addAction(new UserAction("Move Right") { @Override protected void onAction() { player.getPositionComponent().translateX(5); pixelsMoved.set(pixelsMoved.get() + 5); } }, KeyCode.D);
I’ll let you do the same with the rest of movement actions (left, up and down). The last step (both for the requirement and the tutorial) is to bind our UI object to the data object. In
initUI()once we’ve created the textPixels object, we can do the following:
textPixels.textProperty().bind(pixelsMoved.asString());
After that the UI text will show how many pixels the player has moved automatically.
You now have a basic FXGL game. Hopefully you’ve had fun.
This post was originally published on GitHub.
Read the previous FXGL article: | https://jaxenter.com/fxgl-a-javafx-library-for-game-developers-basic-game-example-127921.html | CC-MAIN-2019-04 | refinedweb | 1,882 | 58.48 |
Greetings ruby fans,
I’m a greenhorn at this cool lang ruby. Much to learn. Perhaps you
chaps could help me with an issue I have. I’ve read through a number of
the post on sorting Arrays and Hashes. And yet I can’t seem to put my
finger on the solution. I want to sort on the second column. So it
seemed from what information I gathered, that I need to gather my
results into a hash. Am I on the right track? Oh, let me tell you what
your looking at here; I am scanning each mail file in our queue for
commonalites (spammer) instead of the useless (my opinoin) qmHandle we
have for qmail. So, I’ve got a working prototype. If you could help me
on my sort and if you have any other comments/suggestions to throw my
way I’m sure I could learn a thing or two. Being new to ruby, there’s a
lot of new ideas here. Thank guys.
Code:
#!/usr/local/bin/ruby -w
require ‘find’
@results = Array.new
Iterate through the child directories & call the parse file method
def scan_dirs
root = “/var/qmail/queue/mess”
Find.find(root) do |file|
parse_file(file)
end
@results.sort!
print_results
end
Parse each file for the information we want
def parse_file(path)
file = path[(path.length-7), path.length]
sourceip = “”
subject = “”
line_no = 0
File.open(path, ‘r’).each do |line|
line = line.strip # Remove any \n\r nil, etc line_no += 1 if line_no == 1 if line.match("invoked for bounce") # Internal Bounce Msg sourceip = "SMTP" end end if (line_no == 2 and sourceip.empty?) if line.match("webmail.commspeed.net") sourceip = "Webmail" else sourceip = line.scan(/\b(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\b/) if sourceip.empty? sourceip = "No Source IP**" end end end if (line.match("SquirrelMail") and sourceip == "Webmail") or (line.match("From:") and sourceip != "Webmail") if email.empty? email = get_email(line) end end if line.match("Subject:") and subject.empty? subject = truncate(line,50) end if line_no == 20 #Nothing more we want to read in the file @results << ["#{file}", "#{sourceip}", "#{email}", "#{subject}"] line_no = 0 return end
end
end
Truncate subject line
def truncate(string, width)
if string.length <= width
string
else
string[0, width-3] + “…”
end
end
Print out results
def print_results
print “\e[2J\e[f”
print “Mess#”.ljust(10," “)
print “Source”.ljust(18,” ")
print “Email Addrress”.ljust(30, " ")
print “Subject”.ljust(50, " ")
1.times { print “\n” }
111.times { print “-” }
1.times { print “\n” }
@results.each do |line|
print line[0].ljust(10," “)
print line[1].ljust(18,” ")
print line[2].ljust(30, " ")
print line[3].ljust(50, " ")
1.times { print "\n" }
end
end
Get email address from line/string
def get_email(line_to_parse)
Pull the email address from the line
line_to_parse.scan(/\b[A-Z0-9._%±][email protected][A-Z0-9.-]+.[A-Z]{2,4}\b/i).flatten
end
Ok, begin our scan
scan_dirs
exit
Partial results listing: (I’ve modified the content to protect privacy)
Mess# Source Email Addrress Subject
3360108 111.111.17.1 [email protected]
3360167 111.111.7.213 [email protected] Subject:
Removed to protect the innocent…
3360186 Webmail [email protected] Subject:
Removed to protect the innocent
3360209 111.111.40.10 [email protected]
3360215 111.111.15.110 [email protected] Subject:
Removed to protect the innocent
3360217 111.111.9.248 [email protected] Subject:
Removed to protect the innocent
3360226 111.111.11.43 [email protected] Subject:
Removed to protect the innocent
3360228 111.111.16.34 [email protected] Subject:
3360241 111.111.18.73 [email protected] Subject:
Removed to protect the innocent
3360242 111.111.14.109 [email protected] Subject: | https://www.ruby-forum.com/t/help-on-best-way-to-gather-sort-results-array-hash/133238 | CC-MAIN-2021-31 | refinedweb | 614 | 70.5 |
No. You need to downgrade ghc. See my thread:
Ido.
Offline
Another alternative is to install the darcs version from AUR--it does not appear to suffer from this versioning mismatch.
Offline
Do I have to wait for a new version of xmonad?
No, just recompile those packages and you're good to go.
Ricardo Martins ><>< ricardomartins.cc ><>< GPG key: 0x1308F1B4
Offline
hamlet wrote:
Do I have to wait for a new version of xmonad?
No, just recompile those packages and you're good to go.
error occurs while compiling xmonad:
==> Making package: xmonad 0.5-6 (Sun Jan 20 14:16:07 CST 2008)
==> Checking Runtime Dependencies...
==> Checking Buildtime Dependencies...
==> Retrieving Sources...
    -> Downloading xmonad-0.5.tar.gz...
--14:16:07--
           => `xmonad-0.5.tar.gz'
Resolving hackage.haskell.org... 69.30.123.197
Connecting to hackage.haskell.org|69.30.123.197|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 48,853 (48K) [application/x-tar]

100%[====================================>] 48,853         4.64K/s    ETA 00:00

14:16:39 (1.91 KB/s) - `xmonad-0.5.tar.gz' saved [48853/48853]

==> Validating source files with md5sums...
    xmonad-0.5.tar.gz ... Passed
==> Extracting Sources...
    -> bsdtar -x -f xmonad-0.5.tar.gz
==> Entering fakeroot environment...
==> Starting build()...
Configuring xmonad-0.5...
Preprocessing library xmonad-0.5...
Preprocessing executables for xmonad-0.5...
Building xmonad-0.5...
( XMonad/Main.hs, dist/build/XMonad/Main.o )
[8 of 8] Compiling XMonad            ( XMonad.hs, dist/build/XMonad.o )
/usr/bin/ar: creating dist/build/libHSxmonad-0.5.a
[1 of 8] Compiling XMonad.StackSet   ( XMonad/StackSet.hs, dist/build/xmonad/xmonad-tmp/XMonad/StackSet.o )
[2 of 8] Compiling XMonad.Core       ( XMonad/Core.hs, dist/build/xmonad/xmonad-tmp/XMonad/Core.o )
[3 of 8] Compiling XMonad.Layout     ( XMonad/Layout.hs, dist/build/xmonad/xmonad-tmp/XMonad/Layout.o )
[4 of 8] Compiling XMonad.Operations ( XMonad/Operations.hs, dist/build/xmonad/xmonad-tmp/XMonad/Operations.o )
[5 of 8] Compiling XMonad.ManageHook ( XMonad/ManageHook.hs, dist/build/xmonad/xmonad-tmp/XMonad/ManageHook.o )
[6 of 8] Compiling XMonad.Config     ( XMonad/Config.hs, dist/build/xmonad/xmonad-tmp/XMonad/Config.o )
[7 of 8] Compiling XMonad.Main       ( XMonad/Main.hs, dist/build/xmonad/xmonad-tmp/XMonad/Main.o )
[8 of 8] Compiling Main              ( ./Main.hs, dist/build/xmonad/xmonad-tmp/Main.o )
Linking dist/build/xmonad/xmonad ...
Writing registration script: register.sh for xmonad-0.5...
Unregistering xmonad-0.5...
Installing: /home/hk2717/abs/xmonad/pkg/usr/lib/ghc-6.8.2/site-local/xmonad-0.5
Installing: /home/hk2717/abs/xmonad/pkg/usr/bin
install: cannot stat `/home/hk2717/abs/xmonad/examples/README': No such file or directory
install: cannot stat `/home/hk2717/abs/xmonad/examples/*.hs': No such file or directory
==> ERROR: Build Failed. Aborting...
How should I solve this problem please?
Offline
Ricardo Martins ><>< ricardomartins.cc ><>< GPG key: 0x1308F1B4
Offline
hk2717 wrote:
Yes, problem solved. Thank you.
Offline
If you use yaourt (if not, you should), this command will fix your xmonad:
yaourt -Sb haskell-x11 xmonad xmonad-contrib
Offline
Hello. Since I used this thread to configure xmonad in the first place, I'll post my issue here.
I updated to 0.7 a few days ago. Since some things changed (e.g. the tabbed layout), I made some modifications to my xmonad.hs. But somehow the dynamicLog PP doesn't report the tabbed layout anymore. When I switch to tabbed on a WS, dzen doesn't show any info about it (other layouts work, and tabbed worked in 0.6). Further, any WS in tabbed layout isn't highlighted as the current WS; it just stays the way it was when I left the WS.
A second thing, unrelated to the above, is that my Mathematica won't shift to the "math" WS when started. Strangely, sometimes it does. All the other apps shift like they're supposed to. But don't bother with this if it sounds strange to you.
Here is my xmonad.hs:
import XMonad
import System.Exit
import XMonad.ManageHook
import XMonad.Operations
import XMonad.Layout
import XMonad.Layout.Tabbed
import XMonad.Layout.PerWorkspace
import XMonad.Hooks.DynamicLog
import XMonad.Actions.CycleWS
import XMonad.Actions.Warp
import XMonad.Actions.Submap
import qualified XMonad.StackSet as W
import qualified Data.Map as M

main = xmonad defaults

defaults = defaultConfig
    { terminal           = "urxvt +sb -bg grey4 -fg antiquewhite -fn -dec-terminal-medium-r-normal--14-140-75-75-c-80-iso8859-1"
    , focusFollowsMouse  = myFocusFollowsMouse
    , borderWidth        = 1
    , modMask            = mod1Mask
    , numlockMask        = mod2Mask
    , workspaces         = ["main", "web", "read", "ding", "math", "6", "7", "ctrl", "dei"]
    , normalBorderColor  = "#1e364f"
    , focusedBorderColor = "#8e2800"
    , defaultGaps        = [(16,0,0,0)]
    , keys               = myKeys
    , mouseBindings      = myMouseBindings
    , layoutHook         = myLayout
    , manageHook         = myManageHook
    , logHook            = dynamicLogWithPP myPP >> pointerFollowsFocus 1 1
    }

myKeys conf@(XConfig {XMonad.modMask = modMask}) = M.fromList $
    [ ((modMask, xK_F1), spawn "firefox")
    , ((modMask, xK_F2), spawn "thunderbird")
    , ((modMask, xK_F3), spawn "thunar /home/floyd/uni")
    , ((modMask, xK_F4), spawn "ding -R -x")
    , ((modMask, xK_F9), spawn "qmpdclient")
    , ((modMask, 0x5e ), toggleWS)

--    , ((modMask, xK_a), submap . M.fromList $
--        [ ((0, xK_p), spawn "/usr/bin/mpc pause")
--        ])

    -- launch a terminal
    , ((modMask .|. shiftMask, xK_Return), spawn $ XMonad.terminal conf)

    -- launch dmenu
    , ((modMask, xK_p), spawn "exe=`dmenu_path | dmenu -nb grey4 -nf antiquewhite -sb '#8e2800' -sf white -p 'launch:' -fn '-dec-terminal-medium-r-normal--14-140-75-75-c-80-iso8859-1'` && eval \"exec $exe\"")

    -- launch gmrun
    , ((modMask .|. shiftMask, xK_p), spawn "gmrun")

    -- close focused window
    , ((modMask .|. shiftMask, xK_c), kill)

    -- Move focus to the next window
    , ((modMask, xK_Tab), windows W.focusDown)

    -- Move focus to the next window
    , ((modMask, xK_j), windows W.focusDown)

    -- Move focus to the previous window
    , ((modMask, xK_k), windows W.focusUp)

    -- Move focus to the master window
    , ((modMask, xK_m), windows W.focusMaster)

    -- Swap the focused window and the master window
    , ((modMask, xK_Return), windows W.swapMaster)

    -- Restart xmonad
    , ((modMask, xK_q), restart "xmonad" True)
    ]
    ++

    --
    -- mod-[1..9], Switch to workspace N
    -- mod-shift-[1..9], Move client to workspace N
    --
    [((m .|. modMask, k), windows $ f i)
        | (i, k) <- zip (XMonad.workspaces conf) [xK_1 .. xK_9]
        , (f, m) <- [(W.greedyView, 0), (W.shift, shiftMask)]]

--myLayout = tiled ||| tabbed shrinkText myTabConfig ||| Mirror tiled ||| Full
myLayout = onWorkspace "web"  (tab ||| tiled ||| Mirror tiled) $
           onWorkspace "read" (tab ||| Full) $
           (tiled ||| tab ||| Mirror tiled ||| Full)
  where
    -- default tiling algorithm partitions the screen into two panes
    tiled   = Tall nmaster delta ratio

    -- The default number of windows in the master pane
    nmaster = 1

    -- Default proportion of screen occupied by master pane
    ratio   = 6/10

    -- Percent of screen to increment by when resizing panes
    delta   = 3/100

    -- Tabbed layout
    tab     = tabbed shrinkText myTabConfig

myTabConfig = defaultTheme
    { activeColor         = "#1e364f"
    , inactiveColor       = "#132333"
    , activeBorderColor   = "#8e2800"
    , inactiveBorderColor = "#101010"
    , activeTextColor     = "#e6e6e6"
    , inactiveTextColor   = "#e6e6e6"
    , fontName            = "-dec-terminal-medium-r-normal--12-140-75-75-c-80-iso8859-1"
    }

myManageHook = composeAll
    [ className =? "MPlayer"        --> doFloat
    , className =? "Gimp"           --> doFloat
    , resource  =? "desktop_window" --> doIgnore
    , resource  =? "kdesktop"       --> doIgnore
    , resource  =? "gecko"          --> doF (W.shift "web")
    , resource  =? "base"           --> doF (W.shift "math")
    , resource  =? "XMathematica"   --> doF (W.shift "math")
    , resource  =? "ding"           --> doF (W.shift "ding")
    , resource  =? "JabRef"         --> doF (W.shift "ding")
    , resource  =? "qmpdclient"     --> doF (W.shift "dei")
    , resource  =? "kpdf"           --> doF (W.shift "read")
    ]

myFocusFollowsMouse :: Bool
myFocusFollowsMouse = True

myPP = defaultPP
    { ppCurrent = wrap "^fg(#e6e6e6)^bg(#8e2800)^p(2)^i(/home/floyd/.xmonad/icons/has_win.xbm)" "^p(2)^fg()^bg()"
                  . \wsId -> if (':' `elem` wsId) then drop 2 wsId else wsId
                  -- Trim the '[Int]:' from workspace tags
--  , ppVisible = wrap "^bg(grey30)^fg(grey75)^p(2)" "^p(2)^fg()^bg()"
    , ppHidden  = wrap "^fg(#8e2800)^bg()^p(2)^i(/home/floyd/.xmonad/icons/has_win_nv.xbm)" "^p(2)^fg()^bg()"
                  . \wsId -> if (':' `elem` wsId) then drop 2 wsId else wsId
    , ppHiddenNoWindows = wrap "^fg(#404040)^bg()^p(2)" "^p(2)^fg()^bg()" . id
                  . \wsId -> if (':' `elem` wsId) then drop 2 wsId else wsId
    , ppSep     = " ^fg(#8e2800)|||^fg() "
    , ppWsSep   = "^fg(#1e364f):"
    , ppLayout  = dzenColor "#d0a825" "" . ()
Thanks, floyd
Offline
The first issue is because the usual tabbed changed its "name" from "Tabbed" to "Tabbed Simplest", so you need to change that in myPP.
I don't know about the second issue - maybe mathematica doesn't always set its resource name to "XMathematica"? Try xprop on it when it doesn't shift to determine if it's a problem with xmonad or with mathematica.
Btw. the case statement should have something of the form:
_ -> x
to catch everything; otherwise the pattern is non-exhaustive.
Incidentally, this will also print the name of the Layout (without an icon) for your viewing pleasure.
Great, thanks!
About the Mathematica Problem:
No, xprop gives :
WM_CLASS(STRING) = "XMathematica"
Mathematica starts a session in the current WS, but when I open a new notebook it shifts to "math". Sometimes (I wasn't able to reproduce it) it shifts the first session to where it belongs. Well, it's the only app showing that behaviour, and the only commercial program I use.
floyd
From the documentation
Each ManageHook has the form:
property =? match --> action
Where property can be:
title: the window's title
resource: the resource name
className: the resource class name.
stringProperty somestring: the contents of the property somestring.
So you can try to play around and try matching some other property than 'resource'.
Sorry, I read this part of the documentation before. But I did not know what to choose as "stringProperty somestring", and className doesn't work.
xprop gives on mathematica-window:
floyd ~ $ xprop
WM_COMMAND(STRING) = { "/usr/local/Wolfram/Mathematica/6.0/SystemFiles/FrontEnd/Binaries/Linux/Mathematica -topDirectory /usr/local/Wolfram/Mathematica/6.0" }
_XFE_INPUT_WINDOW(WINDOW): window id # 0x16008e2
WM_STATE(WM_STATE):
    window state: Normal
    icon window: 0x0
_NET_WM_USER_TIME(CARDINAL) = 24766953
_NET_WM_ICON(CARDINAL) = 32, 32, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1317801229, 1424766503, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-------------------------------
----BUNCH OF NUMBERS---
-------------------------------
XdndAware(ATOM) = BITMAP
_MOTIF_DRAG_RECEIVER_INFO(_MOTIF_DRAG_RECEIVER_INFO) = 0x6c, 0x0, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8f, 0xbf, 0x10, 0x0, 0x0, 0x0
_NET_WM_NAME(UTF8_STRING) = 0x2f, 0x68, 0x6f, 0x6d, 0x65, 0x2f, 0x66, 0x6c, 0x6f, 0x79, 0x64, 0x2f, 0x75, 0x6e, 0x69, 0x2f, 0x70, 0x72, 0x6f, 0x6a, 0x2f, 0x66, 0x72, 0x65, 0x79, 0x2f, 0x6d, 0x61, 0x74, 0x68, 0x65, 0x6d, 0x61, 0x74, 0x69, 0x63, 0x61, 0x2f, 0x65, 0x6c, 0x6c, 0x69, 0x70, 0x73, 0x65, 0x2e, 0x6e, 0x62
WM_CLIENT_LEADER(WINDOW): window id # 0x1600007
WM_WINDOW_ROLE(STRING) =
_NET_WM_PID(CARDINAL) = 6202
_NET_WM_WINDOW_TYPE(ATOM) = _NET_WM_WINDOW_TYPE_NORMAL
_MOTIF_WM_HINTS(_MOTIF_WM_HINTS) = 0x2, 0x1, 0x7e, 0x0, 0x0
WM_PROTOCOLS(ATOM): protocols  WM_DELETE_WINDOW, WM_TAKE_FOCUS, _NET_WM_PING
WM_NAME(STRING) = "/home/floyd/uni/proj/frey/mathematica/ellipse.nb"
WM_LOCALE_NAME(STRING) = "de_DE.utf8"
WM_CLASS(STRING) = "XMathematica"
WM_HINTS(WM_HINTS):
    Client accepts input or input focus: True
    Initial state is Normal State.
    bitmap id # to use for icon: 0x16008d1
    bitmap id # of mask for icon: 0x16008d1
    window id # of group leader: 0x1600007
WM_NORMAL_HINTS(WM_SIZE_HINTS):
    user specified location: 700, 16
    program specified location: 700, 16
    user specified size: 698 by 1032
    program specified size: 698 by 1032
    program specified minimum size: 76 by 155
    window gravity: NorthWest
WM_CLIENT_MACHINE(STRING) = "deepthought"
Btw, is there any short text about Haskell syntax & symbols for xmonad users? I really don't have the time to learn Haskell from scratch but would like to configure my xmonad without copy/paste & manipulate.
For example: It's clear to me that the case statement is "incomplete", since it says nothing about the other cases.
But when I paste
_ -> x
I get
xmonad.hs:211:59: parse error on input `->'
from the compiler.
A quick tutorial would be very useful. Xmonad is excellent, but the configuration...
Last edited by floyd (2008-04-08 16:46:08)
Please note that Haskell uses layout instead of brackets to determine which lines of text belong to each other.
()"
  _ -> x
  )
where "_ -> x" may be replaced by
otherwise -> x
should work.
I used _exactly_ 2 space characters in front of each line.
Last edited by choener (2008-04-08 17:06:04)
Ah, ok. It works that way, thanks!
Layout means the space between stuff and not "visual" layout on the screen, right? What about tabulator?
Layout means the space between stuff and not "visual" layout on the screen, right? What about tabulator?
Tab = 8 spaces for haskell. It's better not to use any tabs, or then stick to *only* tabs.
Hi, all. I just started learning Haskell. How can I make a workspace where every window moved into it becomes floating?
BTW: Why doesn't my GHCI (v6.8.2) have String type in prelude environment?
> :t "abc"
"abc" :: [Char]
Time to revive an old thread!
Decided to try out Xmonad. I've been using awesome-git, but getting tired of changing my config daily to keep up with the daily syntax changes.
I've managed to get a few of the things working that I needed: urgency hints, keybindings, etc. (downloaded a nice Haskell ebook from Wikibooks), but there are some things that I'm not quite sure about.
I'm currently using xmobar, and I have it set up very nicely, but I have a couple questions about it:
1) Is it possible to add mouse functionality for clicking on tabs to go to workspaces? (If not I've been looking into dzen2), or is dzen2 just a better option all around?
2) Is there a way to show all windows on the taskbar?
And a question about Xmonad itself:
3) In awesome, I was able to launch new clients but keep the master window the same (i.e. on my workspace for internet, when I open a console I'd like it to launch in the bottom 1/2 of the screen, and not replace opera.) I looked into XMonad.Actions.DwmPromote, but I don't think that's what I want.
1) Is it possible to add mouse functionality for clicking on tabs to go to workspaces? (If not I've been looking into dzen2), or is dzen2 just a better option all around?
It's been a while since I last ran Xmonad, so please don't take this as the final answer, but at the time there was no mouse-to-desktop interaction with dzen either. This may not be true any longer...
heleos, for question 3, this is what I've figured out: if the focus is on the master area, the newly launched client will be placed in the master area, and the "oldest" client in the master area is demoted. If the focus is on the stacking area, the new client is created in the same area no matter what. If there are fewer clients than the maximum allowed in the master area, then there is no "slave stack" and all clients are in the master area.
Here's my question: does anyone know how to write a layout?
Hi All?
I took the plunge and installed xmonad but I have a couple of questions I am hoping you can solve.
How do I make a keybinding to change screen focus with control+right/left? And change it with alt+ctrl+shift+right/left?
Is there a way to get the spiral/dwindle layout from awesome? I tried XMonad.Layout.Spiral, but that is different. I want it to keep dividing the left space in two.
I want an xmobar on both of my monitors, how do I make that happen? And is there a way to show which tag I am on in xmobar?
Last edited by Vintendo (2008-07-29 22:44:39)
Sorry far0k, but I haven't used Xmonad in months and I can't recall the specifics. I'm sure someone else here can offer some advice though.
Hello,
how can I make the mouse cursor look normal (not like an X) when I use GTK applications (or others)?
And how can I set the icon path for GTK applications?
Pre-validating Many to Many fields.
Django’s form validation is great. You can rely on it to parse data that you got from the user, and ensure that the rules you have implemented are all applied. Model validation is similar, and I tend to use that in preference, as I often make changes from outside of the request-response cycle. Indeed, I’ve started to rewrite my API framework around using forms for serialisation as well as parsing.
One aspect of validation that is a little hard to grok is changes to many-to-many fields. For instance, the part of the system I am working on right now has Tags that are applied to Units, but a change to business requirements is that these tags need to be grouped, and a unit may only have one tag from a given TagGroup.
Preventing units from being saved with an invalid combination of Tags is simple if you use the django.db.models.signals.m2m_changed signal.
from django.db.models.signals import m2m_changed
from django.dispatch import receiver


@receiver(m2m_changed, sender=Tag.units.through)
def prevent_duplicate_tags_from_group(sender, instance, action, reverse, model, pk_set, **kwargs):
    if action != 'pre_add':
        return

    if reverse:
        # At this point, we know we are adding Tags to a Unit.
        tags = Tag.objects.filter(pk__in=pk_set).select_related('group')
        existing_groups = TagGroup.objects.filter(tags__units=instance).distinct()
        invalid_tags = set()
        for tag in tags:
            if tag.group in existing_groups:
                invalid_tags.add(tag)
            group_count = 0
            for other_tag in tags:
                if other_tag.group == tag.group:
                    group_count += 1
            if group_count > 1:
                invalid_tags.add(tag)
        if invalid_tags:
            raise ValidationError(_(u'A unit may only have one Tag from a given Tag Group'))
    else:
        # At this point, we know we are adding Units to a Tag.
        units = Unit.objects.filter(pk__in=pk_set)
        group = instance.group
        invalid_units = []
        for unit in units:
            if unit.tags.exclude(pk=instance.pk).filter(group=group).exists():
                invalid_units.append(unit.name)
        if invalid_units:
            raise ValidationError(_(u'The unit%s "%s" already ha%s a Tag from group "%s"' % (
                "s" if len(invalid_units) > 1 else "",
                ", ".join(invalid_units),
                "ve" if len(invalid_units) > 1 else "s",
                group.name
            )))
Now, this on its own is nice enough. However, if you try to save invalid data from within the admin interface, then you will get an ugly traceback. If only there was a way to get this validation code to run during the validation phase of a form, i.e., when you are cleaning it…
So, we can create a form:
from django import forms

from models import Tag, prevent_duplicate_tags_from_group


class TagForm(forms.ModelForm):
    class Meta:
        model = Tag

    def clean_units(self):
        units = self.cleaned_data.get('units', [])
        if units:
            prevent_duplicate_tags_from_group(
                sender=self.instance.units,
                instance=self.instance,
                action="pre_add",
                reverse=False,
                model=self.instance.units.model,
                pk_set=units
            )
        return self.cleaned_data
You can create a complementary form on the other end (or, if you already have one, then just hook this into the field validator). The bonus here is that the validation errors will be put on the field with errors, in this case units. | http://schinckel.net/tags/forms/ | crawl-003 | refinedweb | 510 | 58.08 |
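As an aside, the core rule being enforced here (no two tags from the same group on one unit) can be sketched without any framework at all; the function and names below are invented for illustration and are not part of the original code:

```python
from collections import Counter

def duplicate_groups(tag_groups):
    """Return the group names that occur more than once among a unit's tags.

    An empty result means the one-tag-per-group rule holds.
    """
    counts = Counter(tag_groups)
    return sorted(group for group, n in counts.items() if n > 1)

# a unit tagged twice from the "colour" group violates the rule
violations = duplicate_groups(["colour", "size", "colour"])
ok = duplicate_groups(["colour", "size"])
```

The Django handler above is doing essentially this check, plus the ORM queries needed to gather the group of each tag.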
My last post generated a lot of great comments. I think it's important not to confuse your schema with your contract. A client and a service have to agree on all sorts of things, only some of which are captured in your WSDL/XSD(/Policy). My goal in proposing that almost everything in your XSD be optional is to find the sweet-spot between easy coding and flexibility for evolution.
In general, I want to provide an XSD so that consumers can build an object model and get intellisense if they want to. But once I commit something to XSD, changing it (as opposed to extending it) typically means changing the XSD namespace. Most OX mappers bake the target namespace into code. So if I change that, I'm forcing a lot of changes into clients. I'm also forcing my service to work with two essentially identical type models if I want to support multiple versions. That leads to loads of boiler-plate object-to-object mapping code. If a client is using the same types to talk to more than one service as well, changing schema namespace may introduce the same sort of mapping there as well. So my goal is to maximize support for evolution without changing target namespaces. This is the heart of my versioning model, which breaks the world into three sorts of changes: additions, extensions and changes. I can handle the first two without changing namespaces. I want to see what changes I can handle without changing namespaces. The most obvious (are they also the most common?) change is occurrence requirements.
Consider a simple example. I have a system that receives Customer data. In V1, I define customer like this:
<complexType name="CustomerType">
  <sequence>
    <element name="FirstName" type="string" />
    <element name="LastName" type="string" />
    <element name="Address" type="tn:AddressType" minOccurs="0" />
  </sequence>
</complexType>
FirstName and LastName are required, Address is optional. Then, in V2, I decide that Address is actually required. I can implement this change two ways:

1. Make Address required in the schema, which (as described above) means changing the schema namespace and forcing clients onto new types.
2. Leave Address optional in the schema and enforce the new requirement in the service itself.
The advantage of (2) is that all systems that were already sending the optional Address data simply continue to work, and I don't have to write lots of object model to object model mapping code. It's only those that do not work that way which need to be modified (but if Address is a real requirement, they needed to be modified anyway). The benefit for those systems is that the modification does not require moving to a new schema definition, which means more work on their part (if they’re using the same types to talk to other systems that aren’t moving to the V2 namespace at the same time, this saves writing lots of boilerplate object-to-object mapping code).
You might argue that this approach makes it so the client developer doesn't or can't learn what the real requirements are for your service. I disagree. That information just has to be conveyed some other way. Further, this encourages the client developer to understand that your service will evolve and that they should expect that they may need to move to keep up.
It's worth noting that if you are designing an XSD for use by lots of different services, you probably don't know what occurrence constraints they will require. Making content optional makes loads of sense in that case too.
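Taken to its logical end, a deliberately lax version of the CustomerType above makes every element optional and leaves an explicit extension point; the xs:any wildcard here is my own illustration, not something from the original post:

```xml
<complexType name="CustomerType">
  <sequence>
    <element name="FirstName" type="string" minOccurs="0" />
    <element name="LastName" type="string" minOccurs="0" />
    <element name="Address" type="tn:AddressType" minOccurs="0" />
    <any namespace="##other" processContents="lax"
         minOccurs="0" maxOccurs="unbounded" />
  </sequence>
</complexType>
```

Clients that omit elements still validate, and elements from other namespaces can be added later without touching this type.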
Problem Statement
A robot is located at the top-left corner of an m x n grid.
The robot can only move either down or right at any point in time. The robot is trying to reach the bottom-right corner of the grid.
Now consider if some obstacles are added to the grid. How many unique paths would there be?
Sample Test Cases
Input:
[
  [0,0,0],
  [0,1,0],
  [0,0,0]
]
Output: 2
Explanation: There is one obstacle in the middle of the 3x3 grid above.
There are two ways to reach the bottom-right corner:
1. Right -> Right -> Down -> Down
2. Down -> Down -> Right -> Right
Problem Solution
The most efficient solution to this problem can be achieved using dynamic programming. As in every dynamic programming problem, we will not recompute the subproblems. A temporary 2D matrix will be constructed and values will be stored in it using the bottom-up approach.
We will create a 2D matrix of the same size as the given matrix.
Next step will be to traverse the array row wise and fill the values in it.
So, if an obstacle is found, we will assign that cell a value of 0.
For the first row and the first column, set each value to 1 until an obstacle is encountered; every cell after an obstacle stays 0.
For every other cell, take the sum of the value above and the value to the left if no obstacle is present at the corresponding position in the given matrix, i.e. F(x, y) = F(x-1, y) + F(x, y-1).
Return the last value of the created 2d matrix.
Complexity Analysis
Time Complexity: O(mn), since we traverse the entire matrix once.
Space Complexity: O(mn), as we create a DP matrix of the same size as the one given in the problem.
Code Implementation
#include <bits/stdc++.h>
using namespace std;

#define m 3
#define n 4

int get_unique_paths_with_obstacle(int arr[m][n])
{
    int path[m][n];

    // initialize the DP table to 0
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j++) {
            path[i][j] = 0;
        }
    }

    // Base condition: no paths if the starting cell is itself an obstacle
    if (arr[0][0] == 1)
        return 0;

    path[0][0] = 1; // initializing the first position

    // set the first column
    for (int i = 1; i < m; i++) {
        if (arr[i][0] == 0) {
            path[i][0] = path[i - 1][0];
        }
    }

    // set the first row
    for (int i = 1; i < n; i++) {
        if (arr[0][i] == 0) {
            path[0][i] = path[0][i - 1];
        }
    }

    // apply the formula path[i][j] = path[i-1][j] + path[i][j-1]
    // if arr[i][j] != 1, and leave 0 otherwise
    for (int i = 1; i < m; i++)
        for (int j = 1; j < n; j++)
            if (!arr[i][j])
                path[i][j] = path[i - 1][j] + path[i][j - 1];

    return path[m - 1][n - 1];
}

int main()
{
    int arr[m][n];
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            // inserting obstacles
            if ((i == 0 && j == 2) || (i == 1 && j == 0) || (i == 1 && j == 2)) {
                arr[i][j] = 1;
                continue;
            }
            arr[i][j] = 0;
        }
    }

    cout << "Input array is " << endl;
    for (int i = 0; i < m; ++i) {
        for (int j = 0; j < n; ++j) {
            cout << arr[i][j] << " ";
        }
        cout << endl;
    }

    int result = get_unique_paths_with_obstacle(arr);
    cout << "The number of unique paths for a " << m << " x " << n
         << " matrix is = " << result << endl;
    return 0;
}
I am creating my own dataset class which takes a path to a csv file containing image paths and labels:
class MyDataset():
    def __init__(self, csv_path):
        ...

    def __len__(self):
        return len(...)

    def __getitem__(self, index):
        ...
        return (img, label)
As you can see it contains the methods __init__, __getitem__ and __len__.
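For concreteness, here is one way such a skeleton might be filled in; the CSV layout (one image_path,label pair per row) and the file handling are invented for illustration, and the actual image loading is stubbed out:

```python
import csv
import os
import tempfile

class MyDataset:
    """Duck-typed dataset: only __len__ and __getitem__, no base class."""

    def __init__(self, csv_path):
        # assumed layout: each row is "image_path,label"
        with open(csv_path, newline="") as f:
            self.rows = [(path, int(label)) for path, label in csv.reader(f)]

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, index):
        img_path, label = self.rows[index]
        # a real implementation would load and transform the image here
        return (img_path, label)

# exercise the class with a throwaway CSV; no real images are needed
tmp = os.path.join(tempfile.mkdtemp(), "data.csv")
with open(tmp, "w", newline="") as f:
    f.write("a.png,0\nb.png,1\n")
ds = MyDataset(tmp)
```

len() and indexing behave exactly as a DataLoader would use them.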
Now, this is the source code for PyTorch's Dataset:
class Dataset(object):
    def __getitem__(self, index):
        raise NotImplementedError

    def __add__(self, other):
        return ConcatDataset([self, other])
and in docs it says that:
All datasets that represent a map from keys to data samples should subclass it
But I don't see anything special except the __add__ method, which I think in my case is not needed (otherwise I could write my own). Is it still necessary to inherit from Dataset after having implemented my own __getitem__ and __len__, to be able to create a dataloader later on? What advantage is there from subclassing it?
9.3. Fitting a function to data with nonlinear least squares

This recipe shows an application of numerical optimization to nonlinear least squares curve fitting. The goal is to fit a function, depending on several parameters, to data points. In contrast to the linear least squares method, this function does not have to be linear in those parameters.
We will illustrate this method on artificial data.
How to do it...
1. Let's import the usual libraries:
import numpy as np
import scipy.optimize as opt
import matplotlib.pyplot as plt
%matplotlib inline
2. We define a logistic function with four parameters:
def f(x, a, b, c, d):
    return a / (1. + np.exp(-c * (x - d))) + b
3. Let's define four random parameters:
a, c = np.random.exponential(size=2)
b, d = np.random.randn(2)
4. Now, we generate random data points by using the sigmoid function and adding a bit of noise:
n = 100
x = np.linspace(-10., 10., n)
y_model = f(x, a, b, c, d)
y = y_model + a * .2 * np.random.randn(n)
5. Here is a plot of the data points, with the particular sigmoid used for their generation (in dashed black):
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, y_model, '--k')
ax.plot(x, y, 'o')
6. We now assume that we only have access to the data points and not the underlying generative function. These points could have been obtained during an experiment. By looking at the data, the points appear to approximately follow a sigmoid, so we may want to try to fit such a curve to the points. That's what curve fitting is about. SciPy's curve_fit() function allows us to fit a curve defined by an arbitrary Python function to the data:
(a_, b_, c_, d_), _ = opt.curve_fit(f, x, y)
7. Now, let's take a look at the fitted sigmoid curve:
y_fit = f(x, a_, b_, c_, d_)
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.plot(x, y_model, '--k')
ax.plot(x, y, 'o')
ax.plot(x, y_fit, '-')
The fitted sigmoid appears to be reasonably close to the original sigmoid used for data generation.
How it works...
In SciPy, nonlinear least squares curve fitting works by minimizing the following cost function:

\(S(\beta) = \sum_{i=1}^{n} \left(y_i - f(x_i, \beta)\right)^2\)
Here, \(\beta\) is the vector of parameters (in our example, \(\beta =(a,b,c,d)\)).
Nonlinear least squares is really similar to linear least squares for linear regression. Whereas the function \(f\) is linear in the parameters with the linear least squares method, it is not linear here. Therefore, the minimization of \(S(\beta)\) cannot be done analytically by solving the derivative of \(S\) with respect to \(\beta\). SciPy implements an iterative method called the Levenberg-Marquardt algorithm (an extension of the Gauss-Newton algorithm).
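To make the damped Gauss-Newton idea concrete, here is a tiny pure-Python sketch of a Levenberg-Marquardt-style iteration for a one-parameter model; the model, data and update rule are illustrative only, and SciPy's implementation is far more sophisticated:

```python
import math

def model(x, c):
    # one-parameter model f(x; c) = exp(c * x)
    return math.exp(c * x)

def sse(c, xs, ys):
    # the least-squares cost S(beta) being minimized
    return sum((y - model(x, c)) ** 2 for x, y in zip(xs, ys))

def fit_lm(xs, ys, c0=0.0, lam=1e-3, n_iter=100):
    c = c0
    for _ in range(n_iter):
        r = [y - model(x, c) for x, y in zip(xs, ys)]  # residuals
        J = [-x * model(x, c) for x in xs]             # d(residual)/dc
        g = sum(Ji * ri for Ji, ri in zip(J, r))       # J^T r
        H = sum(Ji * Ji for Ji in J)                   # J^T J
        step = -g / (H + lam)                          # damped Gauss-Newton step
        if sse(c + step, xs, ys) < sse(c, xs, ys):
            c += step      # good step: accept and damp less (more like Gauss-Newton)
            lam *= 0.5
        else:
            lam *= 10.0    # bad step: reject and damp more (more like gradient descent)
    return c

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.exp(0.5 * x) for x in xs]  # noise-free data with true c = 0.5
c_hat = fit_lm(xs, ys)
```

The lam parameter interpolates between a pure Gauss-Newton step (lam small) and a short gradient-descent-like step (lam large).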
There's more...
Here are further references:
- Reference documentation of curve_fit available at
- Nonlinear least squares on Wikipedia, available at
- Levenberg-Marquardt algorithm on Wikipedia, available at
See also
- Minimizing a mathematical function | https://ipython-books.github.io/93-fitting-a-function-to-data-with-nonlinear-least-squares/ | CC-MAIN-2019-09 | refinedweb | 500 | 58.79 |
JTable.setRowHeight() in CellRenderer/Editor: new rows not rendered
Nicole Williams
Greenhorn
Joined: Oct 08, 2009
Posts: 9
posted
Jul 10, 2011 06:28:41
Hello!
I am using JTable. One of the cells contains multiple elements (panels, JLabels, JTextAreas and JButtons) and needs to change its height dynamically depending on the user's input while editing. My renderer and editor are in the same class and are using the same method to build the main panel. At the end of this method the preferred height is calculated:
public Component getTableCellEditorComponent(JTable table, Object value, boolean isSelected, int row, int col) {
    buildPanel();
    return new JScrollPane(panel);
}

public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int col) {
    buildPanel();
    return panel;
}

private void buildPanel() {
    panel.removeAll();
    ...
    if (table.getRowHeight() < prefSize) {
        table.setRowHeight(row, prefSize);
    }
Now this is working fine for rendering and editing of the cells. But I'm facing a problem when adding new rows into this table by adding data to table model. If the table was first rendered with 6 rows, then only max. 6 rows are rendered, no matter how many will be inserted later. So if I add rows in the beginning of the table, these rows are being shown and my old rows "disappear". Updating UI doesn't help. If I remove this code which is setting row height, inserting of rows is working fine!!
Any ideas? Thanks!
Rob Camick
Ranch Hand
Joined: Jun 13, 2009
Posts: 2225
posted
Jul 10, 2011 09:24:29
if (table.getRowHeight() < prefSize) {
    table.setRowHeight(row, prefSize);
}
I'm guessing you have an infinite loop.
The condition will always be true so you keep trying to repaint the table.
It's never a good idea to customize a property of the table in the renderer, but if you do you need to prevent infinite loops. I would guess the code should be:
if (table.getRowHeight(row) < prefSize) {
    table.setRowHeight(row, prefSize);
}
If you need more help then post your SSCCE that demonstrates the problem.
Nicole Williams
Greenhorn
Joined: Oct 08, 2009
Posts: 9
posted
Jul 10, 2011 10:56:29
Hello Rob,
thanks for the answer! I guess it's not an infinite loop because there is no loop, but the condition is not needed, you are right. Now I created a small executable example of what is happening
This is the main class:
class MyTable extends JFrame {
    private JPanel topPanel;
    private JTable table;
    private JScrollPane scrollPane;

    public MyTable() {
        setSize(300, 200);
        topPanel = new JPanel();
        topPanel.setLayout(new BorderLayout());
        getContentPane().add(topPanel);

        final ArrayList<String> data = new ArrayList<String>();
        data.add("1");
        data.add("2");

        table = new JTable(new MyTableModel(data));
        topPanel.add(table, BorderLayout.CENTER);
        table.getColumnModel().getColumn(0)
                .setCellEditor(new MyRendererEditor());
        table.getColumnModel().getColumn(0)
                .setCellRenderer(new MyRendererEditor());

        JButton button = new JButton("add row");
        button.addActionListener(new AbstractAction() {
            @Override
            public void actionPerformed(ActionEvent arg0) {
                data.add("one more row");
                table.updateUI();
                System.out.println(table.getRowCount());
            }
        });
        topPanel.add(button, BorderLayout.SOUTH);
    }

    public static void main(String args[]) {
        MyTable mainFrame = new MyTable();
        mainFrame.setVisible(true);
    }
}
Here goes the table model:
public class MyTableModel extends AbstractTableModel {
    ArrayList<String> data;

    public MyTableModel(ArrayList<String> data) {
        this.data = data;
    }

    @Override
    public int getColumnCount() {
        return 1;
    }

    @Override
    public int getRowCount() {
        return data.size();
    }

    @Override
    public Object getValueAt(int row, int col) {
        return data.get(row);
    }

    /* (non-Javadoc)
     * @see javax.swing.table.AbstractTableModel#isCellEditable(int, int)
     */
    @Override
    public boolean isCellEditable(int arg0, int arg1) {
        return true;
    }
}
and the renderer:
public class MyRendererEditor extends AbstractCellEditor implements TableCellRenderer, TableCellEditor {
    JPanel panel = new JPanel();
    JTextArea ta = new JTextArea();

    @Override
    public Object getCellEditorValue() {
        return ta.getText();
    }

    @Override
    public Component getTableCellRendererComponent(JTable table, Object value,
            boolean isSelected, boolean hasFocus, int row, int col) {
        buildPanel((String) value);
        // table.setRowHeight(row, panel.getPreferredSize().height);
        return panel;
    }

    @Override
    public Component getTableCellEditorComponent(JTable table, Object value,
            boolean isSelected, int row, int col) {
        buildPanel((String) value);
        // table.setRowHeight(row, panel.getPreferredSize().height);
        return panel;
    }

    private void buildPanel(String text) {
        panel.removeAll();
        ta = new JTextArea();
        ta.setText(text);
        panel.add(ta);
    }
}
With this code you can add lines to the table. If you uncomment the 2 lines which set the cell height in the last snippet, you don't see the added lines. System.out shows how many lines there should be in the table.
The problem with my real table is that each cell has a different height which is being changed while editing, so I want to resize it dynamically…
Rob Camick
Ranch Hand
Joined: Jun 13, 2009
Posts: 2225
posted
Jul 10, 2011 12:45:20
I guess it's not an infinite loop because there is no loop
Just because you don't have a loop in your code doesn't mean you didn't create a loop. Add a System.out.println(...) to the renderer code to see the loop.
There are many things wrong with your code and I can't explain exactly why you are getting the problem because it's a combination of things.
table.updateUI();
Never invoke the updateUI() method directly. This is invoked when the LAF is changed. You did not change the LAF.
data.add("one more row");
This is not how you update data in the table. All updates should be done through the model. The model will then notify the table that a change has been made and the table will repaint itself. So you will need to create a custom addRow(...) method in your TableModel and you will need to fire the appropriate events. Reread the Swing tutorial to see how the custom model there fires events.
Of course the easier solution is to just use the DefaultTableModel so you don't need to do this. Then your code would be something like:
model = new DefaultTableModel(0, 1); // where model is defined as a class variable
model.addRow( new String[]{"1"} );
model.addRow( new String[]{"2"} );
table = new JTable(model);
Then when you want to add a row all you do is:
model.addRow( new String[]{"another row"} );
Finally, you will still have the looping problem, so you will need to make a change similar to the suggestion I made in my first posting. Although you might want to use "!=" instead of "<" to handle situations when the size becomes smaller as well as larger.
Again, as I mentioned in my first reply, this code does not belong in the renderer. Maybe you should be adding a TableModelListener to the TableModel so you can listen for changes in the data and then reset the row height at that time.
Nicole Williams
Greenhorn
Joined: Oct 08, 2009
Posts: 9
posted
Jul 10, 2011 14:53:01
thank you so much!!
I took a look at the DefaultTableModel implementation and implemented insertRow() for my model. Basically it is the same code I was using outside of the table model, but it also fires an event:
fireTableRowsInserted(row, row);
This actually solved the problem!!! Now I can insert rows in my table. I will refactor my code a bit and do the same for delete/move rows. And I will try to remove the resizing from the renderer, but it might be more complicated…
Thank you for your time!
Manager Menu
Last Updated: 2018-10-14
When adding new functionality to the manager you'll want to add items for your new page in the menu. The menu is a static singleton object that can be changed or updated from anywhere, at any time. However, since the menu is the same for all users, changing it at runtime for the state of a single user will not work.
If you are building a custom module, our recommendation is that you add your menu items in the Init() method of your module. This will ensure that it is only executed once in the application lifecycle.
Creating Menu Items
The MenuItem class has the following properties that you should set when adding a new item to the menu.
InternalId
As the menu is ordered into sections, at least the topmost level of items should have an Internal Id so that it's easy to access them when adding child items.
Name
This is the display name that will be printed out in the menu when it is folded out.
Controller
The controller that the generated link should point to. As the items at the topmost level are only used as sections, this property is not needed for those items.
Action
The action that the generated link should point to. As the items at the topmost level are only used as sections, this property is not needed for those items.
Css
The css class that will be added to the icon for the menu item. The manager interface uses the font icons from Font Awesome, so any of the icons there will work.
Policy
Set this if you want to restrict the menu item depending on the claims of the currently logged-in user. Please note that each item can only have one policy; a policy can however be a set of several claims.
Adding Menu Items
Adding a Menu Section
The following code will add a new section below the Settings section.
using Piranha.Manager;
Menu.Items.Insert(2, new Menu.MenuItem
{
    InternalId = "MyModule",
    Name = "My Module",
    Css = "fas fa-fish"
});
Adding a Menu Item
The following code will add a menu item under our new group.
using Piranha.Manager;
Menu.Items["MyModule"].Items.Add(new Menu.MenuItem
{
    InternalId = "FirstFunction",
    Name = "First Function",
    Controller = "MyModule",
    Action = "FirstAction",
    Css = "fas fa-brain",
    Policy = "MyFirstFunction"
});
Craig Grummitt
3pods
Swift is awesome - but do you ever reminisce about the old days of ActionScript 3.0? The old days of MovieClips, DisplayObjects, Sprites, SimpleButtons, EventDispatchers - oh and who can forget gotoAndPlay? Well, now you can enjoy iOS native development using the power of Swift syntax but with the AS3 SDK! Whaa? How is this possible? Is this heresy?
ActionSwift3
Underneath the hood ActionSwift3 is based on the SpriteKit Framework but ActionSwift 3 SDK is based on familiar AS3 SDK classes:
Easing classes are also included for convenience from here.
API documentation can be found at cocoadocs
ActionSwift is available through CocoaPods. To install it, simply add the following line to your Podfile:
ruby
pod "ActionSwift3"
Don't forget to import the Pod where you would like to use it:
Swift
import ActionSwift3
Alternatively, if you would like access to the example project as well, clone the github project here.
ActionSwift
ActionSwift is a sample project that you can use to play with ActionSwift3. Start with taking a look at GameViewController.swift. GameViewController does the following:
How to use ActionSwift3
Here is some sample code to get your head around how to use ActionSwift3.
Stage
To begin with, you need to set up the Stage in a ViewController that contains a SKView(this is done for you automatically if you set up a Game-SpriteKit project)
Swift
let stage = Stage(self.view as! SKView)
Sprite
You can now call familiar methods on the stage. For example, you could instantiate a sprite, and draw a rectangle on its graphics property, and then add this sprite to the stage:
Swift
let sprite = Sprite()
sprite.graphics.beginFill(UIColor.redColor())
sprite.graphics.drawRect(10,10,100,44)
sprite.name = "shapes"
stage.addChild(sprite)
MovieClip
To create a movieclip, you will need images within a folder with the extension 'atlas' in your project (eg.'images.atlas'). This will automatically generate a Texture Atlas. Set up an array of these image file names, and pass them in when you instantiate a MovieClip. These will now be the 'frames' of your movieclip, which you will be able to call familiar methods - gotoAndPlay(), gotoAndStop(), stop() and play(). Use Stage.size to get the dimensions of the device. Oh and x=0, y=0 is the top left of the stage. Hooray!
Swift
let walkingTextures = ["walking1","walking2","walking3"]
let movieClip = MovieClip(textureNames: walkingTextures)
movieClip.x = 0
movieClip.y = Stage.size.height - movieClip.height
stage.addChild(movieClip)
SimpleButton
You can create a SimpleButton object, with an up and down state(not much point of over states on touch screens!) You can use sprites(with shapes on the graphics object) or movieclips(with textures) as the states.
Swift
let play = SimpleButton(upState: playUpState, downState: playDownState)
stage.addChild(sprite)
TextField
Use familiar syntax to create a textfield. Build the basics of the textfield using the TextField class, and then apply text formatting to the defaultTextFormat property, using the TextFormat class.
```Swift let text = TextField() text.width = 200 text.text = "Salutations to you, world"
let textFormat = TextFormat(font: "ArialMT", size: 20, leading: 20, color: UIColor.blackColor(), align:.Center) text.defaultTextFormat = textFormat
stage.addChild(text) ```
Sound
Use sound to play sounds included in your project - a big difference though - now wav files are supported as well as mp3. Hooray! Loop the audio, or play it from a point in the file. As per the strange AS3 API, use SoundChannel to stop the sound.
Swift
sound = Sound(name: "ButtonTap.wav")
sound.play()
EventDispatcher
Just as you would expect, Sprites, SimpleButtons and MovieClips will dispatch events. As Swift is not able to check equality between two functions, an additional class called 'EventHandler' stores the EventHandler, along with a string representing the EventHandler, that can be checked for equality. For example, here's how to set up an enterFrame event handler:
Swift
movieClip.addEventListener(EventType.EnterFrame.rawValue, EventHandler(enterFrame, "enterFrame"))
func enterFrame(event:Event) -> Void {
trace("This is called every frame")
}
trace
Oh yeah - and trace is back!
Swift
trace("This is the most amazing thing I've ever seen, trace is back! How did they do this?")
Enhancements
ActionSwift3 is a work in progress, feel free to contribute!
Ideas for enhancements:
Updates
1.1 * Added int and Boolean data types
1.2 * Added TextField * Resolved issue with stage updates not propogating * Added license
1.3 * Added SimpleButton * Added UIColor extension for hexidecimal support
1.4 * Added Sound, SoundChannel
1.5 * Updated for Swift 2
1.6 * Resolved issue with labels not registering taps * Resolved issue with rotation
1.7 * Updated for Swift 2.3
2.0 * Updated for Swift 3.0
Credits
With GEDCOMConverter, parsing a GEDCOM file to native Swift objects is too easy!
Just create a
GEDCOM object, passing in the name of your GEDCOM file:
let gedcom = try GEDCOM(fileName:"sample")
The
GEDCOM object will automatically generate a
head, and
individuals,
families and
sour arrays of data.
Better Easing for SpriteKit in Swift This easing library began life as a port of buddingmonkey's Objective C SpriteKit Easing library to Swift. This library extends upon the basic easing equations available in the SpriteKit framework by Apple. | https://cocoapods.org/owners/7852 | CC-MAIN-2021-49 | refinedweb | 855 | 57.37 |
This question is a result of a student of mine asking a question about the following code, and I'm honestly completely stumped. Any help would be appreciated.
When I run this code:
#test 2
a = 1
def func2(x):
x = x + a
return(x)
print(func2(3))
# test 3
a = 1
def func3(x):
a = x + a
return(x)
print(func3(3))
local variable 'a' referenced before assignment
a = 1 def func3(x): global a a = x + a return(x) print(func3(3))
Now it should work.
When you put the statement
a=x+a inside the function, it creates a new local variable
a and tries to reference its value(which clearly hasnt been defined before). Thus you have to use
global a before altering the value of a global variable so that it knows which value to refer to.
EDIT:. | https://codedump.io/share/nximdsO0nA1g/1/why-does-the-scope-work-like-this | CC-MAIN-2017-30 | refinedweb | 143 | 62.61 |
Yesterday's article on concurrency discussed the basic concepts of concurrency. Now I'd like to start talking about how you deal with concurrency... :))
"the C runtime library allows you to declare variables in TLS by simply decorating them with __declspec(thread)"
I think you mean the Microsoft C Compiler, rather than the "C runtime library", as it is not a function that you can call, but rather a compiler attribute that generates code to call TLS functions automatically.
Rob, actually it’s both – the MS C compiler AND the runtime library conspire to bring that feature.
But if you write code without the C runtime library, you can’t take advantage of it (and I work on components that don’t link with the C runtime libraries)
Being picky – you’re getting your principals and your principles mixed up again. 🙂
Contributing to the conversation: I’m listening pretty hard. I don’t get to do a lot of concurrent work and haven’t had much training in it. Most of what I know I’ve learned from Jeff Richter’s excellent books "Programming Applications for Microsoft Windows" and "Programming Server-Side Applications for Microsoft Windows". They were written around 2000-2001, but are still hugely relevant.
This is happening around the right time as I’m currently working on a service to adapt from a narrow-band RF hand-held computer (accessed from a base station over TCP/IP or serial cable) to our custom UDP-based application server. I aim to write it correctly this time, with much use made of asynchronous I/Os and thread pooling where possible.
Wouldn’t the notes previously linked to about the double-check locking paradigm in Java possibly also apply to the create-enqueue-dequeue paradigm you talk about? I.e, in an architecture with non-ordered writes (like x64?), you can’t be sure that the pointer you put into the queue is really filled with appropriate data.
CN: Ah – Did you notice how I queued the request? I called QueueUserWorkItem – that’s a Win32 API call that handles all the concurrency issues for me.
I’m a firm believer of letting the OS get the concurrency issues right – they’re almost certainly more likely to get them right than I am (more on this one in a later article in the series).
Letting the OS or app framework do the right thing for you is also a good security principle – simplicity.
Security and concurrency don’t usually come up in the same sentence, but there are significant security issues with concurrency.
Race conditions are veterans of privilege escalation attacks and is generically called time of check – time of use (TOCTOU) attacks. The primary issue is that one thread may assume that it’s atomic to check that something is okay and then goes ahead and tries to use it, but another thread (and remember in NT, all processes are really just threads) does something shifty between the time of check and the time of use.
Areas to check for race conditions:
* file system calls, particularly those which check and then create files with high privilege permissions
* temporary file handling – generally awful in my experience. This is the favoured attack under Unix, and I honestly don’t know why it’s not used more frequently under Win32 attacks
* dealing with semaphores and WaitFor… when the event fires, and you assume you have the resource to yourself… you’re probably wrong
This is particularly prevalent on Unix due to the setuid / setgid architecture (the NT kernel uses impersonation, which is far more flexible, or simply runs everything as LOCALSYSTEM, urgh).
If you’re doing .NET development, read this:
Michael Howard wrote this in 2002, and it’s still pertinent today:
It deals with race conditions for things I hadn’t even thought about, but it also doesn’t deal with the usual suspects.
Race conditions can also affect distributed systems, particularly those which uses broadcast load balancing. In this instance, an attacker may be able to slow down the real server making its announcements by DoSing it, and then answering quickly to the broadcast, creating a man-in-the-middle attack. Such issues have affected Microsoft’s own code in the past.
thanks,
Andrew
Good point Andrew.
In fact, IIRC the original UPnP bug was a result of code that attemped to solve a concurrency issue.
Most of the points you make in this article are true in the general case, and not just C/C++ under win32. Of note: While you can’t rely on TLS existing in any specific way, most C runtimes have some sort of TLS. Most, I think, make you manage it somewhat more explicitly, though.
I think your stack is always private — at least from the point your thread is started onward.
Of course, this only applies at all to languages of the same basic sort as C/C++ — for example, in current Perl, /everything/ is thread-local, unless you explicitly make it otherwise. This means less worrying about concurrency issues, at the price of making starting threads very slow, and communication between threads somewhat cumbersome.
> On Win32, the data on your stack is owned by the thread (this might not be true for other architectures, I don’t know :().
Me neither, but, yikes, how would this work? Assuming the stack stores return addresses, each thread <b>needs</b> to have a private stack or you lose coherent function calls. I think this is reliably true everywhere.
Tom,
I was thinking of some RISC or mainframe-like architectures where the "stack" wasn’t really a stack in the conventional sense of the word, but instead a pointer to some kind of per task memory, and thus wasn’t necessarily shared.
There are some really wierd computer architectures out there.
Andrew, temporary files aren’t a viable attack vector in Windows because there’s no global temporary directory, every user has its own. You can’t pollute another user’s temporary namespace, unless you’re already privileged (but there are several other shared namespaces to pollute, instead…). And in Windows you can’t forget to set O_EXCL, because CreateFile has an explicit creation disposition parameter
Also good points KJK – Clearly I’m having swiss-cheese-brain issues today, and turned of my critical thinking.
Andrew’s points about security ARE valid – if a high privileged component that makes decisions based on external input and then creates objects with those high privileges, then there IS a potential issue.
And temporary files CAN be issues, but, as KJK pointed out, there’s no /tmp in Windows, which mitigates many of the issues (but not all of them).
This isn’t to say that Windows doesn’t have its own class of issues – the semaphore issue that Andrew pointed out IS real, and has been the cause of vulnerabilities in applications before – just because your call to WaitForSingleObject wakes up, if you don’t check the return code and assume that you’ve got access to the object, you’re in for a surprise.
The bottom line is that there have been security holes that were made exploitable because of concurrency issues, and thus this needs to be considered.
Larry,
Sometime in your series on concurrency, could you also cover the following:
1) Techniques to partition your code so that concurrency issues become more apparent. It is easy to get get confused between the "object view" and the "thread view" in a program — threads weave paths through objects in a way that is not obvious at first sight when reading the code.
2) Coding and commenting conventions that might help highlight concurrency issues. An obvious thing is to explicitly mark all shared variables in comments (variables shared across threads, that is). Perhaps also use a naming convention for these variables.
3) Deadlock prevention techniques. (Years ago, Ruediger Asche wrote an article for MSDN that used Petri Nets to detect deadlocks. It was a bit over the top for me at the time though. Note to self: Read and understand it sometime.)
Thanks,
-K
Larry, please publicize this. Apologies for offtopic, but this is VERY IMPORTANT.
<br>
<br><a href="<a target="_new" href="">Bruce">">Bruce</a> Schneier</a> reports that SHA-1, a commonly used cryptographic hashing protocol, <a href="<a target="_new" href="">has">">has</a> reportedly been broken</a> by a prestigious research team from Shanghai University. Together with recent attacks on MD5, as <a href="<a target="_new" href="">previously">">previously</a> covered by /.</a>, we need new hashing functions as a matter of urgency, and we need them now.
I disagree with the no global temporary file issue. There are three alternatives:
<br>
<br>* Running as the user (%tmp%) – most apps the user invokes
<br>* Running as a unique per-service account (rare!)
<br>* Running as a (semi-)privileged system account (LOCAL SYSTEM, LOCAL SERVICE, or NETWORK SERVICE) (%temp% = %systemroot%temp) – most services
<br>
<br>You really want the last one for a privilege escalation attack (or just to do something interesting). Until the day Windows gives each process their own %temp% and provides strong isolation, race conditions will still be an issue.
<br>
<br>Lastly, it’s not just temporary file handling. It’s any *shared* or *potentially* shared resource which your app uses could be useful to an attacker. If your app does not perform adequate checks (whether it is files, registry keys, semaphores, events or WaitablePorts), it may be vulnerable. There are good well known solutions to TOCTOU issues. It just takes a bit of care is all.
<br>
<br>Andrew
LocalService and NetworkService have their own profiles so they shouldn’t use %systemroot%temp.
A lot of potential issues with %systemroot%temp are mitigated by the strong DACLs that are by default assigned to files created there. So as long as you specify CREATE_NEW when creating the file you should be fine.
Hi, Larry
IMHO, TLS in Win32 is nearly unusable. At least in EXEs (as opposed to DLLs).
The the problem is that in EXE it’s impossible to free data referred by TLS slot.
TLS neither provides "destructors" similar to pthread, nor any other means to intercept thread exit. So, when, say, ExitThread() is called, there is no way to free data stored in TLS.
Good book for other platforms:
Vassili,
If it’s an EXE, then you wrote the code to create the threads, you wrote the code to signal that the threads are going to terminate, and you wrote the code that cleans up for the threads.
Since you wrote the thread routine in the first place, why can’t you free the memory? THat’s what the C runtime library does (that’s why the C runtime library recommends/requires that you use __beginthread() – it’s to do per-thread initialization and tear down of data).
In a DLL, you don’t get to control the threads, but in an EXE, you do.
Actually you can register callbacks to be run during thread startup or termination with EXEs. It just so happens that VC++ doesn’t support this, but it’s in the PE spec and (IIRC) LDR supports calling the callbacks at the appropriate times.
Yes, In EXE I have control over thread entry point. But when I call ExitThread(), thread dies "here and now" – without getting to the point where TLS stuff gets freed.
Comparing PTHREADS and Win32 threading makes Win32 threading to look slightly "underdone". It concerns TLS destructors, cleanup handlers – a stuff obviously implemented via some sort of "thread exit callback". On Win32, the presence of "DLL_THREAD_ATTACH" in DllMain hints that some sort of such callback is available for "private use". And I wonder why this callback wasnt made public…
From the doc for ExitThread:
> However, in C++ code, the thread is
> exited before any destructors can be
> called or any other automatic cleanup
> can be performed. Therefore, in C++
> code, you should return from your
> thread function.
It sounds like what you really want to do is have some kind of flag or IPC that tells your thread when to return from the thread proc. Then you would be able to do all your cleanup prior to dying, and you wouldn’t even need to call ExitThread.
I’m gonna add to Tom’s comment – storing return addresses on the stack is not that common, x86 does it but other architectures (if I’m not mistaken IA-64 for example) do not. It’s one of the thinks that should kill the x86 architecture since it allows for easy buffer overflow attacks but it doesn’t look that likely anymore, now with x86-64.
PingBack from
PingBack from | https://blogs.msdn.microsoft.com/larryosterman/2005/02/15/concurrency-part-2-avoiding-the-problem/ | CC-MAIN-2018-34 | refinedweb | 2,127 | 59.13 |
View this project on my website!)
Step 1: Draw Controller
You are literally drawing your controller on a piece of paper:
- Be sure to use pencil (graphite is conductive)
- Make a few buttons and maybe a slider or two.
- Be sure to draw a line leading to the edge of the paper to give room to the alligator clips
Step 2: Make Circuit
I forgot to include the two led resistors (just pretend they're between ground and the anode on both). The three black boxes on the right are meant to be the alligator clips leading to the paper. (The top is the slider and the other two are buttons)
Step 3: Install Library
The code for this project requires a library designed to use capacitive sensing. Download it here:...
According to the Arduino page for this library, ."
Step 4: Upload Code
#include <Servo.h> #include <CapacitiveSensor.h> Servo myservo; CapacitiveSensor button1 = CapacitiveSensor(4, 2); CapacitiveSensor button2 = CapacitiveSensor(4, 3); CapacitiveSensor slider = CapacitiveSensor(4, 5); int total1val = 1000;//you'll need to edit these int total2val = 1000; int total3val1 = 100; int total3val2 = 1000; void setup() { //button1.set_CS_AutocaL_Millis(0xFFFFFFFF); Serial.begin(9600); pinMode(10, OUTPUT); pinMode(13, OUTPUT); myservo.attach(6); } void loop() { long start = millis(); long total1 = button1.capacitiveSensor(1000); long total2 = button2.capacitiveSensor(1000); long total = 0; long total3 = 0; for (int i = 1; i <= 10; i++) {//averages the value for the slide to make the servo smoother total3 = slider.capacitiveSensor(10000); total = total + total3; delay(1); } long avg = total / 10; int angle; Serial.print(millis() - start); Serial.print("\t"); Serial.print(avg); Serial.print("\t"); Serial.print(total2); Serial.print("\t"); Serial.println(total3); if (total1 > total1val) { digitalWrite(13, HIGH); } else { digitalWrite(13, LOW); } if (total2 > total2val) { digitalWrite(10, HIGH); } else { digitalWrite(10, LOW); } angle = map(avg, total3val1, total3val2, 180, 0); myservo.write(angle); delay(10); }
You will likely need to adjust the values at the top by the "//you'll need to edit these" comment:
- Open the serial monitor to watch the values come in
- Look at the value both "low" and "high" (when you are and aren't touching the button)
- Adjust the values in the code until everything works properly (the LEDs should turn on when you press the buttons and the servo should turn accordingly)
2 Discussions
4 months ago
Please, post your code here.
You can add your page as reference but, without code, this Instructable is incomplete. Instructables is not for advertising and SEO...
4 months ago
Fun, I didn' t know that this is possible, thanks for sharing.
I' d like to make use of this technique in some way in the future. | https://www.instructables.com/id/Arduino-Paper-Controller-Buttons-Slider/ | CC-MAIN-2019-13 | refinedweb | 443 | 53.21 |
Provided by: libncarg-dev_6.3.0-6build1_amd64
NAME
MAPIQM - Terminates a string of calls to the routine MAPITM.
SYNOPSIS
CALL MAPIQM (IAMA,XCRA,YCRA,MCRA,IAAI,IAGI,NOGI,ULPR)
C-BINDING SYNOPSIS
#include <ncarg/ncargC.h> void c_mapiqm (int *iama, float *xcra, float *ycra, int mcra, int *iaai, int *iagi, int nogi, int (*ulpr)(float *xcra, float *ycra, int *mcra, int *iaai, int *iagi, int *nogi))
DESCRIPTION
IAMA (an input/output array of type INTEGER, dimensioned as specified in a call to the AREAS initialization routine ARINAM) is the array containing the area map against which lines drawn by MAPIQM will be masked. XCRA and YCRA (scratch arrays of type REAL, each dimensioned MCRA) are to be passed by MAPIQM to the AREAS routine ARDRLN, which uses them in calls to the user line- processing routine ULPR. They will hold the X and Y coordinates of points in the fractional coordinate system defining some portion of the projection of a user-defined polyline on the globe. MCRA (an input expression of type INTEGER) is the size of each of the arrays XCRA and YCRA. The value of MCRA must be at least two. For most applications, the value 100 works nicely. IAAI and IAGI (scratch arrays of type INTEGER, each dimensioned NOGI) are to be passed by MAPIQM to the AREAS routine ARDRLN, which uses them in calls to the user line- processing routine ULPR. They will hold area identifier/group identifier pairs for the area containing the polyline fragment defined by XCRA and YCRA. NOGI (an input expression of type INTEGER) is the size of each of the arrays IAAI and IAGI. The value of NOGI must be greater than or equal to the number of groups of edges placed in the area map in IAMA. ULPR is the name of a user-provided line-processing routine. This name must appear in an EXTERNAL statement in the routine that calls MAPITM, so that the compiler and loader will know that it is the name of a routine to be called, rather than the name of a variable.
C-BINDING DESCRIPTION
The C-binding argument descriptions are the same as the FORTRAN argument descriptions.
USAGE
You must call MAPITM once for each point along the line. After the last call to MAPITM for a given line, you must call MAPIQM to signal the end of the line. For more information, see the man pages for the routines MAPIT and MAPITM. SH EXAMPLES Use the ncargex command to see the following relevant example: cmpitm,
ACCESS
To use MAPIQM or c_mapiq
Copyright (C) 1987-2009 University Corporation for Atmospheric Research The use of this Software is governed by a License Agreement. | http://manpages.ubuntu.com/manpages/xenial/man3/mapiqm.3NCARG.html | CC-MAIN-2020-34 | refinedweb | 449 | 58.32 |
Microsoft. can now#™ or Microsoft Visual C++®, like.
Visual Basic .NET enables a fundamental shift from traditional Windows development to building next-generation Web and n-tier applications. For this reason, your code will need to be upgraded to take advantage changes to your code. an ActiveX control written in Visual Basic 6.0 onto a Visual Basic .NET Windows Form, use a Visual Basic 6.0 COM object from a Visual Basic .NET class library, or add a reference to a Visual Basic .NET library to a Visual Basic 6.0 executable.
Components compiled with Visual Basic .NET have subtle run-time differences from components compiled with Visual Basic 6.0. For starters, because Visual Basic .NET objects are released through garbage collection, when objects are explicitly destroyed, there may be a lag before they are actually removed from memory. There are additional differences such as the variant/object changes described later in this document. The combined result of these differences is that Visual Basic .NET applications will have similar but not identical run-time behavior to Visual Basic 6.0 applications.
In addition, Visual Basic .NET makes binary compatibility between Visual Basic .NET components and those in Visual Basic 6.0 unnecessary. Components now have a more robust versioning and deployment system.
Visual Basic 6.0 and Microsoft Visual Studio® 6.0 offered several technologies for creating browser-based Internet and intranet applications::
For more information about building applications with the Microsoft multi-tier architecture, see the Microsoft Windows DNA Web site.
Visual Basic 6.0 offered several technologies for creating client/server applications:.
Visual Basic 6.0 supported building several types of single-tier applications:.
Visual Basic 6.0 offered several types of data access:.
When your code is upgraded, Visual Basic .NET creates a new upgraded project and makes most of the required language and object changes for you. The following sections provide a few examples of how your code is upgraded., Empty, Nothing, Null, and as a pointer to an object.
When your project is upgraded to Visual Basic .NET, all variables declared as Variant are changed to Object. Also, when code is inserted into the editor, the Variant keyword is replaced with Object.
In Visual Basic .NET, the datatype for 16-bit whole numbers is now Short, and the datatype for 32-bit whole numbers is now Integer (Long is now 64 bits). When your project is upgraded, the variable types are changed:
Dim x As Integer
dim y as Long
is upgraded to:
Dim x As Short
dim y as Integer
Property MyProperty() As Short
Get
MyProperty = m_MyProperty
End Get
Set
m_MyProperty = Value
End Set
End Property
Visual Basic .NET has a new forms package, Windows Forms, which has native support for accessibility and has an in-place menu editor. Your existing Visual Basic Forms are upgraded to Windows Forms.
Figure 2. Windows Forms in-place menu editor. (Click figure to see larger image.)
In previous versions of Visual Basic, interfaces for public classes were always hidden from the user. In Visual Basic .NET, they can be viewed and edited in the Code Editor. When your project is upgraded, you choose whether to have interface declarations automatically created for your public classes..
Figure 3. Upgrade comments are added to Visual Basic code as well as the Task List. (Click figure to see larger image.)
This section provides recommendations for how you should write code to minimize the changes you will need to make after upgrading your project to Visual Basic .NET.
Both Visual Basic 6.0 and Visual Basic .NET support late-bound objects, which is the practice of declaring a variable as the Object datatype+’, then the string version is called. Code that passes a Variant or Object datatype.
Earlier versions of Visual Basic supported using the Double datatype datatype to store dates.
In Visual Basic 6.0, many objects expose default properties, which could, as these cannot be resolved and you will have to fix the code yourself in the upgraded project. if a database field contains Null. In these cases you should check results using the function IsNull() and perform.
Due to changes made which
Because they have been removed from the language, you should avoid using the following keywords:
These are explained in more detail<type> statements, you should explicitly declare variables.
Computed GoTo/GoSub statements take this form:
On x GoTo 100, 200, 300
These are not supported in Visual Basic .NET. Instead, you should use If statements, and Select Case constructs.
GoSub and Return statements are not supported in Visual Basic .NET. In most cases you can replace these with functions and procedures.
Option Base 0|1 was used to specify the default lower bound of an array. As mentioned previously, this statement has been removed from the language since Visual Basic .NET natively only supports arrays with a zero lower bound. Non-zero lower bound arrays are supported through a wrapper class.
VarPtr, VarPrtArray, VarPtrStringArray, ObjPtr and StrPtr were undocumented functions used to get the underlying memory address of variables. These functions are not supported in Visual Basic .NET.
In Visual Basic 6.0, the LSet statement could be used to assign a variable of one user-defined type to another variable of a different user-defined type. This functionality is not supported in Visual Basic .NET.
Private Declare Function GetVersion Lib "kernel32" () As Integer
Function GetVer()
Dim Ver As Integer
Ver = GetVersion()
MsgBox("System Version is " & Ver)
End Function
In addition to numeric datatype upgrades, Visual Basic 6.0 had a fixed-length string data type which is not supported in Visual Basic .NET, and which, since type As Any.
Visual Basic .NET has a new forms package, Windows Forms. Windows Forms is largely compatible with the forms package found in Visual Basic 6; however, there are some key differences that are outlined below: | http://msdn.microsoft.com/en-us/library/aa260644(VS.60).aspx | crawl-002 | refinedweb | 979 | 59.19 |
dal.aspx, dal.aspx.vb, customer.vb, consumers.vb, dalrequest.vb, AbstractProvider.vb, dalfactory.vb, sqlprovider.vb, xmlsettings.vb, dal.xml
(extract the 10 files into a new web project named dal, or whatever you like, set dal.aspx as the startup page, and press F5)
After reading "Professional Design Patterns in VB .NET: Building Adaptable Applications", Johnny Papa's articles, and Steven Smith's seminar on the DAL, and then using the
LLBLGen Data Access Layer, plus other lesser inputs, I felt I needed 5 things from a DAL.
1) The automatic generation of data manipulation classes and stored procedures for each table in your database, as LLBLGen does
2) The ability to easily add another Data Provider even within the same application
3) Shortcut interfaces for ADO.NET commands that can be used from the middle tier without being specific to one Data Provider
4) The ability to receive back various types of classes like DataReader, DataSet, DataTable, DataRow, and custom classes.
5) Various miscellaneous features, such as stored procedures vs. text commands, built-in error checking, security levels, etc.
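To make the list concrete, here is a rough sketch of how a middle-tier class might consume such a DAL. The class and method names (Customer, DALFactory, AbstractProvider, DALRequest, Execute) follow the files listed above, but the exact signatures shown here are my assumptions for illustration:

```vb
Imports System.Data

' Hypothetical middle-tier class: note that no provider-specific
' code (SqlConnection, SqlCommand, etc.) appears anywhere in it.
Public Class Customer
    Public Function GetCustomers() As DataSet
        ' The factory decides which concrete provider to construct.
        Dim provider As AbstractProvider = DALFactory.GetProvider()
        ' DALRequest defaults (e.g. CommandType = StoredProcedure)
        ' cover the common case, so one argument is usually enough.
        Dim request As New DALRequest("CustomerSelectAll")
        ' Passing a DataSet selects the DataSet-returning overload;
        ' other overloads return a DataReader, DataTable, DataRow, etc.
        Return provider.Execute(request, New DataSet())
    End Function
End Class
```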
Since I couldn't find this thing lying around, I began finding time to work on it and ran into many interesting OOP concepts that I had hoped I already understood,
but came to find out that I didn't until struggling with the code for a while. For example, even though I've read many book entries on the Factory Design Pattern, I couldn't
grasp what was really gained from it.
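What the Factory pattern actually buys you here can be shown in a few lines. This is a minimal sketch under my own assumptions; the real dalfactory.vb (which reads its settings from dal.xml via xmlsettings.vb) differs in detail:

```vb
Imports System.Configuration

Public Class DALFactory
    Public Shared Function GetProvider() As AbstractProvider
        ' The concrete provider is chosen from configuration, so no
        ' caller ever names SqlProvider directly. Adding Oracle
        ' support later touches only this method and the new
        ' subclass; the middle tier is untouched. That is the gain.
        Select Case ConfigurationSettings.AppSettings("dalProvider")
            Case "Oracle"
                Return New OracleProvider()
            Case Else
                Return New SqlProvider()
        End Select
    End Function
End Class
```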
In the following diagram, note that the middle classes have Data Provider-specific code in them (e.g. SQL Server). This is to show you the difference between my DAL and
using LLBLGen in the strict way it was made to be used. The classes on the left will not have any Data Provider-specific code in them. On the right we have Stored
Procedures. I want to compare what I am doing with using LLBLGen in the standard way. I still generate the classes and SPs for my database with LLBLGen,
but then I modify the generated classes to be Data Provider non-specific middle-tier classes. I then often modify some of the SPs to be more practical
for the current application. 05/13/2010
Data Access Layer Page 2 of 9
I will have a detailed code example that I am currently using for SQL Server 2000 that you may want to use or easily modify for Oracle or any provider like OleDb,
available, but I will be focusing mostly on the different ways of going about keeping this process simple and maintainable and easy to use from different types of
classes and why I made the choices I have. The main goal is making a DAL that anyone can maintain.
From the requirements list, the DataProvider variation will be handled by different subclasses of one AbstractDataProvider class. You copy my SqlProvider class in
sqlprovider.vb to a file called oracleprovider.vb and edit away for an hour at most and you have your new concrete data provider subclass. To handle
CommandType variation we will use a DALRequest class with default settings to our favorite. I will be calling the output variation consumers to contrast with the
usage of provider. To handle the variation of consumers, I will vary the input signature of overloaded method called Execute of the SqlProvider subclass. To handle
the variation of Commands I will overload Execute and vary the code from overload to overload very slightly so it is easy to understand and change. 05/13/2010
Data Access Layer Page 3 of 9
Below is diagram of Class Design showing all the main files in this download, with client-tier on bottom, business-tier as customer, and rest is DAL. 05/13/2010
Data Access Layer Page 4 of 9
Now if I could just get a generator to compile so that the classes are ready to use this DAL, that would be a great savings of coding time, (maybe a next article?).
Let's look at each class individually to see what we can learn about the whole simple process starting with the front end. I am not delivering any Stored Procedures
with this because you can test all of this with just a few tweaks to Northwind's SPs or place some text commands very easily with this once you understand the
structure. It might look complex, but once you see the design things fall easily into place. First in dal.aspx there is a datagrid and a textbox and 3 buttons: getorders,
delete one record, and insert a customer.
Looking at the code for these button in the codebehind: dal.aspx.vb,,, we see that we are merely instantiating a class from the middle tier and calling one of its
methods. This is all I allow in this tier's responses to events. Never any data provider specific code. Some may say that a datareader is specific, or dataset, but they
may be made as abstract as you want with wrapper classes as we will see was necessary with the datareader class. You will see that datareader is not the
sqldatareader, but a custom wrapper class. I feel that a dataset class is not specific to sql or oracle or oledb, so I did not choose to make a wrapper as I could have
for the dataset or datatable. The main thing to notice here is that if I want to receive back a datareader for datagrid binding, then I simply input to the middle-tier
method, an new and empty datareader object. I could here, input only a type of datareader and and an instantiation of nothing and get my DAL to work fine and
have been lighter weight in what I passed, but inputing the non null instantiation that is empty saves me code in my sqlProvider class as you will see so I decided on
this course. If I want to receive a dataset back, I simply input a new dataset object, etc This works for any custom class that you want to define also which is very
easy to do, but this only works for consumers that you have handled in the DAL. So, it is a 1 or 2 word change of code to get any type of consumer object back for
binding or whatever use.
First notice the declaration of dataProvider object. AbstractProvider, which we will be looking at soon is an abstract class that is non-specific as regards data
providing, so that if your middle-tier is composed of 100 classes like customer, you won't have to go to 100 places to change to Oracle. You go to the default
constructor of the DALRequest class and change the default from sql to oracle as the default. Also if a client has all data on Oracle, but their customer table on Sql
Server, you can just change one property of the DALRequest class each time you need to switch to sql for a request. The DALFactory class uses the DALRequest
Provider property/field to get the proper concrete provider class instantiated. Also note that to execute a selected stored procedure, I need set the command
property/field, clearallparameters, code one line for each input and output parameter, and then code one line for execution. One line must be added to switch to
using text commands versus stored procs the default. Also note that adding a customer requires the return of the primary key identity that I use so I must input an
integer to tell the concrete data provider which overload to use to return that integer information. In the customer dispose method I close the dataprovider
connection later than for the other consumers in the case of a datareader. (calling close an extra time is safe)
Public Class customer
Dim dataProvider As AbstractProvider = DALFactory.GetProvider(DalRequest.Provider)
Public Function getOrders(ByVal id As String, ByVal consumer As DataReader) As DataReader
DalRequest.Command = "pr_orders_GetOrders"
dataProvider.ClearAllParameters()
dataProvider.AddParameter("@customerID", ssenumSqlDataTypes.ssSDT_String, 8, id, dataProvider.Direction.input)
Return dataProvider.Execute(consumer)
End Function
Public Function DeleteTopOrder(ByVal id As String)
DalRequest.Command = "pr_orders_DeleteTopOrder"
dataProvider.ClearAllParameters()
dataProvider.AddParameter("@customerID", ssenumSqlDataTypes.ssSDT_String, 8, id, 1)
dataProvider.Execute()
End Function
Public Function AddCustomer(ByVal customerID As String, ByVal consumer As Integer) As Integer
dataProvider.ClearAllParameters()
dataProvider.AddParameter("@customerid", ssenumSqlDataTypes.ssSDT_String, 5, customerID, dataProvider.Direction.Input)
dataProvider.AddParameter("@id" , ssenumSqlDataTypes.ssSDT_Integer, 4, 0, dataProvider.Direction.Output)
DalRequest.Command = "pr_customers_Insert"
Return dataProvider.Execute(1)
End Function
Sub dispose() ' needed only if consumer is datareader
dataProvider.Connection.ConSql.Close()
End Sub
End Class
Notice that in the consumer.vb file there are only 3 classes and 1 enumerator defined. That is because dataset doesn't necessarily need one since it is a class that
is already abstracted from being a Sql class or an Oracle class etc. There is no abstracted datareader class except the one we've defined here. There is only
sqldatareader or oledbdatareader, etc. So DataReader is a simple wrapper that will be used to carry the specific thing. Its use is to isolate the specific data provider
code from the codebehind or middle-tier. Order is a custom class that could have as many properties as we need. A very interesting thing to consider is why the
connection and direction constructions are necessary to isolate. Without these constructions I could not find anyway to isolate even though there may surely be
some other way. These are interesting kinds of questions when you are designing a data access layer. If I use the built in ParameterDirection enumerator isolation 05/13/2010
Data Access Layer Page 5 of 9
is corrupted and when I want to change the code for the next client I find I have to go to 100 places and change code instead of one place. One interesting thing I
did not expect is that in customer when I declare DataProvider as type AbstractProvider, and then set it equal to a specific output of the DalFactory if the abstract
factory did not have exactly the same direction property as in the specific concrete provider, what showed up was the abstract direction, not the specific. This forces
the last two classes here. This class library could grow over time. The connection class was necessitated because I needed to close a connection from the middle
tier in the case of the datareader consumer. The direction enumerator copies the ParameterDirection enumerator of the system.data namespace. I'd rather not have
a system.data structure in my abstract provider class, so I created my own. One of these latter is required in order to instantiate the capability of creating both input
and output parameters for stored procedures.
Imports System.Data.SqlClient
Public Class DataReader
Public ReturnedDataReader As IDataReader
End Class
Public Class Order
Public OrderDate As String
End Class
Public Class Connection
Public ConSql As SqlConnection
' add another property here for Oracle etc
End Class
Public Enum Direction
input = 1
output = 2
both = 3
returnitem = 6
End Enum
This class is very useful for controlling defaults and making it easy to refer to any current properties requested as it is shared. SqlDataProvider relies on this. As you
can see I like fields tied to enumerators versus method properties unless the method properties will be used for greater control. Sub New provides the default values
that an application will use mainly. Only one property has to be set each time, Command either as text command or stored procedure name as string. UserType is
linked to dal.xml read/write config file that is accessed via xmlsetting shared class. Each UserType has a different userid and password in dal.xml. These could be in
your employee or customer table as well and make programming more safe as certain kinds of users would not be able to execute certain commands on your data.
Public Class DalRequest
Public Shared Provider As ProviderType
Public Shared RoleObject As UserType
Public Shared Role As String
Public Shared CommandType As CommandType
Public Shared Command As String
Public Shared Transaction As Boolean
Public Shared Locking As Boolean
Public Shared ParamCache As Boolean
Public Shared TableName As String
Shared Sub New()
Provider = ProviderType.Sql
RoleObject = New UserType()
Role = RoleObject.Admin
CommandType = CommandType.StoredProcedure
Command = ""
Transaction = False
Locking = False
ParamCache = False
TableName = "data"
End Sub
End Class
Public Enum ProviderType
Sql
Oracle
Oledb
End Enum
Public Class UserType
Public Shared External As String
Public Shared Internal As String
Public Shared SuperUser As String
Public Shared Admin As String
Sub New()
External = "external"
Internal = "internal"
SuperUser = "superuser"
Admin = "admin"
End Sub
End Class
This abstract class enforces a minimum interface on all its subclasses. You can add more functionality to a concrete provider that has such available, but you must
add the enfiorced interface. Instead of using an abstract class I could have used an Interface instead, which is how I actually first attempted this. I found that the
complexity of defining the connection and direction properties and implementing them was more difficult than using the abstract class and that books with significant
coverage on Interfaces was not available to me even though I have many books supposedly covering this area. Coverage was very spotty. It would be nice if
someone would do an entire book on Interfaces. This is one value of the factory pattern which is comprised of 2 classes here: AbstractProvider class and the
DalFactory class. 05/13/2010
Data Access Layer Page 6 of 9
Public Connection As Connection
Public Direction As Direction
Public MustOverride Sub Execute()
Public MustOverride Function Execute(ByVal consumer As Integer) As Integer
Public MustOverride Function Execute(ByVal consumer As DataReader) As DataReader
Public MustOverride Function Execute(ByVal consumer As DataSet) As DataSet
Public MustOverride Function Execute(ByVal consumer As DataTable) As DataTable
Public MustOverride Function Execute(ByVal consumer As DataRow) As DataRow
Public MustOverride Function Execute(ByVal consumer As Order) As Order
Public MustOverride Sub ClearAllParameters()
Public MustOverride Sub AddParameter(ByVal parameterName As String, ByVal dataType As ssenumSqlDataTypes, _
ByVal size As Integer, ByVal value As String, ByVal direction As Integer)
End Class
The other value of the Factory Design Pattern is that the DalFactory class will choose the proper concrete class for us in a way that is not specific to the
DataProvider so that we can do this instantiation in hundreds of places and never have to go and change this declaration and instantiation code ever again, just add
to the code in the DalFactory, and the default or current value of the DalRequest.provider property.
Public Class DALFactory
Public Shared Function GetProvider(ByVal provider As Integer) As AbstractProvider
Select Case provider
Case ProviderType.Sql
Return New SQLProvider()
Case ProviderType.Oracle
'Return New OracleProvider()
Case ProviderType.Oledb
'Return New OledbProvider()
End Select
End Function
End Class
At the highest level of overview, there are imports, 1 enumeration of sqlDataTypes, a Parameter class for purposes of wrapping the specific parameters of specific
data providers like the sqlParameters class, and then the main SqlProvider which is inheriting the AbstractProvider class. System.IO, and System.Text are needed
for logging exceptions. Microsoft.VisualBasic.ControlChars is for using crlf, and then System.Data and System.Data.Client are necessary to manipulate the Sql
Server database in the most efficient manner currently existing. We will need to be able to convert our abstract parameters into sqlparameters and thus the
SqlDataTypes enumerator is needed.
Imports System.IO
Imports System.Text
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualBasic.ControlChars
Public Enum ssenumSqlDataTypes
<Serializable()> Public Class Parameter
<Serializable()> Public Class SQLProvider : Inherits AbstractProvider
With the parameter class and the methods of the SqlDataProvider below we can easily handle stored procedure parameters and make stored procedures as easy
as Sql text commands. The reason we need an abstract parameter class and then convert to sql parameters inside the concrete data provider class is that we must
be abstract in the middle tier and specific inside the data provider. I know I am repeating myself, but if you don't get this stuff it takes some thought time and
repetition helps. I have collected these somewhat disparate pieces here to help you get that this parameter stuff is not that complex. I had a little trouble with it at
first. When you decide to create your OracleDataProvider, you will have to consider how to change the convertParametertoOracleParameter method. You will also
have to make a few changes to the datatype enumerator also. Now on to more interesting things.
<Serializable()> Public Class Parameter
Public DataType As SqlDbType '//--- The datatype of the parameter
Public Direction As ParameterDirection '//--- The direction of the parameter
Public ParameterName As String '//--- The Name of the parameter
Public Size As Integer '//--- The size in bytes of the parameter
Public Value As String '//--- The value of the parameter
Sub New(ByVal sParameterName As String, ByVal lDataType As SqlDbType, ByVal iSize As Integer, _
ByVal sValue As String, ByVal iDirection As Integer )
ParameterName = sParameterName
DataType = lDataType
Size = iSize
Value = sValue
Direction = iDirection
End Sub
End Class
Private m_oParmList As ArrayList = New ArrayList() ' holds parameters for a stored procedure
Public Overloads Overrides Sub ClearAllParameters()
m_oParmList.Clear ()
End Sub
Public Overloads Overrides Sub AddParameter(ByVal sParameterName As String, _
ByVal lSqlType As ssenumSqlDataTypes, ByVal iSize As Integer, ByVal sValue As String, ByVal iDirection As Integer)
Dim eDataType As SqlDbType
Dim oParam As Parameter = Nothing
Select Case lSqlType
Case ssenumSqlDataTypes.ssSDT_String
eDataType = SqlDbType.VarChar 05/13/2010
Data Access Layer Page 7 of 9
Case ssenumSqlDataTypes.ssSDT_Integer
eDataType = SqlDbType.Int
Case ssenumSqlDataTypes.ssSDT_DateTime
eDataType = SqlDbType.DateTime
Case ssenumSqlDataTypes.ssSDT_Bit
eDataType = SqlDbType.Bit
Case ssenumSqlDataTypes.ssSDT_Decimal
eDataType = SqlDbType.Decimal
Case ssenumSqlDataTypes.ssSDT_Money
eDataType = SqlDbType.Money
End Select
oParam = New Parameter(sParameterName, eDataType, iSize, sValue, iDirection)
m_oParmList.Add(oParam)
End Sub
Private Function ConvertParameterToSqlParameter(ByVal oP As Parameter) As SqlParameter
Dim oSqlParameter As SqlParameter = New SqlParameter(oP.ParameterName, oP.DataType, oP.Size)
With oSqlParameter
.Value = oP.Value
.Direction = oP.Direction
End With
Return oSqlParameter
End Function
Below you can see the overview that shows the detail of the sqlDataTypes, the shadowing of the connection and direction properties over the abstract versions in
AbstractProvider, the sub new, the location of the parameter methods, and finally the 7 overloads of the Execute method that does the main work. The 7 overloads
are for the 7 consumers I was interested in. 1-updates/deletes which I don't want to return anything. I could have unified updates/deletes/inserts by having them all
return integers/how many rows affected, but what if your primary key is not integer like mine? 2-inserts, 3-datareader, 4-dataset, 5-datatable, 6-datarow, 7-custom
class order. Actually as you will see, the code difference between the overloads is very small number of lines and so therefore you could easily unify all overloads
into one Execute method. I chose not to because I like the method of overloading using one input object that makes it so easy to remember which overload does
what for me! Also, this overloading makes it very easy to add features in the future since the length of each Execute method is so small. It is very easy to add a new
consumer.
Imports System.IO
Imports System.Text
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.VisualBasic.ControlChars
Public Enum ssenumSqlDataTypes ssSDT_Bit
ssSDT_DateTime
ssSDT_Decimal
ssSDT_Integer
ssSDT_Money
ssSDT_String
End Enum <Serializable()> Public Class Parameter
<Serializable()> Public Class SQLProvider : Inherits AbstractProvider
Public Shadows Direction As New Direction()
Public Shadows Connection As New Connection()
Private connectionString As String
Private m_oParmList As ArrayList = New ArrayList() ' holds parameters for a stored procedure
Sub New()
connectionString = XmlSetting.Read("appsettings", DalRequest.Role)
Me.Connection.ConSql = New SqlConnection(connectionString)
End Sub
Public Overloads Overrides Sub Execute() ' handles update and delete commands which return nothing here
Public Overloads Overrides Function Execute(ByVal consumer As Integer) As Integer
Public Overloads Overrides Function Execute(ByVal consumer As DataReader) As DataReader
Public Overloads Overrides Function Execute(ByVal consumer As DataSet) As DataSet
Public Overloads Overrides Function Execute(ByVal consumer As DataTable) As DataTable
Public Overloads Overrides Function Execute(ByVal consumer As DataRow) As DataRow
Public Overloads Overrides Function Execute(ByVal consumer As Order) As Order
Public Sub LogError(ByVal e As SqlException, ByVal Command As String)
Public Overloads Overrides Sub ClearAllParameters()
Public Overloads Overrides Sub AddParameter(ByVal sParameterName As String, ByVal lSqlType As ssenumSqlDataTypes, _
ByVal iSize As Integer, ByVal sValue As String, ByVal iDirection As Integer)
Private Function ConvertParameterToSqlParameter(ByVal oP As Parameter) As SqlParameter
End Class
Let's look at the simplicity of a few of the overloads. Note that you must declare overloads and overrides when you are subclassing. Note that because of the choice
of my method signature differentiation, I am able to use the input to save lines of code. I return what I input. Ooh I like it when things like this happen. For my error 05/13/2010
Data Access Layer Page 8 of 9
handling I chose to throw and log the problems. Note how stored procedure parameter conversion is handled by a simple logic with a 2 line loop to loop through all
parameters you created. The difference between SPs and Text commands boils down to not much.
Not much difference here. Remember that order is a custom class defined in consumer.vb library. It could be defined anywhere. To use this for another custom
class, you would have to create another overload of Execute since each class has different properties that need setting. I wouldn't want one hundred overloads of
Execute in sqlProvider. The way I handle this, since every time that only one datarow is returned I want it to set the properties of the associated class and work with
the class not the datarow, is to use LLBLGen to generate data layer classes for each table, and then I convert them to middle-tier classes by taking out all data
provider specific code. Then I return a datarow from my DAL to the class and set the classes properties with it. Then I work with the class in the front end.
In the past I have found reasons to have a read/write config file like the one below.
<apsettings>
<htmpath>c:\inetpub\wwwroot\dal\</htmpath>
<errpath>c:\inetpub\wwwroot\dal\</errpath> 05/13/2010
Data Access Layer Page 9 of 9
</apsettings>
The xmlsetting.vb file class which has shared methods, allows me to make simple calls like the ones below to read and write to this xml file. This simple class allows
me same element names since I use one level of context to grab a value. I specify a child in the context of a parent node. Note that xmlsetting.vb requires that the
name of the xml file must match the project name. In this case, DAL.
dim connectionString as string = xmlsetting.read("appsettings", "internal")
xmlsetting.write ("appsetting", "Internal", "safe")
I am not writing this article because I consider myself a data architecture expert at all, but I feel a great need for development in this area and for more explanation
with examples like the direction and connection cases which I could not find anywhere. Please contact me with ideas or criticisms so that I can continue to evolve
this important area as I know many must be working on similar things.
Go to the article about modifying the LLBL class generator to work with DAL:
Send mail to Computer Consulting with questions or comments about this web site.
Last modified: January 26, 2006 05/13/2010 | https://www.scribd.com/document/36818388/A-Generic-Data-Access-Layer-in-VB-net-2 | CC-MAIN-2019-30 | refinedweb | 3,918 | 51.38 |
01 June 2012 17:22 [Source: ICIS news]
WASHINGTON (ICIS)--The ?xml:namespace>
In its monthly employment report, the department said the chemicals manufacturing sector lost about 1,200 employees in May, falling to a total workforce strength of 796,100.
In terms of the overall chemicals labour force, however, that employment easing was marginal, representing only a one-tenth of 1% decline.
In the plastics and rubber products industry, the department said the workforce fell by 800 positions last month to 634,900, also a razor-thin decline.
The May declines in employment for both sectors followed similar job losses in April.
For the
The economy added only 69,000 new jobs in May, well below the 150,000 workforce expansion that had been expected and the fourth straight month of declining or underperforming jobs | http://www.icis.com/Articles/2012/06/01/9566436/us-chemicals-and-plastics-producers-shed-jobs-in-may.html | CC-MAIN-2015-18 | refinedweb | 136 | 50.16 |
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
how to import partner-reference when module base_partner_sequence is installed?
Hello I need to have an automatic sequence on the field "ref" when a partner is created. The module base_partner_sequence works well. But then when I import partners after installation of base_parter_sequence I am not able to import a csv-file with a value for "ref" because the value is always set automatically based on the sequence. Does someone has an idea how to import records with a value in csv when there is a sequence on a field? Maybe it is necessary to modify the code somewhere? Please see my code below.
Can yomeone help? René
class partner_sequence(osv.osv): _inherit = 'res.partner' _columns = { 'ref': fields.char('Code', size=64, readonly=False), }
def create(self, cr, uid, vals, context={}): vals['ref'] = self.pool.get('ir.sequence').get(cr, uid, 'res.partner') res = super(partner_sequence, self).create(cr, uid, vals, context) return res _sql_constraints = [ ('ref_unique', 'unique(ref)', 'reference per customer must be unique!'), ]
partner_sequence()
You can try the following:
def create(self, cr, uid, vals, context={}): if not vals.get('ref', False): vals['ref'] = self.pool.get('ir.sequence').get(cr, uid, 'res.partner') res = super(partner_sequence, self).create(cr, uid, vals, context) return res
But normally in OpenERP sequences are assigned using defaults (
| https://www.odoo.com/forum/help-1/question/how-to-import-partner-reference-when-module-base-partner-sequence-is-installed-5590 | CC-MAIN-2017-04 | refinedweb | 243 | 53.68 |
Agenda - see also: IRC log, day two minutes
tbl: meeting of a group of people interested
in SW in the Cambridge area
... 15 people interested
... no particular agenda
... mainly a social thing
... discussion, brainstorming
... in this room (Kiva)
Guus: short round of introductions
Alistair: we have several implementations of SKOS now
Jon: picked up on SKOS at Dublin Core Madrid workshop
Bernard: U. Manchester is adding SKOS to COHSE
Guus: interoperability of vocabularies is core to our work at Vrije University
Diego: my research work is on semantic search
Fabien: my research group is interested in graph-based reasoning on SemWeb
Ivan: every day I take a tram that goes by Guus' office
tbl: i'm not here as W3C director.
... interested in the discussions on RDFa and recipes ("slash")
...I taught a 1-week course [2 weeks ago] and one of the biggest problems was how to configure apache
... if it came out-of-the-box with application/rdf+xml support things would be a *lot* easier
RalphS: the activity of this group is very
important, great impact
... very busy, reduced dedication to this group
...but hope to increase my time in SWD again
[Tom calling from Berlin, Elisa calling from Los Altos]
Tom: project looking at model-based metadata; includes Dublin Core and eventually SKOS
Elisa: working with organizations who are keenly interested in metadata about their ontologies ... core business of SandPiper is ontology development
Guus: three objectives of this meeting
... 1) skos use cases: discuss them
... 2) as a result, obtain a list of requirements for SKOS
... 3) review issue list, prioritize them, select critical ones
<Antoine>
Antoine: 12 use cases in the document
... there are more than 20 contributions
... some are not edited (yet), but available at the wiki
<Antoine>
Antoine: thinks the response of the community
was good
... will summarize each one of the UC next
UC1
Guus: two different hierarchies for the same thesaurus, is this a requirement?
aliman: multi-hierarchy is an important requirement
Guus: shows an example of the getty vocab
Guus: google for 'tgn getty', then enter 'boston'
Guus: two record types: administrative and geographical, non-exclusive
... (shows Utrecht next)
Guus: looks up an example of a record with two types
Antoine: complex lexical info in the context of this application
Guus: Boston has several alternative names
Guus: Getty shows English, Vernacular, and Historical names
... e.g. Tokyo has 'Edo' historical name
Aliman: multilingual labels are already
solved
... but language is not enough in some cases (see the Boston and Tokyo examples)
... the issue is that there are different scripts for some languages, but only a language tag to tell them apart
Alistair: potential issues with cardinality constraints and preferredLabel properties if there are multiple scripts in which the label might be written
Guus: this is probably out of scope of this WG
ivan: this WG should not worry about this issue. Maybe forward the issue to the RDF core WG
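The script problem raised here can be illustrated with language tags alone: BCP 47 tags may carry a script subtag (e.g. `ja-Latn` vs `ja-Hani`), which gives one possible way to keep two renderings of the same preferred label apart. A minimal sketch in plain Python — the concept and label data are invented for illustration:

```python
# Hypothetical preferred labels for one concept ("Tokyo"), keyed by a
# BCP 47 language tag. Script subtags (ja-Latn vs ja-Hani) distinguish
# two renderings of the same language; with only a bare "ja" tag, a
# one-prefLabel-per-language constraint would force dropping one of them.
pref_labels = {
    "en": "Tokyo",
    "ja-Latn": "Tokyo",  # Latin-script rendering
    "ja-Hani": "東京",    # Han-script rendering
}

def pref_label(labels, lang_tag):
    """Return the preferred label for a language tag, or None."""
    return labels.get(lang_tag)

print(pref_label(pref_labels, "ja-Hani"))  # → 東京
```

Whether SKOS should constrain one preferred label per language, per language-plus-script, or leave this open is exactly the cardinality question Alistair raises above.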
RalphS: asks for clarification of the multi-hierarchy issue
Alistair: a conceptual node may have more than one parent
guus: back to the issue of making a statement about a label
aliman: we should provide a framework to allow that
guus illustrates his point with an example in the whiteboard
Guus: how would we say the label "Edo" is valid only between 1600 and 1800 AD ?
Alistair: annotation properties
aliman: when you use an annotation property,
you are not limited to a literal value
... you can use a resource as a value of the annotation property. Model annotation as an n-ary relation.
ivan: is this a possible use of reification?
guus: seems to be 2 options: to reify, or to lose information
timbl: another option is to put the statement in another document
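The two modelling options just mentioned — reifying the labelling statement, or promoting the label to a resource (the n-ary-relation approach) — might be sketched side by side as below, in plain Python over triples written as tuples. Every `ex:` identifier is invented for this sketch; none of them is defined by SKOS:

```python
# Option (a): reification -- four triples describe the labelling triple,
# and the annotation ("Edo" valid 1600-1800) hangs off the statement node.
reified = [
    ("_:stmt", "rdf:type", "rdf:Statement"),
    ("_:stmt", "rdf:subject", "ex:Tokyo"),
    ("_:stmt", "rdf:predicate", "skos:prefLabel"),
    ("_:stmt", "rdf:object", '"Edo"'),
    ("_:stmt", "ex:validFrom", "1600"),
    ("_:stmt", "ex:validTo", "1800"),
]

# Option (b): n-ary relation -- the label itself becomes a resource,
# so annotations attach to it directly without reification.
nary = [
    ("ex:Tokyo", "ex:historicalLabel", "_:label1"),
    ("_:label1", "rdf:value", '"Edo"'),
    ("_:label1", "ex:validFrom", "1600"),
    ("_:label1", "ex:validTo", "1800"),
]

def validity(graph, node):
    """Collect the validity annotations attached to a node."""
    return {(p, o) for (s, p, o) in graph
            if s == node and p in ("ex:validFrom", "ex:validTo")}

# Both shapes carry the same annotation, just anchored differently.
print(validity(reified, "_:stmt") == validity(nary, "_:label1"))  # → True
```

The trade-off discussed here is that (a) keeps `skos:prefLabel` a plain literal-valued property at the cost of reification, while (b) changes the range of the labelling property to a resource.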
SKOS annotated label whiteboard discussion
<aliman>
Alistair: the old issue notes a place-holder item for this
... "SKOS does not provide support for ... any type of annotation associated with a non-descriptor"
<timbl> are a few slides about options for modeling things which vary with time
Guus: not sure this is the same thing
ACTION: Alan write up the preferredLabel modelling issue [recorded in]
AlanR: just joined the WG representing Science Commons ... active on HCLS IG

UC 2: Iconclass
... has descriptor concepts and non-descriptor concepts
Guus: this case helps define SKOS scope
... Iconclass is a grammar
... permits adding things to parts of the vocabulary
... I'd like to make this feature out of scope for SKOS
... e.g. KEY; it's not pre-defined where in the vocabulary this is used
Antoine: finding modifiers while browsing a vocabulary -- "post coordination"
aliman: this mechanism makes it possible to create new concepts by combining existing concepts
Guus: shows an example of the vocabulary ("Animal")
aliman: this is related to the "qualifiers" of the MESH medical vocabulary
Alistair: terms in MESH have flags that indicate they can be used with an additional qualifiers vocabulary
<aliman> example of coordination
Alistair: ... e.g. 'aspirin' combined with 'sideEffects' means 'sideEffectsOfAspirin'
Alistair: BLISS classification scheme has similar aspects
Antoine: we lost the possibility to attach
qualifiers
... cannot represent hierarchies of qualifiers
aliman: ambiguity can arise from the use of qualifiers
Alistair: in my master's thesis I conclude that it is an application-specific decision whether order of coordination is significant
aliman: if you don't have a mechanism to attach the qualifier to particular individuals
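The coordination mechanism discussed above — descriptors flagged as combinable with a qualifier vocabulary, as in MeSH — can be reduced to a few lines of plain Python. All identifiers below are invented, and the order-significance question is deliberately left to the application, per Alistair's point:

```python
# Hypothetical descriptor and qualifier vocabularies, MeSH-style.
descriptors = {"mesh:Aspirin": {"qualifiable": True},
               "mesh:Headache": {"qualifiable": False}}
qualifiers = {"mesh:sideEffects", "mesh:therapeuticUse"}

def coordinate(descriptor, qualifier):
    """Synthesize a post-coordinated concept from a descriptor + qualifier.

    Whether coordination order matters is an application-specific choice;
    this sketch just concatenates in the order given.
    """
    if not descriptors.get(descriptor, {}).get("qualifiable"):
        raise ValueError(f"{descriptor} does not accept qualifiers")
    if qualifier not in qualifiers:
        raise ValueError(f"{qualifier} is not a known qualifier")
    return f"{descriptor}/{qualifier}"

print(coordinate("mesh:Aspirin", "mesh:sideEffects"))
# → mesh:Aspirin/mesh:sideEffects
```

The ambiguity aliman mentions shows up when the synthesized identifier does not record which individual the qualifier was meant to attach to.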
Guus: Iconclass also has a notion of 'opposite', or counter-example, done by doubling the letter; e.g. 25FF
<aliman> Examples of using bliss classification
Guus: this feature is also used in Iconclass to make the male-female distinction
Antoine: 13,000 concepts
... in this vocabulary
... qualifiers make it possible to reduce the number of concepts
... indexing can use multiple concepts
<scribe> ACTION: Antoine to provide more use cases of uses of qualifiers [recorded in]
Alistair: library world talks about "synthetic" and "enumerative" classification schemes; "synthetic" scheme is meant to be used in combinations to synthesize categories
[15 minute coffee break]
<aliman> Core requirements for automation of analytico-synthetic classifications
<aliman> (I just found this paper, looks highly relevant to preceding discussion.)
antoine: we have to decide at some point what goes into the document
guus: we should keep the overview and the details of examples
ralph: features in the use cases that are important for skos have to be brought out from the examples (for those who do not know the details)
alistair: we could move the examples from the vocabulary to the example, but what ralph said made me think again...
guus: it is good to have that on our list...
UC 3 - Medieval illuminated manuscripts (mandragore)
antoine: next use case, an integrated view of medieval manuscripts
... there are collections and bridges among these
... we always have info on which vocabularies are used
... an issue of alignment of vocabularies
... it uses the iconclass vocabulary
... and another one that comes from the French national library
... the latter has 15,000 subjects, with simple labels (simple and alternate)
... it is probably a flat list, and they introduce a set of classes for browsing purposes
... you have 15,000 descriptors, and each is linked to a class that is more general
alistair: is it essentially a three-level hierarchy, but you can use the descriptors at the bottom only?
antoine: yes, only the leaves of the tree can be used as descriptors
guus: this is a feature I have not distilled yet
... this problem of representing mandragore
... there are 2 issues coming out: (1) requirement for mapping, you need equivalence
... (2) you have the notion of abstract classes
... things that are not for indexing
Guus: abstract classes appear in AAT also
alistair: i think it is a use case that has some basic requirements for mapping vocabularies among themselves
... there is also a requirement to map between combination of concepts
Alistair: "11U4 Mary and John the Baptist ..."
alistair: 11U4 in the description
... i think that will be a common requirement
antoine: the mapping points out that there could be a link between the non-descriptor items
... a descriptor on the one side and a qualifier on the other side, the latter is never be found as a descriptor
guus: is it fair to say we have a mapping requirement and two basic requirements
scribe: with respect to the conjunction type of thing, that is an issue (or a requirement)
alistair: it comes up often in my experience
... there is a british standard wg rewriting the thesaurus standard
... working on how to represent mapping between thesauri
... i would think that they will come up with something how to model it
bernard: is there a requirement to map the iconclass to mandragore to identify the ??
... it seems that mandragore is a different type of mapping
Alistair cited ISO 2788 parts 3 and 4 (under development) work on mapping
guus: rephrase the question: do we need more specific than broad and narrow, ie, owl or rdfs vocabulary
bernard: yes, this is what I am asking
... what is the broader term of XX
alistair: there is a browser for mandragore, can we see how this looks like?
antoine showing the mandragore browser example
antoine shows the iconclass vocabulary, one can see the vocabulary and the specialization of the concept
scribe: on the right are the images from the collection (from the BNF) which have not been indexed against iconclass
... you browse your vocabulary, then you have access to the images
alistair: can you browse against the mandragore images only?
Project STITCH : Semantic Interoperability to access Cultural Heritage
alistair: when you do a mapping to mandragore, do you use a second level only?
Antoine: there are 15,000 alignment relations in the mapping
guus: I try to summarize, three things
... (1) need for an equivalence mapping
... (2) a less or more specific mapping, should it be more specific than broad/narrow
... (3) links between compositional concepts
... we recently linked a nist vocabulary for video tracking
... we got into a similar situation
... we got both the conjunctive and disjunctive form
... may be it should be a requirement, or maybe we can handle outside skos
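The three mapping needs just summarized can be sketched as plain triples. The link-type names below echo SKOS mapping terminology, but the concept identifiers are invented for illustration; this is not the actual Iconclass/Mandragore alignment data.

```python
# Sketch of the three mapping link types summarized above: equivalence,
# less-specific (broad) and more-specific (narrow) matches between two
# concept schemes. All concept identifiers are invented.

mappings = [
    ("iconclass:11F",     "exactMatch",  "mandragore:vierge"),
    ("iconclass:25F",     "broadMatch",  "mandragore:chien"),   # target is narrower
    ("mandragore:oiseau", "narrowMatch", "iconclass:25F"),      # target is broader
]

def targets(source, link_type, triples):
    """All concepts linked from `source` by `link_type`."""
    return [o for s, p, o in triples if s == source and p == link_type]
```

A mapping set like this is exactly the kind of data a cross-collection browser would consult to open one collection up to another vocabulary.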
ralph: there is a reference to optional rejected forms
... is that from iconclass?
antoine: this comes from the French vocabulary
ralph: guus showed the double letter example, is it similar
antoine: they are more similar concepts, synonyms
guus: it is quite similar to preferred and non-preferred label
"optional rejected form" means "synonym but deprecated"
alistair: when it comes to the mapping requirement, we need to have in mind the functionality it is used for and focus on that
... that might help us in passing by other representations
guus: in this particular domain mappings are the only thing that adds something to the existing functionalities of museums
... if you open up the collections to browsing via other vocabularies, you get new things
... mapping is 100% crucial,
... the only added value, and a big one
... in medicine it may be different
UC 4 - bio-zen
antoine: 4th example bio-zen
... wait for AlanR to come back on that one
UC 5 - multilingual agricultural thesauri... the 5th use case: semantic search across multilingual thesauri (agricultural domain)
alistair: that may be like a union
... the last example in the use case they use the mapping vocabulary as it is right now in skos
... it also has 'and or not'
... the second example is exactly an 'or'
guus: ie, they also have the 'and or not' in their usage?
bernard: the more these vocabularies are merged, the more they have similar relations like narrower and broader
alistair: these can be ambiguous...
guus: we already have this on the list of the issues (whether we need to represent a specific semantics to broad and narrow)
<scribe> ACTION: guus to check that this issue of more specialization than broad/narrow is on the issues' list [recorded in]
guus: you can say we build this into the skos vocabularies that we define, eg, as two subclasses
... or we can say that we leave that to the vocabulary; the authors have the guideline to present this as a subproperty of broad/narrow
... the issue is to resolve this
alistair: ie, if people want to do more specific, how would they do it?
guus: yes, and whether this is part of the skos vocabulary or not
Ivan: were problems with representing multilingual scripts found?
... is there enough in RDF to represent this?
Alistair: there were some interesting language problems in the Chinese mapping
Antoine: but I think they succeeded in representing everything they wanted to represent in RDF, though they needed more than SKOS
<aliman> a document about mapping between agrovoc and chinese agricultural thesaurus
guus: term-to-term relationship?
antoine: the problem of having several labels for the same concepts comes up; they want to be able to line up the literal translations with one another
guus: why not use, for each, preferred and alternative labels
alistair: the preferred label in chinese may be the third alternative label in english
timbl: cat translates to 'chat' in French, you have to label in french
alistair: you are making a link between translations and labels
antoine: a concept in one vocabulary has a latin name for the pref label, and as an alternative label the common name
... the same in the french versions
... and you want to point to the fact that the two alternative labels are translations of each other
guus: then the latin is a lingua franca
alistair: another thing they wanted is to say that the label in French has been derived from an alternative label in English
guus: we may have an issue of relationships between linguistic labels
... not clear to me what to do with this
alistair: with a use case like this we have to be careful about what exactly they do with this information
... why do they use it
<scribe> ACTION: antoine to capture the issue on capturing relationships between labels [recorded in]
Antoine: e.g. acronym link
... an example of a semantic relationship between labels
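The point just raised about relating labels (translations, acronyms) can be sketched by treating labels as first-class records rather than bare strings. All identifiers and data below are invented, and SKOS at the time had no standard construct for this; it is purely an illustration of the modeling idea.

```python
# Sketch of label-to-label relationships: each label is a record attached
# to a concept, so that links such as "translation of" or "acronym of" can
# target a specific label rather than the whole concept.
# All identifiers and data here are invented for illustration.

labels = {
    "L1": {"concept": "ex:FelisCatus", "lang": "la", "role": "pref", "text": "Felis catus"},
    "L2": {"concept": "ex:FelisCatus", "lang": "en", "role": "alt",  "text": "cat"},
    "L3": {"concept": "ex:FelisCatus", "lang": "fr", "role": "alt",  "text": "chat"},
}

# The French alternative label is a translation of the English one --
# a link that plain per-concept prefLabel/altLabel pairs cannot express.
label_links = [("L3", "translationOf", "L2")]

def translations_of(label_id):
    """Labels recorded as translations of the given label."""
    return [s for s, p, o in label_links if p == "translationOf" and o == label_id]
```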
UC 6-7 - tactical situations, product life cycle
antoine: use cases 6 and 7 are similar in features
... representing quite simple vocabularies, one is on tactical situation objects
... a list of unstructured terms
... each term has some label and a note
... when it should be used
... the product life cycle one is similar
ralph: in #6 it was difficult to see what it says about skos
alistair: me too
... this is not the sort of use case i am familiar with
antoine: i tried to interpret it, but apart from simple labelling i did not find anything
alistair: we could ask them what they want to do
guus: this is what they have...
ralph: maybe we want to ask submitters to point the wg at areas where they want additional things
alistair: use case 7 actually adds a question mark on skos (or owl)
... it is not clear why they want to skos
guus: i could think of reasons
antoine: they were in search of standard ways
guus: the problem with use case #7 is that it is out of scope
... or am i misunderstanding
ralph: it would be interesting question to ask them what they want skos for
bernard: may be a marketing issue
<scribe> ACTION: antoine to contact the submitters of #7 to see what they want to use skos for (as opposed to, say, owl) [recorded in]
alistair: it seems that they have a requirement to capture lots of things, that may need to extend skos
antoine: no, they really need only flat things...
... they need a structure to represent a natural language representation without a reasoner
UC 8-9 - GTAA (audiovisual archives) and CHOICE@CATCH (radio/TV)
antoine: number #8, gtaa web browser, accessing the thesaurus
... want to provide the user with a sophisticated vocabulary
guus: there is an archive for tv and radio programs
... they do annotation inside the content but also coming from broadcasting companies
... on the top level there are 8 different facets
... and several of the sub hierarchies have separate classifications
... and that is the whole thing
... they are specific for a facet
alistair: there is a thematic and a named hierarchy, and they are orthogonal
guus: we can get test cases out of it
ralph: 'only keyword and genres can also have broader/narrower relation', is that a restriction?
guus: this is a very flat structure, this is not really a restriction
antoine: use case #9, another use of the same vocabulary as use case #8,
... using a special algorithm that provides the user an indexer
... the idea is to explore the different links in the thesaurus to rank the concepts
... if you have to index a document with a set of candidate terms, and the thesaurus includes these terms, then that hierarchy is also presented
guus: I would have personally merged #8 and #9
antoine: #9 is provided in a functional view
... adding a representation to an application is nice
guus: people in computer science like automatic things
... but these people like to manually check
ralph: even though it does not add anything technically, it adds a new aspect, good for 'marketing' reasons
alistair: if you look at the traditional model, you manually build a vocabulary and index
... in this case the vocabulary is done manually, but an automatic indexing is good
... a use case document should have a business model section to show how different scenarios are used
guus: summary: #9 does not add anything to the requirements, but is an interesting use case scenario to keep
alistair: applications might want the integrity of their data, and expressing the constraints is a requirement
guus: there is already an issue on the level of semantics that skos has
<scribe> ACTION: alistair to summarize the aspects of semantics of the skos data model [recorded in]
JonathanRees: I'm part of Science Commons
alistair: a question on #8, relationships on terms between facets were computed
... question is how were these computed?
guus: the general problem was that there was a lack of relationships
... but I do not think there were much semantics
alistair: it also says the precomputed terms were not part of the iso standards
guus: good question, I do not know
GTAA use case as submitted
<scribe> ACTION: guus to check with veronique on the terms being outside the iso standard [recorded in]
UC 4 - bio-zen
Biozen use case as submitted
antoine: use case #4, bio-zen ontology framework
... the main point is to represent these medical vocabularies, keeping all the info that is useful for applications
... the application was not really detailed
... gene ontology and MeSH are the two examples for applications
... it has an example of representation of a term
... the main point is the fact that in the representation they mix all kinds of different metadata vocabularies
... they created some sort of metamodel using owl, and uses pieces of other vocabularies
... they use all these meta models to represent the medical vocabularies
... they use, eg, dublin core plus skos terms together
... they created an owl specification to mix these metamodel features
guus: why, within the definition, is there a representation of the part-of relationship
... does the mesh have its own hierarchy
alan: 'is a' is not 'part of', careful about that
guus: in skos we use the broader and narrower terms which are less defined
alan: obo originates in the gene ontology
... the latter has 'is a' and 'part of' relationships in it
... there has been a number of threads using this
... one thread is to translate obo to other formats, people used, eg, skos
... they have to decide where broader, etc, are used
... these actually threw away information but they are part of skos
... from my understanding at the time at least
... there is an effort to translate this into owl
... second thread of discussion is the 'quality' of the whole thing
<Elisa> There is a recently released related portal - Daniel Rubin and his group have created this and are working to develop it as a part of their NCOR work:
alan: what can be related to what, what are the description of that, more philosophical stuff
guus: some people make subproperties from, say, skos broader
... then you do not throw away things
alan: i had the issue on putting it with owl-dl
guus: that is a separate issue on the agenda (relationship to owl-dl)
Alan: Matthias is asking that as we fiddle with SKOS, we try to keep it OWL-DL compatible
Alistair: it's already not OWL-DL
alistair: if you go into library sciences, you will find papers on classification
... people there define fundamental facets, time, space, etc
... there are discussions on what these fundamental facets are
... that might come to the skos spec
... but if you want to do that, this should be done as an extension of skos (in my view)
guus: b.t.w., the relationship to owl-dl should be part of our issues list, not a requirement
... maybe if we define a set of constraints, that might lead to skos-dl...
... but this is a topic for discussion
alistair: it is tricky; extension by requirements is one of the major ways of extending skos, and all of those are annotation properties, and that leads to problems
<scribe> ACTION: alistair to rephrase the old issue of skos/owl-dl coexistence and semantics [recorded in]
bernard: it was good in the owl days to have implementations submitted, too
guus: for the moment it is good to collect the information, it is good to use them as test cases
... but this group is much smaller than the old owl group, and we have a resource problem
alistair: there are two wiki pages, and the shiny new skos web site: SKOS home page
<alanr>
alistair: the idea that anyone who has implementation should be able to add it
UC 10 - BirnLEX
antoine: use case #10 birnlex, a lexicon for neurosciences
... aims at providing several vocabularies
... they are the same as the bio-zen use case
... there is a mixture of different metadata models, skos, dc, foaf, etc
alistair: all they want is some of the properties like pref label, alt label, not the structure of labels
... if skos has good annotating support, people may just want to use that
guus: i interpreted this as having a lot of need for various types of relations
... there are many things in the examples term relations with other semantics
alan: the argument is that there is a desire for types of relationships that we may need in general
... ie, people insert tags into the rdf labels,
... shows the importance of this issue
Alan: the BIRNlex use case may bring in issues for our vocabulary management work
alistair: this is a bit of annotating just about everything
UC 11 - OBI (Ontology for Biomedical Investigation)
<alanr>
alistair: they do not want skos broader and narrower
... it is more that they want all type of documentation/annotation support
guus: the issue here is that you have your concepts
... how to document/annotate various concepts
... and what skos give you on that
<scribe> ACTION: alan to write down the general documentation requirements, in particular to those that are related to literal values, and how to represent that in skos [recorded in]
antoine: use case #11 quite similar
... I have not read it in much details
... it is once again to represent all these various vocabularies and linking/importing skos concepts to an 'own' ontology
... and extending skos relations
guus: my proposal: there are still use cases coming in
... we have to include facilities to evaluate use cases
... we should go through the list of the requirements and see if we can refine this
... and go through the issues' list
SKOS Requirements (sandbox)
-> SKOS Requirements sandbox
--R0. Information accessible in distributed setting
Guus: is this a requirement on SKOS?
Antoine: doesn't seem to change anything about SKOS or what it represents
Guus: seems to be a general Web requirement
Ralph: comes with RDF and the Semantic Web
RESOLUTION: drop R0. Information accessible in distributed setting as not SKOS-specific
--R1. Representing relationships between concepts
Bernard: "displaying or searching concepts" might give the impression of constraining our scope
... e.g. excluding annotation
Guus: how about "representing relationships between concepts"
... the ability to represent hierarchical and non-hierarchical relationships between concepts
-- R2. Representing basic lexical values (labels) associated to concepts
Antoine: "basic" as in "simple" as compared to more sophisticated scope notes
Guus: basic lexical _information_ or do you really mean to restrict to _labels_ ?
... "access to" not needed
-- R3. Representing links between labels associated to concepts
Guus: we have an issue related to this
... this requirement may change after resolving the issue
Alistair: we could suggest a point of view without making a hard requirement
... while reviewing all these requirements today
Guus: I suggest that any requirement with a related issue be marked as "soft"
-- R4. Representation of glosses and notes attached to vocabulary concepts
Antoine: "notes" means "scope notes"
Guus: so use the well-known term "scope notes"
Antoine: should we include administrative notes?
Jon: suggest "glossaries" instead of "glosses"
Guus: I thought there is a distinction between a glossary and a scope note
Alistair: what's the difference between 'gloss' and 'definition', then?
... SKOS hasn't used the term 'gloss' previously
Guus: "representation of textual descriptions", with text mentioning definitions, scope notes, ...
-- R5. Multilinguality
Tom: suggest "Representation of lexical information in multiple languages"
Bernard: multiple _natural_ languages?
Guus: yes, good addition
-- R6. Descriptor concepts and non-descriptor ones
Guus: distinction between concepts intended to be used for indexing and other concepts?
Antoine: yes
... what I had in mind was the existing skos:subject
... some concepts cannot be used as subject relationships
Guus: qualifiers are still relevant to indexing
... e.g. AAT vocabulary
... Furnishings ... furniture ... <furniture by form or function> ... screens
... the terms in <...> are not meant for indexing
Alistair: many folk would not consider the <...> to be concepts; they call them "node labels"
... they are labels for a grouping of concepts, the groupings are called 'arrays'
... they say the node label does not represent a 'concept'
... in the British standard it is quite clear that the node labels are only used in a certain way
... but AAT adds things to the thesaurus beyond the British standard
... it's just a matter of us wording this requirement correctly
... consider Mandragore; you're not supposed to use things from levels 1 and 2
... but the British standard demonstrates a requirement to be able to label groupings
Guus: propose to rephrase as "the ability to distinguish between concepts to be used for indexing and for non-indexing"
Bernard: is this really a requirement or just an issue?
Guus: is this in the ISO standard?
Alistair: no, in an ISO thesaurus any concept can be used for indexing
... there's no a-priori reason why something not intended for indexing in one context would be inappropriate for use in another context
Guus: suggest R6 is a soft requirement
... and add a new requirement having to do with grouping
... "the ability to include grouping constructs in concept hierarchies" -- as a soft requirement
Alistair: hierarchies are not the only place where node labels can be used
... node labels are also used in related terms
<aliman> see z39.19
-- R7. Composition of concepts
Guus: is this like conjunction and disjunction?
Alistair: the terms 'conjunction' and 'disjunction' don't really make sense as we're not talking about sets of things
... the classical way of talking about this is to talk about 'coordination', and 'coordination of things'
... I'm afraid to use set-theoretic language, as this would be jumping the gun
... we're not talking about True and False or sets, rather we're talking about concepts
... 'compound concepts' is a term used in the thesaurus world
... 'post-coordination' usually means that things are coordinated at search time but it typically really just means queries with more than one thing
... I don't recommend referring to pre- or post-coordination
Guus: I recommend linking 'coordination' to an explanation
Alistair: I'd be happy using 'composition' rather than 'coordination'
Guus: let's categorize into 'candidate requirements' and 'accepted requirements' (rather than 'hard' and 'soft')
-- R8. Vocabulary interoperability
Guus: mapping at the level of equivalence, more specific, less specific
... further things under discussion
... suggest dropping this, as we need to be able to test
Ralph: is R8 the general case and R12 a specific case?
Jon: I have another use case; our system supports the expression of relationships between terms in vocabularies we own and terms in vocabularies we don't own
... the reciprocal relationship would need to be endorsed by the owner of the other vocabulary
Guus: I can make equivalence statements in my own ontology and others can choose to use mine or not use mine
... valid to have different statements about mapping and determine to which you commit
Jon: imagine two indexing systems but a single retrieval system
Alan: a search for A should include B but not vice-versa?
Jon: yes
Fabien: is this specific to equivalence or is it a filter on the source?
Guus: back when we did OWL, I had to spend a long time defending owl:imports
... this may be outside the SKOS language, at a different level of the SemWeb stack
Jon: this is not about trust but about representing the intent of the thesaurus writer
Guus: but it's at a reasoning level
Alistair: we refer to 'SKOS concepts' and 'SKOS concept schemes'; perhaps we can also talk about 'mapping schemes'
Guus: like provenance?
Bernard: why isn't a concept scheme the same as a mapping scheme
Alistair: they're handled differently by applications
... an application wouldn't display a mapping scheme as a hierarchy
Bernard: but if you dereference all the concepts in a mapping scheme wouldn't you end up with a concept scheme?
Alistair: there's currently a loose recommendation that two concepts in a single concept scheme do not share a label
... this might be expressed as a logical constraint on a concept scheme
... but this constraint would be inappropriate for a mapping scheme
... if someone wants to capture in their RDF graph that there exists a set of mappings that he authored ....
... a concept scheme has a notion of 'containment'
... different integrity constraints if you're just collecting some mappings
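The label-uniqueness recommendation Alistair describes can be checked at the application level. A minimal sketch, assuming each concept carries a single plain-string preferred label (a simplification of the real data model):

```python
# Sketch of the loose integrity constraint on concept schemes: within one
# scheme, no two concepts should share a preferred label. This is an
# application-level check, not part of any SKOS specification text.

from collections import defaultdict

def duplicate_labels(scheme):
    """scheme: mapping of concept id -> preferred label.
    Returns the labels used by more than one concept."""
    seen = defaultdict(list)
    for concept, label in scheme.items():
        seen[label].append(concept)
    return {label: concepts for label, concepts in seen.items() if len(concepts) > 1}
```

As noted in the discussion, a collection of mappings would legitimately fail such a check, which is one argument for treating mapping schemes differently from concept schemes.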
Alan: if you use owl:sameAs, you're making a bi-directional assertion
... but the author of a vocabulary might want only a one-way assertion
Antoine: is related to the issue Sean raised about containment
Bernard: is there something about R8 that is not included in R12?
... "it shall be possible to record provenance information on mappings between concepts in different vocabularies"
RESOLUTION: R8 reworded to "it shall be possible to record provenance information on mappings between concepts in different vocabularies"
Tom: we use "vocabulary", 'concept scheme', and 'SKOS model'; let's stick to one term
Guus: I propose we drop 'concept scheme' and use only 'vocabulary'
Alistair: the ISO standard does not distinguish between term-oriented and concept-oriented; I made up this distinction
... you _can_ talk about whether the data model is term-oriented or concept-oriented, but not the vocabulary itself
Guus: consider a 'bank' vs. 'financial institution' example
... 'bank' implicitly defines a concept, implicitly it's a lexical label
... consequence for a thesaurus is that the term 'bank' cannot be used anywhere else
... in practice this distinction is useful, which is why I'd prefer to not use the term 'concept scheme'
... 'vocabulary' is more general and makes less commitments
Tom: R10 is really talking about the extension of the SKOS vocabulary, not the SKOS model
... could be confusing if we use the term 'vocabulary' generically
Alistair: the 'concept scheme' idea came from DCMI
Ralph: it's probably easier to refer to "the SKOS vocabulary" and "a SKOS concept scheme" to differentiate between the SKOS terms and a thesaurus written using the SKOS terms
Bernard: let's define terms near the start of the document
Antoine: I tried to consistently use 'vocabulary' for applications of SKOS and 'model' for SKOS itself
Alistair: there are implicit integrity constraints currently expressed only in prose
Tom: this is a big question that deserves more thought, let's not decide now
... SKOS model is like an application model (in DCMI world)
... R10 talks about extending both the SKOS model and the vocabulary of properties
Guus: for the time being, let's distinguish between the terms in the SKOS vocabulary and the application terms that use SKOS
Guus: for now, let's use "SKOS vocabulary" and "concept scheme", respectively, for these two
-- R9. Extension of vocabularies
now called: "R9. Extension of concept schemes"
Alistair: how do I express that I want to import another concept scheme into my own, or import only a part of another concept scheme?
Bernard: why import, just reference?
Alistair: how does a browser know the boundary of a new concept scheme?
Bernard: related to Protege issue of how to represent externally-defined items
Guus: do we include maintenance properties, revision information, etc.?
... I suggest we add a requirement related to versioning information
Bernard: R9 is about tools
Ralph: I suggest we keep the vocabulary management work, including versioning, as a separate task and not mix it into SKOS right now
Jon: example; replacing a single term with two terms
... not necessarily establishing a broader/narrower relationship but dropping the old term
Guus: we handle this in OWL by deprecating old terms
Alistair: my approach is to worry first about how to represent a static model
Guus: suggest deferring versioning questions to separate vocabulary management work and later evaluate whether any SKOS-specific properties are needed
Alistair: we have requests to be able to define concept schemes as 'we use everything in that scheme with the following additions'
... I need to find a better use case to motivate this
-- R10. Extendability of SKOS model
now "R10. Extendability of SKOS vocabulary"
Guus: means "local specialization of SKOS vocabulary"
... propose to rename this to "local specialization of SKOS vocabulary"
... get this for free
-- R11. Attaching resources to concepts
Antoine: this is skos:subject; annotating resource
Fabien: inverse of dc:subject?
Alistair: skos:subject is dc:subject with a range constraint
Guus: propose to rename this to "Ability to represent the indexing relationship between a resource and a concept that indexes it"
... I suggest this is a candidate requirement
-- R12. Correspondence/Mapping links between concepts from different vocabularies
now "... different concept schemes"
Bernard: related to mapping between labels in different concept schemes; that can be a separate requirement
Guus: at a minimum, equivalent, less/more specific, and related
Alistair: also composition
Alan: is 'related' a superproperty of 'broader' or 'narrower'?
Alistair: no
Alan: please document this explicitly in the spec
Guus: propose a new candidate requirement: Correspondence mapping links between lexical labels of concepts in different concept schemes
-- R13. Compatibility between SKOS and other metadata models and ontologies
Antoine: may not bring any additional requirements on representational features
Guus: what other models do we want to be compatible with?
Alistair: Dublin Core
... note that changes have been made to Dublin Core specifically to align it with SKOS
<Elisa> Another metadata standard we should consider here is ISO11179
Elisa: ISO 11179 is another related standard, on which Daniel Rubin and I have spent time recently
... Daniel is interested in 11179 because many biomedical ontologies use it
... by mapping 11179 to SKOS we bring a lot of those into the RDF world
Alistair: 2788 is a thesaurus standard and is very different from 11179, which is a metadata model
... there is a particular part of 11179 that is intended to talk about classification schemes
... it's obvious how SKOS and that part of 11179 relate
Alan: what does "compatible with" mean?
Bernard: does "compatible with" mean "does not violate the [Dublin Core] abstract model"?
Ralph: what sort of test cases could we construct to decide "is compatible" or "is not compatible"?
Alistair: could we translate a data instance using 11179 to a data instance in SKOS? how much data loss? how much data loss in transforming back?
Alan: the scope of 11179 is much larger than that of SKOS
Guus: it would be good to identify specific other models
... e.g. 2788, 11179 [part 3]
Alistair: 5964 (multilingual)
... I'd put 2788 as a stronger requirement than 5964
... interpretation of 5964 is harder
Alan: is SKOS a "metadata model"?
Guus: propose omitting the general requirements R13 and R15 and adding specific requirements for 2788, 11179.3
Alan: is all of 11179.3 relevant to SKOS? there's a lot of stuff in there
Elisa: I am happy to help narrow the scope
... and the US contingent in the 11179 group are physically close to me
-- R.14 OWL-DL compatibility
Guus: we can talk about a SKOS representation that is OWL-DL compliant
Alan: make it formal that annotation [sub]properties are allowed?
Guus: we have to be sure that we can complete our deliverables without requiring another WG to be rechartered
... we can make comments to the OWL comment list about annotation properties
Alan: there's a partial workaround available to SKOS
Ivan: what does DL compatibility mean when you have a processing model that includes some rules into SKOS?
... regardless of annotation properties, SKOS is already out of DL
Alistair: the rules don't have to be used
-- R.16 Checking the consistency of a vocabulary
Guus: issue raised earlier about semantics
[I think Tom's suggestion is agreed implicitly]
Jon: I've been updating the sandbox wiki in realtime
<JonP>
Guest: Steve Williams, hyperforms technologies, participating in 2-3 W3C binary XML WGs
... interest in semweb and related technologies, interested in AI.
Guest: opera software, interested in semweb since 98, bumped into danbri, then a graduate student of astrophysics, then hired by opera, mostly programming, chaals getting me into more on the semantic web.
... responsible for my opera foaf stuff
Current issues list:
guus: moving on to discussion of recipes for publishing RDF, have as input recipes document from SWBPD, incomplete document, Jon action to generate issues list, now on wiki
guus: suggest briefly review this list, pick out critical issues, spend time discussing critical issues, diego can play a role because has proposed resolution for one of these issues (we can discuss and decide on)
jon: first four issues left over from previous working group, diego's been working on first issue.
<ivan>
Diego's proposed resolution to COOKBOOK-I1.1
diego: [issue 1.1] already on mailing list, there was a TODO tag, issue regards configuration of apache to serve vocabularies, apache uses configuration files with directives, one of these directives is the overrides ...
<JonP> Diego's verification email:
diego: in original doc there was a TODO tag
next to overrides, to verify this is correct, I checked
this and discovered that the line was correct, no
additional overrides required, both overrides are
required.
... proposed to remove this TODO tag.
jon: is that resolution acceptable? how do we
handle? seems to be fine.
... vote as a group?
guus: can we write a test case?
diego: I have test case.
jon: diego sent around email, describing test cases and results.
<berrueta> test cases
guus: further discussion?
PROPOSED to resolve issue 1.1 as per email of
ralph seconds
no objections
RESOLVED
<scribe> ACTION: jon to update issue list as per resolution of issue 1.1 [recorded in]
jon: skip over second issue, because the TODO
is that it references recipe 6, which doesn't exist
... issue 1.2 and 1.4 are essentially the same.
guus: PROPOSED to drop issue 1.2
diego seconds
no objections
RESOLVED to drop issue 1.2
<scribe> ACTION: jon to update issue list as per dropping of 1.2 [recorded in]
jon: issue 1.3 - why performing content
negotiation on the basis of the "user agent" header.
... is not considered good practice.
bernard: whole section should be in an appendix, why in main body of text?
aliman: karl suggested move whole content negotiation to appendix
timbl: if you add features to a user agent,
because it stunts the deployment of new browsers, e.g.
folks at opera, resulted in user agents lying about who
they are
... e.g. some browsers ship with lying user agent fields, unsatisfactory, better to look at the mime types
... sometimes in practice necessary to look at user agent field to pick up bugs, where you know there are specific bugs, particular trap for particular browser.
jon: potential resolution is to explain the problem with using user agent, as per stunting development?
timbl: yes
diego: editing doc to explain this?
ralph: we didn't want to break semantic web applications which don't include accept header, so set RDF as default response
timbl: two cases, one is you're serving data, but if you are trying to do the trick of serving either an RDF or HTML version, but only put if you are content negotiating ... TAG says something about identity of resource
aliman: but use 303 so don't have to have same info content
timbl: yes
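(Illustrative aside, not part of the minutes: the style of Apache recipe under discussion, which 303-redirects a vocabulary URI based on the Accept header, looks roughly like this; the paths and URIs are invented.)

```apache
# Hypothetical .htaccess sketch; requires mod_rewrite.
RewriteEngine On

# Clients that accept RDF/XML are 303-redirected to the RDF document...
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^vocab$ /vocab.rdf [R=303,L]

# ...all other clients get the HTML documentation.
RewriteRule ^vocab$ /vocab.html [R=303,L]
```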
bernard: uncomfortable with hack
jon: real world, applies to IE7?
<timbl> The TAG I think says the same URI may conneg go to different representations ... but they should convey the same information
guus: someone take action to look at IE7
jon: regardless of IE7, should still leave hack in.
aliman: I agree
jon: issue 1.3 is actually to explain why the hack is slightly bad
ralph: new issue would be to look again at the hack
jon: two separate issues
ralph: if IE7 does the wrong thing, leave it, if IE7 does the right thing then drop the hack (except if you have to support a specific community)
bernard: I will raise this issue
<scribe> ACTION: bernard to raise new issue re IE6 hack [recorded in]
<scribe> ACTION: diego to look at IE7 accept headers [recorded in]
ralph: test cases?
aliman: just what's in the document already.
jon: 1.3 issue has been raised.
ralph: I move we consider 1.3 open - there is a TODO that needs to be done, timbl, how likely is it that the TAG will write something about use of the user agent header?
timbl: may be something already, otherwise need to send email to TAG
ralph: I will own this issue
<scribe> ACTION: ralph propose resolution to issue 1.3 [recorded in]
jon: move on to issue issue 1.4 ... recipe 6 is not there
bernard: do we need a recipe 6
Alistair: one of the reasons people like slash
namespaces is because the response to a GET is specific to
the requested resource and you can incrementally learn more
with additional GETs
... in recipe 5, if you request RDF you get redirected to a namespace document that describes everything
... recipe 6 was intended to permit serving just a relevant chunk of RDF data
<timbl> see:
TimBL: d2rdf does this ... a SPARQL query can
navigate a graph by recursively pulling in documents
...e.g. d2r server does that virtually, enthusiastic about this group pushing linked data, critical thing about linked data is that when you dereference linked data you get all arcs in and out then human being can navigate the graph, then also SPARQL query can find all the graphs by pulling in all the relevant docs, not as efficient but good, important to make all the backlinks.
... it's good to remind people to include backlinks; dereferencing a student should give you a pointer back to the class
...minimum spanning graph, RDF molecule
... patrick stickler CBD only arcs out: Concise Bounded Description [Stickler, 2005] ... this is an important recipe to include
... proposed workshop for web conference about linked data, didn't have space.
<timbl> The D2R Server generates linked data automatically.
jon: this should be opened, this recipe should be written.
guus: open means we work on it now.
timbl: two separate points to be made, first is the recipe redirects you to a file, but the file is virtual, most web servers are virtual, if you redirect to a SPARQL query, doesn't mean that the SPARQL query is evident in the URI, inside that you use a rewrite rule, give a nice URI to your _part_ then use a rewrite rule to create the SPARQL query, hide the SPARQL query.
jon: useful to say that as part of the recipe, best handled by a service that will handle on the fly.
timbl: important to separate, e.g. FOAF files do it by hand, in some cases there is a lot of hand written stuff, fact that something is generated automatically may apply to other recipes also.
ralph: we are a deployment group, rewriting nice URIs to query URIs, good to show this to get more deployment.
guus: we are in a position to write a resolution to this section, who can do?
ralph: examples of sparql services we can use?
timbl: geonames? d2r server. one does 303 redirect to URI encoded SPARQL query.
ralph: sounds like some code existed.
jon: two parts to this, first part is data, second part is server configuration. We're looking for a document fragment example and server config.
ralph: wordnet is an obvious choice, but the W3C need to commit to D2R service.
diego: I will own the issue
guus: maybe diego can talk with Ralph about wordnet, nice use case, widely used.
ralph: need to get W3C web servers to support the service, but plausible.
jon: issue 2.1 (QA comments) ... karl raised wordsmithing and structural comments, lots, something for each section, I couldn't break out individual issues, I'd like to propose we simply open this, I'll take ownership, I'll implement most of his suggestions and propose as modification to the document.
guus: comments from QA people, we owe them a response. Need to go through each response, say what we did.
jon: Issue 2.1 ... raised by me (wiki lies) recipes are specific to apache server, may be applicable in non-apache environments, do we want to provide general template that describes recipes in general, or say that recipes can be implemented by a script.
bernard: general template is possible?
ralph: is there is one web master would recognise then can look at it. cookbook is very practical, make it simple for server admin to do it, if someone wants to submit recipes for other environments then good.
jon: common principles e.g. redirect based on content negotiation, I don't know enough about other environments to say how.
ralph: like to encourage others to contribute recipes
jon: suggest we provide a place for people to submit new recipes for other environments
ralph: happy with mailing list for proposed translations.
ivan: wiki? ... esw wiki?
alistair: the diagrams were intended to
provide a schematic overview of the behavior we were trying
to implement
... hopefully this would give people enough information to implement in other environments
jon: leave this in a raised state? open it?
ivan: resolution to open wiki page.
ralph: willing to own issue, proposed resolution to create wiki page.
guus: publication schedule, can say we don't think it's a high priority if we think resources are limited.
ralph: should be ok, but may get not good configs
jon: issue 3.1 (raised by me) discussion about
differentiating between versions, one reason we use
redirects to supply most recent snapshot of a
vocabulary
... actual document.
Alistair: consider Dublin Core; it has a fixed URI that is
redirected to the current version of the vocabulary
... an application may be able to deal with versioned URIs, and access older snapshots of the vocabulary
jon: why not use mod_rewrite instead of
redirects, can use redirects to make version ???
... proposing a complete suggestion for a naming convention for handling this sort of thing in a recipe ...
... link is in extended requirements section of original doc
<timbl> wanted to say 1) the redirect does not convey that semantics and so the semantics need to be conveyed elsewhere and (b) the redirect is an overhead
... and (c) metadata in URI is covered by the TAG in a new finding and is in general bad
timbl: problem with redirects is twofold, part
is when you redirect, redirect doesn't say anything about
what the relationship between source and target is, doesn't
deliver the semantics you want, also takes time - overhead
- adding it through overhead is to be avoided if possible.
However to be able to track differences between versions of
an ontology is very useful, on web different translations,
different content types, design issues note about this
/gen/ont
... TAG is aware of this but hasn't tackled from RDF point of view, I wrote an ontology:
timbl: gen ont if best practices could just
dump the map, some metadata can get from apache, if running
CVS can generate metadata from previous versions if you've
got web CVS. If you've got content negotiation can find out
what all the options are from apache config
... that would be ideal, ideal pattern which nobody does at the moment - don't know if I should ask this group or another to look into this.
... suggest this group push this out.
ralph: I'd like us to consider this as a
candidate requirement for discussion, but not for this
document, because this doc was one of what was expected to
be a collection of docs coming out of SWBPD, several
aspects of VM e.g. how to serve them, versioning,
properties about a vocab e.g. provenance, best practices
for all that. SWBPD imagined there would be other docs to
go along with this, I image this would be part of our work,
Jon's solution is plausible but
... consider this as candidate requirement for other VM work, not to try and solve for this doc (recipes).
jon: ralph is suggesting part of resolution of
this issue is to start another document, and that this part
of recipes should point to that document. Currently
recipes punt, lots of different ways to do it, nobody says
what is best way. Part of utility is to say here is a
recipe, a way to do it, this is a generic enough recipe to
work in enough cases.
... so we should reword this doc?
ralph: shows up on our deliverables page ... "principles for management..." we can point to this document.
guus: propose to leave this issue as raised, go on with recipes without resolving it, indicate that this issue is intended to be resolved by another doc.
Ralph: specifically, our deliverable 3. Principles for Managing an RDF Vocabulary
Alistair: in anticipation of needing to use RDF to describe
the relationship between a dynamic thing and static
snapshots of that thing, I've published net/d4:.
d4 may provide a basis for discussion
... also GRDDL seems to be in a similar space; making assertions about the relationship between various documents you might be able to access from a namespace
jon: issue 3.2 (testing) diego has written
some unit tests, useful if we could provide a service for
developers who wanted to utilise cookbook recipes, provide
as a server validation service officially
... this would allow you to specify you wanted to test a particular server against a particular recipe
guus: not an issue with the current doc,
ralph: intermediate step to publish the test cases?
jon: thinking more like RDF validation
service, you point service at URL and say which
recipe.
... diego has already written the code, we just don't have the service.
ralph: you have test service?
timbl: great idea, presentation suggests that you may get more people using the validator than going to the doc, so service could point people out to the doc, start from existing situation and lead people by the hand to appropriate recipes.
<timbl> (Service could be implemented in Javascript within the document ;-) not.
guus: question of timing, has to be synchronised.. From pragmatic view, suggest take an action to look at possibilities and report back, time frames and synchronisation.
ralph: can you commit to hosting?
diego: I can write the code...
timbl: can it run in a browser?
diego: runs on server side.
guus: before resolving, do some suggestion on the list about how to realise this.
ralph: I could put this in category of vocabulary management validator, then falls into big validator project we have.
timbl: rethink about how to support validators, logically if you're going to validate, you can go so many ways ... top of an iceberg.
guus: open issue, diego is owner, first to propose a timescale.
jon: issue 3.3 raised by diego, mod_rewrite is required for all recipes, but we don't say so.
ralph: and apparently not there by default.
guus: jon is to be issue owner.
ralph: probably worth saying here's what apache config file to go to to cause it to be loaded.
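(Illustrative aside, not part of the minutes: on a stock Apache httpd install, mod_rewrite is typically enabled by uncommenting its LoadModule line in httpd.conf; the exact module path varies by distribution.)

```apache
# httpd.conf; exact path differs per installation
LoadModule rewrite_module modules/mod_rewrite.so
```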
guus: close this discussion on the recipes,
thanks to all, moved to state where can see progress.
... have to think carefully about status of RDFa, Note? relationship with HTML? also think carefully about time horizon for SKOS recommendation, test cases and implementations worry, can see six months document to go to last call, also need a test suite in place, similar to OWL so tool developers can test stuff. Document itself in good shape, but getting to candidate rec may take more time.
... to keep to schedule, we need to have the test suite stage by the summer.
Cookies provide a useful means in Web applications to store user-specific information. JavaScript developers have been doing the bulk of cookie-related work for many years. ASP.NET also provides cookie access through the System.Web namespace. While you shouldn't use cookies to store sensitive data, they're an excellent choice for more trivial data such as color preferences or last-visit date.
Pass the cookies
Cookies are small files stored on the client computer. If you're a Windows user, examine the Cookies directory in your user directory, which is within the Documents And Settings directory. This directory contains text files with this filename format:
username @ Web site domain that created the cookie
The text files may contain name/value pairs, separated by an equal sign, along with more information. Let's turn our attention to working with these files in ASP.NET.
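Outside ASP.NET, the same name/value layout can be illustrated with a few lines of plain JavaScript; the parser below and its sample data are invented for illustration, not part of any cookie API:

```javascript
// Parse a "name=value; name2=value2" style cookie string into an object.
// Values may themselves contain "=" signs, so split on the first "=" only.
function parseCookiePairs(raw) {
  const result = {};
  for (const part of raw.split(";")) {
    const trimmed = part.trim();
    if (!trimmed) continue;        // skip empty fragments
    const eq = trimmed.indexOf("=");
    if (eq === -1) continue;       // ignore malformed entries
    const name = trimmed.slice(0, eq);
    const value = trimmed.slice(eq + 1);
    result[name] = decodeURIComponent(value);
  }
  return result;
}

// Invented sample resembling stored cookie data:
const pairs = parseCookiePairs("LastVisited=2014-05-01; theme=dark");
console.log(pairs.theme); // "dark"
```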
Cookie interaction in ASP.NET
The .NET System.Web namespace has three classes that you can use to work with client-side cookies:
- HttpCookie: Provides a type-safe way to create and manipulate individual HTTP cookies.
- HttpResponse: The Cookies property allows client cookies to be manipulated.
- HttpRequest: The Cookies property allows access to cookies that the client maintains.
The Cookies property of both the HttpResponse and HttpRequest objects returns an HttpCookieCollection object. It has methods to add and retrieve individual cookies to and from the collection.
HttpCookie class
The HttpCookie class allows individual cookies to be created for client storage. Once the HttpCookie object is created and populated, you can add it to the Cookies property of the HttpResponse object. Likewise, you can access existing cookies via the HttpRequest object. The HttpCookie class contains the following public properties:
- Domain: Gets or sets the domain associated with the cookie. This may be used to limit cookie access to the specified domain.
- Expires: Gets or sets the expiration date and time for the cookie. You may set this to a past date to automatically expire or delete the cookie.
- Name: Gets or sets the cookie name.
- Path: Gets or sets the cookie's virtual path. This allows you to limit the cookie's scope; that is, access to the cookie may be limited to a specific folder or directory. Setting this property limits its access to the specified directory and all directories beneath it.
- Secure: Signals whether the cookie value is transmitted using Secure Sockets Layer (SSL).
- Value: Gets or sets an individual cookie value.
- Values: Retrieves a collection of key/value pairs contained within the cookie.
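One detail worth knowing: when a cookie holds multiple values, ASP.NET stores the Values collection as a single ampersand-delimited string (roughly k1=v1&k2=v2). As a rough, non-ASP.NET sketch of that encoding in JavaScript (helper names invented):

```javascript
// Serialize and parse ampersand-delimited subkeys, similar in spirit to
// how a multi-valued HttpCookie is stored as a single cookie value.
function serializeSubkeys(obj) {
  return Object.entries(obj)
    .map(([k, v]) => `${encodeURIComponent(k)}=${encodeURIComponent(v)}`)
    .join("&");
}

function parseSubkeys(s) {
  const out = {};
  for (const pair of s.split("&")) {
    if (!pair) continue;
    const [k, v = ""] = pair.split("=");
    out[decodeURIComponent(k)] = decodeURIComponent(v);
  }
  return out;
}

const stored = serializeSubkeys({ color: "blue", font: "12pt" });
console.log(stored); // "color=blue&font=12pt"
```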
While this isn't an exhaustive list, it provides everything you need to work with cookies. A VB.NET example will give you a better idea of how it works:
Dim testCookie As New HttpCookie("LastVisited")
testCookie.Value = DateTime.Now.ToString
testCookie.Expires = DateTime.Now.AddDays(7)
testCookie.Domain = "builder.com"
Response.Cookies.Add(testCookie)
This code creates a new cookie with the name LastVisited and populates the value with today's date and time. Also, the cookie expiration is set to one week, and the associated domain is populated. Once the object is created, it's added to the client's cookies collection via the Response.Cookies object's Add method. The HttpCookie constructor method has two variations:
- HttpCookie objectName = New HttpCookie("cookieName")
- HttpCookie objectName = New HttpCookie("cookieName", "cookieValue")
Also, the Response object contains a SetCookie method that accepts an HttpCookie object.
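A cookie is deleted by setting Expires to a date in the past; a cookie with no expiration lives only for the browser session. That rule can be sketched in plain JavaScript (the helper and its cookie objects are invented, not an ASP.NET API):

```javascript
// A cookie whose expiration lies in the past should be discarded;
// one with no expiration is a session cookie and survives until the
// browser closes. Hypothetical helper for illustration only.
function shouldDiscard(cookie, now = new Date()) {
  if (cookie.expires === undefined) return false; // session cookie
  return cookie.expires.getTime() <= now.getTime();
}

const now = new Date("2014-05-01T12:00:00Z");
const fresh = { name: "LastVisited", expires: new Date("2014-05-08T12:00:00Z") };
const stale = { name: "OldPref", expires: new Date("2010-01-01T00:00:00Z") };
console.log(shouldDiscard(fresh, now)); // false
console.log(shouldDiscard(stale, now)); // true
```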
Where's my cookie?
Once cookies are stored on the client, there are various ways that you can access them. If you know the cookie name, you can easily access its value(s) through the HttpRequest object. The following VB.NET line displays the value associated with the cookie:
Response.Write(Request.Cookies("LastVisited").Value)
In addition, the complete list of cookies may be accessed via an HttpCookieCollection object. This allows the cookie list to be processed with a for loop. The following C# code provides a sample:
HttpCookieCollection cookies;
HttpCookie oneCookie;
cookies = Request.Cookies;
string[] cookieArray = cookies.AllKeys;
for (int i = 0; i < cookieArray.Length; i++) {
    oneCookie = cookies[cookieArray[i]];
    Response.Write(oneCookie.Name + " - " + oneCookie.Value);
}
Here's the same code in VB.NET:
Dim i As Integer
Dim oneCookie As HttpCookie
For i = 0 To Request.Cookies.Count - 1
    oneCookie = Request.Cookies(i)
    Response.Write(oneCookie.Name + " - " + oneCookie.Value)
Next i
Stability is an issue
The cookie files are stored on the client machine, so your users can delete or edit them at any time. In addition, some users may disable cookies. For this reason, never rely on that data. You should store critical information on the server—preferably in a database. Also, you should use cookies only for minor information that may customize the user experience.
Storing critical information in a cookie is considered poor programming because it can be viewed easily, since it resides in a file on the client machine. One way around this is to use SSL; a better approach is to avoid cookies with sensitive information.
Can I use cookies?
Users may disable cookie support in their browser. You can access this setting in your code to determine if cookies are supported. The Request object makes this determination easy. The following VB.NET code shows how it's used:
If Request.Browser.Cookies = True Then
    ' Work with cookies
Else
    ' No cookie support
End If
This may be combined with code to utilize cookie values. The following C# code snippet tests for cookie support and populates a text box control accordingly (whether a cookie is present or not):
if (Request.Browser.Cookies == true)
{
    if (Request.Cookies["LastVisited1"] == null)
    {
        HttpCookie newCookie = new HttpCookie("LastVisited1", DateTime.Now.ToString());
        newCookie.Expires = DateTime.Now.AddYears(1);
        Response.Cookies.Add(newCookie);
        this.txtName.Text = "Is this your first time?";
    }
    else
    {
        this.txtName.Text = "We haven't seen you since " +
            Request.Cookies["LastVisited1"].Value;
    }
}
You could place this code in the Page_Load event of an ASP.NET page.
Another way to store data
ASP.NET provides several methods for storing user-specific data. One of the older methods is cookies. While cookies aren't the best vehicle for sensitive data, they're an excellent choice for benign items such as color preferences, last-visit date, and so forth. While this data is important, it isn't the end of the world if it's lost when the user's computer crashes.. | http://www.techrepublic.com/article/maintaining-client-data-with-cookies-in-aspnet/ | CC-MAIN-2017-34 | refinedweb | 1,055 | 50.02 |
meet the

THIS IS A PUBLICATION OF
may 2014
Digital Magazine
Table of contents

Natudis, Kroon and Hagor... we like Nature (page 4)
The changes at Natudis, Kroon and Hagor are happening one after the other...

Natudis is growing in all areas (page 5)
You must have noticed. Natudis Nederland B.V., including the fresh produce wholesaler Kroon and wholesaler Hagor located in Belgium...

Get inspired by... 25 original types of packaging (page 6)

25 brazen questions for... Peter van der Schoot, Business Unit Manager at Kroon. (page 8)

25 popular Natudis products: All the little treasures in a row... (page 10)

You know it's springtime when... (page 12)

25 brazen questions for... Damien de Breuck, Business Unit Manager at Hagor. (page 14)
Natudis, Kroon and Hagor... we like Nature

Welcome to the new Nature Magazine, a publication with which to share all kinds of Nature news. The changes at Natudis, Kroon and Hagor are happening one after the other. This is yet another result of the changes. From now on, we will be regularly informing you on what we're working on; everything that moves us, inspires us and gets us going.

For over 30 years, we as a Benelux wholesaler have worked to promote healthy food and a healthy lifestyle. Every day, we make efforts to improve our product ranges even more and to introduce new products that fit in with our natural wholesale trade. For over 25 years, our customers and business contacts have known our Natuurwinkel formula, of which we have recently opened our 25th shop. Reason enough for us to share this with you, our readers. To celebrate this together; a celebration of growth and the future.

It's important to us to shape a sustainable world with you by offering the best products that nature gives us, including both groceries and our fresh product range. To encourage natural nutrition and the use of natural food together with our (future) customers and to achieve a balanced dietary pattern, but also to work towards a more balanced Earth. We are able to receive food from the cross-pollination between human and nature. With that comes a certain responsibility for this nature, for our farmers and manufacturers, for you as an entrepreneur and for our customers. Our dream is for you to share these ideals with us so that everyone can make their own contribution. Wouldn't that be great?

In this publication, you will get to know us a bit better, and we would like to inspire you with fun information and ideas. All connected with nature in a natural lifestyle. Happy reading!
Petra van der Linden - Steenvoorden Brussen, CEO of Natudis
Natudis is growing in all areas
You must have noticed. Natudis Nederland B.V., including the fresh produce wholesaler Kroon and wholesaler Hagor located in Belgium, was officially taken over by the family company Vroegop Ruhe & Co B.V. last April. There has been a lot of media attention surrounding the takeover and how we at Nature experienced it. But how well do you know the Vroegop Ruhe & Co B.V. company? We expect that you are curious about the history of our new shareholder. That’s why we’ve put together a brief history of how Vroegop Ruhe & Co B.V. came to be.
The forties

Vroegop-Windig is a family company with a rich history. After a few years of experience in his own produce shop, Piet Vroegop started a wholesale business in 1940 in national produce at the Amsterdam Food Centre. After a few years, his sons also joined the business, giving the business the name P. Vroegop & Sons. To accommodate changing customer needs, their product range was expanded to provide produce from abroad.

In 1966 for example, the business merged with Ruhe, a specialist in citrus fruits and bananas. Furthermore, the Windig takeover in 1996 brought in specific knowledge of exotic foods. In addition to produce import and sales, logistics services took on a much larger role. In 2006, an ultra-modern distribution centre was opened in Bleiswijk, from which they could supply products to mainly retail customers.

2014

Apart from Natudis, Vroegop Ruhe & Co B.V. currently consists of 2 subsidiaries; Vroegop-Windig and De Kweker. Vroegop-Windig is a produce wholesaler (in potatoes, fruits and vegetables) and organic, sustainable produce selling nationally in the retail and food service industries. De Kweker is a wholesaler in fresh foods (produce, meats, fish, cheese, dairy and bread) and dry groceries as well as non-food (kitchen and restaurant supplies) with self-serve wholesale markets in Amsterdam and Purmerend and delivery wholesale markets in Amsterdam, Wervershoof and Texel for food professionals.

Bright future

Pieter Vroegop (CEO Vroegop Ruhe & Co B.V.): "This takeover is very important to us, because we can learn from Natudis' knowledge and experience with organic food. We already had a focus on organic food, but with this collaboration, I expect to be able to take some important steps sooner, giving Natudis even more room to work independently in the organic market".

We had already started our collaboration with fresh produce wholesaler Kroon, but it too will be intensified. We also have a very positive impression of the Belgian wholesaler Hagor. All in all, the takeover has been an enrichment of the company.

Solid & Passion

Through the years, Vroegop Ruhe & Co B.V. has proved to be a solid business which can adapt well to changing situations and can establish close cooperation with staff, customers and partners through mutual dedication. The passion for the trade, and market and product knowledge can be found everywhere in the company.

Our companies fit together very well. Therefore, we are very happy with the takeover. We feel that Vroegop Ruhe & Co B.V. supports our vision and can continue our set course. "Our view is that joining forces between Vroegop Ruhe & Co B.V. and Natudis Nederland B.V. will lead to a great collaboration, further increasing the focus on a healthy Earth and healthy food", says Petra van der Linden - Steenvoorden Brussen (CEO of Natudis).
Get inspired by...
25 original types of packaging

Empire State Spaghetti!
Handy little olive oil for your salad to go
A scoop of butter...
Seeing through rosé-coloured glasses...
Cupcakes, straight from the oven...
Gnome crumbs...

Source: Pinterest.com/pacegr/original-packaging/
25 brazen questions for...
Peter van der Schoot, Business Unit Manager at Kroon
Most of us know Business Unit Manager Peter van der Schoot. We see him at trade fairs, in shops, at branch meetings... But how well do we really know him? In this interview, we'll get to know Peter even better as a professional, but especially as a human of Nature.

What is the weirdest item in your product range?
Drunk cheese………

What makes it so weird?
In Italy, they used to charge taxes on cheese. To avoid having to pay taxes, the farmers used to hide the cheese under the grape must: the little skins and seeds that were left over from making wine. After a few weeks, when the cheeses were taken out again, they found that they had developed somewhat of a unique taste. Nearby Venice, at the foot of the Alps, a certain cheese factory still makes "Drunk cheese" in the exact same way.

Which item in your product range will you get out of bed for?
Our Demeter asparagus from the Watertuin. Our grower Gaveshi Reus really has the most fantastic asparagus; I always look forward to having them again. Whenever springtime comes around, I get that feeling again… and Peet de Krom's strawberries; those are really just summer sweets for me.

What is your favourite professional magazine?
Bouillon, which is not really a professional magazine but a culinary magazine, with great culinary stories. It's a real treat to read.

What item from your product range would you rather not eat/drink?
Wheat grass juice.

Why not?
You should try it, then you'll know why...

"Why I prefer not to drink wheat grass juice? You should try it, then you'll know why..."

Where would you like to work if you didn't work for Kroon?
I would probably open my own organic wine bar. It might sound like a dream, but I would love to let people taste delicious wines and tell them all about them.

What position would you most like to have if you weren't a business unit manager?
I would love to be a reporter for the Michelin Guide.

What is your least favourite thing about your current position?
Approving invoices!

Which kind of service could you think of that does not exist yet?
A practical way of bringing the consumer closer to the manufacturer, farmer or grower the moment they make a purchase.

What is the most unusual request you've ever got from a customer?
A request to provide organic, edible little flowers for on a customer's wedding cake.

And what did you say to that unusual request?
Of course we can.

Which shop inspires you the most?
Many different ones; I really like Eataly in Italy, where fresh products are prepared on the spot in the shop. Fresh & Wild in London, one of its first organic shops. I also went to an organic shop in Dublin, in the basement they had an organic wine bar, with an organic delicatessen shop on the ground floor, where they prepare meals on the spot that you can take with you, and upstairs on the first floor they have an organic restaurant, where they use fresh ingredients to make delicious meals. In short: it's inspiring. I find it generally inspiring to visit shops; it is often the smallest details that make the shop more than just a location where you purchase your products.

What would you like to see change right away in our branch?
I would like to shift our thinking to one that focuses more on the entire chain, valuing all the components of the chain. That also includes the wholesale market, which is often seen as a redundant part between the farmer and the shops. In my opinion, the wholesale market can help to reinforce the chain and functions as the oil between the cogs of the chain. That way, we can achieve efficiency, coordination of supply and demand, and of course the whole logistics circus.

Which Facebook or Twitter accounts do you like to follow?
None; to be honest, I actually can't stand Facebook and Twitter. I don't really feel the need to constantly tell the world where I am. I also have the feeling that people tend to only put out positive images of their lives, making it a one-sided medium, so I guess it's more of a Fakebook. I realise this may make me old-fashioned.

...Italy. Buratta is cream-filled mozzarella. I had some when I was in Italy. Tomato carpaccio with buratta, a bit of sea salt and delicious olive oil. It was really amazing. I would love to import it. It would be quite a challenge since it can only be kept up to 14 days, is made by a small-scale Italian farmer and is unknown to many consumers. I do, however, hope that we can start importing it before summer starts, as it's too delicious not to.

How do you keep up to date on the newest developments in business/management?
Mostly by talking to a lot of people in the industry.

Which development in the previous question do you agree most with?
It's more of an insight than a development. It's from the book by Stephen Covey, about the 8 habits of effective leadership. According to Covey, effective leadership and change are best done in three steps. The first step comprises three habits, geared towards personal and individual development. They enable you to ...hall in Papendal. It showed me just how much you have to train to reach your goal. And that willpower alone is not enough; you also need to have enormous passion for what you're doing.

Where will Kroon be in five years?
In five years, it will have both feet on the ground. We always keep working hard to provide the service and quality we have had until now. You're only as good as your last achievement, which means that you have to offer quality day in day out, and always do your best to find new products, making our customers and consumers happy.

What is the best promotion campaign you have ever led?
Red de Rijke Weide Kaas [Save the Rich Pasture Cheese]. That was a great project we set up together with Henk Pelleboer and the Dutch Bird Protection Foundation. It was a prime example of how you can involve consumers in societal issues with a product and make it vi-
establish yourself as an independent person.
How do you want your customers to see
sible in their living environment. The money is
The next three habits are about effective col-
Kroon?
directly invested in Rijke Weide [rich pasture].
laboration and constitute the second step. The
As an innovative, collaborative wholesaler.
seventh habit is about developing and maintai-
One who’s capable of connecting manufac-
Which business decision for Kroon will
ning the other six habits. This habit constitutes
turers and shops/consumers. So that the
you never regret?
the third step, together with the eighth habit:
products we sell are not anonymous and thus have added value for consumers. If a product
The collaboration we entered into with
people’s ability to live up to their own potential and to inspire others to do the same.
has that added value, it’s not about the lowest price, but more about the right price.
into practice?
What other player in the organic foods
By working on the company together with the
industry do you respect a lot?
Kroon team, with an enormous passion for
I respect many players in this industry, inclu-
organic products. In this process, everyone
ding De Groene passage. That collection of
has their own individual qualities which contri-
entrepreneurs has been around for over 15
bute to the total result. Everyone is incredibly
years, and is still a role model in sustainability,
important in this, as we are all dependent on
innovation and collaboration. Not only in its
each other. This togetherness has given our
story, but also in day to day operations.
people pride and great drive to develop Kroon What other player in the non-organic foods industry do you respect a lot? Which item would you like to add to
Daphne Schippers; at the World Champion-
your product range that you don’t al-
ships in Moscow she won the first women’s
ready have?
bronze medal in Dutch history. A year ago, my
Buratta; we found a fantastic manufacturer in
in the most beautiful organic company in the Benelux, with a new entrepreneurial zest, with a focus on service, quality and involvement
And how have you put that development
even further.
Vroegop 3 years ago, which has now resulted
daughter got to train with her at the Olympic
with our customers and suppliers.
Foun ded: In th 16/0 e gr 9/19 oup: Loca 93 22/ tion 0 4 : /201 0 Food Cen • A tre i ppro nA mst x. 10 • Fr erda 00 p esh m r oduc a n d fr t • D s ; a ozen elica ll or gan tess prod en b ic ucts rand s lik e PU UR
25 popular Natudis products

Fertilia - Mild olive oil
BioNut - Mixed nuts
Morga - Vegetable broth 1 kilo
Schär - Rustico multigrain
BioNut - Walnuts
Terschellinger - Cranberry juice
Horizon - Mixed nut paste
Schär - Ciabatta Rustica
Green & Blacks - Dark chocolate 85%
EcoMil - Almond drink
Vivani - Dark chocolate 92%
Ekoland - Apple concentrate
Dr. Karg - Spelt with seeds
Schär - Pain Campagnard
Allos - Agave syrup
Morga - Vegetable broth 400 gram
Lima - Rice drink original
Omega & More - Coconut oil
Schär - Pan Carré
De Halm - Oatmeal
Horizon - Almond paste
Schär - Ertha sourdough bread
Provamel - Drink
Vivani - Dark chocolate 85%
BioNut - Brown almonds
You know it’s springtime when...

1. you wake up in the morning and you hear birds singing
2. cafés have put their tables outside and you have to wait for a seat
3. it’s too cold for skimpy spring clothes in the shade
4. your cheeks get sunburned
5. the smell of freshly mowed grass is in the air
6. the cows are dancing in the meadow!
7. buds and blossoms appear in the trees
8. meadow larks are returning to the Dutch meadows
9. rosé tastes like rosé again
10. you can get on your bicycle in the morning without putting gloves on first
11. you can hang up your clean laundry outside again
12. you see the first little lambs of the season out in the meadows
13. freckles that had been hiding during winter time come out again
14. you don’t need your blush and bronzing powder anymore
15. doors to the backyard can stay open all day, which pleases the pets
16. you can walk outside in your pyjamas without freezing
17. you get less disciplined in making appointments, deadlines etc.
18. you stay outside much longer, because you want to keep the feeling of that first sun on your skin
19. you feel like eating delicious salads as a main course
20. you refill the flower pots and get the garden furniture outside
21. it’s still just the right amount of nippy out to order a hot beverage
22. bare legs!
23. you wake up on Sunday from the neighbours rinsing their backyard terrace
24. the sun doesn’t go down until late
25. you can order iced tea instead of hot tea
25 brazen questions for...
Damien de Breuck, Business Unit Manager at Hagor

Damien de Breuck is our Business Unit Manager at Belgian wholesaler Hagor. Not everyone knows about our operations in Belgium, and that’s why we’re giving you an inside look into the professional life of Damien, our Nature manager.
What is the weirdest item in your product range?
Snail syrup.

What makes it so weird?
The name. When I first discovered the product in our range, I wondered whether it really had anything to do with snails. And yet, this syrup really does contain extracts from vineyard snails. The product has a healing effect on coughing… so there you go.

Which item would you like to add to your product range that you don’t already have?
There are many products of which I would like to introduce an organic version. An example is organic yeast, to name one.

What item from your product range would you rather not eat/drink?
Peanut butter. That may not be the answer our neighbours to the north want to hear, but I really don’t like the taste. I have had the opportunity to try lots of delicious products.

Why would you rather not eat it?
It has a very strong, quite unpleasant taste. I can’t really describe it; it’s been a long time since I last had it.

Where would you like to work if you didn’t work for Kroon/Hagor?
Probably for an NGO or at another organic company.

What position would you most like to have if you weren’t a business unit manager/country manager?
I would certainly be interested in a position as project manager. I would enjoy setting things up and guiding them to a good ending. This may not always be easy, but it’ll certainly keep you busy and often has lots of variety due to its multidisciplinary nature.

Which kind of service could you think of that does not exist yet?
I don’t really have an answer for that off the top of my head. But I can imagine that new technologies could play a part in it.

What is your least favourite thing about your current position?
I don’t like administrative work. It needs to be done, but it really is far from exciting.

Which item in your product range will you get out of bed for?
That might be going a bit far... But there are lots of products I really love. I really love Fior di Frutta mandarin jam. Then I would also enjoy a glass of Pizzolato Prosecco. Truly refreshing and pleasant. The LunaeTerra standard tree apple juice is top quality. And I could go on.

What is your favourite professional magazine?
I can always learn a lot from retail magazines. They provide insights on the upcoming retail evolutions and product trends. The French magazine Linéaires is a good example. Linéaires also distributes a version about the organic market in France.

Which trade fair inspires you the most?
In our industry, I find Biofach the most inspiring. It’s more because of the contacts than the products.

What would you like to see change right away in our branch?
I would like to have all orders come in digitally, allowing them to be processed more quickly and more accurately. At the moment, we spend loads of time simply entering orders, which can result in mistakes due to lack of time.

Which Facebook or Twitter accounts do you like to follow?
To be honest: very few. We are bombarded by so much information and so many messages as it is that I spend little time doing targeted searches through social media. When I do need specific information, I surf the web until I find something of value.

How do you keep up to date on the newest developments in business/management?
Through the internet, on websites like De Tijd. I also find that the business magazine CXO is worth looking through. Furthermore, I find that conversations with people from this industry or other industries are a source of inspiration for your activities.

Which development in the previous question do you agree most with?
The evolution of online shops and their implications for the retail trade. The ways in which consumers are kept up to date through communication technology on promotional offers, innovative products and services. There’s no stopping it.

And how have you put that development into practice?
We are still far removed from all of that. But the fact is that we have to be alert and mustn’t ignore this development.

Where will Hagor be in five years?
Unfortunately, I don’t have a crystal ball, but I do think that the development of various technologies will influence Hagor’s development.

How do you want your customers to see Kroon/Hagor?
I want our customers to see Hagor as a reliable, high-quality, customer-friendly and service-oriented partner. A partner who can also provide added value in service and product range. We are still far from perfect and are aware that there are still steps we can and should take.

What other player in the organic foods industry do you respect a lot?
Not one company in particular. I do, however, respect entrepreneurs who had the courage to start their company and turn their vision into a success story. I take my hat off to them.

What other player in the non-organic foods industry do you respect a lot?
I respect starters who put products or services on the market using good, innovative ideas and then turn out to become very successful. Facebook is a good example. Companies who take societal aspects into account also deserve a pat on the back.

Do you think there’s a clear difference in your product range sales between French-speaking and Dutch-speaking consumers?
First and foremost, languages on packaging play a role. French-language brands do better in proportion in the south of the country than they do in Flanders. Taste makes a big difference as well.

What about working for a Dutch company took the most getting used to?
I had to get used to the culinary side of things. Meat croquettes and milk for lunch was quite the odd combination. But in the meantime, I’ve very much come to enjoy the delicious organic salads at Natudis. Besides that, I suppose the Dutch are a bit more direct. That took a little adjustment as well. But it has been very pleasant working with them.

What kinds of products do you market in Belgium that don’t do so well in the Netherlands?
Let me think for a moment... I can’t think of anything.

Is there a reason for it?
I think it’s because the Dutch and Belgian markets are very similar.

Founded: 29/06/1963
In the group: 30/06/2000
Location: Wijgmaal, Belgium
• Approx. 6000 products, all organic
• Grocery products
walk over your twinkling thoughts
dance among whirling splashes of colour
look through the mirrors of your dreams
grow and feel the light smiling at you
With our product we stumbled across a segfault in mod_dav_fs / mod_dav, as follows.
When a PROPPATCH attempts to remove a non-existent dead property on a resource for which there is no dead property in the same namespace httpd segfaults (e.g., attempt to remove a property 'foo' in namespace '' on a new resource which has no other dead properties).
Furthermore, if the resource had no dead properties, after this segfault httpd leaves the dead property DBM in a state which causes an httpd segfault on every PROPFIND (the .pag and .dir files are both zero length).
From our analysis the problem boils down to three issues. A patch for each follows.
Created attachment 28228 [details]
First patch to fix PROPPATCH segfault
This patch fixes the described segfault in the PROPPATCH.
In the described case a rollback cannot be created when doing the dav_method_proppatch(), because dav_propdb_get_rollback() fails without returning a rollback object. Thus the call to dav_propdb_apply_rollback() segfaults because rollback is NULL. This patch just protects against that case, considering it a success.
Created attachment 28229 [details]
Do not fail PROPPATCH when prop namespace not known
With the patch in attachment 28228 [details].
Created attachment 28230 [details]
Do not segfault on PROPFIND with a zero length DBM
As described above, when httpd segfaults during the PROPPATCH it leaves a zero length DBM if no other dead properties existed for the resource. Doing a PROPFIND on the resource segfaults httpd.
The cause of the segfault is that dav_get_allprops() does not check the return value of the first_name() nor next_name() DB hooks for errors. When the DBM is of zero length (both the .dir and .pag files are zero length) first_name() returns an error and leaves its 'name' argument uninitialized. But then 'name.ns' is accessed just after the first_name() call, possibly causing a segfault or other errors as 'name' is stack allocated.
The attached patch changes this so that the return value of first_name() and next_name() is checked and the while loop on the properties be stopped in case of error.
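The loop-termination pattern described above can be sketched as a standalone C program. Note these are illustrative stand-ins, not mod_dav's real hooks: the hypothetical first_name()/next_name() below merely mimic the described behavior of returning an error code and leaving the output name untouched on failure.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the DBM name-iteration hooks: like the
   hooks described above, they return nonzero on error and leave *out
   untouched in that case. */
static int first_name(const char **names, size_t n, size_t *pos, const char **out) {
    if (n == 0) return 1;          /* empty DB: error, *out stays uninitialized */
    *pos = 0;
    *out = names[0];
    return 0;
}

static int next_name(const char **names, size_t n, size_t *pos, const char **out) {
    if (*pos + 1 >= n) return 1;   /* no more entries */
    *pos += 1;
    *out = names[*pos];
    return 0;
}

/* The safe enumeration pattern: check every hook's return value and
   stop the loop on error, so an uninitialized 'name' is never read. */
int count_names(const char **names, size_t n) {
    size_t pos;
    const char *name;
    int count = 0;
    int err = first_name(names, n, &pos, &name);
    while (!err) {
        count++;
        err = next_name(names, n, &pos, &name);
    }
    return count;
}
```

With the unchecked version, the n == 0 case would read the uninitialized 'name' off the stack, which is exactly the zero-length-DBM crash described above.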
As it seems that dav_get_allprops() cannot return an error I could not see another way to handle this situation and this is how errors on the output_value() hook call are treated within dav_get_allprops() anyhow.
Forgot to mention, but the tests have been done against httpd 2.2.21. The attached patches are against this version too.
I can confirm this crash still occurs in Apache 2.4.4: the PROPFIND causes a segfault and leaves the zero-length DAV prop db behind. (However, the zero-length prop db no longer causes later operations to crash the server, so that's progress I guess.)
Program received signal EXC_BAD_ACCESS, Could not access memory.
Reason: KERN_INVALID_ADDRESS at address: 0x0000000000000010
0x0000000100602c53 in dav_propdb_apply_rollback ()
(gdb) bt
#0 0x0000000100602c53 in dav_propdb_apply_rollback ()
#1 0x00000001000c8d2d in dav_prop_rollback ()
#2 0x00000001000c1b0e in dav_process_ctx_list ()
#3 0x00000001000c2074 in dav_method_proppatch ()
#4 0x00000001000c7063 in dav_handler ()
#5 0x00000001000015ef in ap_run_handler ()
#6 0x0000000100001eaf in ap_invoke_handler ()
#7 0x000000010005b7de in ap_process_async_request ()
#8 0x000000010005b8b0 in ap_process_request ()
#9 0x00000001000573fb in ap_process_http_sync_connection ()
#10 0x00000001000574f6 in ap_process_http_connection ()
#11 0x0000000100019906 in ap_run_process_connection ()
#12 0x0000000100019dd7 in ap_process_connection ()
#13 0x00000001000e24d8 in child_main ()
#14 0x00000001000e25e4 in make_child ()
#15 0x00000001000e2c5d in prefork_run ()
#16 0x000000010001c47d in ap_run_mpm ()
#17 0x000000010000d924 in main ()
The crash also still occurs in trunk/2.5 r1470659, no apparent change from 2.4.4
First patch applied to trunk in r1476642.
Second patch applied to trunk in r1476644.
Third patch applied to trunk in r1476645.
Can you test and verify this works for you?
Backport proposed for v2.4.
Backported to v2.4.5.
Proposed for backport to v2.2.
Confirming I can no longer reproduce this in the 2.4.x series. | https://bz.apache.org/bugzilla/show_bug.cgi?format=multiple&id=52559 | CC-MAIN-2020-29 | refinedweb | 609 | 72.26 |
We recently worked with a customer who ran into an interesting situation. This problem deals with SQL Server 2005 Service Pack 3 setup.
Normally, when you launch the SQL 2005 SP3 setup and you reach the screen which shows the components for which you can apply the service pack, you will get a list of all the product components. For a server with one default instance of SQL Server Database Services installed, the list will appear as shown below.
In this customer’s scenario, there were 2 servers which did not list all the components. Their setup screen looked like the following:
Notice that the 3 components highlighted in the previous screen are missing in this screen. Because of this situation, they cannot apply SQL 2005 SP3 to these 3 components on these servers. The components [Database Services, Integration Services and the client components] were working properly; it was only when setup attempted to enumerate the installed components that it failed to get the complete list.
We started looking at their setup logs and did not find any errors or warning that would indicate any problem. Next we started looking at how the setup enumerates the installed components and qualify them for the upgrade to this service pack. We verified that the instance is listed properly in the registry key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL. If this registry key does not contain the correct information, then you can encounter this problem. That was not the issue for them.
When parsing through the event logs we noticed WMI-related entries appearing at regular intervals. Even though these event log entries did not correlate directly to the setup attempts or timing, they gave a vital clue: something was wrong with the WMI infrastructure on this machine. We know that setup uses WMI heavily to perform the discovery and enumeration of the installed instances and components, so we turned our attention to WMI logging.
Here is the relevant snippet from wbemcore.log that showed the results of the WMI calls from the SQL Server setup program:
(Mon May 29 14:31:01 2010.759152890) : GetUserDefaultLCID failed, restorting to system verion(Mon May 29 14:31:01 2010.759153015) : CALL CWbemNamespace::OpenNamespace
BSTR NsPath = cimv2
long lFlags = 0
IWbemContext* pCtx = 0x0
IWbemServices **pNewContext = 0x270F058
(Mon May 29 14:31:01 2010.759153062) : STARTING a main queue thread 2592 for a total of 1
(Mon May 29 14:31:02 2010.759153484) : GetUserDefaultLCID failed, restorting to system verion(Mon May 29 14:31:02 2010.759153500) : CALL CWbemNamespace::OpenNamespace
BSTR NsPath = default
long lFlags = 0
IWbemContext* pCtx = 0x0
IWbemServices **pNewContext = 0x279F058
(Mon May 29 14:31:02 2010.759153531) : Error 80041002 occured executing request for root\default
This snippet informs us that the connection to the root\cimv2 namespace was successful, but that we encountered a failure when connecting to the root\default namespace.
Next we used the WBEMTEST.EXE tool [located @ C:\WINDOWS\SYSTEM32\WBEM\] to isolate this to be a clear WMI problem. When we attempted to connect to the root\default namespace, we got the same error we observed from the wmi logs.
Basically, this error code corresponds to WBEM_E_NOT_FOUND ("Object cannot be found"). Why does SQL setup use the WMI namespace root\default? Setup uses the StdRegProv class in WMI. The StdRegProv class provides the EnumValues method to query values from the registry, and this class lives in the root\default namespace.
So the next step was to rebuild the corrupted WMI namespace. We worked with our Windows support team and used the following commands to rebuild the namespace:
In c:\windows\system32\wbem
"for /f %s in ('dir /b /s *.dll') do regsvr32 /s %s"
Then from the root of the drive, run
"for /f %s in ('dir /s /b *.mof *.mfl') do mofcomp %s"
After this the customer needed to perform a reboot, and then we were able to connect to the WMI namespace; setup was able to enumerate all components and apply the service pack. If you are doing this procedure on your own, it would be a good idea to perform a system backup to make sure you can restore system components in case of a problem.
During this investigation we also found out that in recent operating systems, WMI logging is done in a much different way. For more details, refer to Tracing WMI Activity.
Thanks
Suresh B. Kandoth
Senior Escalation Engineer – SQL Server | https://blogs.msdn.microsoft.com/psssql/2010/06/12/where-did-the-sql-server-instance-disappear-the-clue-may-be-in-the-wmi-logs/ | CC-MAIN-2019-09 | refinedweb | 736 | 63.59 |
February 15th, 2014
Generate A Mandelbrot Set In H2O
By: H2O.ai
Roses are red,
Violets are ~ Blue,
H2O is sweet,
And fractals are too!
In this post, we’ll dive into how to make some nice mathematical Valentine’s day grams using H2O!
(NB: No paper or scissors required! It does require an input dataset corresponding to the image dimensions you desire.
We generated a 1500×1500 pixel grid of (x,y)-coordinates with some Python code.)
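The input frame is just that grid flattened into two columns, one (x, y) row per pixel. The post's grid was produced with a small Python script that isn't shown; as a hypothetical equivalent, here is the same idea sketched in Java, building the CSV text for a width-by-height image:

```java
// Hypothetical grid generator -- not part of the H2O code in this post.
// Builds a CSV with one (x, y) row per pixel, row-major order.
class GridGen {
    static String grid(int width, int height) {
        StringBuilder sb = new StringBuilder("x,y\n");
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                sb.append(x).append(',').append(y).append('\n');
        return sb.toString();
    }
}
```

Writing grid(1500, 1500) to a file gives the 2.25-million-row dataset the setup above assumes.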
While generating a Mandelbrot set and its image is not a big data task, H2O does enable you to produce a fairly high quality Mandelbrot image in just a few seconds.
As you may know, the Mandelbrot set (a subset of the complex numbers) is the set of points c for which the following sequence of complex numbers remains bounded. The sequence is given by this recurrence relation
$$z_n = z_{n-1}^P + c$$
where c is a "candidate" complex number. (Typically you'll see $$P = 2$$; that's what we'll do too.)

We cap the length of the sequence at the number of iterations we want, and measure convergence by looking at the modulus of $$z_n$$ at each iteration. If the modulus exceeds some threshold, we say the sequence diverges and "c" is then not in the Mandelbrot set; otherwise it converges!

The point "c" is then colored according to the number of iterations accomplished. To produce really stellar images, it's better to have fine-grained color gradients in your palette and then alpha composite them. The palette we used to generate the Valentine's day grams in this post had ~200 colors. Although there is still some "banding", the gradients look pretty smooth on my screen (Retina 15" MBP).
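To make the escape-time test concrete, here is a minimal standalone sketch of that loop in plain Java, independent of the H2O code that follows (a return value of -1 marks a point that never escaped within the iteration cap):

```java
// Minimal escape-time sketch for P = 2 and threshold T.
// Not H2O code -- just the bare recurrence z_n = z_{n-1}^2 + c.
class EscapeTime {
    /** Returns the iteration index at which |z_n| first reaches the
     *  threshold, or -1 if the sequence never escapes within maxIters. */
    static int iterationsToEscape(double ca, double cb,
                                  double threshold, int maxIters) {
        double a = 0, b = 0;               // z_0 = 0
        for (int i = 0; i < maxIters; i++) {
            if (Math.sqrt(a * a + b * b) >= threshold) return i;
            double na = a * a - b * b + ca; // real part of z^2 + c
            double nb = 2 * a * b + cb;     // imaginary part of z^2 + c
            a = na;
            b = nb;
        }
        return -1;                          // treated as "in the set"
    }
}
```

For example, c = 0 and c = -1 never escape (they are in the set), while c = 1 escapes almost immediately.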
OK, let’s get to work!
First Glance
At first glance, there are a few important inputs:
Image dimensions - Height and Width of the image in pixels. (We generated data by hand.)
Max Iterations - Iterations of the recurrence relation.
Convergence Threshold - What constitutes membership in the Mandelbrot set.
Here are a few more that we may want later when we’re exploring our final images:
Zoom - Zoom in on the Mandelbrot set at the (offsetX, offsetY) position.
offsetX - Zoom focus x-coordinate.
offsetY - Zoom focus y-coordinate.
The goal is to add a Mandelbrot UI for setting the input parameters which, upon submission, produces the Mandelbrot image.
First let’s start off with a skeleton implementation and fill it in a little bit at a time:
/**
 * The main class that holds all of the Mandelbrot-related functionality.
 * Any UI page must subclass Request2 and implement Response.serve().
 * This is demonstrated below.
 */
public class Mandelbrot extends Request2 {

  /**
   * Declare any public API arguments here.
   * Here's where our inputs will go: Height, Width, Threshold,
   * Max Iterations, Zoom, OffsetX, OffsetY
   */

  @Override protected Response serve() {
    // This is where we'll make the call to generate the Mandelbrot
    // set & produce the image.
  }

  /**
   * A class that represents our fractal.
   * This is the centerpiece of our implementation.
   * To do distributed & parallel work, we will have this
   * class extend MRTask2 and override the map() method.
   * More on this below....
   */
  protected static class FractalSet extends MRTask2<FractalSet> {

    private double[] z_squared(double a, double b) {
      // return the square of the complex number z = a + ib
    }

    private double modulus(double a, double b) {
      // return the modulus of a complex number z = a + ib
    }

    public int iterate(double a, double b) {
      // take the candidate complex number c = a + ib
      // and iterate with the recurrence relation zn = z_{n-1}^2 + c
      // return the color
    }

    private int palette(int iters) {
      // choose color from palette based on the number of iterations
      // Color returned is number of iterations modulo palette size
    }

    private int linerp(int color1, int color2, double alpha) {
      // linearly interpolate (AKA alpha composite) two colors
    }

    @Override public void map(Chunk[] cs, NewChunk[] ncs) {
      /**
       * Break the input into chunks and compute the
       * iteration number per row (i.e. per pixel).
       * Produce a new frame with x, y, and decimal color.
       */
    }
  }

  public class MandelbrotView extends JFrame {
    /**
     * Here we set up the details of the JFrame.
     * We use Java's BufferedImage to make the final image.
     * This part is single threaded...
     */
  }
}
Starting with the FractalSet class, we can knock down the z_squared and modulus methods pretty easily:
The square of a complex number can be computed using the FOIL method, and remembering that the imaginary unit follows the same squaring rules as real numbers:
$$z^2 = (a + bi) * (a + bi) = (a^2 - b^2) + i*2*a*b$$
The modulus is a fancy name for length, or magnitude, or distance with respect to the origin. So:
$$|z| = \sqrt(a*a + b*b)$$
Our methods are written as follows:
private double[] z_squared(double a, double b) {
  double newa = a*a - b*b;
  double newb = 2*a*b;
  return new double[]{newa, newb};
}

private double modulus(double a, double b) {
  return Math.sqrt(a*a + b*b);
}
The palette is also pretty simple to implement. We chose colors suitable for the day, and there 193 of them. Again we chose colors according to the number of iterations taken modulo the number of colors in our palette:
private int palette(int it) {
  int[] color_palette = new int[]{
    7473449,7539499,7605805,7737391,7803697,7935284,8001590,8133176,8199482,8331325,8397375,8529217,
    8595267,8727109,8793160,8925002,8991308,9057358,9189201,9255251,9387093,9453143,9584986,9651036,9782878,9849184,9980770,
    10047077,10178663,10244969,10376555,10442862,10574704,10640754,10707060,10838647,10904953,11036539,11102845,11234431,11300738,
    11432580,11498630,11630472,11696523,11828365,11894415,12026257,12092564,12158614,12290456,12356506,12488348,12554399,12686241,12752291,
    12884133,12950440,13082026,13148332,13279918,13346224,13477811,13544117,13675959,13742009,13808316,13939902,14006208,14137794,14204101,14335687,
    14401993,14533835,14599885,14731728,14797778,14929620,14995670,15127513,15193819,15259869,15391711,15457762,15589604,15655654,15787496,15853546,
    15985389,16051695,16183281,16249587,16381174,16447480,16579066,16645372,10799852,10668009,10536423,10404836,10338786,10207199,10075613,10009562,
    9877976,9746389,9614547,9548496,9416910,9285324,9219273,9087687,8956100,8824514,8758463,8626877,8495034,8428984,8297397,8165811,8099761,7968174,
    7836588,7705001,7638951,7507108,7375522,7309471,7177885,7046298,6914712,6848661,6717075,6585489,6519438,6387596,6256009,6124423,6058372,5926786,
    5795199,5729149,5597562,5465976,5399926,5268083,5136497,5004910,4938860,4807273,4675687,4609636,4478050,4346463,4214621,4148570,4016984,3885398,
    3819347,3687761,3556174,3424588,3358537,3226951,3095108,3029058,2897471,2765885,2699835,2568248,2436662,2305075,2239025,2107182,1975596,1909545,
    1777959,1646372,1514786,1448735,1317149,1185563,1119512,987670,856083,724497,658446,526860,395273,329223,197636,66050};
  return color_palette[it % color_palette.length];
}
In order to composite two colors together, we'll use a linear interpolation function that takes two colors, strips out their rgb components, blends the components and then combines the r, g, b colors into a single int value.
private int linerp(int color1, int color2, double alpha) {
  int r1 = (color1 >> 16) & 0xFF;
  int g1 = (color1 >> 8) & 0xFF;
  int b1 = color1 & 0xFF;
  int r2 = (color2 >> 16) & 0xFF;
  int g2 = (color2 >> 8) & 0xFF;
  int b2 = color2 & 0xFF;
  int rgb = (int)Math.round(r1*alpha + r2*(1 - alpha));
  rgb = (rgb << 8) + (int)Math.round(g1*alpha + g2*(1 - alpha));
  rgb = (rgb << 8) + (int)Math.round(b1*alpha + b2*(1 - alpha));
  return rgb;
}
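As a quick sanity check on the bit twiddling: blending pure red with pure blue at alpha 0.5 should land on 0x800080, a 50/50 purple. Here is the same interpolation logic lifted into its own class so it can be exercised directly:

```java
// Standalone copy of the blending logic for experimentation; the method
// body mirrors the linerp() shown in the walkthrough.
class LinerpCheck {
    static int linerp(int color1, int color2, double alpha) {
        int r1 = (color1 >> 16) & 0xFF, g1 = (color1 >> 8) & 0xFF, b1 = color1 & 0xFF;
        int r2 = (color2 >> 16) & 0xFF, g2 = (color2 >> 8) & 0xFF, b2 = color2 & 0xFF;
        int rgb = (int) Math.round(r1 * alpha + r2 * (1 - alpha));
        rgb = (rgb << 8) + (int) Math.round(g1 * alpha + g2 * (1 - alpha));
        rgb = (rgb << 8) + (int) Math.round(b1 * alpha + b2 * (1 - alpha));
        return rgb;
    }
}
```

Note that alpha = 1.0 returns the first color and alpha = 0.0 returns the second, so in iterate() below the fractional iteration count slides smoothly between neighboring palette entries.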
Let's tackle the iterate() function now.
The idea here is to keep generating new terms of the sequence while the modulus does not exceed the threshold and the iteration cap has not been reached. Additionally, though, we want to produce a color from this process, and attempt to smooth the color transitions caused by different numbers of iterations. Therefore, if the number of iterations reached is not maximal, then we can color using the following piece of math:

$$\nu = \frac{\log|z_n| / \log(T)}{\log(T)}$$

where $$T$$ is the threshold. By subtracting this from the iteration count and then flooring it, we can retrieve a color from our palette. Retrieve the neighboring color from the palette as well and then composite the two. Here's what that code looks like:
public int iterate(double a, double b) {
  int n = 0;
  double it;
  double ca = (a - _density/2) / _zoom;
  double cb = (b - _density/2) / _zoom;
  a = b = 0;
  int iter = _maxIters;
  while( (modulus(a, b) < _threshold) && (iter > 0) ) {
    double[] z2 = z_squared(a, b);
    a = z2[0] + ca;
    b = z2[1] + cb;
    iter--;
  }
  if( iter > 0 ) {
    double zn = modulus(a, b);
    double nu = ( Math.log(zn) / Math.log(_threshold) ) / Math.log(_threshold);
    it = iter + 1 - nu;
  } else it = iter;
  int _it = (int)Math.floor(it);
  int color1 = palette(_it);
  int color2 = palette(_it + 1);
  return linerp(color1, color2, it % 1);
}
The last piece of this is the map. It takes a chunk array, which is a chunk of rows, and produces some new chunks. The coder doesn’t need to worry about how the chunks are managed, H2O takes care of all that.
The chunk array is expected to have length 2, since we are working with 2 columns of input (x-dim, y-dim). The output dimension is 3: x, y, color. Again, color is the output from the iterate() function above. Here’s what the map looks like:
@Override public void map(Chunk[] cs, NewChunk[] ncs) {
  assert cs.length == 2;
  for( int row = 0; row < cs[0]._len; row++ ) {
    double x = cs[0].at0(row);
    double y = cs[1].at0(row);
    int color = iterate(x, y);
    ncs[0].addNum(x);
    ncs[1].addNum(y);
    ncs[2].addNum(color);
  }
}
@Override Response serve()
Let’s work on the serve() call as our last part (the JFrame part is standard Java, I’ll just hand that out at the end. No secret sauce there.)
There’s a standard “header” to each Request2 subclass that contains various UI knobs and declarations.
Here’s what a starting header might look like (OffsetX and OffsetY are left to the reader!):
public class Mandelbrot extends Request2 { static final int API_WEAVER = 1; // This file has auto-gen'd doc & json fields static public DocGen.FieldDoc[] DOC_FIELDS; // Initialized from Auto-Gen code. @API(help = "Destination key", required = false, filter = Default.class) protected final Key destination_key = Key.make("Mandelbrot"); @API(help = "Data frame", required = true, filter = Default.class) public Frame source; @API(help = "Threshold to stop iterating.", required= true, filter = Default.class) public int threshold = 2; @API(help = "Max Iterations", required = true, lmin = 1, filter = Default.class) public int max_iters = 10000; @API(help = "Zoom in on Mandelbrot drawing (default is no zoom)", required = false, lmin = 1, filter = Default.class) public int zoom = 1; ......
The serve call will instantiate a new FractalSet object (constructor provided in full code) and that will invoke the computation with the function doAll(int, Vec[]), which takes an integer (the number of output fields) and a Vec array. A Vec is an internal H2O data format representing a single column vector. The output frame is then retrieved and passed on to a MandelbrotView object (not shown yet) that creates a new BufferedImage object, loads in the pixels and colors, and finally paints it. Here’s the serve() call code:
@Override protected Response serve() { try { FractalSet F = new FractalSet(max_iters, zoom, threshold, source.anyVec().length()); Vec[] vecs = source.vecs(); F.doAll(3, vecs); Frame fr = F.outputFrame(destination_key, new String[]{"x", "y", "decimal_color"}, null); fr.delete_and_lock(destination_key).unlock(destination_key); UKV.put(destination_key, fr); new MandelbrotView(fr).setVisible(true); } catch( Throwable t ) { return Response.error(t); } return Inspect2.redirect(this, destination_key.toString()); //MandelbrotView.redirect(this, destination_key); }
And last but not least, the MandlebrotView class:
public class MandelbrotView extends JFrame { private final int width = 1500; private final int height = 1500; private BufferedImage I; private Frame _fr; public MandelbrotView(Frame fr) { super("Mandelbrot Set"); _fr = fr; setBounds(100, 100, width, height); setResizable(false); setDefaultCloseOperation(DISPOSE_ON_CLOSE); I = new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_RGB); Vec[] vecs = fr.vecs(); for (int row = 0; row < vecs[0].length(); row++) { I.setRGB((int)vecs[0].at8(row), (int)vecs[1].at8(row), (int)vecs[2].at8(row)); } } @Override public void paint(Graphics g) { Graphics2D g2 = (Graphics2D)g; RenderingHints rh = new RenderingHints( RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON); RenderingHints rh2 = new RenderingHints( RenderingHints.KEY_ALPHA_INTERPOLATION, RenderingHints.VALUE_ALPHA_INTERPOLATION_QUALITY); RenderingHints rh3 = new RenderingHints( RenderingHints.KEY_COLOR_RENDERING, RenderingHints.VALUE_COLOR_RENDER_QUALITY); RenderingHints rh4 = new RenderingHints( RenderingHints.KEY_DITHERING, RenderingHints.VALUE_DITHER_ENABLE); g2.setRenderingHints(rh2); g2.setRenderingHints(rh); g2.setRenderingHints(rh3); g2.setRenderingHints(rh4); g2.drawImage(I, 0, 0, this); } }
Here’s a few more images generated by this code:
Happy Valentine’s Day!
Join the AI Revolution
Subscribe, read the documentation, download or contact us. | https://www.h2o.ai/blog/roses-red/ | CC-MAIN-2018-51 | refinedweb | 2,045 | 54.83 |
-03-29 00:00Z, eehab hamzeh wrote:
>
> gcc.exe: hamzeh.dll no such file or directory
>
> this comes after writing the following command
> gcc -shared hamzeh.dlltt1.o "c:/programme/postgresql/8.3/lib" -lpostgres
Perhaps you wanted a command like this instead?
gcc -shared -o hamzeh.dll tt1.o -L "c:/programme/postgresql/8.3/lib" -lpostgres
'hamzeh.dlltt1.o' looks like a space is missing between 'l' and 't'.
'-o' names the output file.
'-L' names a library search path.
hi,
i have downloaded mingw v2.0.0 and msys v1.0.8, i
downloaded gcj too to use with mingw and now i got the
gcc source code and i was trying to build it using
msys and mingw ,
i typed
./configure
then make
and it was compiling and it finally gave me an error
and stoped. im doing this on win ME on p2.
now what is going on??? shouldnt gcc compile ?
please tell me what to do.. thanks
__________________________________________________
Do you Yahoo!?
Yahoo! Mail Plus - Powerful. Affordable. Sign up now.
Dear sir,
I am trying to build the dll file to use in postgres. i keep receiving error. can you please help me or provide me with direction of where i can find solution to the problem. iam new to míngw-msys
i am using mingw.
i was able to compile the *.c file to *.o file.
the problem start appear when i try to create dll library i recieve the following error.
gcc.exe: hamzeh.dll no such file or directory
this comes after writing the following command
gcc -shared hamzeh.dlltt1.o "c:/programme/postgresql/8.3/lib" -lpostgres
below is my function
#include "postgres.h"
#include <string.h>
#include "fmgr.h"
/* by value */
PG_FUNCTION_INFO_V1(add_one);
Datum
add_one(PG_FUNCTION_ARGS)
{
int32 arg = PG_GETARG_INT32(0);
PG_RETURN_INT32(arg + 1);
}
hopefully you can help me...
kind regards
ihab
What can you do with the new Windows Live? Find out
_________________________________________________________________
Show them the way! Add maps and directions to your party invites.
alex bohemia wrote:
> hi, i have downloaded mingw v2.0.0 and msys v1.0.8, i downloaded gcj
> too to use with mingw and now i got the gcc source code and i was
> trying to build it using msys and mingw ,
>
A test release of Gjc is available for download from the
web-site.
If you *REALY* have to build it for yourself look back thru the mailing
list to see if anyone else has asked questions about it.
Good luck
Paul | https://sourceforge.net/p/mingw/mailman/message/21957364/ | CC-MAIN-2018-17 | refinedweb | 419 | 78.96 |
Immutability in React and Redux: The Complete Guide
This article has been republished from daveceddia.com. Maybe you or one of your teammates regularly writes Redux reducers that mutate state, and you have to constantly correct them (the reducers, or your teammates 😄).
It’s tricky. It can be really subtle, especially if you’re not sure what to look for. And honestly, if you’re not sure why it matters, it’s hard to care.
This guide will explain what immutability is and how to write immutable code in your own apps. Here’s what we’ll cover:
- What is Immutability?
- What’s a “Side Effect”?
- Why Immutability Is Important In React
- How Referential Equality Works in JS
- Does `const` Enforce Immutability?
- Recipes: How To Update State in Redux, including:
- Update an Object
- Update an Object in an Object
- Updating an Object by Key
- Prepend an item to an array
- Add an item to an array
- Insert an item in the middle of an array
- Update an item in an array by index
- Update an item in an array with `map`
- Update an object in an array
- Remove an item from an array with `filter`
- Easy State Updates with Immer (saved the best for last)
What Is Immutability?
While JavaScript isn’t a purely functional language, it can sorta pretend to be sometimes. Certain Array operations in JS are immutable (meaning that they return a new array, instead of modifying the original). String operations are always immutable (they create a new string with the changes). And you can write your own functions that are immutable, too. You just need to be aware of a few rules.
A Code Example with Mutation
Let’s look at an example to see how mutability works. We’ll start with this
person object here:
```js
let person = {
  firstName: "Bob",
  lastName: "Loblaw",
  address: {
    street: "123 Fake St",
    city: "Emberton",
    state: "NJ"
  }
}
```
Then let’s say we write a function that gives a person special powers:
function giveAwesomePowers(person) { person.specialPower = "invisibility"; return person; }
Ok so everyone gets the same power. Whatever, invisibility is great.
Let’s give some special powers to Mr. Loblaw now.
```js
// Initially, Bob has no powers :(
console.log(person);

// Then we call our function...
let samePerson = giveAwesomePowers(person);

// Now Bob has powers!
console.log(person);
console.log(samePerson);

// He's the same person in every other respect, though.
console.log('Are they the same?', person === samePerson); // true
```
This function
giveAwesomePowers mutates the
person passed into it. Run this code, and you’ll see that the first time we print out
person, Bob has no
specialPower property. But then, the second time, he suddenly has the
specialPower of invisibility.
The thing is, since this function modified the
person that was passed in, we don’t know what the old one looked like anymore. They are forever changed.
The object returned from
giveAwesomePowers is the same object as the one that was passed in, but its insides have been messed with. Its properties have changed. It has been mutated.
I want to say this again because it’s important: the internals of the object have changed, but the object reference has not. It’s the same object on the outside (which is why an equality check like
person === samePerson will be
true).
If we want the
giveAwesomePowers function not to modify the person, we’ll have to make a few changes. First, though, let’s look at what makes a function pure, because it’s very closely related to immutability.
Rules of Immutability
In order to be pure, a function must follow these rules:
- A pure function must always return the same value when given the same inputs.
- A pure function must not have any side effects.
What’s a “Side Effect”?
“Side effects” is a broad term, but basically, it means modifying things outside the scope of that immediate function. Some examples of side effects…
- Mutating/modifying input parameters, like
giveAwesomePowersdoes
- Modifying any other state outside the function, like global variables, or
document.(anything)or
window.(anything)
- Making API calls
console.log()
Math.random()
The API calls one might surprise you. After all, making a call to something like
fetch('/users') might not appear to change anything in your UI at all.
But ask yourself this: If you called
fetch('/users'), could it change anything anywhere? Even outside your UI?
Yep. It’ll create an entry in the browser’s Network log. It’ll create (and maybe later shut down) a network connection to the server. And once that call hits the server, all bets are off. The server could do whatever it wants, including calling out to other services and making more mutations. At the very least, it’ll probably put an entry in a log file somewhere (which is a mutation).
So, like I said: “side effect” is a pretty broad term. Here’s a function that has no side effects:
function add(a, b) { return a + b; }
You can call this once, you can call it a million times, and nothing else in the world will change. I mean, technically, things in the world might change while the function runs. Time will pass… empires may fall… but calling this function will not directly cause any of those things. That satisfies Rule 2 – no side effects.
What’s more, every time you call this function like
add(1, 2) you will get the same answer. No matter how many times you call
add(1, 2) you will get the same answer. That satisfies Rule 1 – same inputs == same answers.
JS Array Methods That Mutate!

Certain Array methods mutate the array they're used on: `push`, `pop`, `shift`, `unshift`, `sort`, `reverse`, and `splice`. The one that surprises people is `sort` – it will sort the array in place. The easiest thing to do, if you need to use one of these operations, is to make a copy of the array and then operate on the copy.
```js
let a = [1, 2, 3];
let copy1 = [...a];
let copy2 = a.slice();
let copy3 = a.concat();
```
So, if you wanted to do an immutable sort on an array, you could do it like this:
let sortedArray = [...originalArray].sort(compareFunction);
And one quick aside about `sort` (which has bitten me in the past): the compareFunction needs to return a negative number, zero, or a positive number (commonly -1, 0, or 1) – not a boolean! Keep that in mind next time you're writing a comparator.
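For instance, here's a numeric comparator that follows that rule, combined with the copy-then-sort trick from above:

```javascript
let scores = [30, 4, 100, 22];

// a - b is negative, zero, or positive – exactly what sort() expects.
// Returning a boolean (like a > b) would silently misbehave.
let sorted = [...scores].sort((a, b) => a - b);

console.log(sorted); // [4, 22, 30, 100]
console.log(scores); // [30, 4, 100, 22] – the original is untouched
```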
Pure Functions Can Only Call Other Pure Functions
One potential source of trouble is calling a non-pure function from a pure one.
Purity is transitive, and it's all-or-nothing. You can write a perfectly pure function, but if you end it with a call to some other function that eventually calls `setState` or `dispatch` or causes some other sort of side effect… then all bets are off.
Now, there are some sorts of side effects that are “acceptable.” Logging messages with
console.log is fine. Yeah, it’s technically a side effect, but it’s not going to affect anything.
A Pure Version of `giveAwesomePowers`
Now we can rewrite our function with the Rules in mind.
```js
function giveAwesomePowers(person) {
  let newPerson = Object.assign({}, person, {
    specialPower: 'invisibility'
  });
  return newPerson;
}
```
This is a bit different now. Instead of modifying the person, we’re creating an entirely new person.
If you haven’t seen
Object.assign, what it does is assign properties from one object to another. You can pass it a series of objects, and it will merge them all together, left to right, while overwriting any duplicate properties. (And by “left to right”, I mean that executing
Object.assign(result, a, b, c) will copy
a into
result, then
b, then
c).
It doesn’t do a deep merge though – only the immediate child properties of each argument will be moved over. It also, importantly, does not create copies or clones of the properties. It assigns them as-is, keeping references intact.
So the code above creates an empty object, then assigns all of
person’s properties to that empty object, and then assigns the
specialPower property to that object as well. Another way to write this is with the object spread operator:
```js
function giveAwesomePowers(person) {
  let newPerson = {
    ...person,
    specialPower: 'invisibility'
  };
  return newPerson;
}
```
You can read this as: “Create a new object, then insert the properties from
person, then add another property called
specialPower”. As of this writing, this object spread syntax is officially part of the JavaScript spec in ES2018.
Pure Functions Return Brand New Objects
Now we can re-run our experiment from earlier, using our new pure version of
giveAwesomePowers.
```js
// Initially, Bob has no powers :(
console.log(person);

// Then we call our function...
var newPerson = giveAwesomePowers(person);

// Now Bob's clone has powers!
console.log(person);
console.log(newPerson);

// The newPerson is a clone
console.log('Are they the same?', person === newPerson); // false
```
The big difference is that
person was not modified. Bob is unchanged. The function created a clone of Bob, with all the same properties, plus the ability to go invisible.
This is sort of a weird thing about functional programming. Objects are constantly being created and destroyed. We do not change Bob; we create a clone, modify his clone, and then replace Bob with his clone. A bit grim, really. If you’ve seen that movie The Prestige, it’s kind of like that. (If you haven’t seen it, forget I said anything.)
React Prefers Immutability
In React’s case, it’s important to never mutate state or props. Whether a component is a function or a class doesn’t matter for this rule. If you’re about to write code like
this.state.something = ... or
this.props.something = ..., take a step back and try to come up with a better way.
To modify state, always use
this.setState. If you’re curious you can read more about why not to modify state directly.
As for props, they’re a one-way thing. Props come IN to a component. They’re not a two-way street, at least not via mutable operations like setting a prop to a new value.
If you need to send some data back to the parent, or trigger something in the parent component, you can do that by passing in a function as a prop, and then calling that function from inside the child whenever you need to communicate to the parent. Here’s a quick example of a callback prop that works this way:
```js
function Child(props) {
  // When the button is clicked,
  // it calls the function that Parent passed down.
  return (
    <button onClick={props.onClick}>
      Click Me
    </button>
  );
}

function Parent() {
  function printMessage() {
    console.log('you clicked the button');
  }

  // Parent passes a function to Child as a prop
  // Note: it passes the function name, not the result of
  // calling it. It's printMessage, not printMessage()
  return (
    <Child onClick={printMessage} />
  );
}
```
Immutability Is Important for PureComponents
By default, React components (both the
function type and the
class type, if it extends
React.Component) will re-render whenever their parent re-renders, or whenever you change their state with
setState.
An easy way to optimize a React component for performance is to make it a class, and make it extend
React.PureComponent instead of
React.Component. This way, the component will only re-render if its state or its props have changed. It will no longer mindlessly re-render every single time its parent re-renders; it will ONLY re-render if one of its props has changed since the last render.
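Under the hood, that check is a shallow comparison. Here's a simplified sketch of the idea (not React's actual implementation) – note that it compares each top-level property by reference, which is why mutating a nested object can slip past a PureComponent unnoticed:

```javascript
// Simplified sketch of a shallow equality check, similar in spirit
// to what React.PureComponent does with props and state.
function shallowEqual(objA, objB) {
  if (objA === objB) return true; // same reference: definitely equal

  const keysA = Object.keys(objA);
  const keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;

  // Each top-level value is compared with === only;
  // nested objects are never inspected.
  return keysA.every(key => objA[key] === objB[key]);
}

const props = { user: { name: "Bob" }, count: 1 };

console.log(shallowEqual(props, { ...props }));           // true
console.log(shallowEqual(props, { ...props, count: 2 })); // false
```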
Remember our first example with Bob and the
giveAwesomePowers function? Remember how the object returned by the function was exactly the same, triple-equal,
===, to the
person that was passed in? That’s because both variables referred to the same object. Only the internals had been changed.
How Referential Equality Works in JavaScript
What does “referentially equal” mean? Ok, this’ll be a quick tangent, but it’s important to understand.
JavaScript objects and arrays are stored in memory. (You should be nodding right now)
Let’s say a place in memory is like a box. The variable name “points to” the box, and the box holds the actual value.
In JavaScript, these boxes (memory addresses) are unnamed and unknowable. You can’t figure out the memory address a variable points to. (In some other languages, like C, you can actually inspect the memory address of a variable and see where it lives)
If you reassign a variable, it will point to a new memory location.
If you mutate the internals of the variable, it still points to the same address.
Much like ripping out the insides of a house and putting in new walls, kitchen, living room, swimming pool, and so on – the address of that house remains the same. You don't have to remind your relatives where to send the birthday money, because you still live at the same place.
Here is the key: when you compare two objects or arrays with the
=== operator, JavaScript is actually comparing the addresses they point to – a.k.a. their references. JS does not even peek into the object. It only compares the references. That’s what “referential equality” means.
So if you take an object, and modify it, it will modify the contents of the object, but it will not change its reference.
Another thing is, when you assign one object to another (or pass it in as a function argument, which is effectively doing the same thing), that other object is merely another pointer to the same memory location as the first object. It’s like a voodoo doll. Anything you do to the second object will directly affect the value of the first object too.
Here’s some code to make this a bit more concrete:
```js
// This creates a variable, `crayon`, that points to a box (unnamed),
// which holds the object `{ color: 'red' }`
let crayon = { color: 'red' };

// Changing a property of `crayon` does NOT change the box it points to
crayon.color = 'blue';

// Assigning an object or array to another variable merely points
// that new variable at the old variable's box in memory
let crayon2 = crayon;
console.log(crayon2 === crayon); // true. both point to the same box.

// Now, any further changes to `crayon2` will also affect `crayon`
crayon2.color = 'green';

console.log(crayon.color);  // changed to green!
console.log(crayon2.color); // also green!

// ...because these two variables refer to the same object in memory
console.log(crayon2 === crayon); // still true
```
Why Not Check Equality Deeply?
It might seem more “correct” to check the internals of two objects against each other before declaring them equal. While that’s true, it’s also slower.
How much slower? Well, that depends on the objects being compared. One with 10,000 child and grandchild properties is gonna be slower than one with 2 properties. It’s unpredictable.
A reference equality check is what computer scientists would call “constant time.” Constant time, a.k.a. O(1), means that the operation will always take the same amount of time, regardless of how big the inputs are.
A deep equality check, on the other hand, is more likely to be linear time, a.k.a. O(N), which means the time it takes is proportional to how many keys are in the objects. Linear time is, generally speaking, slower than constant time.
Think of it this way: Pretend that every time JS compares two values like
a === b it takes one full second to run. Now, do you want to do that once, to check the reference? Or do you want to descend into the depths of BOTH objects and compare each and every property? Sounds pretty slow, yeah?
In reality, an equality check is much much much faster than a whole second, but still, the principle of “do the least work possible” applies here. All else being equal, use the most performant option. It’ll save you time down the road trying to figure out why your app is slow. If you’re careful (and maybe a bit lucky), it’ll never get slow in the first place :)
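For contrast, here's what a naive deep-equality check looks like – it has to recurse into every property of both objects, which is where the linear (or worse) cost comes from:

```javascript
// Naive deep equality: visits every key of both objects.
// The work grows with the size of the inputs, unlike a === check.
function deepEqual(a, b) {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' ||
      a === null || b === null) {
    return false;
  }

  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;

  return keysA.every(key => deepEqual(a[key], b[key]));
}

const bob1 = { name: "Bob", address: { city: "Emberton" } };
const bob2 = { name: "Bob", address: { city: "Emberton" } };

console.log(bob1 === bob2);         // false – different references
console.log(deepEqual(bob1, bob2)); // true – same contents, more work
```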
Does `const` Prevent Changes?
The short answer is: no. Neither
let nor
const nor
var will prevent you from changing the internals of an object. All three ways of declaring a variable allow you to mutate its internals.
“But it’s called
const! Isn’t that supposed to be constant?”
Well, sorta.
const will only prevent you from reassigning the reference. It doesn’t stop you from changing the object. Here’s an example:
```js
const order = { type: "coffee" }

// const will allow changing the order type...
order.type = "tea"; // this is fine

// const will prevent reassigning `order`
order = { type: "tea" } // this is an Error
```
Keep that in mind the next time you see a
const.
I like to use
const as a reminder to myself that an object or array shouldn’t be mutated (which is most of the time). If I’m writing code where I know for certain I’ll be mutating an array or object, I’ll declare it with
let. It’s just a convention, though. (and, like most conventions, if you break it every now and then, that’s about as good as having no convention at all).
How To Update State in Redux
Redux requires that its reducers be pure functions. This means you can’t modify the state directly – you have to create a new state based on the old one, just like we did up above with Bob. (And if you’re unsure, read about what a reducer is and where that name comes from)
Writing code to do immutable state updates can be tricky. Below, you will find a few common patterns.
At the end, we’ll look at how to make it easier with a library called Immer – but don’t just skip to the end! If you’re going to be working on existing code bases, it’ll be very useful to understand how to do this stuff “long hand.”
The `...` Spread Operator
These examples make heavy use of the spread operator for arrays and objects. Here’s how it works.
When this
... notation is placed before an object or array, it unwraps the children within, and inserts them right there.
```js
// For arrays:
let nums = [1, 2, 3];
let newNums = [...nums]; // => [1, 2, 3]
nums === newNums // => false! it's a new array

// For objects:
let person = {
  name: "Liz",
  age: 32
}
let newPerson = { ...person };
person === newPerson // => false! it's a new object

// Internal properties are left alone:
let company = {
  name: "Foo Corp",
  people: [
    { name: "Joe" },
    { name: "Alice" }
  ]
}
let newCompany = { ...company };
newCompany === company // => false! not the same object
newCompany.people === company.people // => true!
```
When used as shown above, the spread operator makes it easy to create a new object or array that contains the exact same contents as another one. This is useful for creating a copy of an object/array, and then overwriting the specific properties that you need to change. For each example below, I'll show what the incoming state looks like, and then show how to return an updated state.
For the sake of keeping the examples clean, I’m gonna ignore the “action” parameter entirely. Pretend that this state update will happen for any action. Of course in your own reducers you’ll probably have a
switch statement with
cases for each action, but I think that would just add noise here.
Updating State in React
To apply these examples to plain React state, you just need to tweak a couple things in these examples.
Since React will shallow merge the object you pass into
this.setState(), you don’t need to spread the existing state like you would with Redux.
In a Redux reducer, you might write this:
return { ...state, (updates here) }
With plain React state, you can write it like this, without the spread operator:
this.setState({ updates here })
Keep in mind, though, that since
setState does a shallow merge, you’ll need to use the object (or array) spread operator when you’re updating deeply-nested items within state (anything deeper than the first level).
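Here's a small plain-JS sketch of why: a shallow merge (roughly what setState does with the object you pass it) only preserves top-level keys, so a nested object you pass in replaces the old one wholesale:

```javascript
const state = {
  clicks: 0,
  house: { name: "Ravenclaw", points: 17 }
};

// Roughly what this.setState({ house: { points: 19 } }) would do:
const next = Object.assign({}, state, { house: { points: 19 } });

console.log(next.clicks);     // 0 – top-level keys survive the merge
console.log(next.house.name); // undefined! the nested object was replaced

// So deeper levels still need the spread treatment:
const fixed = Object.assign({}, state, {
  house: { ...state.house, points: 19 }
});

console.log(fixed.house.name);   // "Ravenclaw"
console.log(fixed.house.points); // 19
```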
Redux: Update an Object
When you want to update the top-level properties in the Redux state object, copy the existing state with
...state and then list out the properties you want to change, with their new values.
```js
function reducer(state, action) {
  /*
  State looks like:

  state = {
    clicks: 0,
    count: 0
  }
  */

  return {
    ...state,
    clicks: state.clicks + 1,
    count: state.count - 1
  }
}
```
Redux: Update an Object in an Object
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
When the object you want to update is one (or more) levels deep within the Redux state, you need to make a copy of every level up to and including the object you want to update. Here’s an example one level deep:
```js
function reducer(state, action) {
  /*
  State looks like:

  state = {
    house: {
      name: "Ravenclaw",
      points: 17
    }
  }
  */

  // Two points for Ravenclaw
  return {
    ...state, // copy the state (level 0)
    house: {
      ...state.house, // copy the nested object (level 1)
      points: state.house.points + 2
    }
  }
}
```
Here’s another example, this time updating an object that’s two levels deep:
```js
function reducer(state, action) {
  /*
  State looks like:

  state = {
    school: {
      name: "Hogwarts",
      house: {
        name: "Ravenclaw",
        points: 17
      }
    }
  }
  */

  // Two points for Ravenclaw
  return {
    ...state, // copy the state (level 0)
    school: {
      ...state.school, // copy level 1
      house: { // replace state.school.house...
        ...state.school.house, // copy existing house properties
        points: state.school.house.points + 2 // change a property
      }
    }
  }
}
```
This code can get hard to read when you’re updating deeply-nested items!
Redux: Updating an Object by Key
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
```js
function reducer(state, action) {
  /*
  State looks like:

  const state = {
    houses: {
      gryffindor: { points: 15 },
      ravenclaw: { points: 18 },
      hufflepuff: { points: 7 },
      slytherin: { points: 5 }
    }
  }
  */

  // Two points for Ravenclaw
  return {
    ...state, // copy state
    houses: { // replace state.houses...
      ...state.houses, // copy the other houses
      ravenclaw: { // replace state.houses.ravenclaw...
        ...state.houses.ravenclaw, // copy ravenclaw's properties
        points: state.houses.ravenclaw.points + 2 // ...and update points
      }
    }
  }
}
```
Redux: Prepend an item to an array
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
The mutable way to do this would be to use Array’s
.unshift function to add an item to the front. Array.prototype.unshift mutates the array, though, and that’s not what we want to do.
Here is how you can add an item to the beginning of an array in an immutable way, suitable for Redux:
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [1, 2, 3];
  */

  const newItem = 0;

  return [ // a new array
    newItem, // add the new item first
    ...state // then explode the old state at the end
  ];
}
```
Redux: Add an item to an array
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
The mutable way to do this would be to use Array’s
.push function to add an item to the end. That would mutate the array, though.
Here is how you can append an item to the end of an array, immutably:
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [1, 2, 3];
  */

  const newItem = 0;

  return [ // a new array
    ...state, // explode the old state first
    newItem // then add the new item at the end
  ];
}
```
You can also make a copy of the array with
.slice, and then mutate the copy:
```js
function reducer(state, action) {
  const newItem = 0;
  const newState = state.slice();

  newState.push(newItem);
  return newState;
}
```
Redux: Update an item in an array with `map`
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.) Array’s `.map` calls the function you provide on every item and returns a new array of the results, leaving the original array untouched. (We’ll cover updating an object in an array, below.)
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [1, 2, "X", 4];
  */

  return state.map((item, index) => {
    // Replace "X" with 3
    // alternatively: you could look for a specific index
    if (item === "X") {
      return 3;
    }

    // Leave every other item unchanged
    return item;
  });
}
```
Redux: Update an object in an array
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
This works the same way as above. The only difference is we’ll need to construct a new object and return a copy of the one we want to change.
In this example we have an array of users with email addresses. One of them changed their email and we need to update it. I’ll show how the user’s ID and new email could come in as part of the
action, but you can adapt this to accept the values from somewhere else of course (if you’re not using Redux, for instance).
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [
    { id: 1, email: 'jen@reynholmindustries.com' },
    { id: 2, email: 'peter@initech.com' }
  ]

  Action contains the new info:

  action = {
    type: "UPDATE_EMAIL",
    payload: {
      userId: 2, // Peter's ID
      newEmail: 'peter@construction.co'
    }
  }
  */

  return state.map((item, index) => {
    // Find the item with the matching id
    if (item.id === action.payload.userId) {
      // Return a new object
      return {
        ...item, // copy the existing item
        email: action.payload.newEmail // replace the email addr
      }
    }

    // Leave every other item unchanged
    return item;
  });
}
```
Redux: Insert an item in the middle of an array
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
Array’s
.splice function will insert an item, but it will also mutate the array.
Since we don’t want to mutate the original, we can make a copy first (with
.slice), then use
.splice to insert an item into the copy.
The other way to do this involves copying in all the elements BEFORE the new one, then inserting the new one, and then copying in all the elements AFTER it. It’s easy to get the indices wrong though.
Pro tip: Write unit tests for these things! It’s easy to make off-by-one errors.
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [1, 2, 3, 5, 6];
  */

  const newItem = 4;

  // make a copy
  const newState = state.slice();

  // insert the new item at index 3
  newState.splice(3, 0, newItem);

  return newState;

  /*
  // You can also do it this way:
  return [ // make a new array
    ...state.slice(0, 3), // copy the first 3 items unchanged
    newItem, // insert the new item
    ...state.slice(3) // copy the rest, starting at index 3
  ];
  */
}
```
Redux: Update an item in an array by index
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
We can use Array’s
.map to return a new value for a specific index, and leave the other elements unchanged.
```js
function reducer(state, action) {
  /*
  State looks like:

  state = [1, 2, "X", 4];
  */

  return state.map((item, index) => {
    // Replace the item at index 2
    if (index === 2) {
      return 3;
    }

    // Leave every other item unchanged
    return item;
  });
}
```
Redux: Remove an item from an array with `filter`
(This isn’t specific to Redux – the same method applies with plain React state. See here for how to adapt it.)
Array’s .filter function will call the function you provide, pass in each existing item, and return a new array with only the items where your function returned “true” (or truthy). If you return false, that item gets removed.

If you have an array with N items and you want to end up with fewer items, use .filter.
function reducer(state, action) {
  /*
  State looks like:
  state = [1, 2, "X", 4];
  */
  return state.filter((item, index) => {
    // Remove item "X"
    // alternatively: you could look for a specific index
    if (item === "X") {
      return false;
    }
    // Every other item stays
    return true;
  });
}
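As with .map, .filter hands back a brand new array and leaves the original alone, which you can confirm directly:

```javascript
// Minimal filter-based "remove" reducer, matching the example above.
function reducer(state) {
  return state.filter(item => item !== "X");
}

const state = [1, 2, "X", 4];
const next = reducer(state);

console.log(next);         // [1, 2, 4]
console.log(state.length); // 4 -- the original array still has every item
```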
Check out this Immutable Update Patterns section of the Redux docs for some other good tricks.
Easy State Updates with Immer
If you looked at some of the immutable state update code above and wanted to run away screaming, I don’t blame you.
Deeply-nested object updates are difficult to read, difficult to write, and difficult to get right. Unit tests are imperative, but even those don’t make the code much easier to read and write.
Thankfully, there’s a library that can help. Using Immer by Michael Weststrate, you can write the mutable code you know and love, with all the [].push and [].pop and = you can squeeze in there – and Immer will take that code and produce a perfect immutable update, like magic.
It’s awesome. Let me show you how it works:
First, you need to install Immer. (3.9K gzipped, according to this handy Import Cost plugin for VSCode. 2K according to Immer’s GitHub page. Either way – pretty small for how much awesomeness it adds)
yarn add immer
Then, you need to import the produce function from Immer. It only has this one export; this function is all it does. Which is great, nice and focused.
import produce from 'immer';
By the way, it’s called “produce” because it produces a new value, and the name is sort of the opposite of “reduce”. There’s an issue on Immer’s GitHub where they originally discussed the name.
From there, you can use the produce function to build yourself a nice little mutable playground, where all your mutations will be handled by the magic of JS Proxies. Here’s a before-and-after, starting with the plain JS version of a reducer that updates a value nested inside an object, followed by the Immer version:
/*
State looks like:
state = {
  houses: {
    gryffindor: { points: 15 },
    ravenclaw: { points: 18 },
    hufflepuff: { points: 7 },
    slytherin: { points: 5 }
  }
}
*/
function plainJsReducer(state, action) {
  const key = "ravenclaw";
  return {
    ...state, // copy the state
    houses: { // ...but replace the houses
      ...state.houses, // copy the houses
      [key]: { // ...but replace one specific house
        ...state.houses[key], // copy that house's data
        points: state.houses[key].points + 3 // ...but update its points
      }
    }
  };
}

function immerifiedReducer(state, action) {
  const key = "ravenclaw";

  // produce takes the existing state, and a function
  // It'll call the function with a "draft" version of the state
  return produce(state, draft => {
    // Modify the draft however you want
    draft.houses[key].points += 3;

    // The modified draft will be
    // returned automatically.
    // No need to return anything.
  });
}
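If the Proxy magic feels opaque, it can help to see a greatly simplified stand-in. This toy version is my own sketch, not Immer's actual implementation (real Immer uses Proxies and structural sharing instead of a blind deep copy), but it shows the produce contract: mutate a draft, get a new state back, original untouched.

```javascript
// Toy stand-in for Immer's produce(): deep-copy the state, let the
// recipe mutate the copy, and return the copy.
function toyProduce(state, recipe) {
  const draft = JSON.parse(JSON.stringify(state)); // naive deep copy
  const result = recipe(draft);
  // Like Immer: if the recipe returns nothing, use the mutated draft
  return result === undefined ? draft : result;
}

const state = {
  houses: {
    gryffindor: { points: 15 },
    ravenclaw: { points: 18 }
  }
};

const next = toyProduce(state, draft => {
  draft.houses.ravenclaw.points += 3;
});

console.log(next.houses.ravenclaw.points);  // 21
console.log(state.houses.ravenclaw.points); // 18 -- original is untouched
```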
Using Immer with React State
Immer works well with plain React state, too – the “functional” form of setState.
You may already know that React’s setState has a “functional” form that accepts a function and passes it the current state. The function then returns the new state:
onIncrementClick = () => {
  // The normal form:
  this.setState({
    count: this.state.count + 1
  });

  // The functional form:
  this.setState(state => {
    return { count: state.count + 1 };
  });
}
Immer’s produce function can be slotted in as the state update function. You’ll notice this way of calling produce only passes a single argument – the update function – instead of two arguments (state, draft => {}) as we did in the reducer example.
onIncrementClick = () => {
  // The Immer way:
  this.setState(produce(draft => {
    draft.count += 1;
  }));
}
This works because Immer’s produce function is set up to return a “curried” function when it’s called with only 1 argument. The function it returns, in this case, is ready to accept a state, and call your update function with the draft.
Gradually Adopting Immer
A nice feature of Immer is that because it’s so small and focused (just the one function that produces new states), it’s easy to add it to an existing codebase and try it out.
Immer is backwards compatible with existing Redux reducers, too. If you wrap your existing switch/case in Immer’s produce function, all of your reducer tests should still pass.
Earlier I showed that the update function you pass to produce can implicitly return undefined and that it’ll automatically pick up the changes to the draft state. What I didn’t mention is that the update function can alternatively return a brand new state, as long as it hasn’t made any changes to the draft.
This means your existing Redux reducers, which already return brand new states, can be wrapped with Immer’s produce function and they should keep working exactly the same. At that point, you’re free to replace hard-to-read immutable code at your leisure, piece by piece. Check out the official example of different ways to return data from producers.
Configure the DNS suffix search list for a disjoint namespace
Applies to: Exchange Server 2013
Topic Last Modified: 2016-12-09
This topic explains how to use the Group Policy Management console (GPMC) to configure the Domain Name System (DNS) suffix search list. In some Microsoft Exchange 2013 scenarios, if you have a disjoint namespace, you must configure the DNS suffix search list to include multiple DNS suffixes.
Estimated time to complete: 10 minutes
To perform this procedure, the account you use must be delegated membership in the Domain Admins group.
Confirm that you have installed .NET Framework 3.0 on the computer on which you will install the GPMC.
For information about keyboard shortcuts that may apply to the procedures in this topic, see Keyboard shortcuts in the Exchange admin center.
On a 32-bit computer in your domain, install GPMC with Service Pack 1 (SP1). For download information, see Group Policy Management Console with Service Pack 1.
Click Start > Programs > Administrative Tools >.
To verify that you have successfully completed your migration, do the following:
After you install Exchange 2013, verify that you can send email messages inside and outside your organization. | https://technet.microsoft.com/en-us/library/bb847901(v=exchg.150).aspx | CC-MAIN-2017-26 | refinedweb | 193 | 54.02 |
. > > > > this is subtly different from dpkg triggers, and the differences are > > due to the different requirements of selinux. YES it is necessary > > to run the selinux postinst.d script on EVERY package. YES it > > is necessary to run it after every package's postinst script. > > > This is the bit I still have trouble believing; i really can't help you with what you believe. i know you do: it's why i keep repeating it in the hope that i can help. your difficulty in believing it isn't going to make any difference in affecting whether it's actually true or not. it's quite simple: any file or directory created by dpkg as it unpacks the package needs to have its selinux permissions "retrofitted", as laid down by a SITE ADMINISTRATOR's decision on what version and what type of policy to use. the decision made by the site administrator (not by the package maintainer and not by the NSA!) results in the generation of a "file contexts" file, /etc/selinux/contexts/file_contexts. that file contains "the equivalent" of chmod+chown+chgrp, with regexps to match the files+directories+symlinks that that equivalent of chmod+chown+chgrp should be applied to. e.g. i am running a "strict" policy, version 1.14 with some custom modifications, so i have, as an example, these: /bin(/.*)? system_u:object_r:bin_t /bin/tcsh -- system_u:object_r:shell_exec_t /bin/ls -- system_u:object_r:ls_exec_t which means "everything in /bin (and including /bin the directory) except /bin/tcsh and /bin/ls get the file context 'bin_t". and: /etc(/.*)? system_u:object_r:etc_t /etc/mtab -- system_u:object_r:etc_runtime_t where etc_t is typically kept read-only to most programs, and files of type etc_runtime_t are typically allowed to be be written to by certain programs that need to (init scripts, dpkg etc). ... i wrote you a detailed message last night, in which i outlined an analogy. 
the analogy is to imagine that chmod, chgrp and chown were a new concept that had to be retrofitted onto a system where the default permissions were noaccess, noaccess, 0000, with no members in the group "noaccess". there would not be a chance in hell of being able to "retrofit" chmod, chgrp and chown into postinst scripts: package maintainers would bitch like mad, and not understand it, and it would be unreasonable to expect them to understand it (think of retro converting 30 years of FAT filesystem usage to ext2!) and it would almost undoubtedly be expected that every file and directory created would need to be chmodded, chowned and chgrp'd, as a package was installed. you'd have to specify a file with unix_contexts in it which had: /bin(/.*)? root,root,0755 /bin/ping root,root,1755 (...or is it 3755 for setuid?) etc, with THOUSANDS of entries... ... just like there are in /etc/selinux/contexts/file_contexts (5972 lines in the case of my modified 1.14 policy) the scope and scale of the task and achievement of selinux is just... mind-boggling. it's just... so ambitious!! *gibber*. retro-fitting a security model onto linux. madness! ahem. sorry. calm, calm. > I could claim that it is > necessary for ldconfig or scrollkeeper-update to run on EVERY page... > but that's just not true. and i can see for myself that it's not true. > Why is it necessary for the selinux update to complete before the > processing of the next package is begun? this is what i am not sure about.) namely, that there are some libselinux functions that can be used to ensure that at file/dir CREATE time the correct selinux permissions will be applied, on a per-file, per-directory basis. the /etc/dpkg/postinst.d/selinux approach is the quickest and simplest way to achieve the desired results... ... it might be the case that there are flaws in it that will _require_ modifications to dpkg... looking at the code, main.c around lines 529 etc. in the tarobject function. 
am i correct in thinking that the tarobject function is responsible for doing the dpkg unpacking? because if so, that's where the setfscreatecon() function calls to libselinux functions would be need to be done (as they are in udev, cron, star, and other packages). this would result in guaranteed atomic file/dir/symlink create operations that the /etc/dpkg/postinst.d/selinux ... "hack" just doesn't and can't do. > If it's really true, isn't it going to be also true that sometimes you > need to do the selinux processing before certain actions in the postinst > (like starting the daemon) are performed? ... you know, i think you could be right. i think only stephen, russell, dan or colin are in a position to answer that. hang on... in russell's postinst patch, what happens first. the package is unpacked, the /etc/dpkg/postinst.d/selinux script is run, the package's postinst script is run. [this is the one that makes sense] OR is it: the package is unpacked, the package's postinst script is run, the /etc/dpkg/postinst.d/selinux script is run. [this is the one that would suffer from the problems you outline above, scott] > Or does it not matter that the daemon is started before selinux > privileges/permissions are applied to its binary? yes, i think it does. _hope_fully russell's patched dpkg does the selinux postinst.d script _before_ running the package's postinst script. > If that doesn't matter I don't see why it matters that the processing is > deferred until the end of the run. well i too was hoping that it would be the case: it doesn't look like it will, i am going to assume otherwise by rewriting your statement as i believe it should have been asked, pretend that you said that instead, and then answer that. this is what i believe you to have said, which "softens" the impact of what "you" are trying to say into a respectful joke. 
> if i understand you correctly, then by the same logic, it could be > argued that Package maintainers don't need, and don't even need to > know about debconf templates, translations, debian policy, etc. > surely, therefore, we should consider separating those out into > separate packages too! *lol* :) not quite.? a SITE ADMINISTRATOR may decide to: - apply the targetted policy, which allows users to do pretty much anything (respecting unix perms of course) and it restricts services. - not apply a policy at all, for testing purposes. - apply the "strict" policy, which even stops unauthorised users from doing "su -", and even stops administrators from starting and stopping services unless they go through a special procedure - write, and apply, THEIR OWN policy. now, in the case of the latter, you can't seriously be telling me that a site administrator's OWN policy files, which they have hacked up for a particular package and a particular requirement, should go into a debian package? and that a debian maintainer must include hundreds, maybe thousands of separate administrators' policy files, world-wide? it just can't happen. > > i am endeavouring to either convince him otherwise, or to make > > some suggestions which would allow the present or an equivalent > > of the present /etc/dpkg/postinst.d/selinux script running > > under russell's patched dpkg to successfully be replaced by > > a dpkg "trigger". > > > I believe triggers really will work for what you want; if they don't > then here's two problems with postinst.d: > > 1) is there actually another use case for it that isn't solved by > triggers? > > 2) if not, then it's a selinux-specific feature and actually worse than > simply embedding knowledge of selinux into dpkg. > >. > > dpkg "triggers" will ONLY work for selinux under these circumstances, > > and i REALLY mean "only". > > > > a) if the selinux dpkg "trigger" could guarantee to ALWAYS be run, not > > just be called from one or a few packages. 
reason: ALL files > > installed by ALL packages must have their file contexts revamped, > > NOT just some. > > > > all files, in fact, that are listed by dpkg -L <packagename> > > > Another use case for this "always run" functionality is prelinking > binaries, or updating the updatedb/locate database. yes. yes! well ... in order to be useful, both those two systems (prelink and updatedb) would need to be given a list of files removed and a list of files added, and to know what to do with them. > > b) i think i am right in saying that debian "Pre-Depends" > > groupings will have to be respected: namely that it would be > > necessary AT THE LEAST to have the selinux dpkg "trigger" run > > on groups of packages separated at "Pre-Depends" boundaries. > > > Why? > > If there really is this "policy must be in place before FOO" problem, > surely that (potentially) exists *inside* a single postinst. okay, looking at russell's patch, it looks like he's been smart enough to switch the ordering based on whether it's a pre or post script. it _looks_ like: - on post<something> , all scripts in /etc/dpkg/post<something>.d are run BEFORE the package's post<something> script is run - on pre<something>, all scripts in /etc/dpkg/pre<something>.d are run AFTER ... > In which case we're back to changing postinst of every package again > (which is actually largely just a change-debhelper problem, anyway). well, if the same effect can be achieved by doing the equivalent in debhelper, then great. > > i also believe that i am right in saying that this would > > probably have to be done for all dpkg triggers _anyway_ - NOT > > just the proposed selinux dpkg trigger. > > > > otherwise, you could end up with awful screw-ups where > > update-modules might not get run, and a "Pre-Dependent" > > package was _really_ relying on a particular module or > > module options being set, that sort of thing. 
> > > You've made the very common mistake of mis-understanding how Pre-Depends > work, i'm happy to be corrected because i need to understand the issues. > > c) if the dpkg "triggers" could receive the complete list of all package > > names, then it is conceivable that this list of names could be > > amalgamated by the dpkg trigger /usr/lib/dpkg/triggers/selinux... > > something like this: > > > I certainly don't see a problem with this. We're wandering to classes > with this behaviour though <g> yep :) all solutions welcome. >? does dpkg -L not include debconf-maintained configuration files? > Aren't you going to need to modify the postinst to add policy for them > too? > > > > *1 is there an absolute requirement that the present > > /etc/dpkg/postinst.d/selinux script BE RUN at the > > end and ONLY IMMEDIATELY AT THE END of EVERY > > postinst script of EVERY package being installed? > > > If there is this requirement, I maintain there is probably a requirement > that it be run before the end of an individual postinst ... requiring > changes to the postinst. > > Use case: Mail server, started as per-policy in postinst. > > Do you need to have the selinux policies for its queue and spool > directories in place before it starts? yes, definitely. [see above description of my forays into russell's patch: it's dealt with. hopefully. mostly. ] > If so you need to modify that > postinst to do the work before the invoke-rc.d call. > > > *2 is it sufficient, is it the case that it's okay, to > > just wait until the end of a dpkg run (respecting > > "Pre-Depends" groupings of course), to wait until ALL > > postinst scripts (of one or more "Pre-Depends" groups > > of course) are run and THEN to do the above [proposed] > > /usr/lib/dpkg/triggers/selinux script (more than once if there > > is more than one "Pre-Depends" grouping of course), receiving > > the list of all names of all packages just newly installed > > (in each "Pre-Depends" group of course)? 
> > > I don't believe you need this pre-depends grouping (see above). > > > and if it's _not_ okay, then unfortunately we must continue to insist > > that dpkg have the postinst.d system. > > > Which would have an entire feature for one particular subset of users, > leaving that feature open to abuse by the others; which I massively > dislike. > > > or continue to maintain a fork of dpkg. > > > > or request the ftp masters to accept the creation of an se-dpkg package > > containing the forked version of dpkg. > > > These are the same thing, and there's no need to involve the ftpmasters. > I'm reasonably sure that anybody is free to upload competing package > managers, provided they don't stamp on the dpkg namespace (dpkg.*). > > > my concern with the latter (*2) is that there would be dependency > > problems due to file contexts not having been updated yet, > > that are completely avoided by the /etc/dpkg/postinst.d/selinux > > approach [run just after the package's postinst script is run]. > > > Why would there be this problem? > > ... or is the problem i envisage avoided by the selinux domain > > in which dpkg is run (and by breaking things down into "Pre-Depends" > > groups). > > > Why do we need to do this breaking-things-into-groups? > > > for those people not familiar with dpkg, "Pre-Depends" is a system which > > forces a COMPLETE install of all "Pre-Depends" packages prior to > > proceeding with the installation of a package with a "Pre-Depends" > > requirement. > > > > if you specify "Depends:" then what you are saying is that > > dpkg can just run all the unpacks of all packages and all > > dependencies (in any order), then run all postinsts of all > > packages and all dependencies (basically in any order it likes). > > > *BZZZT* wrong! > > Utterly, utterly, incorrect my friend. *lol* :) > Pre-Depends does require complete installation of a package has been > performed before the pre-depending package. 
This means that the pre- > depended package will have been unpacked and configured (including > postinst) before the pre-depending package is even unpacked. yes. that is what i understood this to be, i just wasn't able to articulate it clearly. > Depends requires that depended packages have been unpacked and > configured before depending packages are configured. This means that > the packages may be unpacked in a reasonably random order, BUT THAT > CONFIGURATION HAPPENS *IN ORDER* (not "any order it likes"). > that i didn't know. makes sense. thanks for responding to this scott: we'll get there, i'm sure. i'll write a summary message later, outlining some of the problems you've highlighted. (e.g the symlinks-created-in-postinst scripts one) l. | https://lists.debian.org/debian-devel/2004/09/msg00009.html | CC-MAIN-2017-04 | refinedweb | 2,410 | 61.77 |
17 May 2016 · Sudhanshu Ranjan
What is a static constructor? Explain.
A static constructor is used to initialize any static data, or to perform a particular action that needs to be performed once only. It is called automatically before the first instance is created or any static members are referenced.
It is mainly used to initialize the static fields and properties of the class, and is useful when the program needs to do something before any reference to the class is created, like changing the background of the application.

There can be at most one static constructor in a class, and a static constructor cannot take any parameters.
Example:
using System;

namespace TestApp
{
    class Program
    {
        static Program()
        {
            // code inside this constructor runs once,
            // before the class is used for the first time
        }

        static void Main(string[] args)
        {
        }
    }
}
Static constructors have the following properties:
1. The user has no control over when the static constructor is executed in the program.

2. A typical use of static constructors is when the class is using a log file and the constructor is used to write entries to this file.

3. Static constructors are also useful when creating wrapper classes for unmanaged code, when the constructor can call the LoadLibrary method.

4. If a static constructor throws an exception, the runtime will not invoke it a second time, and the type will remain uninitialized for the lifetime of the application domain in which your program is running.

5. A static constructor does not take access modifiers or have parameters.

6. A static constructor is called automatically to initialize the class before the first instance is created or any static members are referenced.

7. A static constructor cannot be called directly.
On Fri, 18 Mar 2005 12:41:17 -0300, Edgar Poce <edgarpoce@gmail.com> wrote:
> Hi
>
> I'm trying to run the persistence managers directly without starting
> jackrabbit and I realized I need instances of RepositoryConfig,
> NodeTypeRegistry and a NameSpaceRegistry. The repository config part is
> simple (btw, thanks jukka). Now I need instances of NodeTypeRegistry and
> NameSpaceRegistry and there's no public class that provide this
> functionality. I think that to copy the code from RepositoryImpl is not
> a good choice. Is there any option?
hmm, there's no quick&easy solution right now. the current PM implementations
in jackrabbit don't use these, so i guess, as a workaround, you could
just pass null in the PMContext constructor.
cheers
stefan
>
> regards
> edgar
> | http://mail-archives.apache.org/mod_mbox/jackrabbit-dev/200503.mbox/%3C90a8d1c005031807536f85b8b9@mail.gmail.com%3E | CC-MAIN-2014-15 | refinedweb | 123 | 58.58 |
ObjectDumper is a very handy assembly that can be found on NuGet.

Once installed, the assembly provides a simple single static method (Dump) that takes a generic type T, a dump name, and a writer as parameters.

The idea of the object dumper is that it recursively walks the object's properties through reflection and collects all the data underlying that object.
See the below sample code, I am using ObjectDumper's dump method to output the object's types and values onto the console.
Unfortunately, the ObjectDumper NuGet package is not supported on .NET Core, which this lesson is built on, so I am posting plain code as a snippet. If you are using the regular .NET Framework as your environment, then you can try this code on your local development machine.
using System;
using Models;

namespace ConsoleApplication1
{
    class Program
    {
        public static void Main(string[] args)
        {
            Item item = new Item
            {
                Name = "Chocolate",
                Number = 12345,
                CreatedDate = DateTime.Now,
                Price = 36.7M,
                Category = new Category(1, "Sweets")
            };

            using (var writer = new System.IO.StringWriter())
            {
                ObjectDumper.Dumper.Dump(item, "Object Dumper", writer);
                Console.Write(writer.ToString());
            }
        }
    }
}
After running the above code, the dumped representation of the item (its property names, types, and values) is written to the console.
| https://tech.io/playgrounds/2098/how-to-dump-objects-in-c/using-objectdumper | CC-MAIN-2018-17 | refinedweb | 216 | 57.87 |
MySQL supports the use of protocol trace plugins: client-side plugins that implement tracing of communication between a client and the server that takes place using the client/server protocol. This capability can be used in MySQL 5.7.2 and up.
MySQL includes a test protocol trace plugin that serves to illustrate the information available from such plugins, and as a guide to writing other protocol trace plugins. To see how the test plugin works, use a MySQL source distribution; binary distributions are built with the test plugin disabled.
Enable the test protocol trace plugin by configuring MySQL with the WITH_TEST_TRACE_PLUGIN CMake option enabled. This causes the test trace plugin to be built and MySQL client programs to load it, but the plugin has no effect by default. Control the plugin using these environment variables:
MYSQL_TEST_TRACE_DEBUG: Set this variable to a value other than 0 to cause the test plugin to produce diagnostic output on stderr.
MYSQL_TEST_TRACE_CRASH: Set this variable to a value other than 0 to cause the test plugin to abort the client program if it detects an invalid trace event.
Diagnostic output from the test protocol trace plugin can disclose passwords and other sensitive information.
Given a MySQL installation built from source with the test plugin enabled, you can see a trace of the communication between the mysql client and the MySQL server as follows:
shell> export MYSQL_TEST_TRACE_DEBUG=1
shell> mysql
test_trace: Test trace plugin initialized
test_trace: Starting tracing in stage CONNECTING
test_trace: stage: CONNECTING, event: CONNECTING
test_trace: stage: CONNECTING, event: CONNECTED
test_trace: stage: WAIT_FOR_INIT_PACKET, event: READ_PACKET
test_trace: stage: WAIT_FOR_INIT_PACKET, event: PACKET_RECEIVED
test_trace: packet received: 87 bytes
0A 35 2E 37 2E 33 2D 6D 31 33 2D 64 65 62 75 67   .5.7.3-m13-debug
2D 6C 6F 67 00 04 00 00 00 2B 7C 4F 55 3F 79 67   -log.....+|OU?yg
test_trace: 004: stage: WAIT_FOR_INIT_PACKET, event: INIT_PACKET_RECEIVED
test_trace: 004: stage: AUTHENTICATE, event: AUTH_PLUGIN
test_trace: 004: Using authentication plugin: mysql_native_password
test_trace: 004: stage: AUTHENTICATE, event: SEND_AUTH_RESPONSE
test_trace: 004: sending packet: 188 bytes
85 A6 7F 00 00 00 00 01 21 00 00 00 00 00 00 00   .?......!.......
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
...
mysql>
quit
test_trace: 008: stage: READY_FOR_COMMAND, event: SEND_COMMAND
test_trace: 008: QUIT
test_trace: 008: stage: READY_FOR_COMMAND, event: PACKET_SENT
test_trace: 008: packet sent: 0 bytes
test_trace: 008: stage: READY_FOR_COMMAND, event: DISCONNECTED
test_trace: 008: Connection closed
test_trace: 008: Tracing connection has ended
Bye
test_trace: Test trace plugin de-initialized
To disable trace output, do this:
shell> MYSQL_TEST_TRACE_DEBUG=
To use your own protocol trace plugins, you must configure MySQL with the WITH_TEST_TRACE_PLUGIN CMake option disabled because only one protocol trace plugin can be loaded at a time and an error occurs for attempts to load a second one. If you have already built MySQL with the test protocol trace plugin enabled to see how it works, you must rebuild MySQL without it before you can use your own plugins.
This section discusses how to write a basic protocol trace plugin named simple_trace. This plugin provides a framework showing how to set up the client plugin descriptor and create the trace-related callback functions. In simple_trace, these functions are rudimentary and do little other than illustrate the arguments required. To see in detail how a trace plugin can make use of trace event information, check the source file for the test protocol trace plugin (test_trace_plugin.cc in the libmysql directory of a MySQL source distribution). However, note that the st_mysql_client_plugin_TRACE structure used there differs from the structures used with the usual client plugin declaration macros. In particular, the first two members are defined explicitly, not implicitly by declaration macros.
Several header files contain information relevant to protocol trace plugins:
client_plugin.h: Defines the API for client plugins. This includes the client plugin descriptor and function prototypes for client plugin C API calls (see Section 22.8.14, “C API Client Plugin Functions”).
plugin_trace.h: Contains declarations for client-side plugins of type MYSQL_CLIENT_TRACE_PLUGIN. It also contains descriptions of the permitted protocol stages, transitions between stages, and the types of events permitted at each stage.
To write a protocol trace plugin, include the following header files in the plugin source file. Other MySQL or general header files might also be needed, depending on the plugin capabilities and requirements.
#include <mysql/plugin_trace.h>
#include <mysql.h>
plugin_trace.h includes client_plugin.h, so you need not include the latter file explicitly.
Declare the client-side plugin descriptor with the mysql_declare_client_plugin() and mysql_end_client_plugin macros (see Section 23.2.4.2.3, “Client Plugin Descriptors”). For the simple_trace plugin, the descriptor looks like this:
mysql_declare_client_plugin(TRACE)
  "simple_trace",                 /* plugin name */
  "Author Name",                  /* author */
  "Simple protocol trace plugin", /* description */
  {1,0,0},                        /* version = 1.0.0 */
  "GPL",                          /* license type */
  NULL,                           /* for internal use */
  plugin_init,                    /* initialization function */
  plugin_deinit,                  /* deinitialization function */
  plugin_options,                 /* option-handling function */
  trace_start,                    /* start-trace function */
  trace_stop,                     /* stop-trace function */
  trace_event                     /* event-handling function */
mysql_end_client_plugin;
The descriptor members from the plugin name through the option-handling function are common to all client plugin types. The members following the common members implement trace event handling.
Function members for which the plugin needs no processing can be declared as NULL in the descriptor, in which case you need not write any corresponding function. For illustration purposes and to show the argument syntax, the following discussion implements all functions listed in the descriptor, even though some of them do nothing.
The initialization, deinitialization, and options functions common to all client plugins are declared as follows. For a description of the arguments and return values, see Section 23.2.4.2.3, “Client Plugin Descriptors”.
static int
plugin_init(char *errbuf, size_t errbuf_len, int argc, va_list args)
{
  return 0;
}

static int
plugin_deinit()
{
  return 0;
}

static int
plugin_options(const char *option, const void *value)
{
  return 0;
}
The trace-specific members of the client plugin descriptor are callback functions. The following descriptions provide more detail on how they are used. Each has a first argument that is a pointer to the plugin instance in case your implementation needs to access it.
trace_start(): This function is called at the start of each traced connection (each connection that starts after the plugin is loaded). It is passed the connection handler and the protocol stage at which tracing starts. trace_start() allocates memory needed by the trace_event() function, if any, and returns a pointer to it. If no memory is needed, this function returns NULL.
static void*
trace_start(struct st_mysql_client_plugin_TRACE *self,
            MYSQL *conn,
            enum protocol_stage stage)
{
  struct st_trace_data *plugin_data= malloc(sizeof(struct st_trace_data));

  fprintf(stderr, "Initializing trace: stage %d\n", stage);
  if (plugin_data)
  {
    memset(plugin_data, 0, sizeof(struct st_trace_data));
    fprintf(stderr, "Trace initialized\n");
    return plugin_data;
  }
  fprintf(stderr, "Could not initialize trace\n");
  exit(1);
}
trace_stop(): This function is called when tracing of the connection ends. That usually happens when the connection is closed, but can happen earlier. For example, trace_event() can return a nonzero value at any time, and that causes tracing of the connection to terminate. trace_stop() is then called even though the connection has not ended. trace_stop() is passed the connection handler and a pointer to the memory allocated by trace_start() (NULL if none). If the pointer is non-NULL, trace_stop() should deallocate the memory. This function returns no value.
static void
trace_stop(struct st_mysql_client_plugin_TRACE *self,
           MYSQL *conn,
           void *plugin_data)
{
  fprintf(stderr, "Terminating trace\n");
  if (plugin_data)
    free(plugin_data);
}
trace_event(): This function is called for each event occurrence. It is passed a pointer to the memory allocated by trace_start() (NULL if none), the connection handler, the current protocol stage and event codes, and event data. This function returns 0 to continue tracing, nonzero if tracing should stop.
static int
trace_event(struct st_mysql_client_plugin_TRACE *self,
            void *plugin_data,
            MYSQL *conn,
            enum protocol_stage stage,
            enum trace_event event,
            struct st_trace_event_args args)
{
  fprintf(stderr, "Trace event received: stage %d, event %d\n", stage, event);
  if (event == TRACE_EVENT_DISCONNECTED)
    fprintf(stderr, "Connection closed\n");
  return 0;
}
The tracing framework shuts down tracing of the connection when the connection ends, so trace_event() should return nonzero only if you want to terminate tracing of the connection early. Suppose that you want to trace only connections for a certain MySQL account. After authentication, you can check the user name for the connection and stop tracing if it is not the user in whom you are interested.
For each call to trace_event(), the st_trace_event_args structure contains the event data. It has this definition:
struct st_trace_event_args
{
  const char          *plugin_name;
  int                  cmd;
  const unsigned char *hdr;
  size_t               hdr_len;
  const unsigned char *pkt;
  size_t               pkt_len;
};
For different event types, the st_trace_event_args structure contains the information described following. All lengths are in bytes. Unused members are set to 0/NULL.
AUTH_PLUGIN event:
plugin_name The name of the plugin
SEND_COMMAND event:
cmd       The command code
hdr       Pointer to the command packet header
hdr_len   Length of the header
pkt       Pointer to the command arguments
pkt_len   Length of the arguments
Other SEND_xxx and xxx_RECEIVED events:
pkt       Pointer to the data sent or received
pkt_len   Length of the data
PACKET_SENT event:
pkt_len   Number of bytes sent
To compile and install a plugin library object file, see the instructions in Section 23.2.4.3, “Compiling and Installing Plugin Libraries”. To use the library file, it must be installed in the plugin directory (the directory named by the plugin_dir system variable).
After the plugin library file is compiled and installed in the plugin directory, you can test it easily by setting the LIBMYSQL_PLUGINS environment variable to the plugin name, which affects any client program that uses that variable. mysql is one such program:
shell> export LIBMYSQL_PLUGINS=simple_trace
shell> mysql
Initializing trace: stage 0
Trace initialized
Trace event received: stage 0, event 1
Trace event received: stage 0, event 2
...
Welcome to the MySQL monitor.  Commands end with ; or \g.
Trace event received
Trace event received
...
mysql> SELECT 1;
Trace event received: stage 4, event 12
Trace event received: stage 4, event 16
...
Trace event received: stage 8, event 14
Trace event received: stage 8, event 15
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)

mysql> quit
Trace event received: stage 4, event 12
Trace event received: stage 4, event 16
Trace event received: stage 4, event 3
Connection closed
Terminating trace
Bye
To stop the trace plugin from being loaded, do this:
shell> LIBMYSQL_PLUGINS=
It is also possible to write client programs that directly load the plugin. You can tell the client where the plugin directory is located by calling mysql_options() to set the MYSQL_PLUGIN_DIR option:
char *plugin_dir = "path_to_plugin_dir";

/* ... process command-line options ... */

mysql_options(&mysql, MYSQL_PLUGIN_DIR, plugin_dir);
Typically, the program will also accept a --plugin-dir option that enables users to override the default value.
Should a client program require lower-level plugin management, the client library contains functions that take an st_mysql_client_plugin argument. See Section 22.8.14, “C API Client Plugin Functions”.
This article is a guide for the popular DHT11 and DHT22 temperature and humidity sensors with the Arduino. We’ll explain how it works, show some of its features and share an Arduino project example that you can modify to use in your own projects.
For more guides about other popular sensors, check our compilation of more than 60 Arduino tutorials and projects: 60+ Arduino Projects and Tutorials.
Introducing the DHT11 and DHT22 Sensors
The DHT11 and DHT22 sensors are used to measure temperature and relative humidity. These are very popular among makers and electronics hobbyists.
These sensors contain a chip that does analog to digital conversion and spit out a digital signal with the temperature and humidity. This makes them very easy to use with any microcontroller.
DHT11 vs DHT22
The DHT11 and DHT22 are very similar, but differ in their specifications. The following table compares some of the most important specifications of the DHT11 and DHT22 temperature and humidity sensors. For a more in-depth analysis of these sensors, please check the sensors’ datasheet.
The DHT22 sensor has a better resolution and a wider temperature and humidity measurement range. However, it is a bit more expensive, and you can only request readings with 2 seconds interval.
The DHT11 has a smaller range and it’s less accurate. However, you can request sensor readings every second. It’s also a bit cheaper.
Despite their differences, they work in a similar way, and you can use the same code to read temperature and humidity. You just need to select in the code the sensor type you’re using.
DHT Pinout
DHT sensors have four pins as shown in the following figure. However, if you get your DHT sensor in a breakout board, it comes with only three pins and with an internal pull-up resistor on pin 2.
The following table shows the DHT22 and DHT11 pinout. When the sensor is facing you, pin numbering starts at 1 from left to right:

1  VCC (power supply)
2  Data
3  NC (not connected)
4  GND
Where to buy?
You can check Maker Advisor Tools‘ page and find the best price for these modules:
DHT11 Temperature and Humidity Sensor with Arduino
In this section, we’ll build a simple project with the Arduino that reads temperature and humidity and displays the results on the Serial Monitor.
Parts Required
To complete this tutorial, you need the following components:
- Arduino UNO – read Best Arduino Starter Kits
- DHT11 temperature and humidity sensor
- Breadboard
- 4.7k Ohm Resistor
- Jumper wires
You can use the preceding links or go directly to MakerAdvisor.com/tools to find all the parts for your projects at the best price!
Schematic
Follow the next schematic diagram to wire the DHT11 (or DHT22) temperature and humidity sensor to the Arduino.
Here are the connections (from left to right):

- VCC: connect to 5V
- Data: connect to digital pin 2, with the 4.7k Ohm pull-up resistor between Data and VCC
- NC: not connected
- GND: connect to GND
To read from the DHT sensor, we’ll use the DHT library from Adafruit. To use this library you also need to install the Adafruit Unified Sensor library. Follow the next steps to install those libraries.
Open your Arduino IDE and go to Sketch > Include Library > Manage Libraries. The Library Manager should open.
Search for “DHT” on the Search box and install the DHT library from Adafruit.
After installing the DHT library from Adafruit, type “Adafruit Unified Sensor” in the search box. Scroll all the way down to find the library and install it.
After installing the libraries, restart your Arduino IDE.
Code
After installing the necessary libraries, you can upload an example code from the library.
In your Arduino IDE, go to File > Examples > DHT Sensor library > DHTtester
The following code should load. It reads temperature and humidity, and displays the results in the Serial Monitor.
#include "DHT.h"

#define DHTPIN 2     // what digital pin we're connected to

#define DHTTYPE DHT11   // DHT 11
//#define DHTTYPE DHT22   // DHT 22 (AM2302)
//#define DHTTYPE DHT21   // DHT 21 (AM2301)

// Initialize DHT sensor for normal 16mhz Arduino
DHT dht(DHTPIN, DHTTYPE);

void setup() {
  Serial.begin(9600);
  Serial.println("DHTxx test!");
  dht.begin();
}

void loop() {
  // Wait a few seconds between measurements
  delay(2000);

  float h = dht.readHumidity();
  // Read temperature as Celsius (the default)
  float t = dht.readTemperature();
  // Read temperature as Fahrenheit (isFahrenheit = true)
  float f = dht.readTemperature(true);

  // Check if any reads failed and exit early (to try again)
  if (isnan(h) || isnan(t) || isnan(f)) {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }

  // Compute heat index in Fahrenheit (the default)
  float hif = dht.computeHeatIndex(f, h);
  // Compute heat index in Celsius (isFahreheit = false)
  float hic = dht.computeHeatIndex(t, h, false);

  Serial.print("Humidity: ");
  Serial.print(h);
  Serial.print(" %\t");
  Serial.print("Temperature: ");
  Serial.print(t);
  Serial.print(" *C ");
  Serial.print(f);
  Serial.print(" *F\t");
  Serial.print("Heat index: ");
  Serial.print(hic);
  Serial.print(" *C ");
  Serial.print(hif);
  Serial.println(" *F");
}
How the Code Works
You start by including the DHT library:
#include "DHT.h"
Then, you define the pin that the DHT sensor is connected to. In this case it is connected to digital pin 2.
#define DHTPIN 2 // what digital pin we're connected to
Then, you need to define the DHT sensor type you’re using. In our example we’re using the DHT11.
#define DHTTYPE DHT11 // DHT 11
If you’re using another DHT sensor, you need to comment the previous line and uncomment one of the following:
//#define DHTTYPE DHT22 // DHT 22 (AM2302) //#define DHTTYPE DHT21 // DHT 21 (AM2301)
Then, initialize a DHT object called dht with the pin and type you’ve defined previously:
DHT dht(DHTPIN, DHTTYPE);
In the setup(), initialize the Serial Monitor at a baud rate of 9600 for debugging purposes.
Serial.begin(9600);
Serial.println("DHTxx test!");
Initialize the DHT sensor with the .begin() method.
dht.begin();
In the loop(), at the beginning, there's a delay of 2 seconds. This delay is needed to give the sensor enough time between readings: the minimum interval between readings is two seconds for the DHT22 and one second for the DHT11.
delay(2000);
Reading temperature and humidity is very simple. To get humidity, you just need to use the readHumidity() method on the dht object. In this case, we’re saving the humidity in the h variable. Note that the readHumidity() method returns a value of type float.
float h = dht.readHumidity();
Similarly, to read temperature use the readTemperature() method.
float t = dht.readTemperature();
To get temperature in Fahrenheit degrees, just pass true to the readTemperature() method as follows:
float f = dht.readTemperature(true);
This library also comes with methods to compute the heat index in Fahrenheit and Celsius:
// Compute heat index in Fahrenheit (the default) float hif = dht.computeHeatIndex(f, h); // Compute heat index in Celsius (isFahreheit = false) float hic = dht.computeHeatIndex(t, h, false);
Finally, all readings are displayed on the Serial Monitor.
Serial.print("Humidity: "); Serial.print(h); Serial.print(" %\t"); Serial.print("Temperature: "); Serial.print(t); Serial.print(" *C "); Serial.print(f); Serial.print(" *F\t"); Serial.print("Heat index: "); Serial.print(hic); Serial.print(" *C "); Serial.print(hif); Serial.println(" *F");
Demonstration
After uploading the code to the Arduino, open the Serial Monitor at a baud rate of 9600. You should get sensor readings every two seconds. Here’s what you should see in your Arduino IDE Serial Monitor.
Troubleshooting – Failed to read from DHT sensor
If you’re trying to read the temperature and humidity from the DHT11/DHT22 sensor and you get an error message in your Serial Monitor, follow the next steps to see if you can make your sensor work (or read our dedicated DHT Troubleshooting Guide).
“Failed to read from DHT sensor!” or Nan readings
If your DHT sensor returns the error message “Failed to read from DHT sensor!” or the DHT readings return “Nan”:
Try one of the following troubleshooting tips:
- Wiring: when you’re building an electronics project, you need to double-check the wiring or pin assignment. After checking and testing that your circuit is properly connected, if it still doesn’t work, continue reading the next troubleshooting tips.
- Power: the DHT sensor has an operating range of 3V to 5.5V (DHT11) or 3V to 6V (DHT22). If you’re powering the sensor from the a 3.3V pin, in some cases powering the DHT with 5V solves the problem.
- Bad USB port or USB cable: sometimes the computer's USB port or the USB cable cannot supply enough current to power the Arduino and the sensor reliably. Try a different USB port or a different cable.
- Power source: as mentioned in the previous tip, your Arduino might not be supplying enough power to properly read from the DHT sensor. In some cases, you might need to power the Arduino with a power source that provides more current.
- Sensor type: double-check that you’ve uncommented/commented in your code the right sensor for your project. In this project, we were using the DHT22:
//#define DHTTYPE DHT11 // DHT 11 #define DHTTYPE DHT22 // DHT 22 (AM2302), AM2321 //#define DHTTYPE DHT21 // DHT 21 (AM2301)
- Sampling rate: the DHT sensor is very slow getting the readings (the sensor readings may take up to 2 seconds). In some cases, increasing the time between readings solves the problem.
The DHT11 and DHT22 sensors provide an easy and inexpensive way to get temperature and humidity measurements with the Arduino. The wiring is very simple – you just need to connect the DHT data pin to an Arduino digital pin.
Writing the code to get temperature and humidity is also simple thanks to the DHT library. Getting readings is as easy as calling the readTemperature() and readHumidity() methods.
Question: The Waitangi Group has invested $18,000 in a high-tech project
The Waitangi Group has invested $18,000 in a high-tech project lasting three years. Depreciation is $5,300, $7,800, and $4,900 in years 1, 2, and 3, respectively. The project generates pretax income of $2,260 each year. The pretax income already includes the depreciation expense. If the tax rate is 25 percent, what is the project’s average accounting return (AAR)?
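One way to work this out is sketched below. It assumes the common textbook convention for AAR — average net income divided by average book value, where the book value is averaged over the beginning balance and each year-end balance; conventions vary, so treat that choice as an assumption.

```python
# Average accounting return (AAR) = average net income / average book value.

investment = 18_000
depreciation = [5_300, 7_800, 4_900]  # years 1-3; note these sum to 18,000
pretax_income = 2_260                 # per year, already net of depreciation
tax_rate = 0.25

net_income = pretax_income * (1 - tax_rate)  # 1,695 per year

book_values = [investment]
for d in depreciation:
    book_values.append(book_values[-1] - d)  # 18,000 -> 12,700 -> 4,900 -> 0
avg_book_value = sum(book_values) / len(book_values)  # 35,600 / 4 = 8,900

aar = net_income / avg_book_value
print(f"AAR = {aar:.2%}")  # AAR = 19.04%
```

Under this convention the project's AAR is about 19.04 percent.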