| text (string, 454–608k chars) | url (string, 17–896 chars) | dump (91 classes) | source (1 value) | word_count (int64, 101–114k) | flesch_reading_ease (float64, 50–104) |
|---|---|---|---|---|---|
Talk:Proposed features/peak
Rendering
I have a small proposal regarding the rendering of the tags peak=hill and peak=knoll/hillock. A stereotypical mountain (in reality it varies) has a very distinct peak, while hills and knolls/hillocks (especially the latter) usually don't. If we're going to use different symbols for different tags, maybe it's a good idea to reflect this fact in them and use a more rounded icon for hills and (or?) knolls/hillocks.
For example:
Live example:
--Psadk (talk) 18:00, 6 July 2014 (UTC)
- I redid the symbol for hill (
) (because, of course, I forgot to save the xcf file...) and made a new one for hillock/knoll (
). Live example: --Psadk (talk) 07:39, 7 July 2014 (UTC)
General issues
For me, the idea of these tags is not clearly described. The first thing I'd like to understand is what exactly these tags should reflect: height, shape, origin (geological processes), surface, or a composite of these properties. If it's more than one property at once (for example, height and shape, or height and surface), I'll strongly oppose this, because for historical reasons OSM is already full of tags that reflect several properties at the same time.
--BushmanK (talk) 23:33, 6 July 2014 (UTC)
- I guess this is not a historical relic at all - people make that distinction today. Relative height is the most important property here, but it can be more complex, of course - as with any definition. -- Kocio (talk) 17:30, 16 July 2014 (UTC)
- Please read my statement above carefully. I'm talking about the lack of clarity in the proposed terms. And this problem seems to have the same impact as tags introduced at the beginning of OSM history, because these tags reflect equally unclear entities. Nowadays we have to do our best to avoid introducing any unclear tags. --BushmanK (talk) 23:24, 2 August 2014 (UTC)
not useful
too vague
The proposal starts with the hint that "the distinction between those land forms is more or less subjective", which already contradicts the subsequent hint to arbitrarily set the limit at 300 m. This is like person=young/old with an arbitrary limit of 40 years. If the age is of any interest, then a number will definitely be more helpful than such an arbitrary classification. There's already ele=* for elevation above sea level, and you may use height=* or something like prominence=* for relative height.
- 1. The limit is only a hint, because while I like to explicitly trust people to make intelligent choices (as they do all the time), I also like them to have something more concrete (our definition, to be overridden if needed). The definition could be written without mentioning that trust, just like many others; I just want the trust to be visible.
- 2. Classification, however, is not entirely subjective - there are just different criteria we can use (see [1]). That being said, look at the definition of highway=primary - doesn't "A major highway linking large towns, normally with 2 lanes" sound too vague too (what is a "large"/"small" town, and what is a sure way of telling primary from secondary)? Do you propose to stick with only the less problematic highway=yes?
- Most highway=* tags were invented at a time when OSM was new and immature. They were adopted from paper maps of the UK. Other countries were not considered (look at the original definition for trunk roads), and other tags such as lanes=*, surface=* and access=* did not exist. As unclear definitions are horrific for both mappers and application developers, people began refining the definitions for each country. For example, highway=primary has been defined to mean B roads in Austria, and the tag highway=trunk is used on motorroads in many countries. These highway tags now also act as shortcuts that imply default access and maxspeed values. There's no such benefit for peak=* values. According to the proposal, it's merely a classification by one physical quantity, the relative height. Nothing implied, no use as a shortcut. --Fkv (talk) 12:18, 31 July 2014 (UTC)
- 3. Height tags are welcome, of course, but how do you suggest we use them? -- Kocio (talk) 17:00, 16 July 2014 (UTC)
counter-intuitive
Given the 100 m/300 m criteria, the Kleinglockner, which is known as the second-highest mountain in Austria, would only be a hillock, because the col to the Großglockner is only 17 m lower.
- It is debatable whether Kleinglockner is a mountain. Some claim it's a subpeak. And the author of this proposal wrote in his rationale: "Especially the micromapping phenomenon makes it important to separate some smaller, but still visible and useful land forms, from the typical alpinist/hiking aims." So if Kleinglockner is a well-known peak for hiking, it should be mapped as natural=peak, without a peak=* subtag. Rafmar (talk) 22:08, 11 July 2014 (UTC)
derivable
More and more exact DEMs are becoming publicly available. Starting to tag topographic prominence for millions of peaks worldwide would be wasted effort, as these data can be computed sooner or later. Derivable information like this does not even belong in the OSM database. It's the same as with contour lines. Note that ele=* is an exception, as the elevation of peaks is often surveyed even more exactly (e.g. by theodolite) than a laser scan could currently manage.
- 1. If so, why do we tag peaks at all, when they can already be "computed" - especially mountain/high?
- 2. Not all peaks are made equal; we tag only those which are of some importance to people, not every local maximum. Some of them have names; some have other special meanings.
- 3. Extracting such information would be a computationally intensive task, from a purely technical point of view (see the sketch after this list).
- 4. Existing terrain data (usable for OSM) are poor and useful only for high mountains. However, if we can use them for maps I would be happy, but only as a shading/layer background, not as a substitute for peak tagging. -- Kocio (talk) 17:11, 16 July 2014 (UTC)
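For a sense of what such a computation involves, here is a minimal sketch (an illustration, not part of the original discussion, assuming a NumPy/SciPy environment) of the first step: finding candidate peaks as local maxima in a DEM grid. Computing true topographic prominence additionally requires tracing the cols between peaks, which is where the real cost lies.

import numpy as np
from scipy.ndimage import maximum_filter

def local_maxima(dem, size=3):
    """Boolean mask of cells that are the highest in their size x size
    neighbourhood (candidate peaks, not yet prominence)."""
    return dem == maximum_filter(dem, size=size)

# Toy 5x5 elevation grid in metres; real DEMs have millions of cells,
# which is where the computational cost mentioned above comes from.
dem = np.array([
    [100, 101, 102, 101, 100],
    [101, 105, 103, 102, 101],
    [102, 103, 104, 110, 102],
    [101, 102, 103, 102, 101],
    [100, 101, 102, 101, 100],
])
print(np.argwhere(local_maxima(dem)))  # grid indices of candidate peaks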
natural/man_made headaches
The distinction between landuse=forest and natural=wood caused so many problems; why should we repeat them for peaks? If something looks like a peak, tag it as natural=peak; otherwise man_made=heap or barrier=debris might be more suitable. One famous example of an artificial peak is Monte Müllo, which - as the name indicates - is generally referred to as a mountain since it developed a crust of soil and vegetation.
- I thought about this problem, and the real solution would be to create a special "terrain=*" namespace, because while many such objects are natural, some notable examples are not, and that would cut the knot. Yet I don't think such a big change is a realistic scenario at the moment, so I treat man_made=peak as a half step in the right direction. -- Kocio (talk) 17:25, 16 July 2014 (UTC)
existing tagging for archaeological sites
Mounds are usually mapped as historic=archaeological_site + site_type=tumulus.
- is man-made, not a tumulus, and it is mapped as natural=peak. However, I'd rather use man_made=mound as a second tag than man_made=peak Rafmar (talk) 22:08, 11 July 2014 (UTC)
--Fkv (talk) 09:15, 9 July 2014 (UTC)
|
https://wiki.openstreetmap.org/wiki/Talk:Proposed_features/peak
|
CC-MAIN-2018-30
|
refinedweb
| 1,228
| 59.94
|
Analysing the Stroop effect.
What is the independent variable? What is the dependent variable?
The independent variable is the variable that is manipulated in an experiment. In this problem it is
whether the words are congruent or incongruent.
The dependent variable is the variable being measured. In this problem it is the time to read the congruent
and incongruent word lists.
What is an appropriate set of hypotheses for this task? What kind of statistical test do you expect to perform?
The appropriate hypotheses concern whether there is a difference between the reading times of congruent and incongruent words. If there is a difference, we can infer that the Stroop effect exists.

The null hypothesis is that there is no difference between the reading time of congruent words and incongruent words. Mathematically, Ho: u_congruent - u_incongruent = 0.

The alternative hypothesis is that there is a difference between the reading time of congruent words and incongruent words, and the Stroop effect does exist. Mathematically, Ha: u_congruent - u_incongruent != 0.

Here Ho is the null hypothesis, Ha is the alternative hypothesis, u_congruent is the population mean reading time for congruent words, u_incongruent is the population mean reading time for incongruent words, '=' means equal to, and '!=' means not equal to.

If we gather enough evidence to reject the null hypothesis, we will support our alternative hypothesis and infer that there is a difference between the reading times of congruent and incongruent words, i.e. that the Stroop effect exists.

I'll perform a two-sided t-test for paired samples. I chose this because the population standard deviation is unknown, and because our two sets of observations are not independent but paired: the same person is recorded twice (once for congruent and once for incongruent).
import pandas as pd
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
df=pd.read_csv("stroopdata.csv")
df.head()
df.describe()
array=np.array(df)
The visualization below describes the given data.
N = len(array)
con = array[:, 0]
incon = array[:, 1]
ind = np.arange(N)
width = 0.5
p1 = plt.bar(ind, con, width)
p2 = plt.bar(ind, incon, width, bottom=con)
plt.ylabel('Reading Time')
plt.title('Reading time for Congruent and Incongruent')
plt.legend((p1[0], p2[0]), ('Congruent', 'Incongruent'))
plt.show()
We can clearly see from the visualization that the time taken to read incongruent words is higher
than the time taken to read congruent words.
Now, we perform the statistical test to find the critical statistic value at our chosen confidence level, and check whether we can reject the null hypothesis.
diff = array[:, 0] - array[:, 1]  # Difference vector of the two reading times
diff
array([ -7.199, -1.95 , -11.65 , -7.057, -8.134, -8.64 , -9.88 , -8.407, -11.361, -11.802, -2.196, -3.346, -2.437, -3.401, -17.055, -10.028, -6.644, -9.79 , -6.081, -21.919, -10.95 , -3.727, -2.348, -5.153])
u_diff = diff.mean()  # Mean of the difference vector
u_diff
-7.964791666666666
std_diff = diff.std()  # Standard deviation of the differences (note: np.std defaults to ddof=0; ddof=1 gives the unbiased sample estimate)
std_diff
4.762398030222158
n = len(diff)
n
24
se_diff = std_diff / np.sqrt(n)  # Standard error
se_diff
0.9721204271733325
t = (u_diff - 0) / se_diff  # Calculated t-statistic
t
-8.193215000970776
dof = n - 1  # Degrees of freedom
dof
23
critical_t = stats.t.ppf(1 - 0.025, dof)  # Critical value for alpha = 0.05 (two-sided)
critical_t
2.0686576104190406
Standard error: 0.9721. Degrees of freedom: dof = n - 1 = 23. Calculated t-statistic: t = -8.193 (magnitude 8.193). For alpha = 0.05, the critical value is ±2.0686, and the p-value is nearly 0.000001.
Since the magnitude of our t-statistic exceeds the critical value and falls in the critical region, we reject our null hypothesis. The p-value is also far below 0.05, so there is strong evidence in support of the alternative hypothesis.
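As a cross-check, SciPy's paired t-test reproduces this result in one call (assuming the con and incon arrays defined earlier; note that ttest_rel uses the sample standard deviation, ddof=1, so its statistic differs slightly from the hand-rolled one above):

from scipy import stats

t_stat, p_value = stats.ttest_rel(con, incon)
print(t_stat, p_value)  # t is about -8.02 with ddof=1; p is far below 0.05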
|
http://www.datascribble.com/blog/uncategorized/hypothesis-testing-using-stroop-effect/
|
CC-MAIN-2020-45
|
refinedweb
| 633
| 52.05
|
I am new to JBoss. I have deployed an EAR, which contains one JAR file containing the bean files and one WAR file containing the web-service-related files, including web.xml, the WSDL and the mapping files.
Now when I try to access this web service, I get an error on the client side, but it shows no error on the server side except a single warning. The incoming data is also handled successfully.
The warning is similar to the posted warning.
I am including it again:
WARN [DeserializationContextImpl] Ignoring invalid namespace mapping: [prefix=,uri=]
It's really frustrating, because I am not sure what to do... I have used the WSDL2Java tool of Axis
for the client side. This tool worked fine when I deployed my application using IBM WebSphere.
For the server side I have used the JWSDP pack's wscompile tool to generate the necessary artifacts.
Can anyone tell me what the problem is?
|
https://developer.jboss.org/thread/100661
|
CC-MAIN-2018-17
|
refinedweb
| 147
| 58.38
|
Introduction
When Microsoft released Exchange 2007, they built the Exchange Management Console on top of Windows PowerShell 1.0; when you execute commands from the console, underneath it uses PowerShell cmdlets to carry out the requested actions. Exchange 2007 SP1 ships with around 400 native PowerShell cmdlets to efficiently configure, manage and automate your messaging environment; everything from mailbox and database management, through junk mail settings and public folders. Whilst this has been a fantastic move forward for organisations that have migrated from previous Exchange versions or other messaging platforms, or that run a completely fresh Exchange 2007 installation, not every Exchange administrator is lucky enough to be in that boat and may well still be managing an Exchange 2003 (or earlier) environment.
The aim of this article is to demonstrate how you can use standard PowerShell techniques to query information stored in Event Logs, WMI and Active Directory to make management of your pre-Exchange 2007 environment more efficient than with the standard GUI based tools provided.
What Do I Need To Get Started
Installation of Exchange 2007 has a dependency on Windows PowerShell 1.0 being installed on the server, so that you are able to execute commands from the Exchange Management Console. For the purposes of what we are going to look at, it is not recommended that you install PowerShell on your Exchange 2003 server; rather, install it on your management workstation. (It would be unlikely to cause an issue on your Exchange 2003 server, but it's not good practice to install unnecessary software there.)
On your management workstation you require the following:
- Windows XP (or above)
- .NET Framework 2.0 or above
- PowerShell – installed by default on Windows 7 SP1 and later, and on Windows Server 2008 R2 SP1 and later
- For the WMI queries we are going to run you will need to provide an account with local admin rights on the Exchange Server since that is a requirement for remote WMI access
Querying Event Logs Using WMI
Later on we’ll come onto using some of the Exchange specific WMI classes which are available to us for extracting information from an Exchange server, but for now I will show you how you can use WMI to query the Event Logs on any Windows based server.
Windows Management Instrumentation (WMI) is the infrastructure for management data and operations on Windows-based operating systems; think of it as a database of the OS. You may have previously used other tools for querying WMI, such as WMIC or VBScript, and consequently not had the easiest experience with WMI - I recently heard WMI described as 'voodoo'! It doesn't need to be like this, either with the previously mentioned tools or, best of all, with PowerShell, which makes WMI your friend.
Using the PowerShell cmdlet Get-WmiObject you can query WMI on local or remote computers and easily obtain valuable results. For instance with a very simple example you could use Get-WmiObject Win32_ComputerSystem to return information about the local computer.
PowerShell by default will return a standard set of properties for the WMI class you have queried, as above; however, there are typically many more properties hidden away underneath which can be exposed. You can pipe the results of your WMI query to the Format-List cmdlet to discover what properties are available to you.
Now that you know what's available, you can stipulate the particular properties you wish to display. PowerShell will then only output the results for the information you are specifically interested in.
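Concretely, that progression might look like this (the properties picked out in the last command are merely examples):

# Default output for the class
Get-WmiObject Win32_ComputerSystem

# Expose all of the available properties
Get-WmiObject Win32_ComputerSystem | Format-List *

# Stipulate only the properties you are interested in
Get-WmiObject Win32_ComputerSystem | Format-List Manufacturer, Model, TotalPhysicalMemory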
Moving on, we can now use some slightly more advanced techniques to query the event logs on remote machines. If you are an experienced Exchange administrator, it is likely that at some point you will have taken your mailbox databases offline so that they can be defragged to reclaim the white space inside them. In a large environment you may have hundreds of databases, and looking for defrag candidates could be a time-consuming process. Typically you would use an event log entry, similar to the one below from the Application log, which shows how much free space is in a database after the online defragmentation process has completed. You would not want to manually browse through these records on multiple Exchange servers over multiple mailbox stores.
Instead, it is far better to use PowerShell to query those entries in the event logs and return only the information you need - for instance, only those databases which have over 2GB of free space in them. The following command will carry out most of this work for you:
(Do not worry too much about $WmidtQueryDT, it’s supplied for you in the full script file, Get-FreeSpace.ps1, where we use some .NET code to get a date into a format that WMI likes)
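A reconstruction of that command from the step-by-step breakdown below (treat it as a sketch; ExchangeServerName is a placeholder):

# $WmidtQueryDT holds "24 hours ago" as a WMI-formatted datetime string,
# supplied in the full Get-FreeSpace.ps1 script
Get-WmiObject -ComputerName ExchangeServerName -Query ("SELECT * FROM Win32_NTLogEvent " +
    "WHERE LogFile = 'Application' AND EventCode = 1221 " +
    "AND TimeWritten >= '$WmidtQueryDT'")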
Step-by-step we use the:
- Get-WmiObject cmdlet to query the remote server ExchangeServerName by using the -computer parameter
- Use the query parameter to specify a SQL style query
- Select everything from the Win32_NTLogEvent WMI class
- Filter it on the Application Log
- Filter it on Event ID 1221
- Filter it on the last 24 hours – this assumes that you run online maintenance on your databases daily
This will return the information we need. In the script file we then run some extra code to read those log entries, filter them down to only those whose message text includes a value over 2GB, and export the results to a CSV file ready for use.
Exchange 2003 WMI classes
When you install Exchange 2003, some additional WMI classes specific to Exchange information are added which you are able to query. These are documented on MSDN, where you can find full details of what they have to offer and some VBScript examples of how you might use them. (Note: these classes have been removed from Exchange 2007 and above.) Fortunately for you, I'm going to show you how to use them in PowerShell instead. The best one to start with is the Exchange_Mailbox class.
So far I have not mentioned WMI namespaces. In the previous examples the WMI classes we have been using are all part of the default WMI namespace Root\CIMV2. Since it’s the default, PowerShell assumes that’s the namespace you wish to use if you don’t specify one. The Exchange 2003 classes do not live in this namespace; they belong to the root\MicrosoftExchangeV2 namespace, so when using the Get-WmiObject cmdlet with these classes we must use the namespace parameter. A simple command to query an Exchange server about its mailboxes is the following:
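Something along these lines (ExchangeServerName is a placeholder):

Get-WmiObject -ComputerName ExchangeServerName -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox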
This will return information about mailboxes across all Mailbox Stores on the specified Exchange server. By using a couple of additional standard PowerShell cmdlets we can produce more readable output. Firstly we can use the Sort-Object cmdlet and specify the property MailboxDisplayName so that the mailboxes are returned in alphabetical order. Then we can use Format-Table to specify which properties are returned.
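Put together, the refined command might look roughly like this (a sketch; the column names are documented Exchange_Mailbox properties):

Get-WmiObject -ComputerName ExchangeServerName -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox |
    Sort-Object MailboxDisplayName |
    Format-Table MailboxDisplayName, StoreName, Size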
The below example shows the kind of results this would produce.
A common scenario for Exchange management is the demand from users for large mailboxes and consequently lots of storage. Your manager may well want to know who is using up all the extra storage he only just paid for last month that you told him you needed. Using the GUI Exchange tool it is not easy to aggregate this information across your server farm if you have multiple databases over many Exchange servers. However, we can take the previous example, make some modifications to it and hey presto you will have the report for your manager.
This time instead of sorting by MailboxDisplayName we sort by the Size of the mailbox, then we use the Select-Object cmdlet and use the -First parameter to return only a subset of the results – in this case 10. This will give us the top 10 largest mailboxes (you can obviously adjust the number of results returned to suit your needs). Since we return the Exchange Servername, Storage Group and Mailbox Store it also makes it simple to track these mailboxes down.
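A sketch of that modified command (the server name and the final column list are illustrative):

Get-WmiObject -ComputerName ExchangeServerName -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox |
    Sort-Object Size -Descending |
    Select-Object -First 10 |
    Format-Table ServerName, StorageGroupName, StoreName, MailboxDisplayName, Size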
It’s not just mailbox data we can query though with Get-WmiObject, another useful Exchange WMI class is the Exchange_Logon class. Not surprisingly this will tell us about who is logged into Exchange and other useful information like what client they are using, for instance you might wish to report on which of your users are logged in using Outlook Web Access.
This time we use the -filter parameter of Get-WmiObject to return a restricted amount of data; in this case, where the ClientVersion reported by the Exchange_Logon class matches 'HTTP'. We also exclude entries where the logged-on user is NT Authority\System, which is something we are obviously not particularly interested in. We may well get multiple results per user returned via the Exchange_Logon class for the same client connection, so when sorting the data with Sort-Object we use the -unique parameter so that we only get one relevant result in the output.
(In the demo environment used for this article I was not able to simulate multiple logons, but you can imagine the kind of output you would receive in a larger environment.)
As an Exchange administrator, another common best practice is to keep mailboxes spread as evenly as possible over Mailbox Stores and Storage Groups. To do that, though, you need easy access to the number of mailboxes across these areas; again, the out-of-the-box Exchange GUI tools do not help you with this task, particularly across large environments. PowerShell can again come to your rescue and very simply provide this information.
We use the Exchange_Mailbox class again, but this time instead of returning names, locations and sizes of mailboxes we count the results and return that data. We store in a text file the names of the Storage Groups we are interested in (as below) and use the Get-Content cmdlet to store these names into a variable.
We then use a foreach statement to loop through each of these Storage Groups and count the number of mailboxes in each one. You will see below that the WMI query is filtered on the StorageGroupName property and the entire command is encapsulated in brackets with .count added at the end to give us the total in that query. Finally Write-Host is used to display some text and the results to the screen.
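A sketch of that loop (the file name and server name are placeholders):

# StorageGroups.txt holds one Storage Group name per line
$storagegroups = Get-Content StorageGroups.txt

foreach ($sg in $storagegroups) {
    $total = @(Get-WmiObject -ComputerName ExchangeServerName `
        -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox `
        -Filter "StorageGroupName = '$sg'").Count
    Write-Host "$sg contains $total mailboxes"
}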
It doesn’t take a genius to extend this to be more granular and look at Mailbox Stores instead of Storage Groups.
Provide your list of Mailbox Stores to PowerShell with Get-Content, then use a foreach statement to loop through each of the stores and return the number of users in each one, as sketched below.
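A hedged sketch of the per-store variant (again, file and server names are placeholders):

$stores = Get-Content MailboxStores.txt

foreach ($store in $stores) {
    $total = @(Get-WmiObject -ComputerName ExchangeServerName `
        -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox `
        -Filter "StoreName = '$store'").Count
    Write-Host "$store contains $total mailboxes"
}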
Querying Active Directory to Determine Exchange Information in a Network
Another way we can use PowerShell to garner information about our Exchange environment is to query Active Directory, which stores a lot of Exchange configuration information. One way to do this is to use a couple of classes within the .NET DirectoryServices namespace, which is a .NET wrapper for ADSI - in particular the DirectoryEntry and DirectorySearcher classes.
I have heard stories of administrators being put off getting into PowerShell because they’ve been told that you need to be a .NET programmer to use it – this is so not true. You don’t need to know any .NET to effectively use PowerShell, however PowerShell does have access to .NET so it can be to your advantage and potentially open a few doors if you just explore some of the basics.
In this example we are going to query Active Directory to find out the names of the Exchange servers in our environment. First of all we create a DirectoryServices.DirectoryEntry object for the current Active Directory domain.
Then we use ADSI to get a reference to the configuration naming context within AD. All Exchange information in AD (except for per-recipient information) is in the configuration naming context.
Create a directory search object
Filter the search on the Exchange Server objectclass
Use the FindAll method to execute the search
Finally, return the names of all the results.
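Assembled from the steps just described, the sequence might look like this (a sketch; msExchExchangeServer is the objectclass used for Exchange server objects):

# Create a DirectoryServices.DirectoryEntry for the directory's RootDSE
$rootdse = [ADSI]"LDAP://RootDSE"

# Use ADSI to get a reference to the configuration naming context
$config = [ADSI]("LDAP://" + $rootdse.configurationNamingContext)

# Create a directory search object rooted at the configuration naming context
$searcher = New-Object DirectoryServices.DirectorySearcher($config)

# Filter the search on the Exchange Server objectclass
$searcher.filter = "(objectclass=msExchExchangeServer)"

# Use the FindAll method to execute the search and return the names of all the results
$searcher.FindAll() | ForEach-Object { $_.Properties.name }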
In my demo environment there is only one Exchange server so the results are not particularly exciting, however you can imagine how useful this could be in a large deployment of Exchange servers.
To save typing all of those lines out each time you want to run the code you could turn it into a filter like this:
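A sketch of that filter, wrapping the same steps (GetExchangeServers is the name used later in the article):

filter GetExchangeServers {
    $rootdse = [ADSI]"LDAP://RootDSE"
    $config = [ADSI]("LDAP://" + $rootdse.configurationNamingContext)
    $searcher = New-Object DirectoryServices.DirectorySearcher($config)
    $searcher.filter = "(objectclass=msExchExchangeServer)"
    $searcher.FindAll() | ForEach-Object { $_.Properties.name }
}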
and then you can either include it in a script or, if it's something you might run regularly, include it in your PowerShell profile.
Note: a filter is similar to a function in PowerShell, however it enables you to take advantage of the pipeline.
What we have now though is something potentially very powerful. By putting the GetExchangeServers filter together with one of our earlier WMI queries we can now run that query against all of our Exchange Servers without even having to specify their names!
To find the top 10 largest mailboxes on each Exchange server run the GetExchangeServers filter and pipe it to the Get-WmiObject command.
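Something like the following (a sketch; each emitted server name is fed to -ComputerName in turn):

GetExchangeServers | ForEach-Object {
    Get-WmiObject -ComputerName $_ -Namespace root\MicrosoftExchangeV2 -Class Exchange_Mailbox |
        Sort-Object Size -Descending |
        Select-Object -First 10
}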
Other objectclasses you could also query in Active Directory include:
- Storage Groups: msExchStorageGroup
- Mailbox Databases: msExchPrivateMDB
- Public Folder Databases: msExchPublicMDB
Simply replace the value for the objectclass in the $searcher.filter line.
Exchange 2003 Powerpack for PowerGUI
You might be reading this article and thinking “Well this scripting stuff is all very well, but it’s really not for me, I’m a GUI kind of administrator and what I’d really like is someone just to package up all these scripts for me and let me run them from the click of a button”.
If that’s you then you are in luck because Quest has created a tool to make that dream a reality by creating a fabulous free tool called PowerGUI (). Not only do you get a brilliant scripting editor (yes for free) the package also includes a tool which lets you create custom management consoles. So say you don’t like the console which ships with Exchange or it has something missing and you don’t want to wait for the next product release, you can create your own console by packaging up PowerShell scripts into PowerPacks – or even better download one which somebody else has already created for you.
Simply download and install PowerGUI, then head over to the PowerPacks part of the site and you will find many pre-made PowerPacks for a variety of different products, including Exchange, SQL, VMware, Citrix, etc., ready to download.
The PowerPacks are completely open so you can modify / add / delete any part of them to make it more suitable for your own environment. However, as of version 1.7 it is also now possible to lock down and brand a PowerPack, for instance if you wanted to provide one for helpdesk use.
I made a PowerPack for Exchange 2003 which, when plugged into PowerGUI, will give you easy access to most of the scripts already mentioned in this article, plus many more which you can see below.
Simply click one of the script nodes and the script will go off and do its work in the background and return the results into the grid pane. For instance if you chose ‘Top 10 Largest Mailboxes Per Server’ the exact same PowerShell script as previously mentioned will be used and you will see the results like below.
Included is a section for managing the user and group parts of Exchange management. These require the AD PowerShell cmdlets, also provided free by Quest. A good example of their use is the node 'Get-EmptyDistributionGroups'. It's pretty common for a distribution group to be set up for short-term use, say for a project; at the end of the project people are removed from the group, but nobody ever thinks to tell the administrator that the group is no longer needed. Simply run this PowerShell script and it will give you a list of all distribution groups which are empty - it even has a Delete button so that you can remove them from the same console.
Conclusion
So you have seen that just because you may not be using Exchange 2007 you don’t have to be a second-class citizen in terms of managing Exchange from scripts or the command line.
Using PowerShell and underlying technologies like WMI and Active Directory, you can quickly and simply gather information from your Exchange environment where using the standard GUI tools for the same tasks requires far more manual effort. Take it one step further by scheduling these scripts to run, and you will soon be automating large sections of your admin tasks and be free to get on with those interesting projects you never have time to work on.
|
https://www.red-gate.com/simple-talk/sysadmin/powershell/so-you-thought-powershell-was-only-for-exchange-2007/
|
CC-MAIN-2020-10
|
refinedweb
| 2,836
| 53.85
|
Editing the taxonomy database
Stating the problem
In the previous lecture I made a taxonomy browser with support for searches and viewing individual records. I'm going to modify the server to let anyone add aliases for a given term.
The user in this case is a taxonomy specialist somewhere in the world. To make the user more concrete I'll name him Jim. Jim received his PhD in biology 2 years ago and is doing a post-doc on coral reef fishes of Micronesia. The local fishermen use names in their own language for different species, and don't use the Latin names. Jim published a paper listing the different names and now he wants to add it to the database so others can find it more easily. Jim is not a computer programmer and would rather spend his time in the water than learning how to use a new computer program.
Given the user case here's the interaction scenario. Jim is working with Pomacentridae (locally known as damselfishes) and wants to note that Abudefduf luridus is also known as 'canary damsel' or 'canary damselfish.' He goes to the taxonomy server and searches for "Abudefduf luridus". From the search results he selects the species record and sees that the aliases don't yet exist so he adds both of them. He then wants to record that Stegastes planifrons is also known as "threespot damselfish."
The above paragraph is a functional description of what Jim wants to do, but it doesn't describe all of the details of how he does it. You as the developer need to figure out a few things:
- Is there a specialized interface for doing curation or does it extend the existing browser? That is, when and how do you go from browsing/searching to curational editing?
- If the search term exactly matches a record's name should the search results page go directly to the record? (Hint: "yeast" should also match "yeast killer virus".)
- If there is only one match should the results page go directly to that record instead of displaying a list of length 1?
- When adding an alias for a given record does the user enter one alias at a time or multiple aliases?
I can think of several solutions. The choice of which is right depends on the users. Because I expect that people like Jim will more often browse the database and will want little or no training, I'll modify the browser interface slightly to include the ability to add new aliases. Suppose instead I had a local curational team searching the literature and adding aliases; in that case a specialized entry application makes more sense.
Understanding what people want and how to make the human/computer interface work well is part of "CHI" (for "computer/human interface") and includes subfields like usability design and user-centered development. For some initial pointers, read some of the stories and books by Bruce Tognazzini, Alan Cooper's "The Inmates are Running the Asylum", and my lecture from last year.
Updating the taxid detail page
Here's the look of the page showing details about a taxonomy record.
Abudefduf luridus
Taxon identifier: 142648
Genetic Code: Standard
Parent taxon:
Abudefduf
Children: None
I'm going to modify it to
Abudefduf luridus
Taxon identifier: 142648
Genetic Code: Standard
Aliases: canary damsel, canary damselfish [edit aliases]
Parent taxon:
Abudefduf
Children: None
and clicking on the "edit aliases" link will take me to a page which looks like this:
Changing the data model
To make this work I'll need to store aliases in the database. The current database has two tables, "taxonomy" and "genetic_code". They look like this:
I need a new table which relates an alias record to a taxonomy record. The alias table must have two fields: a tax_id field, which is the primary key of the taxonomy record (meaning "this alias is an alias for that taxonomy record"), and the text of the alias. SQLObject adds another requirement: that every record have a unique primary key. (This is not a requirement of a relational database, and if we were using SQL directly we wouldn't need a separate primary identifier.)
The easiest way to make the table is through the database administration program. For sqlite that's 'sqlite3'. The other database systems have similar interfaces. I'll load the database and show the current schema:
[~/nbn/taxonomy] % sqlite3 taxdata.sqlite
SQLite version 3.2.7
Enter ".help" for instructions
sqlite> .schema
CREATE TABLE genetic_code (
    id INTEGER PRIMARY KEY,
    name TEXT
);
CREATE TABLE taxonomy (
    tax_id INTEGER PRIMARY KEY,
    scientific_name TEXT,
    rank TEXT,
    parent_id INT,
    genetic_code_id INT,
    mitochondrial_genetic_code_id INT
);
CREATE INDEX genetic_code_id_idx on taxonomy (genetic_code_id);
CREATE INDEX name_idx on genetic_code (name);
CREATE INDEX parent_id_idx on taxonomy (parent_id);
CREATE INDEX scientific_name_idx on taxonomy (scientific_name);

I then entered the following to create the new table:
sqlite> CREATE TABLE alias (
   ...>     id INTEGER PRIMARY KEY,
   ...>     tax_id INTEGER,
   ...>     alias STRING);
sqlite>

I could have done this by modifying the model.py file and using it to update the table, but I find that working at the SQL level is easier to understand because there's one fewer layer of abstraction going on.
Because I want to work with the database through SQLObject I still need to modify my model.py file to add the 'Alias' table. The new definition is
class Alias(SQLObject):
    taxon = ForeignKey("Taxonomy", dbName="tax_id")
    alias = StringCol()

The default identifier name is 'id', so I don't need the sqlmeta statement to override that. I want to be able to say "alias.taxon" to get the taxonomy record for a given alias. SQLObject by default assumes the relation will be stored in "taxonomy_id", but it's actually stored in "tax_id", so I need to override that using the 'dbName' option.
I also want to go from an annotation to the list of aliases, so I'll change the Taxonomy SQLObject class definition to include a new field called "aliases". The new bit is in bold:
class Taxonomy(SQLObject):
    class sqlmeta:
        idName = "tax_id"
    scientific_name = StringCol()
    rank = StringCol()
    parent = ForeignKey("Taxonomy")
    children = MultipleJoin("Taxonomy", joinColumn="parent_id")
    genetic_code = ForeignKey("GeneticCode")
    mitochondrial_genetic_code = ForeignKey("GeneticCode")
    aliases = MultipleJoin("Alias", joinColumn="tax_id", orderBy="alias")

The MultipleJoin means that ".aliases" will contain a list of all Alias entries where the 'tax_id' of the alias is the same as the Taxonomy's primary identifier. Normally the results are in arbitrary order. I want the results sorted alphabetically on the "alias" field so the list of aliases is easier to read. I do this with the "orderBy" parameter, which takes either the SQL column name of the field to sort on, or the special/magic Table.q.fieldName from SQLObject.
Changing the templates
I'm going to update the record details template to include the aliases as well as the hyperlink to the edit page. I haven't defined the edit page yet. Because I want to see some results I'll edit the database through the TurboGears shell and add "canary damsel" as an alias to taxonomy record 142648.
[~/nbn/taxonomy/TaxonomyServer] % tg-admin shell
Python 2.4.2 (#6, Apr 15 2006, 11:26:48)
[GCC 3.3 20030304 (Apple Computer, Inc. build 1495)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from model import Taxonomy, Alias
>>> taxon = Taxonomy.get(142648)
>>> Alias(taxon=taxon, alias="canary damsel")
<Alias 1 taxonID=142648>
>>> import model
>>> model.hub.commit()
>>>

I then quit the shell and restarted it, to prove to myself that the database actually saved my new record. After getting the record again I'll request all of its aliases:
>>> from model import Taxonomy
>>> Taxonomy.get(142648).aliases
[<Alias 1 taxonID=142648>]
>>>

Well what do you know - it worked!
Now that I have data I'll edit "templates/details.kid". Where the old template had:
<P>
Taxon identifier: ${taxon.id}<br />
Genetic Code: ${taxon.genetic_code.name}<br />
</P>

I'll add a new bit at the end:
<P>
Taxon identifier: ${taxon.id}<br />
Genetic Code: ${taxon.genetic_code.name}<br />
Aliases: <span py:for="alias in taxon.aliases">${alias.alias}</span>
  [<a href="/edit_aliases?tax_id=${taxon.id}">edit aliases</a>]<br />
</P>

and try out taxon/142648.
That seems to work so I'll create the new edit interface. This will be a new URL and function called "edit_aliases" which takes the tax_id as its input parameter. Here's the new controller for it, which will be part of the Root class
@expose(template="taxonomyserver.templates.edit")
def edit_aliases(self, tax_id):
    return dict(taxon=Taxonomy.get(tax_id))

It's very simple, as all of the display logic is in the template. The template will give an edit field for each alias in the database. I'll add 5 extra fields just in case someone wants to add several aliases at the same time. (This is part of the use case - Jim wants to add two aliases for the specific damselfish species.)
Following is the new template
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:py="http://purl.org/kid/ns#">
<head>
  <meta content="text/html; charset=UTF-8" http-equiv="content-type" />
  <title>Edit aliases for ${taxon.scientific_name}</title>
</head>
<body>
[<a href="/">Start a new search</a> |
 <a href="/taxon/${taxon.id}">View taxon detail page</a>]
<h2>Edit aliases for ${taxon.scientific_name}</h2>
<P>
Taxon identifier: ${taxon.id}<br />
<form method="POST" action="update_aliases">
  <input type="hidden" name="tax_id" value="${taxon.id}" />
  Edit, erase entries or add to the following list of aliases<br />
  <ol>
    <li py:for="alias in taxon.aliases"><input type="text" name="alias" value="${alias.alias}" /></li>
    <li><input type="text" name="alias" /></li>
    <li><input type="text" name="alias" /></li>
    <li><input type="text" name="alias" /></li>
    <li><input type="text" name="alias" /></li>
    <li><input type="text" name="alias" /></li>
  </ol>
  <button type="submit">Update database</button>
</form>
</P>
</body>
</html>
There are a few things to point out in it. The form will go to a new URI named "update_aliases". I haven't created it yet.
The form uses a POST action instead of a GET action. This is important! The distinction is built into the way the web works. A GET request is supposed to be idempotent and side-effect free; that's a fancy way of saying it shouldn't be used to edit or delete things. Why? It's used for fetching existing records and documents, and only that. The web specification says that GET requests may be cached along the way. For example, if you use a caching proxy, it may store the results of an old GET request. When you ask for it again, the proxy may return the old results and not go all the way to the remote server to get the original document. This means the server never sees the change.
If you are interested in knowing more about web proxies, take a look at Squid. (Google for "squid proxy".) I've wondered how useful they are for places here in South Africa where out-of-country bandwidth is low and even in-country bandwidth is expensive.
Notice that all of the form input elements use the same "name" value of "alias"? The to-be-written update_aliases controller will get those as the Python function parameter "aliases". If one alias is given, the function will get a string; if more than one alias is given, it will get a list of all the aliases.
One last thing to notice is the special "hidden" field. The update_aliases controller needs the list of aliases and the taxonomy record to edit. The web is "stateless", meaning that there are no built-in mechanisms for the web browser or server to know the status of the other. Any important state information must be passed around with each request and response document. There are three main ways to do this; one is through hidden fields in form elements (the others are commonly URL query parameters and cookies). Hidden fields will not be visible to the user but will be sent back to the server when the form is submitted. Here the hidden field stores the taxon identifier.
Updating the database - update_aliases
The update_aliases controller will work in a very dumb and obvious way. To update the database for a given taxon, it will delete all of the existing aliases and then create new entries for each of the inputs. When done, it redirects the browser to the details page for the taxon. This will list the aliases and confirm to the user that the update took place.
Here are the two changes I made to the "controllers.py" file. The first was the extra import for the "Alias" class:
from model import GeneticCode, Taxonomy, Alias

and the second is the actual controller code (another method in the Root class):
@expose()
def update_aliases(self, tax_id, aliases=[]):
    taxon = Taxonomy.get(tax_id)

    # 1. Delete all of the existing aliases
    # (the signature and this delete step are reconstructed from the
    # description above; only the create loop and redirect survived)
    for old_alias in taxon.aliases:
        old_alias.destroySelf()

    # 2. Create new entries for each of the inputs
    for alias in aliases:
        Alias(taxon=taxon, alias=alias)

    # Go to the details page
    redirect("/taxon/%s" % (taxon.id))
I tried it out and it seemed to work just fine. But after I added "canary damselfish" and went back to the detail page I saw a problem. The aliases line looked like this:
Aliases: canary damselcanary damselfish [edit aliases]

There is no space between the aliases! I'll change that so it's a comma-separated list of aliases. The old code was
Aliases: <span py:for="alias in taxon.aliases">${alias.alias}</span>
  [<a href="/edit_aliases?tax_id=${taxon.id}">edit aliases</a>]<br />

There are several ways to fix it. I decided to push everything into Python and use the 'join' method of strings:
Aliases: ${", ".join([alias.alias for alias in taxon.aliases])}
  [<a href="/edit_aliases?tax_id=${taxon.id}">edit aliases</a>]<br />

I saved the template and reloaded the details page, only to find the next problem. The detailed aliases output was
Aliases: canary damsel, canary damselfish, , , , [edit aliases]

It had a few blank records! These were the empty form elements passed into the "update_aliases" function. What I'll do is remove extra spaces from the input and skip adding empty strings to the database. I'll use a neat Python idiom for this which also fixes things like multiple spaces between words (e.g., "homo  sapiens", with its doubled space, becomes "homo sapiens"). Here's the new step 2 for the controller:
# 2. Create new aliases
for alias in aliases:
    # Clean up whitespace problems
    new_alias = " ".join(alias.split())
    # only add non-blank aliases (note: store the cleaned-up new_alias)
    if new_alias:
        Alias(taxon=taxon, alias=new_alias)

The trick is to split the string, which breaks it into a list of non-whitespace words, then join the words back together with a " ". The result is a single-space-separated string of the words from the original.
Fixing the database was easy. I went to the edit page and pressed the "Update database" button. This removed all of the old aliases, including the multiple blank ones, and added only the non-blank ones back into the database. Which reminds me, I should add code to remove duplicates, just in case the same name was added more than once by accident. With that in place here's the final update_aliases code
already_added = {}
for alias in aliases:
    # Clean up whitespace problems
    new_alias = " ".join(alias.split())
    # only add non-blank aliases, and skip duplicates
    if new_alias and new_alias not in already_added:
        Alias(taxon=taxon, alias=new_alias)
        already_added[new_alias] = True

# Go to the details page
redirect("/taxon/%s" % (taxon.id))
Version control
Remember version control? This project is under version control, and now that things have stabilized I'll check the code in. I need to add the new "edit.kid" template and check in the changes I made to model.py, controllers.py and details.kid.
[taxonomy/TaxonomyServer/taxonomyserver] % svn add templates/edit.kid
A         templates/edit.kid
[taxonomy/TaxonomyServer/taxonomyserver] % svn commit
Sending        taxonomyserver/controllers.py
Sending        taxonomyserver/model.py
Sending        taxonomyserver/templates/details.kid
Adding         taxonomyserver/templates/edit.kid
Transmitting file data ....
Committed revision 25.
[taxonomy/TaxonomyServer/taxonomyserver] %
Testing
I'll test the alias editing component by running through my user scenario. Ideally Jim would do it, but he couldn't make it. To make it realistic I'll remove all aliases from the database before doing the testing. Here's how to do that using a SQL command (through the sqlite3 interface):
DELETE FROM alias;

If you only wanted to delete some of the aliases, then add a selection query after a "WHERE", like this:
DELETE FROM alias WHERE id > 10;
Here's how to clear the table using SQLObject, in this case through the TurboGears shell
>>> model.Alias.clearTable()
>>> model.hub.commit()
>>>
I'll walk through the scenario and ask Jim: "Add the common names 'canary damsel' and 'canary damselfish' to the species record Abudefduf luridus." This is a functional description in biologist terms. The goal is to see how easily someone can use the interface without any hints as to how it works.
Once that's done I'll follow up with "now add 'threespot damselfish' to Stegastes planifrons." There's a somewhat hidden requirement as well, which is "add 'damselfish' as an alias for Pomacentridae", so I'll ask for that one too.
Once this works, try a few variations. I've thought of several ways to improve the pages:
- Include the search box on the detail page so the user doesn't need to go to the start page to start a new search.
- Possibly have the simple search box only do a "substring search" for any genetic code, with a link to a "complex search" page for the rare case someone wants something else.
- Integrate the details page and the edit page into a single page. Or on the details page include a way to add a single alias or two (the common case) without having to go to the full edit page
How do you determine which designs are better than others? One good measure is the time needed to complete a task, so use a stopwatch. There are also ways to estimate the input time for a page with Fitts' Law; add a couple of seconds for every web page load, both for load time and for the user to figure out what the page does.
|
http://www.dalkescientific.com/writings/NBN/adding_new_data.html
|
CC-MAIN-2018-34
|
refinedweb
| 2,998
| 64.71
|
I introduce you to Graphical User Interfaces (GUIs) in this part of my Java Video Tutorial. I focus specifically on Java Swing and its components.
We figure out how to display frames, panels, labels, buttons, text areas and more. I also go over the Dimension object and the Java Toolkit, which allows you to ask questions of the operating system.
All of the code follows the video and it’s set up to help you retain this information.
If you like videos like this, share it.
Code From the Video
// Swing allows you to create Graphical User Interfaces
// You need the Swing library to create GUI interfaces

import java.awt.Dimension;
import java.awt.Toolkit;
import javax.swing.*;

// You must extend the JFrame class to make a frame
public class LessonTwenty extends JFrame{

    public static void main(String[] args){
        new LessonTwenty();
    }

    public LessonTwenty(){
        // Define the size of the frame
        this.setSize(400, 400);

        // Opens the frame in the middle of the screen
        // You could also define position based on a component
        // this.setLocationRelativeTo(null);

        // This closes the program when the user exits
        // by clicking the close button
        this.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

        // Define if the user can resize the frame (true by default)
        this.setResizable(false);

        // How to create a panel to hold the components
        JPanel thePanel = new JPanel();

        // How to create a label with its text ----------
        JLabel label1 = new JLabel("Tell me something");

        // How to change the text for the label
        label1.setText("New Text");

        // How to create a tool tip for the label
        label1.setToolTipText("Doesn't do anything");

        // How to add the label to the panel
        thePanel.add(label1);

        // How to create a button -----------------------
        JButton button1 = new JButton("Send");

        // How to hide the button border (Default True)
        button1.setBorderPainted(false);

        // How to hide the button background (Default True)
        button1.setContentAreaFilled(false);

        // How to change the text for the button
        button1.setText("New Button");

        // How to create a tool tip for the button
        button1.setToolTipText("Doesn't do anything either");

        thePanel.add(button1);

        // How to add a textfield ----------------------
        JTextField textField1 = new JTextField("Type Here", 15);

        // Change the size of the text field
        textField1.setColumns(10);

        // Change the initial value of the text field
        textField1.setText("New Text Here");

        // Change the tool tip for the text field
        textField1.setToolTipText("More of nothing");

        thePanel.add(textField1);

        // How to add a text area ----------------------
        JTextArea textArea1 = new JTextArea(15, 20);

        // Set the default text for the text area
        textArea1.setText("Just a whole bunch of text that is important\n");

        // If text doesn't fit on a line, jump to the next
        textArea1.setLineWrap(true);

        // Makes sure that words stay intact if a line wrap occurs
        textArea1.setWrapStyleWord(true);

        // Gets the number of newlines in the text
        int numOfLines = textArea1.getLineCount();

        // Appends text after the current text
        textArea1.append(" number of lines: " + numOfLines);

        // Wrap the text area in a scroll pane so scroll bars appear
        // (reconstructed: this block was lost from the listing, but the
        // comments below this post discuss its scroll bars)
        JScrollPane scrollPane1 = new JScrollPane(textArea1);
        thePanel.add(scrollPane1);

        // How to add the panel to the frame
        this.add(thePanel);

        // Makes the frame show on the screen
        this.setVisible(true);

        // Gives focus to the textfield
        textField1.requestFocus();

        // You can also request focus for the text area
        // textArea1.requestFocus();
    }
}
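The code imports Dimension and Toolkit; a hedged sketch (not shown in the listing above) of the technique they enable - asking the Toolkit for the screen size and centering the frame by hand - would be:

// Ask the operating system for the screen size through the Toolkit,
// then place the frame in the middle of the screen manually.
Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
this.setLocation((screenSize.width - this.getWidth()) / 2,
        (screenSize.height - this.getHeight()) / 2);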
Nice tutorial, as you always give... ;-)
But I have a request: I'd like to know how to create JAR files, as well as installers for any type of application. If possible, please provide a tutorial for that; I've never found any video on this.
Thank You
Hi, I’ll definitely do that. Because Java is such a powerful language I hope to create numerous tutorials on all the things it can be used to do. Keep sending me requests and I’ll keep going until I cover most everything. Thank you for taking the time to say Hi 🙂
You're welcome, and thanks for accepting my request. I love all your videos, keep going...
Thanks again...
One more request I have: if possible, make a tutorial on databases (in Java Swing) as well, using Derby/JavaDB, and if possible explain with some projects, with installation files as well, i.e. how to create an installation file/.exe, as in my previous request.
Thank you
I’ll definitely cover Java and databases. Derby is kind of fun. I’ll see what I can do
Thank you….
Thank you 🙂
You’re very welcome 🙂 Thanks for stopping by
Thanks for this GREAT Java video. This is the first tutorial of yours I've viewed, but I LOVE it! Thank you so much. I'm sure I will be viewing many more of your videos in the future. I especially love the transcript information with the code below. You helped me tremendously!
Thank you very much 🙂 I’m glad you were able to find me. I don’t have that many fans because I make long videos. Always feel free to ask questions and make requests
Hi Derek, great one!
Could you possibly tell me why this is not the same in NetBeans (it throws me a 267 error when I try to run it)?
I tried the code, and even just the first few lines, and no luck, but in Eclipse it works?
I can’t imagine why? What line is causing the error
Hi (:
Well, basically, after typing the first part, following your video where you run the script, I tried running mine and had no luck there!
——————————
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package igra;
import java.awt.Dimension;
import java.awt.Toolkit;
import javax.swing.*;
/**
*
* @author user2
*/
public class Igra extends JFrame{
private static final long serialVersionUID = 1L;
public static void main(String[] args) {
new Igra();
}
public Igra(){
this.setSize(500,500);
this.setVisible(true);
}
}
Hi (:
My machine is old, and the HDD partition that I was saving my projects on is somehow messed up; it only works when I save projects to the system partition... not sure why! But still, SORRY for bothering you with this.
I’m sorry to hear that. Always feel free to ask questions. I’ll do my best to help if possible
Thanks for these.. That’s all really 🙂 I start uni in September and you have held my hand through JQuery, PHP and now Java… Some of the best instruction on the net.
Thank you very much 🙂 I do the best I can
Hello. I am a bit new to programming, but I have experience with C#.NET in Visual Studio. However, I have never created a GUI by hand-coding, so this seems like much pain for me. I am hoping for some advice from you on what is better to start with, because it's hard to get rid of bad habits once you start them: should I use a tool or plugin to build GUIs, or should I create them programmatically? Thanks in advance :)
I teach this stuff by hand coding mainly because I like everyone to understand what is going on. You can definitely use a GUI though. Check out Window Builder. I think it is what you are looking for
Great Video’s. Any plans on making videos on SWT and RCP. Appreciate what you do.
Yes I plan on going back to Java at some point to cover the topics that I didn’t cover the first time. Yes a J2EE tutorial is in the works as well.
The code runs, but no scroll bars are shown. I see a dot to the right of the text area.
Never mind. YOUR code works; my nesting was wrong :)
Great I’m glad you fixed it 🙂
Nice tutorials. I understood a lot about Java from you. Thank you for everything. I have been studying C++ at school for 2 years, and I think this helps me to understand Java better. Thank you again, you're awesome. I have a question: how did you learn all these classes? It seems impossible to me. I guess by practicing and coding.
I’m very happy that I could help. I have been doing this for decades. I have never been big on memorization because I think that takes the fun out of coding, which I think is most important. Just keep at it and eventually you’ll remember everything naturally.
Thanks a lot and keep doing this.
You’re very welcome 🙂
Really, REALLY nice tutorial. I am a fan for sure and I have passed it on to my Java class.
Thank you very much 🙂
|
http://www.newthinktank.com/2012/02/java-video-tutorial-20/
|
CC-MAIN-2017-30
|
refinedweb
| 1,359
| 75.71
|
David Miller <davem@davemloft.net> writes:

> From: "Michael S. Tsirkin" <mst@dev.mellanox.co.il>
> Date: Mon, 19 Mar 2007 00:42:34 +0200
>
> > > Hmm. Then the code moving dst->dev to point to the loopback
> > > device will have to be fixed too. I'll post a patch a bit later.
> >
> > Does this look sane (untested)?
> >
> > Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
>
> You can't point it at NULL, we don't point it at loopback
> just for fun.
>
> There can be asynchronous paths elsewhere in the networking still
> referencing the neigh or dst and they will (correctly) feel free to
> dereference whatever device is hanging there. So transitioning
> to NULL is invalid.
>
> You guys will need to come up with a better solution for this silly
> situation with network namespaces. Loopback is always available to
> point dead routes and neighbour entries at, and this assumption is
> massively rooted in the networking.

Sure. In the network namespace case I think the careful ordering of the shutdown handles that case. Even with the per-network-namespace lo unregistered, it still existed until the network namespace actually exited. And it only happened on exit. So while there may be a tiny race there, it hasn't been an issue yet in practice.

I wasn't proposing that we fix it this way. I was simply saying that there was the possibility for the case to exist. The existence of a per-network-namespace loopback device is fairly fundamental to the network namespace concept. Heck, I think Herbert has been looking at it for vserver, which does almost total socket isolation.

Eric
|
https://lkml.org/lkml/2007/3/19/9
|
CC-MAIN-2015-40
|
refinedweb
| 292
| 67.76
|
Using Chart.js aren’t familiar with Chart.js, it’s worth looking into it. It’s a powerful and simple way to create clean graphs with the HTML5
<canvas> element. Don’t worry, you don’t need to know anything about the
<canvas> to use Chart.js. With Vue’s
data() object, it’s easy to store our data and manipulate it to change our graph when needed.
Install Chart.js
First thing that you want to do is create a Vue.js application using the
webpack-simple template and install Chart.js.
$ vue init webpack-simple <project-name>
$ cd <project-name>
$ npm install
$ npm install chart.js --save
Navigate to your App.vue file and remove all of the generated code. The chart’s <canvas> will be in the root #app element. Next, import Chart.js using ES6 into your Vue component.
import Chart from 'chart.js';
Creating the Chart
This chart is going to have two datasets: one with the number of moons per planet in our solar system, and one with the overall mass of each planet. With these two datasets, we can use different chart types to show correlations in the data.
Every Chart.js chart needs to have a <canvas> in the HTML markup. The id of the chart is used as a selector to bind the JavaScript to it.
<template>
  <div id="app">
    <canvas id="planet-chart"></canvas>
  </div>
</template>
Structure of the Chart
In its simplest form, each chart has the same basic structure:
const ctx = document.getElementById('planet-chart');
const myChart = new Chart(ctx, {
  type: '',
  data: [],
  options: {},
});
You can start by adding your data to this Chart object and keep repeating that process for each new chart you want to create. However, this process becomes a lot easier with a function you can pass arguments into.
Start by creating a new function in your component’s methods object and give it two parameters, chartId and chartData.
methods: {
  createChart(chartId, chartData) {
    const ctx = document.getElementById(chartId);
    const myChart = new Chart(ctx, {
      type: chartData.type,
      data: chartData.data,
      options: chartData.options,
    });
  }
}
Chart.js configurations tend to involve a lot of code. This simple “planets” chart, for example, has 50+ lines of code in it. Imagine having multiple charts with complex data.
Your single-file Vue component can quickly become large and confusing, so let’s use an ES6 import for our chart’s data in order to keep the component slim and focused.
Creating the Chart’s Data
Create a new .js file inside the root src directory. For this post, it’s called chart-data.js, but you can name it anything you like. Create a const and name it planetChartData. Keep in mind, you’ll want to give it a unique and descriptive name based on the data; you can have several data objects in this file for different charts.
export const planetChartData = {
  type: 'line',
  data: {
    labels: ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
    datasets: [
      { // one line graph
        label: 'Number of Moons',
        data: [0, 0, 1, 2, 67, 62, 27, 14],
        backgroundColor: [
          'rgba(54,73,93,.5)', // Blue
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)',
          'rgba(54,73,93,.5)'
        ],
        borderColor: [
          '#36495d',
          '#36495d',
          '#36495d',
          '#36495d',
          '#36495d',
          '#36495d',
          '#36495d',
          '#36495d',
        ],
        borderWidth: 3
      },
      { // another line graph
        label: 'Planet Mass (x1,000 km)',
        data: [4.8, 12.1, 12.7, 6.7, 139.8, 116.4, 50.7, 49.2],
        backgroundColor: [
          'rgba(71, 183,132,.5)', // Green
        ],
        borderColor: [
          '#47b784',
        ],
        borderWidth: 3
      }
    ]
  },
  options: {
    responsive: true,
    lineTension: 1,
    scales: {
      yAxes: [{
        ticks: {
          beginAtZero: true,
          padding: 25,
        }
      }]
    }
  }
}

export default planetChartData;
Note: You can reference Chart.js’ documentation for more information about line charts, as well as other types like bar, polarArea, radar, pie, and doughnut.
By exporting planetChartData, you allow that const to be imported into another JavaScript file. More importantly, you’re separating the data from the component. This makes it much easier to manage the data and to create a new chart with new data in the future.
Import your chart’s data into your App.vue component.
import planetChartData from './chart-data.js';
Next, store the single chart’s data inside Vue’s data() function.
data() {
  return {
    planetChartData: planetChartData,
  }
}
Note: You can also use ES6 shorthand. Since the data property and value have the same name, you can just write planetChartData instead of planetChartData: planetChartData.
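With the shorthand, the data() function above becomes:

data() {
  return {
    planetChartData,
  };
}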
Initializing the Chart
At this point, Chart.js should be installed and the chart’s data should be imported into the App.vue component. In the methods object, you also created a function that creates the chart object with data from the chart-data.js file.
You should already have a <canvas> element created in the component’s template. Now it’s time to initialize the chart and write to the <canvas>.
To do so, you need to run the createChart() function in the component’s mounted() lifecycle method. This function takes two arguments: the chartId (a string) and chartData (the object imported from chart-data.js).
mounted() {
  this.createChart('planet-chart', this.planetChartData);
}
The chart should now render when the component is mounted!
As you can see, we can focus on our data and let Chart.js do the hard work.
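Putting the pieces together, here is a minimal sketch of the full App.vue single-file component assembled from the snippets above (nothing new here beyond combining them):

<template>
  <div id="app">
    <canvas id="planet-chart"></canvas>
  </div>
</template>

<script>
import Chart from 'chart.js';
import planetChartData from './chart-data.js';

export default {
  name: 'App',
  data() {
    return {
      planetChartData,
    };
  },
  mounted() {
    // Draw the chart as soon as the component is in the DOM.
    this.createChart('planet-chart', this.planetChartData);
  },
  methods: {
    createChart(chartId, chartData) {
      // Bind Chart.js to the <canvas> and hand it the
      // type/data/options defined in chart-data.js.
      const ctx = document.getElementById(chartId);
      new Chart(ctx, {
        type: chartData.type,
        data: chartData.data,
        options: chartData.options,
      });
    },
  },
};
</script>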
Mixed Charts
Chart.js also supports mixed charts. Continuing with the planets chart you created above, let’s show this same data with two types of graphs.
Open your chart-data.js file so we can modify the type properties of the chart and its datasets. Find the top-level type property of your chart’s data and change it to bar. At this point, both charts will be bar graphs. However, we want the graph to show a bar and a line graph.
To change this, add a type property under the label property in each dataset object. Give the first dataset object a type with a value of line, and give the second a type with a value of bar.
type: 'bar', // was 'line'
data: {
  labels: ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune'],
  datasets: [
    {
      label: 'Number of Moons',
      type: 'line', // Add this
      data: [...],
      backgroundColor: [...],
      borderColor: [...],
      borderWidth: 3
    },
    {
      label: 'Planet Mass (x1,000 km)',
      type: 'bar', // Add this
      data: [...],
      backgroundColor: [...],
      borderColor: [...],
      borderWidth: 3
    }
  ]
}
After your component mounts, you should see the same data rendered as a bar chart with a line graph laid over it.
This post just scratched the surface of what you can do with Chart.js, but you should now have a basic understanding of how to use Chart.js with Vue while keeping your data separate with ES6 imports.
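One natural follow-up the post doesn’t cover: because the chart instance is a plain object, you can mutate its data and redraw it later. A hedged sketch, assuming you store the instance on the component in createChart() (this.chart and updateMoonCount() are hypothetical names; chart.update() is part of the Chart.js API):

methods: {
  createChart(chartId, chartData) {
    const ctx = document.getElementById(chartId);
    // Keep the instance on the component so other methods can reach it.
    this.chart = new Chart(ctx, {
      type: chartData.type,
      data: chartData.data,
      options: chartData.options,
    });
  },
  updateMoonCount(planetIndex, moons) {
    // Mutate the first dataset, then ask Chart.js to redraw.
    this.chart.data.datasets[0].data[planetIndex] = moons;
    this.chart.update();
  },
}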
The QPtrCollection class is the base class of most pointer-based Qt collections. More...
All the functions in this class are reentrant when Qt is built with thread support.
#include <qptrcollection.h>
Inherited by QAsciiCache, QAsciiDict, QCache, QDict, QIntCache, QIntDict, QPtrList, QPtrDict, and QPtrVector.
List of all member functions.
The QPtrCollection class is an abstract base class for the Qt collection classes QDict, QPtrList, etc. Qt also includes value based collections, e.g. QValueList, QMap, etc.
A QPtrCollection only knows about the number of objects in the collection and the deletion strategy they should follow.
See also Collection Classes and Non-GUI Classes.
This type is the generic "item" in a QPtrCollection.
Constructs a collection. The constructor is protected because QPtrCollection is an abstract class.
Constructs a copy of source with autoDelete() set to FALSE. The constructor is protected because QPtrCollection is an abstract class.
Note that if source has autoDelete turned on, copying it will risk memory leaks, reading freed memory, or both.
Destroys the collection. The destructor is protected because QPtrCollection is an abstract class.
Returns the setting of the auto-delete option. The default is FALSE.
See also setAutoDelete().
Removes all objects from the collection. The objects will be deleted if auto-delete has been enabled.
See also setAutoDelete().
Reimplemented in QAsciiCache, QAsciiDict, QCache, QDict, QIntCache, QIntDict, QPtrList, QPtrDict, and QPtrVector.
Returns the number of objects in the collection.
Reimplemented in QAsciiCache, QAsciiDict, QCache, QDict, QIntCache, QIntDict, QPtrList, QPtrDict, and QPtrVector.
This file is part of the Qt toolkit. Copyright © 1995-2005 Trolltech. All Rights Reserved.
US4622013A - Interactive software training system
Info
- Publication number: US4622013A
- Application number: US 06/612,760
- Authority: US
- Grant status: Grant
- Prior art keywords: event, match, system, data
The present invention pertains generally to a new computer system and in particular to a computer system which simulates a tutor for interactively training students to use selected computer programs. The system can also be used as an image-based expert or training system for tasks such as CAT-scan image analysis.
Prior art software training systems have included a variety of types of manuals and a variety of types of special computer programs designed to introduce the student to certain features of a selected computer program. Manual or text-based systems are totally unsatisfactory because (1) a large portion of potential users have neither the patience nor discipline to work through such a program; and (2) a manual cannot truly accommodate people with varied backgrounds and varying attention spans. Computer-based interactive software training systems are generally preferable to text-based because they require less discipline and patience on the part of the student. Such systems generally need to be complemented with a reference manual for use after the initial training is complete. The subject matter of the present invention concerns computer-based software training.
Computer-based (as opposed to text-based) prior art systems suffer at least two major deficiencies which are solved by the present invention: (1) they use a separate training program to introduce the student to certain features of the target or selected computer program instead of allowing the student to use the target computer program itself during the training; and (2) each training program must be written separately and specifically for a particular selected computer program, must duplicate many of the features of the selected computer program, and yet must be debugged just as carefully and thoroughly as any computer program.
A primary object of the present invention is to provide a system and method for training a student to use any selected computer program.
Another object of the invention is to allow the student to use the actual selected computer program while training.
Another object of the invention is to provide a system and method whereby the student is instructed through the use of a medium separate from the output medium used by the selected computer program, or at least in such a way that the normal outputs created by the selected computer program are not disturbed.
Still another object of the invention is to provide a system for defining a training course for a selected computer program that requires neither simulation of the selected computer program nor the writing of a separate software training program for each selected computer program.
Yet another object of the invention is to provide a system suitable not only for interactive software training but also suitable for use as a display based training and expert system, wherein a video display, a picture quality display and an audio message system are used together, as necessary, to aid the user of the system.
In a preferred embodiment the present invention provides an interactive software training system for training a student to use a selected computer program. The system uses a computer or CPU (central processing unit) capable of executing the selected computer program. The system must have an input device (e.g., a keyboard or mouse) for accepting input data from the student, and a display device for displaying the results of executing the selected computer program. The flow of data into and out of the selected computer program can be monitored and interrupted.
A software interrupt module receives all the data entered through the input device. A monitor module interprets the input data and can also interpret the resulting output display on the display device generated by the selected computer program.
How the input and output data is interpreted and manipulated is defined by a courseware module. A separate courseware module must be authored for each selected computer program. Each courseware module includes a set of events associated with the selected computer program. Each event corresponds to one or more contextual circumstances in the operation of the selected computer program. An event can be used to perform certain tasks and to interpret input or output data.
Most events define one or more match values and a set of one or more tasks corresponding to each match value. A task is defined herein as a discrete operation, such as passing a datum from the monitor to the operating system, or setting a parameter to a particular value. A task is not used herein in the conventional sense of a process. Upon the receipt of data which matches a match value, a corresponding set of predefined tasks is performed. One type of task allows the data input by the student to be processed by the selected computer program. Another type of task defines which event is the next event to be used by the monitor. Match values associated with incorrect entries or entries which otherwise deviate from the planned training session can be blocked. The match values include both character-for-character match values and certain predefined wildcard match values. The tasks are selected from a defined set of available tasks. Through the use of these tasks, events can perform all, or practically all, functions performable by the computer system.
In a first preferred embodiment, explanational messages are generated using a speaker system and a second display device, preferably a laser disk display system. The use of both a second display system and an audio message system, while optional, allow the generation of a wide variety of explanational messages without interrupting the normal output generated by the selected computer program.
In a second preferred embodiment, the visual portion of the explanational messages is generated using one or more "windows" on the primary display device. This embodiment eliminates the need for a second display device, which may not be economically justified in certain circumstances.
Additional objects and features of the invention will be more readily apparent from the following detailed description and appended claims when taken in conjunction with the drawings, in which:
FIG. 1 is a block diagram of a system in accordance with the present invention.
FIG. 2 is a general flowchart of the method used in the present invention.
FIGS. 3A-3C are detailed flowcharts of an exemplary computer process, herein called a monitor routine, employing the method used in the invention.
FIGS. 4A-4E are detailed flowcharts showing the co-processor aspect of the process of one embodiment of this invention.
FIGS. 5A-5B are detailed flowcharts showing the match routine part of the process of one embodiment of this invention.
FIGS. 6A-6C are detailed flowcharts showing the object interpreter aspect of the process of one embodiment of this invention.
FIGS. 7A-7C are detailed block diagrams of certain data structures used in one embodiment of this invention.
Referring to FIG. 1, in a preferred embodiment of the invention the core of the system 11 is a standard computer subsystem 20, such as the IBM personal computer (also known as the IBM PC), including a CPU 12 (e.g., a microcomputer system, including a central processing unit, disk drive, etc.), a display device 13 (such as a standard CRT monitor), and an input device 14 (such as a keyboard or mouse). In most circumstances, the computer system 20 has an operating system 24 (also occasionally referred to herein as DOS) such as UNIX or DOS. The selected computer program 15 which the student is to be trained to use is usually an applications program such as a word processor program, a spread-sheet program or a data base program.
The selected computer program 15 can also be an operating system, such as UNIX or DOS, when the training course is a course on how to use the operating system. In such a case the selected computer program will generally also act as the system's operating system. (Note that while the operating system 24 and the target operating system 15 could theoretically be different, such an arrangement would normally be very cumbersome, except possibly in a main frame computer.) The selected computer program 15 will hereinafter be called the target or the applications program (even if it is the operating system 24). Naturally, the version of the applications program 15 to be used in the system 11 must be one designed to run with the particular computer subsystem 20 used in the system 11.
In the first preferred embodiment, the standard computer subsystem 20 is supplemented with a second display device 16, such as a laser disk 17 and television receiver 18 combination, and an audio device 19. It is generally intended that the second display device 16 be any system capable of producing high quality video images and video sequences, at least comparable in quality or resolution to standard U.S. television pictures. By way of contrast, the first display device 13 can be a standard CRT computer monitor capable of producing only black-and-white or only grey-scale images. In a second preferred embodiment, explanational messages are displayed by windowing the display on the first display device and confining the explanational messages to a portion of the screen that minimizes or avoids interference with the normal display produced by the target program.
The preferred embodiment also includes an audio device 19 comprising a speaker 21 and speech synthesizer system 22 which can be used to generate verbal messages in addition to those generated by the sound system (not shown) associated with the laser disk 17 and television 18 subsystem. It is often convenient in a training system to be able to generate verbal messages (such as a short explanation to remind the trainee what to do next) without having to display a new image on the video display 16.
The training subsystem 30, which generally works as a co-routine, operating in parallel with the operating system 24, has four modules: a keyboard interrupt module 25 for receiving data from the input device 14 before it is processed by the operating system 24, a courseware module 26 for defining the training course, a monitor module 27 for interpreting input and output data and generally running the training system 11, and a keyboard request interrupt module 28 for transferring control to the monitor module upon the occurrence of a read or a poll of the keyboard. Polls and reads are usually performed by the operating system 24 when the applications program 15 is ready to receive more data. But any poll or read, even if executed by the applications program 15, will transfer control to the monitor module 27.
The explanation of how the system 11 can be used in interactive training and expert systems other than a software training system will be presented after the explanation of the structure and operation of the system 11 as an interactive software training system.
Referring to FIG. 2, from a very simplistic viewpoint, the training subsystem 30 works by interrupting the flow of data from the input device 14 to the applications program 15 and interpreting the data entered into the system 11 before it is processed by either the operating system 24 or the applications program 15. This allows the training subsystem 30 a chance to act before unacceptable data is sent by the student to the applications program 15--and to decide if the student requires more instruction before proceeding with the next step in using the applications program 15.
The basic flow of control shown in FIG. 2 is different from the normal flow of control in most computer systems. Normally, the operating system 24 filters all data sent to the applications program 15 from input devices 14 and all data sent by the applications program 15 to output devices 31. In the present invention the connection between the operating system 24 and the input device 14 is rerouted through a training subsystem 30. In some circumstances, the flow of data from the operating system 24 to the output devices 31 is also rerouted through the training subsystem 30.
Still referring to FIG. 2, the training subsystem 30 works as a co-routine that operates in parallel with the operating system 24. The training subsystem 30 interprets input data and selectively sends data to the operating system 24 for processing by the operating system 24 and the applications program 15. When the operating system 24 tries to poll or read the input device 14 the training subsystem 30 answers the request. As part of the task of interpreting input data the training subsystem 30 controls a secondary output system 32, generally for communicating with the user of the system in a medium independent of that used by the operating system 24. The training subsystem 30 can also send output data directly to the primary output devices 31 (such as CRT monitors, printers, disks), completely bypassing the operating system 24.
The basic mechanics of setting up a co-routine such as the one used in the invention are as follows. The system's interrupt vectors for keyboard entries and keyboard poll/read requests are reset to point to entry points in the keyboard interrupt 25 and poll/read interrupt 28 modules, respectively. Thus whenever the student types on the keyboard 14 (i.e., enters data) the data is processed by the training subsystem 30 rather than by the operating system 24. Similarly, poll and read requests by the operating system 24 invoke the training subsystem 30 instead of the normal keyboard request software interrupt program (which is generally part of the basic input/output system of the CPU 12 or operating system 24).
When the training subsystem 30 is invoked by a keyboard request interrupt, the state of the operating system 24 is saved (in a stack associated with the operating system 24) and the training subsystem 30 is restored to control. When the training subsystem 30 is ready to transfer control back to the operating system 24 it saves the state of the training module (in a stack associated with the training subsystem 30), restores the operating system 24 to the state it occupied when it last performed a read or poll, and exits to the operating system 24 passing as a parameter the answer to the operating system's poll or read request.
Additionally, the training subsystem receives control on every "clock tick" of the computer system clock, by means of an interrupt vector. Any training subsystem operation may be initiated by the clock tick routine asynchronously of the student and the operation of the target. In the preferred embodiment, the clock ticks are counted, and if the total accumulated time exceeds a timeout parameter, the monitor 27 performs certain predefined tasks, as discussed below.
The particular method of interrupting the flow of data from the input devices 14 to the target programs 15 is not essential to the invention in its broadest aspect. For instance, one advantage of using a keyboard interrupt module as described herein, but which is not essential to the invention, is that it allows the use of otherwise illegal key combinations (e.g., CNTRL ALT DEL) as function keys for special training subsystem functions.
While the preferred embodiment also contains the ability to interrupt and filter all output from the operating system, this facility need not be used in the design of many training systems incorporating the invention. The ability to interrupt and filter output can be used, for instance, to avoid overwriting a valuable file, or to direct output to a different file than the output would normally go to, or to modify output sent to the first display device 13 in some useful way.
Also, even though the preferred embodiment described herein assumes that data is entered on a keyboard, the invention applies equally well to systems with other types of data input. The basic flow of control between the training subsystem 30 and the operating system 24 remains the same. Only the particular courseware 26 (as explained below) used with the invention is dependent on the nature of the input data received.
Referring to FIGS. 7A-7C, the courseware 26 comprises a special set of data structures which defines how the training subsystem interprets input data. The three basic data structures used by the training subsystem are an event map 71, event structures 72, and an object table 73. The other main data structures used in the system are standard stack data structures, including a DOS stack, a monitor stack, and an event stack. The use of each data structure is described below.
The most basic data structure is called an event 72. Only one event is used at any one time by the monitor module 27 to interpret input data. Events specify how data is to be interpreted in a few ways. Generally, each event corresponds to one or more contextual circumstances in the operation of the training program. For instance, one event may be used when the student needs to enter the command "print", another event may be used when the student has forgotten to press the return key after entering data, and another event may be used when the student has made so many mistakes that he apparently has not absorbed the portion of the training course already presented.
The contextual circumstances of the training course are evaluated by the system with four categories of tests: (1) match branches, which compare entered data with a predetermined match string; (2) screen-match branches, which compare output data from the target program with a predetermined screen-match string; (3) object branches, which manipulate and test certain system-defined and certain user-defined variables called objects; and (4) timeout tests, which detect failure of the student to make a correct entry (or for the system to respond to the student's commands in a specified way) within a specified timeout period.
For each type of test there is a corresponding data structure or set of data structures. For convenience, the name of the test and the name of the corresponding data structure are the same. There are three distinct types of branch data structures: match branches, screen-match branches, and object branches. The timeout data structure 72f is merely a single integer parameter within the event data structure 72.
In the preferred embodiment, all branch structures have the same general format shown in FIG. 7C. The type of any particular branch is determined by the settings of the branch flags 74c in its branch structure 74, as explained below. For each type of branch structure the branch string 74f has a distinct function. In other embodiments different data structures could be used for each type of branch without affecting the substance of the invention described herein.
Referring now to FIG. 7C, a typical event structure 72 includes a plurality of branches. Note that any particular event structure can have any number of branches, including zero or one branches. When there are a plurality of branches, the branches are sequentially organized by means of a singly linked list of the sort well known to those skilled in the art. The branches are sequentially tested by the training subsystem 30, using a pointer 74a in each branch structure 74 to find the next branch.
The first type of test, called a match branch, provides a match value (generally, a string of one or more characters) and a set of corresponding tasks to be performed when the input data matches the match value. In the preferred embodiment each such match value and corresponding set of tasks are collectively called a branch 74. A match branch has a match value, called a branch string 74f, a set of flags 74c indicating what to do upon the occurrence of a match, and a match event 74b specifying what event to invoke upon the occurrence of a match.
Referring to FIG. 7A, in the preferred embodiment the match event value is not a pointer directly to the match event but rather an index to an event map 71 holding pointers to all the events defined by the courseware 26. Thus events are indirectly addressed through the event map 71. Since events are variable length data structures and can be located quite far from one another in the memory array of a large courseware module 26, indirect addressing through an event map makes generation of the courseware module easier, as will be understood by those skilled in the art.
TABLE 1A
Seq: 002   Name: BG001L05   # 00001   Com: begin intro
Video Starting Frame: 210   Video Ending Frame: 2920
Video Duration: 1644
Video Mode of Play is: Play Channel 1
Text data length: 7   Row: 1   Column: 79
Character delay: 0
* TEXT *
"CNTRL P"
* End *
Clear CRT
*** BRANCH 1 ***
Match Event: BG002L05
* Branch Text *
PIP
* End Text *
Allow KB to DOS
*** BRANCH 2 ***
Match Event: BG046L04
* Branch Text *
PRINT
* End Text *
Allow KB to DOS; Ignore case
*** BRANCH 3 ***
Match Event: BG003L05
* Branch Text *
##STR1##
* End Text *
Single wild character
Maximum Key presses: 6
Time out: 90 seconds
Time out event: BR005L15
TABLE 1B
Seq: 040   Name: BG010L05   # 00001   Com: get format
Video Starting Frame: 34008   Video Ending Frame: 0
Video Duration: 0
Video Mode of Play is: Single Frame
*** BRANCH 1 ***
Match Event: BG020L05
* Branch Text *
* End Text *
Ignore case
*** BRANCH 2 ***
Match Event: BG046L35
* Branch Text *
Format b:
* End Text *
Allow KB to DOS; Ignore case
*** BRANCH 3 ***
Match Event: BR045L05
* Branch Text *
E+1;KEYIN&255;KEYIN[32
* End Text *
Object operation
*** BRANCH 4 ***
Match Event: NULL
* Branch Text *
X=E;X+1
* End Text *
Object operation
*** BRANCH 5 ***
Match Event: BR400L05   Column: 1   Row: 1
* Branch Text *
EOF
* End Text *
*** BRANCH 6 ***
Match Event: BG440L05
* Branch Text *
##STR2##
* End Text *
Single wild character
Maximum Key presses: 9
Time out: 90 seconds
Time out event: BR006L15
TABLE 1C
Seq: 045   Name: BG010L10   # 00001   Com: delete error
Video Starting Frame: 34122   Video Ending Frame: 0
Video Duration: 0
Video Mode of Play is: Single Frame
*** BRANCH 1 ***
Match Event: 0
* Branch Text *
"CNTRL H"
* End Text *
Allow KB to DOS; Continue on Match
*** BRANCH 2 ***
Match Event: BG046L55   Column: 1   Row: 23
* Branch Text *
COPY X:
* End Text *
Maximum Key presses: 7
Time out: 90 seconds
Time out event: BR006L25
Tables 1A, 1B, and 1C contain representational views of three typical event structures 72. These representational views are useful in describing the event structure 72. Referring now to Table 1A, which shows a fairly simple event, the operation of match branches is explained. Each time a datum is entered it is compared with the match string 74f in each branch of the event until it finds a sub-match. A sub-match occurs when all the characters entered so far (i.e., since the character counter was last reset) match the corresponding portion of the match string. Thus in the example shown in Table 1A, if the user typed in the letters "p", "r", and "i", the first letter (i.e., "p") would cause a sub-match with branch 1, the second letter (i.e., "r") would cause a sub-match with branch 2, and the third letter (i.e., "i") would also cause a sub-match with branch 2.
A "match" does not occur until the input data fully matches a whole match string. Note that all match strings are branch strings but, as will be explained below, not all branch strings are match strings. The system 11 puts no constraints on the the length of the match strings specified in the courseware; a match string can be whatever length is required by the context of the event.
Each time a datum is entered by the student, the monitor 27 sequentially compares the input data with each of the match strings in the branches of the current event. Upon the occurrence of the first match or sub-match the comparison process stops, with one exception explained below.
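Purely as an illustration (the patent's implementation is in assembly language, and every name below is hypothetical), the match/sub-match test just described behaves roughly like this:

// `typed` is everything the student has entered since the character
// counter was last reset; `matchString` is the branch's match value.
function testBranch(typed, matchString, ignoreCase) {
  let a = typed;
  let b = matchString;
  if (ignoreCase) { // BR_NOCASE behaviour
    a = a.toUpperCase();
    b = b.toUpperCase();
  }
  if (a === b) return 'match';            // full match: invoke the match event
  if (b.startsWith(a)) return 'submatch'; // sub-match: keep accepting keys
  return 'none';
}

// The monitor tests the branches in order and stops at the first
// match or sub-match (the BR_NOBR flag, described below, is the exception).
function testEvent(typed, branches) {
  for (const branch of branches) {
    const result = testBranch(typed, branch.matchString, branch.noCase);
    if (result !== 'none') return { branch, result };
  }
  return null;
}

For example, against the branches of Table 1A, the entry "pri" yields a sub-match with the PRINT branch, which has its ignore-case flag enabled.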
To understand what happens upon the occurrence of a match or a sub-match, the purpose of each branch flag must be understood. In the preferred embodiment there are eight branch flags 74c. See Table 2. Each is a single-bit value, and the eight flags are stored collectively in a single byte in the branch data structure 74.
Note that each branch structure has its own set of flags. The type of the branch is determined by the BR_CRT_M and BR_OBJECT flags. If neither flag is on (i.e., enabled) then the branch is a match branch. If the BR_CRT_M flag is on then the branch is a screen-match branch. If the BR_OBJECT flag is on then the branch is an object branch. Clearly the BR_CRT_M and BR_OBJECT flags are mutually exclusive: at most one can be enabled in any branch structure.
TABLE 2 - BRANCH FLAGS
BR_CALL     Match event begins an event subroutine
BR_DOS      Send input data to DOS upon sub-match
BR_CRT_M    This is a screen-match branch
BR_CRT      Send input data to CRT
BR_PASS     Match on a single wildcard
BR_OBJECT   This is an object branch
BR_NOCASE   Ignore case of input data when checking for a match
BR_NOBR     Do not branch upon occurrence of a match
Only match branches can let input data pass through to the operating system 24. Each input datum is sent to the operating system (DOS) 24 upon the occurrence of a sub-match with the match string in a match branch if the branch's BR_DOS flag is on. If there is no sub-match or if the BR_DOS flag is off, the operating system 24 does not receive the input data. Thus if the student makes an entry that does not match any branch string with an enabled BR_DOS flag in the current event, the entry will not affect the state of the applications program 15 (i.e., it will be as if the entry had never been made). On the other hand, an "incorrect" entry by the student can be handled in several different ways.
First, the "incorrect" entry may match a catch-all wildcard branch (e.g., Branch 3 in Table 1A) which (1) transfers control to another event that causes the generation of an explanational message by the training subsystem 30, and (2) does not allow the datum to be passed to the target program 15 (i.e., with the BR_DOS flag of the wildcard branch not being enabled). In a second example, an "incorrect" entry could match a branch which (1) allows the datum to be passed to the target program 15, and (2) transfers control to another event that explains how to correct the "incorrect" entry. Thus different types of "incorrect" entries can be handled in distinct ways by the training subsystem 30.
When the student makes a "correct" entry (i.e., an entry that matches a branch string in the current event) the data can be passed to the applications program 15 via the operating system 24. On the other hand, the author of the courseware 26 can prevent the entered data from being sent to the applications package until a full match is achieved by sending the entered data only to the CRT monitor upon the occurrence of each sub-match, and then using another event to send the full text of the correct entry to the operating system upon the occurrence of a match. This would be done by setting up a first event with a branch having a match string equal to the correct entry, an enabled BR_CRT flag, a disabled BR_DOS flag, and a match event that contains an appropriate initial text task for sending the full text of the correct entry to the operating system 24. The enabled BR_CRT flag causes the monitor 27 to echo the entered data on the first display device 13 at the current position of the display's cursor, but does not cause the entered data to be passed to the operating system 24. Note that in some contexts it may be appropriate for a branch to have both an enabled BR_CRT flag and an enabled BR_DOS flag.
Upon the occurrence of a match, the event indicated by the match event 74b will be invoked unless: (1) the BR_NOBR flag is enabled, or (2) the match event 74b has a value of NULL (e.g., zero in the preferred embodiment). If the BR_NOBR flag is enabled, the monitor 27 continues to test the other branches in the event, if any. Note that normally the monitor 27 stops checking for a match when the first match or sub-match occurs. If the match event is the NULL event, the current event remains in effect even after the occurrence of a match.
A typical event having a branch with the BR_NOBR flag enabled is shown in Table 1C. In this example, the student has made an "incorrect" entry and has been told to backspace (denoted "CNTRL H" in Branch 1) until the input line looks like "COPY X:". Thus the backspaces by the student are let through to DOS, but the monitor continues on to test Branch 2, which looks at the input line. The monitor cyclically tests Branches 1 and 2 until Branch 2 is satisfied or a timeout occurs.
A set of one or more events can be used as an event subroutine that is used in one or more contexts in the operation of a training course. To call a set of events as an event subroutine the branch making the call specifies the first event in the event subroutine as the match event and has the BR_CALL flag enabled. When the event subroutine returns, the event invoked is the next event after the calling event. That is, the event to be returned to is the next event referenced in the event map 71 after the calling event. (Optionally, in other embodiments, the calling event data structure could specify the event to be invoked after the subroutine returns.) To execute a return from an event subroutine, a branch in the subroutine must specify a match event equal to the RETURN event, which is equal to -1 in the preferred embodiment.
The two flags not yet discussed, BR_PASS and BR_NOCASE, both relate to the details of determining whether input data matches a specified match string. The BR_NOCASE flag, if enabled, indicates that the case of the input characters should be ignored when comparing them to the match string. Thus uppercase input characters can match lowercase match-string characters and vice versa.
The BR_PASS flag, if enabled, indicates that the match string is a special wildcard character. Each wildcard character matches a set of two or more input data values. Using wildcard characters enables the author of the courseware to reduce the number of branches and events required to determine if an input datum fits within a certain class of input values. The wildcard characters used in the preferred embodiment are shown in Table 3.
The "*" in each wildcard string is merely a marker used to indicate that the following letter specifies a wildcard set. In a preferred embodiment any match string can have embedded wildcards. Thus the match string "FL*N" would match the input data strings FL0, FL1, . . . FL9. In the embodiment shown in the program listings at the end of this specification, wildcards can be used only as a single-character match string in a match branch. Such match branches are designated by enabling the BR_PASS flag therein.
TABLE 3 - WILDCARD MATCH VALUES
WILDCARD STRING   MATCH VALUE SET
##STR3##          any input character
##STR4##          any key denoting x
##STR5##          any cursor control key
##STR6##          all function keys
##STR7##          all letters
##STR8##          all numbers
##STR9##          all key pad keys
##STR10##         return key
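Again as an illustration only, the wildcard sets of Table 3 amount to character-class predicates. The literal wildcard strings survive here only as image placeholders (##STR3## and so on), so the names below are hypothetical, except that the body text itself uses *N for digits:

// Hypothetical wildcard names; only '*N' (all numbers) is attested
// in the body text, where "FL*N" matches FL0 through FL9.
const wildcardSets = {
  '*A': () => true,                    // any input character
  '*L': (ch) => /^[A-Za-z]$/.test(ch), // all letters
  '*N': (ch) => /^[0-9]$/.test(ch),    // all numbers
  '*R': (ch) => ch === '\r',           // return key
};

function matchesWildcard(wildcard, ch) {
  const predicate = wildcardSets[wildcard];
  return predicate ? predicate(ch) : false;
}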
Screen-match tests are similar to match tests, except that the string to be compared with the screen-match string 74f is on the CRT monitor 13 of the core computer subsystem 20. The BR_X and BR_Y parameters 74d and 74e of the branch structure 74 specify the column and row on the CRT monitor 13 of the first letter of the screen string. The branch string 74f of the screen-match branch 74 is called the screen-match string. The length of the screen string to be compared with the screen-match string is the length of the screen-match string 74f. If, when the screen-match test is performed, the designated screen string matches the screen-match string, then the monitor 27 will generally invoke the match event. As with match branches, if the BR_CALL flag is set then the match event will be called as an event subroutine. Similarly, if the BR_NOBR flag is set then the monitor will continue to test the other branches in the current event instead of invoking the match event, but such a use of the BR_NOBR flag with a screen-match branch is unlikely to be of any practical use. More usefully, if the match event is the NULL event, the screen match can be used to prevent the testing of the succeeding branches of the event until the screen string no longer matches the screen-match string 74f (e.g., until the student enters data or a command which causes the target program 15 to alter the data displayed on the primary display device 13).
In the preferred embodiment the BR_PASS, BR_NOCASE, BR_DOS, and BR_CRT flags are inoperative in screen-match branches. In other embodiments, however, it would be straightforward to selectively ignore the case of the screen string (although the author of the courseware should practically always know the case of the screen string which is indicative of a correct action by the student) and to allow the use of wildcards in the screen-match string 74f.
Object branches can perform a number of functions including mathematical operations and tests, heuristic tests, moving the CRT monitor's cursor, and changing the shift state of the keyboard. The branch string 74f of an object branch contains one or more object "equations". Each object equation either performs an operation on an "object" or performs a test of the value of an "object". In the preferred embodiment, the set of available types of object equations is as shown in Table 4.
TABLE 4 - OBJECT OPERATIONS

mathematical:
  OBJ=NNNN   set object to NNNN
  OBJ+NNNN   add NNNN to OBJ
  OBJ-NNNN   subtract NNNN from OBJ
  OBJ*NNNN   multiply OBJ by NNNN
  OBJ/NNNN   divide OBJ by NNNN
logical:
  OBJ&NNNN   AND NNNN with OBJ
  OBJ!       COMPLEMENT OBJ
  OBJ|NNNN   OR NNNN with OBJ
branch:
  OBJ@       branch to event(OBJ)
  OBJ<NNNN   branch if OBJ is less than NNNN
  OBJ>NNNN   branch if OBJ is greater than NNNN
  OBJ?NNNN   branch if OBJ equals NNNN
video:
  OBJV       display video frame(OBJ)
  OBJVNNNN   display video frame NNNN
In these "equations" the first parameter (OBJ) is called the object, the second parameter is called the operator, and the third parameter (NNNN), if any, is called the operand. The operand can be either a number, another object, a event number, or a video frame member.
"Objects" are basically variables whose value can be tested or changed mathematically or logically. The result of mathematical and logical object operations is stored in the object referenced in the equation. The branch operations cause a branch to the match event if the branch test is satisfied, except that if the "@" operator is used when the value of the object (i.e., the value of OBJ) is used as the match event value. The video operations cause the secondary display 16 to display a particular video frame.
Objects can have different types of values: simple numerical values, match event values (which are essentially an index pointer for the event map 71), and video frame values. In the preferred embodiment, each type of value is not encoded in any special way. It is the responsibility of the courseware author to ensure that the literal value of the object or the operand (i.e., NNNN) is meaningful in the context it is being used. Furthermore, in the preferred embodiment objects are 16-bit (i.e., two-byte) integers.
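A minimal sketch of an evaluator for the mathematical and logical operators of Table 4, assuming an object table of named 16-bit integers (the names and the parsing are illustrative only; the branch and video operators are omitted):

// Object table: dedicated and general variable objects by name.
const objects = { KEYIN: 0, E: 0, X: 0 };

// An operand may be a literal number or the name of another object.
function operandValue(s) {
  return /^[A-Z]/.test(s) ? objects[s] : parseInt(s, 10);
}

// Evaluate one equation such as "X=E", "E+1" or "KEYIN&255".
function evalEquation(eq) {
  const m = eq.match(/^([A-Z]+)([=+\-*/&|!])(.*)$/);
  if (!m) return;
  const [, name, op, rest] = m;
  const value = rest ? operandValue(rest) : 0;
  switch (op) {
    case '=': objects[name] = value; break;
    case '+': objects[name] += value; break;
    case '-': objects[name] -= value; break;
    case '*': objects[name] *= value; break;
    case '/': objects[name] = Math.trunc(objects[name] / value); break;
    case '&': objects[name] &= value; break;
    case '|': objects[name] |= value; break;
    case '!': objects[name] = ~objects[name]; break; // no operand
  }
  objects[name] &= 0xFFFF; // objects are two-byte integers
}

// An object branch string holds one or more equations separated by ';',
// as in branch 3 of Table 1B: "E+1;KEYIN&255;...".
function evalBranchString(branchString) {
  branchString.split(';').forEach(evalEquation);
}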
In the preferred embodiment there are two types of objects: dedicated objects having a specific system function, and general variable objects. In the preferred embodiment general variable objects are like normal variables in a computer program and can be given any value without inadvertently affecting another part of the system. The set of dedicated objects in the preferred embodiment is shown in Table 5.
TABLE 5 - DEDICATED OBJECTS
OBJECT NAME   LISTING NAME   DESCRIPTION
CP            X              Cursor Position on the CRT monitor: first byte is the column (Y) position; last byte is the row (X) position
SHIFT         C              Shift state of the keyboard
MENU          B              Event to be invoked when the user enters the Menu interrupt: (CNTRL)(ALT)(DEL) on the keyboard
REVIEW        T              Event to be invoked when the user enters the Review interrupt: (CNTRL)(TAB) on the keyboard
HELP          Q              Event to be invoked when the user enters the Help interrupt: (CNTRL)(SHIFT) on the keyboard
KEYIN         K              Next keystroke, if any, to be interpreted
CC            E              Character Counter: the position in the match string of the character to be compared with KEYIN
In the preferred embodiment, as shown in FIG. 7B, the letters A through Z are available as general variable objects and all the dedicated objects have distinct names from the general variables. In the embodiment shown in the program listings at the end of the specification, the letters listed under the heading "LISTING NAME" are reserved for use as dedicated objects. Only the remaining 19 letters are available for use as general variable objects. As will be clear to those skilled in the art, these restrictions on the names of the objects are arbitrary and unrelated to the subject matter of the invention. In other embodiments the number of distinct general variable objects and the names of the general variable objects can be unrestricted, except that they must not duplicate the names of the dedicated objects.
Object branches can serve many useful purposes. For instance a general variable object can be used to count the number of mistakes the user has made in a particular portion of the training course. When that object's value exceeds a particular predetermined value the system can branch to a special review section of events to help the student learn material that he apparently missed the first time through. The KEYIN object can be tested to determine if the input falls within a class of input values other than those provided by the wildcard values. The cursor position object can be reset to a new value so that the student can enter data at the correct location on the CRT monitor 13 without knowing how to get there. The character counter can be reset before the student starts entering a new string or can be decremented in order to give the student a second chance to make a correct entry without having to restart at the beginning of the string.
The MENU, REVIEW, and HELP objects hold special event values which the student can invoke at any time by typing in the proper key sequence. See Table 5 describing the dedicated objects. This allows the student to interrupt the training session if he doesn't understand something (by invoking the HELP event), wants to go through a portion of the training session a second time (by invoking the REVIEW event), or wants to switch to a different portion of the training course (by invoking the MENU event).
Events perform several functions in addition to those specified by means of branches. In order to facilitate the discussion of these tasks, reference is now made to Table 6 which describes the event flags 72h.
In the preferred embodiment, if none of the branches in an event cause a branch to a new event within a specified period of time, called the timeout period, then a specified timeout event is automatically invoked. The timeout period is specified by parameter 72f in the event structure 72 and the timeout event is specified by parameter 72g in the event structure 72. If the timeout period for an event is zero, then the event branches are not tested unless the EXEC_OBJ flag is enabled, as explained below. If the timeout event is the NULL event, the next event after the current event in the event map is invoked.
When an event is invoked it can perform up to three initial tasks before it executes any branches. The three types of initial tasks are TEXT, VIDEO, and AUDIO. The order in which these initial tasks are to be performed is specified by the relative numerical values of the pointers 72a, 72b and 72c to the corresponding TEXT data structure 75, VIDEO data structure 76, and AUDIO data structure 77. (I.e., the task corresponding to the lowest-valued pointer is executed first.) The TEXT task can send a specified string to the operating system 24, or to the CRT monitor 13, or to some other output device such as a printer. The VIDEO task generates a video frame or video sequence on the secondary display device 16. The AUDIO task generates a specified message on the supplementary audio system 19.
TABLE 6 - EVENT FLAGS
CLEAR_KBD      Clear keyboard buffer when event is invoked
EXEC_OBJ       Execute screen-match and object branches before testing for timeout
CLR_STACK      Clear the event subroutine stack when event is invoked
CRT_LOOP       Continue to execute screen-match and object branches even when no keyboard input has been received
KEEP_C_COUNT   Do not zero the character counter when the event is invoked
The text data structure 75 is shown in FIG. 7C. The length of the text string 75f is specified by parameter 75a. If the target program 15 is known to be able to absorb data at a particular rate, the rate at which the text string characters are sent to the operating system 24 can be controlled by setting the character-to-character delay parameter 75b to an appropriate value (in milliseconds). The text flags 75c, shown in Table 7, specify the destination of the text string.
TABLE 7 - TEXT FLAGS
TXT_CRT   Send the text string to the CRT
TXT_PRT   Send the text string to the printer
TXT_DOS   Send the text string to DOS
TXT_WT    If set, respond to polls by sending back the NULL character; if not set, respond to polls by sending the next character in the text string
If the TXT_CRT flag is set the text string is to be sent to the CRT monitor 13 at the column and row specified by the CRT_X and CRT_Y parameters 75d and 75e. If the TXT_PRT flag is set then the text string 75f is sent to the system's printer (or an equivalent output device, not shown in FIG. 1). Finally, if the TXT_DOS flag is set the text string is sent to the operating system 24. The manner in which text is sent to the operating system 24 is determined by the TXT_WT flag. The purpose of the TXT_WT flag is to handle applications programs that indiscriminately read in data even if doing so overflows the program's keyboard input buffer. If the TXT_WT flag is not set, polls by the operating system 24 are answered by sending it the next character (i.e., by sending it a signal indicating that another keyboard character has been received). If the TXT_WT flag is set, polls by the operating system 24 are answered by sending it the NULL character (i.e., by sending it a signal indicating that no keyboard character has been received).
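As a sketch of the TXT_WT poll behaviour just described (all names below are hypothetical):

const NUL = '\0';

// Answer one operating-system keyboard poll while a text task is
// feeding a string to DOS.
function answerPoll(textTask) {
  if (textTask.wtFlag) {
    // TXT_WT set: report that no key is waiting, so an application
    // that reads indiscriminately cannot overflow its input buffer.
    return NUL;
  }
  // TXT_WT clear: hand over the next character of the text string.
  return textTask.text[textTask.pos++];
}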
The video data structure 76 is shown in FIG. 7C. In the first preferred embodiment, video tasks can display a single video frame, a sequence of single video frames, or motion picture sequence. The video tasks can also have an associated sound track that plays on one or two audio channels. These capabilities of the preferred embodiment are tied to the capabilities of an industry standard laser disk player, but the particular set of video capabilities is not essential to the invention. What is important is that (in this first preferred embodiment) there be a second display device, in addition to the one associated with the core computer system 20, that is capable of producing relatively high quality images (e.g., the quality of standard U.S. television pictures) suitable for a training course or for displaying the images needed in the particular expert system (e.g., a CAT-scan image analysis system) with which the invention is being used.
The video tasks work generally by sending signals to the laser disk player which specify the frame or frames to be played, or the starting and ending frames of a motion picture sequence. The video data structure 76 includes a start-frame parameter 76a and an end-frame parameter 76b which reference particular video frames on a laser disk. Clearly, in the first preferred embodiment each courseware module 26 for a particular training course must have an associated laser disk or equivalent set of video images. In many cases a single laser disk will be able to hold the images for several training courses. In any case, the events in the courseware 26 must reference the video frames to be played with those events. In the preferred embodiment there is no facility for indirectly referencing the video images through an index table on the laser disk. Therefore the courseware will generally be tied to a particular video disk. But in future embodiments including such a video frame index table, the start- and end-frame parameters 76a and 76b could be indirect pointers and the video disk could be changed or upgraded without having to rewrite the video data structures 76 in the courseware 26.
TABLE 8 - VIDEO FLAGS
VID_STEP_FORE   Display next single video frame
VID_STEP_BACK   Display next prior video frame
VID_SEQ         Display a sequence of single video frames, each for a period of V_TIME milliseconds
VID_PLAY        Display a motion picture sequence
VID_SINGLE      Display a single video frame
VID_LEFT        Turn on the left audio channel
VID_RIGHT       Turn on the right audio channel
VID_PAUSE       Blank the video screen
VID_RESET       Reset the video player
The video flags, shown in Table 8, specify the type of the video sequence to be played and which audio channels, if any, are to be used with the video sequence, among other things.
The video data structure has seven mutually exclusive flags which determine the mode of video operation. If the VID_SINGLE flag is set the video frame referenced by the Start-Frame parameter 76a is displayed. If the VID_STEP_FORE flag is set the video player 17 steps forward one frame and displays that frame. If the VID_STEP_BACK flag is set the video player steps back one frame and displays that frame. If the VID_SEQ flag is set the video player sequentially displays each video frame from Start-Frame 76a to End-Frame 76b for V_TIME 76c milliseconds. If the VID_PLAY flag is set the video player displays a motion picture sequence starting at Start-Frame 76a and ending at End-Frame 76b. If the VID_PAUSE flag is set then the video screen 18 is blanked. Finally, if the VID_RESET flag is set the video player 17 is reset. The video player is generally reset only once at the beginning of the training course.
The VID_RIGHT and VID_LEFT flags are used only in video play mode (i.e., when VID_PLAY is set). They determine whether the right and left audio channels of the video player are to be turned on or off. Also in video play mode, the V_TIME parameter 76c is used to determine how long to wait after the motion picture sequence is initiated before allowing the monitor to perform its next task. For instance, a text task could be performed while a video sequence is being watched by the student.
In the second preferred embodiment, wherein only a single display device is used, video tasks define one or more windows and the video data to be displayed therein. As in the first preferred embodiment, the courseware can cause the display of a single image, a sequence of images or even an animated sequence. In an exemplary embodiment using two windows, a first window could be used to display an explanational message and a second window could be used to highlight the portion of the "normal" display associated with the target program which the student should be concentrating on. The position of these windows is context dependent, with the first (explanational) window being positioned so as to be least intrusive on the "normal" display. Also, the monitor could leave the "normal" display unencumbered by windows except when the monitor 27 needs to communicate with the student. Since the use of windows is well known in the prior art, those skilled in the art will be able to apply the teachings of the first preferred embodiment to build embodiments using a single windowed display.
The audio data structure 77 is shown in FIG. 7C. In the preferred embodiment the audio system 22 is a speech synthesizer (e.g., the TMS 5220 made by Texas Instruments) that plays encoded speech messages. The speech is encoded using standard LPC encoding techniques well known in the art. The audio message is generated by sending the encoded audio data 77b to the speech synthesizer 22 which translates the audio data into electrical signals that drive a speaker 21. The length of the audio is specified by the length parameter 77a in the audio data structure 77.
The method of the invention as performed in the preferred embodiment is shown in the flowcharts of FIGS. 3A-3C, 4A-4D, 5A-5B, 6A-6C. Note that the entry points (denoted in the figures as a capital letter enclosed in a circle) for each set of figures (e.g., FIGS. 3A-3C) comprise a distinct set (i.e., entry point A in FIGS. 3A-3C is distinct from entry point A in FIGS. 5A-5B).
The flowchart of FIGS. 3A-3C corresponds to the MAIN program in the program listings. The monitor 27 is initialized at 41 by loading the courseware 26 into the computer 20 memory. This involves loading the event map, the object table, and the events into appropriate portions of memory; setting up stack pointers for the monitor stack and event stack; setting the event pointer to the first event in the courseware 26; and various other tasks well known to those skilled in the art of designing assembly language system programs. Block 41 is re-executed only if the monitor 27 needs to be totally reset, for example if the monitor encounters a situation (e.g., a set of data entries from the student) that unexpectedly causes a return from an event subroutine but the event stack is empty.
The main event loop begins at block 42, where certain parameters from the current event are loaded into temporary registers. The first test 43 in the event loop tests the CLR-- STACK flag (see Table 6) in the event structure. If it is set, the event stack is cleared at block 44. In most embodiments this merely involves resetting the event stack pointer to the bottom of the stack. The event stack is generally reset only by the events at the beginning of each section of the training course. These are the events which are invoked when the student interrupts the training sequence with either a Menu interrupt (which invokes the event pointed to by the MENU object) or a Review interrupt (which invokes the event pointed to by the REVIEW object).
The initial Text, Video and Audio tasks, if any, are performed next at block 45. The order of these initial tasks is indicated by the sort order of the numerical values of the pointers 72b, 72c and 72d of the corresponding data structures. For a description of these initial tasks see the above descriptions of the corresponding data structures.
At block 46 the KEEP-- C-- COUNT flag (see Table 6) in the event structure 72 is tested to see if the character counter used in match branch tests should be reset at block 47. The character counter is reset by most events which include match branches corresponding to correct student data entries; it is not reset by many events which are designed to allow the student to recover from a mistaken data entry.
At block 48 the EXEC-- OBJ flag (see Table 6) in the event structure 72 is tested to see if the non-match (i.e., the screen match and the object) branches should be tested at this point. If the EXEC-- OBJ flag is set, then the match routine, described below with reference to FIGS. 5A-5B, is called at 49. The KEYIN parameter for the match routine is set to the NULL character at 49 so that, as explained below, the match branches are not tested.
At block 50 the keyboard input buffer is checked to see if the student has requested a Menu interrupt. See Table 5. If so the MENU object is tested at 51 to determine if the menu event is "defined" (i.e., if the MENU object is not equal to NULL). If the menu event is defined the next event pointer is set to the menu event, at block 52, and the process jumps to entry point H to block 121, described below.
The Timeout parameter 72f is tested at 54. If it is zero none of the branches will be tested and the process jumps to entry point G at block 84, where the pointer to the next event is set.
Block 56 begins the preparations for the timeout loop, which starts at entry point E (block 83). The CLEAR-- KBD flag (see Table 6) in the event structure 72 is tested to see if the keyboard buffer should be cleared. If so all unprocessed data entries from the student are deleted at block 56 and the student restarts with a clean slate.
The timeout clock is started at block 81 and the character counter is copied into the CX register at block 82.
At block 83, which is the beginning of the timeout loop, the timeout clock is tested to determine if the elapsed time since the timeout clock was last started exceeds the length of time specified by the timeout parameter 72f. Timeout generally occurs if the student fails to make a correct entry in the allotted time, but can be used for other purposes. If the timeout event is defined (i.e., not equal to the NULL event), as determined at block 84, the next event pointer is set to the timeout event 72g (block 86); otherwise the next event pointer is set to point to event after the current event (block 85). In either case the process then continues at entry point H.
If timeout has not yet occurred, the keyboard buffer is checked to see if the student has entered a Menu interrupt (block 88), Review interrupt (91), or Help interrupt (block 94). If any such interrupt has been entered, the corresponding MENU, REVIEW or HELP object is tested to see if it is defined (block 89, 92 or 95, respectively) and, if so, the next-event pointer is set equal to the corresponding object (block 90, 93 or 96, respectively). If the object is not defined the process continues to check for any of the other interrupts and then moves on to block 97, where the keyboard buffer is tested to see if any data entries have been made by the student. This test for a keyboard entry is not performed by a normal poll or read of the keyboard because that would cause a software interrupt, as explained below. Rather, the monitor directly tests a special register set by the keyboard input interrupt routine when input is received.
If a keyboard entry is found, the process jumps to entry point D to block 111 (FIG. 3C) where the process will test the branches, if any, in the event. If no keyboard entry is found, the type of keyboard request interrupt used by the operating system (DOS) 24 (which passed control to the training subsystem) is tested at 99. If DOS 24 was polling the keyboard (i.e., merely checking to see if any data was entered) DOS 24 is called at 101. This gives DOS 24 some CPU cycles, which it may need for proper operation. When DOS 24 performs another read or poll, control returns to the monitor process at the point where the monitor called DOS 24; in other words, control passes to block 102. If DOS 24 was reading the keyboard, DOS 24 is not called. Instead control is passed to block 102.
At block 102 the CRT-- LOOP flag (see Table 6) is tested to determine if the object branches and screen match branches of the event should be tested even though no keyboard input has been received. If the CRT-- LOOP flag is not set then the process resumes at entry point E to block 83 at the beginning of the timeout loop. If CRT-- LOOP flag is set then the process jumps to entry point F to block 112 (FIG. 3C).
Referring now to FIG. 3C, if a keyboard entry was found, the monitor process picks up at entry point D to block 111 where, in preparation for calling the Match routine, the keyboard entry is placed in the KEYIN object and the Character Counter and the corresponding CX register are incremented. The CX register is used to determine the position of the keyboard entry in the input string which is being compared with match strings in match branches.
At block 112 the process checks that the event has at least one branch. Note that an event may have only initial tasks and no branches; an event may even have no initial tasks and no branches, although such an event will generally not be particularly useful. Also note that if block 112 is entered through entry point F then KEYIN is equal to NULL because no keyboard entry was received by the system. If there are no branches (i.e., if first-branch pointer 72d is equal to zero) then the process resumes at entry point E to block 83 at the beginning of the timeout loop.
If there is at least one branch in the event then the Match routine is called at block 113. The flowchart for the Match routine is shown in FIGS. 5A and 5B.
If the Match routine found a "match", that means that the test specified by a branch caused the match flag to be set. As will be explained in more detail below, the match flag is set if a match branch finds an input string that matches the full length of the match string; if a screen match finds a string at the specified screen location that matches the screen-match string; or if any object branch test is satisfied. Also, when there is a match, the next event pointer is set to the match event specified in the branch structure.
If the match event is the NULL event (see block 116) then the timeout loop continues at block 117. At block 117 the CX register (which is generally equal to the character counter unless an object branch changed it when the Match routine was called) is compared with the Max-Keys parameter 72e in the event structure 72. If CX exceeds Max-Keys it usually means that the student has entered more keystrokes than is reasonable under the circumstances. If so, the process invokes the timeout event (if one is defined) by proceeding to block 84 (see FIG. 3B) via entry point G. If CX does not exceed Max-Keys the process resumes at entry point E to block 83 at the beginning of the timeout loop.
If the match event is not the NULL event the process continues at entry point H to block 121, where preparations are made to invoke the next event.
At block 121 the next event pointer is tested to determine if it equals the RETURN event (which is equal to -1 in the preferred embodiment). If the next event is the RETURN event then the pointer to the next event is obtained by popping the pointer from the event stack. Generally, the RETURN event is invoked only at the end of an event subroutine. However it is possible for any branch to specify the RETURN event as the match event, although this is generally not good courseware programming practice because the results may vary depending on the context in which the branch is tested. Thus if the next event is the RETURN event the process continues at block 122 where the event stack is tested to see if it is empty. Clearly the event stack should not be empty if a RETURN event is called for. But if this does happen the monitor is reset by restarting the monitor process at entry point A to block 41. The courseware can deliberately use this capability by invoking the RETURN event in order to reset the monitor process, for instance when no input has been received from the student for several minutes and it is presumed that the student has abandoned the training session.
If the next event is the RETURN event and the event stack is not empty, the pointer to the next event is popped from the event stack at block 124 and the process resumes at entry point B to block 42 (see FIG. 3A) at the beginning of the event loop.
If the next event is not the RETURN event the BR-- CALL flag (see Table 2) (i.e., the BR-- CALL flag of the branch which caused the "match" to occur) is tested at block 125. If the BR-- CALL flag is not set, the process resumes at entry point B to block 42 (see FIG. 3A) at the beginning of the event loop. If the BR-- CALL flag is set then the pointer to the event after the current event is pushed onto the event stack at block 126. Then the process resumes at entry point B to block 42 (see FIG. 3A) at the beginning of the event loop.
Referring to FIGS. 4A-4D, the subprocesses of the co-processor aspect of the invention are: the DOS subroutine (FIG. 4A), the SDOS subroutine (FIG. 4C), the keyboard request interrupt routine (FIG. 4B), the keyboard interrupt routine (FIG. 4D), and the clock tick routine (FIG. 4E).
The DOS subroutine is used by the main monitor routine to return control to the operating system (i.e., to give the operating system 24 CPU cycles) when operating system 24 has polled the keyboard but no keyboard entries have been received. At block 131 the state of the monitor is saved on the monitor stack. Then, at block 132 the state of the operating system is retrieved from the DOS state, and (at block 133) the monitor "returns" (i.e., exits) to the operating system 24.
The SDOS subroutine is used by the match routine (see FIGS. 5A-5B) to send a keyboard entry to the operating system 24. If the operating system 24 is trying to poll the keyboard (see block 141) then the SDOS subroutine waits (see blocks 142 to 144) until the operating system tries to read the keyboard. It does this by (1) saving the value of KEYIN on the monitor stack (at block 142) for later use; (2) calling DOS (see FIG. 4A) with KEYIN as a parameter (at block 143), indicating that there has been a keyboard entry; and (3) retrieving KEYIN from the monitor stack (at block 144) when the operating system 24 restores control to the monitor 27 via a poll or read. This is done until DOS tries to read the keyboard.
When DOS 24 tries to read the keyboard, the main branch of the SDOS routine begins at block 145, where DOS 24 is called using KEYIN as a parameter. When DOS 24 restores control to the monitor 27 (via a poll or read), the monitor checks whether DOS is trying to do a poll or a read. If it is trying to do a poll the routine exits back to the monitor 27 at block 149. If DOS is trying to do a read, KEYIN is set to NULL (at block 147) and DOS is called with KEYIN as a parameter (at block 148). This tells DOS that there is nothing in the keyboard buffer for it to read. (The monitor does this even if there is something in the keyboard buffer because it has yet to determine if that data should be passed to DOS). When DOS returns control to the monitor the SDOS routine exits at block 149 back to the match routine.
Referring to FIG. 4B, the Keyboard Request Interrupt routine is invoked every time DOS 24 tries to poll or read the keyboard buffer. This is a software interrupt. The entry points of the poll and read routines are usually determined by items in a vector table set up by the operating system 24. These vectors can be changed by software to point to different poll and read routines than the routines originally designated by the operating system 24. The process of initializing the monitor (see block 41 in FIG. 3A) includes resetting these vectors to point to the Keyboard Request Interrupt routine.
The only tasks of the Keyboard Request Interrupt routine are to save the state of DOS on the DOS stack (block 151); retrieve the Monitor state from the monitor stack (block 153) and return (i.e., exit) to the monitor (block 154). However, the first time the monitor is called there is no monitor state to retrieve. Therefore the routine tests at block 151 whether this is the first time that DOS has done a poll or read since the vector table entries for keyboard request interrupts were changed. If it is, the monitor state is not retrieved from the monitor stack (i.e., block 153 is skipped) and the routine exits to the monitor (which effectively transfers the process to entry point A of the main monitor routine in FIG. 3A).
Referring to FIG. 4D, the Hardware Keyboard Interrupt routine responds when the student enters data into the system. It acts as a preamble to the standard system routine (herein GOROM) for handling keyboard entries. Normal data entries continue to be handled by the system's GOROM routine. Only certain special entries receive special handling. Four of these special entries are keystroke combinations that are normally illegal or not allowed by the operating system 24. The fifth special entry is any keystroke while the monitor is in the pause state.
If the monitor is in the "pause state" the normal flow of the monitor process continues except that the clock used for the timeout loop is frozen. In the program listings, below, the pause state is implemented in the MS-- TICK routine by not incrementing the clock when the pause flag is set. Thus the system is not put into a "hard loop" when in the pause state. This allows system functions such as disk operations to continue even if the system appears to be in suspended animation. In effect the system is running at infinite speed (because no time is passing so far as the timeout clock is concerned) but appears to be stopped to the casual outside observer.
Block 161 is the entry point of the keyboard interrupt routine. If the monitor 27 is in the pause state (block 161) any keyboard entry will terminate the pause state. This basically involves (see block 162) resetting the pause state flag, restoring the CRT (to the state it was in before the pause state was entered), and dismissing the interrupt (block 163). In the preferred embodiment, the keyboard entry used to exit the pause state is thrown away--it is not used as input to the system.
If the monitor 27 was not in the pause state, the routine checks to see if any of the special entries has been made. A HELP request entry (`CNTRL` `SHIFT`), a REVIEW request entry (`CNTRL` `TAB`), or a MENU request entry (detected at block 164, 166, or 168, respectively) causes the corresponding flag to be marked (at block 165, 167, or 169, respectively) for later use by the main monitor routine. After the flag is marked the interrupt is dismissed at block 163. If the entry was not a HELP, REVIEW, or MENU request then it is checked to see if it was a PAUSE request (`ALT` `SPACE BAR`) at block 171. If the entry is not a PAUSE request the system GOROM routine is called (block 172) to process the entry as a normal keyboard entry. If the entry is a PAUSE request, the character on the CRT screen at the current cursor position is saved (block 173) and replaced with a flashing capital "P" (block 174). The pause flag is marked (for use in the clock routine) at block 175 and the interrupt is dismissed. As will be understood by those skilled in the art, any visual reminder that the system is in a pause state is equivalent to the flashing "P" scheme just described.
Referring to FIG. 4E, the clock tick routine is a software interrupt routine that is invoked automatically every time the system clock "ticks" (e.g., 18.2 times per second in the preferred embodiment). In block 300 the routine checks whether the monitor 27 is in the pause state. If it is, then the routine does nothing and just returns, via block 302. If the monitor is not in the pause state, then block 301 is executed. In the preferred embodiment, block 301 merely increments an internal counter called the master scheduler clock, which is used to determine when there is a timeout, as discussed above. In other embodiments, any monitor 27 or system function could be invoked from block 301. For instance, the monitor could be "returned" to under certain specified conditions (i.e., the DOS state would be saved and the monitor state retrieved from their respective stacks), much as though the operating system had performed a read. In another instance, the clock tick routine could periodically send a message to the system user or update a "clock" on one of the display devices.
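A compact sketch of that idea in C (hypothetical names; the actual listing is the MS-- TICK assembler routine):

volatile unsigned long master_clock = 0;  /* master scheduler clock */
volatile int pause_flag = 0;              /* set by the PAUSE keyboard handler */

/* Invoked on every hardware clock tick; the timeout clock simply
 * stops advancing while the monitor is in the pause state. */
void ms_tick_handler(void)
{
    if (!pause_flag)
        master_clock++;
}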
Referring to FIGS. 5A and 5B, there is shown a flow chart of the match routine process. Whenever there is a "match" the next event pointer is set to the match event 74b (at block 211) specified by the branch which found the match. If a branch is tested and does not result in a match the process continues (at block 216) with the next branch in the event, if there is one. If no match has been found and all the branches have been tested, the match routine exits (at block 218) back to the main monitor routine.
At block 181 the process is initiated by getting the pointer to the first branch from parameter 72d of the event structure. Block 182 begins the branch loop by getting the branch string 74f and branch flags. If the branch is a screen-match branch (i.e., if the BR-- CRT-- M flag is set) (see block 183) the CRT-- MATCH routine is called (block 184). The CRT-- MATCH routine determines whether the data on the CRT starting at the position indicated by the BR-- X and BR-- Y parameters 74d and 74e matches the branch string (also called the screen-match string). If there is a match it sets the match flag (which is tested at block 187) and the process continues at entry point E to block 211 on FIG. 5B.
If the branch is not a screen-match branch the process checks the BR-- OBJECT flag (at block 185) to see if it is an object branch. If the BR-- OBJECT flag is set, then the branch is an object branch and the OBJECT-- WORK routine is called (at block 186). See FIGS. 6A-6C for a flow chart of the OBJECT-- WORK routine. If the object branch results in a "match" the process is directed by a decision block 187 to entry E to block 211. If no match results from the object branch the process is directed to block 216 on FIG. 5B.
If the branch is a match branch the process checks (at block 188) to see if any data entries have been received. If not, the process continues with the next branch (if any) at block 215 on FIG. 5B. If data has been received, the preferred embodiment processes the input in one of two ways.
If the BR-- PASS flag is set (see decision block 189) the match string is a single wildcard character. See discussion above regarding wildcards. The wildcard routine (block 191) checks to see if the input data is within the set specified by the wildcard in the match string. If it is, the match flag is set.
If the BR-- PASS flag is not set, the match string can be any string of characters. The process at block 201 determines whether the substring comprising the previously entered data (entered since the character counter was last reset) and the last data entry matches the corresponding substring of the match string. If so there is a "submatch". (For example, if the match string is "PRINT" and the data entered so far is "PRI" there is a "submatch".)
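A toy sketch of the submatch test in C (a hypothetical helper, not the patent's code):

#include <string.h>

/* Returns nonzero when the first len input characters are a prefix of the
 * match string, e.g., input "PRI" against match string "PRINT". */
static int is_submatch(const char *input, size_t len, const char *match)
{
    return len <= strlen(match) && strncmp(input, match, len) == 0;
}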
The processing of wildcard and regular match branches is mostly the same after the check has been made for a wildcard match or a "submatch". At decision blocks 192 or 202, respectively, if there was no match or submatch the process continues at entry point D to block 216. If there was a wildcard match or submatch the BR-- DOS flag is tested at block 193 or 203, respectively. If the BR-- DOS flag is set, the last entry, which is in the KEYIN object, is passed to DOS by the SDOS routine at block 194 or 204, respectively. Next the BR-- CRT flag is tested at block 195 or 205, respectively, and if it is set then the KEYIN character is sent to the CRT but not to DOS at block 196 or 206, respectively.
At this point the processing of wildcard and regular match branches diverges slightly. In the processing of regular match branches there is a test at block 207 to determine if the input data matches the whole match string or only a portion of it. If only a portion of the match string has been matched the match flag is not set and the routine exits (at block 208) back to the main monitor routine. Otherwise the match flag is set (at block 209).
If there was a match, the next event pointer is set to the match event at block 211. If the BR-- CALL flag is set (see decision block 212) the next event is marked as a call event (at block 213) and will be treated as the beginning of an event subroutine. If the BR-- NOBR flag is not set (see decision block 214) the routine exits (at block 215) to the main monitor routine. If the BR-- NOBR flag is set the match routine continues at block 216 as though no match had been found.
At block 216 the match routine prepares to test the next branch in the event, if there is one, by resetting the match flag, saving the KEYIN object (which holds the latest data entry) for use by the next branch and getting the pointer to the next branch. If the pointer is null (see decision block 217) there are no more branches and (at block 218) the KEYIN object is set to NULL and the routine exits back to the main monitor routine. If there are more branches, the match process resumes at entry point A to block 182 at the beginning of the branch loop.
Referring to FIGS. 6A-6C, the OBJECT-- WORK routine is basically a straightforward implementation of a process for interpreting a string of characters representing one or more of the object operations shown in Table 4. As shown in Branch 3 of Table 1B, object operations are separated by semicolons.
Since two of the dedicated objects, CP and SHIFT, affect the physical state of the system, the states of these system parameters are stored at the beginning of the object routine (at block 211) and are used to set the system state (at block 265) just before the object routine exits (at block 266).
The main object loop starts at block 222 where the object and the operator of the next object equation are read in and a string pointer is set to the next character in the branch string.
If there is an operand (note that not all object equations have operands), it is decoded and its effective value is put in a variable denoted OPND in the flow chart. There are three types of operands in the preferred embodiment: events, objects, and numbers. The types are distinguished by the value of the first byte of the operand: events and objects have unique prefixes not equal to any numerical operand. If the operand is an event (decision block 223) OPND equals the value of the event pointer to the event (block 224). If the operand is an object (decision block 225), OPND equals the value of the object (block 226). If the operand is a number, the number is decoded (at block 227) as such. If there is no operand, OPND equals zero.
Next, at block 228 the monitor advances the string pointer past any blanks at the end of the equation until it reaches either a semicolon or the end of the branch string. This merely prepares for checking at the end of the object loop to see if the monitor is done with the object branch.
In addition to changing the value of an object the object routine can either set (at block 262) or reset (at block 261) the match flag if a branch equation is performed.
In the odd numbered blocks from 231 to 255, the process checks to see what type of operator (OP) is used in the object equation. In the even numbered blocks from 232 to 256 the operation is performed if the corresponding operator was found. After mathematical and logical equations the process always continues at entry point C to block 261. After branch equations the process continues at entry point C to block 261 if the test was not satisfied and continues at entry point D to block 262 if the test was satisfied. The "@" operator always continues at entry point D since it is used only to force a branch. After video equations the process always continues at entry point C. (Note that in the program listings below the routine at block 234 for the "V" operator is not implemented.)
If the process continues at block 261 (entry point C) the match flag is reset. If it continues at block 262 (entry point D) the match flag is set. Thus, when the object routine exits back to the main monitor routine the match flag is set only if the last object equation sets the match flag. The earlier object equations have no effect on the match flag as seen by the main monitor routine.
If all the object equations in the branch string have been processed (see decision block 263) then the shift state of the keyboard and the CRT cursor position are set in accordance with the current value of the SHIFT and CP objects (at block 265) and the object routine exits (at block 266) back to the match routine. If the branch string contains more object equations the string pointer is advanced (at block 264) and the process continues at entry point A to block 222 at the beginning of the object loop.
The current invention can be used as an expert system rather than a training system. By way of example, the invention can be used to supplement a medical diagnosis computer program. In such a system, the training/expert subsystem 30 interprets entered data in accordance with the context of the medical diagnosis computer program and provides additional visual information to the user, thereby facilitating the use of the expert system. It is anticipated that most expert systems using the invention will use the embodiment which incorporates a second display device, because the pictures provided by such a second display can greatly expand the power and usefulness of many expert systems.
Furthermore, the invention's method of separating training/expert system tasks from the application's tasks can be used to advantage in the design of new expert systems. Use of the invention will make possible the design of expert systems which would have been very difficult and cumbersome, if not impossible, using prior art techniques. For instance, an X-ray diagnosis expert system might contain an X-ray diagnosis computer program (i.e., an applications program) and courseware having a library of X-rays for display on the second display device and a set of corresponding events, coordinated with the computer program. The computer program and/or the courseware prompts the user of the system with questions relevant to analyzing X-rays. In response to the user's answers, different X-rays in the library are shown on the second display device. The goal of the expert system in this example is to maximize certain predefined indicia of similarity between the image displayed on the second display device and the X-ray that the user is trying to analyze. When the best match is found, the X-ray diagnosis computer program supplies the user with the diagnosis associated with the best-match X-ray shown on the second display device.
The current invention does not include the process for authoring courseware. Given a specification for a software training program (for a particular target program 15), anyone skilled in the art could build the necessary data structures as shown in FIGS. 7A-7D.
The following program listings comprise one embodiment of the monitor module, data input interrupt and data request interrupt portions of the invention. All the programs, except for VID-- WRK, are written in standard assembler language for the IBM PC. VID-- WRK is written in the C language. There are several texts available on the IBM PC assembler language and numerous texts on the C language. For ease of understanding the listings in this specification, the applicant refers to and incorporates as part of the disclosure, although not part of the invention, (1) a publication of International Business Machines, entitled IBM PC Hardware Reference Manual, copyrighted in 1982; and (2) a publication of Prentice Hall, entitled The C Programming Language, copyrighted in 1978.
Index to program listings:

PROGRAM NAME    PROGRAM FUNCTION
MAIN            Main routine for monitor module
NXT-- WRK       Point to next initial task
MATCH           Evaluate BRANCHES in an event
CRTM            Evaluate CRT match branches
OBJECT          Interpret an OBJECT string

"Utility" routines:
GET-- EVENT     Set up to use next event
DOS-- STATE     Transfer control back to the operating system
SDOS            Send a specific character back to the operating system
KB-- INT        Keyboard request interrupt routine; transfers control from operating system to training subsystem
HDW-- KB        Keyboard interrupt routine
INIT-- KB       Set keyboard interrupt vector to point to HDW-- KB routine
INIT-- DAT      Read in the event map and the event and audio files

Audio routines:
INIT-- AUD      Set up for audio
AUD-- WRK       Play an audio message

Text routine:
TXT-- WRK       Send text to the CRT, printer or the operating system

Video routines:
VID-- WRK       Play a specified video image or sequence

Clock routines:
MS-- TICK       Time keeper for timeout loop
GETTICK         Get elapsed time value
INIT-- MS       Set clock interrupt vector
START-- TICK    Start clock for timeout loop
CHK-- TICK      Check for timeout
WAIT            Kill time until end of pause
|
https://patents.google.com/patent/US4622013A/en
|
CC-MAIN-2018-17
|
refinedweb
| 14,129
| 55.58
|
True color GeoTIFF
The Sentinel-2 satellite mission has resulted in the collection and dissemination of imagery covering the earth's land surface with a revisit rate of 2 to 5 days. The sensors collect multi-band imagery, where each band is a portion of the electromagnetic spectrum. The Level 2A (L2A) product provides surface reflectance measures in a set of spectral bands; the visible bands B02 (blue), B03 (green), and B04 (red) are the ones used below.
When viewing multi-band imagery that includes data from outside the visible spectrum, we have to choose how to map each band to one of the three visible channels (red, green, or blue) available for rendering on digital displays. A true color composite is a rendering that displays visible blue (B02 from Sentinel-2) in the blue channel, visible green (B03) in the green channel, and visible red (B04) in the red channel. Any other mapping of satellite image bands to display channels is a false color composite.
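In the WebGL tile layer we create below, this band-to-channel mapping can be written as a style expression. A minimal sketch, assuming the GeoTIFF is read with B02, B03, and B04 as bands 1, 2, and 3:

const trueColorStyle = {
  // red, green, and blue display channels from bands 3 (B04), 2 (B03), 1 (B02)
  color: ['array', ['band', 3], ['band', 2], ['band', 1], 1],
};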
There is a collection of Sentinel-2 L2A products hosted as Cloud-Optimized GeoTIFFs on Amazon S3. In this exercise, we'll render one of these on a map.
First, reset things from previous exercises.
Now we'll import two new components we haven't used before:

- the ol/source/GeoTIFF source for working with multi-band raster data
- the ol/layer/WebGLTile layer for manipulating data tiles with shaders on the GPU

Update your main.js to load and render a remotely hosted GeoTIFF file on a map:
import GeoTIFF from 'ol/source/GeoTIFF.js';
import Map from 'ol/Map.js';
import Projection from 'ol/proj/Projection.js';
import TileLayer from 'ol/layer/WebGLTile.js';
import View from 'ol/View.js';
import {getCenter} from 'ol/extent.js';

const projection = new Projection({
  code: 'EPSG:32721',
  units: 'm',
});

// metadata from
const sourceExtent = [300000, 6090260, 409760, 6200020];

const source = new GeoTIFF({
  sources: [
    {
      url: '',
    },
  ],
});

const layer = new TileLayer({
  source: source,
});

new Map({
  target: 'map-container',
  layers: [layer],
  view: new View({
    projection: projection,
    center: getCenter(sourceExtent),
    extent: sourceExtent,
    zoom: 1,
  }),
});
The working example at shows a map with a GeoTIFF rendered in a WebGL tile layer.
The trickiest part here is finding the URL for an image that you might be interested in. To do that, you can try searching in the EO (Earth Observation) Browser. If you have the
aws command line interface installed, you can also list the
s3://sentinel-cogs/ bucket contents to get the paths for images by the Sentinel-2 grid cell identifier and date. For example, to search for images around Buenos Aires from September, 2021:
aws s3 ls s3://sentinel-cogs/sentinel-s2-l2a-cogs/21/H/UB/2021/9/ --no-sign-request
The next hardest part is figuring out what
projection and
extent are appropriate for the map view. In the next step, we'll make that easier.
|
https://openlayers.org/workshop/en/cog/true-color.html
|
CC-MAIN-2021-43
|
refinedweb
| 455
| 50.97
|
Introduction to Internationalization Programming
In the old days when only a few people used computers, they were, for the most part, English speakers. Today, computers are widely available, and differences in languages, traditions and cultures need to be reflected in the world of programming. This article introduces the GNU gettext system.
The idea of using the same program but changing its properties according to the cultural traditions of different peoples is called internationalization. However, because programmers like to make words shorter, instead of typing 20 characters they type only four: i18n. I18n means programming designed to handle many languages.
Once you've written an i18n program, you may want to add a new language. This is not an i18n problem. In general, you need a person who will translate the program's messages for a specific nation. This problem is called localization, or l10n (again, the first and last letters with the count of letters in between). L10n refers to the implementation of a specific language for internationalized software, or in other words, the creation of localized objects according to the specific region's rules.
Although each organization and company that designs and distributes software tries to implement this in its own way, in general, the i18n idea is simple. Software should be created with two parts in mind: common and nation-dependent. This second part is known as localized objects.
Hopefully, standards will make life more comfortable. The basic concept of locale was introduced by the ISO (International Organization for Standardization) with the C standard in 1990, which was expanded in 1995. POSIX also has rules for i18n, so the term POSIX locale is used together with National Language Support (NLS). Formally, NLS is not a part of POSIX but has some functions that help when using the POSIX locale. X11 has its own i18n implementation, but the common way for programmers is to move the X11 i18n/i10n “up a level” into the POSIX/NLS locale. [Other software has its own i18n and i10n. See “Bridging the Digital Divide in South Africa” for one way to handle it].
What should we take into account when speaking of locale? Of course, the name of the language, but that is not enough. Everybody knows there are differences between American and British English, so we also have to know where the particular language is used, or in other words, the territory, taking into account individual traditions and cultural rules.
Every language has its own system of writing, and sometimes even several. Languages have alphabets, or character repertoires, but computers deal with numbers. So, a character should be associated with a number. This kind of association is called a coded character set (CCS). There are plenty of them, and each has its own name, such as ASCII, ISO-8859-1, KOI8-U. Instead of CCS, the term charset is often used. There is no special standard for the name of a charset, so ISO-8859-1, ISO8859-1 and iso8859-1 all refer to the same thing. There are some definitions from IANA, the organization that is also responsible for the registration of charsets (see Resources). As you probably know, the X11 system has its own system for charset naming, and their document “Logical Font Description Conversion” (described in Jim Flower's article, see Resources) provides a good system for creating charset names and aliases.
Charsets are important. Some countries have several different charsets for the same language! In Ukraine, for instance, the same text may be displayed nicely in koi8-u but may be absolutely unreadable if a terminal uses the Ukrainian charsets iso8859-5, cp1251 or Unicode. In those cases, we would have to convert the text from one charset into another.
In order to take all of this into account, the POSIX locale defines a number of things that together are called locale categories. They are shown in Table 1. Knowing them is important; C functions work differently under different locales! Categories are reflected in the shell as environment variables with the same names. An example of using LC_ALL is shown in Listing 1.
Listing 1. Example of Using LC_ALL
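A sketch of such a session, with illustrative locale names and the commands the article mentions:

$ LC_ALL=en_US date
$ LC_ALL=fr_FR date            # date now speaks according to the fr_FR locale
$ LC_ALL=fr_FR cat nosuchfile  # cat's error message follows the locale, too
$ LC_ALL=fr_FR ./counter       # our example program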
The syntax to build a locale name looks like this:
language[_territory][.codeset][@modifier]
where language is represented by two lowercase letters, such as en for English and fr for French; territory is represented by two uppercase letters, such as GB for the United Kingdom and FR for France; and euro is a typical modifier (as in fr_FR@euro). So, you can change your locale by setting the corresponding environment variables. See Listing 1, where we use the programs date and cat and our example program, counter. Note, we use only language and territory; we cannot change the charset for the terminal with this command. Now imagine that the program messages are written in one charset but are output in another. POSIX does not have functions to determine the current charset, but XPG has nl_langinfo(). In some distributions, the man page for this function may be missing (Debian does not have it, but SuSE and Red Hat do). In any case, you can obtain additional information from /usr/include/langinfo.h. To determine the current charset, use the following code:
#include <locale.h>
#include <langinfo.h>
...
setlocale(LC_ALL, "");
printf("Current charset = %s\n", nl_langinfo(CODESET));

To convert text from one encoding into another for correct output, you can use the iconv() function. For more details, consult “Introduction to i18n” (see Resources).
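A minimal iconv(3) sketch, converting from KOI8-U to UTF-8 (the charsets and buffer sizes are illustrative):

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char in[] = "some koi8-u bytes";      /* text in the source charset */
    char out[64];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out);

    iconv_t cd = iconv_open("UTF-8", "KOI8-U");   /* tocode, fromcode */
    if (cd == (iconv_t) -1) { perror("iconv_open"); return 1; }
    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t) -1)
        perror("iconv");
    iconv_close(cd);
    fwrite(out, 1, sizeof(out) - outleft, stdout);
    return 0;
}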
To provide localized output, a message catalog is created for each locale. This means that all software messages are kept separately from the program, and each program may have (and must have) its own catalog. NLS provides a set of utilities for creating and supporting such catalogs, as well as functions for extracting information according to three keys: 1) program name, 2) current categories of locale and 3) pointer to a particular message to be output.
There are two general implementations of the NLS mechanism:
X/Open Portability Guide XPG3/XPG4/XPG5 with the functions catopen(), catgets() and catclose() and the gencat utility.
SUN XView with functions gettext() and textdomain(). The GNU Project has its own fully compatible release called GNU gettext.
Usually programs, as well as system libraries, use one (or even two) NLS systems.
Although XPG5 is included in the UNIX specification version 2 and all versions of UNIX support it, GNU gettext is the most popular solution on Linux.
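A minimal sketch of GNU gettext usage in C; the domain name "myprog" is an illustrative value, not from the article:

#include <libintl.h>
#include <locale.h>
#include <stdio.h>
#define _(s) gettext(s)

int main(void)
{
    setlocale(LC_ALL, "");                         /* honor the user's locale */
    bindtextdomain("myprog", "/usr/share/locale"); /* where catalogs live */
    textdomain("myprog");                          /* select this program's catalog */
    printf(_("Hello, world!\n"));                  /* translated if a catalog exists */
    return 0;
}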
The POSIX locale has the following components:
Locale API, i.e., subroutines like setlocale(), isalpha(), etc.
Shell environment variables to manage locale categories.
The locale utility to get information about the current locale; see man locale for more details.
Objects of localization. The default directory for their location is /usr/share/locale/.
|
http://www.linuxjournal.com/article/6176?quicktabs_1=2
|
CC-MAIN-2014-15
|
refinedweb
| 1,136
| 55.95
|
We can convert a Python dict to JSON using the json.dumps() function, which accepts a Python dict as an argument and returns a JSON-formatted string. In this article, we will walk through the implementation step by step.
1. How to Convert Python dict to JSON?
First, we will create a Python dictionary in Step 1. In Step 2, we will convert it into a JSON string.
Step 1:
Here we will create a Python dictionary with some keys and values.
#sample python dictionary
python_dict = {
    "key1": "Value1",
    "key2": "Value2",
    "key3": "Value3",
}
Step 2:
Let's use the json.dumps() function. Since it is part of the json module, we need to import json before using it. Let's see how.
import json

JSON_obj = json.dumps(python_dict)
Complete code with Output –
Here is the complete code with its output.
import json

#sample python dictionary
python_dict = {
    "key1": "Value1",
    "key2": "Value2",
    "key3": "Value3",
}

#Converting Python dict to JSON
JSON_obj = json.dumps(python_dict)

#printing the JSON object
print(JSON_obj)
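Running this prints the following JSON string:

{"key1": "Value1", "key2": "Value2", "key3": "Value3"}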
Dictionary to JSON (Formatting)
So far, we have seen the basic conversion. Now we will see how to format the JSON output.
1.1 indent:
We can use the indent parameter to pretty-print the output; it takes an integer number of spaces per indentation level. Let's see an example.
JSON_obj = json.dumps(python_dict, indent=6)
print(JSON_obj)
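With indent=6, the output looks like this:

{
      "key1": "Value1",
      "key2": "Value2",
      "key3": "Value3"
}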
1.2 sort_keys:
This parameter sorts the keys of the JSON output. Here is the syntax.
JSON_obj=json.dumps(python_dict, sort_keys=True)
1.3 separators:
We can use the separators parameter to change the default separators.
JSON_obj=json.dumps(python_dict, separators=(". ", " = "))
This may look confusing because separators takes a tuple of two values. The first value replaces the default "," between items, and the second replaces the default ":" between keys and values. So in the example above, ". " separates the items and " = " separates keys from values.
import json

#sample python dictionary
python_dict = {
    "key1": "Value1",
    "key2": "Value2",
    "key3": "Value3",
}

JSON_obj = json.dumps(python_dict, separators=(". ", " = "))
print(JSON_obj)
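The output then uses the custom separators:

{"key1" = "Value1". "key2" = "Value2". "key3" = "Value3"}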
2. How to Convert a dict to a JSON File?
Here we will use the json.dump() function in place of json.dumps(). Let's see the implementation.
import json

#sample python dictionary
python_obj = {"key1": "Value1", "key2": "Value2", "key3": "Value3"}

with open("generated.json", "w") as f:
    json.dump(python_obj, f)
3. How to Convert JSON to a dict?
We can convert a JSON string to a dict using the json.loads() function. The implementation is simple and self-explanatory.
import json

#sample JSON string
JSON_obj = '{"key1": "Value1","key2": "Value2", "key3": "Value3"}'

python_obj = json.loads(JSON_obj)
print(python_obj)
I hope you can now easily perform the dict to JSON conversion and vice versa. Please share your views in the comment box.
Thanks
Data Science Learner Team
|
https://www.datasciencelearner.com/convert-python-dict-to-json/
|
CC-MAIN-2021-39
|
refinedweb
| 461
| 62.14
|
#include <priv.h>

int priv_set(priv_op_t op, priv_ptype_t which, ...);
boolean_t priv_ineffect(const char *priv);
The priv_set() function is a convenient wrapper for the setppriv(2) function. It takes three or more arguments. The operation argument, op, can be one of PRIV_OFF, PRIV_ON or PRIV_SET. The which argument is the name of the privilege set to change. The third argument is a list of zero or more privilege names terminated with a null pointer. If which is the special pseudo set PRIV_ALLSETS, the operation should be applied to all privilege sets.
The specified privileges are converted to a binary privilege set and setppriv() is called with the same op and which arguments. When called with PRIV_ALLSETS as the value for the which argument, setppriv() is called for each set in turn, aborting on the first failed call.
The priv_ineffect() function is a convenient wrapper for the getppriv(2) function. The priv argument specifies the name of the privilege whose presence in the effective set is to be checked.
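A hedged usage sketch (not taken from the manual itself); the privilege name is illustrative:

#include <priv.h>
#include <stdio.h>

int main(void)
{
    /* Remove the proc_fork privilege from the effective set.
     * The list of privilege names ends with a null pointer. */
    if (priv_set(PRIV_OFF, PRIV_EFFECTIVE, "proc_fork", NULL) != 0)
        perror("priv_set");

    /* Verify the change with priv_ineffect(). */
    if (!priv_ineffect("proc_fork"))
        printf("proc_fork is no longer in effect\n");
    return 0;
}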
Upon successful completion, priv_set() returns 0. Otherwise, -1 is returned and errno is set to indicate the error.
If priv is a valid privilege that is a member of the effective set, priv_ineffect() returns B_TRUE. Otherwise, it returns B_FALSE and sets errno to indicate the error.
The priv_set() function will fail if:
EINVAL
The value of op or which is out of range.

ENOMEM
Insufficient memory was allocated.

EPERM
The application attempted to add privileges to PRIV_LIMIT or PRIV_PERMITTED, or the application attempted to add privileges to PRIV_INHERITABLE or PRIV_EFFECTIVE that were not in PRIV_PERMITTED.
The priv_ineffect() function will fail if:
EINVAL
The privilege specified by priv is invalid.

ENOMEM
Insufficient memory was allocated.
See attributes(5) for descriptions of the following attributes:
setppriv(2), priv_str_to_set(3C), attributes(5), privileges(5)
|
http://docs.oracle.com/cd/E19082-01/819-2243/priv-set-3c/index.html
|
CC-MAIN-2014-23
|
refinedweb
| 312
| 57.57
|
Lab 7: Iterators and Generators, Object-Oriented Programming
Due by 11:59pm on Tuesday, July 20.
Starter Files
Download lab.
Iterators
An iterable is any object that can be iterated through, or gone through one element at a time. One construct that we've used to iterate through an iterable is a for loop:
for elem in iterable:
    # do something
for loops work on any object that is iterable. We previously described it
as working with any sequence -- all sequences are iterable, but there are other
objects that are also iterable! We define an iterable as an object on which
calling the built-in iter function returns an iterator. An iterator is another type of object that allows us to iterate through an iterable by keeping track of which element is next in the sequence.
To illustrate this, consider the following block of code, which does the exact same thing as the for statement above:
iterator = iter(iterable)
try:
    while True:
        elem = next(iterator)
        # do something
except StopIteration:
    pass
Here's a breakdown of what's happening:
- First, the built-in iter function is called on the iterable to create a corresponding iterator.
- To get the next element in the sequence, the built-in next function is called on this iterator.
- When next is called but there are no elements left in the iterator, a StopIteration error is raised. In the for loop construct, this exception is caught and execution can continue.
Calling iter on an iterable multiple times returns a new iterator each time with distinct states (otherwise, you'd never be able to iterate through an iterable more than once). You can also call iter on the iterator itself, which will just return the same iterator without changing its state. However, note that you cannot call next directly on an iterable.
Let's see the iter and next functions in action with an iterable we're already familiar with -- a list.
>>> lst = [1, 2, 3, 4]
>>> next(lst)              # Calling next on an iterable
TypeError: 'list' object is not an iterator
>>> list_iter = iter(lst)  # Creates an iterator for the list
>>> list_iter
<list_iterator object ...>
>>> next(list_iter)        # Calling next on an iterator
1
>>> next(list_iter)        # Calling next on the same iterator
2
>>> next(iter(list_iter))  # Calling iter on an iterator returns itself
3
>>> list_iter2 = iter(lst)
>>> next(list_iter2)       # Second iterator has new state
1
>>> next(list_iter)        # First iterator is unaffected by second iterator
4
>>> next(list_iter)        # No elements left!
StopIteration
>>> lst                    # Original iterable is unaffected
[1, 2, 3, 4]
Since you can call iter on iterators, this tells us that they are also iterables! Note that while all iterators are iterables, the converse is not true - that is, not all iterables are iterators. You can use iterators wherever you can use iterables, but note that since iterators keep their state, they're only good to iterate through an iterable once:
>>> list_iter = iter([4, 3, 2, 1])
>>> for e in list_iter:
...     print(e)
4
3
2
1
>>> for e in list_iter:
...     print(e)
Analogy: An iterable is like a book (one can flip through the pages) and an iterator for a book would be a bookmark (saves the position and can locate the next page). Calling iter on a book gives you a new bookmark independent of other bookmarks, but calling iter on a bookmark gives you the bookmark itself, without changing its position at all. Calling next on the bookmark moves it to the next page, but does not change the pages in the book. Calling next on the book wouldn't make sense semantically. We can also have multiple bookmarks, all independent of each other.
Iterable Uses
We know that lists are one type of built-in iterable objects. You may have also
encountered the
range(start, end) function, which creates an iterable of
ascending integers from start (inclusive) to end (exclusive).
>>> for x in range(2, 6):
...     print(x)
...
2
3
4
5
Ranges are useful for many things, including performing some operations for a particular number of iterations or iterating through the indices of a list.
There are also some built-in functions that take in iterables and return useful results:
- map(f, iterable) - Creates an iterable over f(x) for x in iterable. In some cases, computing a list of the values in this iterable will give us the same result as [f(x) for x in iterable]. However, it's important to keep in mind that iterators can potentially have infinite values because they are evaluated lazily, while lists cannot have infinite elements.
- filter(f, iterable) - Creates an iterable over x for each x in iterable if f(x) is a true value
- zip(*iterables) - Creates an iterable over co-indexed tuples with elements from each of the iterables
- reversed(iterable) - Creates an iterator over all the elements in the input iterable in reverse order
- list(iterable) - Creates a list containing all the elements in the input iterable
- tuple(iterable) - Creates a tuple containing all the elements in the input iterable
- sorted(iterable) - Creates a sorted list containing all the elements in the input iterable
- reduce(f, iterable) - Must be imported from functools. Applies a function of two arguments f cumulatively to the items of iterable, from left to right, so as to reduce the sequence to a single value.
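A few of these in action in a quick interactive session (our own illustration, not part of the lab):

>>> list(map(lambda x: x * 2, [1, 2, 3]))
[2, 4, 6]
>>> list(filter(lambda x: x % 2 == 0, [1, 2, 3, 4]))
[2, 4]
>>> list(zip([1, 2], ['a', 'b']))
[(1, 'a'), (2, 'b')]
>>> from functools import reduce
>>> reduce(lambda x, y: x + y, [1, 2, 3, 4])
10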
Generators
We can create our own custom iterators by writing a generator function, which returns a special type of iterator called a generator. Generator functions have yield statements within the body of the function instead of return statements. Calling a generator function will return a generator object and will not execute the body of the function.
For example, let's consider the following generator function:
def countdown(n):
    print("Beginning countdown!")
    while n >= 0:
        yield n
        n -= 1
    print("Blastoff!")
Calling countdown(k) will return a generator object that counts down from k to 0. Since generators are iterators, we can call iter on the resulting object, which will simply return the same object. Note that the body is not executed at this point; nothing is printed and no numbers are output.
>>> c = countdown(5)
>>> c
<generator object countdown ...>
>>> c is iter(c)
True
So how is the counting done? Again, since generators are iterators, we call next on them to get the next element! The first time next is called, execution begins at the first line of the function body and continues until the yield statement is reached. The result of evaluating the expression in the yield statement is returned. The following interactive session continues from the one above.
>>> next(c)
Beginning countdown!
5
Unlike functions we've seen before in this course, generator functions can remember their state. On any consecutive calls to next, execution picks up from the line after the yield statement that was previously executed. Like the first call to next, execution will continue until the next yield statement is reached. Note that because of this, Beginning countdown! doesn't get printed again.
>>> next(c)
4
>>> next(c)
3
The next 3 calls to next will continue to yield consecutive descending integers until 0. On the following call, a StopIteration error will be raised because there are no more values to yield (i.e. the end of the function body was reached before hitting a yield statement).
>>> next(c)
2
>>> next(c)
1
>>> next(c)
0
>>> next(c)
Blastoff!
StopIteration
Separate calls to countdown will create distinct generator objects with their own state. Usually, generators shouldn't restart. If you'd like to reset the sequence, create another generator object by calling the generator function again.
>>> c1, c2 = countdown(5), countdown(5)
>>> c1 is c2
False
>>> next(c1)
5
>>> next(c2)
5
Here is a summary of the above:
- A generator function has a yield statement and returns a generator object.
- Calling the iter function on a generator object returns the same object without modifying its current state.
- The body of a generator function is not evaluated until next is called on a resulting generator object. Calling the next function on a generator object computes and returns the next object in its sequence. If the sequence is exhausted, StopIteration is raised.
- A generator "remembers" its state for the next next call. Therefore, the first next call works like this:
  - Enter the function and run until the line with yield.
  - Return the value in the yield statement, but remember the state of the function for future next calls.
- And subsequent next calls work like this:
  - Re-enter the function, start at the line after the yield statement that was previously executed, and run until the next yield statement.
  - Return the value in the yield statement, but remember the state of the function for future next calls.
- Calling a generator function returns a brand new generator object (like calling iter on an iterable object).
- A generator should not restart unless it's defined that way. To start over from the first element in a generator, just call the generator function again to create a new generator.
Another useful tool for generators is the yield from statement (introduced in Python 3.3). yield from will yield all values from an iterator or iterable.
>>> def gen_list(lst):
...     yield from lst
...
>>> g = gen_list([1, 2, 3, 4])
>>> next(g)
1
>>> next(g)
2
>>> next(g)
3
>>> next(g)
4
>>> next(g)
StopIteration
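yield from also combines nicely with recursion; a quick illustration of ours (not from the lab):

>>> def countdown_gen(n):
...     if n >= 0:
...         yield n
...         yield from countdown_gen(n - 1)
...
>>> list(countdown_gen(3))
[3, 2, 1, 0]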
Object-Oriented Programming
Object-oriented programming (OOP) is a style of programming that allows you to think of code in terms of "objects." Here's an example of a Car class:
class Car:
    # A sketch consistent with the examples below; the lab's exact
    # definition may differ slightly.
    num_wheels = 4

    def __init__(self, color):
        self.wheels = Car.num_wheels
        self.color = color

    def drive(self):
        return self.color + ' car goes vroom!'

    def pop_tire(self):
        if self.wheels > 0:
            self.wheels -= 1
- class: a blueprint for how to build a certain type of object. The Car class (shown above) describes the behavior and data that all Car objects have.
instance: a particular occurrence of a class. In Python, we create instances of a class like this:
>>> my_car = Car('red')
my_car is an instance of the Car class.
data attributes: a variable that belongs to the instance (also called instance variables). Think of a data attribute as a quality of the object: cars have wheels and color, so we have given our Car instance self.wheels and self.color attributes. We can access attributes using dot notation:
>>> my_car.color
'red'
>>> my_car.wheels
4
method: Methods are just like normal functions, except that they are bound to an instance. Think of a method as a "verb" of the class: cars can drive and also pop their tires, so we have given our Car instance the methods drive and pop_tire. We call methods using dot notation:
>>> my_car = Car('red')
>>> my_car.drive()
'red car goes vroom!'
constructor: As with data abstraction, constructors build an instance of the class. The constructor for car objects is Car(color). When Python calls that constructor, it immediately calls the __init__ method. That's where we initialize the data attributes:
def __init__(self, color):
    self.wheels = Car.num_wheels
    self.color = color
The constructor takes in one argument, color. As you can see, this constructor also creates the self.wheels and self.color attributes.
self: in Python, self is the first parameter for many methods (in this class, we will only use methods whose first parameter is self). When a method is called, self is bound to an instance of the class. For example:
>>> car = Car('red')
>>> car.drive()
'red car goes vroom!'
Notice that the drive method takes in self as an argument, but it looks like we didn't pass one in! This is because the dot notation implicitly passes in car as self for us.
Required Questions
Iterators and Generators
Generators also allow us to represent infinite sequences, such as the sequence of natural numbers (1, 2, ...) produced by the naturals function shown below!
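The naturals generator referenced by the doctests below follows a standard pattern; this reconstruction is an assumption, and the lab's exact wording may differ:

def naturals():
    """A generator function that yields the infinite sequence of
    natural numbers, starting at 1."""
    i = 1
    while True:
        yield i
        i += 1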
Relevant Topics: Iterators and Generators
Q1: Scale
Write a generator function scale(it, multiplier), which yields the elements of the iterable it, multiplied by multiplier.
As an extra challenge, try writing this function using a yield from statement! A yield from statement yields the values from an iterator one at a time.
def scale(it, multiplier):
    """Yield elements of the iterable it multiplied by a number multiplier.

    >>> m = scale([1, 5, 2], 5)
    >>> type(m)
    <class 'generator'>
    >>> list(m)
    [5, 25, 10]
    >>> m = scale(naturals(), 2)
    >>> [next(m) for _ in range(5)]
    [2, 4, 6, 8, 10]
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q scale
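If you want to compare approaches, here is one possible solution (not necessarily the official one) that takes up the extra challenge and uses yield from with a generator expression:

def scale(it, multiplier):
    # Yield each element of the iterable, multiplied on the way out.
    yield from (elem * multiplier for elem in it)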
Q2: Hailstone
Write a generator function that outputs the hailstone sequence starting at number n.
Here's a quick reminder of how the hailstone sequence is defined:
- Pick a positive integer n as the start.
- If n is even, divide it by 2.
- If n is odd, multiply it by 3 and add 1.
- Continue this process until n is 1.
Note: It is highly encouraged (though not required) to try writing a solution using recursion for some extra practice. Since hailstone returns a generator, you can yield from a call to hailstone!
def hailstone(n):
    """Yields the elements of the hailstone sequence starting at n.

    >>> for num in hailstone(10):
    ...     print(num)
    ...
    10
    5
    16
    8
    4
    2
    1
    """
    "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q hailstone
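One possible recursive solution (again, not necessarily the official one), which uses yield from on a recursive call as the note above suggests:

def hailstone(n):
    yield n
    if n == 1:
        return                           # the sequence stops at 1
    elif n % 2 == 0:
        yield from hailstone(n // 2)     # n is even: divide it by 2
    else:
        yield from hailstone(3 * n + 1)  # n is odd: multiply by 3 and add 1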
WWPD: Objects
Q3: The Car class
Note: These questions use inheritance. For an overview of inheritance, see the inheritance portion of Composing Programs.
Below is the definition of a Car class that we will be using in the following WWPD questions. Note: This definition can also be found in car.py.
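The class body here is a sketch reconstructed to match the transcripts that follow; details such as the gas cost of driving are inferred assumptions, and the authoritative definition lives in car.py:

class Car:
    num_wheels = 4
    gas = 30

    def __init__(self, make, model):
        self.make = make
        self.model = model
        self.wheels = Car.num_wheels
        self.gas = Car.gas

    def drive(self):
        # Assumption: driving requires all wheels and positive gas,
        # and each drive consumes 10 gas.
        if self.wheels < Car.num_wheels or self.gas <= 0:
            return 'Cannot drive!'
        self.gas -= 10
        return self.make + ' ' + self.model + ' goes vroom!'

    def pop_tire(self):
        if self.wheels > 0:
            self.wheels -= 1

    def fill_gas(self):
        # Assumption: refueling adds 20 gas.
        self.gas += 20
        return 'Gas level: ' + str(self.gas)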
Use Ok to test your knowledge with the following "What Would Python Display?" questions:

python3 ok -q wwpd-car -u
If an error occurs, type Error. If nothing is displayed, type Nothing.
>>> deneros_car = Car('Tesla', 'Model S')
>>> deneros_car.model
______
'Model S'
>>> deneros_car.gas = 10
>>> deneros_car.drive()
______
'Tesla Model S goes vroom!'
>>> deneros_car.drive()
______
'Cannot drive!'
>>> deneros_car.fill_gas()
______
'Gas level: 20'
>>> deneros_car.gas
______
20
>>> Car.gas
______
30
>>> deneros_car = Car('Tesla', 'Model S')
>>> deneros_car.wheels = 2
>>> deneros_car.wheels
______
2
>>> Car.num_wheels
______
4
>>> deneros_car.drive()
______
'Cannot drive!'
>>> Car.drive()
______
Error (TypeError)
>>> Car.drive(deneros_car)
______
'Cannot drive!'
For the following, we reference the MonsterTruck class below. Note: The MonsterTruck class can also be found in car.py.
class MonsterTruck(Car):
    size = 'Monster'

    def rev(self):
        print('Vroom! This Monster Truck is huge!')

    def drive(self):
        self.rev()
        return Car.drive(self)
>>> deneros_car = MonsterTruck('Monster', 'Batmobile')
>>> deneros_car.drive()
______
Vroom! This Monster Truck is huge!
'Monster Batmobile goes vroom!'
>>> Car.drive(deneros_car)
______
'Monster Batmobile goes vroom!'
>>> MonsterTruck.drive(deneros_car)
______
Vroom! This Monster Truck is huge!
'Monster Batmobile goes vroom!'
>>> Car.rev(deneros_car)
______
Error (AttributeError)
Magic: The Lambda-ing
In the next part of this lab, we will be implementing a card game! This game is inspired by the similarly named Magic: The Gathering. If you're familiar with the original game, you may notice some differences here. Each player has a deck and a hand of cards, and each turn a player draws a random card from their deck. If a player's deck is empty when they try to draw, they will automatically lose the game. Cards have a name, an attack statistic, and a defense statistic. Each round, the played card with the higher power wins, where a card's power is its attack minus half of the opposing card's defense. For example, if, say, Player 1 plays a card with 2000 attack/1000 defense and Player 2 plays a card with 1500 attack/3000 defense, then Player 1's power is 2000 - 3000/2 = 500 while Player 2's is 1500 - 1000/2 = 1000, so Player 2 wins the round. Some cards also have special effects when played:
- A Tutor card will cause the opponent to discard and re-draw the first 3 cards in their hand.
- A TA card will swap the opponent card's attack and defense.
- A Professor card will add the opponent card's attack and defense to all cards in the player's deck and then remove all cards in the opponent's deck that share the opponent card's attack or defense!
These are a lot of rules to remember, so refer back here if you need to review them, and let's start making the game!
Q4: Making Cards

To play the game, we need cards! Fill in the __init__ method of the Card class, which sets up a card's name, attack, and defense, and then implement its power method. (The class header and constructor skeleton below are reconstructed; the surviving docstring and tests pin down the behavior.)

class Card:
    cardtype = 'Staff'

    def __init__(self, name, attack, defense):
        "*** YOUR CODE HERE ***"

    def power(self, opponent_card):
        """
        Calculate power as:
        (player card's attack) - (opponent card's defense)/2

        >>> staff_member = Card('staff', 400, 300)
        >>> other_staff = Card('other', 300, 500)
        >>> staff_member.power(other_staff)
        150.0
        >>> other_staff.power(staff_member)
        150.0
        >>> third_card = Card('third', 200, 400)
        >>> staff_member.power(third_card)
        200.0
        >>> third_card.power(staff_member)
        50.0
        """
        "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q Card.__init__
python3 ok -q Card.power
Q5: Aggro or Control

Implement the Player class. A player is constructed with Player(deck, name), keeps a hand of cards drawn from their deck, and plays cards from that hand; the doctests in the later questions rely on a Player's deck, hand, and name attributes.
Submit
Make sure to submit this assignment by running:
python3 ok --submit
Optional Questions
The following code-writing questions will all be in classes.py.

For the following sections, do not overwrite any lines already provided in the code. Additionally, make sure to uncomment any provided calls to print once you've finished implementing each method.

Q6: Tutors

Let's give Tutors an effect! Implement the effect method for TutorCard, which causes the opponent to discard the first 3 cards in their hand and draw 3 new cards from their deck. (The question header and class header here are reconstructed; the docstring and skeleton survive intact.)

class TutorCard(Card):
    cardtype = 'Tutor'

    def effect(self, opponent_card, player, opponent):
        """
        Discard the first 3 cards in the opponent's hand and have
        them draw the same number of cards from their deck.

        >>> from cards import *
        >>> player1, player2 = Player(player_deck, 'p1'), Player(opponent_deck, 'p2')
        >>> opponent_card = Card('other', 500, 500)
        >>> tutor_test = TutorCard('Tutor', 500, 500)
        >>> initial_deck_length = len(player2.deck.cards)
        >>> tutor_test.effect(opponent_card, player1, player2)
        p2 discarded and re-drew 3 cards!
        >>> len(player2.hand)
        5
        >>> len(player2.deck.cards) == initial_deck_length - 3
        True
        """
        "*** YOUR CODE HERE ***"
        # Uncomment the line below when you've finished implementing this method!
        # print('{} discarded and re-drew 3 cards!'.format(opponent.name))
Use Ok to test your code:
python3 ok -q TutorCard.effect
Q7: TAs: Shift
Let's add an effect for TAs now! Implement the
effect method for TAs, which
swaps the attack and defense of the opponent's card.
class TACard(Card):
    cardtype = 'TA'

    def effect(self, opponent_card, player, opponent):
        """
        Swap the attack and defense of an opponent's card.

        >>> from cards import *
        >>> player1, player2 = Player(player_deck, 'p1'), Player(opponent_deck, 'p2')
        >>> opponent_card = Card('other', 300, 600)
        >>> ta_test = TACard('TA', 500, 500)
        >>> ta_test.effect(opponent_card, player1, player2)
        >>> opponent_card.attack
        600
        >>> opponent_card.defense
        300
        """
        "*** YOUR CODE HERE ***"
Use Ok to test your code:
python3 ok -q TACard.effect
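One way to implement the swap (a sketch, not necessarily the official solution) is a single parallel assignment:

def effect(self, opponent_card, player, opponent):
    # Swap the two statistics in one tuple assignment.
    opponent_card.attack, opponent_card.defense = (
        opponent_card.defense, opponent_card.attack)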
Q8: The Professor Arrives
A new challenger has appeared! Implement the
effect method for the Professor,
who adds the opponent card's attack and defense to all cards in the player's
deck and then removes all cards in the opponent's deck that have the same
attack or defense as the opponent's card.
Note: You might run into trouble when you mutate a list as you're iterating through it. Try iterating through a copy instead! You can use slicing to copy a list:
>>> lst = [1, 2, 3, 4]
>>> copy = lst[:]
>>> copy
[1, 2, 3, 4]
>>> copy is lst
False
class ProfessorCard(Card):
    cardtype = 'Professor'

    def effect(self, opponent_card, player, opponent):
        """
        Adds the attack and defense of the opponent's card to all
        cards in the player's deck, then removes all cards in the
        opponent's deck that share an attack or defense stat with
        the opponent's card.

        >>> test_card = Card('card', 300, 300)
        >>> professor_test = ProfessorCard('Professor', 500, 500)
        >>> opponent_card = test_card.copy()
        >>> test_deck = Deck([test_card.copy() for _ in range(8)])
        >>> player1, player2 = Player(test_deck.copy(), 'p1'), Player(test_deck.copy(), 'p2')
        >>> professor_test.effect(opponent_card, player1, player2)
        3 cards were discarded from p2's deck!
        >>> [(card.attack, card.defense) for card in player1.deck.cards]
        [(600, 600), (600, 600), (600, 600)]
        >>> len(player2.deck.cards)
        0
        """
        orig_opponent_deck_length = len(opponent.deck.cards)
        "*** YOUR CODE HERE ***"
        discarded = orig_opponent_deck_length - len(opponent.deck.cards)
        if discarded:
            # Uncomment the line below when you've finished implementing this method!
            # print('{} cards were discarded from {}\'s deck!'.format(discarded, opponent.name))
            return
Use Ok to test your code:
python3 ok -q ProfessorCard.effect
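A possible implementation (a sketch that assumes deck.cards is a plain list of Card objects, which the doctest supports): boost the player's deck first, then scan a copy of the opponent's deck while removing, exactly the iterate-over-a-copy pattern shown above. The skeleton's provided counting and print lines then report the number discarded:

# body for ProfessorCard.effect, in place of "*** YOUR CODE HERE ***"
for card in player.deck.cards:
    card.attack += opponent_card.attack
    card.defense += opponent_card.defense
# Iterate over a copy so removing cards doesn't skip elements.
for card in opponent.deck.cards[:]:
    if (card.attack == opponent_card.attack
            or card.defense == opponent_card.defense):
        opponent.deck.cards.remove(card)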
After you complete this problem, we'll have a fully functional game of Magic: The Lambda-ing! This doesn't have to be the end, though - we encourage you to get creative with more card types, effects, and even adding more custom cards to your deck!
From seed to full bloom, Ambrose takes us through the steps to grow a domain-specific language in Clojure.
Lisps like Clojure are well suited to creating rich DSLs that integrate seamlessly into the language.
You may have heard Lisps boasting about code being data and data being code. In this article we will define a DSL that benefits handsomely from this fact.
We will see our DSL evolve from humble beginnings, using successively more of Clojure’s powerful and unique means of abstraction.
The Mission
Our goal will be to define a DSL that allows us to generate various scripting languages. The DSL code should look similar to regular Clojure code.
For example, we might take a single Clojure form, such as a println call, and generate from it either the equivalent Bash script (an echo command) or the equivalent Windows Batch output.
We might, for example, use this DSL to dynamically generate scripts to perform maintenance tasks on server farms.
Baby Steps: Mapping to Our Domain Language
I like Bash, so let’s start with a Bash script generator.
To start, we need to expose some parallels between Clojure’s core types and our domain language.
So which Clojure types have simple analogues in Bash script?
Strings and numbers should just simply return their String representation, so we will start with those.
Let’s define a function emit-bash-form that takes a Clojure form and returns a string that represents the equivalent Bash script.
The case expression is synonymous here with a C or Java switch statement, except it returns the consequent. Everything in Clojure is an expression, which means it must return something.
Now if we want to add some more dispatches, we just need to add a new clause to our case expression.
Echo and Print
Let’s add a feature.
Bash prints to the screen using echo. You’ve probably seen it if you’ve spent any time with a Linux shell.
clojure.core also contains a function println that has similar semantics to Bash’s echo.
Wouldn’t it be cool if we could pass (println "a") to emit-bash-form?
At first, this seems like asking the impossible.
To make an analogy with Java, imagine calling this Java code and expecting the first argument to equal System.out.println("asdf").
(Let’s ignore the fact that System.out.println() returns a void).
Java evaluates the arguments before you can even blink, resulting in a function call to println. How can we stop this evaluation and return the raw code?
Indeed this is an impossible task in Java. Even if this were possible, what could we expect to do with the raw code?(!)
System.out.println("asdf") is not a Collection, so we can’t iterate over it; it is not a String, so we can’t partition it with regular expressions.
Whatever “type” the raw code System.out.println("asdf") has, it’s not meant to be known by anyone but compiler writers.
Lisp turns this notion on its head.
Lisp Code Is Data
A problem with raw code forms in Java (assuming it is possible to extract them) is the lack of facilities to interrogate them. How does Clojure get around this limitation?
To get to the actual raw code at all, Clojure provides a mechanism to stop evaluation via the tick. Prepending a tick (aka quote) to a code form prevents its evaluation and returns the raw Clojure form.
So what is the type of our result?
We can now interrogate the raw code as if it were any old Clojure list (because it is!).
This is a result of Lisp’s remarkable property of code being data.
A Little Closer to Clojure
Using the tick, we can get halfway to a DSL that looks like Clojure code.
Let’s add this feature to emit-bash-form. We need to add a new clause to the case form. Which type should the dispatch value be?
So let’s add a clause for clojure.lang.PersistentList.
As long as we remember to quote the argument, this is not bad.
Multimethods to Abstract the Dispatch
We’ve made a good start, but I think it’s time for some refactoring.
Currently, to extend our implementation we add to our function emit-bash-form. Eventually this function will be too large to manage; we need a mechanism to split this function into more manageable pieces.
Essentially emit-bash-form is dispatching on the type of its argument. This dispatch style is a perfect fit for an abstraction Clojure provides called a multimethod.
Let’s define a multimethod called emit-bash. Here is the complete multimethod.
A multimethod is actually fairly similar to a case form. Let’s compare this multimethod with our previous case expression. defmulti is used to create a new multimethod, and associates it with a dispatch function.
This is very similar to the first argument to case.
defmethod is used to add “clauses,” known as methods. Here java.lang.String is the “dispatch value,” and the method returns the form as-is.
This is similar to adding clauses to our case expression.
Notice how the multimethod is like a more flexible case expression.
We can put methods wherever we like; anyone who can see the multimethod can add their own method from their own namespace. This is much more “open” than a case form, in which all clauses are required to be in the same code form.
Notice how this compares to Java inheritance, where modifications can only occur in a single namespace, often not one that you control. This common situation highlights some advantages of separating class definitions from implementation inheritance.
Compared to case, multimethods also have an important advantage of being able to add new dispatches without disturbing existing code.
So how can we use emit-bash? Calling a multimethod is just like calling any Clojure function.
The dispatch is silently handled under the covers by the multimethod.
Extending our DSL for Batch Script
Let’s say I’m happy with the Bash implementation. I feel like starting a new implementation that generates Windows Batch script. Let’s define a new multimethod, emit-batch.
We can now use emit-batch and emit-bash when we want Batch and Bash script output respectively.
Ad-hoc Hierarchies
Comparing the two implementations reveals many similarities. In fact, the only dispatch that differs is clojure.lang.PersistentList!
Some form of implementation inheritance would come in handy here.
We can tackle this with a simple mechanism Clojure provides to define global, ad-hoc hierarchies.
When I say this mechanism is simple, I mean non-compound; inheritance is not compounded into the mechanism to define classes or namespaces but rather is a separate functionality.
Contrast this to languages like Java, where inheritance is tightly coupled with defining a hierarchy of classes.
We can derive relationships from names to other names, and between classes and names. Names can be symbols or keywords. This is both very general and powerful!
We will use (derive child parent) to establish a parent/child relationship between two keywords. isa? returns true if the first argument is derived from the second in a global hierarchy.
Let’s define a hierarchy in which the Bash and Batch implementations are siblings.
Let’s test this hierarchy.
Utilizing a Hierarchy in a Multimethod
We can now define a new multimethod emit that utilizes our global hierarchy of names.
The dispatch function returns a vector of two items: the current implementation (either ::bash or ::batch), and the class of our form (like emit-bash’s dispatch function).
*current-implementation* is a dynamic var, which can be thought of as a thread-safe global variable.
In our hierarchy, ::common is the parent, which means it should provide the methods in common with its children. Let's fill in these common implementations.
Remember the dispatch value is now a vector, notated with square brackets. In particular, in each defmethod the first vector is the dispatch value (the second vector is the list of formal parameters).
This should look familiar. The only methods that needs to be specialized are those for clojure.lang.PersistentList, as we identified earlier. Notice the first item in the dispatch value is ::bash or ::batch instead of ::common.
The ::common implementation is intentionally incomplete; it merely exists to manage any common methods between its children.
We can test emit by rebinding *current-implementation* to the implementation of our choice with binding.
Because we didn’t define an implementation for [::common clojure.lang.PersistentList], the multimethod falls through and throws an Exception.
Multimethods offer great flexibility and power, but with power comes great responsibility. Just because we can put our multimethods all in one namespace doesn’t mean we should. If our DSL becomes any bigger, we would probably separate all Bash and Batch implementations into individual namespaces.
This small example, however, is a good showcase for the flexibility of decoupling namespaces and inheritance.
Icing on the Cake
We’ve built a nice, solid foundation for our DSL using a combination of multimethods, dynamic vars, and ad-hoc hierarchies, but it’s a bit of a pain to use.
Let’s eliminate the boilerplate. But where is it?
The binding expression is a good candidate. We can reduce the chore of rebinding *current-implementation* by introducing with-implementation (which we will define soon).
That’s an improvement. But there’s another improvement that’s not as obvious: the quote used to delay our form’s evaluation. Let’s use script, which we will define later, to get rid of this boilerplate:
This looks great, but how do we implement script? Clojure functions evaluate all their arguments before evaluating the function body, exactly the problem the quote was designed to solve.
To hide this detail we must wield one of Lisp’s most unique forms: the macro.
The macro’s main drawcard is that it doesn’t implicitly evaluate its arguments. This is a perfect fit for an implementation of script.
(That first ' should really be a backtick. The editor had a brainfreeze and couldn’t figure out how to get a backtick through the build system intact.)
To get an idea what is happening, here’s what a call to script returns and then implicitly evaluates.
It isn’t crucial that you understand the details, rather appreciate the role that macros play in cleaning up the syntax.
We will also implement with-implementation as a macro, but for different reasons than with script. To evaluate our script form inside a binding form we need to drop it in before evaluation.
(Again, that ' should really be a backtick.)
Roughly, here is the lifecyle of our DSL, from the sugared wrapper to our unsugared foundations.
It’s easy to see how a few well-placed macros can put the sugar on top of strong foundations. Our DSL really looks like Clojure code!
Conclusion
We have seen many of Clojure’s advanced features working in harmony in this DSL, even though we incrementally incorporated many of them. Generally, Clojure helps us switch our implementation strategies with minimum fuss.
This is notable when you consider how much our DSL evolved.
We initially used a simple case expression, which was converted into two multimethods, one for each implementation. As multimethods are just ordinary functions, the transition was seamless for any existing testing code. (In this case I renamed the function for clarity).
We then merged these multimethods, utilizing a global hierarchy for inheritance and dynamic vars to select the current implementation.
Finally, we devised a pleasant syntactic interface with two simple macros, eliminating that last bit of boilerplate that other languages would have to live with.
I hope you have enjoyed following the evolution of our little DSL. This DSL is based on a simplified version of Stevedore by Hugo Duncan. If you are interested in how this DSL can be extended, you can do no better than browsing the source code of Stevedore.
Ambrose Bonnaire-Sergeant is a Computer Science student at the University of Western Australia. He is passionate about functional languages, Clojure being his current favourite. In his spare time, Ambrose likes to learn new programming languages, play his Clarinet and sing in local Choirs. If you are in Western Australia and are looking to start a Clojure or Functional Programming User group, you can contact Ambrose at abonnairesergeant@gmail.com.
This article was written in Vim using Meikel Brandmeyer’s VimClojure plugin. See more of Meikel’s work here.
Send the author your feedback or discuss the article in the magazine forum.
05 September 2008 11:22 [Source: ICIS news]
By Prema Viswanathan
SINGAPORE (ICIS news)--A sharp reduction in prices and protection against further price drops offered by producers have failed to revive buying interest for polymers in India, suppliers and buyers said on Friday.
Prices of polymers were slashed by close to 6% this week in a bid to spur trade, but buyers are only purchasing enough material to meet their immediate requirements, traders said.
“There are hardly any enquiries for polyethylene (PE) and polypropylene (PP) despite reducing prices this week for the fourth time in five weeks,” a local producer said.
Prices of general purpose PS (GPPS) and high impact PS (HIPS) were slashed by Indian rupees (Rs) 4/kg ($0.09) to Rs79/kg EXW (ex-works) and by Rs5/kg to Rs85/kg EXW.
Prices of PP and PE were reduced by Rs 2/kg and Rs1/kg respectively to Rs83.50-85.50/kg EXW. Polyvinyl chloride (PVC) prices were cut by Rs2/kg to Rs68.50/kg EXW.
Buyers were holding back from purchases in anticipation of further price drops, an Indian converter said.
“There is considerable uncertainty in the market due to the falling crude and feedstock ethylene and propylene prices,” it said. “This makes it hard for us to make purchasing decisions.”
August-September is traditionally the peak season for polymer demand, when end-users make their purchases for the October-November festive season.
However, this year, the peak season has been indefinitely delayed, said a polystyrene (PS) supplier.
The fall in domestic prices also dampened buying interest in imported material. The persistent weakness in crude values blunted the appetite for imported cargoes further, a Mumbai-based trader said.
($1 = Rs44.44)
Function Overloading
Sometimes the types in a function depend on each other in ways that can't be captured with a Union. For example, the __getitem__ ([] bracket indexing) method can take an integer and return a single item, or take a slice and return a Sequence of items. You might be tempted to annotate it like so:
from typing import Sequence, TypeVar, Union

T = TypeVar('T')

class MyList(Sequence[T]):
    def __getitem__(self, index: Union[int, slice]) -> Union[T, Sequence[T]]:
        if isinstance(index, int):
            ...  # Return a T here
        elif isinstance(index, slice):
            ...  # Return a sequence of Ts here
        else:
            raise TypeError(...)
But this is too loose, as it implies that when you pass in an int you might sometimes get out a single item and sometimes a sequence. The return type depends on the parameter type in a way that can't be expressed using a type variable. Instead, we can use overloading to give the same function multiple type annotations (signatures) and accurately describe the function's behavior, as sketched below.
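The example elided above presumably uses the typing.overload decorator; a sketch of the overloaded annotations (the variant bodies are never executed, so by convention they are just ...):

from typing import overload, Sequence, TypeVar, Union

T = TypeVar('T')

class MyList(Sequence[T]):
    # Overload variants, seen by the type checker only:
    @overload
    def __getitem__(self, index: int) -> T: ...
    @overload
    def __getitem__(self, index: slice) -> Sequence[T]: ...

    # The actual runtime implementation handles both cases:
    def __getitem__(self, index: Union[int, slice]) -> Union[T, Sequence[T]]:
        if isinstance(index, int):
            ...  # Return a T here
        elif isinstance(index, slice):
            ...  # Return a sequence of Ts here
        else:
            raise TypeError(...)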
Overloaded function variants are still ordinary Python functions and they still define a single runtime object. There is no automatic dispatch happening, and you must manually handle the different types in the implementation (usually with isinstance() checks, as shown in the example).
The overload variants must be adjacent in the code. This makes code clearer, as you don’t have to hunt for overload variants across the file.
Overloads in stub files are exactly the same, except there is no implementation.
Note
As generic type variables are erased at runtime when constructing instances of generic types, an overloaded function cannot have variants that only differ in a generic type argument, e.g. List[int] and List[str].
Note
If you just need to constrain a type variable to certain types or subtypes, you can use a value restriction.
For a specific project we need to be able to test – via Code Behind in a given ASP.NET file - whether or not a given user is logged on to the Extranet module.
This approach is necessary to ensure restricted access to a set of advanced forms that may only be accessed by Extranet users who are logged in. The forms are far more advanced than what may be created via the built-in C1 Form module, which is the reason why we need to be able to externally test the user's login status in relation to the C1 Extranet module security.

Could this be done directly from the separate ASP.NET file's Code Behind? Perhaps via testing a Session Variable set by the Extranet module? Perhaps there is a more snazzy way to test the user's status directly from a separate ASP.NET file which we are not aware of?
It is imperative to emphasize that the separate ASP.NET file (and the subsequent Code Behind file) will be located on the same domain as the C1 project itself. It is actually located in a separate folder placed in the root structure, which makes it directly accessible to all users unless access restrictions are implemented in order to ensure the user's rightful Extranet/login status.
All suggestions will be welcomed - thank you in advance. :o)
Hello,
There is the public static ExtranetFacade class, which is part of the Composite.Community.Extranet.ExtranetFacade namespace. You can read the Developer Guide to learn this class's methods. Hope you will find some helpful information there.
That sounds like a possible solution - thank you, Inna. :o)
2.9: Void Pointers
This section adds some discussion of void pointers. Converting a void pointer to a pointer of a specific type is called casting in C++. Notice that in the first function of the following example there is a void pointer *data, BUT in the if statement this variable is cast to a (char *). This forces *data to be treated as a char*, which is essential when working with void pointers.
// C++ program to illustrate Void Pointer in C++
#include <iostream>
using namespace std;

void increase(void *data, int ptrsize)
{
    if (ptrsize == sizeof(char))
    {
        char *ptrchar;
        // Typecast data to a char pointer
        ptrchar = (char *)data;
        // Increase the char stored at *ptrchar by 1
        (*ptrchar)++;
        cout << "*data points to a char" << "\n";
    }
    else if (ptrsize == sizeof(int))
    {
        int *ptrint;
        // Typecast data to an int pointer
        ptrint = (int *)data;
        // Increase the int stored at *ptrint by 1
        (*ptrint)++;
        cout << "*data points to an int" << "\n";
    }
}

int main()
{
    // Declare a character
    char c = 'x';
    // Declare an integer
    int i = 10;

    // Call increase using a char and an int address respectively
    increase(&c, sizeof(c));
    cout << "The new value of c is: " << c << "\n";
    increase(&i, sizeof(i));
    cout << "The new value of i is: " << i << "\n";
    return 0;
}
Output:
*data points to a char
The new value of c is: y
*data points to an int
The new value of i is: 11
Adapted from:
"Pointers in C/C++ with Examples" by Abhirav Kariya, Geeks for Geeks is licensed under CC BY 4.0
//-- Columns: 5-byte column patterns, one row per character (excerpt;
//   the entry for space and the entries between 'm' and 'x' are incomplete)
{31, 36, 68, 36, 31},   // a
{127, 73, 73, 73, 54},  // b
{62, 65, 65, 65, 34},   // c
{127, 65, 65, 34, 28},  // d
{127, 73, 73, 65, 65},  // e
{127, 72, 72, 72, 64},  // f
{62, 65, 65, 69, 38},   // g
{127, 8, 8, 8, 127},    // h
{0, 65, 127, 65, 0},    // i
{2, 1, 1, 1, 126},      // j
{127, 8, 20, 34, 65},   // k
{127, 1, 1, 1, 1},      // l
{127, 32, 16, 32, 127}, // m
...
{99, 20, 8, 20, 99},    // x
{96, 16, 15, 16, 96},   // y
{67, 69, 73, 81, 97},   // z
{62, 69, 73, 81, 62},   // 0 - zero
{0, 33, 127, 1, 0},     // 1
{49, 67, 69, 73, 49},   // 2
{34, 65, 73, 73, 54},   // 3
{24, 104, 8, 127, 8},   // 4
{114, 73, 73, 73, 70},  // 5
{62, 73, 73, 73, 38},   // 6
{64, 64, 71, 72, 112},  // 7
{54, 73, 73, 73, 54},   // 8
{50, 73, 73, 73, 62},   // 9
};

char msg[] = "LED MATRIX 1234 ";  // array name is illustrative
SPI is so hard
Quote: "SPI is so hard"
No it is not. Look at the examples given in the SPI library; it is almost trivial.
#include <SPI.h>
...
SPI.begin();
Then to use SPI, do I connect the column shift registers to the row shift register, linking them together?
How do I send data into my shift register? Will SPI.transfer(0xff) make the 1st shift register all 1s?
How do I display a character? Let's say I want to display an A, how do I program that? Please help me out.
Quote: "How do I send data into my shift register? Will SPI.transfer(0xff) make the 1st shift register all 1s?"
Yes. Do it again and the contents of the first shift register are transferred to the second, and the first has the new data in it. So do it 5 times to fill all 5 shift registers. Then toggle the latch to make what is inside the shift registers appear on the output pins.
However, one thing: one of my shift registers is for the rows, the other 4 are for the columns. Does it matter? If I SPI.transfer(0xff) to the 4 column shift registers, they will light up, right? Then what is the row shift register for?
Quote: "How do I display a character? Let's say I want to display an A, how do I program that? Please help me out."
Read this: there is only enough current from a pin of a shift register for one LED, so you will need some current drivers to get more current output. Again, see the above link.
29 March 2011 18:27 [Source: ICIS news]
TORONTO (ICIS)--Shell has started to offer drivers in Germany insurance against potential engine damage from E10 gasoline.
The move is in response to continued fears and uncertainty among drivers about potential damage to car engines from E10, Shell said.
Drivers pumping at least 30 litres of Shell’s “Super E10” and driving an E10-compatible car could register for the insurance coverage online, Shell Deutschland said.
Refiners introduced E10, a gasoline blend containing up to 10% ethanol, replacing regular 95 RON (research octane number) gasoline.
However, drivers, fearing engine damage, largely avoided E10, switching instead to 98 RON "super gasoline", a fuel that requires greater amounts of octane-boosting ethers, thus leading to increased demand for methyl tertiary butyl ether (MTBE).
If drivers do not accept the Shell insurance, and if the scheme is not imitated by competitors, E10 would be confined to a niche role on the German market, one German newspaper said in a commentary.
Another paper, Westdeutsche Allgemeine Zeitung, said under the insurance terms, drivers may find it difficult to prove that damage was caused by E10.
Also, drivers would need to prove that they pumped about 80% of their gasoline at Shell, the paper added.
German consumer lobby group Verbraucherzentrale Bundesverband (vzbw) said Shell’s plan was not helpful.
“Are there really drivers out there who would want to buy a gasoline for which they need an insurance?”, asked vzbw head Gerd Billen.
Billen said what was needed was a legally-binding guarantee from car manufacturers to drivers that a respective car model could run on E10.
Type: Posts; User: mariocatch
Nothing calls out to me at the moment. I may take some time today to replicate a simple use case of this and see if I can repro it.
I also suggest you write a simple test case for this scenario as...
Where is the code that handles the uninstallation of the wservice? Is it correctly releasing handles to the file after deletion (ie: using())?
It sounds like you didn't do as I suggested. You only want the workbook to open once, so don't move the workbook open to your button_Click event. Keep it in your forms constructor. But, move the...
First, use [ code][ /code] tags please.
Second, you're declaring the 'excelWorkbook' variable inside of a method. Therefore, it has no scope outside of the method.
In your Constructor:
...
If you have the Assembly object loaded and pointing to your assembly in question, you can use the GetCustomAttributes() method to get all custom attributes on it.
This should get you started.
Btw SomersetBoy, in Visual Studio you can right click on Types that you know should be right, but aren't scopped, and go to Resolve->Include namespace. It will automatically add the namespace using...
What sort of validation?
You can handle the OK button press event, and do conditional checks there. Perhaps a regex validator. Plenty of options.
ah, well if you're being forced to do it that way since it's an assignment for class, then I suppose you have no other choice.
As long as you know what's wrong and right :)
You've created this same thread 3 times on this forum. Stop.
Your calling code has to know about the parameter types to use if you plan to reflect them.
Your solution is a little ugly, but this link has the proper solution for calling overloaded methods...
If it's for basic users, just write a wiki page, or a word doc. There's no software that takes code and translates it into user friendly how-to's for regular end users of your application.
I'm not sure I understand the difference... Your documentation IS a help file. Are you writing a help file for users or other engineers of your code?
If it's for users, then you should make a word...
If you're using Visual Studio, this is built in for you btw.
Use XML tags for your comments and you can generate a help file directly from...
A lot of major corporations use doxygen:
What you want is the MaskedTextBox.
Some links to get you started:...
You could modify the TabPages collection which is a member of your TabControl instance.
Just like any other array, you can modify the index of its children.
Reposting the original code from the original post.. I was going crazy trying to read it without code tags...
public class House
{
public House()
{
Room = new Room();
Just want to say that HashTable is considered deprecated and has been replaced by the more generic solution of Dictionary<TKey, TValue>.
From MSDN:
TabIndex is the order in which a control will be tabbed to (tab in the sense of hitting the tab key on your keyboard). Is this what you want? Or are you trying to modify the order in which TabItems...
john_avi,
There is no 'clean' way to cancel a thread on-demand. The only way to cancel threads is to signal for cancellation. When the thread receives the signal, it can then clean up and return...
The best way to do this is to create the string before-hand, and use that string in the Console.WriteLine and the MessageBox.Show().
ie:
string myString = "foobar";...
Console.WriteLine returns void (meaning, it does not return anything).
You can not cache the return of WriteLine to anything since it doesn't return anything.
i can keep asking you questions and get one line responses, or you can give us more details of the problem. perhaps the entire config file?
is it in a <configuration> group? is this for a webpage? what's the context?
See MSDN for proper usage of connectionStrings tags in app.config files:
To check if a number is a power of two, the instruction BLSR from the BMI1 extension can be used. The instruction resets the least-significant set bit of a number, i.e. calculates (x - 1) & x. A sample C procedure that uses the bit-trick:
bool is_power_of_two(int x) {
    return (x != 0) && (((x - 1) & x) == 0);
}
If a number has exactly one bit set, then BLSR yields zero. However, when the input of BLSR is zero, the instruction also yields zero. Fortunately, BLSR sets the CPU flags in the following way:
- ZF is set when the result is zero;
- CF is set when the source operand is zero.
Thanks to that we can properly handle all cases. Below is an assembly code:
blsr %eax, %eax
// result = (ZF == 1) and (CF == 0)
setz %al           // al = ZF
sbb $0, %al        // al = ZF - CF
movzx %al, %eax    // cast
Sample program is available.
Louis Salin commented on my original post about the Ninject.RhinoMocks automocking container, and brought up a very good point. Here is his comment, reproduced in full:
I’ve heard (or read…) that automocking is equivalent to taking weight loss pills while still eating cheesburgers for breakfast. Okay, I just made that up!
My point is, and I’m in no way in a position to opine on the matter, that the pain of mocking might be due to design issues. Hiding the pain with a tool won’t make the cause go away.
So maybe in this case it’s a very benign use of an automocker, but as the code base grows, the automocker will hide pain points that would otherwise become immediately obvious, no?
Louis has a good point and it is one that I have argued in the past to justify why I have not used an auto mocking container. However, I stand by my response in the comments of that post:
yeah, that’s the "big problem" that people complain about when they say auto mocking containers are bad. honestly, that’s a pretty weak excuse for not teaching developers how to spot too many dependencies as a part of bad design. trust your team. if they get it wrong, teach them right.
The “pain of mocking” that Louis is referring to is most often the need to mock a significant number of things in order to get a class spun up for testing. It may be painful or tedious or however you want to describe it, to get all of the things you need setup in order to get a class under test. But just because you can ignore that pain with an automocking container, doesn’t mean you will.
Before I expand on my response, though, let’s look at an example of what the problem really is.
A Simple Specification
This is the same sample specification that I ended yesterday’s blog post with. It’s small, easy to read and easy to understand. There is nothing really wrong with this code, in my opinion.
public class when_doing_something_with_that_thing : ContextSpecification<MyPresenter>
{
    protected override void When()
    {
        SUT.DoSomething();
    }

    [Test]
    public void it_should_do_that_thing()
    {
        AssertWasCalled<IMyView>(v => v.ThatThing());
    }

    [Test]
    public void it_should_do_the_other_thing_twice()
    {
        AssertWasCalled<IMyView>(v => v.TheOtherThing(), mo => mo.Repeat.Twice());
    }
}
The problem that an automocking container hides is not this code, but what this code potentially hides in the implementation.
A Complex Implementation
Now take a look at one possibility for the implementation of the MyPresenter class used in the above specification:
public class MyPresenter
{
    private IMyView view;
    private ISomeService someService;
    private IAnotherService anotherService;
    private IMoreService moreService;
    private ISomeRepository someRepository;
    private IAnotherRepository anotherRepository;
    private IMoreRepository moreRepository;
    private IValidator<SomeData> someDataValidator;
    private IValidator<MoreData> moreDataValidator;
    private IValidator<AnotherData> anotherDataValidator;
    private SomeData someData;
    private AnotherData anotherData;
    private MoreData moreData;

    public MyPresenter(
        IMyView view,
        ISomeService someService,
        IAnotherService anotherService,
        IMoreService moreService,
        ISomeRepository someRepository,
        IAnotherRepository anotherRepository,
        IMoreRepository moreRepository,
        IValidator<SomeData> someDataValidator,
        IValidator<MoreData> moreDataValidator,
        IValidator<AnotherData> anotherDataValidator
    )
    {
        this.view = view;
        this.someService = someService;
        this.anotherService = anotherService;
        this.moreService = moreService;
        this.someRepository = someRepository;
        this.anotherRepository = anotherRepository;
        this.moreRepository = moreRepository;
        this.someDataValidator = someDataValidator;
        this.moreDataValidator = moreDataValidator;
        this.anotherDataValidator = anotherDataValidator;
    }

    // ... methods and implementation details go here
}
That’s 40 lines of code just to get the object constructed! ARGH! that’s AWFUL! There are so many things wrong with just the member variables listed in the constructor of this class… and I haven’t even begun to imagine what the implementation of any methods on this class may look like. Quite honestly, I don’t want to think about what they may look like.
Now imagine that this code uses a couple of abstract base classes instead of all interfaces for the dependencies. Replace the three validators, for example, with abstract base classes. What happens when each of these base classes requires 2 constructor arguments for their own dependencies? The auto mocking container will go ahead and wire them up as well, and pass them into the abstract base classes so that the objects can be mocked. Now, instead of having 10 objects being mocked, we have 16. If any of those dependencies are objects with their own dependencies… well, I think you get the picture. The object graph being automocked in this scenario is horrendous and reeks of bad design left and right.
But because we have an automocking container, we don’t care about that bad design, right? We’ll just let it slip and go on about our business because the pain of that horrendous mess is hidden away at test time. Our tooling of choice makes it easy to get away with poor design… or so the argument goes.
The Truth About Auto Mocking Containers
There is nothing inherently evil in auto mocking containers. They are not “bad” and using them is not “wrong”. Sure, they can be abused and you can do damage with them. The same thing is true of baseball bats, eggs, automatic rifles, and thousands upon thousands of other tools. Scott Bellware has quoted Ani DeFranco on the subject of tools, more than a few times: “every tool is a weapon if you hold it right”.
Now… let me restate my opinion on the problem that Louis is referring to, keeping in mind that I used to cite this exact reason for my not wanting to use an auto mocking container.
Auto mocking containers do not facilitate poor design or horrendous implementation. Poor design and horrendous implementation skill, in the person designing and implementing the code, does.
That’s it, right there. The notion that an automocking container will let someone design and implement that pile of garbage, while not using one won’t let them or will expose the problem, is ridiculous.
Speaking as a person who used to write code like this (and still does, occasionally, I'll admit), I know that not using an automocking container will not prevent you from doing this. It will not make the problems more obvious if you don't use an automocking container, and you will not inherently write better code without one. A software developer who writes code like this is not going to know the problems they are causing just because they have to declare and instantiate the 10 mock objects that this code needs to be tested. Developers write code like this because that's the kind of code they write… no other reason. Now, there might be a lot of reasons why they write code like this… but that's a completely different set of subjects.
“a poor craftsman blames his tools” … “with great power comes great responsibility” … “(insert other overused and abused quotes here)”
One Last Note
I wanted to note, specifically, that this post is not directed at Louis or anyone in particular. Louis is only the guy that prompted the discussion and not a person that I would single out for writing bad code.
|
https://lostechies.com/derickbailey/2010/05/26/the-dangers-of-automocking-containers/
|
CC-MAIN-2016-44
|
refinedweb
| 1,218
| 53.21
|
Hi all,
I'm a new beginner in Klayout pymacro.
I'd like to transform a gds so that it is centered at (0,0). I used to do this in the GUI, and recently I tried to do it with a pymacro, but I have no idea why the results differ between these 2 approaches.
The result from the GUI is as I expected: the gds is completely shifted to the center (0,0).
Executing in a pymacro: I refer to andyL's code in the discussion "Adjust Cell Origin using the Python API". (Many thanks to andyL.)
Please check the following code:
import pya

layout_file = "/.../layout_path/input.gds"
output_file = "/.../layout_path/input_shift.gds"

layout = pya.Layout.new()
layout.read(layout_file)

def adjustOrigin(layoutB, topcell):
    bbox = topcell.bbox()
    trans = pya.Trans.new(-bbox.center())  # center point
    for inst in layoutB.top_cell().each_inst():
        layoutB.top_cell().transform(inst, trans)
    for li in layoutB.layer_indices():
        for shape in layoutB.top_cell().each_shape(li):
            layoutB.top_cell().shapes(li).transform(shape, trans)
    layoutB.update()
    return layoutB

layout = adjustOrigin(layout, layout.top_cell())
layout.write(output_file)
I checked the new output_file from the pymacro, and it did transform, but the final bbox/layout placements are not the same as the result from the GUI.
My guess is that the code shifts the layout in sub-cell coordinates because of the instance for loop.
I'd like to ask which part of this code I should modify to get the same result as the GUI...
(The Klayout version is 0.25)
Thank you so much!
ming
Just updating with some information.
I found that this code doesn't work for some instances: some instances in the gds can't be shifted and are left at their original position.
I'm still trying to figure out what the differences are between these instances...
First of all, I'd propose to use a more recent version (0.27.x). With my version, the code pretty much does what "Adjust origin" does (minus the undo and "adjust instances in parents" capability).
The code you pasted is not quite clean as inside the function you refer to the actual topcell ("layoutB.top_cell()"), not the one passed in the "topcell" parameter. That makes a difference when you try to adjust a cell which is not the top cell.
With the 0.27 version, the function can be reduced to:
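Based on the "3 lines only" remark below, the reduced function is presumably something along these lines, using Cell#transform from the newer API (a sketch, not necessarily the exact original):

def adjustOriginCenter(topcell):
    # Cell#transform moves the cell's shapes and instances in one call.
    trans = pya.Trans(-topcell.bbox().center())
    topcell.transform(trans)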
I don't see how this may not work.
Matthias
Hi Matthias,
Actually, I've tried klayout-0.26.7 with the newer transform function in my own computer, and it works for another simple layout. But the issue layout is only in another special work environment and it is difficult to install other klayout build in the environment, so currently I only get klayout-0.25 to use.
And as you suggested, I modified "layoutB.top_cell()" to "topcell" in adjustOriginCenter(), and it works fine.
(But I don't fully understand the difference between these 2 conditions; I thought they were the same...)
Thank you so much!
ming
Calling "topcell" inside the transformation loop will continuously trigger the internal sort & hierarchy tree update, which may change the order of the instances while you're iterating. That's why calling the "topcell" method will not just spoil performance but also give you unexpected results or even application crashes.
Matthias
Got it.
Thank you so much!
ming
Uups, 3 lines only - just read those lines. I updated my code. Thanks Matthias!
And also for this great piece of SW.
Best Regards
thanks
Add Controllers and Action Methods in ASP.NET MVC Applications:
ASP.NET MVC Controllers:
- The Controllers folder contains the controller classes.
- An MVC controller is simply a class file of C# code.
- An MVC controller has some action methods.
- MVC requires controller names to end with the word "Controller".
- We create the file "HomeController.cs" (for the Home page), as shown below.
How to Add Controller in MVC Application:
Use the following steps to add a controller in an ASP.NET MVC web project:
Step first:
Open the .NET IDE and create an empty MVC application. If you don't know how, see my first post on how to start working with an ASP.NET MVC project.
Step two:
Right-click on the Controllers folder and add a new controller.
Step three:
Give the controller a name and click the OK button.
ASP.NET MVC controller C# code:
You can see that the MVC controller "HomeController.cs" is created, and the code is:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
namespace MvcApplication1.Controllers
{
public class HomeController : Controller
{
//
// GET: /Home/
public ActionResult Index()
{
return View();
}
}
}
Action Method in MVC Controller Class:
In this controller class you have an ActionResult Index() action method. This method returns control to a view.
To learn what a view is and how to add a view in an MVC application, see the next post.
Build Your Alexa Skill for Web App Games
You can build a voice-enabled gaming skill by using Alexa Skills Kit (ASK) directives. Design your game to include all or part of the game logic in the skill. Use HTML directives to start with the web app and send instructions and data throughout the game. Use regular Alexa directives to communicate with the Alexa service. For more details about the Alexa Web API for Games, see About Alexa Web API for Games.
Configure your skill to support HTML directives
To enable the HTML directives in your skill, you must add the ALEXA_PRESENTATION_HTML interface to your skill manifest by using the ASK Command Line Interface (CLI) or the developer console. For details about the skill manifest, see Skill Manifest Schema.
Add the interface by using the CLI
To configure your skill to support HTML directives, add the ALEXA_PRESENTATION_HTML interface to manifest.apis.custom.interfaces in the skill manifest JSON file.
The following example shows a skill manifest configured with the HTML interface.

"apis": {
    "custom": {
        "interfaces": [
            {
                "type": "ALEXA_PRESENTATION_HTML"
            }
        ]
    }
}
After you save your manifest file, use the Alexa Skills Kit Command Line Interface to deploy the changed manifest.
Add the interface in the developer console
You can also configure the skill manifest in the developer console.
To add support for the HTML interface
- Go to developer.amazon.com/alexa.
- Click Your Alexa Consoles, and then click Skills. The developer console opens and displays any skills you have already created.
- Locate the skill you want to configure, and then click Edit.
- Navigate to the Build > Interfaces page.
- Enable the Alexa Web API for Games interface.
When you select this option, you add the ALEXA_PRESENTATION_HTML interface to the skill manifest.
- Click Save Interfaces, and then, to rebuild your interaction model, click Build Model.
Verify that the device supports HTML directives
Not all devices support the Alexa.Presentation.HTML directives. Before your skill launches the web app, determine whether the device supports HTML directives by inspecting the LaunchRequest, IntentRequest, or In Skills Purchase Connections.Response for Alexa.Presentation.HTML. If the device supports HTML, respond with the Start directive to load your web app.
The following example shows the context.System.device.supportedInterfaces object with HTML support.

"device": {
    "deviceId": "amzn1.ask.device.XXXX",
    "supportedInterfaces": {
        "Alexa.Presentation.HTML": {
            "runtime": {
                "maxVersion": "1.1"
            }
        }
    }
}
const supportedInterfaces = Alexa.getSupportedInterfaces(handlerInput.requestEnvelope);
const htmlInterface = supportedInterfaces['Alexa.Presentation.HTML'];
if (htmlInterface !== null && htmlInterface !== undefined) {
    // Add a start directive.
}
Support for the HTML interface is separate from support for Alexa Presentation Language (APL). A device might support APL, but not support HTML. Therefore, check for the HTML support separately.
Start the web app
To notify the Alexa Service that you want to start a web app, use the Alexa.Presentation.HTML.Start directive in the Response object. Include the HTTPS link of the webpage to load onto the device. The SSL certificate must be valid for the webpage to open on an Alexa-enabled device. For more details, see Alexa.Presentation.HTML.Start.
The following example shows a Start directive.

"response": {
    "outputSpeech": {
        "type": "PlainText",
        "text": "Plain text string to speak",
        "playBehavior": "REPLACE_ENQUEUED"
    },
    "directives": [
        {
            "type": "Alexa.Presentation.HTML.Start",
            "data": {
                "arbitraryDataKey1": "Initial start up information"
            },
            "request": {
                "uri": "",
                "method": "GET"
            },
            "configuration": {
                "timeoutInSeconds": 300
            }
        }
    ],
    "shouldEndSession": false
}
When your skill sends the directive to start the web app, the response must include
shouldEndSession set to either
false or undefined (not set). This setting keeps the skill session open so that the user can interact with the web app on the screen:
- When shouldEndSession is false, Alexa speaks the provided outputSpeech, and then opens the microphone for a few seconds to get the user's response.
- When shouldEndSession is undefined, Alexa speaks any provided outputSpeech, but doesn't open the microphone. The session remains open.
The session remains open as long as the web app is active on the screen.
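For illustration, a LaunchRequest handler along these lines checks for support and sends the Start directive. The sketch assumes the ASK SDK v2 for Node.js; the welcome text and the web app URL (https://example.com/my-game) are placeholders, not values prescribed by this page.

const StartWebAppLaunchHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    handle(handlerInput) {
        const supported = Alexa.getSupportedInterfaces(handlerInput.requestEnvelope);
        if (supported['Alexa.Presentation.HTML']) {
            // The device supports HTML: load the web app and keep the session open.
            return handlerInput.responseBuilder
                .speak('Welcome! Loading the game.')
                .addDirective({
                    type: 'Alexa.Presentation.HTML.Start',
                    request: { uri: 'https://example.com/my-game', method: 'GET' },
                    configuration: { timeoutInSeconds: 300 }
                })
                .withShouldEndSession(false)
                .getResponse();
        }
        // Fall back to a voice-only response on devices without HTML support.
        return handlerInput.responseBuilder
            .speak('This device does not support the game screen.')
            .getResponse();
    }
};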
Communicate with the web app
Your Alexa skill can send messages to your web app running on the device, letting the skill hold all or part of the game logic. Your skill might send a message to the app when the skill receives an
IntentRequest or in response to a message from your web app. When your skill receives a message from the web app, you can respond with directives and output speech Alexa should say.
Use the
Alexa.Presentation.HTML.HandleMessage directive to communicate with the app. The directive gives you the flexibility to define your own interface to your web app by formatting the
message as needed. For more details, see
Alexa.Presentation.HTML.HandleMessage.
The following example shows how to send arbitrary state data from your skill.
{ "type": "Alexa.Presentation.HTML.HandleMessage", "message": { "players": 1, "myGameState": "Level 2", "speech": "Are you ready for the next level?" } }
Your skill can handle an incoming
Alexa.Presentation.HTML.Message event just like you would handle an
IntentRequest. For more details, see
Alexa.Presentation.HTML.Message.
The following example shows a message handler.
const myWebAppLogger = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope)
            === "Alexa.Presentation.HTML.Message";
    },
    handle(handlerInput) {
        // Log the payload that the web app sent to the skill.
        const messageToLog = handlerInput.requestEnvelope.request.message;
        console.log(messageToLog);

        return handlerInput.responseBuilder.getResponse();
    }
};
Close the web app
When the device displays your web app, your app remains active on the screen. The following actions close the app:
- Your skill returns a directive from an interface other than
Alexa.Presentation.HTML. This action closes the web app, but doesn't necessarily close the skill session.
- Your skill returns a response with shouldEndSession set to true.
- The user ends the skill with "Alexa, exit."
- The user exits the skill with "Alexa, go home."
- The user stops interacting with the web app, and then leaves it idle. After the duration of timeoutInSeconds (up to 30 minutes), the skill session ends. Specific devices might choose to ignore the configured timeout value, or set a lower bound.
For example, when your skill returns an Alexa.Presentation.APL.RenderDocument directive, the device closes the web app, and then inflates the provided document. The skill session then has the lifecycle described in How devices with screens affect the skill session until the skill sends another
Alexa.Presentation.HTML.Start directive to restart the web app.
About using Alexa Web API for Games and APL in the same skill
You can use both the Alexa Web API for Games and APL in your game. However, you can't mix the two in a single screen. For a given response, you can display either the web app or an APL document.
When the device screen displays your web app, sending the
Alexa.Presentation.APL.RenderDocument directive or
ExecuteCommands directive closes the app. Make sure to save any state information as needed for your game.
APL and the Web API for Games use different interfaces within the
Alexa.Presentation namespace. To use both, configure your skill to support both
Alexa.Presentation.APL and
Alexa.Presentation.HTML:
- Configure your skill to support HTML directives
- Configure your skill to support the Alexa.Presentation.APL interface
A given device might not necessarily support both of these interfaces. Make sure the device supports the interface you're using before sending the relevant directives:
Yii - Using Actions
To create an action in a controller class, you should define a public method whose name starts with the word action. The return data of an action represents the response to be sent to the end user.
Step 1 − Let us define the hello-world action in our ExampleController.
<?php

namespace app\controllers;

use yii\web\Controller;

class ExampleController extends Controller
{
    public function actionIndex()
    {
        $message = "index action of the ExampleController";
        return $this->render("example", [
            'message' => $message
        ]);
    }

    public function actionHelloWorld()
    {
        return "Hello world!";
    }
}
?>
Step 2 − Type the URL of the hello-world action (index.php?r=example/hello-world) in the address bar of the web browser. You will see the following.
Action IDs are usually verbs, such as create, update, delete and so on. This is because actions are often designed to perform a particular change to a resource.
Action IDs should contain only these characters − English letters in lower case, digits, hyphens, and underscores.
There are two types of actions: inline and standalone.
Inline actions are defined in the controller class. The names of the actions are derived from action IDs this way −
- Turn the first letter in all words of the action ID into uppercase.
- Remove hyphens.
- Add the action prefix.
Examples −
- index becomes actionIndex.
- hello-world (as in the example above) becomes actionHelloWorld.
If you plan to reuse the same action in different places, you should define it as a standalone action.
Create a Standalone Action Class
To create a standalone action class, you should extend yii\base\Action or a child class, and implement a run() method.
Step 1 − Create a components folder inside your project root. Inside that folder create a file called GreetingAction.php with the following code.
<?php

namespace app\components;

use yii\base\Action;

class GreetingAction extends Action
{
    public function run()
    {
        return "Greeting";
    }
}
?>
We have just created a reusable action. To use it in our ExampleController, we should declare our action in the action map by overriding the actions() method.
Step 2 − Modify the ExampleController.php file this way.
<?php

namespace app\controllers;

use yii\web\Controller;

class ExampleController extends Controller
{
    public function actions()
    {
        return [
            'greeting' => 'app\components\GreetingAction',
        ];
    }

    public function actionIndex()
    {
        $message = "index action of the ExampleController";
        return $this->render("example", [
            'message' => $message
        ]);
    }

    public function actionHelloWorld()
    {
        return "Hello world!";
    }
}
?>
The actions() method returns an array whose values are class names and keys are action IDs.
Step 3 − Go to index.php?r=example/greeting. You will see the following output.
Step 4 − You can also use actions to redirect users to other URLs. Add the following action to the ExampleController.php.
public function actionOpenGoogle()
{
    // redirect the user's browser to google.com
    return $this->redirect('https://www.google.com');
}
Now, if you open index.php?r=example/open-google, you will be redirected to Google.
The action methods can take parameters, called action parameters. Their values are retrieved from $_GET using the parameter name as the key.
Step 5 − Add the following action to our example controller.
public function actionTestParams($first, $second)
{
    return "$first $second";
}
Step 6 − Type index.php?r=example/test-params&first=hello&second=world in the address bar of your web browser. You will see the following output.
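Action parameters can also declare default values, in which case a missing query parameter no longer causes a bad-request error. A small sketch; the action name and strings are illustrative, not part of the tutorial:

public function actionGreet($name = 'World')
{
    // index.php?r=example/greet          → "Hello, World!"
    // index.php?r=example/greet&name=Yii → "Hello, Yii!"
    return "Hello, $name!";
}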
Each controller has a default action. When a route contains the controller ID only, it means that the default action is requested. By default, the action is index. You can easily override this property in the controller.
Step 7 − Modify our ExampleController this way.
<?php

namespace app\controllers;

use yii\web\Controller;

class ExampleController extends Controller
{
    public $defaultAction = "hello-world";

    /* other actions */
}
?>
Step 8 − Now, if you go to index.php?r=example, you will see the following.
To fulfill the request, the controller will undergo the following lifecycle −
The yii\base\Controller:init() method is called.
The controller creates an action based on the action ID.
The controller sequentially calls the beforeAction() method of the web application, module, and the controller.
The controller runs the action.
The controller sequentially calls the afterAction() method of the web application, module, and the controller.
The application assigns action result to the response.
Important Points
The Controllers should −
- Be very thin. Each action should contain only a few lines of code.
- Use Views for responses.
- Not embed HTML.
- Access the request data.
- Call methods of models.
- Not process the request data. These should be processed in the model.
13 February 2012 08:37 [Source: ICIS news]
SINGAPORE (ICIS)--China's Shanghai Huayi plans to shut an acrylic acid (AA) plant at its site in east China, a company source said.
In addition, the company will shut its 60,000 tonne/year butyl acrylate (butyl-A) plant and a 30,000 tonne/year ethyl acrylate (ethyl-A) plant at the same site after shutting its AA unit, the source said without elaborating further.
The shutdowns are not expected to cause a supply shortage in east China.
The spot prices of butyl-A rose by yuan (CNY) 600-700/tonne ($95-111/tonne) over the past two weeks to CNY13,200-13,400/tonne on 10 February.
The prices rose mainly because trade activity picked up after the Lunar New Year in late January, according to Chemease, an ICIS service in China.
Shanghai Huayi is a key AA and acrylate esters producer in east China.
NAME
ldap_sync_init, ldap_sync_init_refresh_only, ldap_sync_init_refresh_and_persist, ldap_sync_poll - LDAP sync routines
LIBRARY
OpenLDAP LDAP (libldap, -lldap)
SYNOPSIS
#include <ldap.h>
int ldap_sync_init(ldap_sync_t *ls, int mode);
int ldap_sync_init_refresh_only(ldap_sync_t *ls);
int ldap_sync_init_refresh_and_persist(ldap_sync_t *ls);
int ldap_sync_poll(ldap_sync_t *ls);
ldap_sync_t * ldap_sync_initialize(ldap_sync_t *ls);
DESCRIPTION
These routines provide an interface to the LDAP Content Synchronization operation (RFC 4533). They require an ldap_sync_t structure to be set up with parameters required for various phases of the operation; this includes setting some handlers for special events. All handlers take a pointer to the ldap_sync_t structure as the first argument, and a pointer to the LDAPMessage structure as received from the server by the client library, plus, occasionally, other specific arguments.
The members of the ldap_sync_t structure, which comprise the search parameters and the event handlers, are defined in <ldap.h>.
At the end of a session, the structure can be cleaned up by calling ldap_sync_destroy(3), which takes care of freeing all data assuming it was allocated by ldap_mem*(3) routines. Otherwise, the caller should take care of destroying and zeroing out the documented search-related fields, and call ldap_sync_destroy(3) to free undocumented members set by the API.
REFRESH ONLY
A client may insert a call to ldap_sync_poll(3) into an external loop to check if any modification was returned; in this case, it might be appropriate to set ls_timeout to 0, or to set it to a finite, small value. Otherwise, if the client's main purpose consists in waiting for responses, a timeout of -1 is most suitable, so that the function only returns after some data has been received and handled.
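For illustration, a minimal refresh-and-persist consumer consistent with the SYNOPSIS above. The ldap_sync_t field names used here (ls_base, ls_scope, ls_filter, ls_timeout, ls_ld) follow the definitions in <ldap.h> and, like the search parameters themselves, are assumptions rather than part of this page:

#include <ldap.h>

/* Sketch: run a refresh-and-persist sync session on an already
   initialized LDAP connection. Handlers are left unset for brevity. */
int run_sync(LDAP *ld)
{
    ldap_sync_t ls;
    int rc;

    ldap_sync_initialize(&ls);            /* zero out the structure */

    ls.ls_base = "dc=example,dc=com";     /* assumed search parameters */
    ls.ls_scope = LDAP_SCOPE_SUBTREE;
    ls.ls_filter = "(objectClass=*)";
    ls.ls_timeout = -1;                   /* block inside ldap_sync_poll() */
    ls.ls_ld = ld;

    rc = ldap_sync_init(&ls, LDAP_SYNC_REFRESH_AND_PERSIST);
    if (rc != LDAP_SUCCESS)
        return rc;

    /* Each call handles one server response via the configured handlers. */
    while ((rc = ldap_sync_poll(&ls)) == LDAP_SUCCESS)
        ;

    return rc;
}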
ERRORS
SEE ALSO
ldap(3), ldap_search_ext(3), ldap_result(3); RFC 4533.
AUTHOR
Designed and implemented by Pierangelo Masarati, based on RFC 4533 and loosely inspired by syncrepl code in slapd(8).
ACKNOWLEDGEMENTS
Initially developed by SysNet s.n.c. OpenLDAP is developed and maintained by The OpenLDAP Project. OpenLDAP is derived from University of Michigan LDAP 3.3 Release.
With variadic templates, parameter packs and template aliases
C++11 changes the playing field
The wide acceptance of Boost.MPL made C++ metaprogramming seem a solved problem. Perhaps MPL wasn’t ideal, but it was good enough to the point that there wasn’t really a need to seek or produce alternatives.
C++11 changed the playing field. The addition of variadic templates with their associated parameter packs added a compile-time list of types structure directly into the language. Whereas before every metaprogramming library defined its own type list, and MPL defined several, in C++11, type lists are as easy as
// C++11
template<class... T> struct type_list {};
and there is hardly a reason to use anything else.
Template aliases are another game changer. Previously, "metafunctions", that is, templates that took one type and produced another, looked like
// C++03
template<class T> struct add_pointer
{
    typedef T* type;
};
and were used in the following manner:
// C++03
typedef typename add_pointer<X>::type Xp;
In C++11, metafunctions can be template aliases, instead of class templates:
// C++11
template<class T> using add_pointer = T*;
The above example use then becomes
// C++11
typedef add_pointer<X> Xp;
or, if you prefer to be seen as C++11-savvy,
// C++11
using Xp = add_pointer<X>;
This is a considerable improvement in more complex expressions:
// C++03
typedef typename add_reference<
    typename add_const<
        typename add_pointer<X>::type
    >::type
>::type Xpcr;

// C++11
using Xpcr = add_reference<add_const<add_pointer<X>>>;
(The example also takes advantage of another C++11 feature - you can now use
>> to close templates without it being interpreted as a right shift.)
In addition, template aliases can be passed to template template parameters:
// C++11
template<template<class... T> class F> struct X
{
};

X<add_pointer>; // works!
These language improvements allow for C++11 metaprogramming that is substantially different than its idiomatic C++03 equivalent. Boost.MPL is no longer good enough, and something must be done. But what?
Type lists and mp_rename
Let’s start with the basics. Our basic data structure will be the type list:
template<class... T> struct mp_list {};
Why the
mp_ prefix? mp obviously stands for metaprogramming, but could we not
have used a namespace?
Indeed we could have. Past experience with Boost.MPL however indicates that
name conflicts between our metaprogramming primitives and standard identifiers
(such as
list) and keywords (such as
if,
int or
true) will be common
and will be a source of problems. With a prefix, we avoid all that trouble.
So we have our type list and can put things into it:
using list = mp_list<int, char, float, double, void>;
but can’t do anything else with it yet. We’ll need a library of primitives that
operate on
mp_lists. But before we get into that, let’s consider another
interesting question first.
Suppose we have our library of primitives that can do things with a
mp_list,
but some other code hands us a type list that is not an
mp_list, such as for
example an
std::tuple<int, float, void*>, or
std::packer<int,
float, void*>.
Suppose we need to modify this external list of types in some manner (change
the types into pointers, perhaps) and give back the transformed result in the
form it was given to us,
std::tuple<int*, float*, void**> in the first
case and
std::packer<int*, float*, void**> in the second.
To do that, we need to first convert
std::tuple<int, float, void*> to
mp_list<int, float, void*>, apply
add_pointer to each element obtaining
mp_list<int*, float*, void**>, then convert that back to
std::tuple.
These conversion steps are a quite common occurrence, and we’ll write a
primitive that helps us perform them, called
mp_rename. We want
mp_rename<std::tuple<int, float, void*>, mp_list>
to give us
mp_list<int, float, void*>
and conversely,
mp_rename<mp_list<int, float, void*>, std::tuple>
to give us
std::tuple<int, float, void*>
Here is the implementation of
mp_rename:
template<class A, template<class...> class B> struct mp_rename_impl;

template<template<class...> class A, class... T, template<class...> class B>
struct mp_rename_impl<A<T...>, B>
{
    using type = B<T...>;
};

template<class A, template<class...> class B>
using mp_rename = typename mp_rename_impl<A, B>::type;
(This pattern of a template alias forwarding to a class template doing the actual work is common; class templates can be specialized, whereas template aliases cannot.)
Note that
mp_rename does not treat any list type as special, not even
mp_list; it can rename any variadic class template into any other. You could
use it to rename
std::packer to
std::tuple to
std::variant (once there is
such a thing) and it will happily oblige.
In fact, it can even rename non-variadic class templates, as in the following examples:
mp_rename<std::pair<int, float>, std::tuple> // -> std::tuple<int, float> mp_rename<mp_list<int, float>, std::pair> // -> std::pair<int, float> mp_rename<std::shared_ptr<int>, std::unique_ptr> // -> std::unique_ptr<int>
There is a limit to the magic;
unique_ptr can’t be renamed to
shared_ptr:
mp_rename<std::unique_ptr<int>, std::shared_ptr> // error
because
unique_ptr<int> is actually
unique_ptr<int,
std::default_delete<int>> and
mp_rename renames it to
shared_ptr<int,
std::default_delete<int>>, which doesn’t compile. But it still works in many
more cases than one would naively expect at first.
With conversions no longer a problem, let’s move on to primitives and define a
simple one,
mp_size, for practice. We want
mp_size<mp_list<T...>> to
give us the number of elements in the list, that is, the value of the
expression
sizeof...(T).
template<class L> struct mp_size_impl;

template<class... T> struct mp_size_impl<mp_list<T...>>
{
    using type = std::integral_constant<std::size_t, sizeof...(T)>;
};

template<class L> using mp_size = typename mp_size_impl<L>::type;
This is relatively straightforward, except for the
std::integral_constant.
What is it and why do we need it?
std::integral_constant is a standard C++11 type that wraps an integral
constant (that is, a compile-time constant integer value) into a type.
Since metaprogramming operates on type lists, which can only hold types, it’s
convenient to represent compile-time constants as types. This allows us to
treat lists of types and lists of values in a uniform manner. It is therefore
idiomatic in metaprogramming to take and return types instead of values, and
this is what we have done. If at some later point we want the actual value, we
can use the expression
mp_size<L>::value to retrieve it.
We now have our
mp_size, but you may have noticed that there’s an interesting
difference between
mp_size and
mp_rename. Whereas I made a point of
mp_rename not treating
mp_list as a special case,
mp_size very much does:
template<class... T> struct mp_size_impl<mp_list<T...>>
Is this really necessary? Can we not use the same technique in the
implementation of
mp_size as we did in
mp_rename?
template<class L> struct mp_size_impl;

template<template<class...> class L, class... T> struct mp_size_impl<L<T...>>
{
    using type = std::integral_constant<std::size_t, sizeof...(T)>;
};

template<class L> using mp_size = typename mp_size_impl<L>::type;
Yes, we very much can, and this improvement allows us to use
mp_size on any
other type lists, such as
std::tuple. It turns
mp_size into a truly generic
primitive.
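For instance, with this generic implementation (and nothing tuple-specific anywhere):

static_assert( mp_size<std::tuple<int, float, void>>::value == 3, "" );
static_assert( mp_size<std::pair<int, float>>::value == 2, "" );
static_assert( mp_size<mp_list<>>::value == 0, "" );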
This is nice. It is so nice that I’d argue that all our metaprogramming
primitives ought to have this property. If someone hands us a type list in the
form of an
std::tuple, we should be able to operate on it directly, avoiding
the conversions to and from
mp_list.
So do we no longer have any need for
mp_rename? Not quite. Apart from the
fact that sometimes we really do need to rename type lists, there is another
surprising task for which
mp_rename is useful.
To illustrate it, let me introduce the primitive
mp_length. It’s similar to
mp_size, but while
mp_size takes a type list as an argument,
mp_length
takes a variadic parameter pack and returns its length; or, stated differently,
it returns its number of arguments:
template<class... T> using mp_length = std::integral_constant<std::size_t, sizeof...(T)>;
How would we implement
mp_size in terms of
mp_length? One option is to just
substitute the implementation of the latter into the former:
template<template<class...> class L, class... T>
struct mp_size_impl<L<T...>>
{
    using type = mp_length<T...>;
};
but there is another way, much less mundane. Think about what
mp_size does.
It takes the argument
mp_list<int, void, float>
and returns
mp_length<int, void, float>
Do we already have a primitive that does a similar thing?
(Not much of a choice, is there?)
Indeed we have, and it’s called
mp_rename.
template<class L> using mp_size = mp_rename<L, mp_length>;
I don’t know about you, but I find this technique fascinating. It exploits the
structural similarity between a list,
L<T...>, and a metafunction "call",
F<T...>, and the fact that the language sees the things the same way and
allows us to pass the template alias
mp_length to
mp_rename as if it were
an ordinary class template such as
mp_list.
(Other metaprogramming libraries provide a dedicated
apply primitive for
this job.
apply<F, L> calls the metafunction
F with the contents of the
list
L. We’ll add an alias
mp_apply<F, L> that calls
mp_rename<L, F> for
readability.)
template<template<class...> class F, class L> using mp_apply = mp_rename<L, F>;
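For example, applying mp_length to the contents of a tuple:

using L = std::tuple<int, float>;

static_assert( mp_apply<mp_length, L>::value == 2, "" );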
mp_transform
Let’s revisit the example I gave earlier - someone hands us
std::tuple<X, Y,
Z> and we need to compute
std::tuple<X*, Y*, Z*>. We already have
add_pointer:
template<class T> using add_pointer = T*;
so we just need to apply it to each element of the input tuple.
The algorithm that takes a function and a list and applies the function to each
element is called
transform in Boost.MPL and the STL and
map in functional
languages. We’ll use
transform, for consistency with the established C++
practice (
map is a data structure in both the STL and Boost.MPL.)
We’ll call our algorithm
mp_transform, and
mp_transform<F, L> will apply
F to each element of
L and return the result. Usually, the argument order
is reversed and the function comes last. Our reasons to put it at the front
will become evident later.
There are many ways to implement
mp_transform; the one we’ll pick will make
use of another primitive,
mp_push_front.
mp_push_front<L, T>, as its name implies, adds T as a first element in L:

template<class L, class T> struct mp_push_front_impl;

template<template<class...> class L, class... U, class T>
struct mp_push_front_impl<L<U...>, T>
{
    using type = L<T, U...>;
};

template<class L, class T>
using mp_push_front = typename mp_push_front_impl<L, T>::type;
There is no reason to constrain mp_push_front to a single element though. In C++11, variadic templates should be our default choice, and the implementation of mp_push_front that can take an arbitrary number of elements is almost identical:

template<class L, class... T> struct mp_push_front_impl;

template<template<class...> class L, class... U, class... T>
struct mp_push_front_impl<L<U...>, T...>
{
    using type = L<T..., U...>;
};

template<class L, class... T>
using mp_push_front = typename mp_push_front_impl<L, T...>::type;
On to
mp_transform:
template<template<class...> class F, class L> struct mp_transform_impl;

template<template<class...> class F, class L>
using mp_transform = typename mp_transform_impl<F, L>::type;

template<template<class...> class F, template<class...> class L>
struct mp_transform_impl<F, L<>>
{
    using type = L<>;
};

template<template<class...> class F, template<class...> class L,
    class T1, class... T>
struct mp_transform_impl<F, L<T1, T...>>
{
    using _first = F<T1>;
    using _rest = mp_transform<F, L<T...>>;

    using type = mp_push_front<_rest, _first>;
};
This is a straightforward recursive implementation that should be familiar to people with functional programming background.
Can we do better? It turns out that in C++11, we can.
template<template<class...> class F, class L> struct mp_transform_impl;

template<template<class...> class F, class L>
using mp_transform = typename mp_transform_impl<F, L>::type;

template<template<class...> class F, template<class...> class L, class... T>
struct mp_transform_impl<F, L<T...>>
{
    using type = L<F<T>...>;
};
Here we take advantage of the fact that pack expansion is built into the
language, so the
F<T>... part does all the iteration work for us.
We can now solve our original challenge: given an
std::tuple of types, return
an
std::tuple of pointers to these types:
using input = std::tuple<int, void, float>;
using expected = std::tuple<int*, void*, float*>;

using result = mp_transform<add_pointer, input>;

static_assert( std::is_same<result, expected>::value, "" );
mp_transform, part two
What if we had a pair of tuples as input, and had to produce the corresponding tuple of pairs? For example, given
using input = std::pair<std::tuple<X1, X2, X3>, std::tuple<Y1, Y2, Y3>>;
we had to produce
using expected = std::tuple<std::pair<X1, Y1>, std::pair<X2, Y2>, std::pair<X3, Y3>>;
We need to take the two lists, represented by tuples in the input, and combine
them pairwise by using
std::pair. If we think of
std::pair as a function
F, this task appears very similar to
mp_transform, except we need to use a
binary function and two lists.
Changing our unary transform algorithm into a binary one isn’t hard:
template<template<class...> class F, class L1, class L2> struct mp_transform2_impl;

template<template<class...> class F, class L1, class L2>
using mp_transform2 = typename mp_transform2_impl<F, L1, L2>::type;

template<template<class...> class F,
    template<class...> class L1, class... T1,
    template<class...> class L2, class... T2>
struct mp_transform2_impl<F, L1<T1...>, L2<T2...>>
{
    static_assert( sizeof...(T1) == sizeof...(T2),
        "The arguments of mp_transform2 should be of the same size" );

    using type = L1<F<T1,T2>...>;
};
and we can now do
using input = std::pair<std::tuple<X1, X2, X3>, std::tuple<Y1, Y2, Y3>>;
using expected = std::tuple<std::pair<X1, Y1>, std::pair<X2, Y2>, std::pair<X3, Y3>>;

using result = mp_transform2<std::pair, input::first_type, input::second_type>;

static_assert( std::is_same<result, expected>::value, "" );
again exploiting the similarity between metafunctions and ordinary class
templates such as
std::pair, this time in the other direction; we pass
std::pair where
mp_transform2 expects a metafunction.
Do we have to use separate transform algorithms for each arity though? If we
need a transform algorithm that takes a ternary function and three lists,
should we name it
mp_transform3? No, this is exactly why we put the function
first. We just have to change
mp_transform to be variadic:
template<template<class...> class F, class... L> struct mp_transform_impl;

template<template<class...> class F, class... L>
using mp_transform = typename mp_transform_impl<F, L...>::type;
and then add the unary and binary specializations:
template<template<class...> class F, template<class...> class L, class... T>
struct mp_transform_impl<F, L<T...>>
{
    using type = L<F<T>...>;
};

template<template<class...> class F,
    template<class...> class L1, class... T1,
    template<class...> class L2, class... T2>
struct mp_transform_impl<F, L1<T1...>, L2<T2...>>
{
    static_assert( sizeof...(T1) == sizeof...(T2),
        "The arguments of mp_transform should be of the same size" );

    using type = L1<F<T1,T2>...>;
};
We can also add ternary and further specializations.
Is it possible to implement the truly variadic
mp_transform, one that works
with an arbitrary number of lists? It is in principle, and I’ll show one
possible abridged implementation here for completeness:
template<template<class...> class F, class E, class... L> struct mp_transform_impl;

template<template<class...> class F, class... L>
using mp_transform = typename mp_transform_impl<F, mp_empty<L...>, L...>::type;

template<template<class...> class F, class L1, class... L>
struct mp_transform_impl<F, mp_true, L1, L...>
{
    using type = mp_clear<L1>;
};

template<template<class...> class F, class... L>
struct mp_transform_impl<F, mp_false, L...>
{
    using _first = F< typename mp_front_impl<L>::type... >;
    using _rest = mp_transform< F, typename mp_pop_front_impl<L>::type... >;

    using type = mp_push_front<_rest, _first>;
};
but will omit the primitives that it uses (one possible set of definitions is sketched after this list). These are:

- mp_true — an alias for std::integral_constant<bool, true>.
- mp_false — an alias for std::integral_constant<bool, false>.
- mp_empty<L...> — returns mp_true if all lists are empty, mp_false otherwise.
- mp_clear<L> — returns an empty list of the same type as L.
- mp_front<L> — returns the first element of L.
- mp_pop_front<L> — returns L without its first element.
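Here is one possible set of definitions for these omitted primitives, consistent with the conventions used so far. These are sketches rather than definitive implementations; mp_bool also reappears later in the article:

template<bool v> using mp_bool = std::integral_constant<bool, v>;

using mp_true = mp_bool<true>;
using mp_false = mp_bool<false>;

// mp_clear<L> returns an empty list of the same type as L.
template<class L> struct mp_clear_impl;

template<template<class...> class L, class... T>
struct mp_clear_impl<L<T...>>
{
    using type = L<>;
};

template<class L> using mp_clear = typename mp_clear_impl<L>::type;

// mp_front<L> returns the first element of L.
template<class L> struct mp_front_impl;

template<template<class...> class L, class T1, class... T>
struct mp_front_impl<L<T1, T...>>
{
    using type = T1;
};

template<class L> using mp_front = typename mp_front_impl<L>::type;

// mp_pop_front<L> returns L without its first element.
template<class L> struct mp_pop_front_impl;

template<template<class...> class L, class T1, class... T>
struct mp_pop_front_impl<L<T1, T...>>
{
    using type = L<T...>;
};

template<class L> using mp_pop_front = typename mp_pop_front_impl<L>::type;

// mp_empty<L...> returns mp_true if all of the lists are empty.
template<class... L> struct mp_empty_impl;

template<> struct mp_empty_impl<>
{
    using type = mp_true;
};

template<template<class...> class L1, class... T, class... R>
struct mp_empty_impl<L1<T...>, R...>
{
    using type = typename std::conditional<sizeof...(T) == 0,
        typename mp_empty_impl<R...>::type, mp_false>::type;
};

template<class... L> using mp_empty = typename mp_empty_impl<L...>::type;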
There is one interesting difference between the recursive
mp_transform
implementation and the language-based one.
mp_transform<add_pointer,
std::pair<int, float>> works with the
F<T>... implementation and fails
with the recursive one, because
std::pair is not a real type list and can
only hold exactly two types.
The infamous tuple_cat challenge
Eric Niebler, in his
Tiny
Metaprogramming Library article, gives the function
std::tuple_cat as a
kind of a metaprogramming challenge.
tuple_cat is a variadic template
function that takes a number of tuples and concatenates them into another
std::tuple. Eric gives a complete solution in his article; the key part of it is quoted later in this piece.
All right, challenge accepted. Let’s see what we can do.
As Eric explains, his implementation relies on the clever trick of packing the
input tuples into a tuple, creating two arrays of indices,
inner and
outer,
then indexing the outer tuple with the outer indices and the result, which is
one of our input tuples, with the inner indices.
So, for example, if tuple_cat is invoked as
std::tuple<int, short, long> t1;
std::tuple<> t2;
std::tuple<float, double, long double> t3;
std::tuple<void*, char*> t4;

auto res = tuple_cat(t1, t2, t3, t4);
we’ll create the tuple
std::tuple<
    std::tuple<int, short, long>,
    std::tuple<>,
    std::tuple<float, double, long double>,
    std::tuple<void*, char*>
> t{t1, t2, t3, t4};
and then extract the elements of t via
std::get<0>(std::get<0>(t)), // t1[0]
std::get<1>(std::get<0>(t)), // t1[1]
std::get<2>(std::get<0>(t)), // t1[2]
std::get<0>(std::get<2>(t)), // t3[0]
std::get<1>(std::get<2>(t)), // t3[1]
std::get<2>(std::get<2>(t)), // t3[2]
std::get<0>(std::get<3>(t)), // t4[0]
std::get<1>(std::get<3>(t)), // t4[1]
(
t2 is empty, so we take nothing from it.)
The first column of integers is the
outer array, the second one - the
inner
array, and these are what we need to compute. But first, let’s deal with the
return type of
tuple_cat.
The return type of
tuple_cat is just the concatenation of the arguments,
viewed as type lists. The metaprogramming algorithm that concatenates lists is
called
meta::concat
in Eric Niebler’s Meta library, but I’ll
call it
mp_append, after its classic Lisp name.
(Lisp is today’s equivalent of Latin. Educated people are supposed to have studied and forgotten it.)
template<class... L> struct mp_append_impl;

template<class... L> using mp_append = typename mp_append_impl<L...>::type;

template<> struct mp_append_impl<>
{
    using type = mp_list<>;
};

template<template<class...> class L, class... T>
struct mp_append_impl<L<T...>>
{
    using type = L<T...>;
};

template<template<class...> class L1, class... T1,
    template<class...> class L2, class... T2, class... Lr>
struct mp_append_impl<L1<T1...>, L2<T2...>, Lr...>
{
    using type = mp_append<L1<T1..., T2...>, Lr...>;
};
That was fairly easy. There are other ways to implement
mp_append, but this
one demonstrates how the language does most of the work for us via pack
expansion. This is a common theme in C++11.
Note how
mp_append returns the same list type as its first argument. Of
course, in the case in which no arguments are given, there is no first argument
from which to take the type, so I’ve arbitrarily chosen to return an empty
mp_list.
We’re now ready with the declaration of
tuple_cat:
template<class... Tp,
    class R = mp_append<typename std::remove_reference<Tp>::type...>>
R tuple_cat( Tp &&... tp );
The reason we need
remove_reference is because of the rvalue reference
parameters, used to implement perfect forwarding. If the argument is an lvalue,
such as for example
t1 above, its corresponding type will be a reference to a
tuple —
std::tuple<int, short, long>& in
t1's case. Our primitives do
not recognize references to tuples as type lists, so we need to strip them off.
There are two problems with our return type computation though. One, what if
tuple_cat is called without any arguments? We return
mp_list<> in that
case, but the correct result is
std::tuple<>.
Two, what if we call
tuple_cat with a first argument that is a
std::pair?
We’ll try to append more elements to
std::pair, and it will fail.
We can solve both our problems by using an empty tuple as the first argument to
mp_append:
template<class... Tp,
    class R = mp_append<std::tuple<>, typename std::remove_reference<Tp>::type...>>
R tuple_cat( Tp &&... tp );
With the return type taken care of, let’s now move on to computing inner. We have
[x1, x2, x3], [], [y1, y2, y3], [z1, z2]
as input and we need to output
[0, 0, 0, 2, 2, 2, 3, 3]
which is the concatenation of
[0, 0, 0], [], [2, 2, 2], [3, 3]
Here each tuple is the same size as the input, but is filled with a constant that represents its index in the argument list. The first tuple is filled with 0, the second with 1, the third with 2, and so on.
We can achieve this result if we first compute a list of indices, in our case
[0, 1, 2, 3], then use binary
mp_transform on the two lists
[[x1, x2, x3], [], [y1, y2, y3], [z1, z2]] [0, 1, 2, 3]
and a function which takes a list and an integer (in the form of an
std::integral_constant) and returns a list that is the same size as the
original, but filled with the second argument.
We’ll call this function
mp_fill, after
std::fill.
Functional programmers will immediately realize that
mp_fill is
mp_transform with a function that returns a constant, and here’s an
implementation along these lines:
template<class V> struct mp_constant
{
    template<class...> using apply = V;
};

template<class L, class V>
using mp_fill = mp_transform<mp_constant<V>::template apply, L>;
Here’s an alternate implementation:
template<class L, class V> struct mp_fill_impl;

template<template<class...> class L, class... T, class V>
struct mp_fill_impl<L<T...>, V>
{
    template<class...> using _fv = V;
    using type = L<_fv<T>...>;
};

template<class L, class V> using mp_fill = typename mp_fill_impl<L, V>::type;
These demonstrate different styles and choosing one over the other is largely a
matter of taste here. In the first case, we combine existing primitives; in the
second case, we "inline"
mp_const and even
mp_transform in the body of
mp_fill_impl.
Most C++11 programmers will probably find the second implementation easier to read.
We can now
mp_fill, but we still need the
[0, 1, 2, 3] index sequence. We
could write an algorithm
mp_iota for that (named after
std::iota), but it so
happens that C++14 already has a standard way of generating an index
sequence, called
std::make_index_sequence.
Since Eric’s original solution makes use of
make_index_sequence, let’s follow
his lead.
Technically, this takes us outside of C++11, but
make_index_sequence is not
hard to implement (if efficiency is not a concern):
template<class T, T... Ints> struct integer_sequence
{
};

template<class S> struct next_integer_sequence;

template<class T, T... Ints>
struct next_integer_sequence<integer_sequence<T, Ints...>>
{
    using type = integer_sequence<T, Ints..., sizeof...(Ints)>;
};

template<class T, T I, T N> struct make_int_seq_impl;

template<class T, T N>
using make_integer_sequence = typename make_int_seq_impl<T, 0, N>::type;

template<class T, T I, T N> struct make_int_seq_impl
{
    using type = typename next_integer_sequence<
        typename make_int_seq_impl<T, I+1, N>::type>::type;
};

template<class T, T N> struct make_int_seq_impl<T, N, N>
{
    using type = integer_sequence<T>;
};

template<std::size_t... Ints>
using index_sequence = integer_sequence<std::size_t, Ints...>;

template<std::size_t N>
using make_index_sequence = make_integer_sequence<std::size_t, N>;
We can now obtain an
index_sequence<0, 1, 2, 3>:
template<class... Tp,
    class R = mp_append<std::tuple<>, typename std::remove_reference<Tp>::type...>>
R tuple_cat( Tp &&... tp )
{
    std::size_t const N = sizeof...(Tp);

    // inner
    using seq = make_index_sequence<N>;
}
but
make_index_sequence<4> returns
integer_sequence<std::size_t, 0, 1, 2,
3>, which is not a type list. In order to work with it, we need to convert it
to a type list, so we’ll introduce a function
mp_from_sequence that does
that.
template<class S> struct mp_from_sequence_impl;

template<template<class T, T... I> class S, class U, U... J>
struct mp_from_sequence_impl<S<U, J...>>
{
    using type = mp_list<std::integral_constant<U, J>...>;
};

template<class S> using mp_from_sequence = typename mp_from_sequence_impl<S>::type;
We can now compute the two lists that we wanted to transform with
mp_fill:] return R{}; }
and finish the job of computing inner:

    using list3 = mp_transform<mp_fill, list1, list2>;
    // list3: [[0, 0, 0], [], [2, 2, 2], [3, 3]]

    using inner = mp_rename<list3, mp_append>;
    // inner: [0, 0, 0, 2, 2, 2, 3, 3]
For
outer, we again have
[x1, x2, x3], [], [y1, y2, y3], [z1, z2]
as input and we need to output
[0, 1, 2, 0, 1, 2, 0, 1]
which is the concatenation of
[0, 1, 2], [], [0, 1, 2], [0, 1]
The difference here is that instead of filling the tuple with a constant value,
we need to fill it with increasing values, starting from 0, that is, with the
result of
make_index_sequence<N>, where
N is the number of elements.
The straightforward way to do that is to just define a metafunction
F that
does what we want, then use
mp_transform to apply it to the input:
template<class N> using mp_iota = mp_from_sequence<make_index_sequence<N::value>>;

template<class L> using F = mp_iota<mp_size<L>>;

template<class... Tp,
    class R = mp_append<std::tuple<>, typename std::remove_reference<Tp>::type...>>
R tuple_cat( Tp &&... tp )
{
    std::size_t const N = sizeof...(Tp);

    // outer
    using list1 = mp_list<typename std::remove_reference<Tp>::type...>;

    using list2 = mp_transform<F, list1>;
    // list2: [[0, 1, 2], [], [0, 1, 2], [0, 1]]

    using outer = mp_rename<list2, mp_append>;
    // outer: [0, 1, 2, 0, 1, 2, 0, 1]

    return R{};
}
Well that was easy. Surprisingly easy. The one small annoyance is that we can’t
define
F inside
tuple_cat - templates can’t be defined in functions.
Let’s put everything together.
template<class N> using mp_iota = mp_from_sequence<make_index_sequence<N::value>>;

template<class L> using F = mp_iota<mp_size<L>>;

template<class R, class... Is, class... Ks, class Tp>
R tuple_cat_( mp_list<Is...>, mp_list<Ks...>, Tp tp )
{
    return R{ std::get<Ks::value>(std::get<Is::value>(tp))... };
}

template<class... Tp,
    class R = mp_append<std::tuple<>, typename std::remove_reference<Tp>::type...>>
R tuple_cat( Tp &&... tp )
{
    std::size_t const N = sizeof...(Tp);

    // inner
    using list1 = mp_list<typename std::remove_reference<Tp>::type...>;
    using list2 = mp_from_sequence<make_index_sequence<N>>;
    using list3 = mp_transform<mp_fill, list1, list2>;
    using inner = mp_rename<list3, mp_append>;

    // outer
    using list4 = mp_transform<F, list1>;
    // list4: [[0, 1, 2], [], [0, 1, 2], [0, 1]]

    using outer = mp_rename<list4, mp_append>;
    // outer: [0, 1, 2, 0, 1, 2, 0, 1]

    return tuple_cat_<R>( inner(), outer(),
        std::forward_as_tuple( std::forward<Tp>(tp)... ) );
}
This almost compiles, except that our
inner happens to be a
std::tuple, but
our helper function expects an
mp_list. (
outer is already an
mp_list, by
sheer luck.) We can fix that easily enough.
return tuple_cat_<R>( mp_rename<inner, mp_list>(), outer(), std::forward_as_tuple( std::forward<Tp>(tp)... ) );
Let’s define a
print_tuple function and see if everything checks out.
template<int I, int N, class... T> struct print_tuple_
{
    void operator()( std::tuple<T...> const & tp ) const
    {
        using Tp = typename std::tuple_element<I, std::tuple<T...>>::type;

        print_type<Tp>( " ", ": " );

        std::cout << std::get<I>( tp ) << ";";

        print_tuple_< I+1, N, T... >()( tp );
    }
};

template<int N, class... T> struct print_tuple_<N, N, T...>
{
    void operator()( std::tuple<T...> const & ) const
    {
    }
};

template<class... T> void print_tuple( std::tuple<T...> const & tp )
{
    std::cout << "{";
    print_tuple_<0, sizeof...(T), T...>()( tp );
    std::cout << " }\n";
}

int main()
{
    std::tuple<int, long> t1{ 1, 2 };
    std::tuple<> t2;
    std::tuple<float, double, long double> t3{ 3, 4, 5 };
    std::pair<void const*, char const*> t4{ "pv", "test" };

    using expected = std::tuple<int, long, float, double, long double,
        void const*, char const*>;

    auto result = ::tuple_cat( t1, t2, t3, t4 );

    static_assert( std::is_same<decltype(result), expected>::value, "" );

    print_tuple( result );
}
Output:
{ int: 1; long: 2; float: 3; double: 4; long double: 5; void const*: 0x407086; char const*: test; }
Seems to work. But there’s at least one error left. To see why, replace the first tuple
std::tuple<int, long> t1{ 1, 2 };
with a pair:
std::pair<int, long> t1{ 1, 2 };
We now get an error at
using inner = mp_rename<list3, mp_append>;
because the first element of
list3 is an
std::pair, which
mp_append tries
and fails to use as its return type.
There are two ways to fix that. The first one is to apply the same trick we
used for the return type, and insert an empty
mp_list at the front of
list3, which
mp_append will use as a return type:
using inner = mp_rename<mp_push_front<list3, mp_list<>>, mp_append>;
The second way is to just convert all inputs to mp_list:
using list1 = mp_list< mp_rename<typename std::remove_reference<Tp>::type, mp_list>...>;
In both cases, inner will now be an
mp_list, so we can omit the
mp_rename
in the call to
tuple_cat_.
We’re done. The results hopefully speak for themselves.
Higher order metaprogramming, or lack thereof
Perhaps by now you’re wondering why this article is called "Simple C++11 metaprogramming", since what we covered so far wasn’t particularly simple.
The relative simplicity of our approach stems from the fact that we’ve not
been doing any higher order metaprogramming, that is, we haven’t introduced any
primitives that return metafunctions, such as
compose,
bind, or a lambda
library.
I posit that such higher order metaprogramming is, in the majority of cases, not necessary in C++11. Consider, for example, Eric Niebler’s solution given above:
using outer = typelist_cat_t<
    typelist_transform_t<
        typelist<as_typelist_t<Tuples>...>,
        meta_compose<
            meta_quote<as_typelist_t>,
            meta_quote_i<std::size_t, make_index_sequence>,
            meta_quote<typelist_size_t>
        >
    >
>;
The
meta_compose expression takes three other ("quoted") metafunctions and
creates a new metafunction that applies them in order. Eric uses this example
as motivation to introduce the concept of a "metafunction class" and then to
supply various primitives that operate on metafunction classes.
But when we have metafunctions
F,
G and
H, instead of using
meta_compose, in C++11 we can just do
template<class... T> using Fgh = F<G<H<T...>>>;
and that’s it. The language makes defining composite functions easy, and there
is no need for library support. If the functions to be composed are
as_typelist_t,
std::make_index_sequence and
typelist_size_t, we just
define
template<class... T> using F = as_typelist_t<std::make_index_sequence<typelist_size_t<T...>::value>>;
Similarly, if we need a metafunction that will return
sizeof(T) < sizeof(U),
there is no need to enlist a metaprogramming lambda library as in
lambda<_a, _b, less<sizeof_<_a>, sizeof_<_b>>>>
We could just define it inline:
template<class T, class U> using sizeof_less = mp_bool<(sizeof(T) < sizeof(U))>;
One more thing
Finally, let me show the implementations of
mp_count and
mp_count_if, for
no reason other than I find them interesting.
mp_count<L, V> returns the
number of occurrences of the type
V in the list
L;
mp_count_if<L, P>
counts the number of types in
L for which
P<T> is
true.
As a first step, I’ll implement
mp_plus.
mp_plus is a variadic (not just
binary) metafunction that returns the sum of its arguments.
template<class... T> struct mp_plus_impl;

template<class... T> using mp_plus = typename mp_plus_impl<T...>::type;

template<> struct mp_plus_impl<>
{
    using type = std::integral_constant<int, 0>;
};

template<class T1, class... T> struct mp_plus_impl<T1, T...>
{
    static constexpr auto _v = T1::value + mp_plus<T...>::value;

    using type = std::integral_constant<
        typename std::remove_const<decltype(_v)>::type, _v>;
};
Now that we have
mp_plus,
mp_count is just
template<class L, class V> struct mp_count_impl;

template<template<class...> class L, class... T, class V>
struct mp_count_impl<L<T...>, V>
{
    using type = mp_plus<std::is_same<T, V>...>;
};

template<class L, class V> using mp_count = typename mp_count_impl<L, V>::type;
This is another illustration of the power of parameter pack expansion. It’s a
pity that we can’t use pack expansion in
mp_plus as well, to obtain
T1::value + T2::value + T3::value + T4::value + ...
directly. It would have been nice for
T::value + ... to have been
supported, and it appears that in C++17, it will be.
mp_count_if is similarly straightforward:
template<class L, template<class...> class P> struct mp_count_if_impl;

template<template<class...> class L, class... T, template<class...> class P>
struct mp_count_if_impl<L<T...>, P>
{
    using type = mp_plus<P<T>...>;
};

template<class L, template<class...> class P>
using mp_count_if = typename mp_count_if_impl<L, P>::type;
at least if we require
P to return
bool. If not, we’ll have to coerce
P<T>::value to 0 or 1, or the count will not be correct.
template<bool v> using mp_bool = std::integral_constant<bool, v>;

template<class L, template<class...> class P> struct mp_count_if_impl;

template<template<class...> class L, class... T, template<class...> class P>
struct mp_count_if_impl<L<T...>, P>
{
    using type = mp_plus<mp_bool<P<T>::value != 0>...>;
};

template<class L, template<class...> class P>
using mp_count_if = typename mp_count_if_impl<L, P>::type;
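As a quick check, counting the floating-point types in a list with a standard trait as the predicate:

static_assert( mp_count_if<mp_list<int, float, double, void>,
    std::is_floating_point>::value == 2, "" );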
The last primitive I’ll show is
mp_contains.
mp_contains<L, V> returns
whether the list
L contains the type
V:
template<class L, class V> using mp_contains = mp_bool<mp_count<L, V>::value != 0>;
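Usage is as expected, on mp_list and foreign lists alike:

static_assert( mp_contains<std::tuple<int, float>, float>::value, "" );
static_assert( !mp_contains<std::tuple<int, float>, char>::value, "" );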
At first sight, this implementation appears horribly naive and inefficient — why would we need to count all the occurrences just to throw that away if we’re
only interested in a boolean result — but it’s actually pretty competitive and
perfectly usable. We just need to add one slight optimization to
mp_plus, the
engine behind
mp_count and
mp_contains:
template<class T1, class T2, class T3, class T4, class T5, class T6, class T7,
    class T8, class T9, class T10, class... T>
struct mp_plus_impl<T1, T2, T3, T4, T5, T6, T7, T8, T9, T10, T...>
{
    static constexpr auto _v = T1::value + T2::value + T3::value + T4::value
        + T5::value + T6::value + T7::value + T8::value + T9::value
        + T10::value + mp_plus<T...>::value;

    using type = std::integral_constant<
        typename std::remove_const<decltype(_v)>::type, _v>;
};
This cuts the number of template instantiations approximately tenfold.
Conclusion
I have outlined an approach to metaprogramming in C++11 that
takes advantage of variadic templates, parameter pack expansion, and template aliases;
operates on any variadic template
L<T...>, treating it as its fundamental data structure, without mandating a specific type list representation;
uses template aliases as its metafunctions, with the expression
F<T...>serving as the equivalent of a function call;
exploits the structural similarity between the data structure
L<T...>and the metafunction call
F<T...>;
leverages parameter pack expansion as much as possible, instead of using the traditional recursive implementations;
relies on inline definitions of template aliases for function composition, instead of providing library support for this task.
Further reading
Part 2 is now available, in which I show algorithms that allow us to treat type lists as sets, maps, and vectors, and demonstrate various C++11 implementation techniques in the process.
Stefan Monnier <address@hidden> writes:

[resolution]

>>> -.
>
> Here's a scenario:
> - namespaced packages A and B both locally define a function `toto'.
> - non-namespaced package C comes along with a symbol `toto' somewhere in
>   its code, suddenly causing A and B's `toto' to be global rather
>   than local.

I don't think this is a serious problem personally. But I'm also not wedded to global-obarray-first.

> Note that instead of "non-namespaced package C", we could have some
> package which uses symbols as "uniquified strings" and which uses the
> global obarray for it and might occasionally intern `toto' in the course
> of its normal execution.

Again, it only matters if it's a non-namespaced package that does it.

> IOW I think we should instead first look in the local obarray (over
> which the coder does have control) and if that fails then look in the
> global obarray.

I am not wedded to the proposal of using the global obarray first. The rules for interning are slightly more complicated in that case:

- given a string X
- look up X in the local obarray
  - if it exists, return the symbol
- else
  - look up X in the global obarray
    - if it exists, return the symbol
  - else
    - add the symbol to the local obarray

The only problem I see here is the possibility of problems with concurrency. The whole operation above would have to be atomic, and it involves lookups in two separate data structures. But since Emacs doesn't have concurrency yet, it would be a very bad idea at this stage to add unfettered concurrency of the sort that would cause a problem here (if there were a GIL-less threads implementation, for example), and the existing concurrency branch is tied to a GIL, I really don't think that is actually a real problem we need to worry about. Although I bet that's what both Guido and Matz said when they were designing the namespace bits of Python and Ruby.

[import]

>> ))
>
> Should this equality still stand if I (fset 'nic::foo 'blabla)?
> I.e. is it one and the same symbol?

I guess this needs more careful specification, because that would not be true of aliases. My feeling is that an import should be like the creation of an alias.

>> .
>
> Indeed, my impression is that you inevitably get to this kind
> of situation, which you seemed to dislike. I personally don't find it
> problematic, not even if we generalize it to some arbitrary graph, with
> cycles and all.

It's not that I don't like it per se. I just want this to be easy to implement in the first instance. If the implementation gets more difficult later I have no problem with that. But initial low cost is a good thing.

Nic
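For concreteness, a minimal sketch of the local-first lookup rules described above, using Emacs's two-argument intern/intern-soft; the function name ns-intern and the separate obarray arguments are hypothetical:

(defun ns-intern (name local-obarray global-obarray)
  "Sketch: intern NAME, preferring LOCAL-OBARRAY over GLOBAL-OBARRAY."
  (or (intern-soft name local-obarray)   ; 1. already in the local obarray?
      (intern-soft name global-obarray)  ; 2. already in the global obarray?
      (intern name local-obarray)))      ; 3. otherwise create it locally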
Catching an exception
Posted on March 1st, 2001.
To see how an exception is caught, you must first understand the concept of a guarded region , which is a section of code that might produce exceptions, and is followed by the code to handle those exceptions.
The try block
If you’re inside a method and you throw an exception (or another method you call within this method throws an exception), that method will exit in the process of throwing. If you don’t want a throw to leave the method, you can set up a special block within the method to capture the exception: the try block, so called because you “try” your various method calls there.
Exception handlers
The thrown exception ends up in an exception handler: a catch clause that immediately follows the try block and declares the exception type it handles.
Note that, within the try block, a number of different method calls might generate the same exception, but you need only one handler.
Termination vs. resumption
There are two basic models in exception-handling theory. In termination (which is what Java and C++ support), you assume the error is so critical that there’s no way to get back to where the exception occurred. Whoever threw the exception decided there was no way to salvage the situation, and they don’t want to come back. The alternative is called resumption: the exception handler is expected to do something to rectify the situation, after which the faulting code is retried, presuming success the second time.
Historically, programmers using operating systems that supported resumptive exception handling eventually ended up using termination-like code and skipping resumption. So although resumption sounds attractive at first, it seems it isn’t quite so useful in practice. The dominant reason is probably the coupling that results: your handler must often be aware of where the exception is thrown from and contain non-generic code specific to the throwing location. This makes the code difficult to write and maintain, especially for large systems where the exception can be generated from many points.
The exception specification
Java requires you to inform the client programmer, who calls your method, of the exceptions that might be thrown from the method. This list, which uses the throws keyword followed by the exception types, is the exception specification.[42]
There is one place you can lie: you can claim to throw an exception that you don’t actually throw. The compiler takes your word for it, which acts as a placeholder so you can start throwing that exception later without changing existing code.
Catching any exception
It is possible to create a handler that catches any type of exception. You do this by catching the base-class exception type Exception (there are other types of base exceptions, but Exception is the base that’s pertinent to virtually all programming activities):
catch(Exception e) { System.out.println("caught an exception"); }
This will catch any exception, so if you use it you’ll want to put it at the end of your list of handlers to avoid pre-empting any exception handlers that might otherwise follow it.
Since the Exception class is the base of all the exception classes that are important to the programmer, you don’t get much specific information about the exception, but you can call the methods that come from its base type Throwable:
String getMessage( )
Gets the detail message.
String toString( )
Returns a short description of the Throwable, including the detail message if there is one.
void printStackTrace( )
void printStackTrace(PrintStream)
Prints the Throwable and the Throwable’s call stack trace. The call stack shows the sequence of method calls that brought you to the point at which the exception was thrown.
The first version prints to standard error, the second prints to a stream of your choice. If you’re working under Windows, you can’t redirect standard error, so you might want to use the second version and send the results to System.out; that way the output can be redirected any way you want.
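For example, to route the trace through standard output rather than standard error:

e.printStackTrace(System.out);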
Here’s an example that shows the use of the Exception methods: (See page 97 if you have trouble executing this program.)
//: ExceptionMethods.java
// Demonstrating the Exception Methods
package c09;

public class ExceptionMethods {
  public static void main(String[] args) {
    try {
      throw new Exception("Here's my Exception");
    } catch(Exception e) {
      System.out.println("Caught Exception");
      System.out.println("e.getMessage(): " + e.getMessage());
      System.out.println("e.toString(): " + e.toString());
      System.out.println("e.printStackTrace():");
      e.printStackTrace();
    }
  }
} ///:~
The output for this program is:
Caught Exception
e.getMessage(): Here's my Exception
e.toString(): java.lang.Exception: Here's my Exception
e.printStackTrace():
java.lang.Exception: Here's my Exception
        at ExceptionMethods.main
You can see that the methods provide successively more information – each is effectively a superset of the previous one.
Rethrowing an exception
Sometimes you’ll want to rethrow the exception that you just caught, particularly when you use Exception to catch any exception. Since you already have the handle to the current exception, you can simply re-throw that handle:
catch(Exception e) {
  System.out.println("An exception was thrown");
  throw e;
}
If you simply re-throw the current exception, the information that you print about that exception in printStackTrace( ) will pertain to the exception’s origin, not the place where you re-throw it. If you want to install new stack trace information, you can do so by calling fillInStackTrace( ), which returns an exception object that it creates by stuffing the current stack information into the old exception object. Here’s what it looks like:
//: Rethrowing.java
// Demonstrating fillInStackTrace()

public class Rethrowing {
  public static void f() throws Exception {
    System.out.println("originating the exception in f()");
    throw new Exception("thrown from f()");
  }
  public static void g() throws Throwable {
    try {
      f();
    } catch(Exception e) {
      System.out.println("Inside g(), e.printStackTrace()");
      e.printStackTrace();
      throw e; // 17
      // throw e.fillInStackTrace(); // 18
    }
  }
  public static void main(String[] args) throws Throwable {
    try {
      g();
    } catch(Exception e) {
      System.out.println("Caught in main, e.printStackTrace()");
      e.printStackTrace();
    }
  }
} ///:~
The important line numbers are marked inside of comments. With line 17 un-commented (as shown), the printed stack traces point back to f( ), where the exception originated.
With line 17 commented and line 18 un-commented, fillInStackTrace( ) installs new stack-trace information, so the trace printed in main( ) points to g( ), where the exception was rethrown.
The class Throwable must appear in the exception specification for g( ) and main( ) because fillInStackTrace( ) produces a handle to a Throwable object. A Throwable that is not an Exception also slips past a catch(Exception) clause:
//: ThrowOut.java
public class ThrowOut {
  public static void main(String[] args) throws Throwable {
    try {
      throw new Throwable();
    } catch(Exception e) {
      // Never executed: a plain Throwable is not an Exception.
      System.out.println("Caught in main()");
    }
  }
} ///:~
It’s also possible to rethrow an exception that’s different from the one you caught:
//: RethrowNew.java
// Rethrow a different object from the one that was caught

public class RethrowNew {
  public static void f() throws Exception {
    System.out.println("originating the exception in f()");
    throw new Exception("thrown from f()");
  }
  public static void main(String[] args) {
    try {
      f();
    } catch(Exception e) {
      System.out.println("Caught in main, e.printStackTrace()");
      e.printStackTrace();
      throw new NullPointerException("from main");
    }
  }
} ///:~
The output is:
originating the exception in f()
Caught in main, e.printStackTrace()
java.lang.Exception: thrown from f()
        at RethrowNew.f(RethrowNew.java:8)
        at RethrowNew.main(RethrowNew.java:13)
java.lang.NullPointerException: from main
        at RethrowNew.main(RethrowNew.java:18)
The final exception knows only that it came from main( ), and not from f( ). Note that Throwable isn’t necessary in any of the exception specifications.
You never have to worry about cleaning up the previous exception, or any exceptions for that matter. They’re all heap-based objects created with new, so the garbage collector automatically cleans them all up.
[42] This is a significant improvement over C++ exception handling, which doesn’t catch violations of exception specifications until run time, when it’s not very useful.
So after my first one broke, and now with the POWER OF ARDUINO, I can finally make a MK.2 version of my Ejection Lever!
Step 1: Parts
Things you're gonna need:
- An Arduino Pro Micro (they're very cheap)
- A bathroom pull switch
- An electrical box (or whatever you call them)
- A piece of rope
- 2 dupont cables
- Soldering iron
And you might need:
- Some zipties
- Tape/electrical tape
Step 2: The Switch
First we need to cut the dupont cables and solder them to the ends of the switch. Then wrap some electrical tape around them so they don't make contact with each other.
Step 3: Modifying the Box
So the electrical box needs some modification for the Arduino and switch to sit in it. There was a little lip in the middle that needed to go, and we need a hole for our Arduino USB cable.
Step 4: Putting Everything in the Box
Since I haven't really thought this through, I just used zip-ties to mount the Arduino to the box :P The switch gets secured with zip-ties as well, even though I pull on it, but just for security.

Connect the wires to digital pin 4 and ground on the Arduino; it doesn't matter which way around.

Then I used some tape to fix the switch to my rope, as you can see in the picture.
Step 5: Adding It Under Your Table
So everything is mounted now, so now to secure it under my table!
I just used some screws. I had a plan to be able to remove it if it's in the way, but it doesn't bother me that much. You can come up with your own idea to mount it, maybe even under your chair!
Step 6: The Code
So the best thing about this design is that it now has an Arduino! So now we can let it do anything, instead of wiring the whole thing to a wireless mouse.

The code is pretty simple, but as a beginner I had a hard time coming up with it. Here it is!

The Arduino software should have the Keyboard library installed already, but if not, Google probably has the answer ;)

Also, a small side note: the Arduino needs to be a Pro Micro or Leonardo, because they have the right processor for the keyboard function to work.
#include <Keyboard.h>

int chain = 4;      // the pull switch is wired to digital pin 4
int state = 0;      // current reading of the switch
int old_state = 0;  // previous reading, so we only type on a change

void setup() {
  pinMode(chain, INPUT_PULLUP);   // internal pull-up: the switch just shorts the pin to ground
  Keyboard.begin();
  old_state = digitalRead(chain); // remember the starting position so we don't eject on boot
}

void loop() {
  state = digitalRead(chain);
  if (state != old_state) {  // the lever was pulled (or released)
    Keyboard.print("e");     // tapping 'e' three times is the eject sequence
    delay(100);
    Keyboard.print("e");
    delay(100);
    Keyboard.print("e");
    old_state = state;
  }
}
Step 7: You're Done!
And you're done! Have fun!
If you have any questions, let me know below!
Looking at ASP.NET MVC 5.1 and Web API 2.1 - Part 1 - Overview and Enums
This is the first in a four part series covering ASP.NET MVC 5.1 and Web API 2.1
ASP.NET MVC 5.1, Web API 2.1 and Web Pages 3.1 were released on January 20. I call it the star-dot-one release, not sure if that one's going to stick. Here are the top links to find out more:
- The announcement blog post: Announcing the Release of ASP.NET MVC 5.1, ASP.NET Web API 2.1 and ASP.NET Web Pages 3.1
Release notes
- ASP.NET MVC 5.1 release notes
- ASP.NET Web API release notes
- ASP.NET Web Pages 3.1 is a bug fix release, here's the list of fixed bugs
Let's run through what's involved in getting them and trying some of the new features.
Nothing to Install, just NuGet package updates
As I mentioned in my last post, ASP.NET has moved away from being a "big thing" that you install every few years. The ASP.NET project templates are now mostly a collection of composable NuGet packages, which can be updated more frequently and used without needing to install anything that will affect your dev environment, other projects you're working on, your server environment, or other applications on your server.
You don't need to wait for your hosting provider to support ASP.NET MVC 5.1, ASP.NET Web API 2.1 or ASP.NET Web Pages 3.1 - if they supported 5/2/3, they support 5.1/2.1/3.1. Put more simply: if your server supports ASP.NET 4.5, you're set.
However, there are some new features for ASP.NET MVC 5.1 views that require you to be running the most recent Visual Studio update to get editing support. You're installing the Visual Studio updates when they come out so that's not a problem, right?
- For Visual Studio 2012, you should have ASP.NET and Web Tools 2013.1 for Visual Studio 2012. You'd need this for ASP.NET MVC 5 support in Visual Studio 2012, so no real change there.
- For Visual Studio 2013, you should have Visual Studio 2013 Update 1. This update is needed to get nice editor support for the new ASP.NET MVC 5.1 Razor View features (e.g. Bootstrap overloads).
Okay, Let's Have a Look Then
Game plan: I'm going to take an ASP.NET MVC 5 + Web API 2 project, update the NuGet packages, and then throw some of my favorite features in there.
In this case, I'm opting for the "mostly Web API template" since it includes both MVC and Web API, and it includes help pages right out of the box. I could go with "mostly MVC" + Web API, but then I'd need to install the Web API Help Page NuGet package and I might strain a muscle.
Now I'll open the Manage NuGet Packages dialog and check for updates. Yup, there they are.
Since this is a throw-away project I'll throw caution to the wind and click Update All. If this were a real project, I might just update the three new releases so as not to pick an unnecessary fight with JavaScript libraries. But I'm feeling lucky today so Update All it is.
Wow, look at them go! jQuery 2.0.3 even. It's a party. (anti-party disclaimer for those who might be getting carsick: I didn't have to update to jQuery 2.0.3 or any of that other stuff to use the 5.1/2.1 stuff).
Enum Support in ASP.NET MVC Views
Okay, I'll start by creating a Person model class with a Salutation enum:
using System.ComponentModel.DataAnnotations;

namespace StarDotOne.Models
{
    public class Person
    {
        public int Id { get; set; }
        public Salutation Salutation { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public int Age { get; set; }
    }

    //I guess technically these are called honorifics
    public enum Salutation
    {
        [Display(Name = "Mr.")]
        Mr,
        [Display(Name = "Mrs.")]
        Mrs,
        [Display(Name = "Ms.")]
        Ms,
        [Display(Name = "Dr.")]
        Doctor,
        [Display(Name = "Prof.")]
        Professor,
        Sir,
        Lady,
        Lord
    }
}
Note that I'm using the Display attribute on a few that I want to abbreviate.
Next, I delete my HomeController and views and scaffold a new HomeController using the Person class. Caution to the wind being our theme, I'll run it.
Oh no! No dropdown on Salutation!
Just kidding. That's to be expected. To get the dropdown, we need to change the scaffolded view code for the Salutation from the generic Html.EditorFor to use the new Html.EnumDropDownListFor helper.
So in my Create.cshtml, I need to change this line:
@Html.EditorFor(model => model.Salutation)
to this:
@Html.EnumDropDownListFor(model => model.Salutation)
Okay, with that done I'll refresh the page:
And there it is.
"Now, Jon," you say, "That's really nice, but it would have been absolutely perfect if the scaffolder or EditorFor or something had seen the Enum property and just done the right thing."
You're right. I'm told that will all magically work in an update on the way soon. For now, though, it's easy to get that behavior using some simple EditorTemplates and DisplayTemplates. You can find examples of them in this EnumSample on CodePlex. So I grabbed those templates and copied them into the /Views/Shared directory in my project:
And I'll change my Create.cshtml view back how it was originally scaffolded, using Html.EditorFor. That way the view engine will look for a matching EditorTemplate for the object type, find Enum.cshtml, and use that to render all Enum model properties.
Blam!
Okay, one more fun thing in that EnumSample. There's an override in Html.EditorFor that lets you specify the EditorTemplate you'd like to be used. So I'll change that line to this:
@Html.EditorFor(model => model.Salutation, templateName: "Enum-radio")
And now we are truly dropping science like Galileo dropped the orange:
Recap so far:
- We updated to the new NuGet packages
- We saw that we can now use a new helper to render dropdowns for enums: Html.EnumDropDownListFor
- We saw that we can use EditorTemplates (and, trust me, DisplayTemplates as well) to encapsulate that so any call to Html.EditorFor will intelligently display enum properties
Here's the next post in the series: Looking at ASP.NET MVC 5.1 and Web API 2.1 - Part 2 - Attribute Routing with Custom Constraints
I am just getting started with Rake. I am trying to create a rake task
from within a rails plugin to copy some asset files (inside /assets in
my plugin folder) to my rails application /public folder.
How can I achieve this?
I am not sure whether to use FileUtils or File. Where am I, in Rake terms? No matter what I do, it keeps saying it can't find the file. Do I need to use File.join, or is there a better Rake way? Any resources out there on how to use Rake for file-oriented tasks like this? Should I use a generator instead?
require 'ftools' # file tools

desc "Installs famfam icons and stylesheets in application"
namespace :famfam do
  desc "famfam icons and stylesheets in application"
  task :install => :environment do
    puts "Copying famfam icons from ../assets/icons to " + RAILS_ROOT +
      "/public/styles"
    FileUtils.cd '..'
    puts FileUtils.pwd
    puts File.dirname(__FILE__)
    File.cp "/assets/icons", RAILS_ROOT + "/public/styles"
  end
end
end
US6332195B1 - Secure server utilizing separate protocol stacks
This application is a continuation of U.S. patent application Ser. No. 08/605,320, filed Feb. 9, 1996, now U.S. Pat. No. 5,913,024.
1. Field of the Invention
The present invention relates to computer security, and more particularly, to an apparatus and method for providing increased computer security to commercial transactions across the Internet.
2. Background Information

Type enforcement is described by Boebert et al. in "SYSTEM AND METHOD FOR PROVIDING SECURE INTERNETWORK SERVICES", U.S. patent application Ser. No. 08/322,078, the discussion of which is hereby incorporated by reference. Boebert teaches that modifications can be made to the kernel of the operating system in order to add type enforcement protections to the operating system kernel. This protection mechanism can be added to any other program by modifications to the program code made prior to compiling. It cannot, however, be used to add type enforcement protection to program code after that program code has been compiled.
As use of the Internet has grown, companies are increasingly interested in providing goods and services across the Internet. Software companies such as Netscape have responded by providing commerce server software. Such software typically will be partitioned into a commerce server which is accessible to the Internet shopper and an administration server which is used to maintain the commerce server and which, for security reasons, must be kept inaccessible to all but system administrators. Security mechanisms used to date have not sufficiently protected the administration server from malicious attack. What is needed is a system and method for protecting the administration servers of systems used in Internet commerce from malicious attack.
The present invention is a secure commerce server system and method. A secure commerce server system includes a plurality of regions or burbs, including an internal burb and an external burb, a commerce server and an administration server. Processes and data objects associated with the administration server are bound to the internal burb. Processes and data objects associated with the commerce server are bound to the external burb. Processes bound to one burb cannot communicate directly to processes and data objects bound to other burbs. The administration server cannot be manipulated by a process bound to the external burb.
FIG. 1 is a representation of a system having an internal and external interface connected via two separate protocol stacks;
FIGS. 2a-d are representations of communication protocols;
FIG. 3 is a representation of one form of interprocessor communication which can be used in a system having a plurality of separate protocol stacks;
FIG. 4 is a more detailed representation of one embodiment of the form of interprocessor communication shown in FIG. 3;
FIG. 5 is an alternate embodiment of the system of FIG. 1, in which all communications between regions (burbs) pass through system space before being passed to another burb;
FIG. 6 is a flowchart illustrating the steps taken in securing compiled program code according to the present invention;
FIG. 7 is a flowchart illustrating the steps taken in applying network separation to compiled program code according to the present invention; and
FIG. 8 is a representation of a system built using the steps shown in FIGS. 6 and 7.

Various trademarks, including INTEL™, PENTIUM™ and UNIX™ are referred to herein. PENTIUM™ brand microprocessors are made by Intel and UNIX™ is the name of a particular type of computer operating system.
Computer systems which use a single communications protocol stack to handle communication between an internal and an external network are widely in use. This is the communication model used, for instance, in BSD 4.4. The problem with such a system is that once a process receives privileges within the system, it can use those privileges to access other network files. This can lead to a dangerous breach of network security. Two approaches can be used to beef up the security of such a system: type enforcement and network separation. Type enforcement adds an additional level of protection to the process of accessing files. Network separation divides a system into a set of independent regions. Through network separation a malicious attacker who gains control of one of the regions is prevented from being able to compromise processes executing in other regions. Type enforcement and network separation will be described next.
Type Enforcement
In “SYSTEM AND METHOD FOR PROVIDING SECURE INTERNETWORK SERVICES”, U.S. patent application Ser. No. 08/322078 filed Oct. 12, 1994, Boebert et al. describe a way of extending type enforcement protection to a computer system having both an internal private network and an external public network. In one embodiment of such a system, a secure computer is used to connect a private network having a plurality of workstations to a public network. A protocol package (such as TCP/IP) running on the secure computer implements a communications protocol used to communicate between each workstation and the secure computer. A Local Cryptography function can be integrated into the protocol package in order to protect and authenticate traffic on the private network.
Program code running on the secure computer is used to communicate through the private network to the workstation's protocol package. In one embodiment, different secure and cryptographic methods may be used when communicating with different entities on the Internet. In one embodiment, a tcp wrapper package operating within the Internet protocols is used to sit on the external, public network so that information about external probes can be logged. It is most likely that the open nature of the public network will favor the use of public-key cryptography in this module.
As noted above, in one embodiment the secure computer is an Intel PENTIUM™-based machine running a hardened form of the BSD386 UNIX operating system, and type enforcement checks are applied to program code resident in the memory of the secure computer. To accomplish this, system calls in the basic BSD386 kernel were modified so that type enforcement checks cannot be avoided. Certain other system calls were either disabled or had certain options disabled.
It should be noted that Type Enforcement works best as a supplement to the normal Unix permissions. That is, the Unix permissions are the first line of defense; when processes get past the Unix permissions, however, they run into the type enforcement checks.
In one embodiment, the secure computer has been configured under BSD386 to run in one of two states: administrative and operational. In the administrative state all network connections are disabled and the Server will only accept commands from a properly authenticated System Administrator accessing the system from the hard-wired administrative terminal. This feature prevents anyone other than the System Administrator from altering the security databases in the secure computer.
In the operational state the network connections are enabled and the Server will execute only software which has been compiled and installed as executable by an assured party.
The two states are reflected in two separate kernels. The administrative kernel is not subject to type enforcement. Instead, it is network isolated and accessible only to authorized personnel. This means that in administrative kernel mode, the secure computer cannot be seeded with malicious software by any but the people charged with system administration.
On the other hand, the operational kernel is subject to type enforcement. This means, for instance, that executable files stored in the memory of the secure computer cannot be executed without explicit execution privileges. In one such embodiment, executable files cannot be given execution privileges from within the operational kernel. Instead, the secure computer must enter the administrative kernel to grant execution privileges. This prevents execution of malicious software posted to the memory of the secure computer. Instead, only executables approved by operational administrators while in administrative kernel mode ever become executable within operational kernel mode of the secure computer. In one such embodiment, the administrative kernel can be entered only by a manual interrupt of the boot process to boot the administrative kernel, or by booting the secure computer from a floppy that has a pointer to the administrative kernel.
The flow of data between processes is limited to transfers through assured pipelines controlled, in one embodiment, by the following hardware-enforced access modes:
Read Only (R): Data values may be fetched from memory and used as inputs to operations, but may not be modified or used as program text.
Read Execute (RE): Data values may be fetched from memory and used as inputs to operations, and may also be used as program text, but may not be modified.
Read Write (RW): Data values can be fetched from memory and used as inputs to operations, and may also be stored back in modified form.
No Access: The data cannot be fetched from memory for any purpose, and it may not be modified.
These hardware-enforced accesses can be used to force data flowing from the internal private network to the Internet to go through a filter process, without any possibility that the filter is bypassed or that filtered data is tampered with by possibly vulnerable software on the Internet side of the filter.
The access a process has to a data object via type enforcement is defined by an entry in a central, protected data structure called the Domain Definition Table (DDT). A Domain name denotes an equivalence class of processes. Every process in execution has associated with it two Domain names which are used to control its interaction with objects and with other processes. Object Types are written as creator:subtype; an empty subtype field is denoted by '*' or whitespace.
Subtypes will not be shared; thus Mail:file means, in effect, "the files private to the Mail Domain." When objects are created they are automatically assigned the appropriate default subtype. Objects which are to be shared between Domains must have their subtype changed from the default to an explicit subtype.
Subtypes can be assigned one of three ways:
By having a default subtype assigned when the object is created by the operational kernel. If a subtype is changed to a default subtype, then the object becomes private.
By having a default or explicit subtype assigned administratively by the administrative kernel.
The default subtypes exec and gate are “static.” The operational kernel will not create any objects of those subtypes, change those subtypes into any other subtype, or change any other subtypes into a gate or exec.
The Domain/Type relationship is used to define the modes and consequences of accesses by processes to objects. The modes and consequences of accesses are defined by access attributes which are stored in the DDT database. The DDT database is "indexed" by three values:
The effective Domain of the process requesting the access or action.
The creator field of the object Type.
The subtype field of the object Type.
The result of “indexing” is the retrieval of a set of access attributes. The term “attribute” is used instead of “mode” because some of the attributes define immediate side effects. The selection of attributes was governed by the following considerations.
To constrain the modes of access which processes may exercise on objects.
To prevent the execution of any application software other than that which has been installed through the controlled administrative environment.
To enable the spoofing of attackers so that the attack response facilities can be used to trace them at the physical packet level. This required a more sophisticated response to illegal accesses than just shutting down the offending process.
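To make the three-value indexing described above concrete, here is a minimal C sketch of what a DDT access check along these lines might look like. The array layout, the constants, and the check_ddt( ) name (which appears later in the audit discussion) are illustrative assumptions, not the actual implementation:

#include <stdio.h>

#define NDOMAINS  8   /* number of process domains (assumed)  */
#define NSUBTYPES 8   /* number of object subtypes (assumed)  */

/* Access attribute bits, matching the permissions listed later. */
#define DDT_READ    0x01
#define DDT_WRITE   0x02
#define DDT_CREATE  0x04
#define DDT_DESTROY 0x08
#define DDT_EXECUTE 0x10
#define DDT_RENAME  0x20

/* The DDT: indexed by the effective domain of the requesting process,
 * the creator field of the object type, and the object subtype. */
static unsigned char ddt[NDOMAINS][NDOMAINS][NSUBTYPES];

/* Returns nonzero if the process may perform all requested accesses. */
static int check_ddt(int eff_domain, int creator, int subtype,
                     unsigned char wanted)
{
    unsigned char attrs = ddt[eff_domain][creator][subtype];
    return (attrs & wanted) == wanted;
}

int main(void)
{
    ddt[1][1][0] = DDT_READ | DDT_WRITE;            /* domain 1 owns its files */
    printf("%d\n", check_ddt(1, 1, 0, DDT_WRITE));  /* prints 1: allowed  */
    printf("%d\n", check_ddt(2, 1, 0, DDT_WRITE));  /* prints 0: refused  */
    return 0;
}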
Gating permits a process to temporarily become a member of another Domain. The "home" or permanent Domain of the process is called its real Domain and the temporary or assumed Domain is called the effective Domain.
Explicit gating is used when a looser control on the temporary Domain transition is appropriate, or when the "tying" of the gating to a specific executable would require excessive restructuring of existing code.
Certain kernel syscalls are restricted to processes executing out of privileged Domains. In one embodiment, two levels of checks are made. First, the normal BSD UNIX permissions are checked; if these permissions cause the operation to fail, the system call returns the normal error code. If the UNIX permissions are adequate, the type enforcement (TE) privileges are checked next, in addition to the UNIX permissions.
The following BSD system calls have been modified to properly implement type enforcement. The modified calls have been grouped into four groups for ease of explanation.
The second group of system calls that require modification are those that allow interaction with the computer's file system. These calls are modified in like manner: the Domains of the process remain unchanged, and in the event of a privilege violation the system call will raise an Alarm, will not honor the request, but will return success. The ktrace, ptrace and profil system calls are modified in like manner as well; all are modified to perform no function. Attempts to call them will raise an Alarm and will not honor the request. The ktrace and ptrace system calls will return EPERM, whereas the profil system call will return EFAULT.
The mprotect system call is modified to perform no function. Attempts to call it will raise an Alarm, will not honor the request, and will return EPERM.
The fourth group of system calls that require modification are those that relate processes to user ids. The setuid and seteuid and old.seteuid system calls are modified in like manner.
Network Separation
The goal of network separation is to provide an operating system kernel with support for multiple networking protocol stacks. A system 10 having such an operating system kernel is illustrated in FIG. 1. System 10 is split into separate regions, with a domain 16 and a protocol stack 12 assigned to each region. Each protocol stack (12.0, 12.1) has a fixed set of interfaces bound to it. For example, two or more Ethernet drivers can be connected to protocol stack 12.0.
A given socket will be bound to a single protocol stack 12 at creation time. Each protocol stack 12 will have its own independent set of data structures including routing information and protocol information. No data will pass between protocol stacks without going through proxy space and being sent back down another protocol stack by a proxy program 14. Proxy 14 acts as a go-between, therefore, for transfers between domains 16.0 and 16.1. No user applications will have direct access to either network.
The embodiments discussed will only cover common networking support, such as the media layer drivers like PPP, Ethernet, and SLIP and the TCP/IP suite of protocols. The BSD kernel supports other protocols such as CCITT/X.25 and XNS. These are not covered here, although it should be apparent that network separation can be extended to other protocols by dividing the protocol stacks associated with the protocol layers into multiple protocol stacks with the number of (and the name of) stacks being the same across all protocol suites.
The following terminology will be used throughout the document:
BSD The BSD/OS 2.0 Unix operating system as based upon the BSD 4.4 Lite distribution. In the case of the networking functionality the code is virtually identical across all 4.4 derived Unix systems (NetBSD1.0, FreeBSD 2.0, and BSD/OS 2.0). The only significant differences are the hardware device drivers in terms of how they autoconfig during boot, which ones are present, and their internal code. Their interface to the rest of the networking code is effectively identical.
BSD kernel The BSD/OS 2.0 kernel.
Burb The aggregation of a protocol stack with all the processes that can access that stack. Processes that can access a particular protocol stack are said to be bound to that protocol stack.
NBURBS The number of protocol stacks or network interfaces to which all networking processes and related entities are bound.
Kernel space The address and code space of the running kernel. This includes kernel data structures and functions internal to the kernel. Technically the kernel can access the memory of the current process as well, but “kernel space” generally means the memory (including code and data) that is private to the kernel and not accessible by any user process.
User space The address and code space of running processes that isn't kernel space. A running process will often execute inside of the kernel during system calls, but that is kernel space, since the kernel *always* runs within the context of the current process except during the few moments of the actual context switch—in a kernel without SMP support and kernel threads. In general, user space refers to code and data loaded from an executable and available to that process, not counting the kernel's private code and data structures.
Protocol stack The set of data structures and logical entities associated with the networking interfaces. This includes sockets, protocol drivers, and the media device drivers.
Link level & hardware drivers These are the (almost in some cases) bottom layer drivers that talk a particular physical and/or link level protocol (Ethernet, PPP, SLIP). They may be a hardware driver, in the case of most Ethernet drivers, or they may sit on top of yet other drivers, such as PPP on a TTY on the COM port driver or even a parallel port Ethernet driver on top of the LPT port driver. Most of these drivers have two layers, a generalized layer that handles the common parts of the link level protocol (Ethernet, PPP, etc.) and the hardware specific driver.
One of the goals of a secure operating system kernel is to allow dividing the network interfaces into distinct regions so that there is assurance that packets are never quietly passed through the kernel between those regions. In one embodiment, the following rules must be met to ensure secure control of packet transfer between regions:
A single user process can only send and receive information (packets) from one region at a time.
Once a process is bound to a region, it can only access that region.
Incoming packets can only go to processes that are in the region associated with the interface the packet arrived on.
Any data passing through the computing system to a different region must come into a user process and be handed off to a different process that has access to the other region, a proxy program, to be sent out again.
The reasons for these design requirements are fairly simple. The goal is that any information passing through the computer between different regions has to be passed through a predefined, or assured, pipeline.
There are three ways of achieving this goal:
1) Packet Filtering Packet filtering would be done using conventional approaches similar to current firewalls and screening routers. This could be enhanced with additional filters based on interface rather than address to prevent certain types of spoofing.
2) Packet Separation A given message, incoming or outgoing, would be assigned a Type based on the interface it arrived on or the socket it was sent from. The packet would then be thrown out at the top or bottom level if that Type didn't match the Type of the interface it was being sent on.
3) Separate Protocol Stacks The protocol stacks would be separated into multiple instances with a given interface or socket existing on only one.
In arriving at our current approach, the packet filtering solution was quickly discarded. Although you could eliminate several kinds of spoofing, many of them would still remain. For filtering to work at all, IP packet forwarding must be enabled, but then you are vulnerable to all sorts of nasty things sneaking through the computer system using the normal sorts of attacks with various forms of spoofing, piggybacking, and just plain misconfiguration of complex filtering expressions.
The packet typing approach had merit, but required extensive code changes as all of the various layers would have to be rewritten to handle additional fields in all of the various structures (mbufs, sockets, etc.). The other problem with the packet typing approach was how to deal with kernel internally generated messages like ICMP messages.
The separate protocol stack approach was selected because it gives a very clear split of the regions making assurance easier. It also requires surprisingly few code changes. Rather than change the existing data structures, they are simply replicated as needed for the various regions. The initial reference has to be fixed, but most references are done by cached pointers within the various structures, so many of the functions and layers of the protocol stack require no changes at all. For instance, the hardware drivers require absolutely no changes. The socket code requires only a minor change during the socket instantiation; subsequent references are done using the already fixed pointer to the particular protocol driver. This also gives a performance win because basically no lookups are done dynamically, only an extra level of indirection during initialization, but not subsequently.
The general structure of the design chosen is as follows.
A name was needed for each of these protocol stacks, and for the processes and related entities bound to them, the name that was chosen was burb.
Second, the decision whether to replicate and subdivide into separate burbs is made on a protocol family by protocol family basis. Generally all external supported networking protocol families will be divided. The internal Unix domain socket family won't be replicated. This is important because the Unix Domain sockets (AF_UNIX) will continue to exist in one common region for the entire system. This feature becomes important for the pipeline structure given later.
To identify the network/socket access a process is given a burb ID that says which burb it is in. In one embodiment, the burb ID is an ordinal number.
The general picture of the protocol stacks in a normal BSD kernel is shown in FIG. 2a. An expanded view of this picture to include the various IP protocol drivers is shown in FIG. 2b.
When a socket is actually created, it is explicitly, during the initial socket( ) call 20, bound to one of the protocol drivers 22 via a pair of pointers in the socket structure, the so_proto member (a pointer to a struct protosw), and the so_pcb member. The protosw structure is unique per protocol driver, but there is only one per protocol driver. The so_pcb pointer is an opaque pointer to a socket specific structure that has the protocol specific information for a socket (such as the TCP state information for a TCP socket).
At the bottom level, the generic media drivers 24 (Ethernet, PPP, SLIP) put incoming packets on a per protocol family (IP, XNS, CCITT, etc.) queue, ipintrq in the case of IP.
The picture then for a created socket becomes something like the system of FIG. 2c.
So, the fundamental concept of the kernel portion of the design is that the protocol stacks are replicated N times (with N being a fixed small constant, typically 2-10), with all protocol level and other shared data structures being replicated. In practice, each burb/protocol stack has a name that maps onto a number, starting at 0, that is used as an index. Each static structure is converted into an array. This also makes it easy to change the number of burbs supported. FIG. 2d illustrates a system having two burbs and limited to just IP. A generic system having N burbs is shown in FIG. 3. A specific TCP/IP system having N burbs is shown in FIG. 4.
Once the socket is created then the picture becomes much simpler again, because the individual data structures are then bound explicitly and no lookup or search stage occurs again. All activity then occurs within one of the burbs (e.g., TCP[0]).
The advantage of duplicating the protocol stacks in their entirety is that the number of data structures to be duplicated, and the points at which they are referenced, is manageable, much more so than making a given datum carry a type and/or domain and having to "route" within the kernel based on that at every layer. As mentioned before, the basic approach is that all shared data structures are replicated N times, one for each burb. These data structures are a reasonably finite list. They include:
Protocol Domain List and Tables (domains, protosw)—When a socket is created, a family and a type is specified, such as socket (AF_INET, SOCK_STREAM). The socket creation routine first looks up the top level protocol in the family list, ala: AF_INET, which returns a second level table of protosw structures that has the individual protocol drivers. The second level table is in the form of XXX domain, such as inetdomain. The first level list is called “domains”.
The design replicates only the tables (when that family supports separate burbs, such as IP, route). In the case of things like the AF_UNIX domain, each list will point at the same table. The lookup will then return a protosw structure unique to the instance of the protocol driver, such as inetsw[proto][burb], instead of inetsw[proto]. The routing structure will be replicated into routesw [NBURBS].
Protocol Interrupt Queues (ipintrq)—External networking protocols each have an input queue for incoming packets. Each queue is a simple mbuf list. So queues for protocols will be replicated, becoming things like ipintrq [NBURBS] instead of just ipintrq.
IP Fragmentation Queues (ipq)—Similar to the IP interrupt queue, there is a reassembly queue, ipq, to reassemble fragmented IP packets. All network interfaces have a maximum transmission unit (MTU). Outgoing packets larger than the MTU are fragmented by the IP layer into smaller packets for transmission. When the fragmented packets arrive at the destination, they are placed on a reassembly queue waiting for reassembly. This queue is also replicated into ipq [NBURBS].
Interfaces (ifnet)—Each interface, including the loopback will be bound at boot time to a given burb by a one time system call. A burb field will be added to the ifnet structure, ifnet.if_burb, which is unique per interface so that it knows which burb it's in and which interrupt queues to send data to. The binding will also be one shot, so that once an interface is bound, it can't be rebound to a different burb.
Other data structures are already unique per interface. The other item to be replicated is the list of interfaces. There will be one list per interface. The interface binding call will set the burb of the interface and tell it which interface list to put itself on during initialization.
Loopback Interface (loif)—The loopback interface IP address will have to be unique per burb. In other words, the recommended loopback address 127.0.0.1 cannot be shared between the burbs. The loopback interface data structure will be replicated, loif [NBURBS]. Loopback addresses will be subnetted, with netmask 255.255.0.0, to give each interface a unique IP address, 127.burb.0.1.
Protocol Control Blocks (tcb, udb, rawcb)—Each protocol maintains its own list of protocol control blocks, which hold various pieces of information required for the socket operations. These lists will be replicated and maintained on a burb basis: tcb [NBURBS] for TCP, udb [NBURBS] for UDP, and rawcb [NBURBS] for routing sockets.
Routing Tables (rt_tables)—There is a single master routing table, rt_tables [family], that is a linked list of routes. This table will be separated into one table per burb, rt_tables [family] [NBURBS]. There will be no master routing table.
These routing tables will be used as lookup tables so that the incoming end of a proxy knows which outgoing proxy to connect to. For outgoing packets, the search always starts with the local table. If a route cannot be found, then the system searches other tables. If a route is found on a non-local table, then the networking code will hand the packets off to the proxy processes.
ARP Tables and Queues (llinfo_arp, arpintrq)—There are no data structures at the common link level (Ethernet, PPP, SLIP) except the ARP table and the ARP interrupt queue, llinfo_arp and arpintrq. This again is a list of interfaces with just the information needed for ARP and a queue for incoming ARP packets. Both of these data structures will be replicated, llinfo_arp [NBURBS] and arpintrq [NBURBS].
Sockets (struct socket)—The data structures for a socket are created dynamically for each socket, so no fixed replication needs to be done. The alteration of the protocol driver lookup code means that a socket will automatically get the pointer to the protocol driver in the correct burb without any changes to the socket code except the domain list search routines. A burb field will be added to the socket structure, socket.so_burb, so that it knows which burb it's in.
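As a rough illustration of this replication pattern, consider the following C sketch. The structure members here are stand-in stubs invented for the example; the point is only that each formerly-global structure becomes an array indexed by burb ID, including the subnetted 127.burb.0.1 loopback addresses mentioned above:

#include <stdio.h>

#define NBURBS 2   /* fixed small constant, typically 2-10 */

/* Stand-in structures; real kernel structures are far richer. */
struct ifqueue { int len; };      /* protocol input queue        */
struct ipq     { int nfrags; };   /* IP reassembly queue         */
struct inpcb   { int refs; };     /* protocol control block list */

/* Formerly-global data structures, replicated once per burb. */
static struct ifqueue ipintrq[NBURBS];
static struct ipq     ipq[NBURBS];
static struct inpcb   tcb[NBURBS], udb[NBURBS];

int main(void)
{
    /* All activity for burb b stays within index b of each array. */
    for (int b = 0; b < NBURBS; b++)
        printf("burb %d: ipintrq=%d ipq=%d tcb=%d udb=%d lo=127.%d.0.1\n",
               b, ipintrq[b].len, ipq[b].nfrags,
               tcb[b].refs, udb[b].refs, b);
    return 0;
}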
There are some other issues. A message doesn't normally have an intrinsic burb, but it will be added as a debugging aid during development. Each major data structure that is burb dependent (the input queues, the protosw structures, etc.) will always have a field giving its burb. So at various points, these fields can be compared against during development as a debugging aid. Critical hand off stages (like socket initialization) will have sanity checks that are done all the time because the overhead is negligible.
The kernel is implemented to respond to connection requests that are not intended for the computer system itself. For instance, when an internal client requests a connection to an external Internet host, the kernel will answer the request as if it were the external host. Meanwhile, it will attempt to set up the connection with the external host. This process occurs transparently to the internal client. Such support allows the secure computer to answer in a secure manner connections intended for outside boxes.
Each burb will have its own routing tables. The general algorithm for each incoming packet is: see if it is addressed to one of the local addresses, and if so accept it (this is standard behavior). If not, route it within the burb. If that fails, then see if there is a route in another burb (using other burbs' routing tables). If not, then toss the packet. If so, accept the packet and pass it up the stack, which eventually should arrive at the proxies.
The existing TCP and UDP drivers will happily accept the packet and pass it onto a socket if it exists. If it's to an already established TCP connection or a UDP socket, it will get there normally. If it's a new incoming TCP connection that is accepted, it will create the socket, and fill in the local address from the incoming packet. The process doing the accept ( ) can call getsockname ( ) to get that address. In other words, the UDP and TCP drivers trust the lower layer IP driver that the packet is for the machine, so no change to those layers need to be done at all to support this, only the secondary routing lookup in the IP layer.
This functionality means a couple of things. For this to work, there must still be unique IP addresses all around, so no hacks are done to support hidden IP addresses that conflict with external ones. Also, there can only be one default route on the machine. If there were one per burb, the burb routing lookup wouldn't know where to send it. This isn't hard. Generally the default route will be for one of the outside burbs. Other burbs will simply have to have complete routing for the internal network. This isn't a problem as it's really already the case. A host with a complex internal network already has to have proper routing for all of those networks if it uses a default route.
This also means there can be no duplicate routes in the replicated routing tables. If there were duplicate routes, the routing lookup would always return the first route found.
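A hedged C sketch of that per-burb routing decision follows. Every helper here is a stub invented for illustration; only the order of the lookups comes from the algorithm described above:

#include <stdio.h>

#define NBURBS 2

/* Stubbed helpers: a real kernel would consult actual routing tables. */
static int is_local_address(unsigned addr)       { return addr == 1; }
static int route_lookup(int burb, unsigned addr) { return burb == 0 && addr == 2; }

/* Returns the burb the packet is handled in, or -1 to drop it. */
static int route_incoming(unsigned dst, int inburb)
{
    if (is_local_address(dst))
        return inburb;                 /* for this host: accept normally */
    if (route_lookup(inburb, dst))
        return inburb;                 /* routable within the same burb  */
    for (int b = 0; b < NBURBS; b++)   /* search other burbs' tables;    */
        if (b != inburb && route_lookup(b, dst))
            return inburb;             /* accept, pass up to the proxies */
    return -1;                         /* no route anywhere: toss it     */
}

int main(void)
{
    printf("%d\n", route_incoming(2, 1));  /* burb 0 has the route: accepted in burb 1, bound for a proxy */
    printf("%d\n", route_incoming(9, 1));  /* no route anywhere: -1, dropped */
    return 0;
}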
Network Separation Using Type Enforcement
The degree to which network separation protects a network from malicious attack is enhanced through the use of type enforcement.
As noted above, in a type enforcement scheme processes and files are assigned to a domain and subtype, domain:subtype. Process-to-file, or domain-to-domain:subtype, access is controlled by the Domain Definition Table (DDT), which specifies the permissions:
create—ddt_create
write—ddt_write
read—ddt_read
destroy—ddt_destroy
execute—ddt_execute
rename—ddt_rename
Subtypes allow a finer division of file types (file=file, diry=directory, sock=socket, exec=executable, etc.).
Interactions between the processes, or domain-to-domain, such as signals, are enforced by the Domain Interaction Table (DIT), that specifies the allowed signals (sHUP, sABRT, sJob, etc.)
An instantiated domain is one that has several instances with identical access rights to itself and other domains. Take two instantiated domains, www0 and www1. They both have identical access to all other domains and subtypes. They also have equivalent access to their instances and no access to other instances. In the domain specification language, these domains are specified with ‘X’ as the last character, where X represents the burb ID. So if you say:
wwwX:
has read access to www#:conf
has write access to Slog:sock
the DDT/DIT is created like:
www0:
has ddt_read access to www0:conf
has ddt_write access to Slog:sock
www1:
has ddt_read access to www1:conf
has ddt_write access to Slog:sock
If another domain is given access to a general instantiated type, then it gets equal access to all instances, for instance:
Admn:
has write, read, create access to www#:conf
the DDT/DIT is created like:
Admn:
has ddt_write, ddt_read, ddt_create access to www0:conf
has ddt_write, ddt_read, ddt_create access to www1:conf
There are a set of special domains that are used to actually grant access to the networks themselves. These domains are in the form of Protocol:port. Protocol is a domain name corresponding to the protocol type. Current domains include tcp (TCP stream sockets), udp (UDP datagram sockets), and ipo (raw IP sockets).
For sockets that have port numbers, such as tcp and udp, the subtype is an ASCII-ish number that is the port to which binding is allowed. For instance, the ftp0 domain would have access to ftp0:0020 and ftp0:0021 to give it access to the ftp control and data ports (20 and 21).
The access given corresponds to the directionality of communication. There must be ddt_create privilege to create the socket at all. For datagram sockets, ddt_read gives the ability to read ( ) or recvmsg ( ), and ddt_write gives access to write ( ) and sendmsg ( ).
With stream sockets, ddt_read gives access to bind ( ), listen ( ), and accept ( ) so that the process can receive an incoming connection. Ddt_write gives access to bind ( ) and connect ( ) so that the process can initiate an outgoing connection.
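For illustration, here is a small C sketch of that directionality rule: the receiving calls (bind/listen/accept) require ddt_read on the protocol:port subtype, while the initiating calls (bind/connect) require ddt_write. The check_ddt_net( ) name appears in the audit discussion below; its signature and everything else here are assumptions:

#include <stdio.h>

#define DDT_READ  0x01   /* bind()/listen()/accept(): receive side */
#define DDT_WRITE 0x02   /* bind()/connect(): initiate side        */

/* Stub: look up the process's access bits for a protocol:port pair,
 * e.g. tcp0:0021. A real kernel would consult the DDT database. */
static unsigned char check_ddt_net(int domain, int port)
{
    return (domain == 1 && port == 21) ? DDT_READ : 0;
}

static int can_bind(int domain, int port)     /* receive a connection  */
{
    return (check_ddt_net(domain, port) & DDT_READ) != 0;
}

static int can_connect(int domain, int port)  /* initiate a connection */
{
    return (check_ddt_net(domain, port) & DDT_WRITE) != 0;
}

int main(void)
{
    /* Domain 1 may listen on port 21, but not connect out to it. */
    printf("bind: %d connect: %d\n", can_bind(1, 21), can_connect(1, 21));
    return 0;
}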
The following subtypes give access to random and reserved ports:
rall—bind to all reserved ports explicitly
rany—bind to any arbitrary reserved ports
nall—bind to all non-reserved ports explicitly
nany—bind to any arbitrary non-reserved ports
For portless types, the subtype grants access to a specific type of socket. The types currently include ‘icmp’, only valid under the IP domain (ipoX:icmp), which gives access to ICMP sockets. Some more network subtypes:
sock—flag access to a network type
conf—access to network configuration capability
Subtype ‘sock’ is used during the socket creation so the kernel can verify access to a given network domain. The port isn't known at this point, and an exhaustive search of the DIT would be expensive, so any domain that has access to any networking domain has to be given access to the ‘sock’ subtype as well.
Network configuration programs such as ifconfig, nss, and sysctl must have access to “network_domain:conf”. For example: tcpX:conf allows the Unix commands ifconfig to configure network interfaces and sysctl to set networking configurables.
A network interface is bound to a burb at boot time. A given socket is bound to a particular burb at the socket creation time. The rules for this are pretty simple.
An example of a network separated system 70 implemented with Type Enforcement™ protection is shown in FIG. 5. In FIG. 5, ftp0 and ftp1 are two instantiated domains (72.0 and 72.1, respectively). Each instantiated domain has access to its own protocol stack (74.0 and 74.1). Transfers from one domain to another are made through a ftp proxy 75 via calls to kernel 76.
Processes running in an “instantiated domain” such as ftp0, intrinsically have access to one and only one burb, the burb that matches the numeric part of the domain name. In these cases, the system just binds that socket to the appropriate burb. It should be noted that a process is not actually bound to a burb until it tries to make a direct or indirect socket call to that burb.
Programs running within other domains either have no access or access to all burbs. In these cases, the caller must specify which burb they want. If they don't, it's an error. If they do specify a burb and they don't have access privileges to that burb, it's also an error.
This specification is implemented via a new system call, socketburb ( ). Basically, socketburb ( ) replaces the old socket ( ) system call. It performs the same function and takes one additional argument, burbid. The old socket ( ) still exists, but its guts have been replaced by a call to socketburb ( ).
The syntax of such a call could be as follows:

int socketburb (int domain, int type, int protocol, unsigned long burbid);
Ported applications and new programs will use the new socketburb ( ). Existing programs, in instantiated domains, using socket ( ) wouldn't have to be changed and still work. Burbification of socket ( ) calls to socketburb ( ) should use the library call domain_to_burb ( ) to convert and pass the burbid argument:
sock1=socket (AF_INET, SOCK_STREAM, 0);
sock2=socket (AF_INET, SOCK_RAW, IPPROTO_ICMP);
would become:
int domain_to_burb (long domain);
sock1=socketburb (AF_INET, SOCK_STREAM, 0, domain_to_burb (domain));
sock2=socketburb (AF_INET, SOCK_RAW, IPPROTO_ICMP, domain_to_burb (domain));
Interfaces are bound to a particular burb. This is implemented by a few new socket ioctls( )s which set and get the burb id of an interface.
Access to a socket for controlling interfaces and other shared parameters is controlled by access to the various conf subtypes. Parameters specific to a protocol, like TCP tunables, are controlled by access to the protocol specific type, such as ‘tcp0:conf’. IP layer things like the IP address and burbness of an interface are controlled by the types like ‘ip0:conf’.
The following interfaces have been modified or added to support type enforcement, burbification, and transparent proxies. The objectives of these calls have not changed, although access to these calls has been made more restricted. Some new error codes, errno, have been added in the case of type enforcement failures. Only the modifications are described below.
socket ( )—is used by a process to create a socket that is bound to a particular burb. If socket ( ) is called, the socket is either bound implicitly to the burb ID that matches the number of the instantiated domain, or the call fails.
socketburb ( )—a new system call used to create a socket in a given burb. The burb ID is passed as an additional argument to the socketburb ( ) call:
int socket (int family, int type, int protocol);
int socketburb (int family, int type, int protocol, unsigned long burb);
int domain_to_burb (long domain);
sock1=socket (AF_INET, SOCK_STREAM, 0);
sock2=socketburb (AF_INET, SOCK_RAW, IPPROTO_ICMP, domain_to_burb (domain));
These new error codes, errno, are returned by socket ( ) and socketburb ( ):
[EDBOM] returned and an audit generated if the calling process is not in a burb-bound domain.
[EPERM] returned and an audit generated if the calling process doesn't have ddt_create privilege.
bind ( )—assigns a local address and port number to a socket. The bind ( ) call has been extended to allow a process to set the local address arbitrarily; to do so, it needs ddt_rename access to "ipoX:spuf". Currently, no domains have this spoofing capability, i.e., no domains have ddt_rename access to "ipoX:spuf". The calling process also must have the following privileges to successfully bind:
ddt_read-tcpX:port
ddt_read-udpX:port
ddt_read-ipoX:port, for raw IP
has_rootness-reserved ports (0-1023)
[EADDRNOTAVAIL] returned and an audit generated if the calling process doesn't have ddt_rename privilege.
[EPERM] returned and an audit generated if the calling process doesn't have ddt_read or has_rootness privilege.
connect ( )—initiates a connection from a socket to a specified address and port number. The calling process must have these privileges to the listed domain:subtypes to successfully connect:
ddt_write-tcpX:port
ddt_write-udpX:port
[EPERM] returned and an audit generated if the calling process doesn't have ddt_write privilege.
The ioctl ( ) call is used to manipulate the underlying device parameters. Several new socket ioctl ( ) commands have been added. Most will work with a “struct ifreq” argument and be handled similarly to the other set/get ioctls of an interface. If an ordinal value is being set or retrieved such as the burb id, then it will be passed in the ifr_metric field.
The following ioctl ( ) commands now require is_startup or is_admin privilege to execute: SIOCSIFFLAGS, SIOCSIFMETRIC, SIOCADDMULTI, and SIOCDELMULTI. EPERM is returned and an audit is generated if the calling process doesn't have the correct privilege. These new ioctl ( ) commands are added to support burbness in the secure computer:
SIOCGIFBURB/SIOCSIFBURB—get and set the burb ID of a network interface. The burb ID is passed in the ifr_metric field. To set the interface's burb ID, the calling process must have is_startup or is_admin privilege. A sample call would look like:
int ioctl (int socket_descriptor, unsigned long command, char *argp);

struct ifreq ifr;
int burb, so;

ifr.ifr_metric = burb;
if (ioctl (so, SIOCSIFBURB, (caddr_t) &ifr) < 0)
    perror ("ioctl (SIOCSIFBURB)");
if (ioctl (so, SIOCGIFBURB, (caddr_t) &ifr) < 0)
    perror ("ioctl (SIOCGIFBURB)");
[EPERM] returned and an audit generated if the calling process doesn't have is_startup or is_admin privilege.
[EINVAL] returned and an audit generated if the calling process doesn't have access to subtype “:conf”.
SIOCGSOCKBURB—get the burb that a socket is bound to. This is mainly a debugging aid. The burb id is passed in the ifr_metric field.
SIOCGMSGBURB—get the burb of the pending message. It should always be the same as the burb of the socket. This is mainly a debugging aid. The burb id is passed in the ifr_metric field. A sample call would look like:
struct ifreq ifr;
int burb, so;
if (ioctl (so, SIOCGMSGBURB, (caddr_t) &ifr) < 0)
    perror ("ioctl (SIOCGMSGBURB)");
burb=ifr.ifr_metric;
[ENOENT] returned and an audit generated if the socket cannot be read.
SIOCGMSGIFNAME—get the name of the interface that the pending message arrived on. It is used as a debugging aid or to filter on the basis of the specific interface of the message. The interface name is returned in the ifr_name field. A sample call would look like:
struct ifreq ifr;
int so;
if (ioctl (so, SIOCGMSGIFNAME, (caddr_t) &ifr) < 0)
    perror ("ioctl (SIOCGMSGIFNAME)");
printf ("interface: %s", ifr.ifr_name);
[ENOENT] returned and an audit generated if the socket cannot be read.
SIOCGSUBURBRT—returns a burb ID for a given IP address. The kernel searches each routing table and returns the first route found. The IP address is passed in the ifr_addr field. The burb ID is returned in the ifr_metric field on success. Since there can be more than one burb, this command is used by an incoming proxy to find which outgoing proxy to connect to. A sample call would look like:
struct ifreq ifr;
int so;
ifr.ifr_addr=tempaddr;
if (ioctl (so, SIOCGSUBURBRT, (caddr_t) &ifr) < 0)
    perror ("ioctl (SIOCGSUBURBRT)");
printf ("route %d", ifr.ifr_metric);
[EHOSTUNREACH] returned and an audit generated if no route can be found for the given address.
The getsockopt ( ) and setsockopt ( ) calls manipulate the options associated with a socket. The following socket options have been added:
#include <sys/types.h>
#include <sys/socket.h>
int getsockopt (int so, int level, int optname, void *optval, int *optlen);
int setsockopt (int so, int level, int optname, const void *optval, int optlen);
SO_STATE—returns the state of the socket, the so_state field of the struct socket. This is used by applications that wish to be more intelligent in their handling of a socket depending on its state.
SO_SOCKBURB—returns the burb id of the socket, similar to ioctl (SIOCGSOCKBURB). A sample call would look like:
int value, length, error;
length=sizeof (value);
if ((error = getsockopt (sockfd, SOL_SOCKET, SO_SOCKBURB, &value, &length)) != 0)
    return error;
return value;
SO_MSGBURB—returns the burb id of the pending message, similar to ioctl (SIOCGMSGBURB) via an unsigned long.
[ENOENT] returned and an audit generated if the socket cannot be read.
SO_MSGIFNAME—returns the name of the interface that the pending message arrived on, similar to ioctl (SIOCGMSGIFNAME), via a character string.
[ENOENT] returned and an audit generated if the socket cannot be read.
Most of the kernel audit records will be generated by type enforcement checks such as check_ddt ( ), check_dit ( ), and check_ddt_net ( ), and by user privilege checks such as psuser ( ). These audits are logged to /var/log/audit.asc and /var/log/audit.te.asc.
Network audit is another type of kernel audit. These are logged to /var/log/audit.asc and /var/log/audit.attack.asc. Currently, network audits are generated for ICMP, TCP, UDP, and IP packets.
ICMP—ICMP messages are used to communicate error and administrative messages between systems. ICMP audits are the only network audits that are configurable. This is implemented via the Unix command sysctl and the field burb.X.net.inet.ip.icmpaudit, where X is the burb ID. The default state of the secure computer is level 1: audit only ICMP redirects. ICMP audits can be configured at three levels:
0 = no ICMP audits
1 = ICMP redirects only (default)
2 = all ICMP messages, except echo and reply
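A sample configuration command (the burb id 0 here is purely illustrative) would look like: sysctl -w burb.0.net.inet.ip.icmpaudit=2.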
IP—if the source-routed packets option is disabled (0), the kernel will generate a “source-routed packets dropped” audit for each source-routed packet. The source-routed packet option is configurable via the sysctl command, by disabling or enabling the fields burb.X.net.inet.ip.forwarding and burb.X.net.inet.ip.forwsrcrt. By default, both of these fields are disabled, so the secure computer will not forward any source-routed packets.
TCP—the kernel will generate an audit for any TCP attempts to synchronize sequence numbers without a valid protocol control block. These types of packets are viewed as hostile probing attempts.
UDP—similar to TCP, the kernel will also generate a probing attempt audit for any UDP attempts to connect without a valid protocol control block.
There are a number of system utilities that configure or retrieve the network states. These utilities are also modified to support burbness.
arp—displays and modifies the Internet-to-Ethernet address resolution. The arp display and command line interface will remain the same. Only the internal implementation will be modified to handle the replicated arp tables.
ifconfig—is used to configure network interface parameters. It is modified to accept a “burb id” entry so that it can be used to set the burb of an interface. It will also print the burb of an interface as part of its normal output. Example: ifconfig ef0 inet 172.17.128.79 netmask 255.255.0.0 burb 0.
inetd—is the internet-service daemon that runs at boot time and listens for connections on certain internet sockets. This daemon will be replaced by the Network Services Sentry, NSS. NSS will be “burb aware” and able to start network services in the appropriate domain. Resident servers will be launched from a similar master utility, such as /etc/rc, so that they run in the correct burb.
netstat—formats and displays various network-related status and data structures. The formatting has been modified so that when appropriate the display includes a burb ID column in its output.
sysctl—is used to retrieve kernel state and allows processes with appropriate privilege to set kernel state. Network states are replicated appropriately into NBURBS.
All services supported on the secure computer, such as telnet and ftp, pass their data through a set of proxies (see, e.g., ftp proxy 75 in FIG. 5). There is one proxy for each type of service: an http proxy for web, a telnet proxy for telnet, and so on. The proxy will accept() the connection or recvmsg() the packet and look at the incoming address, either in the message structure for UDP or via getsockname() for TCP. It will then find out which burb has a route to that destination, via ioctl(SIOCGSUBURBRT), and establish a socket connection in the outgoing burb.
That connection will be via a Unix domain socket. Each service will have a proxy directory in /var/run, such as /var/run/telnetp/. In that directory each outgoing proxy will have already opened a domain socket. These sockets exist in the file namespace, so they already have access control via the domain and type of the socket node. Thus, the burb functionality guarantees that a given process can only access one burb, and the type enforcement on the socket guarantees that process can't cheat and access another service's proxies. The types on the sockets can give a finer control within a service if needed.
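The following sketch illustrates how an incoming proxy might combine these two steps: looking up the outgoing burb with ioctl(SIOCGSUBURBRT), then connecting to that burb's pre-opened domain socket. It is illustrative only; the per-burb socket naming convention (/var/run/telnetp/burbN) and the minimal error handling are assumptions, not part of the specification.
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <sys/un.h>
#include <net/if.h>
#include <stdio.h>
#include <unistd.h>

/* Ask the kernel which burb has a route to the destination address. */
int route_to_burb(int so, struct sockaddr dest)
{
    struct ifreq ifr;
    ifr.ifr_addr = dest;                       /* destination IP address */
    if (ioctl(so, SIOCGSUBURBRT, (caddr_t)&ifr) < 0) {
        perror("ioctl (SIOCGSUBURBRT)");
        return -1;
    }
    return ifr.ifr_metric;                     /* burb id of the first route found */
}

/* Connect to the outgoing proxy's pre-opened Unix domain socket. */
int connect_outgoing_proxy(int burb)
{
    struct sockaddr_un sun;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    sun.sun_family = AF_UNIX;
    /* Hypothetical naming convention: one domain socket per burb. */
    snprintf(sun.sun_path, sizeof(sun.sun_path), "/var/run/telnetp/burb%d", burb);
    if (connect(fd, (struct sockaddr *)&sun, sizeof(sun)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}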
The following example shows a telnet proxy servicing a telnet connection from the external network, Net1, into the internal network, Net0. Here, /etc/rc is the boot-up script, telnetd is the telnet daemon, and nss is the Network Services Sentry.
Packets can be filtered as part of the process. Filtering could go before the proxy, it could go after the proxy, or it could go inside the proxy.
Some things, like ftp, are more complex. The proxies have to watch for PORT and PASV requests and set up the data channel and proxy it. Additional channels can be either set up in advance with a standard protocol, or created on the fly by having proxies set up a control channel and bind additional separate domain sockets.
UDP proxies are also more complex. The proxies may have to maintain state information about requests, so that they know which request goes where. A UDP (or other datagram) proxy can work two ways. It can either have a single proxy per burb that just forwards and filters packets one at a time (in which case the connecting Domain socket is a datagram socket as well), or it can create a proxy for each “pseudo” connection and track state to optimize things (like sending acks, etc.). In this case there could be a pair of proxies on the fly that created a stream connection between themselves. They could be routed by port (for protocols that have a unique port and each end negotiated on the fly, like gopher or DNS), otherwise a single proxy still has to handle incoming packets and dispatch them.
Which approach is to be used depends on a few things. If address hiding and masquerading is done, state must be maintained. If not, then simple packet forwarding can be done. Some protocols, like NFS (if ever done) will require extensive state. Further information on network separation can be found in “SYSTEM AND METHOD FOR ACHIEVING NETWORK SEPARATION”, U.S. patent application Ser. No. 08/599,232 by Gooderum et al., filed Feb. 9, 1996, the details of which are hereby incorporated by reference.
Type Enforcement Protection of Compiled Program Code
Type enforcement can be used to extend the access protections inherent to a particular program, without access to the source code for that program. An overview of the process used to install any pre-compiled application binary (i.e. the program) onto the secure computer system is shown in FIG. 6. The first step (100) is to install the binary to a location from which it can be executed. Some (but not all) examples of how this can be accomplished are: (a) copying from the installation floppy disk(s) to the internal hard drive, (b) copying from the installation cdrom to the internal hard drive, (c) mounting external media containing the application binary, and (d) copying the application binary via the network to the internal hard drive.
Once the application binary is accessible for execution, the secure computer system is placed in a different state than the fully secure operational state. This state is called the development state, since in best practice (i.e. for highest security) this state is only available to developers. The development state has all of the type enforcement checks that the operational state has; however, the consequences of a violation are different. In operational state, a type enforcement violation results in denying the requested access. In development state, a type enforcement violation results in allowing the requested access. In both the operational and development state, a log entry is produced which records the type enforcement violation. This logging capability is needed to determine which type enforcement violations the program is generating.
It should be noted that the system does not have to be placed in development state to perform this procedure. All that is absolutely required is the logging capability. However, by placing the system in development state, the time and effort required to perform this procedure is greatly reduced. In practice, the procedure detailed in FIG. 6 is followed in the operational state only when an error was made when performing the procedure under the development state.
At step 102 the program is executed. Because this program was developed using non-type enforcement techniques, it will make assumptions about its execution environment that will not be true under the type enforcement execution environment. These faulty assumptions will manifest themselves as type enforcement violations. Some sample faulty assumptions that the program can make include (but are not limited to): (a) the program can create files in the system-wide temporary file area, and (b) the program can read files that have Unix “global-read” permissions set to true. These faulty assumptions are a direct consequence of the program being non-type-enforcement aware: the program is unaware that, in addition to the Unix security restrictions that it is expecting, there are type enforcement security restrictions. Every type enforcement violation must be successfully dealt with in order for the program to function properly while under the operational state.
There are three general ways to successfully deal with a type enforcement violation. They are: (1) ignore the violation, (2) remove the request that generated the violation, (3) grant additional permission(s) to remove the violation. The criteria for choosing one branch over another branch are functionality and security. The less critical the functionality, the more likely that the violation can be ignored or the request removed. The more critical the functionality, the more likely that additional permission(s) will be granted.
Some violations can be safely ignored. For example, some programs attempt to determine how busy the system is, in order to behave differently under different conditions. If, however, the program is unable to determine how busy the system is (because of a type enforcement violation), the program will simply assume that the system is not busy and continue. For some programs this is an acceptable response, and requires no further action; in these cases (108), type enforcement violations can be ignored.
Some violations can be removed by changing the environment that the program is running under. For example, some programs attempt to change their effective user identification after some resource has been acquired. The most common situation of this kind is when the program runs as “root” long enough to open a file, but then changes to “wwwdaemon” immediately thereafter. Under type enforcement, this ability to change to another user will result in a violation under the default operational state. In cases such as these, it is more security effective to remove the need for the program to change to another user by following these steps:
(1) start the process as user “wwwdaemon” directly (thus, the program will no longer require the ability to change to another user);
(2) set the Unix permissions on the file such that it can be read by user “wwwdaemon”; and
(3) set the type enforcement permissions on the file such that it can be read by the domain in which the process started in step (1) is executing.
So, in this example, the original type enforcement violation is removed at 110 by changing the environment external to the program. Note that in a non-type enforcement system, step (2) results in a security vulnerability. It is only through step (3) that the security vulnerability is closed.
In the above example, one can think of the original type enforcement violation as being transformed into another type enforcement violation. The ability to “change user” was transformed into the ability to “read a file”.
Some violations can be removed by granting the program appropriate type enforcement permission(s), so that what was a violation now is a granted permission. Once it is determined that the violation can neither be ignored nor transformed, this action is performed (112). There are three general kinds of type enforcement violations. They are: (1) File Access, (2) Process-to-Process Communication, and (3) System Environment Access. Therefore, a check must be made at 114 to determine the variety of type enforcement violation.
If the violation concerns general file access, a check is made at 116 to see if the file is a shared file. Programs typically need access to many different sorts of files: configuration files, temporary or scratch files, permanent data files, etc. The only factor that distinguishes these sorts of files from one another in the type enforcement sense is whether or not the file is shared. An example of a non-shared file could be a temporary file, where only the program needs access, and no other. An example of a shared file could be a database file containing a list of every user of the system.
If the file is shared, then a shared type must be used. Sometimes, this shared type will not yet exist, and must be created. This can occur, for instance, when two new programs both access the same new file (for example, a configuration file that one program writes and the other program reads). Sometimes, the shared file will already exist. This can occur, for instance, when the new program accesses a system-wide file (for example, a database file containing a list of every user of the system.) If the shared file already exists, it will already have a type. This type is used at 118.
If the file is not shared, a new type must be created, and the file is set to this new type.
In either case (both shared and non-shared), the final step is to add at 120 a direct permission to the type enforcement database. This type enforcement permission will grant the domain (in which the program is running) permission on the type (and the file is then set to that type). The fact that the file is not shared is manifested in that there will not be another domain with permission on the type. The fact that the file is shared is manifested in that there will be another domain with permission on the type.
The second kind of violation concerns process-to-process communication. Some programs require the ability to communicate with another program via a process-level mechanism. To deal with this kind of violation, one determines the domain of the second program (the program that the first program is trying to communicate with). Then, at 122 a direct permission is added to the type enforcement database. This permission will grant the sending domain (in which the first program is running) permission on the receiving domain (in which the second program is running). Permissions of this kind are called domain-to-domain interactions, and effectively control the permitted lines of communication between processes.
The third kind of violation concerns system environment access. This kind captures all the kinds of violations that do not concern file access or process-to-process communication. An example would be the ability to determine how busy the system is. Previously, this functionality was used to demonstrate a case where the ability could be removed. However, if the ability was required, then dealing with the type enforcement violation would be handled under this kind of violation. To deal with this kind of violation, one determines the specific functionality required. Then, at 124 a direct permission is added to the type enforcement database. This permission will grant the program's domain permission to invoke the required system function.
The Use of Both Type Enforcement and Network Separation to Protect Compiled Program Code
The process described in the previous section is sufficient to handle those instances where programs do not have to be compartmentalized to achieve sufficient security (e.g. utilities such as screen savers or logging utilities). In that case, all programs can reside in and be bound to one region or burb. For systems which can be partitioned into distinct burbs, however, an additional degree of protection can be added by incorporating network separation.
In the case, for instance, where the program(s) to be installed can reside in multiple burbs, yet must maintain the ability to communicate with each other via a networking interface, network separation is used to increase the overall security of the system. One example of such a case is where the system consists of two programs: a production-side program and an administration-side program. In these instances, it is required that the production-side program reside in an unprotected (e.g. Internet) burb. But one does not want the administration-side program residing in an unprotected (e.g. Internet) burb. So instead, the administration-side program is placed in a protected (e.g. internal) burb.
However, by splitting the two programs (production and administration) into two burbs, one has again changed the actual execution environment away from the expected execution environment. The two programs expect (and rely on) the execution environment to allow process to process communication (e.g. signals) between the two programs. On a non-Network Separated system without Type Enforcement™ protection, this assumption is valid. However, on a Network Separated system, the act of placing the administration-side program in a different burb than the production-side program will have the Network Separation side effect of disallowing process to process communication between the administration-side program and the production-side program. From a functionality standpoint, this is not acceptable.
To restore the required program to program communication functionality, one is required to perform steps outside of those explained in the previous section. The problem to be solved concerns the conflicting restrictions placed by the type enforcement system and the network separation system.
In Network Separation, if one program has been “bound” to one burb (e.g. Internet) and another program has been “bound” to another burb (e.g. internal), the Network Separation mechanism will preclude the ability for the first program to establish a connection to the second program via a network communication connection; thus it precludes the ability for program to program communication using the network. One way of viewing Network Separation having Type Enforcement™ protection is to acknowledge this lack of direct network to network communication: since the networks cannot be shared, they are separated.
So, the problem here can be restated as follows: given a program bound to one burb and another program bound to a different burb, how can we allow the two programs to communicate with each other without having access to the source code for either program.
The procedure to solve this problem is shown in FIG. 7:
First, at 140 decide to which burb each of the separate programs is to be bound. In the case of a production-side program and an administration-side program, it is clear that the production-side program must be bound to an Internet burb, and it is clear from a security standpoint that the administration-side program should be bound to an internal burb. In the case of more complicated sets of programs, tradeoffs must be made between availability and security. Binding a program to an Internet burb increases availability but also enormously increases security risks. Binding a program to an internal burb decreases security risks, but results in a decrease in availability.
Second, at 142 determine how the two programs interact. Again, it is important to note that source code is not available for this step. If source code is available for inspection and modification, then this procedure does not apply. Without source code, this step is accomplished by examining the type enforcement violations generated by either or both programs, as required.
Third, at 144 determine whether the type enforcement and network separation violations discovered in the second step can be eliminated. Techniques for removing TE violations (such as those shown in FIG. 6) can be used. In addition, one may be able to eliminate a type enforcement violation by placing one of the programs into a situation where it is “partially bound” to a particular burb. A “partially bound” program is a program which can interact with a specific burb under a reduced set of Network Separation rules. That is, a partially bound program can perform process to process communication with another burb but not network to network communication. Specifically, it is possible for the partially bound program in a first burb to signal a program that has been completely bound to another, separate burb, while simultaneously maintaining other connections to the first burb. Therefore, communication between programs assigned to different burbs is permitted in a limited way and the problem is solved.
In one embodiment, one creates this “partially bound” program by using a type enforcement database trick. Normally, a program “P” is completely bound to a burb when the following syntax is used:
“P” runs in domain www#
which means that program “P” can access any of the burbs 0 through 9 (as signified by the character ‘#’ above). When program “P” first starts running, it has its choice of burbs (e.g. www0, www1, . . . www9) in which to establish a connection. But after the first connection, the program “P” is fully bound to that burb and the domain that is associated with that burb, and cannot establish a connection to any network in another burb.
A program “P” is “partially bound” to a burb when the following syntax is used:
“P” runs in domain adm0
Here, the program “P” loses some of its flexibility (it can now connect only to burb adm0), but it gains the “partially bound” status. Furthermore, since such an approach is treated as if there are no other instantiated domains (i.e. no adm1, adm2, etc.), program “P” running in domain adm0 can now connect to all other “#”-defined domains.
For the production-side program “PP” and administration-side program “AA” example above, a full example might be:
Thus, since the program “AA” is “partially bound” to ‘adm0’, there can be no interaction with domains adm1 through adm9. Program “AA” has, however, gained the ability to interact with prd0 through prd9. Thus, the required signaling ability is re-established.
If it is not possible to place all but one of the programs in a “partially bound” burb, then this procedure must stop without succeeding. The only alternative at this point is to place all programs in the same burb. Then, since the additional Network Separation checks are no longer an issue, this procedure degenerates into the type enforcement only approach described previously.
Further information on increasing security of compiled program code can be found in “SYSTEM AND METHOD FOR SECURING COMPILED PROGRAM CODE”, U.S. Pat. No. 5,867,647, by Haigh et al., issued Feb. 2, 1999, the details of which are hereby incorporated by reference.
Secure Commerce Server
One embodiment of a binary program made secure through a combination of type enforcement and network separation is described next. The Netscape Commerce Server, version 1.1, available from Netscape Communications Corporation, is a server for use in conducting commercial transactions over the Internet. This server provides user authentication, SSL protection for http connections, and a forms interface for server administration. As such, it is critical that care be taken to prevent a malicious attacker from gaining access to and subverting the program.
A network separated embodiment 160 having Type Enforcement™ protection of the Netscape Commerce Server system is shown in FIG. 8. The Netscape Commerce Server system includes two distinct servers: the Commerce server 162 and the administration server 164. In the preferred embodiment, one administration server 164 should be able to administer and configure a plurality of separate Commerce servers 162. In the system of FIG. 8, the dashed lines indicate a user level permission check such as the Unix file permissions. In addition, components of the system are placed within domains 166, 168 and 172 for a higher level of security.
Commerce server 162 is installed via an HTML forms driven interface. Administration and configuration after installation is done in the same manner. Installation operates as follows:
1. The user runs the script ns-setup in the source tree (the directory structure from which files are copied to install the Netscape Commerce Server).
2. The user is prompted for the hostname of the host machine.
3. A temporary configuration is written in /tmp and a special WWW server is started using this configuration (the installation server).
4. The installation server binds to an arbitrary, non-reserved port.
5. The user is prompted for the name of a Web browser to start up. It then starts the browser on the port the installation server is listening on.
6. The rest of the installation is HTML forms-driven through the browser.
Various items, such as the port number for the Commerce server, the UID to run the server under, the install directory, logging, the administration password, and other server configuration, are entered via three forms.
7. The configuration is verified and the actual installation takes place:
1. create destination directory and subdirectories
2. create document root directory
3. create server startup and shutdown scripts
4. create server configuration files
5. copy all binaries, administration forms, and other files from source tree to destination tree
6. change owner of directories
7. start administration server
8. start Commerce Server
9. remove files created in /tmp
8. Installation complete.
Many Commerce servers 162 can be run on the same machine, binding to different ports (or even the same port, using different IP addresses) and having their own configuration. Each server 162 preforks 16 processes (number is configurable) to serve requests. This can load the system down if many servers are running.
After installation, Commerce server 162 is administered via forms using any forms-capable browser and connecting to a separate administration server running on its own port. This server is started and stopped manually, except during installation when it is started automatically. (It is recommended by Netscape that it be stopped when not being used.) The administration server is also configurable via forms (it configures itself). It can be configured to require a password to access it, and to allow only certain hostnames or IP addresses to connect to it.
The Commerce server 162, once running, serves Web pages from a directory tree whose root is configurable. CGI script location, URL mappings, user authentication, access control by host, logging, and other items are also configurable via the administration server. Configuring some items causes creation of files, such as generating a key, installing a certificate, and creating a user database.
High Level Design
This section describes what must be done to port the Netscape Commerce Server to the secure computer. Since we have no source code from Netscape, we cannot modify how the server or installation process operates. Thus, design will focus on what needs to be done outside the Netscape source code to handle type enforcement.
The installation process will be done completely in the administrative kernel (no type enforcement checks), so setting file types in the source tree is not necessary. All file types must be set after the install process is complete since Netscape is not type-enforcement (TE) aware. To accomplish this, a script is used to set all types appropriately. The script must be able to find the appropriate directories, because paths are configurable, and the server files are in a directory named for the port it binds to. This info is passed via arguments or entered by the installer via prompts.
Commerce Server 162 runs in domain 166. In one embodiment, this is the same domain in which the CERN server (not shown) runs. Domain 166 is a bound (burbed) domain which is extended to handle the Commerce Server 162. That is, the Commerce Server is extended to be able to bind to ports 443 (https), 80 (http), 8000, 8001, and 8080. In addition, to allow site flexibility, Commerce Server 162 must be able to bind to selected reserved port ranges.
Administration Server 164 runs in a new Netscape Admin domain, domain 168. Domain 168 must be a burbed domain, since it does a non-burb-aware socket ( ) call. Transfers between domains must be done through proxy programs acting in concert with kernel 170.
CGI scripts run in a CGI processor 171 in burbed CGI domain 172. Since, however, the Netscape server cannot be modified to do a makedomain() to run the CGI script in the proper domain, there will need to be a :tran type added to CGI domain 172. Then, all CGI scripts will be of type tran. In an alternate embodiment, a line could be added to the CGI script itself which causes the script to automatically transition into CGI domain 172. To run a CGI script, Commerce Server 162 initiates a new process which executes in CGI processor 171 within CGI domain 172. The results are then transferred back to Commerce Server 162. In one embodiment, the CERN server also executes CGI scripts within CGI domain 172.
For installing two servers on the same port, but with different IP addresses:
1. Follow the install directions twice, specifying everything the same except a different server name (which will need to resolve to the correct IP address), bind address, and document root.
2. Make an IP alias so that both IP addresses used for the bind addresses in the previous step access the secure computer. Type: ifconfig ef1 alias 172.17.128.199 to alias the IP address 172.17.128.199 to the external interface.
3. Access both servers using the different IP addresses.
Files Manifest
The following are files associated with the Commerce Server. In this section, /server-root/ will denote the path where the server files are installed. Also, the subdirectory https-443 will denote the directory containing all files specific to the server running on port 443. The actual name will contain the port number the server binds to in place of ‘443’.
/server-root/bin/https/ns-httpd
The Commerce server.
/server-root/admserv/ns-admin
The administration server.
/server-root/admserv/servlist
Part of the administration server.
/server-root/bin/https/admin/bin/*
CGI scripts for the admin server that generate forms, interpret responses, change config files, administer servers, generate keys, etc.
/server-root/extras/database/batchdb
Utility to convert NCSA-style user database to DBM database.
/server-root/extras/database/changepw.cgi
CGI script to allow users to change their own password via forms.
/server-root/extras/log_anly/analyze
Analyze access logs.
/server-root/extras/log_anly/a_form.cgi
CGI script to analyze access logs via forms.
/server-root/https-443/restart
Restart the Commerce server.
/server-root/https-443/rotate
Rotate the log files.
/server-root/https-443/start
Start up the Commerce server.
/server-root/https-443/stop
Shut down the Commerce server.
/server-root/start-admin
Start up the administration server.
/server-root/stop-admin
Shut down the administration server.
/server-root/admserv/admpw
User and password file for access to administration server.
/server-root/admserv/ns-admin.conf
Configuration file for the administration server.
/server-root/bin/https/admin/html/*
Forms templates for the administration server.
/server-root/bin/https/admin/icons/*
Icons for the administration server.
/server-root/mc-icons/*
Icons for the Commerce server, used for gopher and ftp listings.
/server-root/userdb/*
User databases. (Initially empty)
/server-root/https-443/config/admin.conf
Commerce server configuration file.
/server-root/https-443/config/magnus.conf
Commerce server configuration file.
/server-root/https-443/config/obj.conf
Commerce server configuration file.
/server-root/https-443/config/mime.types
Commerce server configuration file.
/server-root/https-443/config/ServerKey.der
Key pair file. Initially non-existent, generated by admin server when requested by the administrator. Path is configurable.
/server-root/https-443/config/ServerCert.der
Certificate file. Initially non-existent, installed by admin server from the Certificate Authority's certificate response when requested by the administrator. Path is configurable.
/server-root/admserv/errors
Log file for administration server.
/server-root/https-443/logs/access
Commerce server access log. Path is configurable.
/server-root/https-443/logs/errors
Commerce server error log.
/server-root/https-443/logs/secure
Commerce server secure log. (All https accesses are logged here with the keysize used).
/server-root/admserv/pid
Contains the process ID of the administration server when running.
/server-root/https-443/logs/pid
Contains the process ID of the Commerce server when running.
/etc/spwd.db
System password file containing encrypted passwords.
/etc/pwd.db
The corresponding password file without encrypted passwords.
/server-root/extras/database/changepw.htm
HTML form to allow users to change their own password.
/server-root/extras/log_anly/a_form.html
HTML form to analyze access logs.
/server-root/docs/index.html
Default home page for the Commerce server. Path is configurable.
CGI scripts
Path is configurable.
HTML pages
Path is configurable.
As noted previously, Netscape Commerce Web server 162 runs in the Web server domain (domain 166), the same domain that the CERN Web server runs in. Therefore, the Web server domain needs execute permission to the following:
/server-root/bin/https/ns-httpd
In addition, the Web server domain needs read permission to these files:
/server-root/mc-icons/*
/server-root/extras/database/changepw.htm
/server-root/extras/log_anly/a_form.html
/server-root/docs/index.html
The Web server domain needs write permission to these files:
/server-root/https-443/logs/access
/server-root/https-443/logs/errors
/server-root/https-443/logs/secure
/server-root/https-443/logs/pid
Since the server normally runs as root, it would be dangerous if it were able to execute arbitrary code. We address this by not having it run as root, taking advantage of Type Enforcement™ protection of the sockets, which allows users other than root to bind to low-numbered ports. Other than needing to bind to port 80 or 443, there is no other reason the server needs to run as root. Therefore, the server will run as user “www” and group “www”.
The CGI bin executables present a potential security concern. Since the Commerce server is allowed to execute these files, if someone were able to put their own executable on the secure computer, the server may be compromised. TE only allows the Web server domain to transition to the CGI domains and no others. It does not have permission to create or write to files of any executable type. This helps prevent the possibility of an external user subverting the server and uploading an executable file.
The entire directory tree containing the html documents which the Commerce server accesses will be write protected from the Web server domain. A separate writable directory will be set aside for all the data created by external users. This is where the CGI executables will put their results. The CGI domain will be allowed to create files in this directory, but not to read or destroy them. These files may later be read and moved internally by the webmaster using mail or FTP.
The password administration program will run in the Netscape Admin domain (domain 168). This domain will have all the accesses required to maintain the Admin server and all Commerce servers on the secure computer.
The Admin domain needs execute access to the following:
/server-root/admserv/ns-admin
/server-root/admserv/servlist
/server-root/bin/https/admin/bin/*
/server-root/extras/database/batchdb
/server-root/extras/log_anly/a_form.cgi
/server-root/extras/log_anly/analyze
/server-root/https-443/restart
/server-root/https-443/rotate
/server-root/https-443/start
/server-root/https-443/stop
/server-root/start-admin
/server-root/stop-admin
The Admin domain needs read access for:
/server-root/bin/https/admin/html/*
/server-root/bin/https/admin/icons/*
/server-root/extras/database/changepw.htm
/server-root/extras/log_anly/a_form.html
HTML pages
The Admin domain needs both read and write access for:
/server-root/admserv/admpw
/server-root/admserv/ns-admin.conf
/server-root/admserv/pid
/server-root/admserv/errors
/server-root/https-443/logs/access
/server-root/https-443/logs/errors
/server-root/https-443/logs/secure
/server-root/https-443/logs/pid
CGI scripts run in CGI domain 172. The following files will be of cgix:tran type and will run in the CGI domain:
/server-root/extras/database/changepw.cgi
/server-root/extras/log_anly/a_form.cgi
CGI-scripts
Commerce Server 162 is controlled by the various configuration files maintained by administration server 164. In one embodiment, it may be advantageous to add the ability to start and stop administrative server 164 from another system administration program running in another domain.
Since the secure computer uses type enforcement on sockets, we can allow users other than root to bind to reserved ports. As a result, we run the Commerce server as user id “www” and group id “www”. The “www” user is similar to “nobody” and has no login capabilities. The administration server will still run as root. In addition, scripts that start and stop the servers need to execute /bin/sh (type $Sys:shel). These scripts are also not type-aware, so they need to be modified to execute in the correct domain.
In one embodiment, the following domains are used to define the commerce server system. It should be apparent that other combinations of domains, privileges, or access rights could be used. In the list, each domain can have the following privileges: “is_admin” indicates the domain is an administrative domain; “has_rootness” means that the domain can violate Unix permissions if the process UID is root; “can_setlogin” indicates that the domain can set the login name of a process. In addition, each domain's DIT could have the following permissions: “dt” indicates that transition to the named domain is allowed; “sABRT” indicates that sending an ABRT signal to the named domain is allowed; “sJob” indicates that sending a job control signal to the named domain is allowed; “sHUP” indicates that sending a HUP signal to the named domain is allowed; and “sUser” indicates that sending a USR1 or USR2 signal to the named domain is allowed.
Finally, each DDT uses the following permissions: “w” is permission to write, “r” is permission to read, “d” is permission to destroy, “n” is permission to rename, “c” is permission to create and “e” is permission to execute.
Although the present invention has been described with reference to the preferred embodiments, those skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the invention.
Functional programming is a programming paradigm in which the primary method of computation is evaluation of functions. In this tutorial, you’ll explore functional programming in Python.
Functional programming typically plays a fairly small role in Python code. But it’s good to be familiar with it. At a minimum, you’ll probably encounter it from time to time when reading code written by others. You may even find situations where it’s advantageous to use Python’s functional programming capabilities in your own code.
In this tutorial, you’ll learn:
- What the functional programming paradigm entails
- What it means to say that functions are first-class citizens in Python
- How to define anonymous functions with the lambda keyword
- How to implement functional code using map(), filter(), and reduce()
What Is Functional Programming?
A pure function is a function whose output value follows solely from its input values, without any observable side effects. In functional programming, a program consists entirely of evaluation of pure functions. Computation proceeds by nested or composed function calls, without changes to state or mutable data.
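For instance (a small contrast added here purely for illustration):

>>> def average(a, b):        # pure: the result depends only on the inputs
...     return (a + b) / 2
...
>>> total = 0
>>> def add_to_total(n):      # impure: modifies global state as a side effect
...     global total
...     total += n
...
>>> average(2, 4)
3.0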
The functional paradigm is popular because it offers several advantages over other programming paradigms. Functional code is:
- High level: You’re describing the result you want rather than explicitly specifying the steps required to get there. Single statements tend to be concise but pack a lot of punch.
- Transparent: The behavior of a pure function depends only on its inputs and outputs, without intermediary values. That eliminates the possibility of side effects, which facilitates debugging.
- Parallelizable: Routines that don’t cause side effects can more easily run in parallel with one another.
Many programming languages support some degree of functional programming. In some languages, virtually all code follows the functional paradigm. Haskell is one such example. Python, by contrast, does support functional programming but contains features of other programming models as well.
While it’s true that an in-depth description of functional programming is somewhat complex, the goal here isn’t to present a rigorous definition but to show you what you can do by way of functional programming in Python.
How Well Does Python Support Functional Programming?
To support functional programming, it’s useful if a function in a given programming language has two abilities:
- To take another function as an argument
- To return another function to its caller
Python plays nicely in both these respects. As you’ve learned previously in this series, everything in a Python program is an object. All objects in Python have more or less equal stature, and functions are no exception.
In Python, functions are first-class citizens. That means functions have the same characteristics as values like strings and numbers. Anything you would expect to be able to do with a string or number you can do with a function as well.
For example, you can assign a function to a variable. You can then use that variable the same as you would use the function itself:
1>>> def func():
2...     print("I am function func()!")
3...
4
5>>> func()
6I am function func()!
7
8>>> another_name = func
9>>> another_name()
10I am function func()!

The assignment another_name = func on line 8 creates a new reference to func() named another_name. You can then call the function by either name, func or another_name, as shown on lines 5 and 9.

You can display a function to the console with print(), include it as an element in a composite data object like a list, or even use it as a dictionary key:

>>> def func():
...     print("I am function func()!")
...
>>> print("cat", func, 42)
cat <function func at 0x7f81b4d29bf8> 42

>>> objects = ["cat", func, 42]
>>> objects[1]
<function func at 0x7f81b4d29bf8>
>>> objects[1]()
I am function func()!

>>> d = {"cat": 1, func: 2, 42: 3}
>>> d[func]
2

In this example, func() appears in all the same contexts as the values "cat" and 42, and the interpreter handles it just fine.
Note: What you can or can’t do with any object in Python depends to some extent on context. There are some operations, for example, that work for certain object types but not for others.
You can add two integer objects or concatenate two string objects with the plus operator (+). But the plus operator isn’t defined for function objects.
For present purposes, what matters is that functions in Python satisfy the two criteria beneficial for functional programming listed above. You can pass a function to another function as an argument:
1>>> def inner():
2...     print("I am function inner()!")
3...
4
5>>> def outer(function):
6...     function()
7...
8
9>>> outer(inner)
10I am function inner()!
Here’s what’s happening in the above example:
- The call on line 9 passes inner() as an argument to outer().
- Within outer(), Python binds inner() to the function parameter function.
- outer() can then call inner() directly via function.
This is known as function composition.
Technical note: Python provides a shortcut notation called a decorator to facilitate wrapping one function inside another. For more information, check out the Primer on Python Decorators.
When you pass a function to another function, the passed-in function sometimes is referred to as a callback because a call back to the inner function can modify the outer function’s behavior.
A good example of this is the Python function sorted(). Ordinarily, if you pass a list of string values to sorted(), then it sorts them in lexical order:

>>> animals = ["ferret", "vole", "dog", "gecko"]
>>> sorted(animals)
['dog', 'ferret', 'gecko', 'vole']

However, sorted() takes an optional key argument that specifies a callback function that can serve as the sorting key. So, for example, you can sort by string length instead:

>>> animals = ["ferret", "vole", "dog", "gecko"]
>>> sorted(animals, key=len)
['dog', 'vole', 'gecko', 'ferret']

sorted() can also take an optional argument that specifies sorting in reverse order. But you could manage the same thing by defining your own callback function that reverses the sense of len():

>>> animals = ["ferret", "vole", "dog", "gecko"]
>>> sorted(animals, key=len, reverse=True)
['ferret', 'gecko', 'vole', 'dog']

>>> def reverse_len(s):
...     return -len(s)
...
>>> sorted(animals, key=reverse_len)
['ferret', 'gecko', 'vole', 'dog']

You can check out How to Use sorted() and sort() in Python for more information on sorting data in Python.
Just as you can pass a function to another function as an argument, a function can also specify another function as its return value:
1>>> def outer():
2...     def inner():
3...         print("I am function inner()!")
4...
5...     # Function outer() returns function inner()
6...     return inner
7...
8
9>>> function = outer()
10>>> function
11<function outer.<locals>.inner at 0x7f18bc85faf0>
12>>> function()
13I am function inner()!
14
15>>> outer()()
16I am function inner()!

Here’s what’s going on in this example:
- Lines 2 to 3: outer() defines a local function inner().
- Line 6: outer() passes inner() back as its return value.
- Line 9: The return value from outer() is assigned to the variable function.

Following this, you can call inner() indirectly through function, as shown on line 12. You can also call it indirectly using the return value from outer() without intermediate assignment, as on line 15.

As you can see, Python has the pieces in place to support functional programming nicely. Before you jump into functional code, though, there’s one more concept that will be helpful for you to explore: the lambda expression.
Defining an Anonymous Function With lambda

Functional programming is all about calling functions and passing them around, so it naturally involves defining a lot of functions. You can always define a function in the usual way, using the def keyword as you have seen in previous tutorials in this series.

Sometimes, though, it’s convenient to be able to define an anonymous function on the fly, without having to give it a name. In Python, you can do this with a lambda expression.

Technical note: The term lambda comes from lambda calculus, a formal system of mathematical logic for expressing computation based on function abstraction and application.

The syntax of a lambda expression is as follows:

lambda <parameter_list>: <expression>

The following table summarizes the parts of a lambda expression:

lambda            The keyword that introduces a lambda expression
<parameter_list>  An optional comma-separated list of parameter names
:                 Punctuation that separates <parameter_list> from <expression>
<expression>      An expression, usually involving the names in <parameter_list>

The value of a lambda expression is a callable function, just like a function defined with the def keyword. It takes arguments, as specified by <parameter_list>, and returns a value, as indicated by <expression>.
Here’s a quick first example:
1>>> lambda s: s[::-1]
2<function <lambda> at 0x7fef8b452e18>
3
4>>> callable(lambda s: s[::-1])
5True

The statement on line 1 is just the lambda expression by itself. On line 2, Python displays the value of the expression, which you can see is a function.

The built-in Python function callable() returns True if the argument passed to it appears to be callable and False otherwise. Lines 4 and 5 show that the value returned by the lambda expression is in fact callable, as a function should be.

In this case, the parameter list consists of the single parameter s. The subsequent expression s[::-1] is slicing syntax that returns the characters in s in reverse order. So this lambda expression defines a temporary, nameless function that takes a string argument and returns the argument string with the characters reversed.

The object created by a lambda expression is a first-class citizen, just like a standard function or any other object in Python. You can assign it to a variable and then call the function using that name:
>>> reverse = lambda s: s[::-1]
>>> reverse("I am a string")
'gnirts a ma I'

This is functionally—no pun intended—equivalent to defining reverse() with the def keyword:

1>>> def reverse(s):
2...     return s[::-1]
3...
4>>> reverse("I am a string")
5'gnirts a ma I'
6
7>>> reverse = lambda s: s[::-1]
8>>> reverse("I am a string")
9'gnirts a ma I'

The calls on lines 4 and 8 above behave identically.

However, it’s not necessary to assign a variable to a lambda expression before calling it. You can also call the function defined by a lambda expression directly:

>>> (lambda s: s[::-1])("I am a string")
'gnirts a ma I'

Here’s another example:

>>> (lambda x1, x2, x3: (x1 + x2 + x3) / 3)(9, 6, 6)
7.0
>>> (lambda x1, x2, x3: (x1 + x2 + x3) / 3)(1.4, 1.1, 0.5)
1.0
In this case, the parameters are x1, x2, and x3, and the expression is (x1 + x2 + x3) / 3. This is an anonymous lambda function to calculate the average of three numbers.
As another example, recall above when you defined reverse_len() to serve as a callback function to sorted():

>>> animals = ["ferret", "vole", "dog", "gecko"]
>>> def reverse_len(s):
...     return -len(s)
...
>>> sorted(animals, key=reverse_len)
['ferret', 'gecko', 'vole', 'dog']

You could use a lambda function here as well:

>>> animals = ["ferret", "vole", "dog", "gecko"]
>>> sorted(animals, key=lambda s: -len(s))
['ferret', 'gecko', 'vole', 'dog']

A lambda expression will typically have a parameter list, but it’s not required. You can define a lambda function without parameters. The return value is then not dependent on any input parameters:

>>> forty_two_producer = lambda: 42
>>> forty_two_producer()
42
Note that you can only define fairly rudimentary functions with lambda. The return value from a lambda expression can only be one single expression. A lambda expression can’t contain statements like assignment or return, nor can it contain control structures such as for, while, if, else, or def.

You learned in the previous tutorial on defining a Python function that a function defined with def can effectively return multiple values. If a return statement in a function contains several comma-separated values, then Python packs them and returns them as a tuple:

>>> def func(x):
...     return x, x ** 2, x ** 3
...
>>> func(3)
(3, 9, 27)

This implicit tuple packing doesn’t work with an anonymous lambda function:

>>> (lambda x: x, x ** 2, x ** 3)(3)
<stdin>:1: SyntaxWarning: 'tuple' object is not callable; perhaps you missed a comma?
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'x' is not defined

But you can return a tuple from a lambda function. You just have to denote the tuple explicitly with parentheses. You can also return a list or a dictionary from a lambda function:

>>> (lambda x: (x, x ** 2, x ** 3))(3)
(3, 9, 27)
>>> (lambda x: [x, x ** 2, x ** 3])(3)
[3, 9, 27]
>>> (lambda x: {1: x, 2: x ** 2, 3: x ** 3})(3)
{1: 3, 2: 9, 3: 27}

A lambda expression has its own local namespace, so the parameter names don’t conflict with identical names in the global namespace. A lambda expression can access variables in the global namespace, but it can’t modify them.
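A quick illustration of that last point (the names n and add_n are invented for this example):

>>> n = 10
>>> add_n = lambda x: x + n   # the lambda body can read the global n
>>> add_n(5)
15

An assignment such as n = 0 inside the lambda body would be a syntax error, since a lambda can contain only a single expression.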
There’s one final oddity to be aware of. If you find a need to include a lambda expression in a formatted string literal (f-string), then you’ll need to enclose it in explicit parentheses:

>>> print(f"--- {lambda s: s[::-1]} ---")
  File "<stdin>", line 1
    (lambda s)
             ^
SyntaxError: f-string: invalid syntax

>>> print(f"--- {(lambda s: s[::-1])} ---")
--- <function <lambda> at 0x7f97b775fa60> ---

>>> print(f"--- {(lambda s: s[::-1])('I am a string')} ---")
--- gnirts a ma I ---

Now you know how to define an anonymous function with lambda. For further reading on lambda functions, check out How to Use Python Lambda Functions.

Next, it’s time to delve into functional programming in Python. You’ll see how lambda functions are particularly convenient when writing functional code.
Python offers two built-in functions, map() and filter(), that fit the functional programming paradigm. A third, reduce(), is no longer part of the core language but is still available from a module called functools. Each of these three functions takes another function as one of its arguments.
Applying a Function to an Iterable With map()

The first function on the docket is map(), which is a Python built-in function. With map(), you can apply a function to each element in an iterable in turn, and map() will return an iterator that yields the results. This can allow for some very concise code because a map() statement can often take the place of an explicit loop.

Calling map() With a Single Iterable

The syntax for calling map() on a single iterable looks like this:

map(<f>, <iterable>)

map(<f>, <iterable>) returns an iterator that yields the results of applying function <f> to each element of <iterable>.
Here’s an example. Suppose you’ve defined reverse(), a function that takes a string argument and returns its reverse, using your old friend the [::-1] string slicing mechanism:

>>> def reverse(s):
...     return s[::-1]
...
>>> reverse("I am a string")
'gnirts a ma I'

If you have a list of strings, then you can use map() to apply reverse() to each element of the list:

>>> animals = ["cat", "dog", "hedgehog", "gecko"]
>>> iterator = map(reverse, animals)
>>> iterator
<map object at 0x7fd3558cbef0>

But remember, map() doesn’t return a list. It returns an iterator called a map object. To obtain the values from the iterator, you need to either iterate over it or use list():

>>> iterator = map(reverse, animals)
>>> for i in iterator:
...     print(i)
...
tac
god
gohegdeh
okceg

>>> iterator = map(reverse, animals)
>>> list(iterator)
['tac', 'god', 'gohegdeh', 'okceg']

Iterating over iterator yields the items from the original list animals, with each string reversed by reverse().

In this example, reverse() is a pretty short function, one you might well not need outside of this use with map(). Rather than cluttering up the code with a throwaway function, you could use an anonymous lambda function instead:

>>> animals = ["cat", "dog", "hedgehog", "gecko"]
>>> iterator = map(lambda s: s[::-1], animals)
>>> list(iterator)
['tac', 'god', 'gohegdeh', 'okceg']

>>> # Combining it all into one line:
>>> list(map(lambda s: s[::-1], ["cat", "dog", "hedgehog", "gecko"]))
['tac', 'god', 'gohegdeh', 'okceg']
If the iterable contains items that aren’t suitable for the specified function, then Python raises an exception:
>>> list(map(lambda s: s[::-1], ["cat", "dog", 3.14159, "gecko"]))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 1, in <lambda>
TypeError: 'float' object is not subscriptable

In this case, the lambda function expects a string argument, which it tries to slice. The element 3.14159 is a float object, which isn’t sliceable. So a TypeError occurs.

Here’s a somewhat more real-world example: In the tutorial section on built-in string methods, you encountered str.join(), which concatenates strings from an iterable, separated by the specified string:

>>> "+".join(["cat", "dog", "hedgehog", "gecko"])
'cat+dog+hedgehog+gecko'

This works fine if the objects in the list are strings. If they aren’t, then str.join() raises a TypeError exception:
>>> "+".join([1, 2, 3, 4, 5])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: sequence item 0: expected str instance, int found

One way to remedy this is with a loop. Using a for loop, you can create a new list that contains string representations of the numbers in the original list. Then you can pass the new list to .join():

>>> strings = []
>>> for i in [1, 2, 3, 4, 5]:
...     strings.append(str(i))
...
>>> strings
['1', '2', '3', '4', '5']
>>> "+".join(strings)
'1+2+3+4+5'

However, because map() applies a function to each object of a list in turn, it can often eliminate the need for an explicit loop. In this case, you can use map() to apply str() to the list objects before joining them:

>>> "+".join(map(str, [1, 2, 3, 4, 5]))
'1+2+3+4+5'

map(str, [1, 2, 3, 4, 5]) returns an iterator that yields the string objects "1", "2", "3", "4", and "5", and you can then successfully pass that iterator to .join().

Although map() accomplishes the desired effect in the above example, it would be more Pythonic to use a list comprehension to replace the explicit loop in a case like this.
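For instance, the same join could be written with a list comprehension (shown here purely as an illustrative alternative):

>>> "+".join([str(i) for i in [1, 2, 3, 4, 5]])
'1+2+3+4+5'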
Calling
map() With Multiple Iterables
There’s another form of
map() that takes more than one iterable argument:
map(<f>, <iterable₁>, <iterable₂>, ..., <iterableₙ>)
map(<f>, <iterable₁>, <iterable₂>, ..., <iterableₙ>) applies <f> to the elements in each <iterableᵢ> in parallel and returns an iterator that yields the results.
The number of <iterableᵢ> arguments specified to map() must match the number of arguments that <f> expects.
<f> acts on the first item of each <iterableᵢ>, and that result becomes the first item that the return iterator yields. Then <f> acts on the second item in each <iterableᵢ>, and that becomes the second yielded item, and so on.
An example should help clarify:
>>> def f(a, b, c):
...     return a + b + c
...
>>> list(map(f, [1, 2, 3], [10, 20, 30], [100, 200, 300]))
[111, 222, 333]
In this case,
f() takes three arguments. Correspondingly, there are three iterable arguments to
map(): the lists
[1, 2, 3],
[10, 20, 30], and
[100, 200, 300].
The first item returned is the result of applying
f() to the first element in each list:
f(1, 10, 100). The second item returned is
f(2, 20, 200), and the third is
f(3, 30, 300).
The return value from map() is an iterator that yields the values 111, 222, and 333.
Again in this case, since
f() is so short, you could readily replace it with a
lambda function instead:
>>> list(
...     map(
...         (lambda a, b, c: a + b + c),
...         [1, 2, 3],
...         [10, 20, 30],
...         [100, 200, 300]
...     )
... )
[111, 222, 333]
This example uses extra parentheses around the
lambda function and implicit line continuation. Neither is necessary, but they help make the code easier to read.
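One more detail worth knowing about this form: if the iterables have different lengths, map() stops as soon as the shortest one is exhausted, as this small illustration shows:

>>> list(map(lambda a, b: a + b, [1, 2, 3], [10, 20]))
[11, 22]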
Selecting Elements From an Iterable With
filter()
filter() allows you to select or filter items from an iterable based on evaluation of the given function. It’s called as follows:
filter(<f>, <iterable>)
filter(<f>, <iterable>) applies function
<f> to each element of
<iterable> and returns an iterator that yields all items for which <f> returns a truthy value. Conversely, it filters out all items for which <f> returns a falsy value.
In the following example,
greater_than_100(x) is truthy if
x > 100:
>>> def greater_than_100(x):
...     return x > 100
...
>>> list(filter(greater_than_100, [1, 111, 2, 222, 3, 333]))
[111, 222, 333]
In this case,
greater_than_100() is truthy for items
111,
222, and
333, so these items remain, whereas
1,
2, and
3 are discarded. As in previous examples,
greater_than_100() is a short function, and you could replace it with a
lambda expression instead:
>>> list(filter(lambda x: x > 100, [1, 111, 2, 222, 3, 333]))
[111, 222, 333]
The next example features
range().
range(n) produces an iterable that yields the integers from
0 to
n - 1. The following example uses
filter() to select only the even numbers from the list and filter out the odd numbers:
>>> list(range(10))
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> def is_even(x):
...     return x % 2 == 0
...
>>> list(filter(is_even, range(10)))
[0, 2, 4, 6, 8]
>>> list(filter(lambda x: x % 2 == 0, range(10)))
[0, 2, 4, 6, 8]
Here’s an example using a built-in string method:
>>> animals = ["cat", "Cat", "CAT", "dog", "Dog", "DOG", "emu", "Emu", "EMU"] >>> def all_caps(s): ... return s.isupper() ... >>> list(filter(all_caps, animals)) ['CAT', 'DOG', 'EMU'] >>> list(filter(lambda s: s.isupper(), animals)) ['CAT', 'DOG', 'EMU']
Remember from the previous tutorial on string methods that
s.isupper() returns
True if all alphabetic characters in
s are uppercase and
False otherwise.
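A related convenience: if you pass None as the function, filter() uses each item's own truthiness, keeping only the truthy items. A quick illustration:

>>> list(filter(None, [0, 1, "", "cat", None, [], [1]]))
[1, 'cat', [1]]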
Reducing an Iterable to a Single Value With
reduce()
reduce() applies a function to the items in an iterable two at a time, progressively combining them to produce a single result.
reduce() was once a built-in function in Python. Guido van Rossum apparently rather disliked
reduce() and advocated for its removal from the language entirely. (Source)
Guido actually advocated for eliminating all three of
reduce(),
map(), and
filter() from Python. One can only guess at his rationale. As it happens, the previously mentioned list comprehension covers the functionality provided by all these functions and much more. You can learn more by reading When to Use a List Comprehension in Python.
As you’ve seen,
map() and
filter() remain built-in functions in Python.
reduce() is no longer a built-in function, but it’s available for import from a standard library module, as you’ll see next.
To use
reduce(), you need to import it from a module called
functools. This is possible in several ways, but the following is the most straightforward:
from functools import reduce
Following this, the interpreter places
reduce() into the global namespace and makes it available for use. The examples you’ll see below assume that this is the case.
Calling
reduce() With Two Arguments
The most straightforward
reduce() call takes one function and one iterable, as shown below:
reduce(<f>, <iterable>)
reduce(<f>, <iterable>) uses
<f>, which must be a function that takes exactly two arguments, to progressively combine the elements in
<iterable>. To start,
reduce() invokes
<f> on the first two elements of
<iterable>. That result is then combined with the third element, then that result with the fourth, and so on until the list is exhausted. Then
reduce() returns the final result.
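To make that combining process concrete, here's a minimal pure-Python sketch of a two-argument reduction (an illustration of the behavior just described, not the actual functools implementation):

>>> def custom_reduce(f, iterable):
...     it = iter(iterable)
...     result = next(it)  # Start with the first element
...     for item in it:    # Combine with each remaining element
...         result = f(result, item)
...     return result
...
>>> custom_reduce(lambda x, y: x + y, [1, 2, 3, 4, 5])
15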
Guido was right when he said the most straightforward applications of
reduce() are those using associative operators. Let’s start with the plus operator (
+):
>>> def f(x, y):
...     return x + y
...
>>> from functools import reduce
>>> reduce(f, [1, 2, 3, 4, 5])
15
This call to
reduce() produces the result
15 from the list
[1, 2, 3, 4, 5] by computing ((((1 + 2) + 3) + 4) + 5).
This is a rather roundabout way of summing the numbers in the list! While this works fine, there’s a more direct way. Python’s built-in
sum() returns the sum of the numeric values in an iterable:
>>> sum([1, 2, 3, 4, 5])
15
Remember that the binary plus operator also concatenates strings. So this same example will progressively concatenate the strings in a list as well:
>>> reduce(f, ["cat", "dog", "hedgehog", "gecko"]) 'catdoghedgehoggecko'
Again, there’s a way to accomplish this that most would consider more typically Pythonic. This is precisely what
str.join() does:
>>> "".join(["cat", "dog", "hedgehog", "gecko"]) 'catdoghedgehoggecko'
Now consider an example using the binary multiplication operator (
*). The factorial of a positive integer
n is the product of all the integers from 1 up to n:

n! = 1 × 2 × ⋯ × n
You can implement a factorial function using
reduce() and
range() as shown below:
>>> def multiply(x, y):
...     return x * y
...
>>> def factorial(n):
...     from functools import reduce
...     return reduce(multiply, range(1, n + 1))
...
>>> factorial(4)  # 1 * 2 * 3 * 4
24
>>> factorial(6)  # 1 * 2 * 3 * 4 * 5 * 6
720
Once again, there’s a more straightforward way to do this. You can use
factorial() provided by the standard
math module:
>>> from math import factorial
>>> factorial(4)
24
>>> factorial(6)
720
As a final example, suppose you need to find the maximum value in a list. Python provides the built-in function
max() to do this, but you could use
reduce() as well:
>>> max([23, 49, 6, 32])
49
>>> def greater(x, y):
...     return x if x > y else y
...
>>> from functools import reduce
>>> reduce(greater, [23, 49, 6, 32])
49
Notice that in each example above, the function passed to
reduce() is a one-line function. In each case, you could have used a
lambda function instead:
>>> reduce(lambda x, y: x + y, [1, 2, 3, 4, 5])
15
>>> reduce(lambda x, y: x + y, ["foo", "bar", "baz", "quz"])
'foobarbazquz'
>>> def factorial(n):
...     from functools import reduce
...     return reduce(lambda x, y: x * y, range(1, n + 1))
...
>>> factorial(4)
24
>>> factorial(6)
720
>>> reduce((lambda x, y: x if x > y else y), [23, 49, 6, 32])
49
This is a convenient way to avoid placing an otherwise unneeded function into the namespace. On the other hand, it may be a little harder for someone reading the code to determine your intent when you use
lambda instead of defining a separate function. As is often the case, it’s a balance between readability and convenience.
Calling
reduce() With an Initial Value
There’s another way to call
reduce() that specifies an initial value for the reduction sequence:
reduce(<f>, <iterable>, <init>)
When present,
<init> specifies an initial value for the combination. In the first call to
<f>, the arguments are
<init> and the first element of
<iterable>. That result is then combined with the second element of
<iterable>, and so on:
>>> def f(x, y):
...     return x + y
...
>>> from functools import reduce
>>> reduce(f, [1, 2, 3, 4, 5], 100)  # (100 + 1 + 2 + 3 + 4 + 5)
115

>>> # Using lambda:
>>> reduce(lambda x, y: x + y, [1, 2, 3, 4, 5], 100)
115
Now the sequence of function calls starts from the initial value: f(f(f(f(f(100, 1), 2), 3), 4), 5).
You could readily achieve the same result without
reduce():
>>> 100 + sum([1, 2, 3, 4, 5])
115
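The initial value also acts as a safety net: if the iterable is empty, reduce() simply returns <init>, whereas the two-argument form raises a TypeError on an empty iterable:

>>> reduce(lambda x, y: x + y, [], 100)
100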
As you’ve seen in the above examples, even in cases where you can accomplish a task using
reduce(), it’s often possible to find a more straightforward and Pythonic way to accomplish the same task without it. Maybe it’s not so hard to imagine why
reduce() was removed from the core language after all.
That said,
reduce() is kind of a remarkable function. The description at the beginning of this section states that
reduce() combines elements to produce a single result. But that result can be a composite object like a list or a tuple. For that reason,
reduce() is a very generalized higher-order function from which many other functions can be implemented.
For example, you can implement
map() in terms of
reduce():
>>> numbers = [1, 2, 3, 4, 5]
>>> list(map(str, numbers))
['1', '2', '3', '4', '5']

>>> def custom_map(function, iterable):
...     from functools import reduce
...
...     return reduce(
...         lambda items, value: items + [function(value)],
...         iterable,
...         [],
...     )
...
>>> list(custom_map(str, numbers))
['1', '2', '3', '4', '5']
You can implement
filter() using
reduce() as well:
>>> numbers = list(range(10))
>>> numbers
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

>>> def is_even(x):
...     return x % 2 == 0
...
>>> list(filter(is_even, numbers))
[0, 2, 4, 6, 8]

>>> def custom_filter(function, iterable):
...     from functools import reduce
...
...     return reduce(
...         lambda items, value: items + [value] if function(value) else items,
...         iterable,
...         []
...     )
...
>>> list(custom_filter(is_even, numbers))
[0, 2, 4, 6, 8]
In fact, any operation on a sequence of objects can be expressed as a reduction.
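As a small illustration of a composite result, this sketch reverses a list by prepending each element to an accumulator list:

>>> from functools import reduce
>>> reduce(lambda acc, x: [x] + acc, [1, 2, 3, 4], [])
[4, 3, 2, 1]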
Conclusion
Functional programming is a programming paradigm in which the primary method of computation is evaluation of pure functions. Although Python is not primarily a functional language, it’s good to be familiar with
lambda,
map(),
filter(), and
reduce() because they can help you write concise, high-level, parallelizable code. You’ll also see them in code that others have written.
In this tutorial, you learned:
- What functional programming is
- How functions in Python are first-class citizens, and how that makes them suitable for functional programming
- How to define a simple anonymous function with
lambda
- How to implement functional code with
map(),
filter(), and
reduce()
With that, you’ve reached the end of this introductory series on the fundamentals of working with Python. Congratulations! You now have a solid foundation for making useful programs in an efficient, Pythonic style.
If you’re interested in taking your Python skills to the next level, then you can check out some more intermediate and advanced tutorials. You can also check out some Python project ideas to start putting your Python superpowers on display. Happy coding!
https://realpython.com/python-functional-programming/
Opened 6 years ago
Closed 6 years ago
Last modified 5 years ago
#14557 closed (wontfix)
Generic View improvement
Description
Hi,
Maybe adding a __call__ method to django.views.generic.View (source:django/trunk/django/views/generic/base.py) could be useful, something like:

def __call__(self, request, *args, **kwargs):
    return self.dispatch(request, *args, **kwargs)

It would allow using a generic View "instance" as a view (hum.. what could be more logical?):
url(r'^permlist/$', ListView(model=Permission, template_name="default/list.html")),
in addition to (instead of?) the current View.as_view() "factory"
url(r'^permlist/$', ListView.as_view(model=Permission, template_name="default/list.html")),
The sanity check also has to be moved to the View.__init__() method. Have a look at the attached file.
Attachments (2)
Change History (4)
Changed 6 years ago by pyrou
Changed 6 years ago by pyrou
comment:1 Changed 6 years ago by lrekucki
Sorry, but I'm going to mark this as won't fix. In summary, what you propose causes assignments to self to persist from one request to another (it's also not thread-safe). Please see the wiki page (and the discussion threads mentioned there) if you want to know more. There is a good reason why this way of creating views was chosen.
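A minimal sketch of the difference (simplified, not Django's actual implementation): with __call__ on a shared instance, anything assigned to self in one request is still there in the next, while the as_view() factory sidesteps this by building a fresh instance per request.

class ListView:
    def dispatch(self, request, *args, **kwargs):
        return "response for %s" % request

def as_view(cls, **initkwargs):
    # Simplified analogue of View.as_view(): the returned closure
    # constructs a brand-new instance for every request, so no
    # per-request state can leak between requests.
    def view(request, *args, **kwargs):
        self = cls(**initkwargs)  # fresh instance each time
        return self.dispatch(request, *args, **kwargs)
    return view

view = as_view(ListView)
print(view("GET /permlist/"))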
https://code.djangoproject.com/ticket/14557
Classification and Regression Trees (CART) Theory and Applications
Classification and Regression Trees (CART): Theory and Applications

A Master Thesis presented by Roman Timofeev (188778) to Prof. Dr. Wolfgang Härdle, CASE - Center of Applied Statistics and Economics, Humboldt University, Berlin, in partial fulfillment of the requirements for the degree of Master of Arts. Berlin, December 20, 2004
Declaration of Authorship

I hereby confirm that I have authored this master thesis independently and without use of others than the indicated resources. All passages which are literally or in general matter taken out of publications or other resources are marked as such.

Roman Timofeev
Berlin, January 27, 2005
Abstract. The second part of the paper answers the question of why we should or should not use the CART method. Advantages and weaknesses of the method are discussed and tested in detail. In the last part, CART is applied to real data using the statistical software XploRe. Here, different statistical macros (quantlets) and graphical and plotting tools are presented.

Keywords: CART, Classification method, Classification tree, Regression tree, Statistical Software
Contents

1 Introduction
2 Construction of Maximum Tree
  2.1 Classification tree
    2.1.1 Gini splitting rule
    2.1.2 Twoing splitting rule
  2.2 Regression tree
3 Choice of the Right Size Tree
  3.1 Optimization by minimum number of points
  3.2 Cross-validation
4 Classification of New Data
5 Advantages and Disadvantages of CART
  5.1 CART as a classification method
  5.2 CART in financial sector
  5.3 Disadvantages of CART
6 Examples
  6.1 Simulated example
  6.2 Boston housing example
  6.3 Bankruptcy data example
List of Figures

1.1 Classification tree of San Diego Medical Center patients
2.1 Splitting algorithm of CART
2.2 Maximum classification tree for bankruptcy dataset, constructed using Gini splitting rule
2.3 Maximum classification tree for bankruptcy dataset, constructed using Twoing splitting rule
3.1 Classification tree for bankruptcy dataset, parameter N_min equal to 15
3.2 Classification tree for bankruptcy dataset, parameter N_min equal to 30
5.1 Classification tree for bankruptcy dataset after including a third random variable
5.2 Classification tree for bankruptcy dataset after monotone transformation of the first variable
5.3 Classification tree for bankruptcy dataset after two outliers
5.4 Classification tree constructed on 90% of the bankruptcy dataset
5.5 Dataset of three classes with linear structure
5.6 Dataset of three classes with non-linear structure
6.1 Simulated uniform data with 3 classes
6.2 Classification tree for generated data of 500 observations
6.3 Simulated uniform data with three classes and overlapping
6.4 Maximum classification tree for simulated two-dimensional data with overlapping
6.5 Maximum tree for simulated two-dimensional data with overlapping and NumberOfPoints option
6.6 PDF tree generated via cartdrawpdfclass command
6.7 Regression tree for 100 observations of Boston housing dataset, N_min set to 10
6.8 Regression tree for 100 observations of Boston housing dataset, N_min set to 30
6.9 Classification tree of bankruptcy dataset, N_min set to 30
6.10 Classification tree of bankruptcy dataset, N_min set to 10
6.11 Distribution of classification ratio by N_min parameter
6.12 Classification tree of bankruptcy dataset, N_min set to 40
Notation

X - [N x M] matrix of variables in the learning sample
Y - [N x 1] vector of classes/response values in the learning sample
N - number of observations in the learning sample
M - number of variables in the learning sample
t - index of node t
t_p - parent node
t_l - left child node
t_r - right child node
P_l - probability of left node
P_r - probability of right node
i(t) - impurity function
i(t_p) - impurity value for parent node
i(t_c) - impurity value for child nodes
i(t_l) - impurity value for left child node
i(t_r) - impurity value for right child node
Δi(t) - change of impurity
K - number of classes
k - index of class
p(k|t) - conditional probability of class k provided we are in node t
T - decision tree
R(T) - misclassification error of tree T
|T̃| - number of terminal nodes in tree T
α(|T̃|) - complexity measure which depends on the number of terminal nodes
x_j^R - best splitting value of variable x_j
N_min - minimum number of observations parameter, used for pruning
1 Introduction

Classification and Regression Trees (CART) is a classification method which uses historical data to construct so-called decision trees. Decision trees are then used to classify new data. In order to use CART, we need to know the number of classes a priori. The CART methodology was developed in the 1980s by Breiman, Friedman, Olshen and Stone in their monograph Classification and Regression Trees (1984).

For building decision trees, CART uses a so-called learning sample - a set of historical data with pre-assigned classes for all observations. For example, a learning sample for a credit scoring system would be fundamental information about previous borrowers (variables) matched with actual payoff results (classes). Decision trees are represented by a set of questions which split the learning sample into smaller and smaller parts. CART asks only yes/no questions. A possible question could be: "Is age greater than 50?" or "Is sex male?". The CART algorithm searches through all possible variables and all possible values in order to find the best split - the question that splits the data into two parts with maximum homogeneity. The process is then repeated for each of the resulting data fragments.

Here is an example of a simple classification tree used by the San Diego Medical Center for the classification of their patients into different levels of risk. The tree asks, in turn, "Is the systolic blood pressure > 91?", "Is age > 62.5?" and "Is sinus tachycardia present?", and assigns the class High Risk or Low Risk at each terminal node.

[Figure 1.1: Classification tree of San Diego Medical Center patients.]
In practice there can be much more complicated decision trees, which can include dozens of levels and hundreds of variables. As can be seen from Figure 1.1, CART can easily handle both numerical and categorical variables. Among other advantages of the CART method is its robustness to outliers: usually the splitting algorithm will isolate outliers in an individual node or nodes. An important practical property of CART is that the structure of its classification or regression trees is invariant with respect to monotone transformations of the independent variables. One can replace any variable with its logarithm or square root value; the structure of the tree will not change.

The CART methodology consists of three parts:

1. Construction of the maximum tree
2. Choice of the right tree size
3. Classification of new data using the constructed tree
2 Construction of Maximum Tree

This part is the most time consuming. Building the maximum tree implies splitting the learning sample up to the last observations, i.e. until terminal nodes contain observations of only one class. Splitting algorithms are different for classification and regression trees. Let us first consider the construction of classification trees.

2.1 Classification tree

Classification trees are used when for each observation of the learning sample we know the class in advance. Classes in the learning sample may be provided by the user or calculated in accordance with some exogenous rule. For example, for a stock trading project, the class can be computed subject to the real change of the asset price.

Let $t_p$ be a parent node and $t_l$, $t_r$ respectively the left and right child nodes of the parent node $t_p$. Consider the learning sample with variable matrix X with M variables $x_j$ and N observations. Let the class vector Y consist of N observations with a total of K classes.

A classification tree is built in accordance with a splitting rule - the rule that performs the splitting of the learning sample into smaller parts. We already know that each time the data have to be divided into two parts with maximum homogeneity:

[Figure 2.1: Splitting algorithm of CART - the parent node $t_p$ is split by the question $x_j \le x_j^R$ into the left node $t_l$ (probability $P_l$) and the right node $t_r$ (probability $P_r$).]
where $t_p$, $t_l$, $t_r$ are the parent, left and right nodes, $x_j$ is variable $j$, and $x_j^R$ is the best splitting value of variable $x_j$.

The maximum homogeneity of the child nodes is defined by the so-called impurity function $i(t)$. Since the impurity of the parent node $t_p$ is constant for any of the possible splits $x_j \le x_j^R$, $j = 1, \ldots, M$, the maximum homogeneity of the left and right child nodes is equivalent to the maximization of the change of the impurity function $\Delta i(t)$:

$$\Delta i(t) = i(t_p) - E[i(t_c)]$$

where $t_c$ stands for the left and right child nodes of the parent node $t_p$. Writing $P_l$, $P_r$ for the probabilities of the left and right nodes, we get:

$$\Delta i(t) = i(t_p) - P_l \, i(t_l) - P_r \, i(t_r)$$

Therefore, at each node CART solves the following maximization problem:

$$\arg\max_{x_j \le x_j^R,\; j=1,\ldots,M} \left[ i(t_p) - P_l \, i(t_l) - P_r \, i(t_r) \right] \qquad (2.1)$$

Equation (2.1) implies that CART will search through all possible values of all variables in matrix X for the best split question $x_j \le x_j^R$ which maximizes the change of the impurity measure $\Delta i(t)$.

The next important question is how to define the impurity function $i(t)$. In theory there are several impurity functions, but only two of them are widely used in practice: the Gini splitting rule and the Twoing splitting rule.

2.1.1 Gini splitting rule

The Gini splitting rule (or Gini index) is the most broadly used rule. It uses the following impurity function $i(t)$:

$$i(t) = \sum_{k \ne l} p(k \mid t) \, p(l \mid t) \qquad (2.2)$$

where $k, l \in \{1, \ldots, K\}$ are class indices and $p(k \mid t)$ is the conditional probability of class $k$ provided we are in node $t$. Applying the Gini impurity function (2.2) to the maximization problem (2.1), we get the following change of impurity measure $\Delta i(t)$:

$$\Delta i(t) = -\sum_{k=1}^{K} p^2(k \mid t_p) + P_l \sum_{k=1}^{K} p^2(k \mid t_l) + P_r \sum_{k=1}^{K} p^2(k \mid t_r)$$
Therefore, the Gini algorithm solves the following problem:

$$\arg\max_{x_j \le x_j^R,\; j=1,\ldots,M} \left[ -\sum_{k=1}^{K} p^2(k \mid t_p) + P_l \sum_{k=1}^{K} p^2(k \mid t_l) + P_r \sum_{k=1}^{K} p^2(k \mid t_r) \right] \qquad (2.3)$$

The Gini algorithm searches the learning sample for the largest class and isolates it from the rest of the data. Gini works well for noisy data.

2.1.2 Twoing splitting rule

Unlike the Gini rule, Twoing searches for the two classes that together make up more than 50% of the data. The Twoing splitting rule maximizes the following change-of-impurity measure:

$$\Delta i(t) = \frac{P_l P_r}{4} \left[ \sum_{k=1}^{K} \left| p(k \mid t_l) - p(k \mid t_r) \right| \right]^2$$

which implies the following maximization problem:

$$\arg\max_{x_j \le x_j^R,\; j=1,\ldots,M} \frac{P_l P_r}{4} \left[ \sum_{k=1}^{K} \left| p(k \mid t_l) - p(k \mid t_r) \right| \right]^2 \qquad (2.4)$$

Although the Twoing splitting rule allows us to build more balanced trees, this algorithm works more slowly than the Gini rule. For example, if the total number of classes is equal to K, then we will have $2^{K-1}$ possible splits.

Besides the mentioned Gini and Twoing splitting rules, there are several other methods, among the most used being the Entropy rule, the $\chi^2$ rule and the maximum deviation rule. But it has been proved that the final tree is insensitive to the choice of splitting rule (Breiman, Friedman, Olshen and Stone (1984), Classification and Regression Trees, Chapman & Hall, page 95). It is the pruning procedure which is much more important.

We can compare two trees built on the same dataset but using different splitting rules. With the help of the cartdrawpdfclass command in XploRe, one can construct a classification tree in PDF, specifying which splitting rule should be used (setting the third parameter to 0 for the Gini rule or to 1 for Twoing):

cartdrawpdfclass(x, y, 0, 1) - Gini splitting rule
cartdrawpdfclass(x, y, 1, 1) - Twoing splitting rule
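To make the Gini arithmetic concrete, here is a short illustrative sketch in Python (the thesis itself works in XploRe; the helper names are hypothetical, and the impurity is written in the equivalent form i(t) = 1 - Σ_k p²(k|t)):

from collections import Counter

def gini(classes):
    """Gini impurity i(t) = 1 - sum_k p(k|t)^2 of a node."""
    n = len(classes)
    return 1.0 - sum((c / n) ** 2 for c in Counter(classes).values())

def impurity_change(parent, left, right):
    """Delta i(t) = i(t_p) - P_l * i(t_l) - P_r * i(t_r)."""
    p_l = len(left) / len(parent)
    p_r = len(right) / len(parent)
    return gini(parent) - p_l * gini(left) - p_r * gini(right)

# A perfect split of a 50/50 node removes all impurity: Delta i = 0.5
print(impurity_change([1, 1, 1, 0, 0, 0], [1, 1, 1], [0, 0, 0]))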
[Figure 2.2: Maximum classification tree for bankruptcy dataset, constructed using Gini splitting rule. CARTGiniTree1.xpl]
[Figure 2.3: Maximum classification tree for bankruptcy dataset, constructed using Twoing splitting rule. CARTTwoingTree1.xpl]
It can be seen that although there is a small difference between the tree constructed using Gini and the tree constructed via the Twoing rule, the difference appears only at the bottom of the tree, where the variables are less significant in comparison with the top of the tree.

2.2 Regression tree

Regression trees do not have classes. Instead, there is a response vector Y which represents the response values for each observation in the variable matrix X. Since regression trees do not have pre-assigned classes, classification splitting rules like Gini (2.3) or Twoing (2.4) cannot be applied. Splitting in regression trees is made in accordance with a squared residuals minimization algorithm, which implies that the expected sum of variances for the two resulting nodes should be minimized:

$$\arg\min_{x_j \le x_j^R,\; j=1,\ldots,M} \left[ P_l \operatorname{Var}(Y_l) + P_r \operatorname{Var}(Y_r) \right] \qquad (2.5)$$

where $\operatorname{Var}(Y_l)$ and $\operatorname{Var}(Y_r)$ are the variances of the response vectors $Y_l$, $Y_r$ in the corresponding left and right child nodes, and $x_j \le x_j^R$, $j = 1, \ldots, M$, is the optimal splitting question satisfying condition (2.5).

The squared residuals minimization algorithm is identical to the Gini splitting rule. The Gini impurity function (2.2) is simple to interpret through variance notation. If we assign the value 1 to objects of class k and the value 0 to objects of other classes, then the sample variance of these values equals $p(k \mid t)[1 - p(k \mid t)]$. Summing over the K classes, we get the following impurity measure $i(t)$:

$$i(t) = 1 - \sum_{k=1}^{K} p^2(k \mid t)$$

Up to this point the so-called maximum tree was constructed, which means that splitting was made up to the last observations in the learning sample. The maximum tree may turn out to be very big, especially in the case of regression trees, when each response value may result in a separate node. The next chapter is devoted to different pruning methods - the procedure of cutting off insignificant nodes.
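Before turning to pruning, a brief illustration of criterion (2.5): the following Python sketch (hypothetical names, shown only to make the criterion concrete; the thesis uses XploRe's cartsplitregr for the real work) scans one variable for the threshold minimizing the weighted child variances:

import numpy as np

def best_split(x, y):
    # Find the threshold on x minimizing P_l * Var(Y_l) + P_r * Var(Y_r)
    best_value, best_score = None, np.inf
    for threshold in np.unique(x)[:-1]:  # candidate split points
        left, right = y[x <= threshold], y[x > threshold]
        p_l, p_r = len(left) / len(y), len(right) / len(y)
        score = p_l * left.var() + p_r * right.var()
        if score < best_score:
            best_value, best_score = threshold, score
    return best_value, best_score

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([5.0, 5.5, 5.2, 20.0, 21.0, 19.5])
print(best_split(x, y))  # the best threshold lies between the two clusters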
3 Choice of the Right Size Tree

Maximum trees may turn out to be of very high complexity and consist of hundreds of levels. Therefore, they have to be optimized before being used for classification of new data. Tree optimization implies choosing the right size of tree - cutting off insignificant nodes and even subtrees. Two pruning algorithms can be used in practice: optimization by the number of points in each node and cross-validation.

3.1 Optimization by minimum number of points

In this case we say that splitting is stopped when the number of observations in the node is less than a predefined required minimum $N_{min}$. Obviously, the bigger the $N_{min}$ parameter, the smaller the grown tree. On the one hand this approach works very fast, is easy to use and has consistent results; on the other hand, it requires the calibration of a new parameter $N_{min}$. In practice $N_{min}$ is usually set to 10% of the learning sample size.

While defining the size of the tree, there is a trade-off between the measure of tree impurity and the complexity of the tree, which is defined by the total number of terminal nodes $|\tilde T|$. Using the command cartsplitclass, one can build the tree structure in XploRe and then compute different parameters of the tree, namely:

cartimptree(tree) - calculates the impurity measure of the tree as the sum of classification errors for all terminal nodes.
cartleafnum(tree) - returns the number of terminal nodes in the tree.
[Figure 3.1: Classification tree for bankruptcy dataset, parameter N_min equal to 15; number of terminal nodes: 9. CARTPruning1.xpl]

[Figure 3.2: Classification tree for bankruptcy dataset, parameter N_min equal to 30; number of terminal nodes: 6. CARTPruning2.xpl]
We can see that with the increase of the tree parameter $N_{min}$, on the one hand, the impurity increases (compare the trees for $N_{min} = 15$ and $N_{min} = 30$). On the other hand, the complexity of the tree decreases: for $N_{min} = 15$ the number of terminal nodes $|\tilde T|$ is equal to 9, and for $N_{min} = 30$, $|\tilde T| = 6$. For the maximum tree, the impurity measure will be minimal and equal to 0, but the number of terminal nodes $|\tilde T|$ will be maximal. To find the optimal tree size, one can use the cross-validation procedure.

3.2 Cross-validation

The procedure of cross-validation is based on the optimal proportion between the complexity of the tree and the misclassification error. With the increase in the size of the tree, the misclassification error decreases, and in the case of the maximum tree the misclassification error is equal to 0. On the other hand, complex decision trees perform poorly on independent data. The performance of a decision tree on independent data is called the true predictive power of the tree. Therefore, the primary task is to find the optimal proportion between tree complexity and misclassification error. This task is achieved through the cost-complexity function:

$$R_\alpha(T) = R(T) + \alpha |\tilde T| \to \min_T \qquad (3.1)$$

where $R(T)$ is the misclassification error of the tree $T$ and $\alpha |\tilde T|$ is the complexity measure, which depends on $|\tilde T|$, the total number of terminal nodes in the tree. The parameter $\alpha$ is found through a sequence of in-sample tests in which a part of the learning sample is used to build the tree, and the other part of the data is taken as a testing sample. The process is repeated several times for randomly selected learning and testing samples. Although cross-validation does not require adjustment of any parameters, this process is time consuming since a sequence of trees has to be constructed. Because the testing and learning samples are chosen randomly, the final tree may differ from time to time.
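A tiny numerical illustration of the cost-complexity trade-off in (3.1), using made-up candidate trees rather than thesis results:

def cost_complexity(error, n_terminal_nodes, alpha):
    # R_alpha(T) = R(T) + alpha * |T~|
    return error + alpha * n_terminal_nodes

# Hypothetical candidates: (misclassification error R(T), terminal nodes |T~|)
candidates = [(0.00, 40), (0.05, 9), (0.12, 6), (0.25, 2)]
alpha = 0.01
best = min(candidates, key=lambda t: cost_complexity(t[0], t[1], alpha))
print(best)  # the 9-leaf tree wins: 0.05 + 0.01 * 9 = 0.14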
4 Classification of New Data

Once the classification or regression tree is constructed, it can be used for the classification of new data. The output of this stage is a class or response value assigned to each of the new observations. Through the set of questions in the tree, each new observation ends up in one of the terminal nodes of the tree. A new observation is assigned the dominating class/response value of the terminal node it belongs to. The dominating class is the class that has the largest number of observations in the current node. For example, a node with 5 observations of class 1, two observations of class 2 and 0 observations of class 3 will have class 1 as the dominating class.
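The dominating-class rule itself is only a couple of lines in any language; an illustrative Python version:

from collections import Counter

def dominating_class(node_classes):
    # Return the class with the largest number of observations in the node
    return Counter(node_classes).most_common(1)[0][0]

print(dominating_class([1, 1, 1, 1, 1, 2, 2]))  # -> 1, as in the example above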
5 Advantages and Disadvantages of CART

This chapter answers an important question: why should we use CART? Before applying CART to the real sector, it is important to compare CART with other statistical classification methods and identify its advantages and possible pitfalls.

5.1 CART as a classification method

CART is nonparametric. Therefore, this method does not require specification of any functional form.

CART does not require variables to be selected in advance. The CART algorithm will itself identify the most significant variables and eliminate non-significant ones. To test this property, one can include an insignificant (random) variable and compare the new tree with the tree built on the initial dataset. Both trees should be grown using the same parameters (splitting rule and $N_{min}$ parameter). We can see that the final tree (Figure 5.1), built on the new dataset of three variables, is identical to the tree in Figure 3.2, built on the two-dimensional dataset.
[Figure 5.1: Classification tree for bankruptcy dataset after including a third random variable. CARTRandom.xpl]

CART results are invariant to monotone transformations of its independent variables. Changing one or several variables to their logarithm or square root value will not change the structure of the tree; only the splitting values (but not the variables) in the questions will be different. I have replaced the values of the first variable with the corresponding log-transformed values. It can be seen that the structure of the tree did not change, but the splitting values in the questions with the first variable did.
[Figure 5.2: Classification tree for bankruptcy dataset after monotone transformation of the first variable. CARTLog.xpl]

CART can easily handle outliers. Outliers can negatively affect the results of some statistical models, like Principal Component Analysis (PCA) and linear regression, but the splitting algorithm of CART will easily handle noisy data: CART will isolate the outliers in a separate node. This property is very important, because financial data very often have outliers due to financial crises or defaults. In order to test CART's ability to handle outliers, I have included, instead of the usual $x_1$ values of about [-0.5; 0.5], two observations with $x_1$ equal to 25 and 26. Let us see how the tree structure changed:
[Figure 5.3: Classification tree for bankruptcy dataset after two outliers. CARTOutlier.xpl]

We can see that both outliers were isolated in a separate node.

5.2 CART in financial sector

CART has no distributional assumptions and is computationally fast. There are plenty of models that cannot be applied in real life due to their complexity or strict assumptions. Table 5.1 shows the computational efficiency of the CART module in XploRe: one can see that a dataset with 50 variables and observations is processed in less than 3 minutes.
[Table 5.1: Computational efficiency of the CART module in XploRe (in seconds), by number of observations and number of variables.]

CART is flexible and has the ability to adjust over time. The main idea is that the learning sample is consistently replenished with new observations, which means that the CART tree has an important ability to adjust to the current situation in the market. Many banks are using the Basel II credit scoring system to classify different companies into risk levels, which uses a group of coefficients and indicators. This approach, on the other hand, requires continuous correction of all indicators and coefficients in order to adjust to market changes.
5.3 Disadvantages of CART

As any model, the method of classification and regression trees has its own weaknesses.

CART may produce unstable decision trees. An insignificant modification of the learning sample, such as eliminating several observations, could lead to radical changes in the decision tree: an increase or decrease of tree complexity, or changes in splitting variables and values. Figure 5.4 shows how the decision tree constructed for 90% of the bankruptcy data differs from the initial classification tree.

[Figure 5.4: Classification tree constructed on 90% of the bankruptcy dataset. CARTStab.xpl]

One can notice that tree complexity decreased from 5 levels to 3 levels. In the new classification tree only $x_1$ participates in the splitting questions; therefore $x_2$ is no longer considered significant. Obviously, the classification results will change with the use of the new classification tree. Therefore, the instability of trees can negatively influence financial results.

CART splits only by one variable. In other words, all splits are perpendicular to the axes. Let us consider two different examples of data structure. In Figure 5.5 there are 3 classes (red, grey and green). CART will easily handle the splits, as can be seen in the right panel. If the data have a more complex structure, as for example in Figure 5.6, then CART may not catch the correct structure of the data. From example 5.6 it can be seen that CART cannot correctly identify a diagonal boundary such as $x_1 - x_2 \le 0$, because only one variable can participate in a split question. In order to capture the data structure, the splitting algorithm will generate many splits (nodes) at the border of the $x_1 - x_2 = 0$ line.
[Figure 5.5: Dataset of three classes with linear structure.]

In the end CART will grow a huge tree where almost each observation at the border will be in a separate node. But despite the big tree, classification will be done correctly: all observations that belong to the red class will be classified as red, green observations as green, and so on.

[Figure 5.6: Dataset of three classes with non-linear structure.]
6 Examples

6.1 Simulated example

Let us simulate a simple example with three classes:

proc (y) = simulate (seed, n)
  randomize (seed)
  // generating data with colour layout
  xdat = uniform (n,2)
  index = (xdat[,2] <= 0.5) + (xdat[,2] > 0.5).*(xdat[,1] <= 0.5)*2
  color = 2.*(index == 1) + 1.*(index == 0) + 4.*(index == 2)
  layout = 4.*(index == 1) + 2.*(index == 0) + 3.*(index == 2)
  // return the list
  y = list (xdat, index, color, layout)
endp

library ("xclust")
d = createdisplay (1,1)
data = simulate (1, 500)
x = data.xdat
setmaskp (x, data.color, data.layout)
show (d, 1, 1, x)

The generated data will have the following structure:
[Figure 6.1: Simulated uniform data with 3 classes. CARTSim1.xpl]

Each colour indicates a class (red - class 2, blue - class 0, green - class 1). Therefore we have 500 observations with three classes. Visually it can be seen that the data can be perfectly split at y <= 0.5 and then at x <= 0.5. Let's see how the classification tree identifies the data structure:

library ("xclust")
data = simulate (1, 500)
tr = cartsplitclass (x, data.index, 0, 1)
cartdisptree (tr)
[Figure 6.2: Classification tree for generated data of 500 observations. CARTSimTree1.xpl]

But in real life, the data usually are not perfectly split. We can try to simulate a real-life example with overlapping between the different classes:

proc (y) = simulate (seed, n)
  randomize (seed)
  // generating data with color layout
  xdat = uniform (n,2)
  index = (xdat[,2] <= 0.5) + (xdat[,2] > 0.5).*(xdat[,1] <= 0.5)*2
  color = 2.*(index == 1) + 1.*(index == 0) + 4.*(index == 2)
  layout = 4.*(index == 1) + 2.*(index == 0) + 3.*(index == 2)
  // generating overlapping
  overlapping =
  xdat[,2] = xdat[,2] + (index == 1)*overlapping
  xdat[,2] = xdat[,2] - (index == 2)*overlapping - (index == 0)*overlapping
  xdat[,1] = xdat[,1] + (index == 2)*overlapping
  xdat[,1] = xdat[,1] - (index == 0)*overlapping
  // return the list
  y = list (xdat, index, color, layout)
endp

library ("xclust")
d = createdisplay (1,1)
data = simulate (1, 500)
x = data.xdat
setmaskp (x, data.color, data.layout)
show (d, 1, 1, x)
[Figure 6.3: Simulated uniform data with three classes and overlapping. CARTSim2.xpl]

This time the maximum tree will consist of many more levels, since the CART algorithm tries to split all observations of the learning sample:

data = simulate (1, 500)
x = data.xdat
tr = cartsplitclass (x, data.index, 0, 1)
cartdisptree (tr)

In order to see the number of observations for each split, run the following code:

data = simulate (1, 500)
x = data.xdat
tr = cartsplitclass (x, data.index, 0, 1)
cartdisptree (tr, "NumberOfPoints")
[Figure 6.4: Maximum classification tree for simulated two-dimensional data with overlapping. CARTSimTree2.xpl]

[Figure 6.5: Maximum tree for simulated two-dimensional data with overlapping and NumberOfPoints option. CARTSimTree3.xpl]
It can be seen in Figures 6.4 and 6.5 that CART first makes the more important splits - on x2 and then on x1 - and afterwards tries to capture the overlapping structure which we simulated: CART isolates the observations of the same class in separate nodes. For building and analysing a big tree, the cartdrawpdfclass command can be used. The following code will generate a TEX file with the classification tree:

data = simulate (1, 500)
x = data.xdat
cartdrawpdfclass (x, data.index, 0, 1, "c:\ClassificationTree.tex")

[Figure 6.6: PDF tree generated via the cartdrawpdfclass command. CARTSimTree4.xpl]
6.2 Boston housing example

Boston Housing is a classical dataset which can easily be used for regression trees. On the one hand, we have 13 independent variables; on the other hand, there is a response variable - the value of the house (variable number 14). The Boston housing dataset consists of 506 observations and includes the following variables:

1. crime rate
2. percent of land zoned for large lots
3. percent of non-retail business
4. Charles river indicator, 1 if on Charles river, 0 otherwise
5. nitrogen oxide concentration
6. average number of rooms
7. percent built before 1940
8. weighted distance to employment centers
9. accessibility to radial highways
10. tax rate
11. pupil-teacher ratio
12. percent black
13. percent lower status
14. median value of owner-occupied homes in thousands of dollars

Let us choose a small sample of 100 observations and then build the regression tree with a minimum of 10 observations per terminal node:

library ("xclust")
boston = read ("bostonh")
data = boston [1:100,]
Var = data [,1:13]
Class = data [,14]
tr = cartsplitregr (Var, Class, 10)
cartdisptree (tr)
NumLeaf = cartleafnum (tr)
NumLeaf
[Figure 6.7: Regression tree for 100 observations of the Boston housing dataset, minimum number of observations N_min is set to 10. CARTBoston1.xpl]

It is also important to notice that the more significant variables appear at the upper levels of the tree and the less significant ones at the bottom. Therefore, we can state that for the Boston housing dataset, the average number of rooms (x6) is the most significant variable, since it is located in the root node of the tree. Then come variable x13 (percent lower status) and x8 (weighted distance to employment centers).

Taking into account the fact that in regression trees we do not have classes but response values, the maximum tree will contain as many terminal nodes as there are observations in the dataset, because each observation has a different response value. In contrast, the classification tree approach uses classes instead of response values, so splitting can be automatically finished when the node contains observations of one class. If we count the number of terminal nodes for the maximum tree, we find out that there are 100 terminal nodes in the regression tree:

library ("xclust")
boston = read ("bostonh")
data = boston [1:100,]
Var = data [,1:13]
Class = data [,14]
tr = cartsplitregr (Var, Class, 1)
NumLeaf = cartleafnum (tr)
NumLeaf
As we found out, the optimization (pruning) of the tree is even more important for regression trees, since the maximum tree is too big. There are several methods for optimal tree pruning. The most used method, cross-validation, uses the cost-complexity function (3.1) to determine the optimal complexity of the tree; tree complexity is defined by the number of terminal nodes $|\tilde T|$. The other method is adjusting the parameter of the minimum number of observations: by increasing the parameter, the size of the tree will decrease and vice versa. Let us build the regression tree with at least 30 observations in each of the terminal nodes:

library ("xclust")
boston = read ("bostonh")
data = boston [1:100,]
Var = data [,1:13]
Class = data [,14]
tr = cartsplitregr (Var, Class, 30)
cartdisptree (tr)

[Figure 6.8: Regression tree for 100 observations of the Boston housing dataset, N_min parameter is set to 30. CARTBoston2.xpl]

From this tree it can be seen that only two variables are found significant: x6 (number of rooms) and x13 (percentage lower status). The minimum-number-of-observations parameter has to be adjusted by an iterative testing procedure in which a part of the dataset is used as a learning sample and the rest of the data as an out-of-sample testing sample. By this procedure one can determine which size of tree performs best. We will illustrate this on the bankruptcy dataset.
6.3 Bankruptcy data example

The bankruptcy dataset is a two-dimensional dataset which includes the following variables:

1. Net income to total assets ratio
2. Total liabilities to total assets ratio

For each of the 84 observations there is a third column - the bankruptcy class: class 1 in case the company went bankrupt, otherwise class -1. For this dataset we shall use classification procedures, since we have only classes, no response values. Let us build the classification tree with the minimum number of observations N_min equal to 30:

library ("xclust")
data = read ("bankruptcy.dat")
x = data [,1:2]
y = data [,3]
cartdrawpdfclass (x, y, 0, 30, "C:\ClassificationTree.tex")

[Figure 6.9: Classification tree of bankruptcy dataset, minimum number of observations N_min is set to 30. CARTBan1.xpl]
If we change the parameter of the minimum number of observations N_min to 10, then we will get a classification tree of higher complexity:

library ("xclust")
data = read ("bankruptcy.dat")
x = data [,1:2]
y = data [,3]
cartdrawpdfclass (x, y, 0, 10, "C:\ClassificationTree.tex")

[Figure 6.10: Classification tree of bankruptcy dataset, minimum number of observations N_min is set to 10. CARTBan2.xpl]

The question is which tree to choose. The answer can be found by an iterative procedure of calculating each tree's performance. Let us take 83 observations as a learning sample and the 1 remaining observation as a testing sample. We will use the 83 observations to construct the tree and then, using the constructed tree, classify the out-of-sample testing observation. Since we actually know the class of the testing sample, we can determine the so-called classification ratio - the ratio of correct classifications to the total number of observations in the testing sample. We can choose the learning and testing samples in 84 different ways; therefore, we can loop the procedure over the possible number of iterations, which in our case is 84:
38 6 Examples 1 proc () = bancrupcy () 2 data = read (" bankrupcy. dat ") 3 x = data [,1:2]; 4 y = data [,3] 5 NumberOfLines = rows ( data ) 6 index = (1: NumberOfLines ) 7 CorrectAnswers = 0 8 MinSize = 1 9 i = 1 10 while ( i <= NumberOfLines ) 11 newx = paf (x, index <>i) 12 newy = paf (y, index <>i) 13 testingdata = data [i,1:14]; 14 realclass = data [i,15];] 15 tr = cartsplitclass ( newx, newy, 0, 1); 16 predictedclass = cartpredict ( tr, testingdata ) 17 if ( predictedclass == realclass ) 18 CorrectAnswers = CorrectAnswers endif 20 i = i endo 22 CorrectAnswers / NumberOfLines 23 endp 24 bancrupcy () The quantlet will return 73.81% value of classification ratio for maximum tree (M insize = 1) and gini rule (SplitRule = 0). We can run this procedure for different parameters and build the distribution of classification ratio over tree size. 37
[Figure 6.11: Distribution of classification ratio by N_min parameter.]

In this case it turned out that the maximum tree gives the best classification ratio, at a rate of 73.81%. One can see that the dependence of classification performance on tree size is not monotone: at first it decreases, but beginning with N_min = 15 the performance improves. This can be explained by the fact that simpler trees may reflect the actual data structure without capturing small dependencies which may in fact be misleading. Let us depict the tree with N_min equal to 40:

[Figure 6.12: Classification tree of bankruptcy dataset, minimum number of observations N_min is set to 40. CARTBan3.xpl]
library ("xclust")
data = read ("bankruptcy.dat")
x = data [,1:2]
y = data [,3]
tr = cartsplitclass (x, y, 0, 40)
impurity = carttrimp (tr)
impurity

Analyzing this very simple tree, we can compute the overall impurity of the tree, calculated as the sum of misclassification ratios for all terminal nodes. Impurity is an inverse value of the successful classification ratio, i.e. the higher the impurity measure, the less classification power the tree has. In this example we can see that despite the tree's simplicity the impurity is quite low, and this is probably why this tree performs well on independent data.
SAS Software to Fit the Generalized Linear Model
SAS Software to Fit the Generalized Linear Model Gordon Johnston, SAS Institute Inc., Cary, NC Abstract In recent years, the class of generalized linear models has gained popularity as a statistical modeling
Univariate Regression
Univariate Regression Correlation and Regression The regression line summarizes the linear relationship between 2 variables Correlation coefficient, r, measures strength of relationship: the closer r
CHAPTER 2 Estimating Probabilities
CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a
Acknowledgments. Data Mining with Regression. Data Mining Context. Overview. Colleagues
Data Mining with Regression Teaching an old dog some new tricks Acknowledgments Colleagues Dean Foster in Statistics Lyle Ungar in Computer Science Bob Stine Department of Statistics The School of
Data Mining and Data Warehousing. Henryk Maciejewski. Data Mining Predictive modelling: regression
Data Mining and Data Warehousing Henryk Maciejewski Data Mining Predictive modelling: regression Algorithms for Predictive Modelling Contents Regression Classification Auxiliary topics: Estimation of prediction
Machine Learning and Data Mining. Regression Problem. (adapted from) Prof. Alexander Ihler
Machine Learning and Data Mining Regression Problem (adapted from) Prof. Alexander Ihler Overview Regression Problem Definition and define parameters ϴ. Prediction using ϴ as parameters Measure the error,
MAXIMIZING RETURN ON DIRECT MARKETING CAMPAIGNS
MAXIMIZING RETURN ON DIRET MARKETING AMPAIGNS IN OMMERIAL BANKING S 229 Project: Final Report Oleksandra Onosova INTRODUTION Recent innovations in cloud computing and unified communications have made a
Additional sources Compilation of sources:
Mgt 540 Research Methods Data Analysis 1 Additional sources Compilation of
MHI3000 Big Data Analytics for Health Care Final Project Report
MHI3000 Big Data Analytics for Health Care Final Project Report Zhongtian Fred Qiu (1002274530) 1. Data pre-processing The data given
DATA INTERPRETATION AND STATISTICS
PholC60 September 001 DATA INTERPRETATION AND STATISTICS Books A easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about 14. DESCRIPTIVE
Stepwise Regression. Chapter 311. Introduction. Variable Selection Procedures. Forward (Step-Up) Selection
Chapter 311 Introduction Often, theory and experience give only general direction as to which of a pool of candidate variables (including transformed variables) should be included in the regression model.
Generalized Linear Models
Generalized Linear Models We have previously worked with regression models where the response variable is quantitative and normally distributed. Now we turn our attention to two types of models where the
Data Mining and Visualization
Data Mining and Visualization Jeremy Walton NAG Ltd, Oxford Overview Data mining components Functionality Example application Quality control Visualization Use of 3D Example application Market research
Normality Testing in Excel
Normality Testing in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com
Model Combination. 24 Novembre 2009
Model Combination 24 Novembre 2009 Datamining 1 2009-2010 Plan 1 Principles of model combination 2 Resampling methods Bagging Random Forests Boosting 3 Hybrid methods Stacking Generic algorithm for mulistrategy
|
http://docplayer.net/15612609-Classification-and-regression-trees-cart-theory-and-applications.html
|
CC-MAIN-2017-51
|
refinedweb
| 8,761
| 50.87
|
Hello, Laravel? Communicating with PHP through SMS!
This article was peer reviewed by Wern Ancheta. Thanks to all of SitePoint’s peer reviewers for making SitePoint content the best it can be!
In this article, we will modify our Laravel-powered phone-capable weather forecast app so that it is accessible via SMS (text message) in addition to the voice telephone system. It is recommended you read the previous post if you haven’t done so yet – it’s a 10 minute read for an excellent outcome.
Note: If you’re confused by the development environment we’re using, it’s Homestead Improved and you can learn more about it here, or go in detail by buying our book about PHP environments.
Adding Routes
To allow for SMS communication, we need some more routes. Open up the routes/web.php file and append the following code to it:
    Route::group(['prefix' => 'sms', 'middleware' => 'twilio'], function () {
        Route::post('weather', 'SmsController@showWeather')->name('weather');
    });
The prefix for the route is sms, so routes will have a path like /sms/weather, as in the example above. This is the only route we need for SMS, as Twilio will call the same route over and over again. Twilio will access it via HTTP POST. We could also do this without the prefix, but it's more flexible this way if we decide to add more functionality to the SMS side later.
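As a refresher, the 'twilio' middleware alias used above was set up in the previous article. The sketch below is only a plausible shape for it, built on the twilio-php SDK's RequestValidator; the class body and the services.twilio.auth_token config key are assumptions, not code from this app:

    <?php

    namespace App\Http\Middleware;

    use Closure;
    use Twilio\Security\RequestValidator;

    class TwilioMiddleware
    {
        public function handle($request, Closure $next)
        {
            // Twilio signs every webhook request; reject anything that doesn't verify
            $validator = new RequestValidator(config('services.twilio.auth_token'));

            $valid = $validator->validate(
                $request->header('X-Twilio-Signature'),
                $request->fullUrl(),
                $request->toArray()
            );

            return $valid ? $next($request) : response('Invalid signature.', 403);
        }
    }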
Service Layer
Next, we’ll modify the service we wrote previously. Open up the
app/Services/WeatherService.php file and remove the current
getWeather method, then replace it with the one below:
    public function getWeather($zip, $dayName, $forSms = false)
    {
        $point = $this->getPoint($zip);
        $tz = $this->getTimeZone($point);
        $forecast = $this->retrieveNwsData($zip);
        $ts = $this->getTimestamp($dayName, $zip);
        $tzObj = new \DateTimeZone($tz->timezoneId);
        $tsObj = new \DateTime(null, $tzObj);
        $tsObj->setTimestamp($ts);

        foreach ($forecast->properties->periods as $k => $period) {
            $startTs = strtotime($period->startTime);
            $endTs = strtotime($period->endTime);
            if ($ts > $startTs and $ts < $endTs) {
                $day = $period;
                break;
            }
        }

        $weather = $day->name;
        $weather .= ' the ' . $tsObj->format('jS') . ': ';

        $response = new Twiml();

        if ($forSms) {
            $remainingChars = 140 - strlen($weather);
            if (strlen($day->detailedForecast) > $remainingChars) {
                $weather .= $day->shortForecast;
                $weather .= '. High of ' . $day->temperature . '. ';
                $weather .= $day->windDirection;
                $weather .= ' winds of ' . $day->windSpeed;
            } else {
                $weather .= $day->detailedForecast;
            }
            $response->message($weather);
        } else {
            $weather .= $day->detailedForecast;
            $gather = $response->gather([
                'numDigits' => 1,
                'action' => route('day-weather', [], false)
            ]);
            $menuText = ' ';
            $menuText .= "Press 1 for Sunday, 2 for Monday, 3 for Tuesday, ";
            $menuText .= "4 for Wednesday, 5 for Thursday, 6 for Friday, ";
            $menuText .= "7 for Saturday. Press 8 for the credits. ";
            $menuText .= "Press 9 to enter in a new zipcode. ";
            $menuText .= "Press 0 to hang up.";
            $gather->say($weather . $menuText);
        }

        return $response;
    }
This function is very similar to the old one. The only difference is that it takes into consideration that the weather request might be coming from a telephone device via SMS, so it makes sure that the weather forecast isn't too long and tries to limit it to less than 140 characters. The response for SMS is still TwiML, just formatted for SMS.
Controller
Create a file called SmsController.php in the app/Http/Controllers folder and put the following code into it:
    <?php

    namespace App\Http\Controllers;

    use App\Services\WeatherService;
    use Illuminate\Http\Request;
    use Twilio\Twiml;

    class SmsController extends Controller
    {
        protected $weather;

        public function __construct(WeatherService $weatherService)
        {
            $this->weather = $weatherService;
        }

        public function showWeather(Request $request)
        {
            $parts = $this->parseBody($request);

            switch ($parts['command']) {
                case 'zipcode':
                    $zip = $parts['data'];
                    $request->session()->put('zipcode', $zip);
                    $response = $this->weather->getWeather($zip, 'Today', true);
                    break;
                case 'day':
                    $zip = $request->session()->get('zipcode');
                    $response = $this->weather->getWeather($zip, $parts['data'], true);
                    break;
                case 'credits':
                    $response = new Twiml();
                    $response->message($this->weather->getCredits());
                    break;
                default:
                    $response = new Twiml();
                    $text = 'Type in a zipcode to get the current weather. ';
                    $text .= 'After that, you can type the day of the week to get that weather.';
                    $response->message($text);
                    break;
            }

            return $response;
        }

        private function parseBody($request)
        {
            $ret = ['command' => ''];
            $body = trim($request->input('Body'));

            if (is_numeric($body) and strlen($body) == 5) {
                $ret['command'] = 'zipcode';
                $ret['data'] = $body;
            }

            if (in_array(ucfirst(strtolower($body)), $this->weather->daysOfWeek) !== false) {
                $ret['command'] = 'day';
                $ret['data'] = ucfirst(strtolower($body));
            }

            if (strtolower($body) == 'credits') {
                $ret['command'] = 'credits';
            }

            return $ret;
        }
    }
When an SMS message comes in from a user, Twilio will always hit the same route. This app does not have any redirects, which is why we only defined one route, meaning all the requests go through the showWeather method. There are different things a user can text the app, so we parse the request body to figure out what they want using the parseBody method.
The parseBody method first creates a default return value. Then, it strips whitespace. This is so that if a user inputs "90210 " (note the space), the program will still work as intended. Once the whitespace has been stripped, the body of the text is evaluated against three if statements. The first if statement checks to see if the user entered a zipcode. The second if statement checks to see if the user entered a day of the week. It normalizes the input by making sure that only the first letter is capitalized, and compares it to the contents of the $daysOfWeek array property in the WeatherService class to determine if a day of the week was mentioned. The last if statement checks if a user requested the credits. If none of the three if statements evaluate to true, then the program cannot figure out what the user wants and the default value is returned. This default value will make the showWeather method send the user a help message that explains how to use the app.
The parseBody method returns an array with two keys in it. The command key is what the user's intention was determined to be. The data key is the data that goes with the command. Inside the showWeather method, after parseBody is called, a switch statement is used to look at the value of the command array key.
If the parser determines a user texted a zipcode, then we store the zipcode in a session and return today’s forecast for that zipcode. A sample TwiML response looks like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Message>This Afternoon the 31st: Sunny, with a high near 72. South southwest wind around 8 mph. </Message>
    </Response>
If it is determined a day of the week was entered, then that day’s forecast is returned. A sample TwiML response looks like this:
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Message>Monday the 3rd: Sunny, with a high near 70. </Message>
    </Response>
If the parser determines the credits were asked for, then the app returns a TwiML response with the credits:
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Message>Weather data provided by the National Weather Service. Zipcode data provided by GeoNames. </Message>
    </Response>
If the parser cannot determine the user’s intent, then a help message is returned with this TwiML:
    <?xml version="1.0" encoding="UTF-8"?>
    <Response>
        <Message>Type in a zipcode to get the current weather. After that, you can type the day of the week to get that weather. </Message>
    </Response>
Twilio
Log in to your Twilio account and navigate to the settings for your phone number. You can see your number by going to this page. In the SMS section, put in a URL of the form http://NGROK_HOST/sms/weather (matching the route we defined above), where NGROK_HOST is the hostname in the URL you noted from the Ngrok program.
Using the App
Open up the text messaging app on your phone and send a zipcode like 92010 to your Twilio phone number. In a couple of seconds, you should get a response with today's forecast.
Next, you can send a day of the week to the number and it will respond with that day’s forecast.
You can also send the word credits and it will return the credits.
If you enter in a command the weather app does not understand, it returns some help text.
Conclusion
Over the course of two articles, we have seen how to build an application that can interact with users via the voice telephone system, using voice menus, and via SMS. This was implemented using Laravel for the application backend and Twilio for the telephone/SMS integration. By writing a little more code, we have seen that it is possible to extend the voice app so the same functionality is exposed to users via SMS.
You can find the example code for this article series on Github.
There are lots of possibilities for apps that you can implement with Twilio and PHP; this is just a little glimpse into what can be done. Check out the documentation here for some inspiration.
Cross-platform templates
One of the core concepts of Piranha CMS is to be as platform independent as possible. This means that I will not provide any Visual Studio templates, as these are a Windows-only solution, and will instead focus on the .NET CLI tools.
Installing new templates
Templates in the .NET CLI are NuGet packages that you install on your computer. This is done with the simple command:
> dotnet new -i Piranha.Templates
After the template has been successfully installed you can create a new project by standing in an empty folder and typing:
> dotnet new piranha
By default, the ProjectFile and namespace will be named after the folder you're located in. Pretty easy, right?
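If you don't want to rely on the folder name, the stock dotnet new options apply here too. For example (with a hypothetical project name):

> dotnet new piranha -n MyNewSite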
The Basic Site template
Besides a classic empty template that just gives you the right references and code to wire up the DI, I will also provide a Basic Site template. This will be created with Bootstrap 4.0 and provide a couple of different Page Types.
The styling will be based on .scss and is split up into base.scss and theme.scss so that you can quickly get rid of the default look to start crafting your own.
Start page
A classic startpage with a heading, collection of teasers and a main HTML body.
Blog archive
A nicely structured blog archive with a heading and post listing.
Post detail
A post detail view with an optional primary image.
Basic content page
Just a standard content page with an HTML body. Nothing more, nothing less.
Feedback
As the 4.2 version isn't released yet, I'd love to hear input on what you'd like to see in a project template. If you have certain types of pages that you almost always add for every project, I'd be more than happy to include them!
Contributing
If you want to take a look at the current source code for the project template, or maybe contribute to it, you can find it in the repo piranha.core.basicweb at GitHub.
Amazon EventBridge
Dynatrace ingests metrics for multiple preselected namespaces, including Amazon EventBridge. You can view graphs per service instance, with a set of dimensions, and create custom graphs that you can pin to your dashboards.
How Dynatrace displays your service metrics
Dashboard
Note: In Dynatrace, the custom device group is the whole Amazon EventBridge service. Each custom device is a custom event bridge in each region, with the metrics for each rule name available in the Further details section of the custom device overview page. The default event bridge is set to service-wide metrics, so they can be viewed in the Further details section of the custom device group overview.
Enable monitoring
To enable monitoring for Amazon EventBridge, you first need to set up Dynatrace integration with Amazon Web Services.
DeadLetterInvocations: The number of times a rule’s target is not invoked in response to an event.
FailedInvocations: The number of invocations that failed permanently.
Invocations: The number of times a target is invoked for a rule in response to an event.
MatchedEvents: The number of events that matched with any rule.
ThrottledRules: The number of triggered rules that are being throttled.
TriggeredRules: The number of triggered rules that matched with any event.
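If you want to sanity-check any of these values outside Dynatrace, they originate in the AWS/Events CloudWatch namespace. A hypothetical AWS CLI query (the rule name and time window are placeholders) might look like:

    aws cloudwatch get-metric-statistics \
      --namespace AWS/Events \
      --metric-name TriggeredRules \
      --dimensions Name=RuleName,Value=my-rule \
      --statistics Sum \
      --period 300 \
      --start-time 2020-08-01T00:00:00Z \
      --end-time 2020-08-01T01:00:00Z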
Limitations
Amazon EventBridge sends Invocations metrics to CloudWatch only if it has a non-zero value. For more information, see the AWS documentation.
Many ways to skin a conduit
April 17, 2012
Michael Snoyman
There's more than one way to skin a cat, and certainly more than one way to write code. The various options can sometimes be confusing. And in the case of the conduit library, there are also some routes that you shouldn't take. You'll see what I mean through the examples.
For the most part, using existing Sources, Sinks, and Conduits is straight-forward. The problem comes from writing them in the first place. Let's take a simple example: we want a Source that will enumerate the Ints 1 to 1000. For testing purposes, we'll connect it to a Sink that sums up all of its input. I came up with six different ways to write the Source, though two of those are using functions I haven't yet released.
    import Criterion.Main
    import Data.Conduit
    import qualified Data.Conduit.List as CL
    import qualified Data.List
    import Data.Functor.Identity (runIdentity)

    sourceList, unfold, enumft, yielder, raw, state
        :: Monad m
        => Int -- ^ stop
        -> Source m Int

    sourceList stop = CL.sourceList [1..stop]

    unfold stop =
        CL.unfold f 1
      where
        f i
            | i > stop = Nothing
            | otherwise = Just (i, i + 1)

    enumft stop = CL.enumFromTo 1 stop

    yielder stop =
        go 1
      where
        go i
            | i > stop = return ()
            | otherwise = do
                yield i
                go $ i + 1

    raw stop =
        go 1
      where
        go i
            | i > stop = Done Nothing ()
            | otherwise = HaveOutput (go $ i + 1) (return ()) i

    state stop =
        sourceState 1 pull
      where
        pull i
            | i > stop = return StateClosed
            | otherwise = return $ StateOpen (i + 1) i

    main :: IO ()
    main = do
        mapM_ test sources
        defaultMain $ map bench' sources
      where
        sink :: Monad m => Sink Int m Int
        sink = CL.fold (+) 0

        bench' (name, source) =
            bench name $ whnf (\i -> runIdentity $ source i $$ sink) 1000

        sources =
            [ ("sourceList", sourceList)
            , ("unfold", unfold)
            , ("enumFromTo", enumft)
            , ("yield", yielder)
            , ("raw", raw)
            , ("sourceState", state)
            ]

        test (name, source) = do
            let res = runIdentity $ source 1000 $$ sink
            putStrLn $ name ++ ": " ++ show res
sourceList is probably the approach most of us (myself included) would actually use in real life. It lets us take advantage of all of the list-processing functions and special syntax that Haskell already provides.
unfold and enumFromTo are both new functions for 0.4.2 (in fact, I wrote them for the purpose of this comparison). They correspond very closely to their Data.List and Prelude counterparts.
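As a quick sanity check, both should produce identical streams. A hypothetical GHCi session (untested, using CL.consume to collect the output):

    >>> runIdentity $ CL.enumFromTo 1 5 $$ CL.consume
    [1,2,3,4,5]
    >>> runIdentity $ CL.unfold (\i -> if i > 5 then Nothing else Just (i, i + 1)) 1 $$ CL.consume
    [1,2,3,4,5]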
yield is a new option we have starting with conduit 0.4. Due to the unified datatypes, Source has inherited a Monad instance. This allows us to fairly easily compose together different Sources, and the yield function provides the simplest of all Sources. In previous versions of conduit, we could have used Source's Monoid instance instead of do-notation.
raw goes directly against the datatypes. I find it interesting that the raw version isn't really much more complicated than yield or sourceState, though you do have to understand some of the extra fields on the constructors.
Finally, we use sourceState. This is one of the oldest approaches, since this function has been available since the first release of conduit. I think that this function would compile and run perfectly on conduit 0.0.
The Criterion benchmarks are very informative. Thanks to Bryan's cool new report, let's look at the graph:
unfold, enumFromTo, and raw all perform equally well. sourceList comes in behind them: the need to allocate the extra list is the culprit. Behind that is yield. To see why, look at the difference between yielder and raw. They're structured almost identically. For the i > stop case, we have return () versus Done Nothing (). But in reality, those are the same thing! return is defined as Done Nothing.
The performance gap comes from the otherwise branch. If we fully expand the do-notation, we end up with:
    yield i >> (go $ i + 1)
    ==> HaveOutput (Done Nothing ()) (return ()) i >> (go $ i + 1)
    ==> HaveOutput (Done Nothing () >> (go $ i + 1)) (return ()) i
    ==> HaveOutput (go $ i + 1) (return ()) i
Which is precisely what raw says. However, without adding aggressive inlining to conduit, most of this transformation will occur at runtime, not compile time. Still, the performance gap is relatively minor, and in most real-world applications should be dwarfed by the actual computations being performed, so I think the yield approach definitely has merit.
What might be shocking is the abysmal performance of sourceState. It's a full 8 times slower than raw! There are two major contributing factors here:
- Each step goes through a monadic bind. This is necessitated by the API of sourceState.
- We have to unwrap the SourceStateResult type.
sourceState was great when it first came out. When conduit's internals were ugly and based on mutable variables, it provided a clean, simple approach to creating Sources. However, conduit has moved on: the internals are pure and easy to work with, and we have alternatives like yield for high-level stuff. And performance-wise, the types now distinguish between pure and impure actions. sourceState forces usage of an extra PipeM constructor at each step of output generation, which kills GHC's ability to optimize.
So our main takeaway should be: don't use sourceState. It's there for API compatibility with older versions, but is no longer the best approach to the problem. Similarly, we can improve upon sourceIO, but we have to be a bit careful here, since we have to ensure that all of our finalizers are called correctly. Let's take a look at a simple Char-based file source, comparing a sourceIO implementation to the raw constructors.
    import Data.Conduit
    import qualified Data.Conduit.List as CL
    import Control.Monad.Trans.Resource
    import System.IO
    import Control.Monad.IO.Class (liftIO)
    import Criterion.Main

    sourceFileOld :: MonadResource m => FilePath -> Source m Char
    sourceFileOld fp = sourceIO
        (openFile fp ReadMode)
        hClose
        (\h -> liftIO $ do
            eof <- hIsEOF h
            if eof
                then return IOClosed
                else fmap IOOpen $ hGetChar h)

    sourceFileNew :: MonadResource m => FilePath -> Source m Char
    sourceFileNew fp =
        PipeM (allocate (openFile fp ReadMode) hClose >>= go) (return ())
      where
        go (key, h) =
            pull
          where
            self = PipeM pull close
            pull = do
                eof <- liftIO $ hIsEOF h
                if eof
                    then do
                        release key
                        return $ Done Nothing ()
                    else do
                        c <- liftIO $ hGetChar h
                        return $ HaveOutput self close c
            close = release key

    main :: IO ()
    main = defaultMain
        [ bench "old" $ go sourceFileOld
        , bench "new" $ go sourceFileNew
        ]
      where
        go src = whnfIO $ runResourceT $ src "source-io.hs" $$ CL.sinkNull
The results are much closer here:
We're no longer getting the benefit of avoiding monadic binds, since by its very nature this function has to call IO actions constantly. In fact, I believe that the performance gap here doesn't warrant avoiding sourceIO in normal user code, though it's likely a good idea to look at optimizing the Data.Conduit.Binary functions. Perhaps even better is if we can get some combinators that make it easier to express this kind of control flow.
The story is much the same with Sinks and Conduits, so I won't bore you with too many details. Let's jump into the code first, and then explain what we want to notice.
    import Criterion.Main
    import Data.Conduit
    import qualified Data.Conduit.List as CL
    import qualified Data.List
    import Data.Functor.Identity

    main :: IO ()
    main = defaultMain
        [ bench "mapOutput" $ flip whnf 2 $ \i ->
            runIdentity $ mapOutput (* i) source $$ sink
        , bench "map left" $ flip whnf 2 $ \i ->
            runIdentity $ source $= CL.map (* i) $$ sink
        , bench "map right" $ flip whnf 2 $ \i ->
            runIdentity $ source $$ CL.map (* i) =$ sink
        , bench "await-yield left" $ flip whnf 2 $ \i ->
            runIdentity $ source $= awaitYield i $$ sink
        , bench "await-yield right" $ flip whnf 2 $ \i ->
            runIdentity $ source $$ awaitYield i =$ sink
        ]
      where
        source :: Monad m => Source m Int
        source = CL.sourceList [1..1000]

        sink :: Monad m => Sink Int m Int
        sink = CL.fold (+) 0

        awaitYield :: Monad m => Int -> Conduit Int m Int
        awaitYield i =
            self
          where
            self = do
                mx <- await
                case mx of
                    Nothing -> return ()
                    Just x -> do
                        yield $ x * i
                        self
There are five different ways presented to multiply each number in a stream by 2.
CL.map is likely the most obvious choice, since it's a natural analogue to the list-based map function. But we have two different ways to use it: we can either left-fuse the source to the conduit, and then connect the new source to the sink, or right-fuse the conduit to the sink, and connect the source to the new sink.
We also have an awaitYield function, which uses the await and yield functions and leverages the Monad instance of Conduit. Like map, we have both a left and a right version.
We also have a mapOutput function. In that case, we're not actually using a Conduit at all. Instead, we're modifying the output values being produced by the source directly, without needing to pipe through an extra component. Let's see our benchmark results:
There are three things worth noticing:
- Like previously, the high-level approach (using await and yield) was slower than using the more highly optimized function from Data.Conduit.List.
- There is no clear winner between left and right fusing.
- mapOutput is significantly faster than using a Conduit. The reason is that we're able to eliminate an entire extra Pipe in the pipeline.
mapOutput will not be an option in the general case. You're restricted in a number of ways:
- It can only be applied to a Source, not a Sink.
- You have to have transformations which produce one output for one input.
- You can't perform any monadic actions.
However, if your use case matches, mapOutput can be a very convenient optimization.
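For instance, a quick sketch (same imports as the benchmark code above):

    >>> runIdentity $ mapOutput (* 2) (CL.sourceList [1..3]) $$ CL.consume
    [2,4,6]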
Yup, I know. I meant within the application itself. I may not want to open links for the rss reader on my default browser (I actually don't, really). You can also register browsers which are not by default registered on the module. Python2 still didn't have Chromium/Chrome but it's pretty simple to register.
Being able to choose the browser is also a good one.
Technically, you can. Webbrowser module uses xdg-open on Linux to detect default browser, so you just have to change xdg-open settings to pick a different one.
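For example, something along these lines should work (a sketch; the 'chromium' name and binary path depend on your system):

    import webbrowser

    # Use a browser the module already knows about...
    webbrowser.get('firefox').open('https://example.org/article')

    # ...or register one it doesn't (e.g. Chromium on Python 2)
    webbrowser.register('chromium', None,
                        webbrowser.GenericBrowser('/usr/bin/chromium'))
    webbrowser.get('chromium').open('https://example.org/article')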
Aaah.
got all that stuff working and did a push. it would be nice if when it did updates, it kept track of which articles in a feed are new. also, which articles have already been read and which haven't.
If the threads don't have to exit cleanly, you can make them daemons. This way, when the main event loop exits, daemon threads are killed automatically.
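A minimal sketch of that approach (poll_feeds is a made-up name; substitute your own update loop):

    import threading

    def poll_feeds():
        # fetch and update feeds in a loop, sleeping between passes
        ...

    updater = threading.Thread(target=poll_feeds)
    updater.daemon = True  # killed automatically when the main GTK loop exits
    updater.start()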
thanks for that!
i made a bit of progress. its updating feeds every x seconds and printing notifications using libnotify when it does the updates. still need to fix a few things though. right now it makes a whole new submenu for each feed when it does the update so feeds appear multiple times. the quit button also isn't working because of the threading stuff i needed to have it do the updates. also, its doing multiple feeds now.
Fixed that bug, kind of, i'm not sure if it's the proper solution, but it seems to work. Relevant code:
    def build_submenu(self, feed):
        menu = Gtk.Menu()
        for i in feed.entries:
            menuitem = Gtk.MenuItem()
            menuitem.set_label(i.title)
            menuitem.connect("activate", self.construct_command(i.link))
            menu.append(menuitem)
        return menu

    def construct_command(self, link):
        def open_link(self):
            webbrowser.open(link)
        return open_link
Lambda expression inside a loop seems to overwrite itself during every iteration, like here:
    l = []
    for i in range(10):
        l.append(id(lambda x: x*i))
    print(l)
    # OUT: [same id repeated ten times]
    print(l.count(l[0]))
    # OUT: 10
As you can see, the id of the lambda expression is the same in every iteration, despite each one being a different function (i is different in each lambda), so when the for loop finishes, you'll end up with only one callable object that every menu item executes on activation.
EDIT:
After a bit of screwing around, i found that this also works:
    menuitem.link = i.link
    menuitem.connect("activate", lambda x: webbrowser.open(x.link))
Instead of passing the arbitrary address to the lambda it just extracts link from the menu entry itself.
Keep digging! I love the idea of having a systray RSS reader like this. Sorry I can't help in any way, as I'm absolutely code blind...
im working on having it update feeds every however many minutes and hopefully show a little pop up using libnotify. it should be able to handle multiple feeds then. also i didn't notice this before but there seems to be a bug, when you click on a specific article it opens the page for the last article it added to the submenu. guess i don't completely understand how connect works.
Pretty cool, I like it.
Now you just have to add a way for it to read a list of feeds from file (and generate menu items and submenus for each one) and it's a great start
i uploaded it to github just now, the code is very rough. i don't know gtk at all, so i've just been taking code from random examples i find online. i used to use this app on os x and couldn't find anything like it on linux,
Is your code hosted somewhere? e.g. GitHub, Bitbucket, etc.
Started working on a little project. An rss reader that sits in the systray. Its pretty bare right now, but its downloading feeds and opens them in the browser when i click on them.
-- mod edit: read the Forum Etiquette and only post thumbnails … s_and_Code [jwr] --
A while back I began tinkering with the idea of continuation-carrying exceptions as an approach to divide error handling policy from the mechanism. Of course, I later discovered I was putting old wine into new bottles. Common Lisp follows a similar approach with its Condition/Restart mechanism.
Anyhow, at that time a friend pointed me towards A Modular Verifiable Exception-Handling Mechanism by S.Yemini and D.Berry (1985).
The following varieties of handler responses to an exception can be identified in the literature:
Resume the signaller: Do something, then resume the operation where it left off.
Terminate the signaller: Do something, then return a substitute result of the required type for the signalling operation; if the operation is not a value returning operation, this reduces to doing something and returning to the construct following the invocation of the operation. This includes using alternative resources, alternative algorithms, and so on.
Retry the signaller: Do something, then invoke the signaller again.
Propagate the exception: Do something, then allow the invoker of the invoker of the signalling operation to respond to the detection of the exception.
Transfer control: Do something, then transfer control to another location in the program. This includes doing something and then terminating a closed construct containing the invocation.
Without [an exception-handling mechanism], too much information is not hidden and coupling is high. Either the signaler has to be told more about what the invoker is doing, so that the signaler can do what the invoker would want done, or else the invoker has to be given more implementation details so that it can do the exception checking.
Whether the exception-handling mechanism be continuations or something else, I'd really love to see the modern stack-based languages (Java, C++, etc.) implement something much more along these lines. I have been bitten far too often by the high coupling that comes from the inability to separate error handling policy from the exception handling mechanism.
That would be interesting to see in a modern imperative language. It can't be more irritating than Java's checked exceptions.
Some of the design decisions would probably change in a functional language, where individual functions tend to be shorter and higher-order functions are cheap. You can approximate the style fairly closely in Haskell with the multi-prompt continuation monad, using this subset of the operations:
promptP :: (Prompt r a -> CC r a) -> CC r a
abortP :: Prompt r a -> CC r a -> CC r b
Instead of using a new construct for declaring exceptions, we can pass the handlers as functions:
    convert :: (Prompt r String -> Int -> CC r Char) -> [Int] -> CC r String
    convert badcode codes = promptP $ \p ->
        let conv (i,code) = if validCode code
                                then return (chr code)
                                else badcode p i
        in mapM conv (zip [0..] codes)
You pass the appropriate handler as "badcode". It can return a Char, use the prompt to return directly from convert, or do something else.
-- replace all bad codes with '?'
h1 _ _ = return '?'
-- runCC (convert h1 [65,-1,66]) => "A?B"
-- terminate early, returning ""
h2 p _ = abortP p (return "")
-- runCC (convert h2 [65,-1,66]) => ""
-- replace the code with zero & retry
h3 codes p i = let codes' = replaceAt i 0 codes in abortP p (convert (h3 codes') codes')
-- runCC ((\c -> convert (h3 c) c) [65,-1,66]) => "A\NULB"
-- call out to some other handler
foo final ... = promptP $ \f ->
...
let h4 p i = abortP p (final f)
...
convert h4 codes
...
I guess it's not too surprising that you can get this functionality with the multi-prompt, delimited continuation monad, since it's so powerful. (Arguably too powerful; it might be better to hide the prompts from user code.) The part that took me a while was explicitly passing the exception handler(s) to the code, which does seem to be the best fit for the mechanism described in the paper.
That would be interesting to see in a modern imperative language.
Actually, I believe it to be quite doable. I wasn't handling 'verification', but I did explore the implementation of the whole restart mechanisms here.
Considering how symmetrical that proposed solution is to the existing systems, and that I can't find any implementation hurdles, I'm more surprised that we don't have it already.
It can't be more irritating than Java's checked exceptions.
That has me seriously laughing out loud. Java developers should know better than to integrate 'new' features in what is intended to be a mainstream language without seeing them tested first.
In any case, checked 'resumption' conditions could be made part of the exception or part of function signatures, but I'm not certain it would be a worthwhile pursuit as checking these things would ultimately limit user-created resumption policies. Actually, I'm against checked exceptions for the same reason: they severely limit the distance for which error handling policy can be decoupled from the code that introduces the error.
Actually, I'm against checked exceptions for the same reason: they severely limit the distance for which error handling policy can be decoupled from the code that introduces the error.
I don't think that's so much a knock against checked exceptions, as it is a knock against the policy that exceptions can't propagate automatically to enclosing scopes. I think every function should have an effect variable which denotes the exceptions that can be thrown from its execution. This effect variable is a union of all effects of calling child functions (which should be fully inferrable). 'main' should simply have a signature with a nil effect, so no exceptions escape the execution of the program as a whole. Thus we achieve checked exceptions, but without the headaches that Java imposes.
Ah, yes. That would be correct; it isn't the 'checking' that is the problem, but rather both the requirement for manifest declarations of what must be checked and the inability to automatically propagate such checks. In what I wrote above, I was considering "checked exceptions" with the Java design.
Regardless, I'd be tempted to simply guarantee all exceptions are checked by having the 'default case' for exceptions be provided as a standard behavior via the process or thread task, allowing programmers to override these defaults at will.
This effect variable is a union of all effects of calling child functions (which should be fully inferrable).
I have a hard time imagining how this solution would work at module boundaries without sacrificing separate compilation.
On the other hand, I'm not so opposed to sacrificing separate compilation so long as one can at least integrate some pre-compiled forms (with 'pre-compiled' being less restrictive than 'separately compiled' in that it allows one to have a compilation ordering requirement).
'main' should simply have a signature with a nil effect, so no exceptions escape the execution of the program as a whole
There are times it would be quite useful for something like 'main' to propagate exceptions so long as the host knows how to process them.
This is OT, but the whole prescribed notion of 'main' and 'program as a whole' is something I dislike as a language standard concept for other reasons. How a host environment (such as Unix or a shell) interprets a library or dictionary of executable or evaluable code should really be left outside of the language's definition.
I have a hard time imagining how this solution would work at module boundaries without sacrificing separate compilation.
Since every function now sports an effect variable, and ML modules can be statically erased, I don't see a problem in principle. First-class modules or modules as first-class values might be more challenging since the module used could change at runtime.
How a host environment (such as Unix or a shell) interprets a library or dictionary of executable or evaluable code should really be left outside of the language's definition.
I disagree. I believe it was you who was arguing that languages and operating systems should be essentially unified, and I agree. In that case, consider a system where the OS is built in a safe language, ala Singularity OS. The user's shell dynamically loads the code for the program, but in order for it to launch the program, the program entry point must have a well-defined signature, such as implementing some Application interface:
    interface IApplication {
        void Main(string[] args);
    }
The signature could be arbitrary of course, but there has to be something well-defined for the dynamic code loader to typecheck and the shell or OS to invoke.
The user's shell dynamically loads the code for the program, but in order for it to launch the program, the program entry point must have a well-defined signature, such as implementing some Application interface:
Modulo reflection, I agree. But I'd note it important that it is the the user's shell that defines "the necessary interface". The language designers are not 'prescribing' any particular meaning to 'IApplication' or 'Main'. To the language, those are not special at all.
With reflection - the ability to inspect the 'object' file selected for execution - things become much more interesting. A shell could use 'main' by default, but still allow access to other functions via simple parametric extensions to the name (such as ':foo arg1 arg2 arg3'), with (' args') being the same as (':main args').
Access to type-descriptors (foo : int int -> int) for the arguments could be used to help parse the arguments or even support tab-completions, and could certainly be utilized to support typesafe shell operations and workflows.
Alternatively, a shell could default to 'open ' as the behavior, automatically importing both 'foo' and 'main' and any other exports as commands in the local environment and thus allow one to divide the execution from the environment manipulation. Or both could be possible (in Java, for example, one may use classes without importing them, so long as they use the long name). In this sense, the shell is really an interpreter for a given language that can be extended, and also happens to (in the default configuration) include extensions to help locate commands a user might be interested in executing (such as installed services).
    #myshell> open math;
    ok
    #myshell> square 20
    400
    #myshell> x ## (square x) == 81
    x=9
    #myshell> play funnymovie.avi
    executing play:main filename=funnymovie.avi...
I believe operating systems, shells, and languages should be fully unified. I would not stop at half-measures like forcing everything through a shell's 'IApplication' interface.
This is probably getting a bit off topic for the thread, but Windows PowerShell might be of interest to you. It's a shell which, amongst other things, allows .NET objects to be piped between commands instead of just text; and which specifies an interface that objects can implement for better integration with the command line (including argument-specific tab-completion).
But I'd note it important that it is the the user's shell that defines "the necessary interface". The language designers are not 'prescribing' any particular meaning to 'IApplication' or 'Main'.
Other than it's the application initialization point, I don't see what kind of meaning there could be.
Access to type-descriptors (foo : int int -> int) for the arguments could be used to help parse the arguments or even support tab-completions, and could certainly be utilized to support typesafe shell operations and workflows.
I don't think reflection adds anything really. One can just as easily create a strongly typed interface for all of the features you're describing, and the application can itself provide the tab-completion.
    public interface IApplication {
        void Main(string[] args);
        string[] InferArg(string fragment);
    }
I think concise language constructs, like lambda args and type inference, help more than reflection here.
Even though you can't see it being used any other way, can you offer a good reason why the language designers should prescribe such a notion as 'Main' being an 'application initialization point'? Why should language designers even accept the concept of 'application'? You seem stuck on the idea of reinventing a Unix shell in a language. That is not at all the only option available to you.
I don't think reflection adds anything really.
Reflection buys you the ability to implement the shell as a common language with a shared parser used by all the applications. It also buys you the ability to implement strongly typed workflow languages (like pipes and filters, or service mashups).
One can just as easily create a strongly typed interface for all of the features you're describing
Strongly typed, perhaps. But you're going to be performing runtime typechecking in any shell application because the 'code' isn't produced until runtime. Runtime/dynamic typechecking (weak or not) may be implemented inside 'Main' by reinventing a poor-man's parser for each and every application. Alternatively, you can invent it once and only once (as well as unify syntax for running commands) by having both the typechecking and the parsing duties be moved into the shell.
and the application can itself provide the tab-completion [...] I think concise language constructs, like lambda args and type inference, help more than reflection here.
The approach you propose is considerably less flexible or "strongly typed" than it seems you believe it is, considering that you'll be unable to support composition of commands and functions and you'll still suffer type mismatches whenever you need to parse input parameters (except worse because YOU need to implement the parsing and the parameter validation each time). At the same time you lose efficiency for both the programmer (who must integrate a parameter parser into each application) and the runtime (which will be continuously marshalling and demarshalling structured data as strings).
I'm all for static typing whatever can be typed statically. But providing static typing over commands parsed and introduced at runtime means knowing types. And knowing types of objects that have already been compiled into programs requires the same set of environmental features as to support runtime reflection.
Reflection buys you the ability to implement the shell as a common language with a shared parser used by all the applications. It also buys you the ability to implement strongly typed workflow languages (like pipes and filters, or service mashups).
Reflection buys you far more dangers than benefits, and I used to be firmly in the other camp. Of course, this all depends on the scope of the reflective capabilities, whether it includes abstraction-breaking introspection, etc.
And reflection doesn't buy you any workflow or filtering capabilities. Those are readily implemented by higher-order functions and/or objects, as evidenced by any ML, none of which have reflection.
I think what you really want is to embed the parser and a language interpreter, ala "typing dynamic typing". So you basically want a REPL shell.
But you're going to be performing runtime typechecking in any shell application because the 'code' isn't produced until runtime.
Not necessarily. Code isn't loaded until runtime, which is very different from saying that it's produced/generated at runtime. Either way, a program must provide a type signature for the code it is attempting to load:
val load_code: 'a Sig.t → in_channel → 'a Rep.t option
Rep.t is a 'representation type' often appearing in polytypic programming literature (which is a safer form of reflection). For simplicity, I'm assuming the Sig.t defines the structure of the expected type.
Now the user's shell is a text-based interface, so we can go two routes here: a full blown REPL shell, or a smaller standardized interface language for constructing and invoking subsystems.
You also seem to imply that my original suggestion required the entire program to be loaded into memory, but really, only a small launcher component need be instantiated for tab-completion and initialization. A language-based OS is a sea of objects, not heavy monolithic processes like current OSs, so when I say IApplication, I mean literally a single object, not all the objects instantiated when Main is invoked.
[Edit: note that, even with the REPL shell option, the code must still export either a standardized symbol table, or a standard entry point which is executed on code load, which loads all symbols into a global table. There is always a standard interface somewhere! :-)]
'IApplication' is "a smaller standardized interface language for constructing and invoking subsystems." It is not ML. It does not have ML's capabilities.
By using 'IApplication', the shell program cannot determine the 'real' types of the inputs to the published applications. That is, when a user types in a 'wrong' string, the shell can't issue any warnings or do anything similar. The shell certainly cannot typecheck its workflows or pass complex objects as inputs to 'application' methods.
Essentially, yes. But it doesn't need to be a REPL shell; the HCI is more open, and could be an Object Browser akin to Squeak, ToonTalk, or Second Life. But a REPL browser would be a convenient version.
And the typing doesn't need to be dynamic. But static typing of each shell command, as it arrives, requires the language runtime maintain all the information one would usually associate with static type reflection.
The ability to create alternative shells that were not strictly REPL loops would certainly benefit from the ability to query for the appropriate metadata, implying reflection.
Reflection buys you far more dangers than benefits
Eh, well, I'll agree it can cause problems. But the issue is never reflection by itself, but rather reflection in the context of some assumptions used in the development of other language features. And I'm willing to trade some of those assumptions in order to buy reflection without the dangers. Besides, most of the assumptions 'problematic' when combined with reflection (such as use of abstraction as a mechanism for encapsulation, and the use of encapsulation as a mechanism for security, and the use of named types with inheritance) are also problematic for other language features that may be nice to have in an OS-integrated language (including distribution or security).
Code isn't loaded until runtime, which is very different from saying that it's produced/generated at runtime.
I literally meant produced/generated at runtime. But you might be confused: I'm talking about the shell code - the stuff the user types dutifully into the REPL interpreter or whatever. The shell code is among the stuff that, if "the language and shell were integrated", could be typechecked, abstractable, capable of producing and handling exceptions, etc.
What you say here confirms what I already understood of your claims. I don't take issue with the above. Where I have a problem is you presenting 'IApplication' as a solution then pointing at 'ML' when attempting to explain the properties of your solution.
The link I provided earlier to tagless interpreters and "typing dynamic typing" can be used to achieve what you're looking for. A language's standard library can provide a module which is a self-interpreter for the language (or a JIT preferably), and standard library can provide another module for the language's parser. Constructing a REPL shell from this is somewhat trivial. I think all languages should be built in this metacircular fashion in fact; metacircular interpreters for statically typed languages FTW! :-)
... are also present in some Smalltalks. There it is cast in an object-oriented form which may be more amenable to implementations for other OO languages.
Hi to all, we are working on a residential project which has 6 different buildings (blocks) in it. We have separated the Revit files according to these blocks. Each block has its own sheets, schedules, and areas in Revit. When we need to publish these sheets for coordination, we open the Revit files one by one and export the sheets to CAD and PDFs. This process takes too much time. If we had a tool like eTransmit or batch print that worked without opening the Revit project, we would just push a button and all related CAD and PDF files would be ready.
Do you think it is possible by using DYNAMO?
Hi @cenkerkocoglu
Welcome on Dynamo forum!
Show us some work please. This is a help forum to ask for help with things that you cannot figure out on your own, but actually tried. Read this: how to ask for help.
Thank you Kulkul, I really do not know how to use Dynamo. I will start to learn it. I am just asking if it is possible to export. Kind regards.
Very interesting topic… I think this is possible only when the Revit model is open in the background. Anyway, I am also interested to know if this is possible. Imagine you could do this task in the background…
Hi,
Use Google and search for DynamoAutomation by @Andreas_Dieckmann and have a look over here.
Here are some of his scripts: MasterSimple_End[1].dyn (18.1 KB), MasterSimple_Start[1].dyn (15.0 KB), SlaveMain_End[1].dyn (29.0 KB), SlaveMain_Start[1].dyn (16.3 KB)
Good luck, Marcel
Thank you. It's very useful. I will try it but I hope there will be another way without opening it. In AutoCAD, you can handle it with batch print without opening CAD files.
All geometry in Revit will have to be generated before you can print it, I presume. Therefore the file has to be opened. So what you are asking is impossible.
Impeccable logic, Mr. Spock (James T. Kirk). Marcel
Thank you for the information. I hope someday it will be possible.
Hi @cenkerkocoglu, I think it's actually possible through background open. You can export PDFs and DWGs automatically. Check out this topic:
@Marcel_Rijsmus,
There are two ways of "opening" a Revit file. One is with the UI and one without it. One can open Revit in the background without loading up the UI. It's much faster and can be used for what the OP is asking.
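A rough sketch of that background-open idea from a Dynamo Python node (untested; the file path is a placeholder and error handling is omitted):

    import clr
    clr.AddReference('RevitServices')
    from RevitServices.Persistence import DocumentManager

    app = DocumentManager.Instance.CurrentUIApplication.Application
    # Opens the model without the UI: no views are activated
    doc = app.OpenDocumentFile(r'C:\Projects\BlockA.rvt')
    # ... export sheets/views from doc here ...
    doc.Close(False)  # close without saving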
@Konrad_K_Sobon
Thank you. You learn something new every day in the world of Revit.
Marcel
Thank you to all. I need to learn Dynamo ASAP. It's good to know that this can be achieved. It would be great if you can share some other links. They are very helpful. Kind regards.
Dear Mostafa, I have tried to work on Dynamo to achieve the batch print. But here is the result: Print_PDF.dyn (9.1 KB)
@cenkerkocoglu, it doesn't look like you changed the code and added the input. Please take another look at the link I posted in my previous message.
It really works. That's great. Thank you. Now I need to figure out how I can export them in CAD format.
this should do it:
import clr
clr.AddReference('RevitAPI')
from Autodesk.Revit.DB import *
clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
from System.Collections.Generic import *

# Inputs as used here: IN[0] = export folder, IN[1] = shared coordinates
# flag, IN[2] = file name, IN[3] = view(s), IN[4] = document
doc = IN[4]

# Accept either a single view or a list of views
if isinstance(IN[3], list):
    views = IN[3]
else:
    views = [IN[3]]

# Collect the view ids into a .NET List for the Export call
viewid = [ElementId(v.Id) for v in views]
icollection = List[ElementId](viewid)

dwgOptions = DWGExportOptions()
dwgOptions.SharedCoords = IN[1]

path = IN[0]
doc.Export(path, IN[2], icollection, dwgOptions)
OUT = ''
Thank you, Mustafa. It works and it's amazing. I tried to print two different Revit files at the same time. Combining view lists from two different sheets works, but when it comes to the document input, it gives a warning. Can it work with multiple document inputs? I have the same problem with exporting CAD: I can export from one linked file, but when it comes to multiple links, I couldn't do it. Thanks for all your help. Best regards. Print_PDF.dyn (18.3 KB)
|
https://forum.dynamobim.com/t/exporting-cad-or-pdf-without-opening-revit/10404
|
CC-MAIN-2017-43
|
refinedweb
| 693
| 69.79
|
NAME
aio_error - get error status of asynchronous I/O operation
SYNOPSIS
#include <aio.h>
int aio_error(const struct aiocb *aiocbp);
Link with -lrt.
DESCRIPTION
The aio_error() function returns the error status for the asynchronous I/O request with control block pointed to by aiocbp. (See aio(7) for a description of the aiocb structure.)
RETURN VALUE
This function returns one of the following:
- EINPROGRESS, if the request has not been completed yet.
- ECANCELED, if the request was canceled.
- 0, if the request completed successfully.
- A positive error number, if the asynchronous I/O operation failed. This is the same value that would have been stored in the errno variable in the case of a synchronous read(2), write(2), fsync(2), or fdatasync(2) call.
ERRORS
- EINVAL
- aiocbp does not point at a control block for an asynchronous I/O request of which the return status (see aio_return(3)) has not been retrieved yet.
- ENOSYS
- aio_error() is not implemented.
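A minimal usage sketch (my example, not part of the man page): start an asynchronous read and poll aio_error() until the request leaves EINPROGRESS, then collect the status with aio_return(3). Error handling is trimmed for brevity.

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];
        struct aiocb cb;
        memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = open("/etc/hostname", O_RDONLY);
        cb.aio_buf = buf;
        cb.aio_nbytes = sizeof(buf);

        if (aio_read(&cb) == -1) { perror("aio_read"); return 1; }

        int err;
        while ((err = aio_error(&cb)) == EINPROGRESS)
            ;                        /* busy-wait for brevity only */

        if (err == 0)
            printf("read %zd bytes\n", aio_return(&cb));
        else
            fprintf(stderr, "aio failed: %s\n", strerror(err));
        return 0;
    }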
|
https://man.archlinux.org/man/aio_error.3.en
|
CC-MAIN-2021-17
|
refinedweb
| 156
| 66.64
|
Due by Midnight on (that is, the end of) Wednesday, 1/25
This homework must be submitted online. See the instructions for submitting online for how to do this. Put all your answers to this homework in a file named hw1.py.
Readings. All problems in this homework can be solved with the subset of Python 3 introduced in sections 1.2-1.5 of the lecture notes.
Q1. Fill in the following function definition for adding a to the absolute value of b, without calling abs:
from operator import add, sub

def a_plus_abs_b(a, b):
    """Return a+abs(b), but without calling abs."""
    if ____:
        op = ____
    else:
        op = ____
    return op(a, b)
Q2. Write a function that takes three positive numbers and returns the sum of the squares of the two larger numbers. Use only a single expression for the body of the function:
def two_of_three(a, b, c):
    """Return x**2 + y**2, where x and y are the two largest of a, b, c."""
    return ____

Q3. A function if_function(c, t, f) that simply returns t when c is true and f otherwise doesn't do the same thing as an if statement in all cases. To prove this fact, write functions c, t, and f such that one of these functions returns the number 1, but the other does not:
def with_if_statement():
    if c():
        return t()
    else:
        return f()

def with_if_function():
    return if_function(c(), t(), f())
Q4. Douglas Hofstadter’s Pulitzer-prize-winning book, Gödel, Escher, Bach, poses the following mathematical puzzle: pick a positive integer n as the start; if n is even, divide it by 2; if n is odd, multiply it by 3 and add 1; continue this process until n is 1. Write a function that prints out the hailstone sequence starting at n, returning its length:

def hailstone(n):
    """Print the hailstone sequence starting at n, returning its length."""
Hailstone sequences can get quite long! Try 27. What's the longest you can find?
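One possible implementation, assuming the function should print each term and return the count of terms (a sketch, not the official solution):

    def hailstone(n):
        """Print the hailstone sequence starting at n, returning its length."""
        length = 1
        while n != 1:
            print(n)
            if n % 2 == 0:
                n = n // 2      # even: halve
            else:
                n = 3 * n + 1   # odd: triple and add one
            length += 1
        print(n)                # the final 1
        return length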
|
http://www-inst.eecs.berkeley.edu/~cs61a/sp12/hw/hw1.html
|
CC-MAIN-2018-09
|
refinedweb
| 261
| 72.05
|
zest.emailhider 2.7
A simple jQuery component for hiding email addresses from spammers.
Zest emailhider
This document describes the zest.emailhider package.
Dependencies
This package depends on jquery.pyproxy to integrate python code with jquery code.
Overview
This package provides a mechanism to hide email addresses with JavaScript. Or actually: with this package you can hide your email addresses by default so they are never in the html; with javascript the addresses are then fetched and displayed.
For every content item in your site you can have exactly one email address, as we look up the email address for an object by its UID. For objects for which you want this you should register a simple adapter to the IMailable interface, so we can ask this adapter for an email attribute and a UID method. The ‘emailhider’ view is provided to generate the placeholder link.
Objects display a placeholder link with a hidden-email class, a uid rel attribute and an email-uid-<some uid> class set to the UID of an object; when the page is loaded some jQuery is run to make a request for all those links and replace them with a ‘mailto’ link for that object. Using this mechanism the email address isn’t visible in the initial page load, and it requires JavaScript to be seen - so it is much harder for spammers to harvest.
Special case: when the uid contains ‘email’ or ‘address’ it is clearly no real uid. In that case we do nothing with the IMailable interface but we try to get a property with this ‘uid’ from the property sheet of the portal. Main use case is of course the ‘email_from_address’, but you can add other addresses as well, like ‘info_email’. If you want to display the email_from address in for example a static portlet on any page in the site, use this html code:
<a class="hidden-email email-uid-email_from_address" rel="email_from_address"> Activate JavaScript to see this address.</a>
Instructions for your own package
What do you need to do if you want to use this in your own package, for your own content type?
First you need to make your content type adaptable to the IMailable interface, either directly or via an adapter.
If your content type already has a UID method (like all Archetypes content types) and an email attribute, you can use some zcml like this:
<class class=".content.MyContentType">
  <implements interface="zest.emailhider.interfaces.IMailable" />
</class>
If not, then you need to register an adapter for your content type that has this method and attribute. For example something like this:
from zope.component import adapts
from zope.interface import implements
from zest.emailhider.interfaces import IMailable
from your.package.interfaces import IMyContentType

class MailableAdapter(object):
    adapts(IMyContentType)
    implements(IMailable)

    def __init__(self, context):
        self.context = context

    def UID(self):
        return self.context.my_special_uid_attribute

    @property
    def email(self):
        return self.context.getSomeContactAddress()
Second, in the page template of your content type you need to add code to show the placeholder text instead of the real email address:
<span>For more information contact us via email:</span> <span tal:
Note that if you want this to still work when zest.emailhider is not installed, you can use this code instead:
<span tal:
This shows the unprotected plain text email when zest.emailhider is not available. When you are using zest.emailhider 2.6 or higher this works a bit better, as we have introduced our own browser layer: the @@emailhider page is only available when zest.emailhider is actually installed in the Plone Site. This also makes it safe to use zest.emailhider when you have more than one Plone Site in a single Zope instance and want emailhider to be used in only one of them.
Note that the generated code in the template is very small, so you can also look at the page template in zest.emailhider and copy some code from there and change it to your own needs. As long as your objects can be found by UID in the uid_catalog and your content type can be adapted to IMailable to get the email attribute, it should all work fine.
Note on KSS usage in older releases
Older releases (until and including 1.3) used KSS instead of jQuery. As our functionality should of course also work for anonymous users, we had to make KSS publicly accessible. So all javascript that was needed for KSS was loaded for anonymous users as well.
We cannot undo that automatically, as the package has no way of knowing if the same change was needed by some other package or was done for other valid reasons by a Manager. So you should check the javascript registry in the ZMI and see if this needs to be undone so anonymous users no longer get the kss javascripts as they no longer need that.
For reference, this is the normal line in the Condition field of ++resource++kukit.js (all on one line):
python: not here.restrictedTraverse('@@plone_portal_state').anonymous() and here.restrictedTraverse('@@kss_devel_mode').isoff()
and this is the normal line in the Condition field of ++resource++kukit-devel.js (all on one line):
python: not here.restrictedTraverse('@@plone_portal_state').anonymous() and here.restrictedTraverse('@@kss_devel_mode').ison()
History of zest.emailhider package
2.7 (2012-09-12)
- Moved to github. [maurits]
2.6 (2011-11-11)
- Added MANIFEST.in so our generated .mo files are added to the source distribution. [maurits]
- Register our browser views only for our own new browser layer. Added an upgrade step for this. This makes it easier for other packages to have a conditional dependency on zest.emailhider. [maurits]
2.5 (2011-06-01)
- Updated call to ‘jq_reveal_email’ to use the one at the root of the site to avoid security errors. [vincent]
2.4 (2011-05-10)
- Updated jquery.pyproxy dependency to at least 0.3.1 and removed the now no longer needed clean_string call. [maurits]
2.3 (2010-12-15)
- Not only look up a fake uid for email_from_address as portal property, but do this for any fake uid that has ‘email’ or ‘address’ in it. Log warnings when no email address can be found for a fake or real uid. [maurits]
2.2 (2010-12-14)
- Added another upgrade step as we definitely need to apply our javascript registry too when upgrading. Really at this point a plain reinstall in the portal_quickinstaller is actually fine, which we could also define as upgrade step, but never mind that now. [maurits]
2.1 (2010-12-14)
- Added two upgrade steps to upgrade from 1.x by installing jquery.pyproxy and running our kss step (which just removes our no longer needed kss file). [maurits]
2.0 (2010-12-09)
- Use jquery.pyproxy instead of KSS. This makes the page load much less for anonymous users. [vincent+maurits]
1.3 (2009-12-28)
Made reveal_email available always, as it should just work whenever we want to hide the global ‘email_from_address’. If we have a real uid target, then try to adapt that target to the IMailable interface and if that fails we just silently do nothing. [maurits]
1.2 (2008-11-19)
- Using kss.plugin.cacheability and added it as a dependency. [jladage]
- Allow to set the uid to email_from_address. [jladage]
- Changed the KSS to use the load event instead of the click event - it now either works transparently, or asks the user to activate JS. [simon]
1.1 (2008-10-24)
- Added translations and modified template to use them. [simon]
- Initial creation of project. [simon]
1.0 (2008-10-20)
- Initial creation of project. [simon]
- Author: Zest Software
- Keywords: zestsoftware email spamprotection javascript
- License: GPL
- Categories
- Package Index Owner: maurits, reinout, fredvd, vincentpretre
- DOAP record: zest.emailhider-2.7.xml
|
https://pypi.python.org/pypi/zest.emailhider/2.7
|
CC-MAIN-2016-30
|
refinedweb
| 1,297
| 65.52
|
#include <item_func.h>
Check if m_ptr points to an external buffer previously allocated by realloc().
Assert the user variable is locked.
This is debug code only. The thread LOCK_thd_data mutex protects:
Copy the array of characters from the given name into the internal name buffer and initialize entry_name to point to it.
Allocates and initializes a user variable instance.
Free all memory used by a user_var_entry instance previously created by create().
Free the external value buffer, if it's allocated.
Initialize all members.
Position inside a user_var_entry where small values are stored: double values, longlong values and string values with length up to extra_size (should be 8 bytes on all platforms).
String values with length longer than 8 are stored in a separate memory buffer, which is allocated when needed using the method realloc().
Initialize m_ptr to the internal buffer (if the value is small enough), or allocate a separate buffer.
Position inside a user_var_entry where a null-terminates array of characters representing the variable name is stored.
Set value to NULL.
Set the type to the given value.
Store a value of the given type into a user_var_entry instance.
Set value to user variable.
Get the value of a variable as a decimal.
Get the value of a variable as an integer.
Get the value of a variable as a double.
Get the value of a variable as a string.
Value length.
Value.
Value type.
Set to the id of the most recent query that has used the variable.
Used in binlogging: when set, there is no need to add a reference to this variable to the binlog. Imagine it is this:

INSERT INTO t SELECT @a:=10, @a:=@a+1.

Then we have an Item_func_get_user_var (because of the @a+1), so we think we have to write the value of @a to the binlog. But before that, we have an Item_func_set_user_var to create @a (@a:=10); in this we mark the variable as "already logged" so that it won't be logged by Item_func_get_user_var (because that's not necessary).
|
https://dev.mysql.com/doc/dev/mysql-server/latest/classuser__var__entry.html
|
CC-MAIN-2022-21
|
refinedweb
| 341
| 66.54
|
30 October 2014 · Django
If you do things with the Django ORM and want an audit trail of all changes you have two options:

Insert some cleverness into a pre_save signal that writes down all changes in some way.

Use eventlog and manually log things in your views.
(you have other options too but I'm trying to make a point here)
eventlog is almost embarrassingly simple. It's basically just a model with three fields:
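Roughly like this (the field names are guessed from the dashboard template further down, with the timestamp presumably auto-set, so treat this as a sketch rather than the actual eventlog source):

    from django.conf import settings
    from django.db import models
    from jsonfield import JSONField  # assumption: some JSON field add-on

    class Log(models.Model):
        user = models.ForeignKey(settings.AUTH_USER_MODEL, null=True)
        timestamp = models.DateTimeField(auto_now_add=True)
        action = models.CharField(max_length=50)
        extra = JSONField()  # arbitrary JSON-compatible details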
You use it like this:
from eventlog.models import log

def someview(request):
    if request.method == 'POST':
        form = SomeModelForm(request.POST)
        if form.is_valid():
            new_thing = form.save()
            log(request.user, 'mymodel.create', {
                'id': new_thing.id,
                'name': new_thing.name,
                # You can put anything JSON
                # compatible in here
            })
            return redirect('someotherview')
    else:
        form = SomeModelForm()
    return render(request, 'view.html', {'form': form})
That's all it does. You then have to do something with it. Suppose you have an admin page that only privileged users can see. You can make a simple table/dashboard with these like this:
from eventlog.models import Log  # Log the model, not log the function

def all_events(request):
    all = Log.objects.all()
    return render(request, 'all_events.html', {'all': all})
And something like this in all_events.html:
<table>
  <tr>
    <th>Who</th><th>When</th><th>What</th><th>Details</th>
  </tr>
  {% for event in all %}
  <tr>
    <td>{{ event.user.username }}</td>
    <td>{{ event.timestamp | date:"D d M Y" }}</td>
    <td>{{ event.action }}</td>
    <td>{{ event.extra }}</td>
  </tr>
  {% endfor %}
</table>
What I like about it is that it's very deliberate. By putting it into views at very specific points you're making it an audit log of actions, not of data changes.
Projects with overly complex model save signals tend to dig themselves into holes that make things slow and complicated. And it's not unrealistic that you'll then record events that aren't particularly important to review. For example, a cron job that increments a little value or something. It's more interesting to see what humans have done.
I just wanted to thank the Eldarion guys for eventlog. It's beautifully simple and works perfectly for me.
We started with something like this, but ended up not liking the DB bloat that this sort of thing causes. We also started finding that we were using our event log data for some light analytics, which isn't something we had anticipated.
After pondering a number of alternative methods (breaking the event log out into its own DB was the leading candidate), we discovered Keen.io. You throw event logs at it and you can do all kinds of filtering/graphing/statistical calculations based on the key/value pairs within each body.
I don't work for Keen.io, but we really love it at Pathwright. Avoids a lot of unnecessary DB writes and DB bloat, plus our less technical business staff can still figure out how to query it without involving a developer.
Keen.io looks awesome! I just spent a coupla' minutes skimming their docs and stuff.
However, even though it uses a "persistent connection", it's still an HTTPS POST over the network every time it needs to send something. Blocking, too. Perhaps it's doable to "up" that to a gevent greenlet or something so it can send async, but that's probably not trivial.
As far as bloat is concerned, I think this is where it matters. I use my eventlog only to send events when users do something that relates to changing the state (e.g. doing a POST) so there aren't that many events actually.
With postgres you could do this entirely inside the db server; see for example sql. No code in Django needed at all.
But then you'd get every insert or update or something, right?
That's not explicit and not very deliberate.
|
https://api.minimalcss.app/plog/shout-out-to-eventlog
|
CC-MAIN-2020-24
|
refinedweb
| 653
| 67.45
|
Struts — related tutorials and questions:

- Struts 2 Date Validator: the Date validator in the Struts 2 Framework checks whether the supplied date lies within a given range; demonstrates how to use the date validator to check the input range.
- Struts Validator plug-in: the validator plug-in is declared in struts-config.xml; the default Struts Validator pluggable validators and their error messages (e.g. errors.required={0}) are configured in that file.
- Struts validation not work properly: a question about an address form bean and struts-config.xml.
- How to display duplicate elements without using the collection framework?
- Struts 2 RequiredString validator: discusses the RequiredString validator of the Struts 2 framework; when validating forms, the required string validator generates an error message.
- Struts 2 E-mail Validator: the email validator in the Struts 2 Framework checks whether a given String field is empty and whether it contains a valid email address.
- Using the Validator Framework: the Validator framework requires two XML files, namely validator-rules.xml and struts-config.xml, so no special installation is necessary to use it.
- Question: in Struts server-side validation you can use programmatic validations and declarative validations — which one is better? And how do you enable the validator plug-in file?
- FRAME: while running a GUI program (JDBC connection), instead of the output frame only two blank frames are displayed; can anyone help me solve this problem?
- Struts: why Struts rather than other frameworks? Struts is used in web-based enterprise applications; Struts 2 can be used with Spring. There are several advantages of Struts that make it popular.
- frame: how to create a frame in Java — sample Swing code using a JFrame, a JLabel ("Address: ") and a JTextField.
- struts ValidatorResources not found in application scope under key "org.apache.commons.validator.VALIDATOR_RESOURCES": I get this error when I try the validator framework example; what could be the problem?
- Struts Console: a free standalone Java Swing application to visually edit Struts, Tiles and Validator configuration files.
- Struts Guide: an extensive guide to the Struts Framework; the Struts framework is an implementation of the Model-View-Controller (MVC) design pattern.
- Open Source Web Frameworks in Java: Struts is maintained as a part of the Apache Jakarta project and is open source; the Struts Framework is suited for applications of any size.
- NoughtsAndCrossesGame play button doesn't work: a Swing game question; the frame is disposed and the application exits when the button is pressed.
- Regarding struts validation: how to validate that a mobile number field has 10 digits and starts with 9 in Struts validation?
- struts html tag: the company I work for uses an "id" attribute on their tags; how can I do this with Struts? I tried and it doesn't work.
- Struts Books: how to work with the Commons Validator and ActionForms, and quickly apply Struts to your work settings with confidence; many applications are written to use the Struts Validator from the get-go.
- Introduction to Struts 2 Framework: a video tutorial of Struts 2 that teaches how to develop your first application using the Struts 2 framework.
- frame update another frame: how do I make a link or form in one frame update another frame?
- XML files used in Validator Framework? Give the details of the XML files used in the Validator Framework.
- Struts Alternative: Struts is a very robust and widely used framework, but there exist alternatives to it; they do not force you to go the XML route, and both technologies will work side by side.
- file download in struts: used the validator in Struts but it didn't work; validate() was used in the form bean.
- Creating Custom Validators in STRUTS: add validation.xml and validator-rules.xml to the directory containing your struts-config.xml; edit struts-config.xml and add the form entries. This application shows the use of the Struts Validator.
- JSF validator Tag: this tag is used to add and register a validator; conversion to the required type is needed and then the specified validator type is invoked.
- what is struts? The core of the Struts framework is a flexible control layer based on standard technologies like Java and the Commons packages; Struts encourages application architectures based on the Model 2 approach.
- Struts first example: which version of Struts is used, Struts 1 or Struts 2? "I am using Struts 2 for work. I have a field price; I want to check..."
- Client Side Address Validation in Struts: the Struts Validator Framework uses this rule for generating client-side validation for the address form; to enable the validator plug-in, open the file struts-config.xml.
- struts - Framework: to be defined in the Commons Validator configuration when dynamicJavascript="true".
- panel and frame: what is the difference between a panel and a frame?
- Java Frame: what is the difference between a Window and a Frame?
- Validator in Flex4: the Validator class validates a text field value for a required field; the tag is <mx:Validator>, declared inside <fx:Declarations>.
- Struts 2 Tutorial: RoseIndia Struts 2 tutorial and online free training; more Struts Validator examples and user input validations against the database.
- billing frame: how to generate a billing frame using Swing which contains sr. no, name of product, quantity, price and total.
- Tags in struts 1: I have a problem with the Include tag in Struts; the tag is used but it does not work. Please explain.
|
http://roseindia.net/tutorialhelp/comment/38628
|
CC-MAIN-2014-10
|
refinedweb
| 1,109
| 56.86
|
Task #1803
Plan replacement of mongodb with postgres
0%
Related issues
History
#1
Updated by ipanova@redhat.com over 3 years ago
- Status changed from NEW to ASSIGNED
#2
Updated by semyers over 3 years ago
I'm in the process of writing all this in more detail in an etherpad, but for now I'll outline the work so far.
pcreech and I started by basically just writing out all of models we had, and trying to organize them in a diagram.
This went poorly, because pulp has four distinct categories of models, and the tool we were using didn't make it very easy to diagram them all together. With all the pulp releasing I've been doing, and all the RHUI pcreech has been doing, all of this finally came together toward the end of last week while jortel was on PTO. Before unleashing the confusing and questionably useful diagrams on the team, I wanted to get jortel's feedback. I was not disappointed.
With his help, we've identified the aforementioned data categories: Repositories, Content Units, RBAC, Tasks. I'm writing up a doc right now that explains what collections we currently have in mongo, to which category they belong, how they currently relate to each other, and some speculation about how we can migrate the data in that category to a more relational design. Since I'll be speculating, this doc will go up in an etherpad for review and improvement by the small team focused on this at the moment (jortel, pcreech, me), before being submitted to the team at large.
Depending on how quickly I get this done and reviewed, the link to that etherpad should most likely be appearing here tomorrow.
#3
Updated by semyers over 3 years ago
- Private changed from No to Yes
The "Relational Pulp" etherpad has been drafted and edited. It is available for internal comment by the entirety of the pulp team.
#4
Updated by semyers over 3 years ago
- Private changed from Yes to No
#5
Updated by semyers over 3 years ago
- Related to Task #1872: Profile Django ORM instantiation cost added
#6
Updated by mhrivnak over 3 years ago
- Related to Task #1873: Plan REST API for 3.0 added
#7
Updated by mhrivnak over 3 years ago
- Related to Task #1874: Plan User/Auth system for 3.0 added
#8
Updated by mhrivnak over 3 years ago
- Sprint/Milestone changed from 19 to 20
#9
Updated by semyers over 3 years ago
The migration plan is largely "ratified", in that objections to it have been (or are currently being) addressed. I feel like we've reached a point where what we don't know outweighs what we do know, and the best way I can think of to bridge the knowledge gap is to finishing modeling pulp out on postgres and start trying to use it. I'll be modifying my relational pulp project to this end over the coming days.
#10
Updated by mhrivnak over 3 years ago
- Sprint/Milestone changed from 20 to 21
#11
Updated by semyers over 3 years ago
Quick update, progress is still being made. pcreech has converted the project from docker to Vagrant, which is awesome. The pulp platform models related to repos and units have been written down, and I'm currently porting RPM's units and repo-related models now so we can start looking at migrating data from nonrel-pulp to rel-pulp to see what explodes and needs to be revisited. :)
#12
Updated by mhrivnak over 3 years ago
- Sprint/Milestone changed from 21 to 22
#13
Updated by mhrivnak over 3 years ago
- Sprint/Milestone changed from 22 to 23
#14
Updated by semyers over 3 years ago
- Status changed from ASSIGNED to CLOSED - CURRENTRELEASE
This is largely done (and by done, I mean now we can start to do it? :D). The rel-pulp doc and related repo are generally accepted by the team, and are now available in the pulp namespace on github; check out the db-translation-guide.md doc.
Most of the work was done by a cabal made up of jortel, pcreech, and myself with lots of great ideas and insights coming from all over the place, including stakeholders not on the pulp team. Thanks to all involved. More meetings will ensue, followed by more redmine tasks.
#15
Updated by bmbouter over 1 year ago
- Sprint set to Sprint 5
#16
Updated by bmbouter over 1 year ago
- Sprint/Milestone deleted (
23)
|
https://pulp.plan.io/issues/1803
|
CC-MAIN-2019-47
|
refinedweb
| 768
| 60.69
|
>>>>> "N" == nicodemus <nicodemus at globalite.com.br> writes:

 >> 1. Something like the C compiler's '-c' option. In this case Pyste
 >> should simply generate the wrapper code and none of the module
 >> code i.e. for an invocation like so:
 >>
 >> pyste.py --out build --multiple -c file1.pyste

 N> I don't understand exactly what you are saying. Are you
 N> suggesting that instead of generating:

 N> #include <...>
 N> BOOST_PYTHON_MODULE(module) {
 N>     class_<A>(...);
 N> }

 N> You want to be able to generate just this:

 N>     class_<A>(...);

 N> ? How does that help you handle dependencies?

No, what I mean is 'pyste.py --multiple -c file.pyste' should generate
this (edited for brevity):

// ------------ _file.cpp ----------
// Includes =====
#include <boost/python.hpp>
// Using =======
using namespace boost::python;
// Declarations ====
namespace {
    struct test_A_Wrapper: test::A {
        [...]
    };
    [ other wrappers etc. ]
}
// Module ===========
void _Export_file_pyste()
{
    class_<...>
}
// ------------ _file.cpp ----------

And without -c this command:

pyste.py --module=test --multiple file1.pyste file2.pyste

should *only* generate:

// ---------- test.cpp ----------
// Include ==============
#include <boost/python.hpp>
// Exports ===============
void _Export_file_pyste();
// Module ================
BOOST_PYTHON_MODULE(test)
{
    _Export_file_pyste();
}
// ---------- test.cpp ----------

I hope this is clear. I think you will see what I am getting at here.
This way if file.pyste changes then only _file.cpp changes. None of the
other wrapper code sources will change, and pyste will run faster since
all you need to process is one of the pyste files. It's much easier for
users to use and incrementally wrap their libraries, and very convenient
for development.

 >> 2. Instead of generating files for each header, it would be useful
 >> if one file were generated per pyste file when --multiple were

 N> Good idea! When I implemented the --multiple option, I didn't
 N> consider dependencies. I will put it in my TODO list; shouldn't be
 N> too hard to implement this.

That would be great! Thanks!

 N> Thanks a lot for your suggestions!

My pleasure!

cheers,
prabhu
|
https://mail.python.org/pipermail/cplusplus-sig/2003-July/004339.html
|
CC-MAIN-2017-17
|
refinedweb
| 314
| 71.51
|
As with everything ever released, there may be errors in it. Please point out any errors you find! :-)
FullScreenHeader
Why did I make this? Well, there is this really good open source util called RealVNC. It is "remote desktop" freeware with very nice clones; check out UltraVNC and TightVNC. Many features have been added, and I've already implemented the caption on each of these versions. Hopefully, this gets integrated in future releases of every VNC. If not, you can download all the versions from my website.
You need to add five resources to your demo project. All pictures should be the same size, but you can decide the width and height. See "customize" for more info.
You also need to add a context menu to the resources; the default name is IDR_tbMENU. If you have set tbLastIsStandard=TRUE then you have to add 3 "default" entries to the menu. From the bottom: Close, Minimize and Restore. All IDs used on each item can be set to e.g. IDC_NULL since they are not used. The flags tbWMCOMMANDIDStart/End set the range for the IDs used in the context menu internally. If you want to add other entries to the context menu, you can add them above the 3 default entries if you're using them. The parent will get a WM_USER+tbWMUSERID+nItem sent to its message queue. (See "Customize" for options that are available to you.)
#include "stdafx.h"
#include "res\\Resource.h"
#include "FullScreenTitleBar.h"

CTitleBar *TitleBar;

int APIENTRY WinMain(HINSTANCE hInstance,
                     HINSTANCE hPrevInstance,
                     LPSTR lpCmdLine,
                     int nCmdShow)

... bar on Win32/MFC. Here are the rest of the public variables:
void Create(HINSTANCE hInst, HWND ParentWindow)
void SetText(LPTSTR TextOut)
void DisplayWindow(BOOL Show, BOOL SetHideFlag=FALSE)
HWND GetSafeHwnd()

DisplayWindow wraps ShowWindow; when the hide flag is set to true it ends up calling ShowWindow(m_hWnd, SW_HIDE). GetSafeHwnd() returns the HWND of the caption bar window.
There are several options that can be changed at compile time for the caption bar. Here is a complete list:
//Width of captionbar
#define tbWidth 500
//Height of captionbar
#define tbHeigth 20
//Default size on picture (cx)
#define tbcxPicture 16
//Default size on picture (cy)
#define tbcyPicture 14
//Topmargin
#define tbTopSpace 3
//Leftmargin
#define tbLeftSpace 20
//Rightmargin
#define tbRightSpace 20
//Space between buttons
#define tbButtonSpace 1
//Font name used in the caption
#define tbFont "Arial"
//Size of font
#define tbFontSize 10
//Color of text
#define tbTextColor RGB(255,255,255)
//Backgroundcolor - Start of gradient
#define tbStartColor RGB(0,128,192)
//Backgroundcolor - End of gradient
#define tbEndColor RGB(0,0,192)
//TRUE = Vertical, FALSE = Horiz. If you don't like gradient set
//tbStartColor=tbEndColor
#define tbGradientWay TRUE
//Color of the border around captionbar
#define tbBorderPenColor RGB(255,255,255)
//Color of the shadow of the captionbar (bottom)
#define tbBorderPenShadow RGB(100,100,100)
//Triangularpoint is how many pixels it should move on the left and right
//side to make the captionbar not so like the other captionbars
#define tbTriangularPoint 10
//Width of the pen used to draw the border
#define tbBorderWidth 2
//Hide window when created
#define tbHideAtStartup TRUE
//Is the pin pushed in or out at startup (INVERTED!)
#define tbPinNotPushedIn FALSE
//Animate window to scroll up/down
#define tbScrollWindow TRUE
//Timer variable for scrolling the window (cycletime) [ms]
#define tbScrollDelay 20
//Scroll the window away in tbAutoScrollTime * tbAutoScrollDelay millisecond
//steps. Meaning if it is 10 then = 10 (steps) * 100ms (tbAutoScrollDelay)
//= 1000ms delay
#define tbAutoScrollTime 10
//Timer id - Internally used but it can make conflicts!
#define tbScrollTimerID 1
//Timer id - Internally used but it can make conflicts!
#define tbAutoScrollTimer 2
//Timer variable for how many times the cursor is not over the window.
//If it reaches tbAutoScrollTime then it will hide if autohide is on
#define tbAutoScrollDelay 100
//Resource ID - closebutton
#define tbIDC_CLOSE 10
//Resource ID - maximizebutton
#define tbIDC_MAXIMIZE 20
//Resource ID - minimizebutton
#define tbIDC_MINIMIZE 30
//Resource ID - pinbutton
#define tbIDC_PIN 40
//FALSE = Send a custom WM message, TRUE = Send Minimize, maximize
//and close to parent (normal SendMessage and ShowWindow commands)
#define tbDefault FALSE
//Message to send to parent on close-event
#define tbWM_CLOSE WM_USER+1000
//Message to send to parent on minimize-event
#define tbWM_MINIMIZE WM_USER+1001
//Message to send to parent on maximize-event
#define tbWM_MAXIMIZE WM_USER+1002
//Resource name for the contextmenu
#define tbMENUID IDR_tbMENU
Sometimes when you delete the caption bar, a destroywindow is called. Since the window is made as a child, it will send a message to the parent asking to close. This might cause the parent to close also. A solution to this problem is to create the caption bar on runtime, hide it and show it as you need it. And when the main program closes, you can destroy the object.
This bug will be sorted out in a later version.
If you find that no messages are being sent to your parent window, please check the tbDefault variable. If the user presses the context menu, it will send a "click" to one of the buttons in the parent, and use the parameters used on the buttons.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
|
http://www.codeproject.com/Articles/6024/Full-Screen-Caption-bar?fid=32937&df=90&mpp=25&sort=Position&spc=Relaxed&tid=761203
|
CC-MAIN-2013-48
|
refinedweb
| 867
| 51.68
|
This thread over at GameDev got me thinking, “can one assign Python-like tuples in C++?”
I don’t want to pollute the thread in For Beginners with that discussion, but the answer is yes, even without C++11 initialiser lists:
#include <iostream>

struct A {
    A &operator = (int i) {
        std::cout << "A = " << i << std::flush;
        return *this;
    }
    A &operator , (int i) {
        std::cout << ", " << i << std::flush;
        return *this;
    }
};

int main() {
    A a;
    a = 10, 20, 30;
    std::cout << std::endl;
}
Should you ever do this? Probably not. Though I’m guessing one of Boost’s container libraries is doing exactly this.
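For what it's worth, Boost.Assign overloads operator+= and operator, in just this way (my example, not from the post):

// Boost.Assign's list_inserter chains comma-separated values onto a container:
#include <boost/assign/std/vector.hpp>
#include <vector>

int main() {
    using namespace boost::assign;
    std::vector<int> v;
    v += 10, 20, 30;  // appends all three values to v
}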
|
http://swiftcoder.wordpress.com/2013/06/15/fun-with-commas/
|
CC-MAIN-2013-48
|
refinedweb
| 101
| 57.95
|
02 August 2012 21:06 [Source: ICIS news]
HOUSTON (ICIS)--US chlor-alkali producer Westlake is seeking a 5 cent/lb ($110/tonne, €90/tonne) increase for domestic polyvinyl chloride (PVC) contracts, effective 1 September, a company source said on Thursday.
Two other producers have announced similar increases, sources said, but those were not confirmed.
Pipe-grade PVC is assessed at 51-56 cents/lb, and general purpose PVC at 54-58 cents/lb.
The Westlake source said the increase is based on two factors. First, domestic demand for PVC, which is used widely in home construction, tends to rise in autumn before dropping in the winter.
Second, the price for primary feedstock ethylene has rebounded in recent weeks after dropping in the previous months.
The source said the proposed increase is driven mostly by cost factors, as demand for PVC is said to be still steady but not strong.
($1 = €0
|
http://www.icis.com/Articles/2012/08/02/9583491/us-westlake-seeks-5-centlb-pvc-increase-for-1-september.html
|
CC-MAIN-2015-14
|
refinedweb
| 155
| 59.33
|
>> does.

sigh.. slow down please, will you? you addressed this reply to me. yet I have been careful, each time this topic comes up, to argue neither in favour nor against pattern guards. instead, my purpose has been to clarify misconceptions, eg., by demonstrating how pattern guards, even though they do add substantial convenience, do not add fundamentally new expressiveness (they can be replaced by a local rewrite), or in this case by providing the examples Iavor asked for, showing the difference in three uses of '<-' as generators and booleans as guards.

I do not mind if pattern guards make it into Haskell, precisely because I know how to sugar them away - once all implementations support them, I might even use them more. Nevertheless, I wanted to support Yitzchak's argument, namely that '<-' is used for generators in monadic contexts, but its use in pattern guards is different from that in list comprehensions and do. pattern guards are useful, once explained, but there is nothing particularly obvious about them, nor is there only one way to formulate them: the usual argument that they are just list comprehension syntax transferred to guards breaks down because of the differences Yitzchak is concerned about, and the correspondent claiming to be apfelmus has already shown that a direct embedding of Maybes would be at least as natural as the current implicit embedding into the effect-free part of an unknown monad.

? sure, the one you gave right there. to be consistent with other uses of '<-' as a generator, I'd expect to write either

f value | match <- lookup value list = g match

or

f value | Just match <- return (lookup value list) = g match

Claus
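To make the "local rewrite" point concrete, here is one way a pattern guard sugars away into a case expression (my example, not from the thread):

list :: [(Int, String)]
list = [(1, "one"), (2, "two")]

-- with a pattern guard:
f :: Int -> String
f value | Just match <- lookup value list = match ++ "!"
        | otherwise                       = "missing"

-- the same function after the local rewrite:
f' :: Int -> String
f' value = case lookup value list of
             Just match -> match ++ "!"
             _          -> "missing"

main :: IO ()
main = mapM_ (putStrLn . f) [1, 3]  -- prints "one!" then "missing"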
|
http://www.haskell.org/pipermail/haskell-prime/2006-December/001975.html
|
CC-MAIN-2013-48
|
refinedweb
| 280
| 51.82
|
At this week’s ACCU Conference I went to an excellent talk by Sven Rosvall entitled “Unit Testing Beyond Mock Objects”.
The talk covered the newer Java and C# unit testing frameworks that allow inserting mock objects even where legacy code is using some undesired dependency directly, meaning there is no seam where you can insert a different implementation.
These tools solve a real problem, and could be useful.
However, I want to move the discussion in a different direction: can we avoid mocking altogether, and end up with better code?
Sven gave us an example of some legacy code a little bit like this. (I translated into C++ just for fun.)
// Return true if it's between 12:00 and 14:00 now. bool is_it_lunch_time_now() { system_clock::time_point now = system_clock::now(); time_t tt = system_clock::to_time_t( now ); tm local_tm = ( *localtime( &tt ) ); int hour = local_tm.tm_hour; return ( 12 <= hour && hour < 14 ); }
To test this code, we would have something like this:
TEST_CASE( "Now: It's lunchtime now (hopefully)" ) { // REQUIRE( is_it_lunch_time_now() ); // NOTE: only run at lunch time! REQUIRE( !is_it_lunch_time_now() ); // NOTE: never run at lunch time! }
Which is agony. But the production code is fine:
if ( is_it_lunch_time_now() ) { eat( sandwich ); }
So, the normal way to allow mocking out a dependency like this would be to add an interface, making our lunch-related code take in a TimeProvider. To avoid coupling the choice of which TimeProvider is used to the calling code, we pass it into the constructor of a LunchCalculator that we plan to make early on:
class TimeProvider { public: virtual tm now_local() const = 0; }; class RealTimeProvider : public TimeProvider { public: RealTimeProvider() { } virtual tm now_local() const { system_clock::time_point now = system_clock::now(); time_t tt = system_clock::to_time_t( now ); return ( *localtime( &tt ) ); } }; class HardCodedHourTimeProvider : public TimeProvider { public: HardCodedHourTimeProvider( int hour ) { tm_.tm_hour = hour; } virtual tm now_local() const { return tm_; } private: tm tm_; }; class LunchCalc { public: LunchCalc( const TimeProvider& prov ) : prov_( prov ) { } bool is_it_lunch_time() { int hour = prov_.now_local().tm_hour; return ( 12 <= hour && hour < 14 ); } private: const TimeProvider& prov_; };
and now we can write tests like this:
TEST_CASE( "TimeProvider: Calculate lunch time when it is" ) { HardCodedHourTimeProvider tp( 13 ); // 1pm (lunch time!) REQUIRE( LunchCalc( tp ).is_it_lunch_time() ); } TEST_CASE( "TimeProvider: Calculate lunch time when it isn't" ) { HardCodedHourTimeProvider tp( 10 ); // 10am (not lunch :-( ) REQUIRE( ! LunchCalc( tp ).is_it_lunch_time() ); }
Innovatively, these tests will pass at all times of day.
However, look at the price we’ve had to pay: 4 new classes, inheritance, and class names ending with unKevlinic words like “Provider” and “Calculator”. (Even worse, in a vain attempt to hide my embarrassment I abbreviated to “Calc”.)
We’ve paid a bit of a price in our production code too:
// Near the start of the program: config.time_provider = RealTimeProvider(); // Later, with config passed via PfA if ( LunchCalc( config.time_provider ).is_it_lunch_time() ) { eat( sandwich ); }
The code above where we create the RealTimeProvider is probably not usefully testable, and the class RealTimeProvider is also probably not usefully testable (or safely testable in the case of some more dangerous dependency). The rest of this code is testable, but there is a lot of it, isn’t there?
The advantage of this approach is that we have been driven to a better structure. We can now switch providers on startup by providing config, and even code way above this stuff can be fully exercised safe in the knowledge that no real clocks were poked at any time.
But. Sometimes don’t you ever think this might be better?
tm tm_local_now() // Not testable { system_clock::time_point now = system_clock::now(); time_t tt = system_clock::to_time_t( now ); return ( *localtime( &tt ) ); } bool is_lunch_time( const tm& time ) // Pure functional - eminently testable { int hour = time.tm_hour; return ( 12 <= hour && hour < 14 ); }
The tests look like this:
TEST_CASE( "Functional: Calculate lunch time when it is" ) { tm my_tm; my_tm.tm_hour = 13; // 1pm (lunch time!) REQUIRE( is_lunch_time( my_tm ) ); } TEST_CASE( "Functional: Calculate lunch time when it isn't" ) { tm my_tm; my_tm.tm_hour = 10; // 10am (not lunch time :-( ) REQUIRE( ! is_lunch_time( my_tm ) ); }
and the production code looks very similar to what we had before:
if ( is_lunch_time( tm_local_now() ) ) { eat( sandwich ); }
Where possible, let’s not mock stuff we can just test as pure, functional code.
Of course, the production code shown here is now untestable, so there may well be work needed to avoid calling it in a test. That work may involve a mock. Or we may find a nicer way to avoid it.
11 thoughts on “Avoid mocks by refactoring to functional”
… What about just mocking what you use? Stop trying to make C++ into Java with EasyMock or C# with RhinoMock, use something tailored to C++ (and I mean all of C++).
#include “hippomocks.h”
TEST_CASE( “Calculate lunch time when it is” )
{
MockRepository mocks;
mocks.ExpectCall(system_clock::now).Return(system_clock::lunchtime);
REQUIRE( is_lunch_time() );
}
TEST_CASE( “Calculate lunch time when it isn’t” )
{
MockRepository mocks;
mocks.ExpectCall(system_clock::now).Return(system_clock::dinnertime);
REQUIRE( ! is_lunch_time() );
}
Stop using C++ as if it’s a badly implemented Java or a badly implemented Haskell. Use it for what it’s worth.
I have been thinking on how to mock avoiding to,write an interface.
You could do this, though it is not superior to the last solution:
extern std::function<bool()> is_it_lunch_time_now;
Default implementation goes in .cpp.
In test code:
is_lunch_time_now = /*mock impl*/;
The function type is bool() but the website swallowed things between less-than and greater-than symbols, due to markup I guess.
You can avoid writing an interface doing this, but needs source code modification:
extern std::function<bool()> is_it_lunch_time_now;
.cpp file:
is_it_lunch_time_now = []() -> bool { … };
test file, rebind function:
is_it_lunch_time_now = /*mock implementation*/
Peter: it looks like hoppomocks gives you the ability to insert a seam where there isn’t one, like PowerMock which was the subject of the talk that inspired this post. This sounds like a helpful thing to be able to do when you’re in a situation where there is a lot of code already using untestable code, but I am trying to think about how we would strucure code ideally. In my opinion, in the ideal case we wouldn’t rely on “magic” like that. Of course, that argument relies on us sharing understanding of what is “magic” and what is perfectly good and normal code.
Germán: thank you, I fixed the less-thans.
Germán: yes, that is another way to avoid mocks. But isn’t the functional style for is_lunch_time() nicer anyway, even if we also implement is_it_lunch_time_now() based on it?
> Of course, that argument relies on us sharing understanding of what is “magic” and what is perfectly good and normal code.
To me it is the discussion of what exactly an interface is. Is it only an interface if you have a C++ class with pure virtual methods that you can replace with a mock object? Or is a set of C functions with a coherent goal also an interface?
I tend to the latter. Having code that does not require mocks to test is inherently better, but you will have to tie those bits of code together *somehow*, and you’ll need to check that the whole of those bits of code does what you expect it to do. You can check a key and a lock, but until you check them together you don’t know you have the *right* key.
Hi Peter, yes – it is legitimate to use natural seams (e.g. a mock implementation that matches the declared function signature of the real dependency) that are available in C++ that are not in a language like Java. However, I’d still prefer to write code whose behaviour depends only on its input where possible, and doesn’t use an untestable depenency call in its implementation. I do admit you’ve suggested a good way to replace that dependency call, but I’d still rather not have it.
First – good to see you using Catch for your test framework :-)
Second – I completely agree with your thought processes here. It’s one of the reasons that truly functional code is so much easier to reason about (once you get your head around the paradigm in the first place) – but it all has it’s start in straightforward stuff like this.
Thirdly – despite a different language and focus – it echoes a lot of what I wrote here: (which I know you’ve seen because you were the first commenter ;-) )
Phil – yes, this was my first go with CATCH. I enjoyed it.
We are in total agreement :-) If only everyone listened to us …?
(That link should be )
|
https://www.artificialworlds.net/blog/2014/04/11/avoid-mocks-by-refactoring-to-functional/
|
CC-MAIN-2020-45
|
refinedweb
| 1,403
| 60.55
|
This is what we are going to do for our sample application: we will populate a DataGrid with sample data and search for an item in the DataGrid. Go ahead and design it in Expression Blend 3.

If you look into the Objects and Timeline pane you will find the following hierarchy:

Add a class named Users.cs to define the properties of the sample data:
Add the following properties:
public class Users
{
    public string Name { get; set; }
    public int Age { get; set; }
    public string Gender { get; set; }
    public string Country { get; set; }
}
Now in MainPage.xaml.cs create the sample data and assign the ItemsSource to the list:

public MainPage()
{
    InitializeComponent();
    List<Users> myList = new List<Users>
    {
        …
    };
}

If you run your application you will get the DataGrid with sample data populated:
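To finish the wiring, here is a sketch of binding the list and scrolling a found item into view; the control name myDataGrid, the searched name, and the using System.Linq directive are assumptions for illustration, not from the article:

// Bind the sample data, then scroll a matching row into view.
myDataGrid.ItemsSource = myList;

Users found = myList.FirstOrDefault(u => u.Name == "John");
if (found != null)
{
    myDataGrid.SelectedItem = found;
    // Silverlight's DataGrid.ScrollIntoView takes the item and a column
    myDataGrid.ScrollIntoView(found, myDataGrid.Columns[0]);
}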
Thank you
|
http://www.dotnetspark.com/kb/1821-silverlight-scrollintoview-datagrid.aspx
|
CC-MAIN-2017-26
|
refinedweb
| 145
| 51.11
|
Before starting this Tutorial you should know
Please follow the steps below in order to get all data from SQLite database table and show it on your Android:
Step 1) First of all we're going to add a button, and this button we are going to use to view all data. So we change the text to "view all", and I'm also going to change the button ID.

Step 2) Now once our button is created, we're going to create a method to get all the table data. So we go to our DatabaseHelper class, and inside this class we create a new method, which we call getAllData; it will be a public method, and it's going to return an instance of a class called Cursor.

This Cursor will show some error, so you just need to press Alt+Enter to import the class, which is the Cursor class.

Step 3) Now we're going to create the instance of the database class just as we have done in the previous tutorial.

Step 4) Then we will create an instance of our Cursor class and name it, for example, 'res' for result. Then we take the instance of our database and call a raw query on this database. And if you know how to query a SQLite database, you may know how to query all the data from the table. So this is the simple query we are going to write.

The rawQuery method takes a second argument, which we are going to pass as null for now.

Note: the asterisk (*) stands for all columns of the table.

And now we just need to return the instance of the cursor, which is res.

This Cursor class is the interface which provides random read-write access to your result. So you can see here we are querying the database, storing the result in the Cursor instance, and using this we have access to our data.
Read Here About How to insert data into SQLite database in Android
Step 5) Now we go to our MainActivity.java class, and in here, first of all, we will create the variable for the view-all button and also cast this button.

Step 6) And now once we have cast our button, we can use the button object to call setOnClickListener. Let's create a method; it will be a public method which takes no arguments, and in it we take the object of our view-all button and call setOnClickListener on it, and inside this we create a new OnClickListener, so that when this button is clicked we can perform some action.

Step 7) Now we are going to get all the data using the function which we have just created, getAllData. We will use the instance of our DatabaseHelper class, which is myDb, and we save the result as a Cursor, because the method returns a Cursor object. Now the object 'res' has some properties: if we call this 'res' object we can get the count of rows in the result. If this count is equal to 0, then it means that there is no data available for us from the getAllData query to the database.

Step 8) If there is no result then we are going to show an error and return; otherwise, if there is some result, we are going to create a string buffer and then display this data. So we create an instance of StringBuffer, and then we get all the data one by one using this res object. How can we do it? We can use a while loop, and as the condition of this while loop we take the res object and call the method moveToNext.

Step 9) Then we are going to get the result and store it in the buffer. In order to append the result, we will write the name of each of our columns and then the index of the column.

So the index of the columns starts from zero. And if you remember, our table contained four columns: the first was ID, the second was name, the third was surname and the fourth was marks. So the index of ID will be 0, the index of name will be 1, the index of surname will be 2 and the index of marks will be 3.

Note: Just give a double line break after the last column so that the next row is printed after a line break.
Step 10) Now we want to show all the data. So let's create a new method; it will also be a public method and it will return nothing, so void. This method is going to take 2 arguments: the first is the title and the second is the message itself.

Step 11) And in here we are going to create an instance of AlertDialog.Builder, which takes the context as an argument, so 'this'. Using this builder we can create an alert dialog; we can set the title and set the message using this builder. So first of all, let's use this builder to set cancelable, then we will set the title, and then we will set the message. And then we can just call the show method on this builder. This will show our alert dialog.

So just copy this showMessage function here. First of all, if no data is found, we're going to show a message.

And when the data is found, we can just show the data in the dialog.

Step 12) One more thing which is remaining: we need to call the viewAll() method inside our onCreate method, so just copy this viewAll method call and paste it inside the onCreate method of your MainActivity.

Okay, now let's run our program. So our app is running now; when we click this view all button we will be able to see the data.
Complete Code
// In DatabaseHelper:
public Cursor getAllData() {
    SQLiteDatabase db = this.getWritableDatabase();
    Cursor res = db.rawQuery("select * from " + TABLE_NAME, null);
    return res;
}

// Column names, also in DatabaseHelper:
public static final String COL_1 = "ID";
public static final String COL_2 = "Name";
public static final String COL_3 = "Surname";
public static final String COL_4 = "Marks";

public class MainActivity extends ActionBarActivity {
    DatabaseHelper myDb;
    EditText editName, editSurname, editMarks;
    Button btnAddData;
    Button btnviewAll;

    public void viewAll() {
        btnviewAll.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Cursor res = myDb.getAllData();
                if (res.getCount() == 0) {
                    // no rows found: show a message and stop
                    // (the strings here are placeholders)
                    showMessage("Error", "Nothing found");
                    return;
                }
                // collect every row into one string, one field per line
                StringBuffer buffer = new StringBuffer();
                while (res.moveToNext()) {
                    buffer.append("Id: " + res.getString(0) + "\n");
                    buffer.append("Name: " + res.getString(1) + "\n");
                    buffer.append("Surname: " + res.getString(2) + "\n");
                    buffer.append("Marks: " + res.getString(3) + "\n\n");
                }
                showMessage("Data", buffer.toString());
            }
        });
    }

    public void showMessage(String title, String message) {
        AlertDialog.Builder builder = new AlertDialog.Builder(this);
        builder.setCancelable(true);
        builder.setTitle(title);
        builder.setMessage(message);
        builder.show();
    }
}
https://www.stechies.com/data-from-sqlite-database-android/
James Pier930 Points
Error: Could not find or load main class Example
Hello -
I have no idea why this isn't working; this is for the Object Inheritance Java course. When I try to compile and run, it gives the error "Could not find or load main class Example". I have the Example.java file in the same parent directory as the com folder. This format worked in the previous exercise.
package com.teamtreehouse;

import java.util.Date;

public class Example {

    public static void main(String[] args) {
        Treet treet = new Treet(
            "craigsdennis",
            "this is a tweet",
            new Date(1489438425L)
        );
        System.out.printf("This is a new Treet: %s %n", treet);
    }
}
2 Answers
James Pier930 Points
Hi Matthias,
That is my actual code posted above. I ran the compiler and had both the class and the file named Example.java, but it still wasn't working. I ended up just starting with a fresh workspace and copying in the code from earlier exercise files, so I'm good to go.
Thanks!
James
Matthias Margot7,236 Points
Happy to hear that. In the end it might just have been a bug in the current Treehouse workspace, if you did indeed use exactly the same code and file names in the new one.
moritz lehnhardt429 Points
I have the same problem today! I tried everything above. Support is in the loop.
Matthias Margot7,236 Points
Hi,
A couple of reasons why that might be:
- The most obvious one first: you might not have compiled your file ('javac Example.java' in the console).
- When you created the file, you named it something other than Example.java (in the file directory, not in the actual code). In that case, right-click the file associated with Example.java, choose rename, and give it the same name as your class.
- This can't be the case if the code you're showing is your actual code, but this error also occurs if your main() method isn't in the file you are trying to execute.
Make sure all of these properties are matched and try running it again.
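One more thing worth checking, though this is an assumption about the setup rather than something from the thread: because the file declares package com.teamtreehouse;, the compiled class must be run by its fully qualified name from the directory above the com folder, for example:

javac -d . Example.java
java com.teamtreehouse.Example

Running plain java Example fails with exactly this "Could not find or load main class" error, because the class's real name is com.teamtreehouse.Example.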
Hope this will make your code run
Matthias :)
https://teamtreehouse.com/community/error-could-not-find-or-load-main-class-example-3
Hi CLISP devels,
The usual preface: I do _NOT_ think of the SCREEN/*KEYBOARD-INPUT* topic
as really urgent, but Sam had asked in the "feature request" tracker, so
here is a detailed list of problems I had with EXT:*KEYBOARD-INPUT* in
the past.
Sorry for posting this on the devel-list but for a comment in the
feature-tracker it's a bit too long.
Sam's original comment from the feature tracker was:
> SDS wrote:
>
>(?) and
> make it more emacs-compatible.
To demonstrate the problems I use the LOOP example from the CLISP Impnotes,
Chapter 21.2.2., "Macro EXT:WITH-KEYBOARD", that prints all keystrokes on
the screen until the user hits the spacebar:
(ext:with-keyboard
(loop :for char = (read-char ext:*keyboard-input*)
:for key = (or (ext:char-key char) (character char))
:do (print (list char key))
:when (eql key #\Space) :return (list char key)))
1.) The EXT:*KEYBOARD-INPUT* stream works byte-oriented and does not
produce the expected results with multi-byte unicode characters.
A uni-byte ASCII character #\a produces the correct result:
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\a :BITS 0 :FONT 0 :KEY NIL) #\a)
A two-byte unicode character #\ä produces two wrong results:
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\LATIN_CAPITAL_LETTER_A_WITH_TILDE
:BITS 0 :FONT 0 :KEY NIL) #\LATIN_CAPITAL_LETTER_A_WITH_TILDE)
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\CURRENCY_SIGN :BITS 0 :FONT 0 :KEY NIL)
#\CURRENCY_SIGN)
while the correct answer would be:
(character #\ä) => #\LATIN_SMALL_LETTER_A_WITH_DIAERESIS
2.) An arrow-key pressed with no other key produces the correct result:
(#S(SYSTEM::INPUT-CHARACTER :CHAR NIL :BITS 8 :FONT 0 :KEY :LEFT) :LEFT)
while an arrow-key if pressed together with Shift, Control, or Meta [Alt]
produces an escape-sequence:
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\Escape :BITS 0 :FONT 0 :KEY NIL) #\Escape)
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\[ :BITS 0 :FONT 0 :KEY NIL) #\[)
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\1 :BITS 0 :FONT 0 :KEY NIL) #\1)
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\; :BITS 0 :FONT 0 :KEY NIL) #\;) ; <-[!]
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\5 :BITS 0 :FONT 0 :KEY NIL) #\5)
(#S(SYSTEM::INPUT-CHARACTER :CHAR #\D :BITS 0 :FONT 0 :KEY NIL) #\D)
Many escape-sequences contain a semicolon, which in a Lisp character stream
produces very nasty side-effects, because in a character stream a semicolon
is understood as the beginning of a comment, and a #\Newline or END-OF-FILE
is understood as the end of a comment. Because escape-sequences are usually
_NOT_ terminated by a #\Newline or END-OF-FILE, the Lisp reader gets stuck
in "comment mode" until a #\Newline or END-OF-FILE appears in the stream
by accident. Because EXT:*KEYBOARD-INPUT* is an INPUT stream, I cannot write
an artificial #\Newline character to it to terminate the escape-sequence.
This way it's impossible to write an escape-sequence parser on the Lisp level.
The detailed problems with EXT:*KEYBOARD-INPUT* in CLISP 2.49+ CVS HEAD are:
READ-CHAR
- Recognizes multi-byte unicode characters as multiple uni-byte ASCII
characters and reads them as several characters if invoked sequentially.
- With escape-sequences containing a semicolon, READ-CHAR does _NOT_
consider the semicolon as the beginning of a comment [what is exactly the
opposite behaviour to all other functions below], instead the semicolon
and everything after is read as ordinary uni-byte ASCII characters
without hanging.
READ-CHAR-NO-HANG
- Recognizes multi-byte unicode characters correctly, but reads only the
first byte and returns wrong results with multi-byte characters. There
currently seems to be no way to find out whether the return value of
READ-CHAR-NO-HANG is correct or not.
- With escape-sequences containing a semicolon, READ-CHAR-NO-HANG considers
the semicolon as the end of the stream [and probably everything afterwards
as a comment], with the consequence that the semicolon and everything
after it is left in the EXT:*KEYBOARD-INPUT* stream and re-appears at the next
invocation of READ-CHAR.
- A READ-CHAR-NO-HANG return value of NIL does not necessarily mean that
the *keyboard-input* stream is empty.
READ-CHAR-WILL-HANG-P
- Returns T, even if there are comment characters in the stream [left from
an escape-sequence containing a semicolon], which can be read by READ-CHAR
without hanging.
PEEK-CHAR
- PEEK-CHAR with an EOF-ERROR-P argument of NIL cannot be used to test
the end of EXT:*KEYBOARD-INPUT*, because if EXT:*KEYBOARD-INPUT* is empty,
PEEK-CHAR hangs until a new SYS::INPUT-CHARACTER appears in the stream.
CLEAR-INPUT
- (CLEAR-INPUT EXT:*KEYBOARD-INPUT*) does not reliably clear
EXT:*KEYBOARD-INPUT*. Comment characters [left from an escape-sequence
containing a semicolon] are still in the stream afterwards, probably
because the comment is not terminated by a #\Newline or END-OF-FILE
and is understood as an "unterminated comment".
READ, UNREAD-CHAR, and READ-LINE
- all three only work with Common Lisp standard characters and signal a
"wrong-type" error with SYS::INPUT-CHARACTERs.
Summary:
IMO the main problem is that EXT:*KEYBOARD-INPUT* is implemented as a
Lisp character stream with a ton of exception handling on the C level
(e.g. lots of terminal escape sequences etc., but obviously still not
enough).
Does it really make sense to overload the "exception handling" even more,
or would it be better to implement EXT:*KEYBOARD-INPUT* as a byte-stream,
which would make it much easier to write custom parsers on the Lisp level?
A Lisp parser is not necessarily less work or less complicated than a
parser written in C, but a Lisp programmer would have the chance to
adapt the parser much easier to his/her own needs.
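For illustration, a minimal sketch of what such a Lisp-level parser could
look like over a hypothetical byte-oriented keyboard stream (the stream and
its byte API are assumptions, not existing CLISP interfaces):

(defun read-csi-sequence (stream)
  ;; Hypothetical: assumes STREAM is a byte stream readable with READ-BYTE
  ;; and that the introducing ESC [ bytes have already been consumed.
  ;; Collects parameter/intermediate bytes up to and including the final
  ;; byte of a CSI sequence, which lies in the range #x40..#x7E.
  (loop :for b = (read-byte stream)
        :collect b
        :until (<= #x40 b #x7E)))

Here a semicolon is just the byte 59, so it cannot confuse the reader.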
Thanks,
- edgar
Feature Requests item #1339718, was opened at 2005-10-27 13:11
Message generated for change (Comment added) made by sds
You can respond by visiting:
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
>Category: UI
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Sam Steingold (sds)
>Assigned to: Nobody/Anonymous (nobody)
>Summary: screen & keyboard interaction
Initial Comment:
add setfable accessors:
(SCREEN:TEXT-COLOR window-screen)
(SCREEN:BACKGROUND-COLOR window-screen)
----------------------------------------------------------------------
>Comment By: Sam Steingold (sds)
Date: 2010-10-05 16:46
(?) and make it more emacs-compatible.
----------------------------------------------------------------------
Comment By: Sam Steingold (sds)
Date: 2005-11-02 09:46
Logged In: YES
user_id=5735
for unix you can use ncurses
we already require them for readline.
tree of sreams is not really necessary:
just add x-offset/y-offset/width/height arguments
to window-stream creation.
----------------------------------------------------------------------
Comment By: Arseny Slobodyuk (ampy)
Date: 2005-11-01 20:06
Logged In: YES
user_id=553883
I thought about it 1000 times, and since I don't know how to
use colors on UNIX, I didn't implement it. A tree of streams is
too cool; on Windows there is a system function to scroll a
console region, and it is even used in the current SCREEN code
to scroll the whole screen. Having a Lisp interface to it, one can build
his own nice libraries, using generic streams as in the nice
example given above. Again, there is the question of how to
make this scroll function portable. And how to name colors
portably: on Windows there's Red, Green, Blue with/without
intensity for text, and the same R, G, B with/without blinking for
background.
----------------------------------------------------------------------
Comment By: Don Cohen (donc)
Date: 2005-11-01 16:52
Logged In: YES
user_id=8842
I have some code (actually quite old, but still in use) that
does the
scrolling subwindow part. I haven't tried it with screen in
some time
so it might not quite work there. I gave up on screen a
long time ago
and use ansi terminal emulation instead. It turns out this
is not so
easy to get these days in win-xp, but pscp does it, so one
way to use
this stuff from win-xp is to ssh to a machine with an ssh
server.
(Anyone know where to get an ssh server for win-xp?)
Ansi terminal emulation is also a good way to control colors.
I can offer code for that too.
I've put the window code at
----------------------------------------------------------------------
Comment By: Sam Steingold (sds)
Date: 2005-10-27 13:32
Logged In: YES
user_id=5735
another nice thing is being able to split the screen
so that scrolling in different parts is done independently.
I remember being able to do something like that
with Turbo Pascal 5.5 15 years ago. :-)
(this would make it necessary to introduce a _tree_ of
screen streams - maybe too hairy)
----------------------------------------------------------------------

1. clisp/src ChangeLog,1.7544,1.7545 makemake.in,1.952,1.953
(Sam Steingold)
2. clisp/src ChangeLog,1.7545,1.7546 zthread.d,1.89,1.90
(Vladimir Tzankov)
3. clisp/src zthread.d,1.90,1.91 (Sam Steingold)
4. clisp/modules/syscalls syscalls.xml,1.136,1.137 (Sam Steingold)
5. clisp/modules/syscalls calls.c,1.320,1.321 (Sam Steingold)
6. clisp/src ChangeLog,1.7546,1.7547 (Sam Steingold)
7. clisp/modules/syscalls test.tst,1.109,1.110 (Sam Steingold)
----------------------------------------------------------------------
Message: 1
Date: Mon, 04 Oct 2010 15:01:46 +0000
From: Sam Steingold <sds@...>
Subject: clisp/src ChangeLog,1.7544,1.7545 makemake.in,1.952,1.953
To: clisp-cvs@...
Message-ID: <E1P2mXy-0004Uz-Rs@...>
Update of /cvsroot/clisp/clisp/src
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv17263/src
Modified Files:
ChangeLog makemake.in
Log Message:
* src/makemake.in (full): show which system supplied functionality is
replaced by gnulib
Index: makemake.in
===================================================================
RCS file: /cvsroot/clisp/clisp/src/makemake.in,v
retrieving revision 1.952
retrieving revision 1.953
diff -u -d -r1.952 -r1.953
--- makemake.in 3 Oct 2010 02:17:06 -0000 1.952
+++ makemake.in 4 Oct 2010 15:01:44 -0000 1.953
@@ -3267,6 +3267,8 @@
echotab "\$(RMRF) full"
test "${with_dynamic_modules}" = no || echotab "rm -rf dynmod; mkdir dynmod"
echotab "MAKE=\$(MAKE) CLISP=\"${HEREP}/clisp ${someflags}\" ${HERE}clisp-link add base full \$(MODULES) || (\$(RMRF) full ; exit 1)"
+# show which system supplied functionality is replaced by gnulib
+echotab 'grep "define REPLACE_.*1" `find . -name config.log`'
cygwin_finish full
echol "mod-check : base-mod-check full-mod-check"
Index: ChangeLog
===================================================================
RCS file: /cvsroot/clisp/clisp/src/ChangeLog,v
retrieving revision 1.7544
retrieving revision 1.7545
diff -u -d -r1.7544 -r1.7545
--- ChangeLog 3 Oct 2010 20:40:58 -0000 1.7544
+++ ChangeLog 4 Oct 2010 15:01:43 -0000 1.7545
@@ -1,3 +1,8 @@
+2010-10-04 Sam Steingold <sds@...>
+
+ * makemake.in (full): show which system supplied functionality is
+ replaced by gnulib
+
2010-10-03 Vladimir Tzankov <vtzankov@...>
fix bug#3077583: show-stack segfaults with MT
------------------------------
Message: 2
Date: Mon, 04 Oct 2010 18:48:53 +0000
From: Vladimir Tzankov <vtz@...>
Subject: clisp/src ChangeLog,1.7545,1.7546 zthread.d,1.89,1.90
To: clisp-cvs@...
Message-ID: <E1P2q5l-0003rW-Ju@...>
Update of /cvsroot/clisp/clisp/src
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv14814/src
Modified Files:
ChangeLog zthread.d
Log Message:
fix default thread name (on OSX there was segfault if function argument was not a closure)
Index: zthread.d
===================================================================
RCS file: /cvsroot/clisp/clisp/src/zthread.d,v
retrieving revision 1.89
retrieving revision 1.90
diff -u -d -r1.89 -r1.90
--- zthread.d 3 Oct 2010 20:40:58 -0000 1.89
+++ zthread.d 4 Oct 2010 18:48:51 -0000 1.90
@@ -77,6 +77,22 @@
return name_arg;
}
+/* return default thread name depending on the type of function
+ > fun: functionp object
+ < returns default name to be used of none is specified */
+local object default_thread_name(object fun) {
+ if (subrp(fun))
+ return TheSubr(fun)->name;
+ else if (cclosurep(fun))
+ return Closure_name(fun);
+#ifdef DYNAMIC_FFI
+ else if (ffunctionp(fun))
+ return TheFfunction(fun)->ff_name;
+#endif
+ else /* interpreted closure */
+ return TheIclosure(fun)->clos_name;
+}
+
/* releases the clisp_thread_t memory of the list of Thread records */
global void release_threads (object list) {
/* Nothing to do here actually. In the past the memory of some
@@ -219,9 +235,7 @@
#endif
clisp_thread_t *me=(clisp_thread_t *)arg;
set_current_thread(me); /* first: initialize TLS */
- var struct backtrace_t bt;
me->_SP_anchor=(void*)SP();
- back_trace = NULL; /* no back trace */
pushSTACK(O(thread_exit_tag)); /* push the exit tag */
var gcv_object_t *initial_bindings = &STACK_1;
var gcv_object_t *funptr = &STACK_2;
@@ -301,7 +315,7 @@
if (!functionp(STACK_2))
STACK_2 = check_function_replacement(STACK_2);
/* set thread name */
- STACK_1 = check_name_arg(STACK_1,Closure_name(STACK_2));
+ STACK_1 = check_name_arg(STACK_1,default_thread_name(STACK_2));
/* do allocations before thread locking */
pushSTACK(allocate_thread(&STACK_1)); /* put it in GC visible place */
pushSTACK(allocate_cons());
Index: ChangeLog
===================================================================
RCS file: /cvsroot/clisp/clisp/src/ChangeLog,v
retrieving revision 1.7545
retrieving revision 1.7546
diff -u -d -r1.7545 -r1.7546
--- ChangeLog 4 Oct 2010 15:01:43 -0000 1.7545
+++ ChangeLog 4 Oct 2010 18:48:51 -0000 1.7546
@@ -1,3 +1,8 @@
+2010-10-04 Vladimir Tzankov <vtzankov@...>
+
+ * zthread.d (default_thread_name): returns name for functionp object
+ (MAKE-THREAD): use it
+
2010-10-04 Sam Steingold <sds@...>
* makemake.in (full): show which system supplied functionality is
------------------------------
Message: 3
Date: Mon, 04 Oct 2010 21:52:15 +0000
From: Sam Steingold <sds@...>
Subject: clisp/src zthread.d,1.90,1.91
To: clisp-cvs@...
Message-ID: <E1P2sxD-0006pT-31@...>
Update of /cvsroot/clisp/clisp/src
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv26249
Modified Files:
zthread.d
Log Message:
comment: default_thread_name is similar to functions.lisp:function-name
Index: zthread.d
===================================================================
RCS file: /cvsroot/clisp/clisp/src/zthread.d,v
retrieving revision 1.90
retrieving revision 1.91
diff -u -d -r1.90 -r1.91
--- zthread.d 4 Oct 2010 18:48:51 -0000 1.90
+++ zthread.d 4 Oct 2010 21:52:12 -0000 1.91
@@ -78,9 +78,10 @@
}
/* return default thread name depending on the type of function
+ cf. functions.lisp:function-name (maybe move it to C?)
> fun: functionp object
< returns default name to be used of none is specified */
-local object default_thread_name(object fun) {
+local object default_thread_name (object fun) {
if (subrp(fun))
return TheSubr(fun)->name;
else if (cclosurep(fun))
------------------------------
Message: 4
Date: Mon, 04 Oct 2010 22:04:42 +0000
From: Sam Steingold <sds@...>
Subject: clisp/modules/syscalls syscalls.xml,1.136,1.137
To: clisp-cvs@...
Message-ID: <E1P2t9G-0007Uq-Ph@...>
Update of /cvsroot/clisp/clisp/modules/syscalls
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv28802
Modified Files:
syscalls.xml
Log Message:
getdate: more on modules/syscalls/datemsk
Index: syscalls.xml
===================================================================
RCS file: /cvsroot/clisp/clisp/modules/syscalls/syscalls.xml,v
retrieving revision 1.136
retrieving revision 1.137
diff -u -d -r1.136 -r1.137
--- syscalls.xml 29 Sep 2010 19:50:55 -0000 1.136
+++ syscalls.xml 4 Oct 2010 22:04:40 -0000 1.137
@@ -837,6 +837,7 @@
<function role="unix">getdate</function>.</simpara>
<simpara>If the the &env-var; <envar>DATEMSK</envar> is not set when
&clisp; is invoked, &clisp; sets it to point to the file
+ <filename role="clisp">modules/syscalls/datemsk</filename>, installed as
<code>(&merge-pathnames; "syscalls/datemsk" &libdir;)</code>.
</simpara></listitem></varlistentry></variablelist></section>
------------------------------
Message: 5
Date: Mon, 04 Oct 2010 22:07:25 +0000
From: Sam Steingold <sds@...>
Subject: clisp/modules/syscalls calls.c,1.320,1.321
To: clisp-cvs@...
Message-ID: <E1P2tBt-0007mM-A9@...>
Update of /cvsroot/clisp/clisp/modules/syscalls
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv29880/modules/syscalls
Modified Files:
calls.c
Log Message:
* modules/syscalls/calls.c (POSIX:STRING-TIME): when calling strptime,
init tm fields to reasonable values because strptime does not set
fields which are not specified by datum
Index: calls.c
===================================================================
RCS file: /cvsroot/clisp/clisp/modules/syscalls/calls.c,v
retrieving revision 1.320
retrieving revision 1.321
diff -u -d -r1.320 -r1.321
--- calls.c 29 Sep 2010 19:29:58 -0000 1.320
+++ calls.c 4 Oct 2010 22:07:23 -0000 1.321
@@ -569,6 +569,14 @@
if (stringp(STACK_1)) { /* parse: strptime */
struct tm tm;
unsigned int offset;
+ tm.tm_sec = 0; /* Seconds [0,60]. */
+ tm.tm_min = 0; /* Minutes [0,59]. */
+ tm.tm_hour = 0; /* Hour [0,23]. */
+ tm.tm_mday = 1; /* Day of month [1,31]. */
+ tm.tm_mon = 0; /* Month of year [0,11]. */
+ tm.tm_year = 0; /* Years since 1900. */
+ tm.tm_wday = 0; /* Day of week [0,6] (C: Sunday=0 <== CL: Monday=0 */
+ tm.tm_isdst = false; /* Daylight Savings flag. */
with_string_0(STACK_1,GLO(misc_encoding),buf, {
with_string_0(STACK_2,GLO(misc_encoding),format, {
char *ret;
------------------------------
Message: 6
Date: Mon, 04 Oct 2010 22:07:25 +0000
From: Sam Steingold <sds@...>
Subject: clisp/src ChangeLog,1.7546,1.7547
To: clisp-cvs@...
Message-ID: <E1P2tBt-0007mQ-IA@...>
Update of /cvsroot/clisp/clisp/src
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv29880/src
Modified Files:
ChangeLog
Log Message:
* modules/syscalls/calls.c (POSIX:STRING-TIME): when calling strptime,
init tm fields to reasonable values because strptime does not set
fields which are not specified by datum
Index: ChangeLog
===================================================================
RCS file: /cvsroot/clisp/clisp/src/ChangeLog,v
retrieving revision 1.7546
retrieving revision 1.7547
diff -u -d -r1.7546 -r1.7547
--- ChangeLog 4 Oct 2010 18:48:51 -0000 1.7546
+++ ChangeLog 4 Oct 2010 22:07:23 -0000 1.7547
@@ -1,3 +1,9 @@
+2010-10-04 Sam Steingold <sds@...>
+
+ * modules/syscalls/calls.c (POSIX:STRING-TIME): when calling strptime,
+ init tm fields to reasonable values because strptime does not set
+ fields which are not specified by datum
+
2010-10-04 Vladimir Tzankov <vtzankov@...>
* zthread.d (default_thread_name): returns name for functionp object
------------------------------
Message: 7
Date: Mon, 04 Oct 2010 22:16:06 +0000
From: Sam Steingold <sds@...>
Subject: clisp/modules/syscalls test.tst,1.109,1.110
To: clisp-cvs@...
Message-ID: <E1P2tKI-0008Ma-5w@...>
Update of /cvsroot/clisp/clisp/modules/syscalls
In directory sfp-cvsdas-2.v30.ch3.sourceforge.com:/tmp/cvs-serv32139
Modified Files:
test.tst
Log Message:
add tests for string-time & getdate
Index: test.tst
===================================================================
RCS file: /cvsroot/clisp/clisp/modules/syscalls/test.tst,v
retrieving revision 1.109
retrieving revision 1.110
diff -u -d -r1.109 -r1.110
--- test.tst 16 Sep 2010 15:14:53 -0000 1.109
+++ test.tst 4 Oct 2010 22:16:04 -0000 1.110
@@ -34,6 +34,43 @@
(string= string (os:string-time fmt (show (os:string-time fmt string)))))
T
+;; for this to work, datum must specify _all_ fields in struct tm
+(defun check-time-date (fmt datum)
+ (let ((gd (os:getdate datum)) (st (os:string-time fmt datum)))
+ (print (list fmt datum gd (os:string-time "%Y-%m-%d %a %H:%M:%S" gd)))
+ (unless (= gd st)
+ (print (list st (os:string-time "%Y-%m-%d %a %H:%M:%S" st))))))
+CHECK-TIME-DATE
+
+(check-time-date "%m/%d/%y %I %p" "10/1/87 4 PM") NIL
+(check-time-date "%A %B %d, %Y, %H:%M:%S" "Friday September 18, 1987, 10:30:30") NIL
+(check-time-date "%d,%m,%Y %H:%M" "24,9,1986 10:30") NIL
+
+(defun check-time-date (fmt datum)
+ (declare (ignore fmt))
+ (null (show (os:string-time "%Y-%m-%d %a %H:%M:%S" (os:getdate datum)))))
+CHECK-TIME-DATE
+
+(check-time-date "%m/%d/%y" "11/27/86") NIL
+(check-time-date "%d.%m.%y" "27.11.86") NIL
+(check-time-date "%y-%m-%d" "86-11-27") NIL
+(check-time-date "%A %H:%M:%S" "Friday 12:00:00") NIL
+(check-time-date "%A" "Friday") NIL
+(check-time-date "%a" "Mon") NIL
+(check-time-date "%a" "Sun") NIL
+(check-time-date "%a" "Fri") NIL
+(check-time-date "%B" "September") NIL
+(check-time-date "%B" "January") NIL
+(check-time-date "%B" "December") NIL
+(check-time-date "%b %a" "Sep Mon") NIL
+(check-time-date "%b %a" "Jan Fri") NIL
+(check-time-date "%b %a" "Dec Mon") NIL
+(check-time-date "%b %a %Y" "Jan Wed 1989") NIL
+(check-time-date "%a %H" "Fri 9") NIL
+(check-time-date "%b %H:%S" "Feb 10:30") NIL
+(check-time-date "%H:%M" "10:30") NIL
+(check-time-date "%H:%M" "13:30") NIL
+
#+unix
(when (fboundp 'os:getutxent)
(not (integerp (show (length (loop :for utmpx = (os:getutxent) :while utmpx
@@ -658,5 +695,6 @@
(symbol-cleanup 'flush-clisp)
(symbol-cleanup 'proc-send)
(setq *features* (delete :no-stream-lock *features*))
+ (symbol-cleanup 'check-time-date)
T).
------------------------------
http://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=201010&viewday=5
fetch, fubyte, fuibyte, fusword, fuswintr, fuword, fuiword - fetch data
from user-space
#include <sys/types.h>
#include <sys/systm.h>
int
fubyte(const void *base);
int
fuibyte(const void *base);
int
fusword(const void *base);
int
fuswintr(const void *base);
long
fuword(const void *base);
long
fuiword(const void *base);
The fetch functions are designed to copy small amounts of data from
user-space.
fubyte() Fetches a byte of data from the user-space address base.
fuibyte() Fetches a byte of data from the user-space address base. This function is safe to call during an interrupt context.
fusword() Fetches a short word of data from the user-space address base.
fuswintr() Fetches a short word of data from the user-space address base. This function is safe to call during an interrupt context.
fuword() Fetches a word of data from the user-space address base.
fuiword() Fetches a word of data from the user-space address base. This function is safe to call during an interrupt context.
The fetch functions return the data fetched or -1 on failure. Note that
these functions all do "unsigned" access, and therefore will never sign
extend byte or short values. This prevents ambiguity with the error
return value for all functions except fuword().
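A minimal sketch of typical error handling around these functions (the
wrapper function here is hypothetical, not part of the man page):

#include <sys/types.h>
#include <sys/systm.h>
#include <sys/errno.h>

/* Hypothetical helper: read one byte from a user-supplied pointer. */
int
read_user_byte(const void *uaddr, u_char *out)
{
	int v = fubyte(uaddr);	/* the byte as 0..255, or -1 on failure */

	if (v == -1)
		return EFAULT;	/* user address was not accessible */
	*out = (u_char)v;
	return 0;
}

Because access is unsigned, -1 is unambiguous here; only fuword() callers
need another way to distinguish an error from a legitimate value.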
See also: copy(9), store(9).
BSD January 7, 1996 BSD
https://nixdoc.net/man-pages/NetBSD/man9/fuword.9.html
Twisted FAQ
Here is a list of Frequently Asked Questions regarding Twisted.
- General
- Stability
- Installation
- Core Twisted
- Why does Twisted depend on Zope?
- How can I access self.factory from my Protocol's init?
- Where can I find out how to write Twisted servers?
- When I try to install my reactor, I get errors about a reactor already being installed. What gives?
- twistd won't load my .tap file! What's this Ephemeral nonsense?
- I get Interrupted system call errors when I use os.popen2. How do I read results from a sub-process in Twisted?
- Why don't my spawnProcess programs see my environment variables?
- My Deferred or DeferredList never fires, so my program just mysteriously hangs! What's wrong?
- My exceptions and tracebacks aren't getting printed!
- How do I use Deferreds to make my blocking code non-blocking?
- I get exceptions.ValueError: signal only works in main thread when I try to run my Twisted program! What's wrong?
- I'm trying to stop my program with sys.exit(), but Twisted seems to catch it! How do I exit my program?
- How do I find out the IP address of the other end of my connection?
- Why don't Twisted's network methods support Unicode objects as well as strings?
- Perspective Broker
- Requests and Contributing
- Documentation
- Communicating with us
General
What is Twisted?
See the Twisted Project.
Why should I use Twisted?
See the TwistedAdvantage.
I have a problem getting Twisted.
Did you check the HOWTO collection? There are so many documents there that they might overwhelm you... try starting from the index, reading through the overviews and seeing if there is a chapter which explains what you need. You can also try reading the PostScript or PDF formatted books inside the distribution. And, remember, the source will be with you... always.
Why are there so many parts and subprojects? Isn't Twisted just Twisted?
As of version 2.0, Twisted was split up into many subprojects, because it was getting too much to handle in a monolithic release, and we believe breaking the project into smaller chunks will help people understand the things they need to understand (There used to be a FAQ entry here asking Why is Twisted so big?). More information is available in the Split FAQ.
Stability
Does the 1.0 release mean that all of Twisted's APIs are stable?
No, only specific parts of Twisted are stable, i.e. we only promise backwards compatibility for some parts of Twisted. While these APIs may be extended, they will not change in ways that break existing code that uses them.
While other parts of Twisted are not stable, we will however do our best to make sure that there is backwards compatibility for these parts as well. In general, the more the module or package are used, and the closer they are to being feature complete, the more we will concentrate on providing backwards compatibility when API changes take place.
Which parts of Twisted are stable?
Only modules explicitly marked as such can be considered stable. Semi-stable modules may change, but not in a large way, and some sort of backwards-compatibility will probably be provided. If no comment about API stability is present, assume the module is unstable.
In Twisted 1.1, most of twisted.internet, .cred and .application are completely stable (excepting of course code marked as deprecated).
But as always, the only accurate way of knowing a module's stability is reading the module's docstrings.
Installation
I run mktap (from site-packages/twisted/scripts/mktap.py) and nothing happens!
Don't run scripts out of site-packages. The Windows installer should install executable scripts to someplace like C:\Python22\scripts\; *nix installers put them in $PREFIX/bin, which should be in your $PATH.
Why do the Debian packages for Alphas and Release Candidates have weird versions containing old version numbers?
An example: 1.0.6+1.0.7rc1-1
In Debian versioning, 1.0.7rc1 is greater than 1.0.7. This means that if you install a package with Version: 1.0.7rc1, and then that package gets a new version 1.0.7, apt will not upgrade it for you, because 1.0.7 looks like an older version. So, we prefix the previous version to the actual version. 1.0.6+1.0.7rc1 is less than 1.0.7.
Core Twisted
How can I access self.factory from my Protocol's init?
You can't. A Protocol doesn't have a Factory when it is created. Instead, you should probably be doing that in your Protocol's connectionMade method.
Similarly you shouldn't be doing real work, like connecting to databases, in a Factory's init either. Instead, do that in startFactory.
See Writing Servers and Writing Clients for more details.
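A minimal sketch of the recommended pattern (the class and attribute names are illustrative):

from twisted.internet import protocol

class Counter(protocol.Protocol):
    def connectionMade(self):
        # By the time connectionMade runs, Factory.buildProtocol has
        # already set self.factory, so it is safe to use it here.
        self.factory.connections += 1

class CounterFactory(protocol.Factory):
    protocol = Counter

    def startFactory(self):
        # Do real setup work here rather than in __init__.
        self.connections = 0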
Where can I find out how to write Twisted servers?
Try Writing Servers.
When I try to install my reactor, I get errors about a reactor already being installed. What gives?
Here's the rule - installing a reactor should always be the first thing you do, and I do mean first. Importing other stuff before you install the reactor can break your code.
Tkinter and wxPython support, as they do not install a new reactor, can be done at any point, IIRC.
twistd won't load my .tap file! What's this Ephemeral nonsense?
When the pickled application state cannot be loaded for some reason, it is common to get a rather opaque error like so:
% twistd -f test2.tap
Failed to load application: global name 'initRun' is not defined
The rest of the error will try to explain how to solve this problem, but a short comment first: this error is indeed terse -- but there is probably more data available elsewhere -- namely, the twistd.log file. Open it up to see the full exception.
The error might also look like this:
Failed to load application: <twisted.persisted.styles.Ephemeral instance at 0x82450a4> is not safe for unpickling
To load a .tap file, as with any unpickling operation, all the classes used by all the objects inside it must be accessible at the time of the reload. This may require the PYTHONPATH variable to have the same directories as were available when the application was first pickled.
A common problem occurs in single-file programs which define a few classes, then create instances of those classes for use in a server of some sort. If the class is used directly, the name of the class will be recorded in the .tap file as something like __main__.MyProtocol. When the application is reloaded, it will look for the class definition in __main__, which probably won't have it. The unpickling routines need to know the module name, and therefore the source file, from which the class definition can be loaded.
The way to fix this is to import the class from the same source file that defines it: if your source file is called myprogram.py and defines a class called MyProtocol, you will need to do a from myprogram import MyProtocol before (and in the same namespace as) the code that references the MyProtocol class. This makes it important to write the module cleanly: doing an import myprogram should only define classes, and should not cause any other subroutines to get run. All the code that builds the Application and saves it out to a .tap file must be inside an if __name__ == '__main__' clause to make sure it is not run twice (or more).
When you import the class from the module using an external name, that name will be recorded in the pickled .tap file. When the .tap is reloaded by twistd, it will look for myprogram.py to provide the definition of MyProtocol.
Here is a short example of this technique:
# file dummy.py
from twisted.internet import protocol

class Dummy(protocol.Protocol):
    pass

if __name__ == '__main__':
    from twisted.application import service, internet
    a = service.Application("dummy")
    import dummy
    f = protocol.Factory()
    f.protocol = dummy.Dummy  # Note! Not "Dummy"
    internet.TCPServer(2000, f).setServiceParent(a)
    a.save()
I get Interrupted system call errors when I use os.popen2. How do I read results from a sub-process in Twisted?
You should be using reactor.spawnProcess (see interfaces.IReactorProcess.spawnProcess). There's also a convenience function, getProcessOutput, in twisted.internet.utils.
Why don't my spawnProcess programs see my environment variables?
spawnProcess defaults to clearing the environment of child processes as a security feature. You can either provide a dictionary with exactly the name-value pairs you want the child to use, or you can simply pass in os.environ to inherit the complete environment.
My Deferred or DeferredList never fires, so my program just mysteriously hangs! What's wrong?
It really depends on what your program is doing, but the most common cause is this: it is firing -- but it's an error, not a success, and you have forgotten to add an errback, so nothing happens. Always add errbacks!
The reason Deferred can't automatically show your errors is because a Deferred can still have callbacks and errbacks added to it even after a result is available -- so we have no reasonable place to put a logging call that wouldn't result in spurious tracebacks that are handled later on. There is a facility for printing tracebacks when the Deferreds are garbage collected -- call defer.setDebugging(True) to enable it.
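A minimal sketch (defer.succeed stands in for any real asynchronous operation; the callback names are illustrative):

from twisted.internet import defer
from twisted.python import log

def handle_result(result):
    log.msg("got %r" % (result,))

d = defer.succeed("example result")  # stands in for a real async call
d.addCallback(handle_result)
d.addErrback(log.err)  # without this, a failure is held silently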
My exceptions and tracebacks aren't getting printed!
See previous question.
How do I use Deferreds to make my blocking code non-blocking?
You don't. Deferreds don't magically turn a blocking function call into a non-blocking one. A Deferred is just a simple object that represents a deferred result, with methods to allow convenient adding of callbacks. (This is a common misunderstanding; suggestions on how to make this clearer in the Deferred Execution howto are welcome!)
If you have blocking code that you want to use non-blockingly in Twisted, either rewrite it to be non-blocking, or run it in a thread. There is a convenience function, deferToThread, to help you with the threaded approach -- but be sure to read Using Threads in Twisted.
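A minimal sketch of the threaded approach (blocking_work is an illustrative stand-in for the blocking call you cannot rewrite):

from twisted.internet.threads import deferToThread
from twisted.python import log

def blocking_work(n):
    import time
    time.sleep(n)      # simulates a blocking call
    return n * 2

d = deferToThread(blocking_work, 2)  # runs in the reactor's thread pool
d.addCallback(lambda result: log.msg("result: %r" % (result,)))
d.addErrback(log.err)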
I get exceptions.ValueError: signal only works in main thread when I try to run my Twisted program! What's wrong?
The default reactor, by default, will install signal handlers to catch events like Ctrl-C, SIGTERM, and so on. However, you can't install signal handlers from non-main threads in Python, which means that reactor.run() will cause an error. Pass the installSignalHandlers=0 keyword argument to reactor.run to work around this.
https://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions?version=3
> You are missing a point here: string methods were introduced
> to make switching from plain 8-bit strings to Unicode easier.

Is it the only purpose? I agree with the OP that using string methods is much nicer and more convenient than having to import separate modules. Especially, it is nice to just type help(str) at the interactive prompt and get the list of supported methods. Also, these methods live in the namespace of the supported objects. It feels very natural, and goes hand in hand with Python's object-oriented nature.

(just my 2 cents - I am not arguing for or against the specific case of dedent, by the way)

Regards

Antoine.
https://mail.python.org/pipermail/python-dev/2005-November/058088.html
import "golang.org/x/text/collate/build"
builder.go colelem.go contract.go order.go table.go trie.go
A Builder builds a root collation table. The user must specify the collation elements for each entry. A common use will be to base the weights on those specified in the allkeys* file as provided by the UCA or CLDR.
NewBuilder returns a new Builder.
Add adds an entry to the collation element table, mapping a slice of runes to a sequence of collation elements. A collation element is specified as a list of weights: []int{primary, secondary, ...}. The entries are typically obtained from a collation element table as defined by the UCA. Note that the collation elements specified by colelems are only used as a guide. The actual weights generated by Builder may differ. The argument variables is a list of indices into colelems that should contain a value for each colelem that is a variable. (See the reference above.)
Build builds the root Collator.
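A hedged sketch of how Add and Build fit together (the rune, the weight values, and the error handling are made up for illustration, following the parameter descriptions above):

b := build.NewBuilder()
// Map the rune "a" to one collation element with weights
// []int{primary, secondary, tertiary}; no variable elements here.
if err := b.Add([]rune("a"), [][]int{{10, 20, 2}}, nil); err != nil {
	panic(err)
}
c, err := b.Build() // the root collate.Collator built from the added entries
if err != nil {
	panic(err)
}
_ = c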
Print prints the tables for b and all its Tailorings as a Go file that can be included in the Collate package.
Tailoring returns a Tailoring for the given locale. One should have completed all calls to Add before calling Tailoring.
A Tailoring builds a collation table based on another collation table. The table is defined by specifying tailorings to the underlying table. See for an overview of tailoring collation tables. The CLDR contains pre-defined tailorings for a variety of languages (See<version>/core.zip.)
Build builds a Collator for Tailoring t.
Insert sets the ordering of str relative to the entry set by the previous call to SetAnchor or Insert. The argument extend corresponds to the extend elements as defined in LDML. A non-empty value for extend will cause the collation elements corresponding to extend to be appended to the collation elements generated for the entry added by Insert. This has the same net effect as sorting str after the string anchor+extend. See for details on parametric tailoring and for full details on LDML.
Examples: create a tailoring for Swedish, where "ä" is ordered after "z" at the primary sorting level:
t := b.Tailoring("se") t.SetAnchor("z") t.Insert(colltab.Primary, "ä", "")
Order "ü" after "ue" at the secondary sorting level:
t.SetAnchor("ue") t.Insert(colltab.Secondary, "ü","")
or
t.SetAnchor("u") t.Insert(colltab.Secondary, "ü", "e")
Order "q" afer "ab" at the secondary level and "Q" after "q" at the tertiary level:
t.SetAnchor("ab") t.Insert(colltab.Secondary, "q", "") t.Insert(colltab.Tertiary, "Q", "")
Order "b" before "a":
t.SetAnchorBefore("a") t.Insert(colltab.Primary, "b", "")
Order "0" after the last primary ignorable:
t.SetAnchor("<last_primary_ignorable/>") t.Insert(colltab.Primary, "0", "")
SetAnchor sets the point after which elements passed in subsequent calls to Insert will be inserted. It is equivalent to the reset directive in an LDML specification. See Insert for an example. SetAnchor supports the following logical reset positions: <first_tertiary_ignorable/>, <last_tertiary_ignorable/>, <first_primary_ignorable/>, and <last_non_ignorable/>.
SetAnchorBefore is similar to SetAnchor, except that subsequent calls to Insert will insert entries before the anchor.
Package build imports 12 packages. Updated 2017-10-17.
https://godoc.org/golang.org/x/text/collate/build
(Updated – 9 August 2010 – This was written in my pre-Maven days and, after a few requests for working source, I've built the same project using Maven, which can be downloaded. Just unzip the Maven project, go to the directory in the command line and type mvn jetty:run to start the server and deploy the project. Navigate to or to see the pages demonstrated in the tutorial.)
I recently had to start another project using Spring Web Flow and found myself banging my head against a brick wall to get the web flow stuff set up and to request the page properly. As a result, I decided to write up my results as a quick how-to for other developers, should they find themselves in the same situation, and also as a reference for myself the next time I need to start a Spring Web Flow project using Spring Faces from scratch. This article is meant more as a "here's-how" as opposed to a "how-to" or an "explain-why", so we'll move at a quick pace with little explanation.
For the IDE, I used Eclipse 3.4.1 with Spring IDE plugins version 2.2.1, with Spring 2.5.6 (with dependencies) and Spring Web Flow 2.0.5. You should be able to use SWF 2.0.7 without any problems. For dependencies, I mostly used the ones provided with Spring except for a few, the details of which are included below. Hibernate was used as the JPA implementation and it was all deployed on Tomcat 6.0.18.
Getting Started
I started by installing Eclipse and then the Spring plugins. I created a new workspace, added Tomcat as a server, and created a new dynamic web project. I right-clicked on the project to add the Spring nature to it.
The project will be arranged with multiple Spring configuration files: one for the database, one for Spring Web Flow, and the main applicationContext.xml to include them in. Flows will live in a directory under /WebContent/WEB-INF/flows.

I started off setting web.xml up with the Faces pieces and the Spring MVC dispatcher servlet.
<!-- The main config file for this Spring web application --> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <!-- Use JSF view templates saved as *.xhtml, for use with Facelets --> <context-param> <param-name>javax.faces.DEFAULT_SUFFIX</param-name> <param-value>.xhtml</param-value> </context-param> <!-- Enables special Facelets debug output during development --> <context-param> <param-name>facelets.DEVELOPMENT</param-name> <param-value>true</param-value> </context-param> <!-- Causes Facelets to refresh templates during development --> <context-param> <param-name>facelets.REFRESH_PERIOD</param-name> <param-value>1</param-value> </context-param> <!-- Loads the Spring web application context --> <listener> <listener-class> org.springframework.web.context.ContextLoaderListener </listener-class> </listener> <!-- Serves static resource content from .jar files such as spring-faces> <!-- The front controller of this Spring Web application, responsible for handling all application requests --> <servlet> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <servlet-class> org.springframework.web.servlet.DispatcherServlet </servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value></param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <!-- Map all /spring requests to the Dispatcher Servlet for handling --> <servlet-mapping> <servlet-name>Spring MVC Dispatcher Servlet</servlet-name> <url-pattern>/spring/*</url-pattern> </servlet-mapping> <!-->*.jsf</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.html</welcome-file> </welcome-file-list>
At the top of this web.xml file, we indicate that our primary Spring bean config file is called /WEB-INF/applicationContext.xml, so we navigate to that folder (it's in the WebContent folder), right-click and add a new Spring Bean Definition. We also add two other Spring Bean definitions in the same place, called dbConfig.xml and flowConfig.xml. These files are defined below:
/WebContent/WEB-INF/applicationContext.xml
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <import resource="dbConfig.xml" /> <import resource="flowConfig.xml" /> </beans>
/WebContent/WEB-INF/dbConfig.xml
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <property name="generateDdl" value="true" /> <property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect" /> </bean> </property> <property name="dataSource" ref="dataSource" /> </bean> <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" destroy- <property name="driverClassName" value="com.mysql.jdbc.Driver" /> <property name="url" value="jdbc:mysql://localhost:3306/dbName" /> <property name="username" value="someUser" /> <property name="password" value="somePassword" /> </bean> <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager"> <property name="dataSource" ref="dataSource" /> <property name="entityManagerFactory" ref="entityManagerFactory" /> </bean> <tx:annotation-driven /> <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor" /> </beans>
/WebContent/WEB-INF/flowConfig.xml
<?xml version="1.0" encoding="UTF-8"?> <beans xmlns="" xmlns: <!--Executes flows: the central entry point into the Spring Web Flow system--> <webflow:flow-executor <webflow:flow-execution-listeners> <webflow:listener </webflow:flow-execution-listeners> </webflow:flow-executor> <!-- The registry of executable flow definitions --> <webflow:flow-registry <webflow:flow-location</webflow:flow-location> </webflow:flow-registry> <!-- Configures the Spring Web Flow JSF integration --> <faces:flow-builder-services <!--> <!-- Maps request URIs to controllers --> <bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="mappings"> <value> /testFlow=flowController </value> </property> <property name="defaultHandler"> <!-- Selects view names to render based on the request URI: e.g. /main selects "main" --> <bean class="org.springframework.web.servlet.mvc.UrlFilenameViewController" /> </property> </bean> <!-- Handles requests mapped to the Spring Web Flow system --> <bean id="flowController" class="org.springframework.webflow.mvc.servlet.FlowController"> <property name="flowExecutor" ref="flowExecutor" /> </bean> </beans>
Since we are using JPA, we need to include a persistence.xml file on the classpath, at src/META-INF/persistence.xml.
<?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="" xmlns: <persistence-unit </persistence>
Now we need to add a faces-config.xml in the same location.
/WebContent/WEB-INF/faces-config.xml
<?xml version="1.0" encoding="UTF-8"?> <faces-config <application> <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver> <view-handler>com.sun.facelets.FaceletViewHandler</view-handler> </application> </faces-config>
We’ll also add a html page that redirects to a jsf page immediately. Since we are using facelets, we’ll also throw in a template to work from. In the WebContent folder, we’ll add the
templates directory to contain our page layout.
WebContent\templates\template.xhtml
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <html xmlns="" xmlns: <head> <meta http- <title><ui:insertuntitled</ui:insert></title> </head> <body> <h:messages <ui:insertTitle Here</ui:insert> <ui:insert </body> </html>
Now to add our two pages that will initially launch us into a JSF page.
/WebContent/index.html
<html> <head> <meta http- </head> </html> </textarea> <code>/WebContent/home.xhtml</code> <pre name="code" class="xml"> <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <h:form> <h:outputText </h:form> </ui:define> </ui:composition>
One more configuration file we need is for Log4j, to get rid of the warnings that it has not been set up correctly. The properties file goes in the src directory.
/src/log4j.properties

log4j.rootCategory=info, stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n
log4j.category.org.springframework=WARN
log4j.category.org.hibernate=WARN
Now let’s look at libraries. There are a bunch of libraries that are needed since we haven’t added any yet.
One reason I downloaded the latest JSF version was that I was having problems with the JSF version I was using. The SpringBeanFacesELResolver was being ignored at run-time; it didn't even blink if I set it to an undefined class name. The Spring delegating variable resolver was working, but the IDE said it was deprecated. Once I upgraded to JSF 1.2_11, the EL resolver worked fine. I wonder if an old JSF 1.1 jar had crept in there.
Depending on where you end up deploying your application (e.g. Glassfish), you may have to remove some of these libraries if they are already installed on the server. In this case, I was using Apache Tomcat 6.0.18 with a clean install, and therefore with no libraries added to the server.
Creating a Flow
Now we should have a working application all ready to go. To test this, we'll add a little code and a test page just to verify that everything is working OK. Create a new class called MessageHolder in a package called swfproject. This is a simple class that contains a string that can be set and retrieved from our pages.
/src/swfproject/MessageHolder.java
package swfproject;

import java.io.Serializable;

public class MessageHolder implements Serializable {

    private String text = "Hello From the Message Holder";

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }
}
Now we define the bean in /WEB-INF/applicationContext.xml so we can call the bean from a regular JSF page or a flow page.
<bean name="springMessage" class="swfproject.MessageHolder"> <property name="text" value="This was defined in Spring" /> </bean>
In the home.xhtml page, we add the following line somewhere inside the ui:define tag to display the message:
Message is : #{springMessage.text}
If you start Tomcat (assuming you have already attached the project to the server if you are using an IDE) and open the application root in a browser, you should be redirected to home.jsf and it should show the message that was defined in Spring.
Now let’s add a simple flow called testFlow. If you look in the
flowConfig.xml file, there is already a mappings property where we already defined
/testFlow=flowController. This means that when this url is requested, the
flowController bean instance deals with it.
We need to create a new directory called testFlow under /WebContent/WEB-INF/flows/. In it, we add a page called testFlow.xhtml and a file called testFlow.xml; these are the view and the flow definition, respectively.
/WebContent/WEB-INF/flows/testFlow/testFlow.xml Flow
<?xml version="1.0" encoding="UTF-8"?> <flow xmlns="" xmlns: <var name="flowMessage" class="swfproject.MessageHolder" /> <view-state <transition on="post" to="testFlow" /> </view-state> </flow>
/WebContent/WEB-INF/flows/testFlow/testFlow.xhtml View
<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" ""> <ui:composition <ui:define <h:form> <h:outputText <br /> Message From Spring = #{springMessage.text}<br /> Message From Flow = #{flowMessage.text}<br /> <h:inputText <h:commandButton </h:form> </ui:define> </ui:composition>
Now if you go to /app_name/spring/testFlow on your server (remember to replace app_name with your project name), you should see the page we have made as part of our flow. It displays the message from the springMessage instance of MessageHolder, which contains the message from Spring. It now also displays the instance from the flow, which still shows the default message, since the flow variable hasn't had its text property changed. Also, the URL should change to something similar with the flow execution key on the end. This page demonstrates that we have access to the Spring beans, as well as to variables defined in a flow, which is where flowMessage is defined. You can edit the message and click Post to change the flowMessage text value. You can open the link in two browser windows or tabs and see how the two values can be edited independently; the flowMessage variable is scoped to the flow in each browser.
Very good quick start tutorial.
Is it possible to integrate your project with JSF 2.0?
Hi Andy,
I did the same thing step by step, but flowConfig.xml is showing errors:
“cvc-complex-type.2.4.c: The matching wildcard is strict, but no declaration can be found for element ‘webflow:flow-executor”
I am using Eclipse 3.5 Galileo. Kindly help with this.
It looks like a versioning problem. In the flowConfig.xml document, make sure the webflow-config schema referenced in the schemaLocation at the top is version 2.0, since 1.0 does not define the flow executor in the same way. Beyond that, take a look at the Maven download, which is working, and copy from there.
I would like to test the web flow project on the JBoss Enterprise Application Server. I am getting an IncompleteDeploymentException. I removed the dependencies from Maven, then copied the libraries into the webapp's lib directory, but I am still unable to deploy on JBoss EAP 5.1.
http://www.andygibson.net/blog/tutorial/creating-a-spring-web-flow-jsf-project-from-scratch/
All co-binary numbers in a range in Python
In this problem, we need to find all co-binary palindrome numbers that exist in a given range (start, end) in Python.
Now you may be wondering what a co-binary palindrome is. A co-binary palindrome is a number which is a palindrome both ways: as a decimal number and in its binary representation.
Example:
In: start = 0, end = 800
Out: Co-Binary numbers are: [0, 1, 3, 5, 7, 9, 33, 99, 313, 585, 717]
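To see why a number such as 585 qualifies, check it in the Python shell:

>>> bin(585)
'0b1001001001'

'585' reads the same backwards, and so does the binary string '1001001001'.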
Now let’s understand its implementation through the help of code.
Code (Python): find all co-binary palindrome numbers in a given range
- First, we convert the current number to binary.
- After conversion, we reverse the number and check whether it is a palindrome.
- We declare the lowest and highest values between which the function will run.
- You may set your own limits and check other results.
def Binary_conversion(args):
    return bin(args).replace("0b", "")

def reverse(n):
    n = str(n)
    return n[::-1]

def check_palindrome(num):
    if num == int(reverse(num)):
        return 1
    else:
        return 0

# starting number
start = 0
# ending number
end = 800

bin_all = []
for j in range(start, end + 1):
    if check_palindrome(j) == 1 and check_palindrome(int(Binary_conversion(j))):
        bin_all.append(j)

print("Co-Binary numbers are:", bin_all)
Output
Co-Binary numbers are: [0, 1, 3, 5, 7, 9, 33, 99, 313, 585, 717] [Program finished]
I hope you understand the concept clearly. Try running the code; if you have any doubt, you may drop a comment. Your feedback will be appreciated.
https://www.codespeedy.com/all-co-binary-numbers-in-a-range-in-python/
On Mon, Jan 15, 2018 at 9:22 AM, Johannes Berg <johannes@sipsolutions.net> wrote:
> Hi syzbot maintainers,
>
> Thanks for the report.
>
>> hwsim_new_radio_nl+0x5b7/0x7c0 drivers/net/wireless/mac80211_hwsim.c:3152
>> genl_family_rcv_msg+0x7b7/0xfb0 net/netlink/genetlink.c:599
>> genl_rcv_msg+0xb2/0x140 net/netlink/genetlink.c:624
>
> You're getting into the kernel via generic netlink receive, so just as
> an FYI - the generic netlink numbers aren't stable across systems, so
> your reproducer has a quite good chance of not working without your
> kernel .config and (virt) hardware environment.

Hi Johannes,

Thanks for the feedback.

syzbot tests within a net namespace (which is free of eth0 and other stuff) and does setup of devices in that namespace. For bugs, it first tries to reproduce them in that environment, and if that succeeds it tries to simplify the reproducer by stripping the namespace/device setup (which is quite verbose); if that succeeds, it provides this simplified reproducer.

In this case it decided that the namespace setup is not important. The .config is still important, but it is provided.

Are you able to reproduce the WARNING with the provided config? If not, we can look at how to improve this.

> I'll take a look at this and the rfkill one, I assume that there are
> some sanity checks missing in hwsim generic netlink when it builds a
> radio struct.
>
> However, I can't really promise that I'll be able to validate the
> changes against your reproducer.
>
> johannes
https://lkml.org/lkml/2018/1/15/123
Java (6&7) Code Coverage Plugin for NetBeans, based on JaCoCo
NetBeans module that provides JaCoCo code coverage (Java7 compatible). For Ant based JavaSE projects. Maven and Gradle support may be added later. WARNING: Binary files have been removed. They're now on a NetBeans update center. Please check for details. ~Jonathan Lermitage <jonathan.lermitage@gmail.com>
This UML profile will augment UML at the meta-model level. It will provide a new set of stereotypes and tagged values to software developers who use UML to model their software designs. The project has the long-term vision of making software actions secure.
Mergetool specifically for single git conflict files
It's difficult to find a good mergetool for git conflict files. WinMerge is a good mergetool for resolving conflicts in files you're merging in git, but when resolving conflicts in conflict files, it only allows changes in "your" file, negating much of its usefulness. Unconflict helps you remove conflicts from git conflict files without losing data.
An application to decompile and analyse Windows Phone apps
An application to decompile and analyse Windows Phone apps specifically focusing on finding security issues.
Android Wordpress Theme
Time measurements between two dates.
It measures the time between two dates. Built in Visual Studio with Visual Basic; the source code is included.
Khmer Unicode Messagebox; it supports .NET Framework 4.0.
This project tries to combine existing software quality tools for Eclipse into one. If successful, it will integrate Checkstyle, FindBugs, PMD, and possibly others.
function finder
The ff utility is created for searching for text (functions, variables, definitions) across all files in a source tree (and just for fun); a program for programmers.
read code and draw inheritance hierarchy
The program can read code in C++, PHP, or ActionScript (Flash). After reading the code, it draws the inheritance hierarchy in its window.
JCS - JUL Comment System
JCS - JUL Comment System is a way of documenting your JavaScript source code while keeping the comments in a separate file. It allows you to associate the comments to any version of the source code and to get the updated commented code. Also, JCS is able to generate and to document the code for an entire JavaScript namespace or DOM tree loaded by a web page. System requirements: * a CSS2.1 compliant web browser with JavaScript 1.5 or later engine * Node.js 0.10.0 or later installed * OR a web server with PHP 5.2.0 or later extension * 1024x768 minimum resolution All major browsers supported, including: FF 4+, IE 8+, Edge, Safari 4+, Chrome 2+, Opera 10+., IOS 4+, Android 2.3+.
LuziensEditor is a small IDE for Java, C/C++, and web development
LuziensEditor is a small IDE for Java, C/C++, and web development with a few special features. - Direct access to XAMPP - Its own console - Syntax highlighting - Its own version control - Packaging programs as Debian packages - ...
Simple tool for fixing common misspellings, typos
Utility to fix common misspellings and typos in source code. There are lots of typical misspellings in program code. Typically they are more eye-catching in the living code, but they can easily hide in comments, examples, samples, notes, and documentation. With this utility you can fix a large number of them very quickly. Be aware that the utility does not check or fix file names. It can easily happen that a misspelled word is fixed in a file name inside a program's code, but the file itself will not be renamed by this utility. It is also important to be extra careful when fixing public APIs! A manual review is always needed to verify that nothing has been broken.
Reviewing memory allocation and data structures of an extant SourceForge project unix-named "simupop". A new development version of the extant project is established, and the ultimate goal is to "rev up" the old, hence the project name.
|
https://sourceforge.net/directory/development/sourcereview/?page=5
|
CC-MAIN-2018-05
|
refinedweb
| 637
| 57.37
|
Screening Files
Updated: February 27, 2008
Applies To: Windows Server.
A file screen does not prevent users and applications from accessing files that were saved to the path before the file screen was created, regardless of whether the files are members of blocked file groups.
To simplify the management of file screens, we recommend that you base your file screens on file screen templates. A file screen template defines a screening type (active or passive), a set of file groups to block, and a set of notifications to be generated when a user attempts to save an unauthorized file. File Server Resource Manager provides several default file screen templates, which you can use to block audio and video files, executable files, image files, and e-mail files—and to meet some other common administrative needs. To view the default templates, select the File Screen Templates node in the File Server Resource Manager console tree.
For additional flexibility, you can configure a file screen exception in a subfolder of a path where you have created a file screen. When you place a file screen exception on a subfolder, you allow users to save file types there that would otherwise be blocked by the file screen applied to the parent folder.
In this section:
- Working with File Groups
- Creating a File Screen
- Creating a File Screen Exception
- Monitoring File Screening
Before you begin working with file screens, you must understand the role of file groups in determining which files are screened. A file group is used to define a namespace for a file screen or a file screen exception, or to generate a Files by File Group storage report.
A file group consists of a set of file name patterns, which are grouped into files to include and files to exclude:
- Files to include: files that belong in the group.
- Files to exclude: files that do not belong in the group.
For example, an Audio Files file group might include the following file name patterns:
- Files to include: *.mp*: includes all audio files created in current and future MPEG formats (MP2, MP3, and so forth).
- Files to exclude: *.mpp: excludes files created in Microsoft Project (.mpp files), which would otherwise be included by the *.mp* inclusion rule.
File Server Resource Manager provides several default file groups, which you can view in File Screening Management by clicking the File Groups node. You can define additional file groups, or change the files to include and exclude. Any changes that you make to a file group affect all existing file screens, templates, and reports to which the file group has been added.
In File Screening Management, click the File Groups node.
In the Actions pane, click Create File Group. This opens the Create File Group Properties dialog box.
(Alternatively, while you edit the properties of a file screen, file screen exception, file screen template, or Files by File Group report, under Maintain file groups, click Create.)
In the Create File Group Properties dialog box, type a name for the file group.
Add files to include and files to exclude:
- For each set of files that you want to include in the file group, in Files to include, type a file name pattern, and then click Add.
Standard rules for wildcard characters apply. For example, *.exe selects all executable files.
- For each set of files that you want to exclude from the file group, in Files to exclude, type a file name pattern, and then click Add.
Note that standard wildcard rules apply—for example, *.exe selects all executable files.
Click OK.
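If you prefer to script these steps, the filescrn.exe command-line tool that ships with File Server Resource Manager can create file groups and file screens as well. The flags below are a sketch from memory and the paths are just examples; verify the syntax with filescrn /? on your server:
rem Create a file group with include and exclude patterns
filescrn filegroup add /filegroup:"Audio Files" /members:"*.mp*" /nonmembers:"*.mpp"
rem Apply an active screen that blocks the group on a folder
filescrn screen add /path:D:\Shares\Users /add-filegroup:"Audio Files" /type:active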
In File Screening Management, click the File Screens node.
Right-click File Screens, and click Create File Screen (or click Create File Screen in the Actions pane). This opens the Create File Screen dialog box.
Under File screen path, type the name of or browse to the folder that the file screen will apply to. The file screen will apply to the selected folder and all of its subfolders.
Under How do you want to configure file screen properties, click Define custom file screen properties, and then click Custom Properties. This opens the File Screen Properties dialog box.
If you want to copy the properties of an existing template to use as a base for your new file screen, select a template from the Copy properties from template drop-down list. Then click Copy.
Under Screening type, click the Active screening or Passive screening option. (Active screening prevents users from saving files that are members of blocked file groups, and generates notifications when users try to save unauthorized files. Passive screening sends configured notifications, but it does not prevent users from saving files.)
Under File groups, select each file group that you want to include in your file screen.
If you want to view the file types a file group includes and excludes, click the file group label, and then click Edit. To create a new file group, click Create.
Additionally, you can configure File Server Resource Manager to generate one or more notifications by setting the following options on the E-mail Message, Event Log, Command, and Report tabs.
If you want to generate e-mail notifications, on the E-mail Message tab, set the following options:
- To notify administrators when a user or application attempts to save an unauthorized file, select the Send e-mail to the following administrators check box, and then enter the names of the administrative accounts that will receive the notifications. Use the format account@domain, and use semicolons to separate multiple accounts.
- To send e-mail to the user who attempted to save the file, select the Send e-mail to the user who attempted to save an unauthorized file check box.
- To configure the message, edit the default subject line and message body that are provided. The text that is in brackets inserts variable information about the file screen event that caused the notification. For example, the [Source Io Owner] variable inserts the name of the user who attempted to save an unauthorized file. To insert additional variables in the text, click Insert Variable.
- To configure additional headers (including From, Cc, Bcc, and Reply-to), click Additional E-mail Headers.
If you want to log an error to the event log when a user tries to save an unauthorized file, on the Event Log tab, select the Send warning to event log check box. Optionally, edit the default log entry.
If you want to run a command or script when a user tries to save an unauthorized file:
On the Command tab, select the Run this command or script check box. Then type the command, or click Browse to search for the location where the script is stored. You can also enter command arguments, select a working directory for the command or script, or modify the command security setting.
If you want to generate one or more storage reports when a user tries to save an unauthorized file:
On the Report tab, select the Generate reports check box, and then select which reports to generate. The reports will be saved in the default location for incident reports, which you can modify in the File Server Resource Manager Options dialog box. Optionally, you can choose one or more administrative e-mail recipients for the report or e-mail the report to the user who attempted to save the file.
After you have selected all of the file screen properties that you want to use, click OK to close the File Screen Properties dialog box.
In the Create File Screen dialog box, click Create to save the file screen. This opens the Save Custom Properties as a Template dialog box.
To save a template that is based on these customized properties, click Save the custom properties as a template and type a name for the template. This option will apply the template to the new file screen, and you can use the template to create additional file screens in the future.
Click OK.
Occasionally, you may need to allow users to save files that would otherwise be blocked by a file screen. For these cases, you can create a file screen exception. To determine which file types the exception will allow, file groups are assigned.
In File Screening Management, click the File Screens node.
Right-click File Screens, and click Create File Screen Exception (or click Create File Screen Exception in the Actions pane). This opens the Create File Screen Exception dialog box. Under File groups, select each file group that you want to allow in the exception.
- If you want to view the file types that a file group includes and excludes, click the file group label, and click Edit.
- To create a new file group, click Create.
Click OK.
In addition to the information in your file screen notifications, you can monitor file screening by viewing file screens in the File Screens Results pane and by generating a File Screening Audit report.
To view file screening information in the File Server Resource Manager console tree, click File Screening Management, and then click the File Screens node.
- For each file screen, the Results pane displays the following information: the path that the file screen was created for, the type of file screen (file screen or exception), the file groups included in the file screen, the template on which the file screen is based, and whether the current configuration of the file screen matches the configuration of the template.
- For the selected file screen, the description area lists all file groups that are being blocked on the file screen path. This includes file groups that are blocked by the current file screen as well as file groups blocked by file screens created higher in the file screen path.
- To filter the Results pane display to the file screens that affect a specific path:
- Click Filter at the top of the pane.
- In the File Screen Filter dialog box, under File Screen path, click either the Parents of the following folder option or the Children of the following folder option.
- Type or browse to the path.
- Click OK.
Use the File Screening Audit report to identify individuals or applications that violate file screening policy. For instructions on generating a File Screening Audit report, see Generating Storage Reports later in this guide.
|
https://technet.microsoft.com/en-us/library/cc732349(v=ws.10).aspx
|
CC-MAIN-2015-40
|
refinedweb
| 1,688
| 68.5
|
(Been a week or so since the last post but I haven’t burnt out with blogging yet – I was on vacation
over the July 4th weekend and totally offline in and around a small town called Pullman in
south-eastern Washington.)
In a previous post I described extents, and in another previous post a while back I described how
the extents and pages that are allocated to an IAM chain are tracked in IAM pages. What I didn’t
describe is how the allocation status of individual pages is tracked, or how the global allocation
bitmaps work – that’s the subject of this post.
This is the last post that lays the groundwork to
be able to discuss allocation checks in CHECKDB – the subject of the following post – and various
corruption scenarios (yes Kimberly, I’m going to get to scenarios…)
Bear in mind that everything below is exactly the same in SQL Server 2000 and 2005. A GAM interval is the amount of space that GAM, SGAM, and IAM pages track – 64000 extents, or almost 4GB. These bitmaps are the same size in each of these three page types and have one bit per extent, but they mean different things in each of the different allocation pages.
One thing to note: at the start of every GAM interval is a GAM extent, which contains the global allocation pages that track that GAM interval. This GAM extent cannot be used for any regular page allocations.
SGAM pages
I remember last year having an email discussion about what the ‘S’ stands for in SGAM. Various names have been used over the
years inside and outside Microsoft but the official name that Books Online uses is Shared
Global Allocation Map. To be honest, we always just call them ‘es-gams’ and
never spell it out.
As I said above, the SGAM bitmap is exactly the same as the GAM bitmap in structure and the interval it covers, but the semantics of the
bits are different:
bit = 1: the extent is a mixed extent and has at least one unallocated page available for
use
bit = 0: the extent is either dedicated or is a mixed extent with no unallocated pages
(essentially the same situation given that the SGAM is used to find mixed extents with unallocated
pages) – more on some of these in later posts.
(But I can’t resist – ‘how do these corruptions happen?’ I’m sure someone is asking. Every database page is 8KB, which is really 16 512-byte disk segments. Imagine a flaky IO system writing some random data into one of the disk segments of a GAM page and causing multiple IAM pages to think they have the same extents allocated…)
For instance, an IAM page will have a PFS byte value of 0x70 (allocated + IAM page + mixed page). You can examine PFS pages using DBCC PAGE (the instructions in that post use a PFS page as an example).
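For example, the first PFS page in a file is page 1, so something like the following dumps it (a sketch; substitute your own database name, and note that trace flag 3604 makes DBCC send its output to the console):
DBCC TRACEON (3604);
DBCC PAGE (N'master', 1, 1, 3);   -- database, file id 1, page id 1, max detail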
Free space is only tracked for pages storing LOB values (i.e. text/image in SQL Server 2000, plus the new (max) and XML datatypes in SQL Server 2005) and heap data pages. Just yesterday I was helping a couple of MVPs with an issue, and one of the questions was (paraphrasing) “This page has a PFS byte value of 0x04 – how can it be full when it’s not allocated?”
The answer is that if you run the following:
BEGIN TRANSACTION
DROP TABLE T1
GO
And then do the DBCC PAGE again, the output now includes:
PFS (1:1) = 0x30 IAM_PG MIXED_EXT 0_PCT_FULL
And if I roll back the transaction, the DBCC PAGE output reverts to:
PFS (1:1) = 0x70 IAM_PG MIXED_EXT ALLOCATED 0_PCT_FULL
Now that these three pages have been discussed, we’re free to explore allocation checks and corruptions and I’m free to go and have breakfast!
Join the conversationAdd Comment
Thanks again. Keep up the good posts.
Very good post, it answers questions which I could not find anywhere else!
However, in the PFS pages section, I think bit 4 and bit 5 are switched. It should be like this:
bit 5 (0x20): is the page a mixed-page?
bit 4 (0x10): is the page an IAM page?
I did some experiments on non-IAM mixed-extent pages; the PFS returns 0x60 (allocated + mixed).
for example:
DBCC page(AdventureWorks,1,16176,3) — check the PFS to find a mixed extent
–return (1:20760) – (1:20767) = ALLOCATED 0_PCT_FULL Mixed Ext
DBCC page(AdventureWorks,1,20760,1)
–GAM (1:2) = ALLOCATED SGAM (1:3) = ALLOCATED PFS (1:16176) = 0x60 MIXED_EXT ALLOCATED 0_PCT_FULL
And I have a question for the IAM pages,
I checked some of them, and they always return 0x70 (allocated + Mixed + IAM).
Does that mean IAM pages are always allocated in a mix extent ?
Also in the GAM, SGAM and IAM pages section, in the chart
the 3rd column is "Any IAM", What does "Any IAM" = 0 or "Any IAM" = 1 mean ?
Thanks.
David
Hey David – good catch – that’s what happens when you try to do stuff from memory!
Yes, IAM pages are always from mixed extents. "Any IAM" means that some IAM page must have a bit set corresponding to that extent.
Cheers
|
https://blogs.msdn.microsoft.com/sqlserverstorageengine/2006/07/08/under-the-covers-gam-sgam-and-pfs-pages/
|
CC-MAIN-2016-50
|
refinedweb
| 873
| 63.32
|
Freshman Honors Seminar: Exercise 1
Problem 1: Function Tables
Several of the samples on the FHS site focus on computation of functions. This is because functions are familiar from courses on mathematics and because functions lend themselves to showing basic patterns of computation.
Two of the function samples, Fibonacci Variations and Table of Factorials, illustrate dealing with discrete functions that produce integer values that may be larger than what may be represented in the primitive types int or long. This leads to the use of either the Java class BigInteger or the JPT class XBigInteger.
The goal of this problem is to have you imitate the Table of Factorials sample and create methods to make tables for two additional functions. You may use the file Methods.java from that sample as a starter file, or you can just as easily make a fresh file. In the factorials example, we show how to create the table using either BigInteger or XBigInteger. It is sufficient for you to choose one approach and implement it.
So, we want you to make methods that will compute tables for:
- 2 raised to the power n.
- The binomial coefficient B(n,k) for n fixed and k varying from 0 to n.
Perhaps the simplest way to describe this problem in more detail is to give you the comments and header from our code.
First, 2 raised to the power n.
/**
 * Table of 2-to-the-power-n.
 *
 * Output for n = 10:
 *
 * 2 to power 0 = 1
 * 2 to power 1 = 2
 * 2 to power 2 = 4
 * 2 to power 3 = 8
 * 2 to power 4 = 16
 * 2 to power 5 = 32
 * 2 to power 6 = 64
 * 2 to power 7 = 128
 * 2 to power 8 = 256
 * 2 to power 9 = 512
 * 2 to power 10 = 1024
 */
public void TableOfPowerOf2(int n)
{
    ...
}
Second, the binomial coefficient B(n,k).
/**
 * The binomial coefficients B(n,k) arise as the coefficients
 * in the expansion of (x + y) to the power n.
 *
 * B(n,0) = 1.
 *
 * B(n,k) = B(n,k-1) * (n-k+1) / k, for 0 < k <= n.
 *
 * Equivalently, B(n,k) = n!/(k! * (n-k)!), but the formulas
 * above are more efficient for computing an entire table.
 *
 * Output for n = 10:
 *
 * B(10,0) = 1
 * B(10,1) = 10
 * B(10,2) = 45
 * B(10,3) = 120
 * B(10,4) = 210
 * B(10,5) = 252
 * B(10,6) = 210
 * B(10,7) = 120
 * B(10,8) = 45
 * B(10,9) = 10
 * B(10,10) = 1
 *
 * For example, the coefficient of x^4 * y^6 in (x + y)^10 is
 * 210.
 */
public void BinomialTable(int n)
{
    ...
}
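For reference, here is a minimal sketch (not the official solution) of the binomial table computed with java.math.BigInteger, using the recurrence from the comment above; the class name is just illustrative.
import java.math.BigInteger;

public class BinomialSketch {
    /** Prints B(n,k) for k = 0..n using the incremental recurrence. */
    public static void binomialTable(int n) {
        BigInteger b = BigInteger.ONE;                    // B(n,0) = 1
        for (int k = 0; k <= n; k++) {
            System.out.println("B(" + n + "," + k + ") = " + b);
            // step from B(n,k) to B(n,k+1): multiply by (n-k), divide by (k+1)
            b = b.multiply(BigInteger.valueOf(n - k))
                 .divide(BigInteger.valueOf(k + 1));
        }
    }

    public static void main(String[] args) {
        binomialTable(10);
    }
}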
You can see that for each table, I have provided example output as well as a discussion of what the computation should do. This is an example of where the methodology taught in Fundamentals really pays off.
Problem 2: Binomial Revisited
In solving Problem 1, you never actually define a “Binomial function”. Rather, you accumulate mathematical results and print them while taking advantage of prior computations to get the next value. In this problem, we want you to define such a function. Depending on whether you want to use the Java class BigInteger or the JPT class XBigInteger, the header will be:
public static BigInteger Binomial(int n, int k)
or
public static XBigInteger Binomial(int n, int k)
The keyword static simply means that the function is a stand-alone function that may be called without building another object. Mathematical functions are usually static, so rather than build bad habits we introduce the keyword here.
Note that you must make a choice here. You cannot define two functions with the same name and argument types that differ only in the type of the return value.
Some additional specifications and suggestions:
- The function should return “zero” if k is negative or greater than n.
- Binomial(n,k) equals Binomial(n,n-k), as the sample in Problem 1 indicates. If you can, use this fact to make the computation more efficient.
Speaking of efficiency, you will later learn that there are ways to save information from one binomial computation to be used in another such computation. This is called “memoization”.
Binomial in Scheme
For comparison, you can see how the binomial problems are solved in scheme, by clicking on: Binomial In Scheme .
Problem 3: Dice and Random Throws
In the Playing Cards sample program, you can see how to load images from the JPT web site. There is a a directory there with dice images:
In this directory, there are 7 images named die0.jpg ... die6.jpg. The image “die0.jpg” is blank and acts as a placeholder so that when the other images are loaded, their index in the array is equal to their value as one of the 6 dice possibilities. Here is a snapshot that shows all 7 images.
We want you to make a simple program that will use random numbers to select 2 dice values and then display the corresponding pair of images on the screen.
When the program starts up, it should look something like this:
After one or more tosses of the dice, it should look something like this:
To show you that getting the dice images is simple, we will give you the code for the program that displayed all 7 dice images shown above.
public class Methods extends JPF {
    public static void main(String[] args) {
        LookAndFeelTools.adjustAllDefaultFontSizes(5);
        new Methods();
    }

    /** The dice URL. */
    private String diceURL = "";

    /** The image list file name for reading the dice. */
    private String diceList = "imagelist.txt";

    /** The dice as an ImagePaintableLite[]. */
    private ImagePaintableLite[] dice =
        WebImageTools.readImagesAsPaintableLite(diceURL, diceList);

    /** The size of the dice array. */
    private int N = dice.length;

    public void ShowAllDice() {
        int gap = 10;
        HTable table = new HTable(dice, gap, gap, CENTER);
        if (N == 0)
            table.addObject("Dice images failed to load from the web", 0, 0);
        table.setBackground(Colors.tan);
        table.frame("All Dice");
    }
}
The above program reads all dice images into an array and then loads the entire array into a horizontal table which is then framed. You will need to be more subtle since you only want to show 2 dice at a time and you want to be able to change the dice afterwards. Notice, also, how the above program checks the size N of the array and installs an error message if the images could not be loaded. When loading from the web, one can never be certain that the requested task has succeeded.
The key to being able to change dice images is to create two wrappers, specifically Tile objects, and put the wrappers into the GUI. Then afterwards you can change the images in the tiles and the GUI will update automatically.
The two tiles are defined as follows as member data.
/** The tile for die #1. */
private Tile die1 = null;

/** The tile for die #2. */
private Tile die2 = null;
Notice that we have only named the tiles here. We have not yet constructed them. That is because until we check N in the constructor, we do not know whether or not the dice images loaded.
The constructor looks like this.
public Dice() {
    if (N == 0) {
        addObject("Dice images failed to load from the web");
        return;
    }
    makeTiles();
    makeGUI();
}
Notice that we don't attempt to make tiles if we have no dice images.
The method makeTiles is also simple.
/** The method to make the initial blank tiles. */
private void makeTiles() {
    die1 = new Tile(dice[0]);
    die2 = new Tile(dice[0]);
}
Initially, the tiles are blank since they both use the blank image that is stored in dice[0]. To change which dice image is showing in a tile, use the pattern that is illustrated with the tile die1.
die1.setPaintable(dice[a]);
Here a is a random int between 1 and 6 that may be obtained by the method call:
a = MathUtilities.randomInt(1, 6);
What you need to do is put all this together. There should be a SimpleAction that is used to create the button with the label “Toss Dice”. The two dice tiles should be in a horizontal table, and then that table and the simple action should be placed in a vertical table.
You will of course need to code a method private void tossDice() that actually implements the simple action; a sketch follows.
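For example, tossDice might look something like the following sketch, which uses only the tiles, the dice array, and MathUtilities.randomInt shown above:
/** A sketch of the toss action. */
private void tossDice() {
    die1.setPaintable(dice[MathUtilities.randomInt(1, 6)]);
    die2.setPaintable(dice[MathUtilities.randomInt(1, 6)]);
}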
The main program method is very simple.
/** The main program. */
public static void main(String[] args) {
    new Dice().frame("Random Dice");
}
The call new Dice() will build the tiles and the Dice program GUI, which is then put in a frame with the title “Random Dice”.
|
http://www.ccs.neu.edu/jpt/fhs-06-07/Exercise1.htm
|
CC-MAIN-2015-27
|
refinedweb
| 1,426
| 63.19
|
Type: Posts; User: devindadude
How do you append to a JTextField with a JButton without the previous text being deleted
for example if i had
JButton num0 = new JButton("0")
JButton num1 = new JButton("1")
JTextField text =...
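One common answer (a minimal sketch, assuming plain Swing and Java 8+ lambdas; the class and variable names are just illustrative): read the field's current text with getText() and write it back with setText() plus the new digit, instead of replacing it.
import javax.swing.*;
import java.awt.*;

public class AppendDemo {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Append demo");
        JTextField text = new JTextField(10);
        JButton num0 = new JButton("0");
        // setText(getText() + ...) appends instead of overwriting
        num0.addActionListener(e -> text.setText(text.getText() + num0.getText()));
        frame.setLayout(new FlowLayout());
        frame.add(text);
        frame.add(num0);
        frame.pack();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}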
The numbers were for creating my own ArrayList that contained a bank of number strings, for the computer to randomly generate one as the secret number for the user to guess.
That is true, thank you very much; I didn't think about that.
I am confused about how to write a boolean that will end the game if the guess is correct. If anyone can help, I would greatly appreciate it.
My teacher calls it baseball lol, so I made a baseball class...
I think I see what you are getting at, because I have the recall of my hidden word in the while loop, so it goes every time.
I wrote the code, and the method for the StringBuffer is this:
public StringBuffer choseWord(String s) { // method that makes the string dashes
    hideWord = new StringBuffer(s.length());
    ...
The nested statements are indented; it is just the way the code was copied into the thread.
I'm guessing it does this in the loop somehow when I do the guesses one by one; I'm just not understanding...
I don't know if you have tried to compile it or not, but it happens when I compile the code. After I enter my first guess it adds more dashes to the word even though it stays the same, thus not allowing...
It is like hangman, just a word-guessing game with unlimited tries until you get the word right. The word is chosen from my ArrayList of words and disguised by dashes.
When I run the program...
My code is not updating right after I enter a guess. Can someone help me out? I can't think of what would be wrong.
This is my hangman class
import java.util.*;
import java.io.*;
public...
|
http://www.javaprogrammingforums.com/search.php?s=57d9388d560e13aefe7f1426a0e67a2b&searchid=1429442
|
CC-MAIN-2018-05
|
refinedweb
| 360
| 67.12
|
Python's system interfaces center on two standard modules: sys and os. That's somewhat oversimplified; other standard modules belong to this domain too (e.g., glob, socket, thread, time, fcntl), and some built-in functions are really system interfaces as well (e.g., open). But sys and os together form the core of Python's system tools arsenal.
Roughly, sys exports components related to the Python interpreter itself (e.g., the module search path), and os contains tools related to the enclosing operating system.
The os module also attempts to provide a portable programming interface to the underlying operating system -- its functions may be implemented differently on different platforms, but they look the same everywhere to Python scripts. In addition, the os module exports a nested submodule, os.path, that provides a portable interface to file and directory processing tools.
The sys and os modules form the core of much of Python's system-related toolset. Let's now take a quick, interactive tour through some of the tools in these two modules, before applying them in bigger examples.
sys includes both informational names and functions that take action. For instance, its attributes give us the name of the underlying operating system the platform code is running on, the largest possible integer on this machine, and the version number of the Python interpreter running our code:
C:\...\PP2E\System>python
>>> import sys
>>> sys.platform, sys.maxint, sys.version
('win32', 2147483647, '1.5.2 (#0, Apr 13 1999, 10:51:12) [MSC 32 bit (Intel)]')
>>>
>>> if sys.platform[:3] == 'win': print 'hello windows'
...
hello windows
Scripts can test the sys.platform string as done here; although most of Python is cross-platform, nonportable tools are usually wrapped in if tests like the one here. For instance, we'll see later that program launch and low-level console interaction tools vary per platform today -- simply test sys.platform to pick the right tool for the machine your script is running on.
The sys module also lets us inspect the module search path, both interactively and within a Python program, as sys.path. The sys.path list is simply initialized from your PYTHONPATH setting plus system defaults when the interpreter is first started up. In fact, you'll notice quite a few directories that are not on your PYTHONPATH if you inspect sys.path interactively.
Besides the tools os contains, os serves as a largely portable interface to your computer's system calls: scripts written with os and os.path can usually be run on any platform unchanged. By using the os module instead of platform-specific modules, your scripts are mostly immune to platform implementation differences.
os.getcwd gives access to the directory from which a script is started, and many file tools use its value implicitly.
sys.argv gives access to words typed on the command line used to start the program that serve as script inputs.
os.environ provides an interface to names assigned in the enclosing shell (or a parent program) and passed in to the script.
sys.stdin, sys.stdout, and sys.stderr export the three input/output streams that are at the heart of command-line shell tools.
os.getcwd lets a script fetch the CWD name explicitly, and os.chdir allows a script to move to a new CWD. The following script shows the CWD and the front of the module search path:
C:\PP2ndEd\examples\PP2E\System>type whereami.py
import os, sys
print 'my os.getcwd =>', os.getcwd( )    # show my cwd execution dir
print 'my sys.path =>', sys.path[:6]     # show first 6 import paths
raw_input( )                             # wait for keypress if clicked
Python adds an empty string ('') to the front of the module search path to designate the CWD (we met the sys.path module search path earlier):
C:\PP2ndEd\examples\PP2E\System>set PYTHONPATH=C:\PP2ndEd\examples
C:\PP2ndEd\examples\PP2E\System>
The sys module is also where Python makes available the words typed on the command line used to start a Python script. These words are usually referred to as command-line arguments, and show up in sys.argv, a built-in list of strings. C programmers may notice its similarity to the C "argv" array (an array of C strings). It's not much to look at interactively, because no command-line arguments are passed to start up Python in this mode:
>>> sys.argv
['']
The following script simply prints the argv list for inspection:
import sys
print sys.argv
C:\...\PP2E\System>python testargv.py
['testargv.py']

C:\...\PP2E\System>python testargv.py spam eggs cheese
['testargv.py', 'spam', 'eggs', 'cheese']

C:\...\PP2E\System>python testargv.py -i data.txt -o results.txt
['testargv.py', '-i', 'data.txt', '-o', 'results.txt']
For example, -i data.txt means the -i option's value is data.txt (e.g., an input filename). Any words can be listed, but programs usually impose some sort of structure on them.
Shell variables are available to Python programs through os.environ, a Python dictionary-like object with one entry per variable setting in the shell. Shell variables live outside the Python system; they are often set at your system prompt or within startup files, and typically serve as systemwide configuration inputs to programs.
Indexing os.environ by the desired shell variable's name string (e.g., os.environ['USER']) is the moral equivalent of adding a dollar sign before a variable name in most Unix shells (e.g., $USER), using surrounding percent signs on DOS (%USER%), and calling getenv("USER") in a C program. Let's start up an interactive session to experiment:
>>> import os
>>> os.environ.keys( )
['WINBOOTDIR', 'PATH', 'USER', 'PP2HOME', 'CMDLINE', 'PYTHONPATH',
 'BLASTER', 'X', 'TEMP', 'COMSPEC', 'PROMPT', 'WINDIR', 'TMP']
>>> os.environ['TEMP']
'C:\\windows\\TEMP'
The keys method returns a list of variables set, and indexing fetches the value of shell variable TEMP on Windows. This works the same on Linux, but other variables are generally preset when Python starts up. Since we know about PYTHONPATH, let's peek at its setting within Python to verify its content:
>>> os.environ['PYTHONPATH']
'C:\\PP2ndEd\\examples\\Part3;C:\\PP2ndEd\\examples\\Part2;C:\\PP2ndEd\\examples\\Part2\\Gui;C:\\PP2ndEd\\examples'
>>>
sys is also the place where the standard input, output, and error streams of your Python programs live:
>>> for f in (sys.stdin, sys.stdout, sys.stderr): print f
...
<open file '<stdin>', mode 'r' at 762210>
<open file '<stdout>', mode 'w' at 762270>
<open file '<stderr>', mode 'w' at 7622d0>
Because the print statement and raw_input function are really nothing more than user-friendly interfaces to the standard output and input streams, they are similar to using stdout and stdin in sys directly:
>>> print 'hello stdout world'
hello stdout world
>>> sys.stdout.write('hello stdout world' + '\n')
hello stdout world
>>> raw_input('hello stdin world>')
hello stdin world>spam
'spam'
>>> print 'hello stdin world>',; sys.stdin.readline( )[:-1]
hello stdin world>eggs
'eggs'
The built-in open function is the primary tool scripts use to access the files on the underlying computer system. Since this function is an inherent part of the Python language, you may already be familiar with its basic workings. Technically, open gives direct access to the stdio filesystem calls in the system's C library -- it returns a new file object that is connected to the external file, and has methods that map more or less directly to file calls on your machine. The open function also provides a portable interface to the underlying filesystem -- it works the same on every platform Python runs on.
Python also lets scripts do low-level, descriptor-based file processing (module os), store objects away in files by key (modules anydbm and shelve), and access SQL databases. Most of these are larger topics addressed in Chapter 16. In this section, we take a brief tutorial look at the built-in file object, and explore a handful of more advanced file-related topics. As usual, you should consult the library manual's file object entry for further details and methods we don't have space to cover here.
The open function is all you need to remember to process files in your scripts. The file object returned by open has methods for reading data (read, readline, readlines), writing data (write, writelines), freeing system resources (close), moving about in the file (seek), forcing data to be transferred out of buffers (flush), fetching the underlying file handle (fileno), and more. Since the built-in file object is so easy to use, though, let's jump right in to a few interactive examples.
Suppose we want to visit every file in a directory with a for loop, processing each file in turn. The trick we need to learn here, then, is how to get such a directory list within our scripts. There are at least three options: running shell listing commands with os.popen, matching filename patterns with glob.glob, and getting directory listings with os.listdir. They vary in interface, result format, and portability.
The new process copy created by os.fork is called the child process. In general, parents can make any number of children, and children can create child processes of their own -- all forked processes run independently and in parallel under the operating system's control. It is probably simpler in practice than theory, though; the Python script in Example 3-1 forks new child processes until you type a "q" at the console.
# forks child processes until you type 'q'
import os

def child( ):
    print 'Hello from child', os.getpid( )
    os._exit(0)                          # else goes back to parent loop

def parent( ):
    while 1:
        newpid = os.fork( )
        if newpid == 0:
            child( )
        else:
            print 'Hello from parent', os.getpid( ), newpid
        if raw_input( ) == 'q':
            break

parent( )
Forks, like other tools in the os module, are simply thin wrappers over standard forking calls in the C library. To start a new, parallel process, call the os.fork built-in function. Because this function generates a copy of the calling program, it returns a different value in each copy: zero in the child process, and the process ID of the new child in the parent. Programs generally test this result to begin different processing in the child only; this script, for instance, runs the child function in child processes only.
Programs can shut down explicitly with the sys.exit function:
>>> sys.exit( )        # else exits on end of script
Interestingly, sys.exit simply raises the built-in SystemExit exception. Because of this, we can catch it as usual to intercept early exits and perform cleanup activities; if uncaught, the interpreter exits as usual. For instance:
C:\...\PP2E\System>python
>>> import sys
>>> try:
...     sys.exit( )              # see also: os._exit, Tk( ).quit( )
... except SystemExit:
...     print 'ignoring exit'
...
ignoring exit
>>>
Raising the SystemExit exception with a Python raise statement is equivalent to calling sys.exit. More realistically, a try block would catch the exit exception raised elsewhere in a program; the script in Example 3-11 exits from within a processing function.
def later( ):
    import sys
    print 'Bye sys world'
    sys.exit(42)
    print 'Never reached'

if __name__ == '__main__': later( )
Because sys.exit raises a Python exception, importers of its function can trap and override its exit exception, or specify a finally cleanup block to be run during program exit processing:
C:\...\PP2E\System\Exits>python testexit_sys.py
Bye sys world

C:\...\PP2E\System\Exits>python
>>> from testexit_sys import later
>>> try:
...     later( )
... except SystemExit:
...     print 'Ignored...'
...
Bye sys world
Ignored...
>>>
Pipes allow even more dynamic communication between programs than os.popen calls and simple files do -- data can be sent between programs at arbitrary times, not only at program start and exit.
Python also provides the socket module, which lets us transfer data between programs running on the same computer, as well as programs located on remote networked machines.
Anonymous pipes are created with the os.pipe call.
Such scripts use the os.fork call to make a copy of the calling process as usual (we met forks earlier in this chapter). After forking, the original parent process and its child copy speak through the two ends of a pipe created with os.pipe prior to the fork. The os.pipe call returns a tuple of two file descriptors, representing the read and write ends of the pipe.
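As a minimal sketch of that pattern (written in Python 2 style to match the rest of this chapter; Unix only, and the read buffer size is arbitrary):
# a minimal sketch of parent/child communication over an anonymous pipe
import os

readEnd, writeEnd = os.pipe( )        # two file descriptors: read, write
pid = os.fork( )
if pid == 0:                          # child: write into the pipe
    os.close(readEnd)
    os.write(writeEnd, 'hello from child %d' % os.getpid( ))
    os._exit(0)
else:                                 # parent: read from the pipe
    os.close(writeEnd)
    print 'parent got:', os.read(readEnd, 64)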
Python provides a signal module that allows Python programs to register Python functions as handlers for signal events. This module is available on both Unix-like platforms and Windows (though the Windows version defines fewer kinds of signals to be caught). To illustrate the basic signal interface, the script in Example 3-20 installs a Python handler function for the signal number passed in as a command-line argument.
##########################################################
# catch signals in Python; pass signal number N as a
# command-line arg, use a "kill -N pid" shell command
# to send this process a signal; most signal handlers
# restored by Python after caught (see network scripting
# chapter for SIGCHLD details); signal module available
# on Windows, but defines only a few signal types there;
##########################################################

import sys, signal, time

def now( ):
    return time.ctime(time.time( ))           # current time string

def onSignal(signum, stackframe):             # python signal handler
    print 'Got signal', signum, 'at', now( )  # most handlers stay in effect

signum = int(sys.argv[1])
signal.signal(signum, onSignal)               # install signal handler
while 1: signal.pause( )                      # wait for signals (or: pass)
|
http://www.oreilly.com/catalog/9780596000851/toc.html
|
crawl-001
|
refinedweb
| 2,006
| 56.25
|
DeviceLockState
Since: BlackBerry 10.0.0
#include <bb/platform/DeviceLockState>
To link against this class, add the following line to your .pro file: LIBS += -lbbplatform
The set of possible lock states of the device.
Overview
Public Types Index
Public Types
The set of possible lock states of the device.
Since: BlackBerry 10.0.0
- Unknown 0
The current lock state is not known.
Since: BlackBerry 10.0.0
- Unlocked 1
The device is not locked at this time.
Since: BlackBerry 10.0.0
- ScreenLocked 2
The device is screen locked but not password locked. The device can be unlocked with a swipe gesture without the need to enter the device password.
Since: BlackBerry 10.0.0
- PasswordLocked 3
The device is password locked. A swipe gesture will raise a password prompt that must be completed before the device is unlocked.
Since: BlackBerry 10.0.0
- PinBlocked 4
The device is PIN-blocked. The device has been disabled by the provider.
Since: BlackBerry 10.0.0
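A minimal usage sketch follows. It assumes the usual Cascades enum-wrapper pattern where the values live under a nested Type; the function name and the way the lock state value is obtained are purely illustrative:
#include <bb/platform/DeviceLockState>

using namespace bb::platform;

// hypothetical handler; lockState is assumed to arrive from elsewhere
void handleLockState(DeviceLockState::Type lockState)
{
    if (lockState == DeviceLockState::PasswordLocked) {
        // the device needs a password before it can be used
    } else if (lockState == DeviceLockState::ScreenLocked) {
        // a swipe gesture is enough to unlock
    }
}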
|
https://developer.blackberry.com/native/reference/cascades/bb__platform__devicelockstate.html
|
CC-MAIN-2016-30
|
refinedweb
| 175
| 71.61
|
As the name suggests, the linked list comprises elements linked together by a bond or a linkage. In computer science, a linked list refers to a group of entities having data in them. These entities further point to the next entity containing some other valuable data.
Table of Contents:
- Advantages of Link List
- Link List: Concepts
- Link List in Python
- Insertion to the LinkedList
- Deletion from the LinkedList
- Disadvantages of Link List
Moreover, the order of these entities is not determined by their placement in memory; instead, each entity stores the address of the next one. In short, a linked list is a kind of data structure which consists of nodes. Each node in the list contains two things: the first is the stored data, and the second is the link to the next node. This link between two nodes is referred to as a reference or pointer. A collection of such nodes thus forms a sequence, which is called a linked list.
Advantages of Link List
As mentioned earlier, the linked list is a dynamic data structure. When working with unpredictable amounts of data, it provides the liberty of extra memory usage at run time. Furthermore, there is no need to allocate memory beforehand in case the program needs it. The linked list shrinks or grows as the program runs.
Also, a linked list proves helpful when working with complex programs. For example, it helps implement linear data structures such as queues and stacks easily, so the program runs efficiently. Moreover, addition and deletion are relatively more straightforward in a linked list than in other data structures: users can perform these operations by updating the previous node's link, without shifting elements as in arrays. It also avoids the memory overhead of preallocating space that other structures require.
Link List: Concepts
As mentioned earlier, a linked list consists of nodes. Every node has data and a pointer to the next node; the pointer part is called Next. The first node, or the starting point of the list, is called the Head, and its Next points to the second node. The Next of the last node points to None.
Link List in Python
A node is a simple data block of a list, connected to the others using pointers. Each block stores the user's data and the next node's address, so the next node can be located quickly. The Python code below shows a simple Node class which stores a data value and the "next" pointer to the next node in the list. When the user creates a Node object without arguments, the program assigns None to both data and next.
class Node:
    def __init__(self, data=None):
        self.data = data
        self.next = None
The above code only creates the individual nodes of the list. Now, the user needs to write a class for the link list to maintain and connect the nodes, saving him from remembering and changing the nodes repeatedly. This class stores the pointer to the Head of the list, i.e., the first node. The below python code shows a class of LinkedList, which creates an empty LinkedList.
class LinkedList:
    def __init__(self):
        self.head = None
The user can create a link list containing the data such as weekdays’ names using the below python code. It creates a LinkedList with the name “List1,” which stores the Head to the node containing the value “Mon.” This node points to N2, which contains “Tues,” which points to N3 containing “Wed.” The user can observe from the below code that the next pointer of node “N5” is NULL because it doesn’t point to any node in the list and hence is the last node in the LinkedList.
List1 = LinkedList()
List1.head = Node("Mon")
N2 = Node("Tue")
N3 = Node("Wed")
N4 = Node("Thurs")
N5 = Node("Fri")
List1.head.next = N2
N2.next = N3
N3.next = N4
N4.next = N5
Now that the user has created a linked list, it is essential to have a traversal function that prints the data stored in it. The print function assigns the value of Head, i.e., the first node's address, to the pointer "p". It then prints the data of each node and assigns the next node's address to "p" until it reaches the end of the list, i.e., until the pointer to the next node becomes empty. The user can see from the code above that the next pointer of node "N5" is empty because it does not point to any node.
Weekdays through Link List
def print(self):
    p = self.head
    while p is not None:
        print(p.data)
        p = p.next
Output
Mon Tues Wed Thurs Fri
Insertion to the LinkedList
As the LinkedList nodes are connected by merely pointers, the users can add anywhere in the list by inserting the node in the list and changing the pointers of the previous and next node of the list. For instance, if the user wants to add a node at the beginning of the linked list, he makes the new node point to the first node, and the Head points to the new node. The image below visually explains this concept of insertion.
The python function for insertion at the beginning is given below:
def Insert_at_beginning(self, data):
    Node4 = Node(data)
    Node4.next = self.head
    self.head = Node4
The function given above takes the data which it has to add at the beginning of the list. First, it creates a node and stores the data in it. It then assigns the Head's value, which is the address of the previous first node, to the new node's next pointer. Lastly, it assigns the address of the new node to the Head. Now Head points to the new node, which makes it the first node in the list.
Adding a Weekday into the Linked List
List1.Insert_at_beginning("Sun")
List1.print()
Output
Sun Mon Tues Wed Thurs Fri
Deletion from the LinkedList
Deletion from the linked list is as straightforward as insertion, and it wastes no space. The user can remove a node by searching through the list to find the key node and then making the previous node point to the next node instead of the key node.
In the code below, the "Delete" function takes the value that it has to delete from the list. It then compares the data of each node to find the key node. While searching, it keeps pointers to the current node's previous and next nodes. When it has found the key node, it simply makes the previous node point to the next node, skipping the current node, which removes the current node from the list.
Note: Users can also write a search function in the same style; a minimal sketch follows.
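The method below assumes the same LinkedList class defined above and simply walks the nodes until it finds a match:
def Search(self, Data):
    current = self.head
    while current is not None:
        if current.data == Data:
            return True
        current = current.next
    return False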
def Delete(self, Data):
    current = self.head
    if current is not None:
        if current.data == Data:
            self.head = current.next
            current = None
            return
    while current is not None:
        if current.data == Data:
            break
        prev = current
        current = current.next
    if current is None:
        return
    prev.next = current.next
    current = None
Deleting a Weekday from the Linked List
List1.Delete("Mon")
List1.print()
Output
Sun Tues Wed Thurs Fri
Disadvantages of Link List
While the linked list has various advantages, it has some disadvantages as well. Since it relies on pointers, it consumes more memory, because pointers need extra storage. Also, reaching a specific node in a linked list takes a while, because it does not support random access to nodes.
Furthermore, the program's time complexity increases with the use of a linked list, due to the individual access required for every node. Moreover, reverse traversal of a singly linked list is trickier than that of a doubly linked list. In the end, the linked list is a useful data structure, depending on its purpose and implementation.
|
https://pdf.co/blog/link-list-in-python
|
CC-MAIN-2020-50
|
refinedweb
| 1,306
| 72.87
|
loom 0.0.10
Define your hosts in `env.roledefs` in your `fabfile.py`:
env.roledefs = {
    'app': ['prod-web-1.example.com', 'prod-web-2.example.com'],
    'db': ['prod-db-1.example.com'],
}
You can then define any third-party Puppet modules you want in a file called `Puppetfile`:
forge ""
mod "puppetlabs/nodejs"
mod "puppetlabs/mysql"
(This is for [librarian-puppet], a tool for installing reusable Puppet modules. It can also install from Git, etc.)
Your own modules are put in a directory called `modules/` in the same directory as `fabfile.py`. Roles are defined in a magic module called `roles` which contains manifests for each role. (If you've used Puppet before, this is a replacement for `node` definitions.)
For example, `modules/roles/manifests/db.pp` defines what the db role is:
class roles::db {
include mysql
# ... etc
}
And that's it!
Let's set up a database server. First, bootstrap the host (in this example, the single db host you defined in `env.roledefs`):
$ fab -R db puppet.install
Then install third party Puppet modules, upload your local modules, and apply them:
$ fab -R db puppet.update puppet.apply
Every time you make a change to your modules, you can run that command to apply them. Because this is just Fabric, you can write a task in `fabfile.py` to do it too:
@task
def deploy_puppet():
execute(puppet.update)
execute(puppet.apply)
Then you could use the included "all" task to update Puppet on all your hosts:
$ fab all deploy_puppet
Contributors
------------
* [Ben Firshman]
* [Andreas Jansson]
* [Steffen L. Norgren]
- Author: Ben Firshman
- Package Index Owner: bfirsh, loom
- Package Index Maintainer: loom
- DOAP record: loom-0.0.10.xml
|
https://pypi.python.org/pypi/loom/0.0.10
|
CC-MAIN-2015-11
|
refinedweb
| 294
| 58.79
|
In this article you will learn how to make Python speak in english and other languages, we will create a Python program that converts any text we provide into speech 😀
This is an interesting experiment to discover what can be created with Python and to show you the power of Python and its modules.
How can you make Python speak?
Python provides hundreds of thousands of packages that allow developers to write pretty much any types of program. Two cross-platform packages you can use to convert text into speech using Python are PyTTSx3 and gTTS.
Together we will create a simple program to convert text into speech. This program will show you how powerful Python is as a language. It allows to do even complex things with very few lines of code.
Let’s get started!
The Libraries to Make Python Speak
In this guide we will try two different text to speech libraries:
- PyTTSx3
- gTTS (Google text to Speech API)
They are both available on the Python Package Index (PyPI), the official repository for Python third-party software. Below you can see the page on PyPI for the two libraries:
There are different ways to create a program in Python that converts text to speech and some of them are specific to the operating system.
The reason why we will be using PyTTSx3 and gTTS is to create a program that can run in the same way on Windows, Mac and Linux (cross-platform).
Let’s see how PyTTSx3 works first…
Example Using the PyTTSx3 Module
Before using this module remember to install it using pip:
pip install pyttsx3
If you are using Windows and you see one of the following error messages, you will also have to install module pypiwin32:
No module named win32com.client No module named win32 No module named win32api
You can use pip for that module too:
pip install pypiwin32
If the pyttsx3 module is not installed you will see the following error when executing your Python program:
ModuleNotFoundError: No module named 'pyttsx3'
There’s also a module called PyTTSx (without the 3 at the end), but it’s not compatible with both Python 2 and Python 3.
We are using PyTTSx3 because is compatible with both Python versions.
It’s great to see that to make your computer speak using Python you just need few lines of code:
# import the module
import pyttsx3

# initialise the pyttsx3 engine
engine = pyttsx3.init()

# convert text to speech
engine.say("I love Python for text to speech, and you?")
engine.runAndWait()
Run your program and you will hear the message coming from your computer.
With just four lines of code! (excluding comments)
Also notice the difference that commas make in your phrase. Try to remove the comma before “and you?” and run the program again.
Can you see (hear) the difference?
Also, you can use multiple calls to the say() function, so:
engine.say("I love Python for text to speech, and you?")
could be written also as:
engine.say("I love Python for text to speech") engine.say("And you?")
None of the messages passed to the say() function are spoken until the Python interpreter sees a call to runAndWait(). You can confirm that by commenting out the last line of the program.
Change Voice with PyTTSx3
What else can we do with PyTTSx?
Let’s see if we can change the voice starting from the previous program.
First of all, let’s look at the voices available. To do that we can use the following program:
import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty('voices')
for voice in voices:
    print(voice)
You will see an output similar to the one below:
<Voice id=com.apple.speech.synthesis.voice.Alex name=Alex languages=['en_US'] gender=VoiceGenderMale age=35> <Voice id=com.apple.speech.synthesis.voice.alice name=Alice languages=['it_IT'] gender=VoiceGenderFemale age=35> <Voice id=com.apple.speech.synthesis.voice.alva name=Alva languages=['sv_SE'] gender=VoiceGenderFemale age=35> <Voice id=com.apple.speech.synthesis.voice.amelie name=Amelie languages=['fr_CA'] gender=VoiceGenderFemale age=35> <Voice id=com.apple.speech.synthesis.voice.anna name=Anna languages=['de_DE'] gender=VoiceGenderFemale age=35> <Voice id=com.apple.speech.synthesis.voice.carmit name=Carmit languages=['he_IL'] gender=VoiceGenderFemale age=35> <Voice id=com.apple.speech.synthesis.voice.damayanti name=Damayanti languages=['id_ID'] gender=VoiceGenderFemale age=35> ...... .... ... etc...
The voices available depend on your system and they might be different from the ones present on a different computer.
Considering that our message is in english we want to find all the voices that support english as a language. To do that we can add an if statement inside the previous for loop.
Also to make the output shorter we just print the id field for each Voice object in the voices list (you will understand why shortly):
import pyttsx3

engine = pyttsx3.init()
voices = engine.getProperty('voices')
for voice in voices:
    if 'en_US' in voice.languages or 'en_GB' in voice.languages:
        print(voice.id)
And here are the voice IDs printed by the program:
com.apple.speech.synthesis.voice.Alex com.apple.speech.synthesis.voice.daniel.premium com.apple.speech.synthesis.voice.Fred com.apple.speech.synthesis.voice.samantha com.apple.speech.synthesis.voice.Victoria
Let's choose a female voice. To do that we use the following:
engine.setProperty('voice', voice.id)
I select the id com.apple.speech.synthesis.voice.samantha, so our program becomes:
import pyttsx3

engine = pyttsx3.init()

engine.setProperty('voice', 'com.apple.speech.synthesis.voice.samantha')
engine.say("I love Python for text to speech, and you?")
engine.runAndWait()
How does it sound? 🙂
You can also modify the standard rate (speed) and volume of the voice by setting the values of the following engine properties before the calls to the say() function.
Below you can see some examples on how to do it:
Rate
rate = engine.getProperty('rate')
engine.setProperty('rate', rate + 50)
Volume
volume = engine.getProperty('volume')
engine.setProperty('volume', volume - 0.25)
Play with voice id, rate and volume to find the settings you like the most!
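Putting these together, here is a minimal sketch that combines the voice, rate and volume settings shown above (the voice id is the one from my Mac; substitute one printed by your own system):

import pyttsx3

engine = pyttsx3.init()

# use a voice id printed by your own system (this one exists on my Mac)
engine.setProperty('voice', 'com.apple.speech.synthesis.voice.samantha')

# speak a bit faster and slightly quieter than the defaults
engine.setProperty('rate', engine.getProperty('rate') + 50)
engine.setProperty('volume', engine.getProperty('volume') - 0.25)

engine.say("I love Python for text to speech, and you?")
engine.runAndWait()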
Text to Speech with gTTS
Now, let’s create a program using the gTTS module instead.
I'm curious to see which one is simpler to use and if there are benefits in gTTS over PyTTSx3 or vice versa.
As usual we install gTTS using pip:
pip install gtts
One difference between gTTS and PyTTSx is that gTTS also provides a CLI tool, gtts-cli.
Let’s get familiar with gtts-cli first, before writing a Python program.
To see all the languages available you can use:
gtts-cli --all
That’s an impressive list!
The first thing you can do with the CLI is to convert text into an mp3 file that you can then play using any suitable applications on your system.
We will convert the same message used in the previous section: "I love Python for text to speech, and you?"
gtts-cli 'I love Python for text to speech, and you?' --output message.mp3
I’m on a Mac and I will use afplay to play the MP3 file.
afplay message.mp3
The thing I notice immediately is that the comma and the question mark don't make much difference. One point for PyTTSx3, which definitely does a better job with this.
I can use the --lang flag to specify a different language. Below you can see an example in Italian…
gtts-cli 'Mi piace programmare in Python, e a te?' --lang it --output message.mp3
…the message says: "I like programming in Python, and you?"
Now we will write a Python program to do the same thing.
# Import the gTTS module
from gtts import gTTS

# Import the os module so we can play the MP3 file generated
import os

# Generate the audio using the gTTS engine, passing the message and the language
audio = gTTS(text='I love Python for text to speech, and you?', lang='en')

# Save the audio in MP3 format
audio.save("message.mp3")

# Play the MP3 file
os.system("afplay message.mp3")
If you run the program you will hear the message.
Remember that I'm using afplay because I'm on a Mac. You can just replace it with any utility that can play sounds on your system.
Looking at the gTTS documentation, I can also read the text more slowly passing the slow parameter to the gTTS() function.
audio = gTTS(text='I love Python for text to speech, and you?', lang='en', slow=True)
Give it a try!
Change Voice with gTTS
How easy is to change the voice with gTTS?
Is it even possible to customise the voice?
It wasn't easy to find an answer to this. I have been playing a bit with the parameters passed to the gTTS() function, and I noticed that the English voice changes if the value of the lang parameter is 'en-US' instead of 'en'.
The language parameter uses IETF language tags.
audio = gTTS(text='I love Python for text to speech, and you?', lang='en-US')
The voice seems to take into account the comma and the question mark better than before.
Also, from another test, it looks like 'en' (the default language) is the same as 'en-GB'.
It looks to me like there's more variety in the voices available with PyTTSx3 compared to gTTS.
Before finishing this section I also want to show you a way to create a single MP3 file that contains multiple messages, in this case in different languages:
from gtts import gTTS
import os

audio_en = gTTS('hello', lang='en')
audio_it = gTTS('ciao', lang='it')

with open('hello_ciao.mp3', 'wb') as f:
    audio_en.write_to_fp(f)
    audio_it.write_to_fp(f)

os.system("afplay hello_ciao.mp3")
The write_to_fp() function writes bytes to a file-like object that we save as hello_ciao.mp3.
Makes sense?
Work With Text to Speech Offline
One last question about text to speech in Python.
Can you do it offline or do you need an Internet connection?
Let's first run one of the programs we created using PyTTSx3.
From my tests everything works well, so I can convert text into audio even if I’m offline.
This can be very handy for the creation of any voice-based software.
Let’s try gTTS now…
If I run the program using gTTS after disabling my connection, I see the following error:
gtts.tts.gTTSError: Connection error during token calculation: HTTPSConnectionPool(host='translate.google.com', port=443): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x11096cca0>: Failed to establish a new connection: [Errno 8] nodename nor servname provided, or not known'))
So, gTTS doesn’t work without a connection because it requires access to translate.google.com.
If you want to make Python speak offline use PyTTSx3.
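One more offline trick: pyttsx3 can also render the speech to an audio file instead of the speakers, using its save_to_file() function. The output format depends on your platform's speech engine, so the file extension below is just an assumption:

import pyttsx3

engine = pyttsx3.init()

# render the speech to a file instead of playing it through the speakers
engine.save_to_file("I love Python for text to speech, and you?", "message_offline.aiff")
engine.runAndWait()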
Conclusion
We have covered a lot!
You have seen how to use two cross-platform Python modules, PyTTSx3 and gTTS, to convert text into speech and to make your computer talk!
We also went through the customisation of voice, rate, volume and language that from what I can see with the programs we created here are more flexible with the PyTTSx3 module.
Are you planning to use this for a specific project?
Let me know in the comments below 🙂
public class Solution {
    public RandomListNode copyRandomList(RandomListNode head) {
        if (head == null) return null;
        RandomListNode first = head;
        RandomListNode nHead = new RandomListNode(first.label);
        RandomListNode second = nHead;
        while (first != null) {
            if (first.next != null) second.next = new RandomListNode(first.next.label); // copy next
            else second.next = null;
            if (first.random != null) second.random = new RandomListNode(first.random.label); // copy random
            else second.random = null;
            first = first.next;
            second = second.next;
        }
        return nHead;
    }
}
Why does this question need a map? I think it is very obvious and straightforward, and my code passed the submission. Is it wrong?
Good one for copying the "next" pointer, but not a correct one for copying the "random" pointer.
Let's say you have a linked list, for Next pointer : 1 => 2 => 3
but for Random pointer: 1 = > 2 <= 3 (1 points to 2, 2 points to null, 3 points to 2)
Totally 3 nodes.
What you are asked to do is return this:
1 => 2 => 3 (=> for Next Pointer)
↓___↑____↓ (— for Random Pointer)
You do copy the Next pointer correctly, 1 => 2 => 3, but for the Random pointer you actually created two additional Nodes with value 2.
1 => 2 => 3
↓ xxxxxxx ↓
2 xxxxxxx 2
Random Pointers of Node 1 and Node 3 point to another 2 nodes with value 2. Even though their values are the same, they are not the same Object.
If you do not understand this, try starting with Easy and Medium difficulty problems and get familiar with Java first.
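For reference, here is a minimal sketch of the map-based approach the answer above is pointing at: a HashMap from original nodes to their copies guarantees each node is cloned exactly once, so random pointers land on the shared copies instead of fresh duplicates.

import java.util.HashMap;
import java.util.Map;

public class Solution {
    public RandomListNode copyRandomList(RandomListNode head) {
        Map<RandomListNode, RandomListNode> copies = new HashMap<>();
        // first pass: create exactly one copy per original node
        for (RandomListNode node = head; node != null; node = node.next) {
            copies.put(node, new RandomListNode(node.label));
        }
        // second pass: wire next and random to the shared copies
        // (map.get(null) returns null, so null pointers carry over)
        for (RandomListNode node = head; node != null; node = node.next) {
            copies.get(node).next = copies.get(node.next);
            copies.get(node).random = copies.get(node.random);
        }
        return copies.get(head);
    }
}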
This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
On Mon, Jan 07, 2013 at 05:05:34PM -0800, Roland McGrath wrote:
> Paul's points are valid as a generic thing. But they aren't the key
> points for considering changes to libc.
>
> The entire discussion about maximum usable size is unnecessary
> fritter. We already have __libc_use_alloca, alloca_account,
> etc. (include/alloca.h) to govern that decision for libc code.
> If people want to discuss changing that logic, they can do that
> separately. But we will certainly not have more than one separate
> implementation of such logic in libc.
>
> Extending that internal API in some fashion to make it less work to
> use would certainly be welcome. That would have to be done in some
> way that doesn't add overhead when existing uses of __libc_use_alloca
> are converted to the new interface. The simplest way to do that would
> be a macro interface that stores to a local bool, which is what the
> users of __libc_use_alloca mostly do now. It would be nice to have an
> interface that is completely trivial to use like malloca is, but for
> code inside libc that ideal is less important than making sure we do
> not degrade the performance (including code size) of any of the
> existing uses.

I wrote a possible new interface below. I added a __libc_use_alloca that
tests if the current stack frame would become larger than
__MAX_ALLOCA_CUTOFF. However, it needs a change of the nptl
__libc_use_alloca. This makes alloca_account be counted twice, so I
aliased it with alloca to be counted only once.

This could be more effective than the current state, as we do not need to
track counters. (Modulo details, like that on x64 the stackinfo_get_sp
definition causes %rsp to be unnecessarily copied into %rax.)

> There are a few existing uses of alloca that use their own ad hoc code
> instead of __libc_use_alloca (misc/err.c, sunrpc/auth_unix.c, maybe
> others). Those should be converted to use __libc_use_alloca or
> whatever nicer interface is figured out.
>
> Then there are the existing uses of alloca that don't use
> __libc_use_alloca at all, such as argp/argp-help.c. Those should
> probably be converted as well, though in some situations like the argp
> ones it's a bit hard to imagine their really being used with sizes
> large enough to matter.

One technical issue is if we want to use STACKINFO_BP_DEF. It would make
getting the base pointer more portable, but it must be added to each
function that uses __libc_use_alloca.

/* TODO: switch to the latter case when __builtin_frame_address doesn't work. */
#if 1
#define STACKINFO_BP_DEF
#define stackinfo_get_bp() __builtin_frame_address (0)
#else
#define STACKINFO_BP_DEF void *__stackinfo_bp = &__stackinfo_bp;
#define stackinfo_get_bp() __stackinfo_bp
#endif

#ifdef _STACK_GROWS_DOWN
#define __STACKINFO_UB stackinfo_get_bp ()
#define __STACKINFO_LB stackinfo_get_sp ()
#endif
#ifdef _STACK_GROWS_UP
#define __STACKINFO_UB stackinfo_get_sp ()
#define __STACKINFO_LB stackinfo_get_bp ()
#endif

Then alloca can use the following:

#define __libc_use_alloca(x) \
  (__STACKINFO_UB - __STACKINFO_LB + (x) <= __MAX_ALLOCA_CUTOFF)

#define alloca_account(n, var) alloca (n)
#define extend_alloca_account(buf, len, newlen, avar) \
  extend_alloca (buf, len, newlen, avar)

And here is the new version of malloca.

/* Safe automatic memory allocation.
   Copyright (C) 2012 */
#ifndef _MALLOCA_H
#define _MALLOCA_H

#ifdef HAVE_ALLOCA_H
#include <alloca.h>
#include <stdlib.h>

#ifdef __cplusplus
extern "C" {
#endif

/* malloca(N) is a safe variant of alloca(N). It allocates N bytes of
   memory on the stack until the stack frame has __MAX_ALLOCA_CUTOFF
   bytes, and on the heap otherwise. It must be freed using freea()
   before the function returns. */
#define malloca(n) ({ \
  size_t __n__ = (n); \
  void *__r__ = NULL; \
  if (__libc_use_alloca (__n__)) \
    { \
      __r__ = alloca (__n__); \
    } \
  else \
    { \
      __r__ = malloc (__n__); \
    } \
  __r__; \
})

/* Maybe it is faster to use an unsigned comparison such as
   __r - __STACKINFO_LB <= __STACKINFO_UB - __STACKINFO_LB */
#define freea(r) do { \
  void *__r = (r); \
  if (__r && !(__STACKINFO_LB <= __r && \
               __r <= __STACKINFO_UB)) \
    free (__r); \
} while (0)

#ifdef __cplusplus
}
#endif

#else
#define malloca(x) malloc (x)
#define freea(x) free (x)
#endif

#endif /* _MALLOCA_H */
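To illustrate, a caller of the proposed interface would look roughly like this (a hypothetical function, not part of the patch; it assumes the proposed malloca.h above is included):

/* Hypothetical caller of the proposed malloca/freea interface. */
#include <string.h>

void
copy_name (const char *name)
{
  size_t len = strlen (name) + 1;
  /* On the stack while the frame is below __MAX_ALLOCA_CUTOFF,
     on the heap otherwise.  */
  char *buf = malloca (len);
  memcpy (buf, name, len);
  /* ... work with buf ... */
  freea (buf);  /* frees only when the buffer came from the heap */
}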
Introduction
In the previous article, we looked at how Python's Matplotlib library can be used for data visualization. In this article we will look at Seaborn which is another extremely useful library for data visualization in Python. The Seaborn library is built on top of Matplotlib and offers many advanced data visualization capabilities.
Though the Seaborn library can be used to draw a variety of charts such as matrix plots, grid plots, regression plots etc., in this article we will see how the Seaborn library can be used to draw distributional and categorical plots. In the second part of the series, we will see how to draw regression plots, matrix plots, and grid plots.
Downloading the Seaborn Library
The seaborn library can be downloaded in a couple of ways. If you are using the pip installer for Python libraries, you can execute the following command to download the library:
pip install seaborn
Alternatively, if you are using the Anaconda distribution of Python, you can execute the following command to download the seaborn library:
conda install seaborn
The Dataset
The dataset that we are going to use to draw our plots will be the Titanic dataset, which is downloaded by default with the Seaborn library. All you have to do is use the load_dataset function and pass it the name of the dataset.
Let's see what the Titanic dataset looks like. Execute the following script:
import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns dataset = sns.load_dataset('titanic') dataset.head()
The script above loads the Titanic dataset and displays the first five rows of the dataset using the head function. The output looks like this:
The dataset contains 891 rows and 15 columns and contains information about the passengers who boarded the unfortunate Titanic ship. The original task is to predict whether or not the passenger survived depending upon different features such as their age, ticket, cabin they boarded, the class of the ticket, etc. We will use the Seaborn library to see if we can find any patterns in the data.
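As a quick sanity check, you can confirm those dimensions yourself with a one-liner (assuming the dataset variable loaded above):

# should print (891, 15): 891 rows and 15 columns
print(dataset.shape)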
Distributional Plots
Distributional plots, as the name suggests, are types of plots that show the statistical distribution of data. In this section we will see some of the most commonly used distribution plots in Seaborn.
The Dist Plot
The distplot() shows the histogram distribution of data for a single column. The column name is passed as a parameter to the distplot() function. Let's see how the price of the ticket for each passenger is distributed. Execute the following script:
sns.distplot(dataset['fare'])
Output:
You can see that most of the tickets have been sold for between 0 and 50 dollars. The line that you see represents the kernel density estimation. You can remove this line by passing False as the parameter for the kde attribute as shown below:
sns.distplot(dataset['fare'], kde=False)
Output:
Now you can see there is no line for the kernel density estimation on the plot.
You can also pass a value for the bins parameter in order to see more or less detail in the graph. Take a look at the following script:
sns.distplot(dataset['fare'], kde=False, bins=10)
Here we set the number of bins to 10. In the output, you will see data distributed in 10 bins as shown below:
Output:
You can clearly see that for more than 700 passengers, the ticket price is between 0 and 50.
The Joint Plot
The jointplot() is used to display the mutual distribution of each column. You need to pass three parameters to jointplot(). The first parameter is the column name for which you want to display the distribution of data on the x-axis. The second parameter is the column name for which you want to display the distribution of data on the y-axis. Finally, the third parameter is the name of the data frame.
Let's plot a joint plot of the age and fare columns to see if we can find any relationship between the two.
sns.jointplot(x='age', y='fare', data=dataset)
Output:
From the output, you can see that a joint plot has three parts. A distribution plot at the top for the column on the x-axis, a distribution plot on the right for the column on the y-axis, and a scatter plot in between that shows the mutual distribution of data for both columns. You can see that there is no clear correlation between age and fare.
You can change the type of the joint plot by passing a value for the kind parameter. For instance, if instead of a scatter plot you want to display the distribution of data in the form of a hexagonal plot, you can pass the value hex for the kind parameter. Look at the following script:
sns.jointplot(x='age', y='fare', data=dataset, kind='hex')
Output:
In the hexagonal plot, the hexagon with the most points gets the darkest color. So if you look at the above plot, you can see that most of the passengers are between ages 20 and 30, and most of them paid between 10 and 50 dollars for the tickets.
The Pair Plot
The pairplot() is a type of distribution plot that basically plots a joint plot for all the possible combinations of numeric and Boolean columns in your dataset. You only need to pass the name of your dataset as the parameter to the pairplot() function as shown below:
sns.pairplot(dataset)
A snapshot of the portion of the output is shown below:
Note: Before executing the script above, remove all null values from the dataset using the following command:
dataset = dataset.dropna()
From the output of the pair plot you can see the joint plots for all the numeric and Boolean columns in the Titanic dataset.
To add information from a categorical column to the pair plot, you can pass the name of the categorical column to the hue parameter. For instance, if we want to plot the gender information on the pair plot, we can execute the following script:
sns.pairplot(dataset, hue='sex')
Output:
In the output you can see the information about the males in orange and the information about the female in blue (as shown in the legend). From the joint plot on the top left, you can clearly see that among the surviving passengers, the majority were female.
The Rug Plot
The rugplot() is used to draw small bars along the x-axis for each point in the dataset. To plot a rug plot, you need to pass the name of the column. Let's plot a rug plot for fare.
sns.rugplot(dataset['fare'])
Output:
From the output, you can see that, as was the case with the distplot(), most of the instances of the fares have values between 0 and 100.
These are some of the most commonly used distribution plots offered by Python's Seaborn library. Let's now look at some of the categorical plots in the Seaborn library.
Categorical Plots
Categorical plots, as the name suggests, are normally used to plot categorical data. Categorical plots plot the values in a categorical column against another categorical column or a numeric column. Let's see some of the most commonly used categorical plots.
The Bar Plot
The barplot() is used to display the mean value for each value in a categorical column, against a numeric column. The first parameter is the categorical column, the second parameter is the numeric column, while the third parameter is the dataset. For instance, if you want to know the mean age of the male and female passengers, you can use the bar plot as follows.
sns.barplot(x='sex', y='age', data=dataset)
Output:
From the output, you can clearly see that the average age of male passengers is just less than 40 while the average age of female passengers is around 33.
In addition to finding the average, the bar plot can also be used to calculate other aggregate values for each category. To do so, you need to pass the desired aggregate function to the estimator parameter. For instance, you can calculate the standard deviation for the age of each gender as follows:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

sns.barplot(x='sex', y='age', data=dataset, estimator=np.std)
Notice, in the above script we use the std aggregate function from the numpy library to calculate the standard deviation for the ages of male and female passengers. The output looks like this:
The Count Plot
The count plot is similar to the bar plot; however, it displays the count of the categories in a specific column. For instance, if we want to count the number of male and female passengers, we can do so using a count plot as follows:
sns.countplot(x='sex', data=dataset)
The output shows the count as follows:
Output:
The Box Plot
The box plot is used to display the distribution of categorical data in the form of quartiles. The line in the center of the box shows the median value. The range from the lower whisker to the bottom of the box covers the first quartile of the data. From the bottom of the box to the middle of the box lies the second quartile. From the middle of the box to the top of the box lies the third quartile, and finally from the top of the box to the top whisker lies the last quartile.
You can study more about quartiles and box plots at this link.
Now let's plot a box plot that displays the distribution for the age with respect to each gender. You need to pass the categorical column as the first parameter (which is sex in our case) and the numeric column (age in our case) as the second parameter. Finally, the dataset is passed as the third parameter, take a look at the following script:
sns.boxplot(x='sex', y='age', data=dataset)
Output:
Let's try to understand the box plot for females. The first quartile starts at around 5 and ends at 22, which means that 25% of the passengers are aged between 5 and 22. The second quartile starts at around 23 and ends at around 32, which means that 25% of the passengers are aged between 23 and 32. Similarly, the third quartile starts and ends between 34 and 42, hence 25% of passengers are aged within this range, and finally the fourth or last quartile starts at 43 and ends around 65.
Passengers that do not fall within any of the quartile ranges are called outliers and are represented by dots on the box plot.
You can make your box plots fancier by adding another layer of distribution. For instance, if you want to see the box plots for the age of passengers of both genders, along with information about whether or not they survived, you can pass survived as the value to the hue parameter as shown below:
sns.boxplot(x='sex', y='age', data=dataset, hue="survived")
Output:
Now in addition to the information about the age of each gender, you can also see the distribution of the passengers who survived. For instance, you can see that among the male passengers, on average more younger people survived as compared to the older ones. Similarly, you can see that the variation among the age of female passengers who did not survive is much greater than the age of the surviving female passengers.
The Violin Plot
The violin plot is similar to the box plot; however, the violin plot allows us to display all the components that actually correspond to the data points. The violinplot() function is used to plot the violin plot. Like the box plot, the first parameter is the categorical column, the second parameter is the numeric column, while the third parameter is the dataset.
Let's plot a violin plot that displays the distribution for the age with respect to each gender.
sns.violinplot(x='sex', y='age', data=dataset)
Output:
You can see from the figure above that violin plots provide much more information about the data as compared to the box plot. Instead of plotting the quartile, the violin plot allows us to see all the components that actually correspond to the data. The area where the violin plot is thicker has a higher number of instances for the age. For instance, from the violin plot for males, it is clearly evident that the number of passengers with age between 20 and 40 is higher than all the rest of the age brackets.
Like box plots, you can also add another categorical variable to the violin plot using the hue parameter as shown below:
sns.violinplot(x='sex', y='age', data=dataset, hue='survived')
Now you can see a lot of information on the violin plot. For instance, if you look at the bottom of the violin plot for the males who survived (left-orange), you can see that it is thicker than the bottom of the violin plot for the males who didn't survive (left-blue). This means that the number of young male passengers who survived is greater than the number of young male passengers who did not survive. The violin plots convey a lot of information, however, on the downside, it takes a bit of time and effort to understand the violin plots.
Instead of plotting two different graphs for the passengers who survived and those who did not, you can have one violin plot divided into two halves, where one half represents the surviving passengers while the other half represents the non-surviving ones. To do so, you need to pass True as the value for the split parameter of the violinplot() function. Let's see how we can do this:
sns.violinplot(x='sex', y='age', data=dataset, hue='survived', split=True)
The output looks like this:
Now you can clearly see the comparison between the age of the passengers who survived and who did not for both males and females.
Both violin and box plots can be extremely useful. However, as a rule of thumb if you are presenting your data to a non-technical audience, box plots should be preferred since they are easy to comprehend. On the other hand, if you are presenting your results to the research community it is more convenient to use violin plot to save space and to convey more information in less time.
The Strip Plot
The strip plot draws a scatter plot where one of the variables is categorical. We have seen scatter plots in the joint plot and the pair plot sections where we had two numeric variables. The strip plot is different in a way that one of the variables is categorical in this case, and for each category in the categorical variable, you will see scatter plot with respect to the numeric column.
The stripplot() function is used to plot the strip plot. Like the box plot, the first parameter is the categorical column, the second parameter is the numeric column, while the third parameter is the dataset. Look at the following script:
sns.stripplot(x='sex', y='age', data=dataset)
Output:
You can see the scattered plots of age for both males and females. The data points look like strips. It is difficult to comprehend the distribution of data in this form. To better comprehend the data, pass True for the jitter parameter, which adds some random noise to the data. Look at the following script:
sns.stripplot(x='sex', y='age', data=dataset, jitter=True)
Output:
Now you have a better view for the distribution of age across the genders.
Like violin and box plots, you can add an additional categorical column to the strip plot using the hue parameter as shown below:
sns.stripplot(x='sex', y='age', data=dataset, jitter=True, hue='survived')
Again you can see there are more points for the males who survived near the bottom of the plot compared to those who did not survive.
Like violin plots, we can also split the strip plots. Execute the following script:
sns.stripplot(x='sex', y='age', data=dataset, jitter=True, hue='survived', split=True)
Output:
Now you can clearly see the difference in the distribution for the age of both male and female passengers who survived and those who did not survive.
The Swarm Plot
The swarm plot is a combination of the strip and the violin plots. In swarm plots, the points are adjusted in such a way that they don't overlap. Let's plot a swarm plot for the distribution of age against gender. The swarmplot() function is used to plot the swarm plot. Like the box plot, the first parameter is the categorical column, the second parameter is the numeric column, while the third parameter is the dataset. Look at the following script:
sns.swarmplot(x='sex', y='age', data=dataset)
You can clearly see that the above plot contains scattered data points like the strip plot and the data points are not overlapping. Rather they are arranged to give a view similar to that of a violin plot.
Let's add another categorical column to the swarm plot using the hue parameter.
sns.swarmplot(x='sex', y='age', data=dataset, hue='survived')
Output:
From the output, it is evident that the ratio of surviving males is less than the ratio of surviving females. Since for the male plot, there are more blue points and less orange points. On the other hand, for females, there are more orange points (surviving) than the blue points (not surviving). Another observation is that amongst males of age less than 10, more passengers survived as compared to those who didn't.
We can also split swarm plots as we did in the case of strip and box plots. Execute the following script to do so:
sns.swarmplot(x='sex', y='age', data=dataset, hue='survived', split=True)
Output:
Now you can clearly see that more women survived, as compared to men.
Combining Swarm and Violin Plots
Swarm plots are not recommended if you have a huge dataset since they do not scale well because they have to plot each data point. If you really like swarm plots, a better way is to combine two plots. For instance, to combine a violin plot with swarm plot, you need to execute the following script:
sns.violinplot(x='sex', y='age', data=dataset)
sns.swarmplot(x='sex', y='age', data=dataset, color='black')
Output:
Conclusion
Seaborn is an advanced data visualization library built on top of the Matplotlib library. In this article, we looked at how we can draw distributional and categorical plots using the Seaborn library. This is Part 1 of the series of articles on Seaborn. In the second article of the series, we will see how to play around with grid functionalities in Seaborn and how to draw Matrix and Regression plots.
probe_irq_on, probe_irq_off - safe probing for IRQs
#include <linux/interrupt.h>

unsigned long probe_irq_on(void);
int probe_irq_off(unsigned long irqs);
Usage

probe_irq_on() turns on IRQ detection. It operates by enabling all interrupts which have no handlers, while keeping the handlers for those interrupts NULL. The kernel's generic interrupt handling routine will disable these IRQs when an interrupt is received on them. probe_irq_on() adds each of these IRQ numbers to a vector which it will return. It waits approximately 100ms for any spurious interrupts that may occur, and masks these from its vector; it then returns this vector to its caller.

probe_irq_off() tests an internal list of enabled IRQs against its irqs parameter, which should be the value returned by the last probe_irq_on(). This function basically detects which IRQs have been switched off, and thus which ones have received interrupts.

Example

This explanation may seem a bit confusing, so here is an example of code the mythical FUBAR 2000 driver could use to probe for IRQs:

unsigned long irqs;
int irq;

irqs = probe_irq_on();
outb(FB2K_GIVE_ME_AN_INTERRUPT_OR_GIVE_ME_DEATH, FB2K_CONTROL_PORT);

/* the interrupt could take a while to occur */
udelay(1000);

irq = probe_irq_off(irqs);
if (irq == 0) {
    printk("fb2k: could not detect IRQ.\n");
    printk("fb2k: Installation failed.\n");
} else if (irq == -1) {
    printk("fb2k: multiple IRQs detected.\n");
    printk("fb2k: Installation failed.\n");
} else {
    fb2k_dev->irq = irq;
    printk("fb2k: using probed IRQ %d.\n", irq);
}
probe_irq_on() returns a bitmap of all unhandled IRQs (except those which are receiving spurious interrupts). This value should only be used as a parameter to the next call to probe_irq_off().

probe_irq_off() returns the IRQ number of whichever unhandled interrupt has occurred since the last probe_irq_on(). If no interrupts have occurred on any of the marked IRQs, 0 is returned; if interrupts have occurred on more than one of these IRQs, -1 is returned.
Linux 1.2+. These functions are not available on m68k-based machines.
request_irq(9) arch/*/kernel/irq.c
Neil Moore <amethyst@maxwell.ml.org>
As mentioned above, these functions are not available on m68k-based machines. This manpage is way too confusing.
cx_Freeze / doc / cx_Freeze.rst
cx_Freeze
Abstract

cx_Freeze is a set of utilities for freezing Python scripts into executables. It requires Python 2.3 or higher since it makes use of the zip import facility which was introduced in that version.
cx_Freeze is distributed under an open-source license.
Using cx_Freeze
There are three different ways to use cx_Freeze. The first is to use the included cxfreeze script which works well for simple scripts. The second is to create a distutils setup script which can be used for more complicated configuration or to retain the configuration for future use. The third method involves working directly with the classes and modules used internally by cx_Freeze and should be reserved for complicated scripts or extending or embedding. Each of these methods is described in greater detail below.
There are three different options for producing executables as well. The first option is the only one that was available in earlier versions of cx_Freeze, that is appending the zip file to the executable itself. The second option is creating a private zip file with the same name as the executable but with the extension .zip. The final option is the default which is to create a zip file called library.zip and place all modules in this zip file. The final two options are necessary when creating an RPM since the RPM builder automatically strips executables. These options are described in greater detail below as well.
cxfreeze script
The cxfreeze script is included with other Python scripts. On Windows and the Mac this is in the Scripts subdirectory of your Python installation whereas on Unix platforms this in the bin directory of the prefix where Python is installed.
Assuming you have a script called hello.py which you want to turn into an executable, this can be accomplished by this command:
cxfreeze hello.py --target-dir dist
Further customization can be done using the following options:
distutils setup script
In order to make use of distutils a setup script must be created. This is called setup.py by convention although it need not be called that. A very simple script might use the following:
from cx_Freeze import setup, Executable

setup(
    name = "hello",
    version = "0.1",
    description = "the typical 'Hello, world!' script",
    executables = [Executable("hello.py")])
cx_Freeze creates two new commands, build_exe and install_exe, and modifies several of the standard distutils commands, as described below.
distutils commands
build
This command is a standard command which has been modified by cx_Freeze to build any executables that are defined. The following options were added to the standard set of options for the command:
build_exe
This command performs the work of building an executable or set of executables. It can be further customized:
install
This command is a standard command which has been modified by cx_Freeze to install any executables that are defined. The following options were added to the standard set of options for the command:
install_exe
This command performs the work installing an executable or set of executables. It can be used directly but most often is used when building Windows installers or RPM packages. It can be further customized:
bdist_msi
This command is a standard command in Python 2.5 and higher which has been modified by cx_Freeze to handle installing executables and their dependencies. The following options were added to the standard set of options for the command:
bdist_rpm
This command is a standard command which has been modified by cx_Freeze to ensure that packages are created with the proper architecture for the platform. The standard command assumes that the package should be architecture independent if it cannot find any extension modules.
cx_Freeze.Executable
The options for the build_exe command are the defaults for any executables that are created. The options for the Executable class allow specification of the values specific to a particular executable. The arguments to the constructor are as follows:
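To give a concrete feel for these arguments, here is a hedged sketch of a setup script that customizes a single executable; base, icon and targetName are constructor arguments documented for the 4.x series, but verify the exact names against your cx_Freeze version:

from cx_Freeze import setup, Executable

# A sketch only: verify base, icon and targetName against your cx_Freeze version.
hello = Executable(
    "hello.py",
    base = "Win32GUI",       # suppresses the console window for Windows GUI apps
    icon = "hello.ico",      # assumed icon file shipped next to setup.py
    targetName = "hello.exe")

setup(
    name = "hello",
    version = "0.1",
    description = "the typical 'Hello, world!' script",
    executables = [hello])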
Reading comments of LinkedIn wall post
Hi,
In one of my Grails projects, I needed to show the comments on any wall post of LinkedIn through the API. I used the Java wrapper to connect a LinkedIn account with the Grails application, which can be seen here. But somehow this library was not working when we needed to fetch comments from a wall post and display them in our UI.
I searched a lot about it but couldn't find anything appropriate, so I decided to use the LinkedIn API directly for retrieving the data. To make GET calls, I used the Scribe Java library, which can be downloaded from here.
To make API calls on LinkedIn, we need an authenticated account's access_token and access_secret, which can be obtained by connecting a LinkedIn account with the application as mentioned in this post.
Code to fetch comments on LinkedIn Wall post :-
[java]
String consumerKey = CONSUMER_KEY // key obtained from the LinkedIn app
String consumerSecret = CONSUMER_SECRET // secret obtained from the LinkedIn app
String accessToken = 'access_token'
String accessSecret = 'access_secret'
String postId = POST_ID // id of the wall post

OAuthService service = new ServiceBuilder()
        .provider(LinkedInApi.class)
        .apiKey(consumerKey)
        .apiSecret(consumerSecret)
        .debug()
        .build();

// relative resource path; prepend the LinkedIn API base URL
String url = "${postId}/update-comments?format=json";

OAuthRequest request = new OAuthRequest(Verb.GET, url);
org.scribe.model.Token token = new org.scribe.model.Token(accessToken, accessSecret)
service.signRequest(token, request);
Response response = request.send();

String jsonResponse = response.getBody()
def updates = JSON.parse(jsonResponse) // contains comments data in JSON format
updates.values.each { def commentData ->
    println "Comment : ${commentData.comment}"
    println "Creator: ${commentData.person.firstName}"
}
[/java]
This code will fetch the comments from any LinkedIn wall post. It worked in my case.
Hope it helps.
Like many people with very large Visual Studio projects, we in the ReSharper team wanted to get an overview of the 300+ project dependencies for ReSharper itself. Unfortunately, VS Ultimate gave us a representation that was rather difficult to interpret:
What we ended up doing is building our own tool for viewing project dependencies as well as comparing architectural snapshots as the solution continues to change and evolve.
Here’s how it works. First of all, with your solution open, go to ReSharper|Architecture|Build Architecture Graph:
ReSharper then goes through your solution and, without compiling anything, presents a dependency diagram of all the projects in the solution:
The great thing about the way R# does it is the fact that layout is calculated automatically to present an optimal illustration of dependencies between elements. ReSharper builds the dependency graph, and if the Show Code Metrics option is on (it is on by default) then reference analysis happens asynchronously — ReSharper analyzes the whole solution and indicates the strength of coupling between projects. This all happens on the background thread, so you can continue editing code, navigating etc. while ReSharper does its work.
There are two kinds of arrows linking the projects: black arrows show project references, whereas grey arrows show unused references that can be safely removed without breaking the build.
The architecture diagram gets rendered for the scope you’ve selected in Solution Explorer: if you select just a folder with several projects, only those projects will be included. However, there is always a way to fine-tune which projects are represented by checking the appropriate boxes:
Ticking each of the boxes shows or hides the appropriate elements on the diagram. This allows us to instantly update the representation of connections between different subsystems as soon as they are checked or unchecked in the list on the left.
The toolbar shown above has several options for how information is presented on the graph. First of all, there are several ways of grouping information:
The above options are None (meaning no grouping is done), Solution folders for grouping by solution folder, and File structure for groupings based on actual directory structure and the projects’ position therein.
The following toolbar buttons are also available:
Collapse Graph and Expand Graph buttons let you collapse or expand an area where a grouping is used. A collapsed group looks simply like this:
As an alternative, one can simply double-click the + or - button on the group to expand or collapse it.
The Show/Hide Code Metrics button lets you show and hide metrics associated with the code. These metrics show used and unused references (black and grey arrows respectively) and also show the number of references from one project to another:
You can also select the indicator, right-click it and use Show Usages… to find out where these referencing calls actually occur.
Show Transitive References shows not only direct references but all transitive references, so in addition to showing A → B → C, it will also show A → C on the diagram. This can lead to a very rich, circuit-like representation:
Save Architecture Graph lets you save the graph to a file.
Show Diff shows you a difference between two architecture graphs. In fact, you can also invoke this function (even without having a solution open) from the top-level menu:
The ability to save architecture graphs to a file allows the user to look at the difference between the architectural layout of the solution as it continues to evolve. The following illustrates the changes that have been made to a solution:
The above is an illustration of the evolution in one of ReSharper’s subsystems. Red boxes indicate parts that have been removed, whereas dark green ones indicate new parts that have been added. The same color scheme is also applied to references.
Lastly, keep in mind that project-level menus which can be invoked in Solution Explorer can be invoked on Architecture View nodes in the same manner:
As always, we hope that you find this feature useful — it is already in the latest EAP, so if you want to see it today, you know what to do. Keep in mind that this is a very fresh tool that we’ve built, and one that we’ll be sure to augment with additional architecture-related features in future releases.
I guess we should just say “Thank you”! Looks great.
Very nice – a quick observation (granted, this is early) – Azure Cloud projects don’t get picked up in this – if they are the current project scope, they are treated singularly, and if selected in the check diagram, they sit off to the side. True, there isn’t a “normal” reference as there is in “binary” projects, but it might be interesting to see if there’s a way to fit them as containers of some kind in this view, chaining through the roles they contain…
Awesome – I love it!
Wow! This is extremely cool. Thank you so much!
Nice feature – looking forward to you fixing the OutOfMemoryException when I ran it on our solution (which is reasonably large).
(yes, bugs submitted)
-dave
Wow! It’s cool.
Please add exporting graph to images.
That's absolutely awesome
But when I saw this, a new product idea popped into my mind. Restructuring project structures.
There is an existing product that analyses the structure of a project and lets the user create a new, better structure. It then makes a list of restructuring tasks to get to that new structure; the restructuring tasks are on the code level.
So my idea is: can the architectural view also be used as a refactoring tool? Meaning:
creating a new Box -> creating new Project or Namespace.
Moving classes from one box to another moves the classes from one project/namespace into another project/namespace (à la drag and drop).
This means that the boxes could be expanded, with the first level being for example the namespace hierarchy, and further down to classes. Like zooming into the code. This would be uber cool. I know that this won't happen in ReSharper 8 but I think this could be a good long-term idea.
-Thomas
This is awesome, so nice for our 100 projects solution!
The only problem is that I can't see how to zoom or print.
This feature doesn’t need VS Ultimate does it?
@Nick That’s right, VS Ultimate is not required: you can use it even with VS Professional.
Thanks everyone for the joyful comments!
@John Garland Feature request submitted. Please vote!
@David Gardiner You’re talking about this exception, right?
@Agile Hobo We’ll add exporting to images: please vote for this feature request.
@alex
Zooming in/out is currently only available with the mouse wheel. Are you expecting this functionality to be available with a slider?
Printing architecture graphs will be supported (you can vote for the feature request).
@Thomas Stocker Sounds interesting! We’ll definitely be thinking in this direction. I’ve logged your feature request in ReSharper issue tracker: here it is
Cool! But buggy. Is it worth writing up the bugs at this point, or is it still under heavy development and I can assume it’ll get more stable?
(Right-clicking an arrow and selecting “Find Usages” has no effect whatsoever; scroll bars don’t work as expected, and any attempt to use them just scrolls you toward the left/top; scroll wheel worked at first, but after I unchecked a bunch of projects, the scroll wheel stopped having any effect, even though the font was still about a 2-point size and unreadable.)
It’d also be great if I could hide all test projects at once — one click to uncheck any project that depends on NUnit.Framework.dll. But I’m not sure of the best way to do that without hard-coding a list of test frameworks.
@Joe
Thanks! Always great to know something is rated cool by you )
It certainly makes sense to file feature requests. As to bugs, I’d recommend letting a couple more nightly builds to come out, and if things don’t get better, start bombarding us with bug reports.
P.S. Just wondering, have the unit testing fixes you needed finally made it to EAP?
@Jura, yes they have — parameterized NUnit tests are much better in 8.0.8.197. I sent some feedback about one bug (might have been via “ReSharper misbehaves” rather than YouTrack) — extra crossed-out leaf nodes appear in the tree, sometimes not even parented to the test method they belong to — but so far it looks like that’s just cosmetic. The bugs I was running into (not all tests running, not all results showing in the tree) seem to be fixed, and I haven’t run into the problem with a class not getting added to the session. So far so good.
What if you don't have a big-ass solution, but multiple solutions that have references to compiled .dlls already?
@Jura: Regarding the feature request of @Alex: you could use a similar UI like in this Silverlight demo here:
(take a look at the controls in the top left area of the view)
It’s similar to the way you can navigate in mapping services and very easy to implement with what you’ve got already!
That way you don’t have to add more and more icons to some toolbar and people know what to do with these controls, already.
@Sebastian Thanks for the tip!
Now, that is a nicely implemented tool – another reason to upgrade to ReSharper8.
Just installed the trial version and this tool marks f# projects as “redundant” even if they are being used? (Sorry to mention this here, but I couldn’t find the bug page.)
If this is the case, would you mind posting a bug on our issue tracker? Thanks.
@Sean, could you please fill in a bug report here with a sample solution attached?
Link is:
Thanks!
What versions of Visual Studio are supported for this feature? I’m using VS 2008 Professional Edition, and I don’t see the Architecture option under my ReSharper menu.
Thanks!
@Tristan: this feature is only supported from Visual Studio 2010 onward.
Thanks Dmitri…looks like I’ll need to upgrade!
-t.
Is there a way to change background color of exported graph?
This tool is somehow not working for me. I have a solution with three projects in it. The graph shows me the correct dependencies between those projects, but each project is just one node and I cannot expand the nodes. There are no buttons on those nodes themselves and the “expand graph” button on the left toolbar doesn’t do anything. Neither does it do anything in single project solutions. I always end up with just one node per project.
I am using the tool with 2012.
(Also after installing Resharper 8 my VS 2010 installation didn’t work anymore but crashed on startup, so I had to remove Resharper 8 from that installation.)
I also have this issue same as Andreas.
I have no ‘Build Graph’ option under Resharper > Architecture. Just an option to see graph for project dependencies, but nothing else, no ability to drill down etc.
Am I missing something, has the instructions changed?
VS2013
@Joseph at the moment, unfortunately, there isn’t – ReSharper uses the color settings that are defined for other elements, so there is nothing in Fonts & Colors to tweak. If this is critical for you, please request this feature on our issue tracker.
@Andreas @Sam actually, at the moment, ReSharper only highlights dependencies between projects, so you cannot drill down or “expand” a project. Rest assured, we are working on having class-level dependency diagrams too! Also, there is no ‘Build Graph’ option, the ‘View Project Dependencies’ is the item you want.
Would really like to be able to save the graph in a format that makes it possible to open it in yEd Graph Editor or simular tools.
Please submit a feature request on our issue tracker — thanks!
Hi guys,
great tool!
I have a question though. I use the dark skin in Visual Studio 2013 and now the background of the architecture view is also dark. Which doesn’t look very nice. Do you happen to know which window type the architecture view uses? I can’t seem to find it.
Thanks!
I don’t think there’s a specific setting for the background of Arch View – it most likely uses the global UI color settings. Even if you found the right color switch, that would damage all the other window and might get the UI looking inconsistent.
How to Use Sentry and GitLab to Capture React Errors
James Walker
Sentry is an error-tracking platform that lets you monitor issues in your production deployments. It supports most popular programming languages and frameworks.
GitLab is a Git-based DevOps platform to manage the entire software development lifecycle. GitLab can integrate with Sentry to display captured errors. In this article, we’ll use the two services to stay ahead of issues in a React application.
Getting Set up
GitLab and Sentry both have self-hosted and SaaS options. The steps in this guide apply to both variants. We’ll assume that you’ve already got a React project ready to use in your GitLab instance.
Log in to Sentry and click the “Create Project” button in the top-right corner. Click “React” under the “Choose a platform” heading. This lets Sentry tailor example code snippets to your project.
Choose when to receive alerts using the options beneath “Set your default alert settings.” Select “Alert me on every new issue” to get an email each time an error is logged. The “When there are more than” option filters out noise created by duplicate events in a given time window.
Give your project a name in the “Project name” field. Click “Create Project” to finish your setup.
Adding Sentry to Your Codebase
Now, you need to integrate Sentry with your React code. Add the Sentry library to your project’s dependencies using npm:
npm install @sentry/react
You'll need to initialize Sentry as soon as possible in your app's JavaScript. This gives Sentry visibility into errors that occur early in the React lifecycle. Add Sentry's bootstrap script before your first ReactDOM.render() call. This is typically in index.js:
import App from "./App.js"; import React from "react"; import ReactDOM from "react-dom"; import * as Sentry from "@sentry/react"; Sentry.init({ dsn: "my-dsn" }); ReactDOM.render(App />, document.getElementById("react"));
Replace my-dsn with the DSN that Sentry displays on your project creation screen. The DSN uniquely identifies your project so that the service can attribute events correctly.
Capturing Errors
Sentry will automatically capture and report unhandled JavaScript errors. Although it can’t prevent the crash, it lets you know that something’s gone wrong before the user report arrives.
Here's an example App.js:
import React from "react"; export default () => { const data = null; return data.map((val, key) => { h1 key={key}>{val}h1>; }); };
This code is broken: data is set to null, so the map property will be undefined. We try to call data.map() regardless, so the app will crash. You should see an issue show up in Sentry.
Sentry issues include as much data about the error as possible. You can see the page URL as well as information about the user’s device. Sentry will automatically combine duplicate issues together. This helps you see whether an event was a one-off or a regular occurrence that’s impacting multiple users.
Sentry automatically fetches JavaScript source maps when they're available. If you're using create-react-app, source maps are automatically generated by npm run build. Make sure that you copy them to your web server so that Sentry can find them. You'll see pretty stack traces from the original source code instead of the obfuscated stack produced by the minified build output.
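If copying the maps to the web server isn't an option, one alternative sketch is uploading them to Sentry with the sentry-cli tool; the release name used here is a placeholder and must match the release you configure in Sentry.init():

# create a release and upload the build's source maps to Sentry
# ("my-release-1.0" is a placeholder release name)
sentry-cli releases new my-release-1.0
sentry-cli releases files my-release-1.0 upload-sourcemaps ./build/static/js
sentry-cli releases finalize my-release-1.0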
You can mark Sentry errors as Resolved or Ignored once they’ve been dealt with. You’ll find these buttons below the issue’s title and on the Issues overview page. Use Resolved once you’re confident that an issue has been fixed. Ignored is for cases where you don’t intend to address the root cause. In React sites, this might be the case for errors caused by old browser versions.
Error Boundaries
React error boundaries let you render a fallback UI when an error is thrown within a component. Sentry provides its own error boundary wrapper. This renders a fallback UI and logs the caught error to Sentry.
import * as Sentry from "sentry"; export default () => { const data = null; return ( Sentry.ErrorBoundary fallback={h1>Something went wrong.h1>}> { data.map((val, key) => { h1 key={key}>{val}h1>; }); } Sentry.ErrorBoundary> ); };
Now, you can display a warning to users when an error occurs. You’ll still receive the error report in your Sentry project.
Adding GitLab Integration
There are two sides to integrating GitLab and Sentry. First, GitLab projects have an “Error Tracking” feature that displays your Sentry error list. You can mark errors as Resolved or Ignored from within GitLab. The second part involves connecting Sentry to GitLab. This lets Sentry automatically create GitLab issues when a new error is logged.
Let's look at GitLab's Error Tracking screen first. You'll need to create a Sentry API key. Click your username in the top left of your Sentry UI, then API Keys in the menu. Click "Create New Token" in the top-right corner.
Add the following token scopes:
alerts:read
alerts:write
event:admin
event:read
event:write
project:read
This allows GitLab to read and update your Sentry errors.
Next, head to your GitLab project. Click Settings in the side menu, and then Operations. Expand the “Error tracking” section. Paste your Sentry authentication token into the “Auth Token” field and press “Connect.” If you’re using a self-hosted Sentry instance, you’ll also need to adjust the “Sentry API URI” field to match your server’s URI.
The “Project” dropdown will populate with a list of your Sentry projects. Select the correct project and press “Save changes.” You’re now ready to use Error Tracking in GitLab.
Click Operations > Error Tracking in the left sidebar. You'll see your Sentry error list. It's filtered to Unresolved issues by default. This can be changed using the dropdowns in the top-right corner. Click an error to see its detailed stack trace without leaving GitLab. There are buttons to ignore, resolve, and convert to a GitLab issue. Once you've opened a GitLab issue, you can assign that item to a team member so that the bug gets resolved.
Now, you can add the second integration component—a link from Sentry back to GitLab. Click Settings in your Sentry sidebar, and then Integrations. Find GitLab in the list and click the purple “Add Installation” button in the top-right corner. Click “Next” to see the setup information.
Back on GitLab, click your user icon in the top-right corner, followed by “Preferences.” Click “Applications” in the left side menu and add a new application. Use the details shown by Sentry in the installation setup pop-up.
GitLab will display an Application ID and Secret Key. Return to the Sentry pop-up and enter these values. Add your GitLab server URL (gitlab.com for GitLab SaaS) and enter the relative URL path to your GitLab group (e.g. my-group). The integration doesn’t work with personal projects.
Click the purple Submit button to create the integration. Sentry will now be able to display GitLab information next to your errors. This includes the commit that introduced the error, and stack traces that link back to GitLab files. Sentry users on paid plans can associate GitLab and Sentry issues with each other.
Disabling Sentry in Development
You won’t necessarily want to use Sentry when running your app locally in development. Don’t call Sentry.init() if you want to run with Sentry disabled. You can check for the presence of a local environment variable and disable Sentry if it’s set.
if (process.env.NODE_ENV === "production") {
  Sentry.init({ dsn: "my-dsn" });
}
NODE_ENV is set automatically by create-react-app. Production builds hardcode the variable to production. You can use this to selectively enable Sentry.
Enabling Performance Profiling
Sentry can also profile your app’s browser performance. Although this isn’t the main focus of this article, you can set up tracing with a few extra lines in your Sentry library initialization:
npm install @sentry/tracing
import { Integrations } from "@sentry/tracing";

Sentry.init({
  dsn: "my-dsn",
  integrations: [new Integrations.BrowserTracing()],
  tracesSampleRate: 1.0,
});
Now, you’ll be able to see performance data in your Sentry project. This can help you identify slow-running code in production.
Conclusion
Sentry lets you find and fix errors before users report them. You can get real-time alerts as problems arise in production. Stack traces and browser data are displayed inline in each issue, giving you an immediate starting point for resolution.
Combining Sentry with GitLab provides even tighter integration with the software development process. If you’re already using GitLab for project management, adding the Sentry integration lets you manage alerts within GitLab and create GitLab issues for new Sentry errors.
[Doxygen member listing for Common::SeekableSubReadStream; only fragments survived extraction. #include <substream.h>; class definition at line 69 of substream.h. pos() [inline, virtual]: "Obtains the current value of the stream position indicator of the stream"; implements Common::SeekableReadStream; definition at line 76 of substream.h. size(): "Obtains the total size of the stream, measured in bytes. If this value is unknown or can not be computed, -1 is returned"; definition at line 77 of substream.h. Other members are defined at lines 205 and 215 of stream.cpp (with defaults DisposeAfterUse::NO and SEEK_SET), and protected members at lines 71 and 72 of substream.h.]
Blynk not working on WiPy 3.0?
- hypercoffeedude last edited by
Hello everyone! I am working on a small robot built up from an old Mint+ floor cleaner, and because I can't seem to figure out how to use websockets, I decided to use Blynk for now to test things out. Unfortunately the Blynk library appears broken, unless I'm doing something wrong. My test code is basic:
import BlynkLib
import time

BLYNK_AUTH = 'MyAuthHere'
blynk = BlynkLib.Blynk(BLYNK_AUTH)

# to register virtual pins first define a handler
def v3_write_handler(value):
    print('Current slider value: {}'.format(value))

# attach virtual pin 3 to our handler
blynk.add_virtual_pin(3, write=v3_write_handler)

# start Blynk (this call should never return)
blynk.run()
I get the error:
AttributeError: 'module' object has no attribute 'Blynk'.
I'm not sure where to go from here. I haven't found anything useful on the forums or Google yet. I could've built a physical wifi controller with joysticks and buttons, using mqtt for communication, but I'm determined to figure this out. If nothing else, does anyone know of any good information on how to use websockets with the WiPy3?
@hypercoffeedude
I do not use Blynk, but a few things:
- are you sure that you don't have more than one BlynkLib on your flash?
- what is the exact error message? All Python tracebacks point to the line number with the error. In your post you show only AttributeError: 'module' object has no attribute 'Blynk'.
- this lib is for the WiPy 1 (maybe there are some differences) - but the error message is not related to this
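(For the first point, a quick way to check which BlynkLib actually gets imported and what it exports is plain MicroPython introspection; nothing here is Blynk-specific:)

import BlynkLib
print(dir(BlynkLib))  # 'Blynk' should appear in this list if the class exists

If 'Blynk' is missing from that output, the file on your flash is not the library you think it is.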
- hypercoffeedude last edited by hypercoffeedude
@livius This is the one I am using:
I am on WiPy firmware version '1.17.0.b1'.
@hypercoffeedude said in Blynk not working on WiPy 3.0?:
BlynkLib
which BlynkLib are you trying to use?
As with vptr jamming, the polymorphic behavior of an object instance is controlled by which vtbl[] its vptr is pointing to. It's not a big leap from that to realize that by testing the value of the vptr, the type of the object can be determined.
The usual way to determine the derived type of an object is by doing a dynamic_cast. If the dynamic_cast to a derived class succeeds, it returns a pointer to the derived class. If it fails, it returns NULL:
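A sketch of that usual test, reconstructed here since the original inline example was lost in extraction (it assumes the struct A / struct B pair shown in the next snippet):

B *b = dynamic_cast<B*>(a);
if (b != NULL)
{
    // success: a really points to a B (or something derived from B)
}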
dynamic_cast is slooooww. For example:
struct A { virtual int foo(); };
struct B : A { int foo(); };

int test(A *a)
{
    return dynamic_cast<B*>(a) != 0;
}

Here's the generated assembly code for the test to see if a is really an instance of B:

    mov  EAX,4[ESP]
    test EAX,EAX
    je   L24
    push 0
    push offset FLAT:___ti?AUB@@
    push offset FLAT:___ti?AUA@@
    push EAX
    mov  ECX,[EAX]
    push dword ptr -4[ECX]
    call near ptr ?__rtti_cast@@YAPAXPAX0PBD1H@Z
    add  ESP,014h
    jmp  short L26
L24:
    xor  EAX,EAX
L26:
    neg  EAX
    sbb  EAX,EAX
    neg  EAX
    ret

There are lots of instructions being executed, and a function call. It also relies on RTTI being generated for the class, which is bbllooaatt.
If only we could snipe the RTTI and figure out the type directly. If we've got the need for speed, we can do the following:

B tmp;

int test(A *a)
{
    return *(void**)a == *(void**)&tmp;
}

All this does is compare the vptr in a with the vptr in tmp. Most compilers put the vptr as the first member in a class most of the time, so this will work. When it doesn't, adjust the offset to a and &tmp to match.
The generated assembler code looks like:

    mov  EAX,4[ESP]
    mov  ECX,[EAX]
    cmp  ECX,?tmp@@3UB@@A
    mov  EAX,1
    je   L15
    xor  EAX,EAX
L15:
    ret

Holy hotrod, Batman! That brought the test for the type down to two instructions. We can even do slightly better. The Digital Mars C++ compiler has special support for RTTI sniping with the __istype pseudo member function:

int test(A *a)
{
    return a->__istype(B) != 0;
}

and we're down to one instruction:

    mov  EAX,4[ESP]
    cmp  dword ptr [EAX],offset FLAT:??_QB@@6B@[4]
    mov  EAX,1
    je   L13
    xor  EAX,EAX
L13:
    ret

The obvious question is, why doesn't dynamic_cast produce the short, fast code? The answer is that RTTI sniping only works if the class type being tested for is the most derived class in the class hierarchy (because that determines the vtbl[]), whereas dynamic_cast needs to work for any derived class.
Once again, there are problems with RTTI sniping:

- Some compilers merge identical vtbl[]s between classes, so the vptr for class B and class A, where B is derived from A, can point to the same value. This clever compiler optimization must be defeated; sometimes it can be via a switch to "turn on RTTI", or by some other switch. Worst case, avoid using RTTI sniping between classes derived from one another.
- Some compilers generate separate vtbl[]s for the same class, so that two vptrs can hold different addresses, but still be the same type. Fortunately, such compilers are rarely used these days. But the problem can still crop up if one DLL generates one instance while another DLL generates another. The moral is to have all the constructors for a particular object implemented in one source file, and have that be only in one DLL, not many.
Counterfeiting this
#include "implementation.h" class Foo { private: ... the implementation ... // the interface public: void bar() { ... manipulate the implementation ... } };The trouble with this, of course, is that the implementation is still there with its bare face hanging out, and in order to compile it, every irrelevant thing that the implementation needs has to be in scope, too.
The usual workaround is the pimpl idiom, hiding the implementation behind a pointer:

// User sees this class definition
class Implementation;   // stub definition

class Foo
{
  private:
    Implementation *pimpl;

  // the interface
  public:
    Foo();
    void bar();
};

// Separate, hidden version of Foo
#include "implementation.h"

Foo::Foo() : pimpl(new Implementation()) { }

void Foo::bar() { pimpl->bar(); }
But there's a way to hide the implementation completely without having an extra object.
The idea is to counterfeit the this pointer, so that the user thinks it is one type, but the implementation knows it is another:

// User sees this class definition
class Foo
{
  // the interface
  public:
    static Foo *factory();   // create and initialize an instance
    void bar();
};

// Separate, hidden version of Foo
#include "implementation.h"

Foo *Foo::factory()
{
    return reinterpret_cast<Foo *>(new Implementation());
}

void Foo::bar()
{
    reinterpret_cast<Implementation *>(this)->bar();
}

The reinterpret_cast is doing the dirty work of counterfeiting the type of the object from Implementation to Foo and back again.
Caveats:
- Foo cannot have any data members, even hidden ones like a vptr. Therefore, it cannot have any virtual functions.
- Foo cannot have any constructors, because we aren't constructing a real Foo, only a counterfeit one.
These techniques are also applicable to the D programming language[1].
Sometimes, you just feel the need for speed.
Have an opinion on the ideas presented in this article? Please post them in the forum topic for this article, Backyard Hotrodding C++.
Integrating Jersey and Spring: Take 2
By sandoz on Feb 01, 2008
Marc previously described how to integrate Spring with Jersey 0.4 for the instantiation of root resources. This was a great first step, but it fell short in a couple of areas:
- there was some initialization code that could not be performed at initialization stage;
- it was necessary to annotate (or specify the default provider for) all resources with a Spring specific life-cycle annotation, thus it was not possible to write 'vanilla' resources for use with Spring; and
- this was only applicable to root resource classes. Jersey has other components, such as instances of MessageBodyReader/Writer, and it would be useful if those components could also be Spring-enabled.
I have spent this week unifying the instantiation of components (in addition, the requirement of META-INF/services for registration has been removed; it is all dynamic, as for root resource classes). Instantiation of any component managed in Jersey is deferred to a ComponentProvider. By default Jersey provides a basic implementation, but it is possible to provide an application-specific implementation for, say, Spring. Jersey will then adapt that implementation so that Spring-registered and non-Spring-registered components can be instantiated.
(All code referenced below only works with the latest build of Jersey.)
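For orientation, this is the rough shape of the ComponentProvider contract, paraphrased from the SpringServlet code below rather than taken from the Jersey sources:

public interface ComponentProvider {
    // Return an instance of c in the requested scope, or null to decline
    Object getInstance(Scope scope, Class c)
            throws InstantiationException, IllegalAccessException;

    // Return an instance built via a specific constructor, or null to decline
    Object getInstance(Scope scope, Constructor constructor, Object[] parameters)
            throws InstantiationException, IllegalArgumentException,
                   IllegalAccessException, InvocationTargetException;

    // Perform injection on an instance created elsewhere
    void inject(Object instance);
}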
I am still pondering the best way to declare an application-specific ComponentProvider but for now i am experimenting specifically with a Spring aware Servlet. Jersey ships with a servlet that can be extended to configure the WebApplication, which makes it very easy to extend for Spring support. Below is the code for the SpringServlet:
public class SpringServlet extends ServletContainer {
private static class SpringComponentProvider implements ComponentProvider {
private ApplicationContext springContext;
SpringComponentProvider(ApplicationContext springContext) {
this.springContext = springContext;
}
private String getBeanName(Class c) {
String names[] = springContext.getBeanNamesForType(c);
if (names.length == 0) {
return null;
} else if (names.length > 1) {
throw new RuntimeException("Multiple configured beans for "
+ c.getName());
}
return names[0];
}
public Object getInstance(Scope scope, Class c)
throws InstantiationException, IllegalAccessException {
String beanName = getBeanName(c);
if (beanName == null) return null;
if (scope == Scope.WebApplication &&
springContext.isSingleton(beanName)) {
return springContext.getBean(beanName, c);
} else if (scope == Scope.ApplicationDefined &&
springContext.isPrototype(beanName) &&
!springContext.isSingleton(beanName)) {
return springContext.getBean(beanName, c);
} else {
return null;
}
}
public Object getInstance(Scope scope, Constructor contructor,
Object[] parameters)
throws InstantiationException, IllegalArgumentException,
IllegalAccessException, InvocationTargetException {
return null;
}
public void inject(Object instance) {
}
};
@Override
protected void initiate(ResourceConfig rc, WebApplication wa) {
ApplicationContext springContext = WebApplicationContextUtils.
getRequiredWebApplicationContext(getServletContext());
wa.initiate(rc, new SpringComponentProvider(springContext));
}
}
Notice that SpringServlet extends ServletContainer and the initiate method is overridden. This method creates an ApplicationContext and then initiates the WebApplication by passing in an instance of the static inner class SpringComponentProvider. This class implements ComponentProvider and the getInstance method will attempt to obtain a Spring bean that is present and matches the requested scope, if so then the bean instance is returned otherwise null is returned. (Note that the getInstance method with a Constructor type parameter is not implemented, this is because we have not determined how to support constructors with Spring beans).
The SpringServlet can be used in a web.xml as follows:
<web-app
<listener>
<description>Spring listener that initializes the ApplicationContext in ServletContext</description>
<listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
</listener>
<servlet>
<servlet-name>Jersey Spring</servlet-name>
<servlet-class>test.spring.SpringServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Jersey Spring</servlet-name>
<url-pattern>/*</url-pattern>
</servlet-mapping>
</web-app>
Then i can create two root resource classes. The first, SingletonResource, uses the Jersey supplied singleton life-cycle:
@Path("singleton")
@Singleton
public class SingletonResource {
private String name;
private int uses = 0;
private synchronized int getCount() {
return ++uses;
}
public SingletonResource() {
name = "unset";
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@GET
@ProduceMime("text/plain")
public String getDescription() {
return "Name: " + getName() + ", Uses: " + Integer.toString(getCount());
}
}
and the second, PerRequestResource, uses the per-request life-cycle:
@Path("request")
public class PerRequestResource {
private String name;
private int uses = 0;
private synchronized int getCount() {
return ++uses;
}
public PerRequestResource() {
name = "unset";
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@GET
@ProduceMime("text/plain")
public String getDescription() {
return "Name: " + getName() + ", Uses: " + Integer.toString(getCount());
}
}
The root resources are almost identical except that the former is annotated with @Singleton, but they should return different output (as shown later).
Then i can create the Spring applicationContext.xml:
<beans xmlns="">
<bean id="bean1" scope="singleton" class="test.spring.SingletonResource">
<property name="name" value="Mr. Singleton Bean"/>
</bean>
<bean id="bean2" scope="prototype" class="test.spring.PerRequestResource">
<property name="name" value="Mr. PerRequest Bean"/>
</bean>
</beans>
Where the root resource classes are registered (with the appropriate scope) and a property is specified for each.
Then i can deploy and test using curl (the '>' character is my shell prompt):
> curl ; echo
Name: Mr. Singleton Bean, Uses: 1
> curl ; echo
Name: Mr. PerRequest Bean, Uses: 1
>
> curl ; echo
Name: Mr. Singleton Bean, Uses: 2
> curl ; echo
Name: Mr. PerRequest Bean, Uses: 1
>
> curl ; echo
Name: Mr. Singleton Bean, Uses: 3
> curl ; echo
Name: Mr. PerRequest Bean, Uses: 1
Notice that the response to the URI containing the 'singleton' path segment returns the 'Mr. Singleton Bean' property as specified in the applicationContext.xml, and similarly the URI containing the 'request' path segment returns the 'Mr. PerRequest Bean' property. Also notice that the Uses value for the singleton-based URI increments for each request where as the one for the request-based URI does not. It works!
Having to edit applicationContext.xml is something i would prefer to avoid and instead utilize some Spring specific annotations. The ComponentProvider interface could probably do with some tweaks to make things more efficient for per-request life-cycle. But in general it seems to work reasonably well and i am sure the same concepts for Spring integration would apply to Guice.
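As a sketch of that annotation-driven direction (hypothetical here: this is plain Spring 2.5 component scanning, nothing Jersey-specific), the XML bean definitions could become stereotype annotations:

import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Component;

@Component("bean2")
@Scope("prototype")
@Path("request")
public class PerRequestResource {
    // ... body as before ...
}

together with a <context:component-scan base-package="test.spring"/> element in applicationContext.xml.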
Awesome! This is just what I have been waiting for. I was using a work around for managing instances using a resource factory but this approach is much cleaner.
Posted by Aaron Anderson on February 01, 2008 at 09:52 AM CET #
Would it be possible to put the 0.6-SNAPSHOT version on a maven repository somewhere ? Or point me to it if it already exists. It would make it a lot easier to test this version. Thanks.
Posted by peter on February 03, 2008 at 06:25 AM CET #
Hi Peter,
We are currently only pushing stable early access releases to the java.net repo. The reason being is that we want to push just stable versions of the JSR expert group agreed JAX-RS API.
Version 0.6 is still under development. We have not fully implemented the JAX-RS 0.6 API (which was frozen last week) and i plan to make changes to some of the Jersey specific APIs. A stable version of 0.6 is due to be released on March 7th. 0.6 is currently only available from the latest download section of the Jersey project site. This package is updated every time a commit is performed.
Can you live with things like that until the 0.6 stable release?
Paul.
Posted by Paul Sandoz on February 04, 2008 at 10:25 AM CET #
Paul,
I understand your point. But wouldn't it be possible to have a separate snapshot repository. I don't know how hard or easy that would be for a public project. Users choosing that repository know what to expect :-)
Posted by peter on February 05, 2008 at 01:52 AM CET #
Hi Paul!
You closed the comments on the previous post, but I wanted to let you know that the OSS SDK is up here:. We also integrated both Google Guice and Apache Stripes to update the runtime model; some docs are up on the site.
Posted by Jevgeni Kabanov on February 05, 2008 at 02:13 AM CET #
Hi Peter,
I will ask for some advice on the Jersey users list as there are some maven experts lurking there. Then i will see what we can do. Perhaps we could always push the latest build out to a maven repo. Cannot promise anything just yet!
Paul.
Posted by Paul Sandoz on February 05, 2008 at 03:37 AM CET #
Hi Jevgeni,
Re: closing of comments. By default roller automatically closes the comments after 7 days. I have changed the config so it never closes them.
Re: the update. Looks like it should be reasonably easy to specify a specific JavaRebel aware WebApplication instance that implements ReloadListener. Veyr nice! I am a bit busy over the next couple of weeks but i hope i will have time to look into this before the 0.6 release on March 7th and at least get a prototype working.
BTW any interest in providing hooks for debugging with NetBeans?
Posted by Paul Sandoz on February 05, 2008 at 03:49 AM CET #
Re: NetBeans
I'm not sure if debugging with NetBeans is a problem, however both IntelliJ IDEA and Eclipse plugins are open source, so there shouldn't be need for any special hooks for NetBeans.
Posted by Jevgeni Kabanov on February 05, 2008 at 03:57 AM CET #
Hi Paul,
Would it be possible for you to list the dependency jars for you aproach to having jersey integrated with spring ?
Thanks,
Leo
Posted by leo de blaauw on February 08, 2008 at 01:48 AM CET #
Hi Leo,
The core jars for Jersey are required, as documented here [1]:
jersey/lib/asm-3.1.jar
jersey/lib/jersey.jar
jersey/lib/jsr311-api.jar
Then i included:
spring-framework-2.5.1/lib/jakarta-commons/commons-logging.jar
spring-framework-2.5.1/dist/spring.jar
Paul.
[1] *checkout*/jersey/trunk/jersey/docs/dependencies.html
Posted by Paul Sandoz on February 08, 2008 at 04:58 AM CET #
Paul,
In using your example I seem to have a problem not finding the @Singleton annotation on my classpath. I am using jersay 0.6 with all its libs on the classpath at this time ?
Regards,
Leo
Posted by leo de blaauw on February 08, 2008 at 05:01 AM CET #
Hi Leo,
What package name for @Singleton are you using?
Here are the import statements for the SingletonResource:
import com.sun.ws.rest.spi.resource.Singleton;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.ProduceMime;
Note that the @Singleton annotation is part of the Jersey API (not currently part of the JAX-RS API).
Paul.
Posted by Paul Sandoz on February 08, 2008 at 05:09 AM CET #
Hi Paul,
I've tried your example, but in my current version of jersey (February 1st) the mechanism only works for the singleton.
More specifically, both in your examples and my own code, the only spring beans for which a ComponentProvider.getInstance(.. is invoked are those annotated with @Singleton and @Path
The @Path requirement is logical.
But I also have a very specific question. What if you would take the Bookmark example, but used @PersistenceContext the spring way (meaning you would not pass the Entitymanager from UsersResource to UserResource, but inject it) - how would you do that? It seems like a case that would be useful for many people.
Posted by peter on February 11, 2008 at 02:11 AM CET #
Hi Peter,
Did you declare the scope of 'bean2' to be 'prototype' in applicationContext.xml? Are you using the same version of Spring (2.5.1) ?
Re: the bookmark example. I agree. We have been discussing a number of options:
1) Return a class that gets instantiated.
2) Return an instance, where injection can happen on that instance. AFAIK Spring only supports injection on beans it is in control of although IIRC WebBeans will allow this behaviour.
3) Allow the injection of ComponentProvider on resources so that they can instantiate an instance using that interface.
Paul.
Posted by Paul Sandoz on February 11, 2008 at 02:36 AM CET #
Hey all,
Well, prototype is not really a good option in my view for most use-cases. Mostly I would like to to access service classes at a lower level to allow them to be accessed via a REST interface. Therefore I need those services to be Singletons really. In relation to that I really would like Spring to be in full controll of the lifecycle of those service classes and manage the instantians and destroy of them as well as any transactions and security involved.
Regards,
Leo
Posted by Leo de Blaauw on February 11, 2008 at 08:03 AM CET #
I have now a version where the injection works, but I have a new problem.
A resource class, class example.UserResource$$EnhancerByCGLIB$$42882142, does not have any resource method, sub-resource method, or sub-resource locator
Cglib causes the annotations to disappear.
Do you think Jersey could look at the parent class when looking for the correct GET/PUT/DELETE annotations?
Posted by peter on February 11, 2008 at 08:36 AM CET #
Hi Leo,
It was not exactly clear to me what 'prototype' in Spring actually meant and under what scope instances were available. Plus it does not really gel well with the constructors that take parameters. So i am a bit uncomfortable with it.
By default the life-cycle in JAX-RS is per-request, but you can configure the Web application to have an init-param of:
com.sun.ws.rest.config.property.DefaultResourceProviderClass
whose value points to the @Singleton class (see JavaDoc of ResourceConfig). Then all resource classes will be Singleton by default.
Paul.
Posted by Paul Sandoz on February 11, 2008 at 08:46 AM CET #
Hi Paul,
Prototype pretty much means a freshly initialized bean on every getBean you make. Now I agree you want a JAX-RS life-cycle per request, but not for the underlying resources like services, daos and even dbconnections etc. I havent had the chance yet to try your suggestion but I will let you know if that works out. Pretty much I would just like to insert a spring managed bean into my (REST) webservice class for utilisation there.
Regards,
Leo
Posted by Leo de Blaauw on February 11, 2008 at 08:50 AM CET #
Hi Peter,
Can you send email to the Jersey users list:
users@jersey.dev.java.net
as there are others who may be able to help you as they have got hibernate working with Jersey.
Jersey does look at all public methods on the resource class and super classes. Maybe Cglib is encapsulating and not reproducing the annotations on methods?
Paul.
Posted by Paul Sandoz on February 11, 2008 at 08:53 AM CET #
Oh,
And at the point I am doing a getbean on the service class, all injection into that service class would have allready been fully completed and it would be ready to service any requests from the webservice class. Therefore constructor injection or setter injection doesnt really matter anymore.
Regards,
Leo
Posted by Leo de Blaauw on February 11, 2008 at 08:53 AM CET #
Oh,
One more ;-) Maybe its easier to discuss these things on an irc channel ? Is there a #jersey somewhere ?
Greetz
Leo
Posted by Leo de Blaauw on February 11, 2008 at 08:54 AM CET #
Hi Leo,
Thanks for the info on prototype. There is the jersey users list, no IRC unfortunately. This is the most comments i have ever had on a blog entry!
You should be able to achieve what you want using the Spring ComponentProvider as Jersey will defer instantiation and injection to Spring and will not interfere with the Spring injected stuff you have configured. It will inject additional stuff using @HttpContext. It does ask for instantiation of some web-application scoped components for reading/writing and JAXB support on initialization. Those can also be deferred to Spring for instantiation and injection, they are always singletons. Only the scope of root resource classes can be controlled by an annotation.
Paul.
Posted by Paul Sandoz on February 11, 2008 at 09:09 AM CET #
Paul, I am using NetBeans to develop RESTful web services with Spring support. Following the directions given in your blog I made an app, but the WADL is not formed, and on deployment a "wadl not found" error is shown.
Posted by Vineet on March 03, 2008 at 11:15 PM CET #
Hi Vineet,
Some questions:
What URI are you using to access the WADL?
Are you getting any exceptions printed out when you try to access the WADL?
Is JAXB in the class path?
Note that in the latest build i have removed GET support for getting the WADL of a resource. It is now only possible to use OPTIONS to obtain the WADL of a resource. The reason being is using GET caused a lot of interference with the application (and especially MVC stuff i am experimenting with) and GET with a WADL-based media type could not be consistently used (i may re-investigate this to see if there is a non-intrusive way).
Using GET for the <base URI>/application.wadl remains unchanged.
Also there is currently a bug in the latest build that inadvertently introduced a runtime dependency on the Jettison jars when created WADL. This will be fixed very soon.
Hope this helps,
Paul.
Posted by Paul Sandoz on March 04, 2008 at 04:15 AM CET #
I want to use Jersey in NetBeans 5.5.
What is the way to do it? I have tried creating a web application, added the Jersey API to the library, and edited the web.xml file, but the WADL file is still not being accessed by the browser. What else should I be doing?
Posted by Debashis Jana on March 05, 2008 at 03:48 AM CET #
Hi Debashis,
There should not be any restriction in using NetBeans 5.5 for development using Jersey. The easiest thing to do is copy an existing example in the Jersey distribution (which depends on NetBeans 6 but you can copy the relevant source/config files into a 5.5 project).
Did you get a basic service running? (for example, copying HelloWorldWebApp example distributed with Jersey).
What version of Jersey are you using?
What URL are you using to obtain the WADL?
Note that we changed the WADL generation functionality in 0.5 (and i have further changed it for the up and coming 0.6 release, ready this Friday)
Thanks,
Paul.
Posted by Paul Sandoz on March 05, 2008 at 03:55 AM CET #
Hey Paul, actually I can't access the resources either.
I was trying to get the WADL file from base url/application.wadl, but it is not found. And yeah, JAXB is in the class path and no exceptions are reported, only a message in the browser.
Posted by Vineet on March 05, 2008 at 06:00 AM CET #
Would it be possible to do something similar with Jetty and Jersey like your example with Spring and Jersey?
Thanks
Posted by Jon on March 06, 2008 at 03:33 AM CET #
Vineet,
It is hard to track the conversation via the blog would it be possible to transfer things over to email:
users@jersey.dev.java.net
?
What is the message in the browser?
Have you tried doing:
curl -v <url>
if so what is the output?
Paul.
Posted by Paul Sandoz on March 06, 2008 at 05:12 AM CET #
Hi Jon,
It should be possible as Jetty is a Web container.
I am looking for volunteers to experiment with Jetty :-) as i would like to add in process Web container support using Jetty for servlet-based unit tests.
Paul.
Posted by Paul Sandoz on March 06, 2008 at 05:15 AM CET #
Hi,
very helpful posting, this should go into the documentation :)
Just one improvement could be made to support PerRequest scoped resources that have constructor arguments:
the getInstance(Scope, Constructor, Object[]) method (which returns null in your example) should be changed to return getInstance( scope, constructor.getDeclaringClass() ).
This seems to be a difference of the PerRequestProvider and SingletonProvider.
Cheers,
Martin
Posted by Martin Grotzke on March 06, 2008 at 12:44 PM CET #
Posted by javakaffee on March 07, 2008 at 09:15 PM CET #
Do you have any plans to include the Spring integration code inside Jersey? Given how popular Spring is for IoC this would be a very good thing - plus having a little demo showing a web app using Spring for IoC with Jersey doing the JAX-RS goodness!
Posted by James Strachan on March 13, 2008 at 02:34 AM CET #
The formatting is probably gonna be lousy :) but here's an improved implementation of the SpringComponentProvider - here - or I've posted it here...
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import com.sun.ws.rest.spi.service.ComponentProvider;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.springframework.beans.factory.config.AutowireCapableBeanFactory;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;
import org.springframework.context.ConfigurableApplicationContext;
/**
 * @version $Revision: 1.1 $
 */
public class SpringComponentProvider implements ComponentProvider {
private static final transient Log LOG = LogFactory.getLog(SpringComponentProvider.class);
private final ConfigurableApplicationContext applicationContext;
private int autowireMode = AutowireCapableBeanFactory.AUTOWIRE_AUTODETECT;
private boolean dependencyCheck;
public SpringComponentProvider(ConfigurableApplicationContext applicationContext) {
this.applicationContext = applicationContext;
}
public <T> T getInstance(Scope scope, Class<T> type) throws InstantiationException, IllegalAccessException {
String name = getBeanName(type);
Object value;
if (name != null) {
if (LOG.isDebugEnabled()) {
LOG.debug("Found bean named: " + name);
}
value = applicationContext.getBean(name, type);
}
else {
LOG.debug("No bean name so using BeanFactory.createBean");
value = applicationContext.getBeanFactory().createBean(type, autowireMode, dependencyCheck);
}
return type.cast(value);
}
protected <T> String getBeanName(Class<T> type) {
String[] names = applicationContext.getBeanNamesForType(type);
String name = null;
if (names.length == 1) {
name = names[0];
}
return name;
}
public <T> T getInstance(Scope scope, Constructor<T> constructor, Object[] objects) throws InstantiationException, IllegalArgumentException, IllegalAccessException, InvocationTargetException {
return constructor.newInstance(objects);
}
public void inject(Object object) {
String beanName = getBeanName(object.getClass());
if (beanName != null) {
ConfigurableListableBeanFactory beanFactory = applicationContext.getBeanFactory();
beanFactory.configureBean(object, beanName);
}
}
// Properties
//-------------------------------------------------------------------------
public int getAutowireMode() {
return autowireMode;
}
public void setAutowireMode(int autowireMode) {
this.autowireMode = autowireMode;
}
public boolean isDependencyCheck() {
return dependencyCheck;
}
public void setDependencyCheck(boolean dependencyCheck) {
this.dependencyCheck = dependencyCheck;
}
}
Posted by James Strachan on March 13, 2008 at 03:30 AM CET #
Hi James,
I would like very much to include Spring support plus an example with the Jersey distribution. The Spring and Guice integration with Jersey has been a very popular topic. It is a resource issue rather than anything technical (although my Spring knowledge is limited, the feedback has been very helpful and instructive on how best to implement basic support), but we will strive to get something in for the 0.7 release on April 18th.
Thanks for the code update, very helpful. (BTW the link you sent returns a 404.)
Paul.
Posted by Paul Sandoz on March 18, 2008 at 03:53 AM CET #
Hi James,
is it correct, that with this SpringComponentProvider each component is created as a spring bean?
What are the consequences of these changes?
Thanx a lot,
cheers,
Martin
Posted by Martin Grotzke on March 18, 2008 at 01:19 PM CET #
Thanks for the post!
It works great on the server side but we ran into problem when we tried to access JSP resource, it showed 404 error. How should we configure web.xml?
Posted by CH L on April 16, 2008 at 08:21 PM CEST #
Hi CH L,
This is an area we need to improve.
The solution i currently use is to set the servlet-mapping URL parameter to be "/". In at least GF and Tomcat this makes the Web container check for JSP pages first, from my minimal investigations.... See the Bookstore example in the Jersey distribution for a web.xml and access to a JSP page.
I want to investigate transforming the Jersey servlet into a servlet filter as this might make it easier to integrate with existing JSP/HTML pages.
We are also working on improving the Spring integration. Martin Grotzke who has commented on this blog entry is working on it. The plan is to get this into the 0.8 release.
Hope this helps,
Paul.
Posted by Paul Sandoz on April 17, 2008 at 05:06 AM CEST #
Thanks for the quick response!
Servlet filter sounds like a good idea.
Posted by CH L on April 21, 2008 at 12:29 AM CEST #
Hi, Paul,
Nice stuff. Peter Liu has written an implementation of your example code using NB 6.1, and I'd like to blog it myself and cross-post it to NetBeans Zone, with a link back to here, if that's OK with you.
Posted by Jeff Rubinoff on May 18, 2008 at 10:51 AM CEST #
Posted by Are you being Web serviced? on May 18, 2008 at 12:02 PM CEST #
I have created a web application which contains a .jsp page and a singletone resource .whenever the .jsp page is requested it is giving the following error
Requested Url : /index.jsp
org.apache.jasper.JasperException: Unable to compile class for JSP:
An error occurred at line: 22 in the generated java file
The method getJspApplicationContext(ServletContext) is undefined for the type JspFactory
Stacktrace:
at org.apache.jasper.compiler.DefaultErrorHandler.javacError(DefaultErrorHandler.java:92)
at org.apache.jasper.compiler.ErrorDispatcher.javacError(ErrorDispatcher.java:330)
at org.apache.jasper.compiler.JDTCompiler.generateClass(JDTCompiler.java:423)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:308)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:286)
at org.apache.jasper.compiler.Compiler.compile(Compiler.java:273)
at org.apache.jasper.JspCompilationContext.compile(JspCompilationContext.java:566)
at org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:317)
at org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:320)
at org.apache.jasper.servlet.JspServlet.service(JspServlet.java:266)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
at org.apache.catalina.core.ApplicationDispatcher.invoke(ApplicationDispatcher.java:654)
at org.apache.catalina.core.ApplicationDispatcher.processRequest(ApplicationDispatcher.java:445)
at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:379)
at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:292)
at org.iitkgp.erp.service.filter.DigitalSignatureFilter.doFilter(DigitalSignatureFilter.java:65)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
how to solve it??????
pls help?
i am using netbeans 6.0,jax-rs 0.6.0.2(jsr311) apache tomcat of netbeans 6.0.
Posted by Somesh Ghosh on May 26, 2008 at 05:59 AM CEST #
Hi Somesh,
It appears you might have a jar mis-match, namely if you get a "the method getJspApplicationContext(ServletContext) is undefined for the type JspFactory" does it imply that the method is missing?
I am not sure of the interaction with Jersey and the JSP pages. Can you send more information e.g. zip'ing up a complete example, and sending it to:
mailto:users@jersey.dev.java.net
Thanks,
Paul.
Posted by Paul Sandoz on May 26, 2008 at 06:54 AM CEST #
sir,
I have checked that abstract method getJspApplicationContext(.) is present in JspFactory, but the application could not find this method. I can't understand why it is happening.
pls help.
thanks
Posted by Somesh Ghosh on May 27, 2008 at 11:15 PM CEST #
Hi Somesh,
I really do not know what is going on either :-( To help me help you i need you to send me some source code and instructions so that i can compile, run and reproduce.
Paul.
Posted by Paul Sandoz on May 28, 2008 at 02:13 AM CEST #
I have the same problem that Somesh has.
I have NB 6.1 with integrated Tomcat 6.0.16.
My webservices work great but I cannot run a jsp. I get the error:
The method getJspApplicationContext(ServletContext) is undefined for the type JspFactory
Now if I create a new web application without RESTful webservices, I have no problems.
One of the projects that gives me this error is the NetBeans bundled HelloWorld project.
Any ideas?
Posted by G F on May 30, 2008 at 01:54 PM CEST #
Hi Somesh, G F
The problem could be specific to the RESTful Web services support in NetBeans. I will discuss with the tooling team and get back to you.
Paul.
Posted by Paul Sandoz on June 03, 2008 at 05:04 AM CEST #
Sir,
I have an another problem.
I have created a RESTful application. Here i want a list of resources present in this application only using the name of the application . how can i get this?
Also, how can i get the physical application.wadl file?
from that file i can extract the resorces.
please give me some hints or clue?
thanking you,
Posted by Somesh Ghosh on June 09, 2008 at 09:27 AM CEST #
Somesh,
Can you please email your question to:
users@jersey.dev.java.net
As it is much easier to have a more relevant conversation. Plus others may be able to help you or benefit from the responses.
BTW on the NetBeans related issue: We found the problem; it was due to NB shipping a lower version of the JSP jar with the RESTful plugin. We are trying to get the fix (remove the jar) into the 6.1 patch 2 release that should be out by the end of June.
Thanks,
Paul.
Posted by Paul Sandoz on June 09, 2008 at 09:36 AM CEST #
Hi,
For testing I want to invoke this SpringServlet (using URLConnection), however it is not getting invoked, that is I cannot see corresponding URL's method getting executed. What care I should be taking (in terms of http method or type of input).
Also I have an html file as a welcome file in web.xml and it is also not getting displayed. Any idea??
Posted by tarap on August 07, 2008 at 04:53 AM CEST #
Hi Tarap,
Can you send an email to:
users@jersey.dev.java.net
describing your problem in more detail. If you can provide more information on your set up, preferably some example code and configuration files etc that would help myself and others to help you.
Thanks,
Paul.
Posted by Paul Sandoz on August 07, 2008 at 05:02 AM CEST #
Hi Paul,
Since JAXBContext objects are resource intensive, I want to create some JAXBContext objects for a particular scenario and save them in the data store which is maintained by JAXRS framework.
How can I access the JAXRS application store which holds the JAXBContext objects which are created in an application.
Posted by Thulasi on January 25, 2010 at 06:42 PM CET #
Hi Thulasi,
I think the best solution is for the application to declare to JAX-RS the JAXBContext to be used for a set of JAXB classes.
You can register an instance of ContextResolver<JAXBContext>.
For example:
@Provider
public class Foo implements ContextResolver<JAXBContext> {
private final JAXBContext jb = ...;
public JAXBContext getContext(Class<?> type) {
if (/* type is part of context */) {
return jb;
} else {
return null;
}
}
}
You register Foo (a provider class) as you would register any root resource classes.
Posted by Paul Sandoz on January 26, 2010 at 02:03 AM CET #
Paul,
Working with jersey 1.1.5 & json & gwt on the client.
When returning dates which have 0 milliseconds, the gwt parser crashes because the .000 milliseconds are dropped if a date has 0 millis.
Is there a way to specify the serialization/deserialization date format (we don't need the millis on the client)
Thanks
Tony
Posted by tony on June 01, 2010 at 12:55 PM CEST #
hey guyss..!!
I'm having some difficulties working with strings (C-style and object-oriented strings).
As this is something new for me, I'm not exactly familiar with the string functions as well..
Right now I have a question for which I was making a solution, but the code is missing something. Please help me out.
Question statement.
"Write a program that reads a whole paragraph (you can do that with a little common sense) from the user. Now prompt the user enters a word to be searched. Your program should read the search word and search for all the occurrences of that word in the entered text. You need to print the total number of occurrences of this word in the entered text."
My code.
#include<iostream>
#include<conio.h>
#include<string>
using namespace std;

int main()
{
    string my_str;
    cout << "Please enter your paragraph.." << endl;
    getline(cin, my_str);
    cout << endl << endl;

    cout << "please enter a word to be searched..!!" << endl;
    char x;
    cin >> x;
    my_str.find("x");
    cout << endl;
    cout << "The total number of" << x << "is: " << x;

    getch();
    return 0;
}
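For reference, a minimal sketch of one way to complete the task (a hypothetical fix, not the original poster's code): read the search word as a std::string and count matches with std::string::find, advancing past each hit:

#include <iostream>
#include <string>
using namespace std;

int main()
{
    string my_str;
    cout << "Please enter your paragraph.." << endl;
    getline(cin, my_str);

    string word;
    cout << "please enter a word to be searched..!!" << endl;
    cin >> word;

    int count = 0;
    // find() returns the position of the next match, or string::npos when there are no more
    for (string::size_type pos = my_str.find(word);
         pos != string::npos;
         pos = my_str.find(word, pos + word.length()))
    {
        ++count;
    }

    cout << "The total number of \"" << word << "\" is: " << count << endl;
    return 0;
}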
Created on 2017-01-13 04:47 by Anthony Sottile, last changed 2020-01-20 12:16 by methane.
PEP420 makes __init__.py files optional:
Though it seems without them, pkgutil.walk_packages does not function as desired:
Consider the following example:
```
$ tree foo
foo
├── bar
│ ├── baz.py
│ └── __init__.py
├── __init__.py
└── womp.py
```
And a test script:
```
# test.py
import pkgutil
import foo
for _, mod, _ in pkgutil.walk_packages(foo.__path__, foo.__name__ + '.'):
print(mod)
```
In both python2 and python3 I get the following output:
```
$ python2.7 test.py
foo.bar
foo.bar.baz
foo.womp
$ python3.5 test.py
foo.bar
foo.bar.baz
foo.womp
```
Removing the __init__.py files and only using python3, I get this:
```
$ find -name '__init__.*' -delete
$ python3.5 test.py
foo.bar
```
The modules are definitely importable:
```
$ python3.5 -c 'import foo.bar.baz'
$
```
Originally asked as a question on stackoverflow:
While it is rather trivial to implement the proposed functionality - all that's required here is to eliminate the check for __init__.py from pkgutil._iter_file_finder_modules - this would have undesired impacts on, e.g., pydoc.apropos:
This function would then recursively report *any* directory/subdirectory on sys.path, which is quite surely not what people want.
I think this is a fundamental problem with namespace packages: they are nice and flexible for specific imports, but they make it impossible to know whether a directory found on the filesystem is *intended* as a Python package or not.
> all that's required here is to eliminate the check for __init__.py from pkgutil._iter_file_finder_modules
Ok, I was exaggerating here. To do it right would require a more complex change, but that's all that's needed to get an estimate of the effect the real thing would have.
> PEP420 makes __init__.py files optional
This is almost wrong. PEP 420 added a new way for "namespace packages."
PEP 420 doesn't make the __init__.py file optional for regular packages.
(See)
Then, should pkgutil.walk_packages walk into all directories (e.g. node_modules)? I don't think so.
If the resolution here is that this is behaving as intended (which personally I disagree with), I think this issue should remain open as a documentation task - the docs should clearly state that this does not apply to PEP420 namespace packages.
I totally agree with Wolfgang:
> they make it impossible to know whether a directory found on the filesystem is *intended* as a Python package or not.
I think we shouldn't treat a normal directory as a namespace package until some portion of the directory is imported, or it is specified explicitly.
So walk_packages() should be used like:
walk_packages("/path/to/namespace", "namespace")
I already rejected a similar issue: #29642.
If you do not agree with me, please start a thread on the python-dev ML or discuss.python.org.
An anonymization tool for production databases
Project description
pynonymizer
pynonymizer is a universal tool for translating sensitive production database dumps into anonymized copies.
This can help you support GDPR/Data Protection in your organization without compromizing on quality testing data.
Why are anonymized databases important?
The primary source of information on how your database is used is your production database. In most situations, the production dataset is significantly larger than any development copy, and contains a wider range of data.
From time to time, it is prudent to run a new feature or stage a test against this dataset, rather than one that is artificially created by developers or by testing frameworks. Anonymized databases allow us to use the structures present in production, while stripping them of any personally identifiable data that would constitute a breach of privacy for end-users and subsequently a breach of GDPR.
With Anonymized databases, copies can be processed regularly, and distributed easily, leaving your developers and testers with a rich source of information on the volume and general makeup of the system in production. It can be used to run better staging environments, integration tests, and even simulate database migrations.
below is an excerpt from an anonymized database:
How does it work?
pynonymizer replaces personally identifiable data in your database with realistic pseudorandom data, from the Faker library or from other functions.
There are a wide variety of data types available which should suit the column in question, for example:
unique_email
company
file_path
[...]
For a full list of data generation strategies, see the docs on strategyfiles
Examples
You can see strategyfile examples for existing databases, such as the wordpress or adventureworks sample databases, in the examples folder.
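As an illustration only (the exact schema is defined by the strategyfile docs above; the table and column names here are made up), a strategyfile maps columns to generation strategies along these lines:

tables:
  users:
    columns:
      email: unique_email
      employer: company
      avatar: file_path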
Process outline
- Restore from dumpfile to temporary database.
- Anonymize temporary database with strategy.
- Dump resulting data to file.
- Drop temporary database.
If this workflow doesn't work for you, see process control to see if it can be adjusted to suit your needs.
Requirements
- Python >= 3.6
mysql
- mysql / mysqldump must be in $PATH
- Local or remote mysql >= 5.5
- Supported Inputs:
  - Plain SQL over stdout
  - Plain SQL file .sql
  - GZip-compressed SQL file .gz
- Supported Outputs:
  - Plain SQL over stdout
  - Plain SQL file .sql
  - GZip-compressed SQL file .gz
  - LZMA-compressed SQL file .xz

mssql
- Requires extra dependencies: install package pynonymizer[mssql]
- MSSQL >= 2008
- Due to backup/restore limitations, you must be running pynonymizer on the same server as the database engine.
- Supported Inputs:
  - Local backup file
- Supported Outputs:
  - Local backup file

postgres
- psql / pg_dump must be in $PATH
- Local or remote postgres server
- Supported Inputs:
  - Plain SQL over stdout
  - Plain SQL file .sql
  - GZip-compressed SQL file .gz
- Supported Outputs:
  - Plain SQL over stdout
  - Plain SQL file .sql
  - GZip-compressed SQL file .gz
  - LZMA-compressed SQL file .xz
Getting Started
Usage
CLI
- Write a strategyfile for your database
- Start Anonymizing!
usage: pynonymizer [-h] [--input INPUT] [--strategy STRATEGYFILE] [--output OUTPUT]
                   [--db-type DB_TYPE] [--db-host DB_HOST] [--db-port DB_PORT]
                   [--db-name DB_NAME] [--db-user DB_USER] [--db-password DB_PASSWORD]
                   [--fake-locale FAKE_LOCALE] [--start-at STEP]
                   [--skip-steps STEP [STEP ...]] [--stop-at STEP] [--seed-rows SEED_ROWS]
                   [--mssql-backup-compression] [--mysql-cmd-opts MYSQL_CMD_OPTS]
                   [--mysql-dump-opts MYSQL_DUMP_OPTS]
                   [--postgres-cmd-opts POSTGRES_CMD_OPTS]
                   [--postgres-dump-opts POSTGRES_DUMP_OPTS] [-v] [--verbose] [--dry-run]

A tool for writing better anonymization strategies for your production databases.

optional arguments:
  -h, --help            show this help message and exit
  --input INPUT, -i INPUT
                        The source dump filepath to read from. Use `-` for stdin. [$PYNONYMIZER_INPUT]
  --strategy STRATEGYFILE, -s STRATEGYFILE
                        A strategyfile to use during anonymization. [$PYNONYMIZER_STRATEGY]
  --output OUTPUT, -o OUTPUT
                        The destination filepath to write the dumped output to. Use `-` for stdout. [$PYNONYMIZER_OUTPUT]
  --db-type DB_TYPE, -t DB_TYPE
                        Type of database to interact with. More databases will be supported in future versions. default: mysql [$PYNONYMIZER_DB_TYPE]
  --db-host DB_HOST, -d DB_HOST
                        Database hostname or IP address. [$PYNONYMIZER_DB_HOST]
  --db-port DB_PORT, -P DB_PORT
                        Database port. Defaults to provider default. [$PYNONYMIZER_DB_PORT]
  --db-name DB_NAME, -n DB_NAME
                        Name of database to restore and anonymize in. If not provided, a unique name will be generated from the strategy name. This will be dropped at the end of the run. [$PYNONYMIZER_DB_NAME]
  --db-user DB_USER, -u DB_USER
                        Database credentials: username. [$PYNONYMIZER_DB_USER]
  --db-password DB_PASSWORD, -p DB_PASSWORD
                        Database credentials: password. Recommended: use environment variables to avoid exposing secrets in production environments. [$PYNONYMIZER_DB_PASSWORD]
  --fake-locale FAKE_LOCALE, -l FAKE_LOCALE
                        Locale setting to initialize fake data generation. Affects Names, addresses, formats, etc. [$PYNONYMIZER_FAKE_LOCALE]
  --start-at STEP       Choose a step to begin the process (inclusive). [$PYNONYMIZER_START_AT]
  --skip-steps STEP [STEP ...]
                        Choose one or more steps to skip. [$PYNONYMIZER_SKIP_STEPS]
  --stop-at STEP        Choose a step to stop at (inclusive). [$PYNONYMIZER_STOP_AT]
  --seed-rows SEED_ROWS
                        Specify a number of rows to populate the fake data table used during anonymization. [$PYNONYMIZER_SEED_ROWS]
  --mssql-backup-compression
                        [MSSQL] Use compression when backing up the database. [$PYNONYMIZER_MSSQL_BACKUP_COMPRESSION]
  --mysql-cmd-opts MYSQL_CMD_OPTS
                        [MYSQL] pass additional arguments to the restore process (advanced use only!). [$PYNONYMIZER_MYSQL_CMD_OPTS]
  --mysql-dump-opts MYSQL_DUMP_OPTS
                        [MYSQL] pass additional arguments to the dump process (advanced use only!). [$PYNONYMIZER_MYSQL_DUMP_OPTS]
  --postgres-cmd-opts POSTGRES_CMD_OPTS
                        [POSTGRES] pass additional arguments to the restore process (advanced use only!). [$PYNONYMIZER_POSTGRES_CMD_OPTS]
  --postgres-dump-opts POSTGRES_DUMP_OPTS
                        [POSTGRES] pass additional arguments to the dump process (advanced use only!). [$PYNONYMIZER_POSTGRES_DUMP_OPTS]
  -v, --version         show program's version number and exit
  --verbose             Increases the verbosity of the logging feature, to help when troubleshooting issues. [$PYNONYMIZER_VERBOSE]
  --dry-run             Instruct pynonymizer to skip all process steps. Useful for testing safely. [$PYNONYMIZER_DRY_RUN]
Package
Pynonymizer can also be invoked programmatically / from other python code. See the module entrypoint pynonymizer or pynonymizer/pynonymize.py
import pynonymizer

pynonymizer.run(
    input_path="./backup.sql",
    strategyfile_path="./strategy.yml",
    [...]
)
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
MPL3115A2 hangs intermittently
I’m using a WiPy 2 to gather data from a PySense board and a one-wire DS18X20 temperature sensor, send it to a data logging machine over UDP, deep sleep for a minute, and repeat. So far it’s run for up-to seven hours before it stops sending. This has happened twice. Both times I’ve been able to Telnet in and ctrl-C the program, and both times I’ve gotten the same traceback:
File "main.py", line 45, in <module> File "/flash/lib/MPL3115A2.py", line 66, in __init__ File "/flash/lib/MPL3115A2.py", line 77, in _read_status
So it seems as if _read_status() might be stuck in an infinite loop. Has anyone else run into similar problems? Could the MPL3115A2 need an occasional reset of some kind? My thought is to add some kind of explicit timeout to _read_status(), or use the watchdog timer. Suggestions welcome.
Cheers,
Tim
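(For what it's worth, a minimal sketch of the explicit-timeout idea; the i2c handle, address, register constant, and ready bit below are illustrative placeholders, not the actual MPL3115A2.py internals:)

import utime

def _read_status(self, timeout_ms=2000):
    deadline = utime.ticks_add(utime.ticks_ms(), timeout_ms)
    while True:
        status = self.i2c.readfrom_mem(SENSOR_ADDR, STATUS_REG, 1)[0]
        if status & DATA_READY_BIT:  # sensor has fresh data; stop polling
            return status
        if utime.ticks_diff(deadline, utime.ticks_ms()) <= 0:
            raise OSError('MPL3115A2 status poll timed out')
        utime.sleep_ms(10)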
hi @smbunn,
The code was not modified.
On second thought, the altitude is just a simple function of the pressure, assuming the same sea-level pressure. This is not OK, as the sea-level pressure varies in time and from place to place.
So I would recommend using just the pressure; if altitude is required, some arithmetic code has to be added to get the right altitude. You could use a weather forecast service to obtain the accurate, real-time sea-level pressure.
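(A sketch of the arithmetic in question: the standard barometric formula with constants from the ICAO standard atmosphere, where p0 is the real-time sea-level pressure from a weather service:)

def altitude_m(p, p0=101325.0):
    # p: station pressure in Pa; p0: current sea-level pressure in Pa
    return 44330.77 * (1.0 - (p / p0) ** 0.1902632)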
Has this problem been fixed? My code wants to call the MPL3115A2 twice, once in pressure mode and once in altitude mode, so it needs to be called repeatedly.

for count in range(2000):
    print("Count=", count)
so declaring it only once probably will not work?
@catalin Dear Catalin, the workaround indeed works fine. You should add it to the Pysense documentation to prevent other people from falling into the same trap.
However, I am still left with one concern. Could it be that the main while True: pass loop is consuming too much processing power? I have seen this on Linux systems using htop, and it was also reported on StackExchange.
Hence, my remaining questions are:
- How can I check MCU usage, as htop is not available?
- Should I add a utime.sleep(0.2) to the main while True: loop?
- When are we going to see uasyncio being supported by Pycom? Under my incentive, a couple of good souls removed the last obstacles for uasyncio adoption.
I've tested the assumption that the garbage collector is the problem by calling it every loop with gc.collect(). After about 3 minutes, it still blocked.
The problem seems to be the I2C module.
Meanwhile, I guess, you can use the workaround.
Hi guys,
Sorry for the delayed answer.
I was able to have the MPL3115A2 interrogation run for ~14 hours by simply creating the object once and periodically just calling the pressure() method. Would this solve your problem?
Probably, a better solution is to have this MPL3115A2 class made static (as in Java).
With the previous version, the problem could have been the garbage collector, as for each new instance, new memory (heap) is allocated.
Regarding I2C lines, they were designed for up to 400kHz baudrate.
Code is here:
import pycom
import pysense
import machine
from MPL3115A2 import MPL3115A2, PRESSURE
import utime
import sys

class Device:
    def __init__(self):
        self.iteration = 0
        self.p = MPL3115A2(board, mode=PRESSURE)  # create object just once
        self.sense(self)
        self.timer = machine.Timer.Alarm(self.sense, 3, periodic=True)

    def sense(self, device):
        # barometer mode, not raw, oversampling 128, minimum time 512 ms
        self.iteration += 1
        print('%f Pa | iteration %d' % (self.p.pressure(), self.iteration))
@catalin Lowering the I2C clock frequency can do wonders.
A frequency as low as 50kHz would probably do fine and increase reliability.
Also, keep the trace capacitance of SCL and SDA as low as possible.
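A sketch of what a slower bus could look like on Pycom firmware, assuming the sensor driver can be pointed at an externally created bus; the stock Pysense libraries create their own I2C object internally, so in practice this change may belong inside the library's init code:

from machine import I2C

# 50 kHz instead of the usual 400 kHz; slower edges are more tolerant
# of bus capacitance and marginal pull-ups.
i2c = I2C(0, I2C.MASTER, baudrate=50000)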
@catalin Does the Pysense employ I2C for communicating with the MPL3115A2?
If so, I am pretty sure we are dealing with an I2C problem.
The following reading material hints in this direction:
@catalin I do not know if this is related, but it might be helpful. A similar problem with an MPL3115A2 was reported a while back on a Raspberry Pi.
i2c i2c-1: transfer setup timed out was the error given by the Raspberry Pi.
Hi guys, I do reproduce the bug, I am trying to investigate.
I opened a bug report here:
I am experiencing exactly the same problem with the following short piece of code. Within a minute or two (it varies), the thing hangs without any error message.
Serial communication also breaks down, and a power cycle is required to regain communication with the LoPy/Pysense combo.
import pycom
import pysense
import machine
from MPL3115A2 import MPL3115A2, PRESSURE
import utime
import sys

board = pysense.Pysense()  # Pysense handle (assumed; not in the original snippet)

class Device:
    def __init__(self):
        self.iteration = 0
        self.sense(self)
        self.timer = machine.Timer.Alarm(self.sense, 3, periodic=True)

    def sense(self, device):
        # barometer mode, not raw, oversampling 128, minimum time 512 ms
        self.p = MPL3115A2(board, mode=PRESSURE).pressure()  # in Pa; new instance on every call
        self.iteration += 1
        print('%f Pa | iteration %d' % (self.p, self.iteration))
Following up … this has continued to be an issue for me, with my WiPy 2 consistently hanging within 2-8 hours of operation (I’m waking up from deep sleep once per minute to gather data and send it over HTTP). Since removing the MPL3115A2 code completely, it’s been running for 24 hours without a problem.
Cheers,
Tim
@catalin - thanks for looking into this! Here’s all of my sensor-related code (in case there’s some interaction). I’m re-running this code every time the machine boots, including after every deep sleep. FWIW, my impression was that, except for a few bits of information like machine.reset_cause(), returning from deep sleep is like booting from scratch … is that not the case?
Thanks in advance,
Tim
sensors = pysense.Pysense()
mpp = MPL3115A2.MPL3115A2(sensors, mode=MPL3115A2.PRESSURE)  # This is line 45, where it hangs
si = SI7006A20.SI7006A20(sensors)
lt = LTR329ALS01.LTR329ALS01(sensors)
ow = onewire.OneWire(machine.Pin("P10"))
external_temperature = onewire.DS18X20(ow)
external_temperature.start_conversion()
short_wl, long_wl = lt.light()
time.sleep(1.0)  # The onewire thermometer requires ~750ms to take a reading
record = {
    "battery": {"value": sensors.read_battery_voltage(), "units": "V"},
    "temperature/internal/1": {"value": mpp.temperature(), "units": "degC"},
    "pressure": {"value": mpp.pressure(), "units": "pascal"},
    "temperature/internal/2": {"value": si.temperature(), "units": "degC"},
    "humidity": si.humidity(),
    "light/ambient/short": short_wl,
    "light/ambient/long": long_wl,
    "temperature/water/1": {"value": external_temperature.read_temp_async(), "units": "degC"},
}
Hi @tshead, Could you post the portion of the Python code where you call functions from MPL3115A2.py? Are you re-initiating it (making a new instance) after each deep sleep?
https://forum.pycom.io/topic/2322/mpl3115a2-hangs-intermittently
#include <SoftwareSerial.h>

SoftwareSerial mySerial(2, 3); // RX, TX

void setup() {
  // Open serial communications and wait for port to open:
  Serial.begin(57600);
  // set the data rate for the SoftwareSerial port
  mySerial.begin(57600);
}

void loop() // run over and over
{
  if (mySerial.available())
    Serial.write(mySerial.read());
}
I plugged the TX pin into the Nano's TX1 pin and opened up the serial monitor.
It seems to me that this setup has the data coming straight out of the IMU, through the serial port, and straight to the serial monitor on my PC... without really any processing by the Arduino Nano... right?
The reason is that in the next step of the project, I will have to combine it with additional data from two analog input pins on the Nano; the combined data stream will then go out serially and wirelessly, using a Bluetooth Mate Gold, to be received by my PC, where Processing will take over... just trying to get there in baby steps.
Any ideas?
but when I try Serial.println(Serial.read()); it comes out as garbage
Serial.write((uint8_t) Serial.read());
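For anyone puzzled by the "garbage": Serial.println(Serial.read()) prints the decimal value of the byte (plus CR/LF), while Serial.write() forwards the raw byte unchanged. A minimal sketch of the difference, reusing the wiring from the code above:

#include <SoftwareSerial.h>

SoftwareSerial mySerial(2, 3); // RX, TX - same wiring as the sketch above

void setup() {
  Serial.begin(57600);
  mySerial.begin(57600);
}

void loop() {
  if (mySerial.available()) {
    int b = mySerial.read();    // read() returns -1 when nothing is available
    Serial.write((uint8_t) b);  // forwards the raw byte, e.g. 'A'
    // Serial.println(b);       // would print "65" instead of 'A'
  }
}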
What Nano are you referring to?
Don't set the hardware serial (Serial) and the software serial (mySerial) to the same baud rate - it will cause conflicts.
They are related to setting high baud rates in SoftwareSerial, though.
Looks like it has 3V and 5V outputs, and it can take hardware serial input from my sensor and transmit over USB.
http://forum.arduino.cc/index.php?topic=133238.msg1003458
Automated and dictionary attacks against login pages are a security threat that every IT professional is well aware of. There are many techniques that help address this problem, one of which is the CAPTCHA - an image that contains characters and/or numbers that presumably only humans can read; its value is then entered by the user manually. This helps filter out automated logins. However, this technique can be quite difficult to implement and also costly, because you have to generate images on the fly. Further, some software is designed to figure out the value in the image using technologies similar to OCR scanning. Although CAPTCHA may work most of the time, like I said, it is difficult, expensive, and does not work all the time - plus, it requires your user to enter yet another value, read from an already difficult-to-read image.
I began thinking about this problem and wanted to come up with a solution that...
Suddenly, it dawned upon me when I started thinking like a hacker: if I wanted to automatically try to log in using brute force, I would have to continuously generate different user ID and password combinations until I found the one that got me through - but what is common to every attempt? The keys! Let me explain: if the login page contains two text boxes, one named "userid" and the other "password", all I have to do is submit values to these fields - something like userid=John&password=cool - and keep changing the values "John" and "cool" until I find the right combination, and I will get in. The keys that are common in this scenario are "userid" and "password". What if these keep changing every time you make a submit attempt? You would never know which key to provide the value to, and that cripples the key-value combination attack altogether!
The basic idea in accomplishing this is to assign a different name to the userID text box and the password text box every time the page is loaded, whether by a first load or by a postback. To make sure that the keys (the names assigned to the userID textbox and password textbox) are unpredictable, I elected to use GUIDs. There are four parts to this technique.
Part 1: The UserIDKey and PwdKey private properties. (I use ViewState to store the assigned key instead of Session, so that if the user spawns another instance of the login page, each page has its own keys.)
private string UserIDKey
{
    get
    {
        if (ViewState["UserIDKey"] == null)
            ViewState["UserIDKey"] = Guid.NewGuid().ToString();
        return (string) ViewState["UserIDKey"];
    }
    set { ViewState["UserIDKey"] = value; }
}

private string PwdKey
{
    get
    {
        if (ViewState["PwdKey"] == null)
            ViewState["PwdKey"] = Guid.NewGuid().ToString();
        return (string) ViewState["PwdKey"];
    }
    set { ViewState["PwdKey"] = value; }
}
Part 2: Assign new names to the text boxes when the page is first loaded.
private void Page_Load(object sender, System.EventArgs e)
{
    if (!IsPostBack)
    {
        MakeFieldNamesSecret();
    }
}

private void MakeFieldNamesSecret()
{
    txtPwd.ID = PwdKey;
    txtUserID.ID = UserIDKey;
}
Part 3: Validation. When the Submit button is clicked, retrieve the values of the two text boxes to validate.
private void btnLogin_Click(object sender, System.EventArgs e)
{
    string userID = Request.Form[UserIDKey];
    string pwd = Request.Form[PwdKey];

    // You must provide your own validation
    if (userID == "John" && pwd == "cool")
        Server.Transfer("PostLoginPage.aspx");
    else
        lblErr.Text = "Invalid UserID or Password";
}
Part 4: Change the names of the text boxes on postback. This is what really prevents the key-value attack!
private void LoginPage_PreRender(object sender, System.EventArgs e)
{
    if (IsPostBack)
    {
        UserIDKey = null;
        PwdKey = null;
        MakeFieldNamesSecret();
    }
}
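One detail left implicit above: the PreRender handler has to be wired up somewhere. In the code-behind style of that era it might look like the following skeleton (the OnInit placement is an assumption; only the handler name comes from the article):

public class LoginPage : System.Web.UI.Page
{
    protected override void OnInit(System.EventArgs e)
    {
        base.OnInit(e);
        // Attach the handler that renames the text boxes before each render.
        this.PreRender += new System.EventHandler(this.LoginPage_PreRender);
    }

    // Page_Load, btnLogin_Click, LoginPage_PreRender, and the key
    // properties as shown in Parts 1-4 above.
}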
What I found very interesting is the magic of thinking outside the box. Most people trying to solve this problem work on making the input values more difficult to automate, but few, perhaps, have thought about changing the variable that carries the value. With this very simple technique, I think I have solved a real problem. What do you think?
First revision: January 5, 2005.
http://www.codeproject.com/KB/web-security/NoAutoLogin.aspx