On Tue, Jul 22, 2003 at 11:12:44PM -0500, Thomas Smith wrote:
> Hi,
> On Tuesday, July 22, 2003, at 07:03 PM, Adam Borowski wrote:
> > <religious><asbestos longjohns>
> > Just don't *dare* to let anyone remove /usr/bin/google or I'll kill
> > you, your dog and your friend's uncle's son's ex-roommate's
> > girlfriend's aunt's pet hamster.
> > </asbestos></religious>
> OK, how about we change the surfraw "rhyme" to accept similar command
> line arguments to the rhyme-the-package "rhyme" and register them as
> alternatives, with rhyme-the-package higher on the list? Note that I
> have not yet downloaded rhyme-the-package to see how well this will
> work...

Hi Thomas,

I have no problem with this (as the maintainer of rhyme (the package)).
I don't even necessarily need rhyme (the package) to have a higher
priority. I'm not 100% convinced that alternatives is the Right Way
though (even though I'll implement that if it's what we agree on in
the end).

How about this for a suggestion? rhyme (the package) keeps the rhyme
binary. surfraw renames the rhyme binary to (say) surfrhyme or
something. I arrange for rhyme the package to prominently display some
notice to the effect of "This rhyme command supplants the rhyme command
in surfraw. If you want the surfraw rhyme command then use the new name
of 'surfrhyme'". Obviously the language needs to be cleaned up and
thought about, as does the command name. There are problems with this
approach as well, obviously, and it still doesn't address the namespace
pollution: there are a lot more very "generic" commands there. Still,
both are solutions that I'm happy to implement as long as surfraw
starts being maintained actively again.

As far as the alioth project goes: well done, and thank you for getting
this up and running. If you post again when it's properly running, I'll
check out the CVS and might submit a few patches. Be aware that you
don't always get a confirmation mail from alioth to say the project has
been started... sometimes it just appears :) Have a look to see what
projects your account on alioth is registered as being associated with;
it might already be there :)

> I think that, since they have similar functions, this strategy should
> work... if there's some other program that wants to use W then that's
> less likely, but this should work for now.
>
> > Considering the number of people interested in surfraw, you most
> > likely already received a complete set of patches.
> Heh, you underestimate the laziness of people in general. go for it,
> man.

hehe, well, not necessarily laziness. I did take an hour or so to have
a look at the surfraw package a few days ago and started to fix some
trivial stuff. I was loath to do much before the question of where the
commands were going was settled though. As I said earlier, once you
have the package in publicly available CVS, I'll more than likely grab
it and submit patches properly.

Cheers, Stephen
Attachment:
pgpaTaLkMDCao.pgp
Description: PGP signature
Okay, getting closer!
Clayton Otey wrote:
> aclocal.m4 most definitely should not be empty. I would guess that you
> have multiple installations of automake, and they are not cooperating.
>
> First, I'd try running
> cd lib-src/sbsms
> aclocal
> automake
> autoconf
> ./configure
Success. The compile went quite a ways ... but ... now I appear to have a
yacc problem :)
make[3]: Entering directory
`/home/bob/src/sound/edit/audacity/lib-src/redland/raptor/src'
yacc -b parsedate -p raptor_parsedate_ -d -v ./parsedate.y
yacc: e - line 167 of "./parsedate.y", syntax error
%pure_parser
^
make[3]: *** [parsedate.c] Error 1
I appear to have a current yacc???
bob$ yacc -V
yacc - 1.9 20070509
The yacc I have appears to be byacc from the ubuntu distro.
Thanks guys!
--
**** Listen to my CD at ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **
WWW:
aclocal.m4 most definitely should not be empty. I would guess that you have
multiple installations of automake, and they are not cooperating.
First, I'd try running
cd lib-src/sbsms
aclocal
automake
autoconf
./configure
manually and see if this works, to see if it's a problem with make running
the wrong versions of things.
If that doesn't work, I'd do a 'which aclocal' to make sure you're running
the right installation. This problem is strange because regardless of which
version you're running, it should have installed the init.m4 and header.m4
files, unless you're running a really old version, or the installation was
incomplete somehow.
I wish I had more insightful suggestions, but it's hard to debug these kind
of problems when you're not sitting in front of a terminal on the system in
question.
Don't know if this helps ... but I installed a current automake after
getting an error trying to compile the cvs. After installing I did a
'make clean' and 'make'.
bob$ automake --version
automake (GNU automake) 1.10.1
and /usr/share/aclocal doesn't have the files header.m4 or init.m4.
However, they are present in /usr/share/aclocal-1.10.
Further, in lib-src/sbsms there is a file aclocal.m4. However, it is an
empty file.
Any suggestions as to what to try next?
--
**** Listen to my CD at ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **
WWW:
Gale Andrews wrote:
> | From Martyn Shaw <martynshaw99@...>
> | Thu, 18 Dec 2008 23:20:59 +0000
> | Subject: [Audacity-devel] Bug Fixing - much progress!
>> Gale Andrews wrote:
>>> | From Leland <leland@...>
>>> | Thu, 18 Dec 2008 01:08:54 -0600
>>> | Subject: [Audacity-devel] Bug Fixing - much progress!
>>>>> Martyn wrote:
>>>>> one to be verified on non-windows (Muting/soloing specific stereo
>>>>> tracks when exporting)
>>>> What should be checked? If I'm understanding it correctly, if you have
>>>> 2 tracks and one is muted and you export, then ONLY the unmuted track
>>>> gets exported?
>>> That's right.
>>>
>>>> Some things I've tried...all with 2 selected mono tracks....
>>> There are a lot of permutations, and some issues seemed to require
>>> more than two tracks to trigger, and were highly unpredictable
>>> once you fiddled with them mute/solo buttons. Drove me mad.
>>>
>>> The issue as reported was with *stereo* tracks. I presume Martyn
>>> checked on stereo tracks but will try to take a quick look as well to
>>> be completely sure.
>> The problem was with stereo (linked) tracks. When you clicked 'mute'
>> only the first track of the pair had its 'mute' flag set/cleared.
>> Similarly with 'solo'. I tested with a stereo chirp, faded in, a
>> stereo DTMF Tones, faded out, mono noise and so on. That way you can
>> see from the waveforms that it is combining the correct tracks.
>
> Thanks Martyn for that helpful extra info. I had to repurchase several CDs I
> had ripped then edited in Audacity, and sold the CDs, not knowing it had
> exported the files with a channel missing. So forgive my paranoia to check
> it's completely right :=)
>
>
>>> Here on Windows XP ANSI Release, if I export a muted mono track
>>> (the only one on screen), it's a null 44 byte file, and if I drag it back
>>> in, Audacity crashes. So perhaps we need more than a message.
>>> Either create a valid silent file after a prompt, or forbid it altogether
>>> (better?)
>> I tried this on Unicode-Debug and Release. I too get a 44 byte file
>> but reading it in doesn't cause a crash on either version. I checked
>> out the file against the WAV format spec, and the
>> file looks byte for byte perfect to me. What differences do we have Gale?
>
> I tried Unicode Release from a few days ago.
>
> * Imported stereo 4 minute WAV 44100 Hz 16-bit at 32-bit quality.
> * Tracks > Stereo Track to Mono
> * Depress Mute
> * File > Export > WAV (Microsoft) 16 bit PCM, 44100 Hz rate
> * No progress dialogue (that you can see)
> * Exported file is 66 bytes: crash.wav
>
>
> What happens now is as follows. If I drag the file in, nothing happens
> and I get a "doing" if I try to click on a menu or anywhere else. If I
> task switch away (ALT + TAB), Audacity disappears from Task Switcher
> window. It still has a tab in Taskbar which makes the program reappear
> but there is nothing you can do with it except force quit.
>
> If I File > Import the file, VS pops up offering to debug the crash:
> "unhandled win32 exception [3668]"
>
> Repeated with another similar file and get a 44 byte file on export this time
> (?) which also crashes.
>
> Now generate a 3 second mono DTMF tone, mute it, export as WAV as above,
> 44 byte file crashes Audacity on import.
I tried all those things and no crashes here. Your 66 byte file looks
fine too, just a bit of metadata different from the 44 byte ones.
I guess it's another difference?
Martyn
> Gale
Decided I'd better upgrade from 1.3.5 :)
First off I'm using Ubuntu Hardy and I did manage to compile 1.3.5
without problems (as I recall).
On the 1.3.6 tarball I get
make <snip>
make -C portsmf
make[2]: Entering directory
`/home/bob/src/sound/edit/audacity-src-1.3.6/lib-src/portsmf'
/bin/bash ./config.status --recheck
running CONFIG_SHELL=/bin/bash /bin/bash ./configure
--prefix=/usr/local/ --with-wx-config=/usr/bin/wx-config
--cache-file=/dev/null --srcdir= --no-create --no-recursion
./configure: line 1734: syntax error near unexpected token `-Wall'
./configure: line 1734: `AM_INIT_AUTOMAKE(-Wall foreign)'
make[2]: *** [config.status] Error 2
make[2]: Leaving directory
`/home/bob/src/sound/edit/audacity-src-1.3.6/lib-src/portsmf'
make[1]: *** [portSMF] Error 2
make[1]: Leaving directory
`/home/bob/src/sound/edit/audacity-src-1.3.6/lib-src'
make: *** [audacity] Error 2
So, thinking myself quite clever I grabbed the latest CVS and (installed
in a fresh directory), I do a ./configure and make ... hey, at least it
is a different error (??):
make
<snip>
make -C sbsms
make[2]: Entering directory
`/home/bob/src/sound/edit/audacity/lib-src/sbsms'
/bin/bash ./config.status --recheck
running CONFIG_SHELL=/bin/bash /bin/bash ./configure
--disable-option-checking --prefix=/usr/local/
--with-wx-config=/usr/bin/wx-config --disable-sqlite --disable-flac
--disable-alsa RAPTOR_CFLAGS=-I../../redland/raptor/src RAPTOR_LIBS=-L..
-L../.. -lraptor REDLAND_CFLAGS=-I../../redland/raptor/src
-I../../redland/rasqal/src -I../../redland/librdf REDLAND_LIBS=-L..
-L../.. -lrdf -lraptor -lrasqal
--with-pa-include=../portaudio-v19/include --cache-file=/dev/null
--srcdir=. --no-create --no-recursion
checking build system type... i686-pc-linux-gnu
checking host system type... i686-pc-linux-gnu
checking target system type... i686-pc-linux-gnu
./configure: line 1849: syntax error near unexpected token `src/config.h'
./configure: line 1849: `AM_CONFIG_HEADER(src/config.h)'
make[2]: *** [config.status] Error 2
make[2]: Leaving directory `/home/bob/src/sound/edit/audacity/lib-src/sbsms'
make[1]: *** [sbsms-recursive] Error 2
make[1]: Leaving directory `/home/bob/src/sound/edit/audacity/lib-src'
make: *** [audacity] Error 2
Apparently I'm missing something on my system???
Thanks,
--
**** Listen to my CD at ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **
WWW:
On Wed, 2008-12-17 at 20:53 +0000, Gale Andrews wrote:
> | From Vaughan Johnson <vaughan@...>
> | Tue, 16 Dec 2008 20:49:44 -0800
> | Subject: [Audacity-devel] Release Checklist: Relegating from / promoting to P2
> > Gale (Audacity Team) wrote:
> > > * Old projects open incorrectly. Reported by Monty. With CVS HEAD, sample
> > > project created in 1.1.0 now correctly identifying the real orphans (there
> > > are 10), but the waveform is still opening as blank.
> > >
> > > Can certainly be dropped from P2, if that's the consensus. James' view back
> > > in August last year was "really we should go all the way back to 1.0 (i.e
> > > pre XML format) in our tests... we should start talking about automated tests...
> >
> > Imo, 1.1.0 is ancient history. Has any user mentioned it in the last year?
> >
> > All previous releases are available on SourceForge and I think we should
> > not be chained to anything longer than ~2 years backward compatibility,
> > so definitely in the 1.2.x series, and not 1.1.x or earlier. Users can
> > always export tracks and import them into newer versions.
>
> For most practical purposes I guess I'd have to agree. Also in an ideal
> world you should not be storing audio as Audacity projects for years
> (though I do, for lack of time). Note however you can't export envelope
> points (supported in 1.0.0 and 1.1.0).
1.2.x has been stable for over 4 years. I think that is easily far
enough back to be reaching with file format support. Given that some of
the early 1.2.x had issues with each other's projects, the fact they
import into 1.4.0 is enough. We should just put a note in the manual
that you shouldn't try and jump from pre 1.2.0 to current versions of
audacity, and should either go through 1.2.6 or via PCM files.
Richard | http://sourceforge.net/p/audacity/mailman/audacity-devel/?viewmonth=200812&viewday=20 | CC-MAIN-2015-18 | refinedweb | 1,729 | 60.01 |
math.factorial() in Python with examples
We know that Python is a high-level programming language that provides multiple modules to make coding easy and efficient. One such module is 'math', which provides numerous functions such as factorial(), sqrt(), ceil(), fabs() etc. In this tutorial, we will learn how to use the factorial() function defined in the math module of Python with the help of some examples. For this tutorial we will be using Python 3, so make sure you have Python 3 installed on your system.
Importing the math module in Python
1. This is step 1 of the process. We need to import the math module into our code. For this, type:
(Example A)
import math
We can also import math module as follows:
(Example B)
from math import *
Passing the value to factorial() function
2. In this step, we pass the desired values to our factorial() function. For instance, we we want to calculate the factorial of 7, the complete code will be as follows:
import math
print(math.factorial(7))
Or if we have used example B, we can simply call the factorial() function as follows:
from math import *
print(factorial(7))
In both the examples, output will be:
5040
Example to take a number as an input and print its factorial
from math import *
a = int(input("Enter the number: "))
print(factorial(a))
Output:
Enter the number: 4
24
The above code asks the user to enter a number and calculates its factorial. However, the factorial function does not work with decimal values. For example, if we re-run the above code and provide a decimal value as input:
Output:
Enter the number: 5.6
Traceback (most recent call last):
  File "a.py", line 2, in <module>
    a = int(input("Enter the number: "))
ValueError: invalid literal for int() with base 10: '5.6'
We can see that the code raises an error. Strictly speaking, this traceback comes from int() failing to parse '5.6' rather than from factorial() itself, but factorial() also rejects non-integral input with a ValueError of its own. Hence, factorial() is strictly for integral values greater than or equal to zero.
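The same restriction applies to negative numbers. Assuming the snippet is saved as b.py, a run looks like this (the error message below is what current CPython builds print; treat the exact wording as indicative):

from math import *
print(factorial(-3))

Output:

Traceback (most recent call last):
  File "b.py", line 2, in <module>
    print(factorial(-3))
ValueError: factorial() not defined for negative values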
NOTE: The factorial of zero is 1.
The classic plugin for Rails is acts_as_solr that allows Rails ActiveRecord objects to be transparently stored in a Solr index. Other popular options include Solr Flare and rsolr. An interesting project is Blacklight, a tool oriented towards libraries putting their catalogs online. While it attempts to meet the needs of a specific market, it also contains many examples of great Ruby techniques to leverage in your own projects.
You will need to turn on the Ruby writer type in solrconfig.xml:
<queryResponseWriter name="ruby"
class="org.apache.solr.request.RubyResponseWriter"/>
The Ruby hash structure has some tweaks to fit Ruby, such as translating nulls to nils, using single quotes for escaping content, and the Ruby => operator to separate key-value pairs in maps. Adding a wt=ruby parameter to a standard search request returns results in a Ruby hash structure like this:
{
'responseHeader'=>{
'status'=>0,
'QTime'=>1,
'params'=>{
'wt'=>'ruby',
'indent'=>'on',
'rows'=>'1',
'start'=>'0',
'q'=>'Pete Moutso'}},
'response'=>{'numFound'=>523,'start'=>0,'docs'=>[
{
'a_name'=>'Pete Moutso',
'a_type'=>'1',
'id'=>'Artist:371203',
'type'=>'Artist'}]
}}
acts_as_solr
A very common naming pattern for plugins in Rails that manipulate the database backed object model is to name them acts_as_X. For example, the very popular acts_as_list plugin for Rails allows you to add list semantics, like first, last, move_next to an unordered collection of items. In the same manner, acts_as_solr takes ActiveRecord model objects and transparently indexes them in Solr. This allows you to do fuzzy queries that are backed by Solr searches, but still work with your normal ActiveRecord objects. Let's go ahead and build a small Rails application that we'll call MyFaves that both allows you to store your favorite MusicBrainz artists in a relational model and allows you to search for them using Solr.
acts_as_solr comes bundled with a full copy of Solr 1.3 as part of the plugin, which you can easily start by running rake solr:start. Typically, you are starting with a relational database already stuffed with content that you want to make searchable. However, in our case we already have a fully populated index available in /examples, and we are actually going to take the basic artist information out of the mbartists index of Solr and populate our local myfaves database with it. We'll then fire up the version of Solr shipped with acts_as_solr, and see how acts_as_solr manages the lifecycle of ActiveRecord objects to keep Solr's indexed content in sync with the content stored in the relational database. Don't worry, we'll take it step by step! The completed application is in /examples/8/myfaves for you to refer to.
Setting up MyFaves project
We'll start with the standard plumbing to get a Rails application set up with our basic data model:
>>rails myfaves
>>cd myfaves
>>./script/generate scaffold artist name:string group_type:string
release_date:datetime image_url:string
>>rake db:migrate
This generates a basic application backed by an SQLite database. Now we need to install the acts_as_solr plugin.
acts_as_solr has gone through a number of revisions, from the original code base done by Erik Hatcher and posted to the solr-user mailing list in August of 2006, which was then extended by Thiago Jackiw and hosted on Rubyforge. Today the best version of acts_as_solr is hosted on GitHub by Mathias Meyer at github.com/mattmatt/acts_as_solr/tree/master. The constant migration from one site to another, leading to multiple possible 'best' versions of a plugin, is unfortunately a very common problem with Rails plugins and projects, though most are settling on either RubyForge.org or GitHub.com.
In order to install the plugin, run:
>>script/plugin install git://github.com/mattmatt/acts_as_solr.git
We'll also be working with roughly 399,000 artists, so obviously we'll need some page pagination to manage that list, otherwise pulling up the artists /index listing page will timeout:
>>script/plugin install git://github.com/mislav/will_paginate.git
Edit the ./app/controllers/artists_controller.rb file, and replace in the index method the call to @artists = Artist.find(:all) with:
@artists = Artist.paginate :page => params[:page], :order =>
'created_at DESC'
Also add to ./app/views/artists/index.html.erb a call to the view helper to generate the page links:
<%= will_paginate @artists %>
Start the application using ./script/server, and visit the page. You should see an empty listing page for all of the artists. Now that we know the basics are working, let's go ahead and actually leverage Solr.
Populating MyFaves relational database from Solr
Step one will be to import data into our relational database from the mbartists Solr index. Add the following code to ./app/models/artist.rb:
class Artist < ActiveRecord::Base
acts_as_solr :fields => [:name, :group_type, :release_date]
end
The :fields array of hashes maps the attributes of the Artist ActiveRecord object to the artist fields in Solr's schema.xml. Because acts_as_solr is designed to store data in Solr that is mastered in your data model, it needs a way of distinguishing among various types of data model objects. For example, if we wanted to store information about our User model object in Solr in addition to the Artist object then we need to provide a type_field to separate the Solr documents for the artist with the primary key of 5 from the user with the primary key of 5. Fortunately the mbartists schema has a field named type that stores the value Artist, which maps directly to our ActiveRecord class name of Artist and we are able to use that instead of the default acts_as_solr type field in Solr named type_s.
There is a simple script called populate.rb at the root of /examples/8/myfaves that you can run that will copy the artist data from the existing Solr mbartists index into the MyFaves database:
>>ruby populate.rb
populate.rb is a great example of the types of scripts you may need to develop to transfer data into and out of Solr. Most scripts typically work with some sort of batch size of records that are pulled from one system and then inserted into Solr. The larger the batch size, the more efficient the pulling and processing of data typically is at the cost of more memory being consumed, and the slower the commit and optimize operations are. When you run the populate.rb script, play with the batch size parameter to get a sense of resource consumption in your environment. Try a batch size of 10 versus 10000 to see the changes. The parameters for populate.rb are available at the top of the script:
MBARTISTS_SOLR_URL = 'http://localhost:8983/solr/mbartists'
BATCH_SIZE = 1500
MAX_RECORDS = 100000 # the maximum number of records to load,
or nil for all
There are roughly 399,000 artists in the mbartists index, so if you are impatient, then you can set MAX_RECORDS to a more reasonable number.
The process for connecting to Solr is very simple with a hash of parameters that are passed as part of the GET request. We use the magic query value of *:* to find all of the artists in the index and then iterate through the results using the start parameter:
connection = Solr::Connection.new(MBARTISTS_SOLR_URL)
solr_data = connection.send(Solr::Request::Standard.new({
:query => '*:*',
:rows=> BATCH_SIZE,
:start => offset,
:field_list =>['*','score']
}))
In order to create our new Artist model objects, we just iterate through the results of solr_data. If solr_data is nil, then we exit out of the script knowing that we've run out of results. However, we do have to do some parsing translation in order to preserve our unique identifiers between Solr and the database. In our MusicBrainz Solr schema, the ID field functions as the primary key and looks like Artist:11650 for The Smashing Pumpkins. In the database, in order to sync the two, we need to insert the Artist with the ID of 11650. We wrap the insert statement a.save! in a begin/rescue/end structure so that if we've already inserted an artist with a primary key, then the script continues. This just allows us to run the populate script multiple times:
solr_data.hits.each do |doc|
id = doc["id"]
id = id[7..(id.length)]
a = Artist.new(:name => doc["a_name"], :group_type => a["a_type"],
:release_date => doc["a_release_date_latest"])
a.id = id
begin
a.save!
rescue ActiveRecord::StatementInvalid => ar_si
raise ar_si unless ar_si.to_s.include?("PRIMARY KEY must be
unique") #sink duplicates
end
end
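For orientation, here is a condensed sketch of how populate.rb's outer loop strings these pieces together. The helper name create_artist_from is invented for brevity (the real script inlines the Artist-building code shown above), but the pagination idiom of advancing :start by BATCH_SIZE until Solr returns no more hits is exactly what the script does:

offset = 0
connection = Solr::Connection.new(MBARTISTS_SOLR_URL)
loop do
  solr_data = connection.send(Solr::Request::Standard.new({
    :query => '*:*',
    :rows => BATCH_SIZE,
    :start => offset,
    :field_list => ['*', 'score']
  }))
  break if solr_data.nil? || solr_data.hits.empty?
  solr_data.hits.each do |doc|
    create_artist_from(doc) # builds and saves an Artist, as shown above
  end
  offset += BATCH_SIZE
  break if MAX_RECORDS && offset >= MAX_RECORDS # honor the optional cap
end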
Now that we've transferred the data out of our mbartists index and used acts_as_solr according to the various conventions that it expects, we'll change from using the mbartists Solr instance to the version of Solr shipped with acts_as_solr.
Solr-related configuration information is available in ./myfaves/config/solr.yml. Ensure that the default development URL doesn't conflict with any existing Solr instances you may be running:
development:
  url: http://127.0.0.1:8982/solr
Build Solr indexes from relational database
Now we are ready to trigger a full index of the data in the relational database into Solr. acts_as_solr provides a very convenient rake task for this with a variety of parameters that you can learn about by running rake -D solr:reindex. We'll specify to work with a batch size of 1500 artists at a time:
>>rake solr:start
>>% rake solr:reindex BATCH=1500
(in /examples/8/myfaves)
Clearing index for Artist...
Rebuilding index for Artist...
Optimizing...
This drastic simplification of configuration in the Artist model object is because we are using a Solr schema that is designed to leverage the Convention over Configuration ideas of Rails. Some of the conventions that are established by acts_as_solr and met by Solr are:
- Primary key field for model object in Solr is always called pk_i.
- Type field that stores the disambiguating class name of the model object is called type_s.
- Heavy use of the dynamic field support in Solr. The data type of ActiveRecord model objects is based on the database column type. Therefore, when acts_as_solr indexes a model object, it sends a document to Solr with the various suffixes to leverage the dynamic column creation. In /examples/8/myfaves/vendor/plugins/acts_as_solr/solr/solr/conf/schema.xml, the only fields defined outside of the management fields are dynamic fields:
<dynamicField name="*_t" type="text" indexed="true"
stored="false"/>
- The default search field is called text. And all of the fields ending in _t are copied into the text search field.
- Fields to facet on are named _facet and copied into the text search field as well.
The document that gets sent to Solr for our Artist records creates the dynamic fields name_t, group_type_s and release_date_d, for a text, string, and date field respectively. You can see the list of dynamic fields generated through the schema browser in Solr's admin interface.
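To make these conventions concrete, here is approximately what the update message posted to Solr looks like for the Smash Mouth record from the console session below. The field values are grounded in that session, but the date formatting and field ordering here are illustrative:

<add>
  <doc>
    <field name="id">Artist:364</field>
    <field name="pk_i">364</field>
    <field name="type_s">Artist</field>
    <field name="name_t">Smash Mouth</field>
    <field name="group_type_s">1</field>
    <field name="release_date_d">2006-09-19T04:00:00Z</field>
  </doc>
</add>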
Now we are ready to perform some searches. acts_as_solr adds some new methods such as find_by_solr() that lets us find ActiveRecord model objects by sending a query to Solr. Here we find the group Smash Mouth by searching for matches to the word smashing:
% ./script/console
>> artists = Artist.find_by_solr("smashing")
=> #<ActsAsSolr::SearchResults:0x224889c @solr_data={:total=>9,
:docs=>[#<Artist id: 364, name: "Smash Mouth"...
>> artists.docs.first
=> #<Artist id: 364, name: "Smash Mouth", group_type: 1,
release_date: "2006-09-19 04:00:00", created_at: "2009-04-17
18:02:37", updated_at: "2009-04-17 18:02:37">
Let's also verify that acts_as_solr is managing the full lifecycle of our objects. Assuming Susan Boyle isn't yet entered as an artist, let's go ahead and create her:
>> Artist.find_by_solr("Susan Boyle")
=> #<ActsAsSolr::SearchResults:0x26ee298 @solr_data={:total=>0,
:docs=>[]}>
>> susan = Artist.create(:name => "Susan Boyle", :group_type => 1,
:release_date => Date.new)
=> #<Artist id: 548200, name: "Susan Boyle", group_type: 1,
release_date: "-4712-01-01 05:00:00", created_at: "2009-04-21
13:11:09", updated_at: "2009-04-21 13:11:09">
Check the log output from your Solr running on port 8982, and you should also have seen an update query triggered by the insert of the new Susan Boyle record:
INFO: [] webapp=/solr path=/update params={} status=0 QTime=24
Now, if we delete Susan's record from our database:
>> susan.destroy
=> #<Artist id: 548200, name: "Susan Boyle", group_type: 1,
release_date: "-4712-01-01 05:00:00", created_at: "2009-04-21
13:11:09", updated_at: "2009-04-21 13:11:09">
=> #<Artist id: 548200, name: "Susan Boyle", group_type: 1,
release_date: "-4712-01-01 05:00:00", created_at: "2009-04-21
13:11:09", updated_at: "2009-04-21 13:11:09">
Then there should be another corresponding update issued to Solr to remove the document:
INFO: [] webapp=/solr path=/update params={} status=0 QTime=57
You can verify this by doing a search for Susan Boyle directly, which should now return no rows.
Complete MyFaves web site
Now, let's go ahead and put in the rest of the logic for using our Solr-ized model objects to simplify finding our favorite artists. We'll store the list of favorite artists in the browser's session space for convenience. If you are following along with your own generated version of MyFaves application, then the remaining files you'll want to copy over from /examples/8/myfaves are as follows:
- ./app/controller/myfaves_controller.rb contains the controller logic for picking your favorite artists.
- ./app/views/myfaves/ contains the display files for picking and showing the artists.
- ./app/views/layouts/myfaves.html.erb is the layout for the MyFaves views. We use the Autocomplete widget again, so this layout embeds the appropriate JavaScript and CSS files.
- ./public/javascripts/blackbirdjs/ contains everything required to use the Blackbird logging library.
- ./public/stylesheets/jquery.autocomplete.css and ./public/stylesheets/indicator.gif are stored locally in order to fix pathing issues with the indicator.gif showing up when the autocompletion search is running.
The only other edits you should need to make are:
- Edit ./config/routes.rb by adding map.resources :myfaves and map.root :controller => "myfaves" (a combined sketch of the resulting file follows this list).
- Delete ./public/index.html to use the new root route.
- Copy the index method out of ./app/controllers/artists_controller.rb, because we want the index method to respond with both HTML and JSON response types.
- Run rake db:sessions:create to generate a sessions table, then rake db:migrate to update the database with the new sessions table. Edit ./config/environment.rb and add config.action_controller.session_store = :active_record_store. As we are storing Artist model objects in our session, we need to store them in the database versus in a cookie for space reasons.
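Taken together, the routing edits from the list above produce a config/routes.rb along these lines (the artists resource shown here was already added by the scaffold generator):

ActionController::Routing::Routes.draw do |map|
  map.resources :myfaves
  map.resources :artists
  map.root :controller => "myfaves"
end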
You should now be able to run ./script/server and browse to http://localhost:3000/. You will be prompted to enter an artist's name to search for. If you don't receive any results, then make sure you have started Solr using rake solr:start. Also, if you have only loaded a subset of the full 399,000 artists, then your choices may be limited. You can load all of the artists through the populate.rb script and then run rake solr:reindex, though it will take a long time. Something good to do just before you head out for lunch or home for the evening!
If you look at ./app/views/myfaves/index.rhtml, then you can see the jQuery autocomplete call is a bit different:
$("#artist_name").autocomplete( '/artists.json?callback=?', {
The URL we are hitting is /artists.json, with the .json suffix telling Rails that we want JSON data back instead of normal HTML. If we ended the URL with .xml, then we would have received XML formatted data about the artists. We provide a slightly different parameter to Rails to specify the JSONP callback to use. Unlike the previous example, where we used json.wrf, which is Solr's parameter name for the callback method to call, we use the more standard parameter name callback. We changed the ArtistController index method to handle the autocomplete widgets data needs through JSONP. If there is a q parameter, then we know the request was from the autocomplete widget, and we ask Solr for the @artists to respond with. Later on, we render @artists into JSON objects, returning only the name and id attributes to keep the payload small. We also specify that the JSONP callback method is what was passed when using the callback parameter:
def index
if params[:q]
@artists = Artist.find_by_solr(params[:q], :limit =>
params[:limit]).docs
else
@artists = Artist.paginate :page => params[:page], :order =>
'created_at DESC'
end
respond_to do |format|
format.html # index.html.erb
format.xml { render :xml => @artists }
format.json { render :json => @artists.to_json(:only => [:name,
:id]), :callback => params[:callback] }
end
end
At the end of all of this, you should have a nice interface for quickly picking artists.
When you are selecting acts_as_solr as your integration method, you are implicitly agreeing to the various conventions established for indexing data into Solr. acts_as_solr is a wonderful solution if you are indexing just a few unrelated models and don't have multiple data sources feeding your Solr indexes. While acts_as_solr has evolved to support more complex solutions (for example, by adding faceting support or the ability to perform more complex mappings with custom logic), it has its limits.
If you have a very complex data model with lots of inter-relationships that do not more or less map one-to-one with what you'd expect from search results, then you may find yourself running into edge cases that acts_as_solr doesn't support cleanly—especially if you are doing searches against specific fields in Solr versus the default text field. However, if your requirement is to quickly get your ActiveRecord model objects searchable, then acts_as_solr can't be beat!
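As a concrete taste of those field-specific searches, the query string is passed through to Solr, so Lucene field syntax against the dynamic fields works. This one-liner is hypothetical, and exactly how the plugin rewrites the query around its internal type filter varies between acts_as_solr versions:

Artist.find_by_solr("name_t:smash AND group_type_s:1")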
Blacklight OPAC
Blacklight is an open source Online Public Access Catalog (OPAC) that demonstrates the power of a highly configurable Ruby on Rails frontend paired with Solr. OPACs are the modern web enabled version of the classic card catalog that allow libraries to easily put their collections online. Blacklight supports parsing of various standard library catalog storage formats including MARC records and TEI XML format. Blacklight 2.0 was released in March of 2009 as a Rails Engine plugin. Rails Engine plugins allow users to integrate the rich functionality of the plugin, while keeping the plugin related code and assets, such as JavaScript, CSS and images, separate from the hosting application, thus facilitating upgrades to the Blacklight Engine. You may find that Blacklight provides an excellent starting point for your own Solr/Ruby on Rails development.
Let's go ahead and index information from MusicBrainz.org into Blacklight, just to see how easy it is. Please refer to the sample application in /examples/8/blacklightopac/blacklight/. The Blacklight project releases frequent updates, so you should also refer to the main Blacklight web site for the latest version.
Almost all of the dependencies are included in the blacklight sample application. You will need to install a couple of gems:
>>sudo gem install curb
>>sudo gem install bcrypt-ruby
Indexing MusicBrainz data
Blacklight builds on top of the rsolr library for communicating back and forth with the Solr server and adds some concepts around mapping data into Solr. Unlike acts_as_solr, Blacklight doesn't require the source data to be in a database. Instead you build a custom Mapper to fetch the data for Blacklight.
Blacklight requires some synchronization between the Solr and Ruby on Rails sides to make things work. Blacklight expects a search handler called search to be configured, while specifying which schema fields are facets and which are just straight fields of data to be returned. We are going to index various artists and their music releases from the MusicBrainz.org site, while creating facets for the languages, scripts, and types of releases. For example, The Dave Matthews Band's album Under the Table and Dreaming is in English, using the standard Latin script for the album notes, and is an Album. We are going to be indexing many artists from non-Western countries. Fortunately, Solr and Blacklight support alternative character sets such as Cyrillic, Kanji, and Chinese characters. You can see that we are still using conventions for how we name the schema fields, with _t signifying text, and _facet signifying a field for faceting on:
<requestHandler name="search" class="solr.SearchHandler" >
<lst name="defaults">
<str name="fl">id, format_code_t, language_facet, script_facet,
type_facet, releases_t, title_t, score</str>
<str name="facet">on</str>
<str name="facet.mincount">1</str>
<str name="facet.limit">10</str>
<str name="facet.field">language_facet</str>
<str name="facet.field">script_facet</str>
<str name="facet.field">type_facet</str>
</lst>
</requestHandler>
We also need to tell Blacklight through the ./config/solr.yml which facets and fields to display in the UI. We are using the field title_t to store the artist's name:
facet_fields:
- type_facet
- language_facet
- script_facet
index_view_fields:
- title_t
- language_facet
- script_facet
- type_facet
- releases_t
One of the nice features about Blacklight is that it provides an architectural pattern for mapping information from any data source into Solr that you can mimic for your own use. We added ./lib/tasks/brainz.rake to give us the ability to load the information from MusicBrainz by running a simple Rake task: rake app:index: brainz. The Rake task is defined in ./lib/tasks/brainz.rake. The core of the task instantiates a BrainzMapper class (that we developed) that provides a collection of documents related to Artists and their music Releases for Solr to index. In order to reduce memory usage, we index artists alphabetically, while committing the results to Solr periodically:
solr = Blacklight.solr
mapper = BrainzMapper.new
('A' .. 'Z').each do |char|
mapper.from_brainz("#{char}*") do |doc,index|
puts "#{index} -- adding doc w/id : #{doc[:id]} to Solr"
solr.add(doc)
end
puts "Sending commit to Solr..."
solr.commit
end
puts "Complete."
The real magic of the Blacklight mapper pattern is in the BrainzMapper class in ./lib/brainz_mapper.rb. While the class may look a little hairy, it is actually quite simple. The pattern is defined by the base class BlockMapper. BlockMapper expects us to define a series of map methods for each field that we want to store in Solr. For example, to store the artist's name in the previously mentioned title_t field, we define it this way:
map :title_t do |rec,index|
rec[:artist].name
end
T his says that to map the :title_t field, we are handed our record object and the index of that record in our overall collection of records to be stored in Solr. In our case, we have populated the record object as a hash with two keys, :artist and :releases, whose values are an artist and their releases. In the :title_t mapping case, we ask the record hash for the artist object and call the .name() method.
How about a slightly more complex example, mapping all of the releases for an artist:
map :releases_t do |rec, index|
rec[:releases].collect {|release|release.entity.title}.compact.uniq
end
In this case, when we map the releases_t field, we obtain the releases object, which is an array of MusicBrainz::Model::Release objects. From each one we get the title of the release. The resulting array is compacted to remove any nil objects, and then only unique release titles are returned, as sometimes we have multiple releases listed with the same name. Blacklight properly handles storing a single value or an array of values in the releases_t field, as any field ending in _t is specified as multiValued="true" in schema.xml.
Very similar logic is used for mapping our facets as well. In this case, we are using the MusicBrainz::Utils.get_language_name method to translate from three letter language codes like "ENG" to "English" in order to have a prettier display in our facets:
map :language_facet do |rec,index|
  rec[:releases].collect {|release|
    MusicBrainz::Utils.get_language_name(release.entity.text_language)
  }.compact.uniq
end
Okay, we've seen the mapping logic, but where does the data come from? How are we populating the individual record hash object with :artist and :releases values? Web services to the rescue! MusicBrainz has an XML based web service that follows the REST design pattern, which you can learn more about at http://musicbrainz.org/doc/XMLWebService. Even by using the web service directly, you still need to parse and manipulate XML documents. Fortunately, there is the very nice rbrainz Ruby gem, which abstracts away all of the plumbing for communicating with MusicBrainz through XML. Instead, we work with higher level abstractions like Query and Artist objects. In the query below, we are asking for all of the artists similar to Dave Matthews Band, returning records 50 through 100.
require 'rbrainz'
query = MusicBrainz::Webservice::Query.new
results = query.get_artists({:name => 'Dave Matthews Band', :limit =>
50, :offset => 50})
MusicBrainz uses Lucene for its search engine, and it permits you to use Lucene's syntax in your queries. So, to find every band except the Dave Matthews Band we would execute:
results = query.get_artists({:name => 'Dave Matthews NOT Band'})
The method create_records_from_music_brainz(query_string) in ./lib/ brainz_mapper.rb returns a collection of record hashes containing artist and release data downloaded from MusicBrainz through rbrainz.
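The method itself is not reproduced in this excerpt, but its shape is roughly the sketch below. Note the assumptions: the release-lookup filter key (:artistid) is a guess based on the MusicBrainz web service parameters, and the real method also handles batching and MusicBrainz's rate limiting:

def create_records_from_music_brainz(query_string)
  query = MusicBrainz::Webservice::Query.new
  query.get_artists(:name => query_string, :limit => 100).collect do |result|
    artist = result.entity
    # :artistid is an assumed filter key; check the rbrainz docs for the real one
    releases = query.get_releases(:artistid => artist.id)
    { :artist => artist, :releases => releases.to_a }
  end
end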
In order to run Blacklight, first start the included Solr in ./examples/8/blacklightopac/blacklight/jetty through
>>java -jar start.jar
Then, run the indexing process in ./examples/8/blacklightopac/blacklight/rails which downloads artists alphabetically from A to Z:
>>rake app:index:brainz
Indexing is very slow due to all of the HTTP requests being made to MusicBrainz web site. Artists are downloaded in batches of 100, with up to 1000 artists per letter, and then each artist requires a separate HTTP request to find their music releases. So indexing a thousand artists for the letter P requires roughly 1010 HTTP queries ((1000 / 100) + 1000). Additionally, you'll notice that the query parameter using just a single alphabetical character, such as D*, leads to somewhat odd matches. Records are only indexed into Solr once all of the artist/release data for a letter is downloaded, so you need to wait for a complete letter to finish. However, soon you will have thousands of artists and their releases in Solr that you can browse through.
Customizing display
The user interface for Blacklight is fairly clean but pretty bland and displays every type of information the same way. However, based on the format_code_t field, you can easily customize the display. If you are indexing records with different types, such as Artists, Record Labels, and so on, then you can have a different display by populating format_code_t differently. We've chosen to just index Artists in this example, and defined :format_code_t to be brainz. As every record indexed uses the same value, we populate the shared_field_data parameter when calling the from_brainz method of the mapper:
mapper.from_brainz("#{char}*", {:format_code_t => 'brainz'})
do |doc,index|
def from_brainz(query_string, shared_field_data={}, &blk)
shared_field_data.each_pair do |k,v|
# map each item in the hash to a solr field
map k.to_sym, v
end
Any values put into the shared_field_data hash will be set on every field. A common use case for the shared_field_data hash is to set an :indexed_by_s property that specifies the name of the user who invoked the indexing process.
There are two ways of customizing the display of fields. One of them is the above mentioned ./config/solr.yml that allows us to filter the list of fields to display on the index page and the details page. However, that is a one-size-fits-all solution and still doesn't let you tweak the actual user interface depending on the data to display. There is another option that leverages the dynamic pathing of Rails to specify that view files should first be loaded from ./app/views, and if not found, then load them from the Blacklight plugin. For example, we created a custom partial to be rendered for the detailed view of an artist that incorporates the MusicBrainz logo and some photos of the artist. By placing the partial in ./app/views/catalog/_show_partials/_brainz.html.erb, the name of the partial is mapped directly to the format_code_t value of brainz. So, if you indexed multiple entities, then ./app/views/catalog/_show_partials/_artists.html.erb and ./app/views/catalog/_show_partials/_releases.html.erb map onto format_code_t of artists and releases respectively. Sometimes, you don't want to override Blacklight's UI. For example, we don't have a custom display partial when displaying listings for a search. Blacklight checks for the existence of ./app/views/catalog/_index_partials/_brainz.html.erb. If it doesn't find that file, then it defaults to the _default.html.erb partial stored in ./vendor/plugins/blacklight/app/views/catalog/_index_partials/_default.html.erb. This makes it very easy to override the default behaviors of Blacklight without requiring changes to the underlying plugin, which facilitates upgrades as new Blacklight versions are released.
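Summarizing the lookup rules above as a directory sketch (paths exactly as named in the text):

app/views/catalog/
  _show_partials/_brainz.html.erb    # detail view for format_code_t == "brainz"
  _index_partials/_brainz.html.erb   # optional listing view; when absent, Blacklight
                                     # falls back to the plugin's _default.html.erb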
solr-ruby versus rsolr
For a lower-level client interface to Solr from Ruby environments, there are two libraries duking it out to be the client of choice. In one corner you have solr-ruby, which is the client library officially supported by the Apache Solr project. solr-ruby is fairly widely used, including providing the API to Solr used by the acts_as_solr Rails plugin we looked at previously. The new kid on the block is rsolr, which is a re-imagining of what a proper DSL (Domain Specific Language) would look like for interacting with Solr. rsolr is used by Blacklight OPAC as its interface to Solr. Both of these solutions are solid. However, rsolr is currently gaining more attention, has better documentation, and nice features such as a direct Embedded Solr connection through JRuby. rsolr also has support for using either curb (Ruby bindings to curl, a very fast HTTP library) or the standard Net::HTTP library for the HTTP transport layer.
In order to perform a select using solr-ruby, you would issue:
response = solr.query('washington', {
:start =>0,
:rows=>10
})
In order to perform a select using rsolr, you would issue:
response = solr.select({
:q=>'washington',
:start=>0,
:rows=>10
})
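Reading results back is similarly close. The solr-ruby accessor below is the same one populate.rb used earlier; the rsolr form assumes the default Ruby hash response structure shown at the start of this article:

response['response']['docs'].each { |doc| puts doc['a_name'] } # rsolr: plain hash
response.hits.each { |doc| puts doc['a_name'] }                # solr-ruby wrapper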
So you can see that doing a basic search is pretty much the same in either library. Differences do crop up more as you dig into the details on parsing and indexing records. Both libraries are evolving, with neither having a dominant position at this point. You can learn more about solr-ruby on the Solr Wiki at http://wiki.apache.org/solr/solr-ruby and learn more about rsolr at github.com/mwmitchell/rsolr/tree.
Summary
In this article we surveyed the Ruby integration options for Solr, from the low-level solr-ruby and rsolr client libraries to the acts_as_solr Rails plugin and the Blacklight OPAC.
TrackPacer Part 2 - Connecting Multiple Microcontrollers Using ICSC
This is the second post in a 3 part series detailing the construction of our latest hardware project - TrackPacer. In this post, I'll cover how we implemented ICSC (Inter-Chip Serial Communication) to communicate quickly and reliably between multiple microcontrollers.
Background
The project involved controlling 12,000 LEDs across 400 meters. Given the physical and computational limitations, we chose to spread the task of controlling the LEDs across multiple microcontrollers, and use serial communication between them to pass around information. The final iteration of the prototype involved 10 slave nodes, each controlling 1,200 LEDs, and one master node running the show.
There are multiple different ways to communicate between microcontrollers. You could go the cool wireless route: Bluetooth, ZigBee, WiFi. Or throw some cables in the mix and use any number of established protocols: I2C, SPI, UART, all with their advantages and disadvantages.
The Hardware
We needed communication to be instantaneous and robust across large distances, so we went with a wired UART. In practice for the TrackPacer build, this was 11 Teensy 3.1 microcontrollers communicating through a MAX485 chip by way of a ChainDuino.
These are very handy boards that dress up an Atmega328 with a MAX485 chip and 2 RJ45 ports. Throw a Cat5 cable between two of these and they'll share power with one another, as well as have an open line for Serial communication. 11 of these boards and a quarter mile of sturdy ethernet cable gave us the infrastructure for quick and robust communication.
To understand how these all talk to each other, imagine you and 10 friends standing alongside a tunnel poking your heads inside. Anything you yell into the tunnel is heard by everyone else, and vice versa. There are some finer details including the fact that you can't yell and listen at the same time, and only one person can yell at a time, but that's the gist of it.
The Software
We turned to MajenkoLibraries's open sourced ICSC for giving us a nice framework for communicating between the controllers. This library makes it dead simple to accomplish the following:
- Establishing yourself in the tunnel with an ID
- Broadcasting a message to everyone
- Targeting a message to a specific ID
- Knowing what to do with an incoming message
With these 4 tasks, we're able to run an organized system. Taking a look at each of these tasks with bits of code, here's what they look like.
Establishing yourself in the tunnel with an ID:
#include <ICSC.h> #define NODE 1 ICSC icsc(Serial, NODE); void setup() { Serial.begin(115200); icsc.begin(); }
Here, we're including the library, initializing an ICSC instance with our defined
NODE identifier. Then kicking off all the necessary pieces in the
setup function.
Broadcasting a message to everyone:
icsc.broadcast('P', "ping");
Calling
broadcast(command, message) blasts the message out for everyone to receive.
Targeting a message to a specific ID:
icsc.send(2, 'P', "ping");
Instead of broadcasting, you can
send(target, command, message) to a specific Node. Any node that registered with that ID will catch the message, all others will simply ignore it.
Knowing what to do with an incoming message:
#include <ICSC.h> #define NODE 2 ICSC icsc(Serial, NODE); void setup() { Serial.begin(115200); icsc.begin(); icsc.registerCommand('P', &ReceivePing); } void loop() { icsc.process(); } void ReceivePing(unsigned char src, char command, unsigned char len, char *data) { if (strcmp(data, "ping") == 0) { icsc.send(1, 'P', "pong"); } }
Here's the fun part. In order to do anything with incoming messages, you need to register a callback of sorts to the message's command with
registerCommand, and define the function with that specific function signature you see here. Then call
icsc.process() as fast as you can in your loop in order to receive any inbound messages.
Summary
With the ICSC library and the ChainDuino, we've had no trouble achieving lightning fast, consistent communication between multiple microcontrollers, each of which is physically 40 meters away from each other.
If you've found this interesting, check out the other posts in this series Part 1 - A Nerdy Overview, and Part 3 - Controlling Thousands of LEDs. | https://www.viget.com/articles/trackpacer-part-2-connecting-multiple-microcontrollers-using-icsc/ | CC-MAIN-2021-43 | refinedweb | 707 | 54.12 |
How to install 'OAuth' for PHP
Ok so I am progressing...
I have installed PHP Version 5.6.8 and wish to use the PHP OAuth Extention to use the Twitter API but it says Class 'OAuth' not found.
I'm new to Linux so unsure what command I need to type in to install it.
Can anyone help please?
Hi @Brian-Moreau, OAuth in PHP is supplied by the PECL package. Can you try installing it with the following commands:
opkg update opkg install php5-pecl
- Brian Moreau
Unknown package 'php5-pecl' root@Omega-2774:/# opkg update Downloading. Updated list of available packages in /var/opkg-lists/chaos_calmer_base. Downloading. Signature check passed. Downloading. Updated list of available packages in /var/opkg-lists/chaos_calmer_packages. Downloading. Signature check passed. root@Omega-2774:/# opkg install php5-pecl Unknown package 'php5-pecl'. Collected errors: * opkg_install_cmd: Cannot install package php5-pecl. root@Omega-2774:/#
- Brian Moreau
Ok so I ran an 'opkg list' command and got listed 100's of packages.
PECL is not there...
Another forum suggested I might need PEAR?
This is a list of most of the PHP packages...
php5-mod-calendar - 5.6.8-1 - Calendar shared module php5-mod-ctype - 5.6.8-1 - Ctype shared module php5-mod-curl - 5.6.8-1 - cURL shared module php5-mod-dom - 5.6.8-1 - DOM shared module php5-mod-exif - 5.6.8-1 - EXIF shared module php5-mod-fileinfo - 5.6.8-1 - Fileinfo shared module php5-mod-ftp - 5.6.8-1 - FTP shared module php5-mod-gd - 5.6.8-1 - GD graphics shared module php5-mod-gettext - 5.6.8-1 - Gettext shared module php5-mod-gmp - 5.6.8-1 - GMP shared module php5-mod-hash - 5.6.8-1 - Hash shared module php5-mod-iconv - 5.6.8-1 - iConv shared module php5-mod-json - 5.6.8-1 - JSON shared module php5-mod-ldap - 5.6.8-1 - LDAP shared module php5-mod-mbstring - 5.6.8-1 - MBString shared module php5-mod-mcrypt - 5.6.8-1 - Mcrypt shared module php5-mod-mysql - 5.6.8-1 - MySQL shared module php5-mod-mysqli - 5.6.8-1 - MySQL Improved Extension shared module php5-mod-openssl - 5.6.8-1 - OpenSSL shared module php5-mod-pcntl - 5.6.8-1 - PCNTL shared module php5-mod-pdo - 5.6.8-1 - PHP Data Objects shared module php5-mod-pdo-mysql - 5.6.8-1 - PDO driver for MySQL shared module php5-mod-pdo-pgsql - 5.6.8-1 - PDO driver for PostgreSQL shared module php5-mod-pdo-sqlite - 5.6.8-1 - PDO driver for SQLite 3.x shared module php5-mod-pgsql - 5.6.8-1 - PostgreSQL shared module php5-mod-session - 5.6.8-1 - Session shared module php5-mod-shmop - 5.6.8-1 - Shared Memory shared module php5-mod-simplexml - 5.6.8-1 - SimpleXML shared module php5-mod-soap - 5.6.8-1 - SOAP shared module php5-mod-sockets - 5.6.8-1 - Sockets shared module php5-mod-sqlite3 - 5.6.8-1 - SQLite3 shared module php5-mod-sysvmsg - 5.6.8-1 - System V messages shared module php5-mod-sysvsem - 5.6.8-1 - System V shared memory shared module php5-mod-sysvshm - 5.6.8-1 - System V semaphore shared module php5-mod-tokenizer - 5.6.8-1 - Tokenizer shared module php5-mod-xml - 5.6.8-1 - XML shared module php5-mod-xmlreader - 5.6.8-1 - XMLReader shared module php5-mod-xmlwriter - 5.6.8-1 - XMLWriter shared module php5-mod-zip - 5.6.8-1 - ZIP shared module
@Brian-Moreau They might have removed the package from the latest build. Are you familiar with the cross compile environment? You should be able to compile the php5-pecl package with the following make file:
# # Copyright (C) 2011-2014 OpenWrt.org # # This is free software, licensed under the GNU General Public License v2. # See /LICENSE for more information. # define Package/php5-pecl/Default SUBMENU:=PHP SECTION:=lang CATEGORY:=Languages URL:= MAINTAINER:=Michael Heimpold <mhei@heimpold.de> DEPENDS:=php5 endef define Build/Configure ( cd $(PKG_BUILD_DIR); $(STAGING_DIR_HOST)/usr/bin/phpize ) $(Build/Configure/Default) endef CONFIGURE_ARGS+= \ --with-php-config=$(STAGING_DIR_HOST)/usr/bin/php-config define PECLPackage define Package/php5-pecl-$(1) $(call Package/php5-pecl/Default) TITLE:=$(2) ifneq ($(3),) DEPENDS+=$(3) endif endef define Package/php5-pecl-$(1)/install $(INSTALL_DIR) $$(1)/usr/lib/php $(INSTALL_BIN) $(PKG_BUILD_DIR)/modules/$(subst -,_,$(1)).so $$(1)/usr/lib/php/ $(INSTALL_DIR) $$(1)/etc/php5 ifeq ($(4),zend) echo "zend_extension=/usr/lib/php/$(subst -,_,$(1)).so" > $$(1)/etc/php5/$(subst -,_,$(1)).ini else echo "extension=$(subst -,_,$(1)).so" > $$(1)/etc/php5/$(subst -,_,$(1)).ini endif endef endef
Thanks for that Boken Lin
I assume I have to save the above code as a file?, of type? somewhere? on the device then run it with MAKE?
Sorry I really know nothing about Linux or OpenWtr
@Brian-Moreau You first need to set up a cross-compile environment:. Then you will need to go into the
feedsdirectory, and create a directory there. Inside the directory, you will put the Makefile. Then when you go to compile the package, you will use the
make menuconfigtool to select the
php5-peclpackage for compilation. From there, you will have the choice to build it directly into a firmware or build it as a separate package that you can then install on your Omega.
Hope this helps!
- Danny van der Sluijs
@Boken-Lin Thank you as your name is popping up all over the community with very good answers. I've tried to do as you suggested but not quite there yet. I'm trying to achieve the same here to install peel which seems to need the cross compile. I made it to the step make menuconfig and am able to set the initial options (Target system, subtarget, Target profile). Then it becomes unclear.
I've create the folder php5-pecl in the feeds directory. Inside I've created the Makefile.
But I can find the php5-pecl in the make menuconfig options.
Any tips or idea's ...?
@Danny-van-der-Sluijs Can you post the content of your
Makefile? I believe there's a configure in there that allows you to set which category you will be placing the package under.
Hi Sorry I am still totally lost...
I don't seem to be able to run the first command to setup the Cross-Compile Environment.
If I type ....
$ apt-get install -y subversion build-essential libncurses5-dev zlib1g-dev gawk flex quilt git-core unzip libssl-dev
I get $ not found, or if I just start command with apt-get .... I get apt-get not found error.
I have tried this from both the control panel command line interface and the serial interface.
What is the $ at the beginning of the commands and why is my device not recognising it?
Hi @Brian-Moreau, It seems that you are trying to setup the cross-compile environment on the Omega. You need to set it up on a Linux computer. The cross-compile environment is an environment that allows you to compile source code to Omega-specific binary. Since the Omega itself doesn't have a huge amount of computational resource, it is usually done on a desktop (or laptop) computer.
Do you know how to setup a virtual machine? We can walk you through the steps.
Hi again @Boken-Lin ,
I very much appreciate your help and assistance in this since it is not strictly an Omega problem.
Ok I am using WINXP PC and have in the past made dual boot so I should be able to manage to do that again but just wonder if you would recommend which virtual machine would be best for working with the Omega?
- Kit Bishop
@Brian-Moreau I have had a lot of success running a KUbuntu 14.04 VM in a VirtualBox VM under Windows
@Brian-Moreau Yeah, like @Kit-Bishop mentioned, one of the Ubuntu version is probably the easiest to get started. We've also got pre-compiled SDK for 64-bit Ubuntu.
Hi again @Broken-Lin
Ok I made some progress...
I installed KUbuntu on another computer.
I have followed steps 1 and 2 for setting up the Cross Compile Platform and all is good.
I now have the following prompt..
/openwrt t$
I am now at Step 3: Update Feeds but unsure what to do?
I typed cd feeds to go to the feeds directory but it says no such file or directory.
Thanks in advance
Brian
@Brian-Moreau Some small bits of clarification that may help:
- There is no directory feeds - feeds is a script file in the directory scripts under your openwrt directory
- The file feeds.conf.default that needs modifying referred to in Step 3 of the tutorial is in your openwrt directory
- The command to be run as described in Step 3 i.e.:
scripts/feeds update -a
should be run from your openwrt directory.
It runs the feeds script that is in the scripts directory.
- All other commands covered in the tutorial should similarly just be run from the openwrt directory
@Boken-Lin said:
Do you know how to setup a virtual machine? We can walk you through the steps.
This would be great been thinking about doing this while I wait on my second Omega. Could you be so kind as to post the steps for setting up VM?
@Rudy-Trujillo First you will need to get and install a copy of VirtualBox. This can be found at:
Then you will need an OS image to run on the VirtualBox - most commonly used for Omega work seems to be KUbuntu.
There are two ways to do this:
- The hard way: download an ISO of the KUbuntu version you want (these can be found here:). Then create a VM instance in VirtualBox that you install to from the ISO image - i.e. set the VM up to boot from the ISO image as a CD/DVD and follow the installation process. When that is complete, disconnect the VM from the ISO image and when you reboot the VM, you should be running the installed system.
- The easy way: Download a pre-installed VirtualBox VM image for the version you want. I would suggest the one that can be obtained from here:
Then just open it using VirtualBox and you will be running the pre-installed system.
@Rudy-Trujillo A PS to my previous message: some people prefer VMWare over VitualBox.
A free copy of VMWare can be found at:
The principle of VMWare is pretty much the same as VirtualBox
I am less familiar with the availability of pre-installed VMWare images and you may have to do a google search for one. | http://community.onion.io/topic/138/how-to-install-oauth-for-php | CC-MAIN-2018-39 | refinedweb | 1,808 | 59.9 |
I am trying to read a file and follow instructions based on its contents, add them to a linked list. I have the all the file handling written but am still having trouble with pointers and linked lists. The instructions from the input file would go like this
i n // insert value n into the list
c // clear thelist
q // quit
So if the content was:
i 1
i 2
i 3
i 4
q
The output would be:
1
1 2
1 2 3
1 2 3 4
Here is the file handling code for reference
#include <ctype.h> #include <assert.h> int main(int argc, char * argv[]) { assert(argc == 3); Node * p = NULL; int newval, retval; int op; FILE *in = fopen(argv[1],"r"); assert(in != NULL); FILE *out = fopen(argv[2],"w"); assert(out != NULL); do { op = fgetc(in); } while (op != EOF && isspace(op)); while(op != EOF && op != 'q') { switch(op) { case 'i': fscanf(in,"%d",&newval); p = orderedInsert(p,newval); printList(out,p); printList(stdout,p); break; case 'c': clearList(&p); break; default: fclose(in); fclose(out); return 0; } do op = fgetc(in); while (op != EOF && isspace(op)); } fclose(in); fclose(out); return 0; }
Here is my struct:
typedef struct node { int data; struct node *next; } Node;
And these are my functions and what I'm trying to do with them, I'm just not exactly sure how.
Allocates a new Node with data value newval and inserts into the ordered list with first node pointer p in such a way that the data values in the modified list are in an increasing order as the list is traversed.
Node *orderedInsert(Node *p, int newval) {
}
Prints the data values in the list with first node pointer p from first to last, with a space between successive values. Prints a newline at the end of the list. I think I will be using fprintf(outfile, "%d\n", p-> data); but I'm not sure how to handle successive values.
void printList(FILE *outfile, Node *p) {
}
Deletes all the nodes in the list with first node pointer *p, resulting in *p having value NULL. Note that we are passing a pointer by address so we can modify that pointer.
void clearList(Node **p) {
} | https://www.daniweb.com/programming/software-development/threads/488615/linked-list | CC-MAIN-2017-09 | refinedweb | 375 | 76.86 |
(A series of blog posts about the tech behind lagen.nu. Earlier parts are here: first, second, third, fourth, fifth and sixth)
Like most developers that have been Test infected, I
try to create regression tests whenever I can. A project like
lagen.nu, which has no GUI, no event handling, no databases, just
operations on text files, is really well suited for automated
regression testing. However, when I started out, I didn’t do
test-first programming since I didn’t really have any idea of what
I was doing. As things solidified, I encountered a particular
section of the code that lended itself very nicely to regression
testing.
Now, when I say regression testing, I don’t neccesarily mean unit
testing. I’m not so concerned with testing classes at the method
level as my ”API” is really document oriented; a particular text
document sent into the program should result in the return of a
particular XML document. Basically, there are only two methods
that I’m testing:
- The lawparser.parse() method: Given a section of law text,
returns the same section with all references marked up, as
described in part 5.
- Law._txt_to_xml(), which, given a entire law as plaintext,
returns the xml version, as described in part 4
Since both these tests operate in the fashion ”Send in really big
string, compare result with the expected other even bigger
string”, I found that pyunit didn’t work that well for me, as it’s
more centered around testing lots of methods in lots of classes,
where the testdata is so small that it’s comfortable having them
inside the test code.
Instead, I created my tests in the form of a bunch of text
files. For lawparser.parse, each file is just two paragraphs, the
first being the indata, and the second being the expected outdata:
Vid ändring av en bolagsordning eller av en beviljad koncession gäller 3 § eller 4 a § i tillämpliga delar. Vid ändring av en bolagsordning eller av en beviljad koncession gäller <link section="3">3 §</link> eller <link section="4a">4 a §</link> i tillämpliga delar.
The test runner then becomes trivial:
def runtest(filename,verbose=False,quiet=False): (test,answer) = open(filename).read().split("\n\n", 1) p = LawParser(test,verbose) res = p.parse() if res.strip() == answer.strip(): print "Pass: %s" % filename return True else: print "FAIL: %s" % filename if not quiet: print "----------------------------------------" print "EXPECTED:" print answer print "GOT:" print res print "----------------------------------------" return False
Similarly, the code to test Law._txt_to_xml() is also
pretty trivial. There are two differences: Since the indata is
larger and already split up in paragraphs, the indata and expected
result for a particular test is stored in separate files. This
also lets me edit the expected results file using nXML mode in
Emacs.
Comparing two XML documents is also a little trickier, in that
they can be equivalent, but still not match byte-for-byte (since
there can be semantically insignificant whitespace and similar
stuff). To avoid getting false alarms, I put both the expected
result file, as well as the actual result, trough tidy. This
ensures that their whitespacing will be equivalent, as well as
easy to read. Also, a good example of piping things to and from a
command in python:
def tidy_xml_string(xmlstring): """Neatifies a XML string and returns it""" (stdin,stdout) = os.popen2("tidy -q -n -xml --indent auto --char-encoding latin1") stdin.write(xmlstring) stdin.close() return stdout.read()
If the two documents still don’t match, it can be difficult to
pinpoint the exact place where they match. I could dump the
results to file and run command-line diff on them, but since there
exists a perfectly good diff implementation in the python standard
libraries I used that one instead:
from difflib import Differ differ = Differ() diff = list(differ.compare(res.splitlines(), answer.splitlines())) print "\n".join(diff)+"\n"
The result is even easier to read than standard diff output, since
it points out the position on the line as well (maybe there’s a
command line flag for diff that does this?):
[...] suscipit non, venenatis ac, dictum ut, nulla. Praesent mattis.</p> </section> - <section id="1" element="2"> ? ^^^ + <section id="1" moment="2"> ? ^^ <p>Sed semper, ante non vehicula lobortis, leo urna sodales justo, sit amet mattis felis augue sit amet felis. Ut quis [...]
So, that’s basically my entire test setup for now. I need to build
more infrastructure for testing the XSLT transform and the HTML
parsing code, but these two areas are the trickiest.
Since I can run these test methods without having a expected
return value, they are very useful as the main way of developing
new functionality: I specify the indata, and let the test function
just print the outdata. I can then work on new functionality
without having to manually specifying exactly how I want the
outdata to look (because this is actually somewhat difficult for
large documents), I just hack away until it sort of looks like I
want, and then just cut’n paste the outdata to the ”expected
result” file. | http://blog.tomtebo.org/2004/12/19/lagennu_tech_7/ | CC-MAIN-2019-13 | refinedweb | 852 | 59.43 |
On 06/28/2013 06:17 PM, Daniel P. Berrange wrote: > On Thu, Jun 27, 2013 at 08:56:25AM +0800, Gao feng wrote: >> On 06/26/2013 07:01 PM, Daniel P. Berrange wrote: >>> On Wed, Jun 26, 2013 at 05:56:19PM +0800, Gao feng wrote: >>>> On 06/26/2013 05:38 PM, Daniel P. Berrange wrote: >>>>> On Wed, Jun 26, 2013 at 10:26:10AM +0800, Gao feng wrote: >>>>>> On 06/26/2013 04:39 AM, Daniel P. Berrange wrote: >>>>>>> On Thu, Jun 13, 2013 at 08:02:18PM +0200, Richard Weinberger wrote: >>>>>>>> Within a user namespace root can remount these filesysems at any >>>>>>>> time rw. >>>>>>>> Create these mappings only if we're not playing with user namespaces. >>>>>>> >>>>>>> This is a problem with the way we're initializing mounts in the >>>>>>> user namespace. >>>>>> >>>>>> This problem exists even libvirt lxc doesn't support user namespace. >>>>> >>>>> Yes, and this is a problem that user namespace is intended to >>>>> solve. >>>>> >>>>>>> We need to ensure that the initial mounts setup >>>>>>> by libvirt can't be changed by admin inside the container. Preventing >>>>>>> the container admin from remounting or unmounting these mounts is key >>>>>>> to security. >>>>>>> >>>>>>> IIUC, the only way to ensure this is to start a new user namespace >>>>>>> /after/ setting up all mounts. >>>>>>> >>>>>> >>>>>> start a new user namespace means the container will lose controller of >>>>>> mount namespace. so the container can't do mount operation too, though >>>>>> we only can mount a little of filesystems in un-init user namespace. >>>>> >>>>> Merely being able to unmount is sufficient to exploit the host. Consider >>>>> that the container was configured with the following mapping >>>>> >>>>> / -> / >>>>> /export/mycontainer/home -> /home >>>>> >>>>> Now, if the container admin can umount /home, then they can now >>>>> see the home directory contents of the host. At least this is >>>>> likely to be information leakage, and if any of the host home >>>>> directories have UIDs that overlap with those assigned to the >>>>> container ID map, you have a potentially exploitable situation. >>>>> >>>>> Hence we need to ensure that the container cannot unmount or >>>>> remount anything setup by libvirt. AFAICT, this means that all >>>>> the mounts libvirt does, must be performed in a seprate user >>>>> namespace to that wit hthe container will eventually run in. >>>>> >>>> >>>> Libvirt mounts something for the container in one user namesapce, >>>> and then libvirt calls unshare to create a new user namespace and >>>> start the init task of container. >>>> >>>> Yes, the users in container can't do mount/unmount/remount on all >>>> of filesystem. but they can call unshare to create a new mount namespace, >>>> and they will have rights to mount/unmount/remount in this new created >>>> mount namespace. they can still umount /home to see the home directory >>>> contents of host. >>> >>> An existing filesystem mount can only be remounted/unmounted by the >>> (user ID, usernamespace) that originally mounted it. So even if you >>> start a new mount namespace, you cannot unmount stuff setup by the >>> parent user namespace. >>> >> >> Please also setup the uid_map/gid_map for the unshared user namespace. >> even in container, user has rights to setup these two files. 
>> >>> # unshare --mount --user /bin/sh >>> sh-4.2$ umount /sys/kernel/debug >>> umount: /sys/kernel/debug: Invalid argument >>> >> >> in terminal one >> $ id >> uid=1000(gaofeng) gid=1000(gaofeng) groups=1000(gaofeng) >> $ ./unshare --mount --user /bin/sh >> sh-4.2$ echo $$ >> 17110 >> sh-4.2$ >> >> in other terminal,setup id map for new userns. >> $echo 0 1000 1 > /proc/17110/uid_map >> $echo 0 1000 1 > /proc/17110/gid_map >> >> and then in terminal one >> sh-4.2$ umount -l /home/ > > Oh, hmm, forgot about the uid mapping. I thought the capabilities would > be allowing me unmount regardless. > > Well, given that we're at rc2 now & I'm still unclear about how some > aspects of the userns setup is working, I'm afraid we'll have to wait > until 1.1.1 for the userns LXC code to merge. I'll aim todo it next > week, so that we have plenty of time for further testing before the > 1.1.1 release. > Ok, I think Richard had tested the userns support. Hi Richard, can you give me your ack or tested-by? Thanks! | https://www.redhat.com/archives/libvir-list/2013-July/msg00000.html | CC-MAIN-2015-18 | refinedweb | 693 | 72.56 |
Callback on file save
Is there or could there be a callback for custom processing whenever a file is saved in Pythonista?
- Webmaster4o
@mikael I would love to see more things like this in Pythonista. You might be able to do something like this in
objc_utilusing something called method swizzling.
Basically, as I understand it, this would involve taking the existing method that's called on save and copying and renaming it somehow. Then, you would make a new method that first called the existing (renamed and copied) save function, and then called a new function.
You would then overwrite the existing method with that new one, which would save and do something else. Since you copied the function, saving will still happen after overwriting the function.
I'm pretty sure @ProfSpaceCadet has done this in the past.
As you can probably tell, this is really unstable, and probably won't work in many cases. You might also crash the app a lot with this approach. But it may work.
+1 for adding this into the app. That'd be great.
@Webmaster4o Is "swizzling" in Objective-C what you would call "monkey-patching" in Python and other dynamic lanugages? Basically something like this:
old_print = print def print(*args, **kwargs): old_print("Hi!", *args, **kwargs)
Actually this is kind of the wrong way to monkey-patch a method, because someone else might have modified
old_print. To avoid this kind of name conflict you can use a closure:
def _make_new_print(): old_print = print def new_print(*args, **kwargs): old_print("Hi!", *args, **kwargs) return new_print print = _make_new_print() del _make_new_print
- Webmaster4o
@dgelessus Yes, this is what I'm talking about. This might be possible for Pythonista's internal objective C methods.
Here is a very raw saveData swizzle... This probably needs to check for the existence of an existing swizzle and undo it first, otherwise you can lose the original!
# coding: utf-8 # coding: utf-8 import editor from objc_util import * from objc_util import parse_types import ctypes import inspect t=editor._get_editor_tab() def saveData(_self,_sel): print 'hi' #call original method obj=ObjCInstance(_self) orig=getattr(obj,'_original'+c.sel_getName(_sel)) orig() def swizzle(cls, old_sel, new_fcn): '''swizzles cls.old_sel with new_fcn. Assumes encoding is the same. if class already has swizzledSelector, unswizzle first. original selector can be called via originalSelectir ''' orig_method=c.class_getInstanceMethod(cls.ptr, sel(old_sel)) #new_method=c.class_getInstanceMethod(cls, sel(new_sel)) type_encoding=str(cls.instanceMethodSignatureForSelector_(sel(old_sel))._typeString()) parsed_types = parse_types(str(type_encoding)) restype = parsed_types[0] argtypes = parsed_types[1] # Check if the number of arguments derived from the selector matches the actual function: argspec = inspect.getargspec(new_fcn) if len(argspec.args) != len(argtypes): raise ValueError('%s has %i arguments (expected %i)' % (method, len(argspec.args), len(argtypes))) IMPTYPE = ctypes.CFUNCTYPE(restype, *argtypes) imp = IMPTYPE(new_fcn) retain_global(imp) new_sel='_original'+old_sel didAdd=c.class_addMethod(cls.ptr, sel(new_sel), imp, type_encoding) new_method=c.class_getInstanceMethod(cls.ptr, sel(new_sel)) # swap imps c.method_exchangeImplementations.restype=None c.method_exchangeImplementations.argtypes=[c_void_p,c_void_p] c.method_exchangeImplementations(orig_method, new_method) return new_sel t=editor._get_editor_tab() cls=ObjCInstance(c.object_getClass(t.ptr)) swizzle(cls,'saveData',saveData)
Thanks guys! Maybe just a little bit hack-y than I was hoping for, but why not?
The idea here was to flag the file "dirty" with a timestamp, and then with a background thread write the file to a cloud store when activity settles down for a moment. I would use reminders as a store.
On other devices I would still use a manual "update latest" user action instead of swizzling the load function, to avoid confusion and since reminders seem to require opening the Reminders app before they are synced.
I've updated this with a more robust and generalized swizzling method, which swizzles saveData in this particular case.
There is also a OMFileWatcher object which seems like it is doing something like what you want ... but I have been unable to get its delegate methods fired off.
Also, I have not yet tried this, but it appears fnctl might work on iOS, so would be a way to get callbacks on file modify without relying on swizzling.
Another approach would be to simply have a Timer thread which checks the list of files you are watching. If you are going to have a thread anyway, probably only slightly more effort to check file dates than to be triggered by a callback. | https://forum.omz-software.com/topic/2939/callback-on-file-save | CC-MAIN-2019-04 | refinedweb | 725 | 58.38 |
tl;dr
You love Swift's
Codable protocol and use it everywhere, who doesn't! Here is an easy and very light way to store and retrieve -reasonable amount 😅- of
Codable objects, in a couple lines of code!.
Installation
Swift Package Manager (Recommended)
You can use The Swift Package Manager to install
UserDefaultsStore by adding the proper description to your
Package.swift file:
import PackageDescription let package = Package( name: "YOUR_PROJECT_NAME", targets: [], dependencies: [ .package(url: "", from: "2.0.0") ] )
Next, add
UserDefaultsStore to your targets dependencies like so:
.target( name: "YOUR_TARGET_NAME", dependencies: [ "UserDefaultsStore", ] ),
Then run
swift package update.
CocoaPods" ~> 2.0.0 property
The
Identifiable protocol lets UserDefaultsStore knows what is the unique id for each object.
struct User: Codable, Identifiable { ... }
struct Laptop: Codable, Identifiable { var id: String { model } ... }(id: // Create a snapshot let snapshot = usersStore.generateSnapshot() // Restore a pre-generated snapshot try? usersStore.restoreSnapshot(snapshot)
Looking to store a single item only?
Use
SingleUserDefaultsStore, it enables storing and retrieving a single value of
Int,
Double,
String, or any
Codable type.
Requirements
- iOS 13.0+ / macOS 10.15+ / tvOS 13.0+ / watchOS 6.0+
- Swift 5.0+
You may find interesting
Releases
2.0.0 - 2020-08-21T12:54:15.
1.5.0 - 2020-02-22T14:38:32
- Add
init?(uniqueIdentifier:, encoder:, decoder:)to both
UserDefaultsStoreand
SingleUserDefaultsStoreto create a store with custom encoder and/or decoder
- Replace TravisCI with Github Actions
1.4.3 - 2019-11-17T18:15:13
- Fix a bug where saving an array of objects with an invalid object will cause the store to have an invalid count
- Fix a bug where project was not accessible using SPM
1.4.2 - 2019-06-27T05:51:35
1.4.1 - 2019-06-05T09:15:10
1.4 - 2019-04-04T13:27:27
v1.4 brings Swift 5.0 and Xcode 10.2 support! | https://swiftpack.co/package/omaralbeik/UserDefaultsStore | CC-MAIN-2021-04 | refinedweb | 304 | 54.08 |
density.py. For information
about downloading and working with this code, see Section 0.2., you have to integrate over x.
thinkstats2 provides a class called Pdf that represents
a probability density function. Every Pdf object provides the
following methods:
Pdf is an abstract parent class, which means you should not
instantiate it; that is, you cannot create a Pdf object. Instead, you
should define a child class that inherits from Pdf and provides
definitions of Density and GetLinspace. Pdf provides
Render and MakePmf.
For example, thinkstats2 provides a class named NormalPdf that evaluates the normal density function.
class NormalPdf(Pdf):
def __init__(self, mu=0, sigma=1, label=''):
self.mu = mu
self.sigma = sigma
self.label = label
def Density(self, xs):
return scipy.stats.norm.pdf(xs, self.mu, self.sigma)
def GetLinspace(self):
low, high = self.mu-3*self.sigma, self.mu+3*self.sigma
return np.linspace(low, high, 101)
The NormalPdf object contains the parameters mu and
sigma. Density uses
scipy.stats.norm, which is an object that represents a normal
distribution and provides cdf and pdf, among other
methods (see Section 5.2).
The following example creates a NormalPdf with the mean and variance
of adult female heights, in cm, from the BRFSS (see
Section 5.4). Then it computes the density of the
distribution at a location one standard deviation from the mean.
>>> mean, var = 163, 52.8
>>> std = math.sqrt(var)
>>> pdf = thinkstats2.NormalPdf(mean, std)
>>> pdf.Density(mean + std)
0.0333001
The result is about 0.03, in units of probability mass per cm.
Again, a probability density doesn’t mean much by itself. But if
we plot the Pdf, we can see the shape of the distribution:
>>> thinkplot.Pdf(pdf, label='normal')
>>> thinkplot.Show()
thinkplot.Pdf plots the Pdf as a smooth function,
as contrasted with thinkplot.Pmf, which renders a Pmf as a
step function. Figure 6.1 shows the result, as well
as a PDF estimated from a sample, which we’ll compute in the next
section.
You can use MakePmf to approximate the Pdf:
>>> pmf = pdf.MakePmf()
By default, the resulting Pmf contains 101 points equally spaced from
mu - 3*sigma to mu + 3*sigma. Optionally, MakePmf
and Render can take keyword arguments low, high,
and n.
Figure 6.1: A normal PDF that models adult female height in the U.S.,
and the kernel density estimate of a sample with n=500.
Kernel density estimation (KDE) is an algorithm that takes
a sample and finds an appropriately smooth PDF that fits
the data. You can read details at.
scipy provides an implementation of KDE and thinkstats2
provides a class called EstimatedPdf that uses it:
class EstimatedPdf(Pdf):
def __init__(self, sample):
self.kde = scipy.stats.gaussian_kde(sample)
def Density(self, xs):
return self.kde.evaluate(xs)
__init__ takes a sample
and computes a kernel density estimate. The result is a
gaussian_kde object that provides an evaluate
method.
__init__
gaussian_kde
Density takes a value or sequence, calls
gaussian_kde.evaluate, and returns the resulting density. The
word “Gaussian” appears in the name because it uses a filter based
on a Gaussian distribution to smooth the KDE.
gaussian_kde.evaluate
Here’s an example that generates a sample from a normal
distribution and then makes an EstimatedPdf to fit it:
>>> sample = [random.gauss(mean, std) for i in range(500)]
>>> sample_pdf = thinkstats2.EstimatedPdf(sample)
>>> thinkplot.Pdf(sample_pdf, label='sample KDE')
sample is a list of 500 random heights.
sample_pdf is a Pdf object that contains the estimated
KDE of the sample.
sample
sample_pdf
Figure 6.1 shows the normal density function and a KDE
based on a sample of 500 random heights. The estimate is a good
match for the original distribution.
Estimating a density function with KDE is useful for several purposes:
Figure 6.2: A framework that relates representations of distribution
functions.
At this point we have seen PMFs, CDFs and PDFs; let’s take a minute
to review. Figure 6.2 shows how these functions relate
to each other.
We started with PMFs, which represent the probabilities for a discrete
set of values. To get from a PMF to a CDF, you add up the probability
masses to get cumulative probabilities.
To get from a CDF back to a PMF, you compute differences in cumulative
probabilities. We’ll see the implementation of these operations
in the next few sections.
A PDF is the derivative of a continuous CDF; or, equivalently,
a CDF is the integral of a PDF.. Another option is kernel density estimation.
The opposite of smoothing is discretizing, or quantizing. If you
evaluate a PDF at discrete points, you can generate a PMF that is an
approximation of the PDF. You can get a better approximation using
numerical integration.
To distinguish between continuous and discrete CDFs, it might be
better for a discrete CDF to be a “cumulative mass function,” but as
far as I can tell no one uses that term.
At this point you should know how to use the basic types provided
by thinkstats2: Hist, Pmf, Cdf, and Pdf. The next few sections
provide details about how they are implemented. This material
might help you use these classes more effectively, but it is not
strictly necessary.
Hist and Pmf inherit from a parent class called _DictWrapper.
The leading underscore indicates that this class is “internal;” that
is, it should not be used by code in other modules. The name
indicates what it is: a dictionary wrapper. Its primary attribute is
d, the dictionary that maps from values to their frequencies.
_DictWrapper
The values can be any hashable type. The frequencies should be integers,
but can be any numeric type.
_DictWrapper contains methods appropriate for both
Hist and Pmf, including __init__, Values,
Items and Render. It also provides modifier
methods Set, Incr, Mult, and Remove. These
methods are all implemented with dictionary operations. For example:
# class _DictWrapper
def Incr(self, x, term=1):
self.d[x] = self.d.get(x, 0) + term
def Mult(self, x, factor):
self.d[x] = self.d.get(x, 0) * factor
def Remove(self, x):
del self.d[x]
Hist also provides Freq, which looks up the frequency
of a given value.
Because Hist operators and methods are based on dictionaries,
these methods are constant time operations;
that is, their run time does not increase as the Hist gets bigger.
Pmf and Hist are almost the same thing, except that a Pmf
maps values to floating-point probabilities, rather than integer
frequencies. If the sum of the probabilities is 1, the Pmf is normalized.
Pmf provides Normalize, which computes the sum of the
probabilities and divides through by a factor:
# class Pmf
def Normalize(self, fraction=1.0):
total = self.Total()
if total == 0.0:
raise ValueError('Total probability is zero.')
factor = float(fraction) / total
for x in self.d:
self.d[x] *= factor
return total
fraction determines the sum of the probabilities after
normalizing; the default value is 1. If the total probability is 0,
the Pmf cannot be normalized, so Normalize raises ValueError.
Hist and Pmf have the same constructor. It can take
as an argument a dict, Hist, Pmf or Cdf, a pandas
Series, a list of (value, frequency) pairs, or a sequence of values.
If you instantiate a Pmf, the result is normalized. If you
instantiate a Hist, it is not. To construct an unnormalized Pmf,
you can create an empty Pmf and modify it. The Pmf modifiers do
not renormalize the Pmf.
A CDF maps from values to cumulative probabilities, so I could have
implemented Cdf as a _DictWrapper. But the values in a CDF are
ordered and the values in a _DictWrapper are not. Also, it is
often useful to compute the inverse CDF; that is, the map from
cumulative probability to value. So the implementaion I chose is two
sorted lists. That way I can use binary search to do a forward or
inverse lookup in logarithmic time.
The Cdf constructor can take as a parameter a sequence of values
or a pandas Series, a dictionary that maps from values to
probabilities, a sequence of (value, probability) pairs, a Hist, Pmf,
or Cdf. Or if it is given two parameters, it treats them as a sorted
sequence of values and the sequence of corresponding cumulative
probabilities.
Given a sequence, pandas Series, or dictionary, the constructor makes
a Hist. Then it uses the Hist to initialize the attributes:
self.xs, freqs = zip(*sorted(dw.Items()))
self.ps = np.cumsum(freqs, dtype=np.float)
self.ps /= self.ps[-1]
xs is the sorted list of values; freqs is the list
of corresponding frequencies. np.cumsum computes
the cumulative sum of the frequencies. Dividing through by the
total frequency yields cumulative probabilities.
For n values, the time to construct the
Cdf is proportional to n logn.
Here is the implementation of Prob, which takes a value
and returns its cumulative probability:
# class Cdf
def Prob(self, x):
if x < self.xs[0]:
return 0.0
index = bisect.bisect(self.xs, x)
p = self.ps[index - 1]
return p
The bisect module provides an implementation of binary search.
And here is the implementation of Value, which takes a
cumulative probability and returns the corresponding value:
# class Cdf
def Value(self, p):
if p < 0 or p > 1:
raise ValueError('p must be in range [0, 1]')
index = bisect.bisect_left(self.ps, p)
return self.xs[index]
Given a Cdf, we can compute the Pmf by computing differences between
consecutive cumulative probabilities. If you call the Cdf constructor
and pass a Pmf, it computes differences by calling Cdf.Items:
# class Cdf
def Items(self):
a = self.ps
b = np.roll(a, 1)
b[0] = 0
return zip(self.xs, a-b)
np.roll shifts the elements of a to the right, and “rolls”
the last one back to the beginning. We replace the first element of
b with 0 and then compute the difference a-b. The result
is a NumPy array of probabilities.
Cdf provides Shift and Scale, which modify the
values in the Cdf, but the probabilities should be treated as
immutable.
Any time you take a sample and reduce it to a single number, that
number is a statistic. The statistics we have seen so far include
mean, variance, median, and interquartile range.
A raw moment is a kind of statistic. If you have a sample of
values, xi, the kth raw moment is:
Or if you prefer Python notation:
def RawMoment(xs, k):
return sum(x**k for x in xs) / len(xs)
When k=1 the result is the sample mean, x. The other
raw moments don’t mean much by themselves, but they are used
in some computations.
The central moments are more useful. The
kth central moment is:
Or in Python:
def CentralMoment(xs, k):
mean = RawMoment(xs, 1)
return sum((x - mean)**k for x in xs) / len(xs)
When k=2 the result is the second central moment, which you might
recognize as variance. The definition of variance gives a hint about
why these statistics are called moments. If we attach a weight along a
ruler at each location, xi, and then spin the ruler around
the mean, the moment of inertia of the spinning weights is the variance
of the values. If you are not familiar with moment of inertia, see.
When you report moment-based statistics, it is important to think
about the units. For example, if the values xi are in cm, the
first raw moment is also in cm. But the second moment is in
cm2, the third moment is in cm3, and so on.
Because of these units, moments are hard to interpret by themselves.
That’s why, for the second moment, it is common to report standard
deviation, which is the square root of variance, so it is in the same
units as xi.
Skewness is a property that describes the shape of a distribution.
If the distribution is symmetric around its central tendency, it is
unskewed. If the values extend farther to the right, it is “right
skewed” and if the values extend left, it is “left skewed.”
This use of “skewed” does not have the usual connotation of
“biased.” Skewness only describes the shape of the distribution;
it says nothing about whether the sampling process might have been
biased.
Several statistics are commonly used to quantify the skewness of a
distribution. Given a sequence of values, xi, the sample
skewness, g1, can be computed like this:
def StandardizedMoment(xs, k):
var = CentralMoment(xs, 2)
std = math.sqrt(var)
return CentralMoment(xs, k) / std**k
def Skewness(xs):
return StandardizedMoment(xs, 3)
g1 is the third standardized moment, which means that it has
been normalized so it has no units.
Negative skewness indicates that a distribution
skews left; positive skewness indicates
that a distribution skews right. The magnitude of g1 indicates
the strength of the skewness, but by itself it is not easy to
interpret.
In practice, computing sample skewness.
In a distribution that skews right, the mean is greater.
Pearson’s median skewness coefficient is a measure
of skewness based on the difference between the
sample mean and median:
Where x is the sample mean, m is the median, and
S is the standard deviation. Or in Python:
def Median(xs):
cdf = thinkstats2.Cdf(xs)
return cdf.Value(0.5)
def PearsonMedianSkewness(xs):
median = Median(xs)
mean = RawMoment(xs, 1)
var = CentralMoment(xs, 2)
std = math.sqrt(var)
gp = 3 * (mean - median) / std
return gp
This statistic is robust, which means that it is less vulnerable
to the effect of outliers.
Figure 6.3: Estimated PDF of birthweight data from the NSFG.
As an example, let’s look at the skewness of birth weights in the
NSFG pregnancy data. Here’s the code to estimate and plot the PDF:
live, firsts, others = first.MakeFrames()
data = live.totalwgt_lb.dropna()
pdf = thinkstats2.EstimatedPdf(data)
thinkplot.Pdf(pdf, label='birth weight')
Figure 6.3 shows the result. The left tail appears
longer than the right, so we suspect the distribution is skewed left.
The mean, 7.27 lbs, is a bit less than
the median, 7.38 lbs, so that is consistent with left skew.
And both skewness coefficients are negative:
sample skewness is -0.59;
Pearson’s median skewness is -0.23.
Figure 6.4: Estimated PDF of adult weight data from the BRFSS.
Now let’s compare this distribution to the distribution of adult
weight in the BRFSS. Again, here’s the code:
df = brfss.ReadBrfss(nrows=None)
data = df.wtkg2.dropna()
pdf = thinkstats2.EstimatedPdf(data)
thinkplot.Pdf(pdf, label='adult weight')
Figure 6.4 shows the result. The distribution
appears skewed to the right. Sure enough, the mean, 79.0, is bigger
than the median, 77.3. The sample skewness is 1.1 and Pearson’s
median skewness is 0.26.
The sign of the skewness coefficient indicates whether the distribution
skews left or right, but other than that, they are hard to interpret.
Sample skewness is less robust; that is, it is more
susceptible to outliers. As a result it is less reliable
when applied to skewed distributions, exactly when it would be most
relevant.
Pearson’s median skewness is based on a computed mean and variance,
so it is also susceptible to outliers, but since it does not depend
on a third moment, it is somewhat more robust.
A solution to this exercise is in chap06soln.py.
chap06soln.py
The distribution of income is famously skewed to the right. In this
exercise, we’ll measure how strong that skew is.2.py, which reads this file and transforms
the data.
The dataset is in the form of a series of income ranges and the number
of respondents who fell in each range. The lowest range includes
respondents who reported annual household income “Under $5000.”
The highest range includes respondents who made “$250,000 or
more.”
To estimate mean and other statistics from these data, we have to
make some assumptions about the lower and upper bounds, and how
the values are distributed in each range. hinc2.py provides
InterpolateSample, which shows one way to model
this data. It takes a DataFrame with a column, income, that
contains the upper bound of each range, and freq, which contains
the number of respondents in each frame.
It also takes log_upper, which is an assumed upper bound
on the highest range, expressed in log10 dollars.
The default value, log_upper=6.0 represents the assumption
that the largest income among the respondents is
106, or one million dollars.
log_upper
log_upper=6.0
InterpolateSample generates a pseudo-sample; that is, a sample
of household incomes that yields the same number of respondents
in each range as the actual data. It assumes that incomes in
each range are equally spaced on a log10 scale.
Compute the median, mean, skewness and Pearson’s skewness of the
resulting sample. What fraction of households reports a taxable
income below the mean? How do the results depend on the assumed
upper bound?
Think Bayes
Think Python
Think Stats
Think Complexity | http://greenteapress.com/thinkstats2/html/thinkstats2007.html | CC-MAIN-2017-47 | refinedweb | 2,852 | 58.79 |
So i just started learning java in treehouse.com ive done the basics but when i try everything ive learnt there on the intellij idea ide nothing works.
I've already searched the internet trying to find a solution and I still cant find it, yea I've tried system.out but ive seen other people do it with just typing console too and if I try system.out on readline it doesnt work im literally confused.
I'm used to using the
console.println("Hello world!");
String name=console.readln("whats yo name: ");
package com.company;
import java.io.Console;
public class Main
{
public static void main(String[] args)
{
console.println("dada");
}
}
Console console = new Console()
This code won't work on IntelliJ.
To take input in an IDE use the following code.
Scanner sc = new Scanner(System.in) String in = sc.nextLine(); System.out.println(in);
You can also use the older way,
BufferedReader br = new BufferedReader(new InputStreamReader(System.in))); String in = br.readLine(); | https://codedump.io/share/NqW6ivWbSKZl/1/so-i-just-started-learning-java-in-treehousecom-ive-done-the-basics-but-when-i-try-everything-ive-learnt-there-on-the-intellij-idea-ide-nothing-works | CC-MAIN-2017-51 | refinedweb | 167 | 62.04 |
[Recap: Last night, Bob was complaining that the order that services were started and stopped was undefined, saying Bob> well if you have service A dependent on service B Bob> and you don't want service A to actually start service B Bob> like let's say you have a database service or something Bob> and you have another service that wants to persist stuff using the Bob> database service so he made the order in which services are started defined as the order in which they were added to the service collection, and the reverse for stopping.] Bob, I fear this solution is too fragile if your services really depend on one another. While it will work for now when installing services is something that happens once in mktap, I fear it'll break down with more dynamic service management. Example:: application.addService("dataStore", StupidStorage()) application.addService("dataMaker", FooService()) # ...later, decide we have outgrown StupidStorage: application.removeService("dataStore") application.addService("dataStore", RealStorage()) With your ordering, the new dataStore will now stop before dataMaker does. Instead of introducing ordering dependencies into the flat top-level service namespace, I think the right answer to your problem is to do as Chris suggested: Use a hierarchical structure by making dataStore a MultiService, and adding any dataMaker services at its children. That way your dataStore can always make sure its children shut down before it does. Does this work for you? If so, I would like to revert your changes making Application.services an OrderedDict before the (imminent!) release, just because anything that requires a change to the persistnceVersion is a big potential spot for trouble. We have no inter-version testing framework at all. (This whole thing would have passed without comment had I not encountered an error in upgradeVersion13 in the acceptance tests. I *think* I fixed that bug, but I haven't yet found out what triggered it in the first place (it *should* have been *generated* with version 13, why was it upgrading?), and who knows what other uglies are hiding in the dark corners of application persistence.) - Kevin -- The moon is first quarter, 46.3% illuminated, 7.0 days old. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 189 bytes Desc: This is a digitally signed message part Url : | http://twistedmatrix.com/pipermail/twisted-python/2003-April/003514.html | CC-MAIN-2017-30 | refinedweb | 389 | 53.61 |
Unaffordable Country In Apache Spark
Back in September last year, the Guardian published a fantastic visualisation looking at house price affordability in the United Kingdom. They took the Prices Paid data from the Land Registry and computed some descriptive statistics about it, such as the median and range.
The raw data is easily available from data.gov.uk: they provide monthly and annual extracts as well as the complete history, allowing you to work with a reasonably sized set before running on the complete data set.
Recreating the Guardian’s data process within Apache Spark felt like a great way to get an introduction into the platform.
What Is Apache Spark
Spark is one of the most common platforms used for large-scale data processing today. It builds upon the MapReduce programming model introduced by Hadoop, but it is significantly faster than Hadoop (up to 100 times) because it performs operations in memory, avoiding slow disk I/O.
It is a general-purpose platform: you can clean, process and analyse data all within Spark. It has connectivity to various data storage platforms and can cope with either structured data (for example SQL data via JDBC) or unstructured data (such as text files in HDFS). It is designed to handle large-scale data, way beyond what a single machine can store. It integrates with Hadoop easily and can use Hadoop's YARN system to find and control computation nodes in the network.
There is support for Java, Scala, Python and R, which means you can quickly get up and running if you are familiar with any of these languages. For all but Java, there is also a REPL-style environment. In this post, I will be using Python with Spark and a Jupyter notebook as an interactive environment to experiment with the data, but the commands are common across the different languages.
Spark has two ways of looking at data at each node: either as an RDD (Resilient Distributed Dataset) or as a DataFrame. For this post, I am only looking at RDDs. An RDD is a fundamental data structure of Spark. It is an immutable, partitioned set of data, and it can contain any Java, Scala or Python objects (including custom classes). RDDs carry the lineage of the data (the steps used to create them), which is what makes them resilient: they can easily be recreated. An RDD can either be a basic table of objects or a set of key/value pairs. There are special functions for working with key-based RDDs which provide great functionality and power (e.g. `reduceByKey` and `groupByKey`).
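To make the distinction concrete, here is a minimal sketch (the data is made up, and `sc` is assumed to be an active SparkContext, as set up later in this post):

```python
pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3)])

# reduceByKey merges the values for each key using the supplied function
pairs.reduceByKey(lambda x, y: x + y).collect()   # [('a', 4), ('b', 2)] (order may vary)

# groupByKey instead gathers all of the values for each key into an iterable
pairs.groupByKey().mapValues(list).collect()      # [('a', [1, 3]), ('b', [2])]
```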
The lineage also allows for lazy evaluation; in other words, nothing is evaluated until a result is needed. Spark handles this by having two types of functions: transformations and actions. Transformations do not cause the evaluation of an RDD but instead reshape the input RDD into a new RDD. A simple example of a transformation would be a `map` extracting a couple of values, or a `filter` selecting a subset of rows. Actions cause the RDD to be evaluated and return a result. A simple example would be the `count` function, which returns the number of rows in the RDD. One interesting thing is that while `reduce` is itself an action returning a single item, `reduceByKey` is a transformation returning a new RDD of keys and values.
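A quick way to see this laziness for yourself in a notebook session (the numbers here are an arbitrary example):

```python
nums = sc.parallelize(range(10))
evens = nums.filter(lambda n: n % 2 == 0)   # transformation: only records lineage, no work yet
doubled = evens.map(lambda n: n * 2)        # transformation: still nothing has run
doubled.count()                             # action: the whole chain executes now, returning 5
```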
The MapReduce model
The first part of the process is ‘ingesting’ the data from a data store. This data is then partitioned and passed into different computation nodes to process. If you take the simple case of reading a flat file from the filesystem, this means just reading in multiple blocks.
These partitioned blocks of data then go through the 'map' part of the process. This layer might do things like filtering, restructuring or sorting the data. Following this, the resulting mapped data may need to be redistributed between nodes to allow for the next stage of the computation. This 'shuffle' of the data is the slowest part of the process, as it involves data leaving one node and moving to another, which will generally be a different computer.
The final stage in the MapReduce process is to 'reduce' the data to produce a useful result set. These are summary operations, such as counting the number of records or computing averages. The reduce process can have multiple layers, with nodes computing intermediate results before passing them on to be aggregated into the final result set.
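To make the three stages concrete, here is the classic word-count job as a hypothetical PySpark sketch (the file name is made up): the `flatMap` and `map` calls are the map stage, the shuffle happens inside `reduceByKey`, and `take` is the action that finally triggers the computation:

```python
lines = sc.textFile("some-text-file.txt")          # ingest: the file is read in partitioned blocks
words = lines.flatMap(lambda line: line.split())   # map: one record per word
pairs = words.map(lambda word: (word, 1))          # map: reshape into key/value pairs
counts = pairs.reduceByKey(lambda a, b: a + b)     # shuffle, then reduce per key
counts.take(10)                                    # action: evaluate the chain and return 10 results
```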
Installing Spark
First, we need to install some prerequisites. Spark itself needs a Java VM to run; you can download the current version from the Java home page. We will be using Python for this tutorial. I chose version 3.x, but everything works in 2.x as well. To use a Jupyter notebook as a development environment, you also need to install that; I chose the Anaconda Python distribution, which includes everything I needed (including the notebooks).
For this guide, we won’t be using Hadoop and will just be running a local instance of Spark. You can hence download whichever version of Spark you like from the download page. Once you have downloaded it,
extract the file to a location you are happy to run it from, I used
C:\Spark. We now need to set up some environment variables. First, add a new environment variable called
SPARK_HOME and set it to
the location you extracted Spark to. Next, add
%SPARK_HOME%\bin to the
Path variable.
If you have both Python 2 and 3 installed on the same machine, you will need to tell Spark to use Python 3. This can be done by another environment variable
PYSPARK_DRIVER and setting it to the command
to run Python 3 (e.g.
SET PYSPARK_DRIVER=python3).
To run on Windows, we need to resolve an issue to do with a permission error for Hive. To fix this:
- Download winutils.exe and save it to somewhere like
C:\Hadoop\bin.
- Create a new environment variable
HADOOP_HOMEpointing at
C:\Hadoop.
- Add an entry to the
Pathvariable equal to
%HADOOP_HOME%\bin.
- Make a new directory
C:\Tmp\Hive.
- In a console window run
winutil chmod -R 777 \Tmp\Hive.
Now to test we are all set up. Open a new console window and enter the command
pyspark. This should launch a new Python based Spark console session. We can type
sc in and check that the variable has
been set to a Spark Context:
Finally, we now want to tell Spark to use the Jupyter notebook so we can experiment. To do this we need to set two more environment variables. The first
PYSPARK_DRIVER_PYTHON should be set to
jupyter
to tell Spark to run the notebook command. The second
PYSPARK_DRIVER_PYTHON_OPTS needs to be set
notebook. Now if we run
pyspark, we will get an interactive notebook session in a browser:
While the instructions above are based on a Windows process, the same instructions will configure a Mac to run it as well. You shouldn’t remove python 2.x! You will need to add the environment variables
to
~./bashrc file:
EXPORT SPARK_HOME = /usr/local/spark EXPORT PATH = $PATH:/usr/local/spark/bin EXPORT PYSPARK_DRIVER = python3 EXPORT PYSPARK_DRIVER_PYTHON = jupyter EXPORT PYSPARK_DRIVER_PYTHON_OPTS = notebook
Reading and Parsing the Raw Data
The data file from the Land Registry is just a plain CSV file:
"{3E0330EF-67CA-8D89-E050-A8C062052140}","112000","2006-05-22 00:00","MK13 7QS","F","N","L","HOME RIDINGS HOUSE","13","FLINTERGILL COURT","HEELANDS","MILTON KEYNES","MILTON KEYNES","MILTON KEYNES","A","A" "{3E0330EF-7707-8D89-E050-A8C062052140}","900000","2006-06-29 00:00","CH3 7QN","S","N","F","CHURCH MANOR","","VILLAGE ROAD","WAVERTON","CHESTER","CHESHIRE WEST AND CHESTER","CHESHIRE WEST AND CHESTER","A","A" "{3E0330EF-A324-8D89-E050-A8C062052140}","250000","2006-07-07 00:00","DE6 3DE","T","N","F","DALE ABBEY HOUSE","","","LONGFORD","ASHBOURNE","DERBYSHIRE DALES","DERBYSHIRE","A","A" "{3E0330EF-BF0B-8D89-E050-A8C062052140}","157000","2006-12-01 00:00","M25 1HF","T","N","F","9A","","HEATON STREET","PRESTWICH","MANCHESTER","BURY","GREATER MANCHESTER","A","A" "{3E0330F0-16DA-8D89-E050-A8C062052140}","326500","2006-11-24 00:00","SW6 1LJ","F","N","L","60","","ANSELM ROAD","","LONDON","HAMMERSMITH AND FULHAM","GREATER LONDON","A","A" ...
Each field in the file is stored as a text value surrounded by quotes. They also don’t store the header in the files but details can be found in the details provided. The first task is to read the raw text file
into an RDD. This is very straight forward using
sc.textFile(FileName) and we can then verify the content by checking the first 5 lines using
take(5). It is worth noting that prior to calling
take, Spark
won’t actually have done any work.
For each line in the text file, we want to break it into an array of value and then convert from this to a dictionary attaching a header. The small script below shows one way to do this using the
map function
combined with Python lambda functions:
header = ['Transaction unique identifier','Price','Date of Transfer','Postcode','Property Type','Old/New','Duration', \ 'PAON','SAON','Street','Locality','Town/City','District','County','PPDCategory Type'] data = sc.textFile(r'C:\Downloads\pp-monthly-update-new-version.csv') \ .map(lambda line: line.strip('"').split('","')) \ .map(lambda array: dict(zip(header, array))) data.take(5)
I only want to deal with the ‘outward code’ part of the Postcode (i.e. the part before the space) and for simplicity at this stage I am going to remove records which don’t have a postcode. As the intention is
to run this over the entire dataset from 1995, I will also need the year. As I only need the year, I can just read the first four characters of the date and avoid parsing into a Python date object. Finally, I
want to create a key based RDD. All you need to do for this within Python in Spark is return tuples rather than values. I went for a simple
(year)_(postcode) for the key, with the price as the value. The
function for the data now becomes:
indexPostcode = 3 indexPrice = 1 indexDate = 2 data = sc.textFile(r'C:\Downloads\pp-monthly-update-new-version.csv')\ .map(lambda line: line.strip('"').split('","'))\ .filter(lambda d: d[indexPostcode] != '') \ .map(lambda d: (d[indexDate][0:4] + '_' + d[indexPostcode].split(' ')[0], int(d[indexPrice])))
Computing the statistics
At this point, I have a dataset shaped how I want and with keys as I wanted. In other words, we have done the Map part of the process. I now wanted to look at some basic statistics. Taking a look first at
the total count of all records and the counts by key. Unlike virtually all the other
byKey methods,
countByKey is itself an action returning a dictionary rather than an RDD. I also wanted to
look at the range of the price. Computing the maximum and minimum value can easily be done using the
reduceByKey transformation and the reading with an action such as
collect (which gets all the values from
the RDD) to see the values. The block of code below shows the calculation of these four statistics:
totalCount = data.count() countsByKeyDict = data.countByKey() maxByKey = data.reduceByKey(max) minByKey = data.reduceByKey(min)
To compute the mean and standard deviation, you need to compute the total of all the values and the sum of prices squared. Again, this can be done using the
reduceByKey but this time I need to provide a
bespoke function to do the computation. Python lambda syntax is particularly suited to this simple computation. For the sum of the squared value, the
map function is used to compute the squared
value before running
reduceByKey. Note that when using
map with a keyed RDD, the function will be passed a tuple of the key and value. I also need to be able to interact with the counts, again this
can be done using
map and
reduceByKey. Finally, to join the values together, you need to use
join to look up one value from one RDD into another based on the key. Combined with
map this can be used
to compute the mean and standard deviation. The code below will create RDDs capable of producing all of the basic statistics:
import math countByKey = data.map(lambda kvp: (kvp[0], 1)).reduceByKey(lambda a,b: a + b) maxByKey = data.reduceByKey(max) minByKey = data.reduceByKey(min) totalByKey = data.reduceByKey(lambda a,b: a + b) sumSqByKey = data.map(lambda kvp: (kvp[0], kvp[1]**2)).reduceByKey(lambda a,b: a + b) mean = totalByKey.join(countByKey).map(lambda kvp: (kvp[0], kvp[1][0] / kvp[1][1])) avgSquare = sumSqByKey.join(countByKey).map(lambda kvp: (kvp[0], kvp[1][0] / kvp[1][1])) stDev = avgSquare.join(mean).map(lambda kvp: (kvp[0], math.sqrt(kvp[1][0] - kvp[1][1]**2)))
All of these statistics can be computed in a single pass together. We need to use the
aggregateByKey function to do this. This function takes 3 parameters. The first is the value to initiate the aggregation
process with. The second is a function argument which takes the current aggregate value (or the initial value) and a single value from the RDD and then computes the new value of the aggregate. For each key,
this function is called for every value within a computation node to compute the aggregate value. If a key is split across multiple nodes, then this aggregate is passed to the final parameter. This is a function
argument which takes two aggregate value and merges them. This will be called repeatedly until a final single aggregate for the key is computed. This final function will not be called for a key, if all of its
values are within a single node.
As a simple example, the code below computes the mean of the price using
aggregateByKey. As it moved down the RDD records within each key, it aggregates them into an array containing the count and the total.
The mean is then computed from the final aggregate array for each key using a
map function.
mean = data.aggregateByKey([0, 0],\ lambda c,v: [c[0] + 1, c[1] + v],\ lambda a,b: [a[0] + b[0], a[1] + b[1]])\ .map(lambda kvp: (kvp[0], kvp[1][1] / kvp[1][0]))
For computing all of the statistics, I extend the above approach to be an array of 5 values: Count, Sum, Sum of Square, Max and Min. I find it cleaner to move away from the lambda syntax at this point and move to defining functions for each of the steps. The code below computes all of the above statistics and returns them as a dictionary:
import math initialAggregate = [0, 0, 0, 10000000000, 0] def addValue(current, value): return [ current[0] + 1, current[1] + value, current[2] + value ** 2, min(current[3], value), max(current[4], value)] def mergeAggregates(a, b): return [ a[0] + b[0], a[1] + b[1], a[2] + b[2], min(a[3], b[3]), max(a[4], b[4])] header = ['Count', 'Mean', 'StDev', 'Min', 'Max'] def aggregateToArray(a): return [a[0], a[1] / a[0], math.sqrt(a[2] / a[0] - (a[1] / a[0]) ** 2), a[3], a[4]] stats = data.aggregateByKey(initialAggregate, addValue, mergeAggregates)\ .map(lambda kvp: (kvp[0], dict(zip(header, aggregateToArray(kvp[1])))))
If you would rather use a Python class for this, there is a limitation that the PySpark cannot
pickle a class in the main script file. If you place the implementation in a separate module, then you will be able
to use it. While this is quite straight forward as a Spark Job, it is a restriction to work around within the REPL environment.
The final statistic I want to compute, is the median. While for very large datasets, we won’t be able to use a straight forward approach, the price paid data is small enough to use a simple
groupByKey method.
This method groups together all the values for a key into an array. We can then use the
map function on the array to compute the median. The limitation of this approach is that it is possible you won’t be able
to store all the values for a key in a single node in which case an out of memory exception will occur. It also requires a large amount of data being moved between the nodes. However, for this simple case the code
looks like:
import statistics medians = data.groupByKey()\ .map(lambda kvp: (kvp[0], statistics.median(kvp[1])))
We now have all the statistics needed. The last task is to join it all back together and output the results. The
join command easily allows us to join the median to the other statistics. In order to write it
out to a CSV file, we need to join the partitions back together. We can use the
repartition function to either increase or decrease the number of partitions. In this case I want to reduce to a single partition.
The code below adds a header row, creates an array of values from the statistics and converts to a comma separated string, and finally writes to a CSV file within the specified folder (
saveAsTextFile):
import copy def mergeStats(dict, median): output = copy.copy(dict) output["Median"] = median return output allStats = stats.join(medians).map(lambda kvp: (kvp[0], mergeStats(kvp[1][0], kvp[1][1]))) outputHeader = ['Count', 'Mean', 'StDev', 'Median', 'Min', 'Max'] csvData = allStats\ .map(lambda kvp: kvp[0][0:4] + ',' + kvp[0][5:] + ',' + ",".join(map(str, map(lambda k: kvp[1][k], outputHeader)))) sc.parallelize(['Year,Postcode,' + ",".join(outputHeader)])\ .union(csvData)\ .repartition(1)\ .saveAsTextFile(r'C:\Downloads\pricePaidStatistics')
Running this process produces the output below:
Creating a Spark Job
To convert this from a REPL script to a Spark Job we can run needs a little wrapping. The code below will set up the Spark context and allow you to run it using
spark-submit command:
from pyspark import SparkConf, SparkContext def main(sc): #Insert Data Code Here if __name__ == "__main__": conf = SparkConf().setAppName("APPNAME") # Update APPNAME conf = conf.setMaster("local[*]") sc = SparkContext(conf=conf) main(sc)
Once you have put together the complete script you can then run it at the command line. You need to unset the
PYSPARK_DRIVER_PYTHON and the
PYSPARK_DRIVER_PYTHON_OPTS before running the
spark-submit command:
set PYSPARK_DRIVER_PYTHON= set PYSPARK_DRIVER_PYTHON_OPTS= spark-submit spark_pricesPaid.py
This will produce a lot of log messages:
When you run a process within Spark, it automatically creates a web based UI you can use to monitor what is going. This is true in either the REPL environment or when running as a Spark job. The arrow shows the log message indicating the URL. It will be the first free port after 4040. It has some great features and is worth exploring. The screen shot below show the DAG for the process created in this post.
What Next
Hopefully this has given you a taste of the power of Spark. It is a fantastic platform for data analytics and has a huge community supporting it. There are extensions for Machine Learning and for Streaming. It is easy to get started and produce some results quickly. | http://blog.scottlogic.com/2016/12/19/spark-unaffordable-britain.html | CC-MAIN-2017-17 | refinedweb | 3,225 | 64.3 |
Integrating KubeAssert with KUTTL
KubeAssert is a kubectl plugin used to make assertions against resources on your Kubernetes cluster from command line. It is an open source project that I created on GitHub.
As the last post of KubeAssert series, in this post, I will share with you how to combine KubeAssert with KUTTL, a tool that provides a declarative approach using YAML to test Kubernetes.
About KUTTL
The KUbernetes Test TooL (KUTTL) is a tool which provides a declarative way using YAML to test Kubernetes. cluster setup. For more information on KUTTL, please go to check its website.
Combine KUTTL with KubeAssert
In KUTTL, test assert is written in YAML which can match specific objects by name as well as match any object that matches a defined state. If an object has a name set, then KUTTL will look specifically for that object to exist and verify its state matches what is defined in assert file. For example, if the file has:
apiVersion: v1
kind: Pod
metadata:
name: my-pod
status:
phase: Successful
Then KUTTL will wait for a pod whose name is
my-pod in the test namespace to have
status.phase=Successful.
However, it is too limited to make assertion like this. For example, by default, it is hard to use KUTTL to assert things such as pod restarts count should be less than a value, or there should be no pod that keeps terminating, and so on. This is where KubeAssert comes into play!
Fortunately, start from v0.9.0, KUTTLE allows users to specify commands or scripts in assert file to assert status. It gives us opportunity to combine KUTTL with KubeAssert to write much more powerful assertions against Kubernetes resources.
Writing Your First Test using KUTTL and KubeAssert
Let’s revisit the “Writing Your First Test" on KUTTL website and see how it can be modified to use KubeAssert when you write assertions.
Create a Test Case
Let’s create the directory
tests/e2e for our test suite and the sub-directory
example-test for the test case:
mkdir -p tests/e2e/example-test
Next, let’s create the test step
00-install.yaml in
tests/e2e/example-test/ to create the deployment
example-deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
Then, create the test assert
tests/e2e/example-test/00-assert.yaml
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
- command: kubectl assert exist-enhanced deployment example-deployment -n $NAMESPACE --field-selector status.readyReplicas=3
Here we use TestAssert with a command using KubeAssert to assert the test step is finished if
status.readyReplicas of deployment
example-deployment is
3. Please note the use of
$NAMESPACE. It is provided by KUTTL to indicate which namespace KUTTL is running the test under.
Write a Second Test Step
In the second step, we increase the number of replicas on the deployment we created from 3 to 4. It is defined in
tests/e2e/example-test/01-scale.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deployment
spec:
replicas: 4
The assert for it in
tests/e2e/example-test/01-assert.yaml using KubeAssert:
apiVersion: kuttl.dev/v1beta1
kind: TestAssert
commands:
- command: kubectl assert exist-enhanced deployment example-deployment -n $NAMESPACE --field-selector status.readyReplicas=4
The assertion we define is almost the same as above, just the expected value of
status.readyReplicas is changed to 4.
Run the test suite and validate if the test can pass:
kubectl kuttl test — start-kind=true ./tests/e2e/
For more instructions on this sample test, please read the original document on KUTTL website.
Summary
As you can see, to integrate KUTTL with KubeAssert is quite straightforward. The above test only demonstrates some basic capabilities of KubeAssert. You can define more advanced assertions using KubeAssert when you run KUTTL tests.
You can learn more on KubeAssert by reading its online documents. If you like it, you can consider to give star to this project . Also, any contributions such as bug report and code submission are very welcome. | https://morningspace.medium.com/integrating-kubeassert-with-kuttl-2123fabcc386 | CC-MAIN-2022-27 | refinedweb | 696 | 54.63 |
Load Data From Database Using Web API
In this article we will learn about Loading Data From Database in MVC using Web API. We will use Visual Studio 2015 to create a Web API and performs the operations. In this project we are going to create a database and a table called tbl_Subcribers which actually contains a list of data. We will use our normal jQuery ajax to call the Web API, once the data is ready we will format the same in an html table. I hope you will like this.
Download the source code
Background
What is a Web API?
A Web API is a kind of a framework which makes building HTTP services easier than ever. It can be used almost everywhere including wide range of clients, mobile devices, browsers etc. It contains normal MVC features like Model, Controller, Actions, Routing etc. Support all HTTP verbs like POST, GET, DELETE, PUT.
Image Courtesy : blogs.msdn.com
Image Courtesy : forums.asp.net
Using the code
We will create our project in Visual Studio 2015. To create a project click File-> New-> Project.
Create a control
Now we will create a control in our project.
Select Empty API Controller as template.
As you can notice that we have selected Empty API Controller instead of selecting a normal controller. There are few difference between our normal controller and Empty API Controller.
Controller VS Empty API Controller
A controller normally render your views. But an API controller returns the data which is already serialized. A controller action returns JSON() by converting the data. You can get rid of this by using API controller.
Find out more: Controller VS API Controller
Create a model
As you all know, we write logic in a class called model in MVC. So next step what we need to do is creating a model.
Right click on Model and click Add new Items and then class. Name it as Subscribers. We are going to handle the subscriber list who all are subscribed to your website.
Now we will create a Database to our application.
Create Database
Once you created the database, you can see your database in App_Data folder.
Now will add a new table to our database.
You can see the query to create a table below.
CREATE TABLE [dbo].[Table] ( [SubscriberID] INT NOT NULL PRIMARY KEY, [MailID] NVARCHAR(50) NOT NULL, [SubscribedDate] DATETIME2 NOT NULL )
It seems our database is ready now.
The next thing we need to do is to create a ADO.NET Entity Data Model. SO shall we do that? Right click on your model and click on add new item, in the upcoming dialogue, select ADO.NET Entity Data Model.Name that file, Here I have given the name as SP. And in the next steps select the tables, stored procedures, and views you want.
So a new file will be created in your model folder.
Now we will create an ajax call so that you can call the web API. We will use normal Ajax with type GET since we need to just retrieve the data.
A web API control does not return any action result or any json result, so we need to manually do this. We will use the index.cshtml file as our view
We are going to call our web API as follows from the view Index.cshtml.
@section scripts { <script> $(document).ready(function () { $.ajax( { type: 'GET', dataType: 'json', contentType: 'application/json;charset=utf-8', url: '', success: function (data) { try { debugger; var' + val.MailID + '</a>' + '</td><td>' + val.SubscriberID + '</td><td>' + val.SubscribedDate + '</td></tr>'; }); html += '</tbody></table>'; $('#myGrid').html(html); } catch (e) { console.log('Error while formatting the data : ' + e.message) } }, error: function (xhrequest, error, thrownError) { console.log('Error while ajax call: ' + error) } } ); }); </script> }
Once we get the data in the success part of the ajax call we are formulating the data in an HTML table and bind the formatted html to the element myGrid.
<div id="myGrid"></div>
Please be noted that url you give must be correct, or else you will end up with some errors. Your actions won’t work
So we are calling our web api as. Do you remember we have already created a controller? Now we are going back to that. So we need to create an action which returns the total subscribed list from the database, so for that we will write few lines of codes as follows.
public List<tbl_Subscribers> getSubscribers() { try { using (var db = new sibeeshpassionEntities()) { Subscriber sb = new Subscriber(); return (sb.getSubcribers(db).ToList()); } } catch (Exception) { throw; } }
Here Subscriber is our model class, to get the reference of your model class in controller, you need to include the model namespace. We are getting a list of data in tbl_Subcribers type. Now we will concentrate on model class.
You can see the model action codes here.
public List<tbl_Subscribers> getSubcribers(sibeeshpassionEntities sb) { try { if (sb != null) { return sb.tbl_Subscribers.ToList(); } return null; } catch (Exception) { throw; } }
This will return the data which is available in the table tbl_Subcribers in sibeeshpassion DB. It seems everything is set. Now what else we need to do? Yes we need to create some entries in the table. Please see the insertion query here.
INSERT INTO [dbo].[tbl_Subscribers] ([SubscriberID], [MailID], [SubscribedDate]) VALUES (1, N'sibikv4u@gmail.com', N'2015-10-30 00:00:00') INSERT INTO [dbo].[tbl_Subscribers] ([SubscriberID], [MailID], [SubscribedDate]) VALUES (2, N'sibeesh.venu@gmail.com', N'2015-10-29 00:00:00') INSERT INTO [dbo].[tbl_Subscribers] ([SubscriberID], [MailID], [SubscribedDate]) VALUES (3, N'ajaybhasy@gmail.com', N'2015-10-28 00:00:00')
So the data is inserted. Isn’t it?
Do you know?
Like we have RouteConfig.cs in MVC, we have another class file called WebApiConfig.cs in Web API which actually sets the routes.routes.MapRoute( name: "Default", url: "{controller}/{action}/{id}", defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional } );
So shall we run our project and see the output? Before going to run, I suggest you to style the HTML table by applying some CSSs as follows.
<style> table,tr,td,th { border:1px solid #ccc; border-radius:5px; padding:10px; margin:10px; } </style>
If everything goes fine, you will get the output as follows.
That is all. We did it. Have a happy coding.
Conclusion
Did I miss anything that you may think which is needed? Did you try Web API yet? | https://sibeeshpassion.com/load-data-from-database-using-web-api/ | CC-MAIN-2018-51 | refinedweb | 1,069 | 69.07 |
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is Recipe 4.3, “How to define auxiliary class constructors.”
Problem
You want to define one or more auxiliary constructors for a Scala class to give consumers of the class different ways to create object instances.
Solution
Define the auxiliary constructors as methods in the class with the name this. You can define multiple auxiliary constructors, but they must have different signatures (parameter lists). Also, each constructor must call one of the previously defined constructors.
The following example demonstrates a primary constructor and three auxiliary constructors:
// primary constructor class Pizza (var crustSize: Int, var crustType: String) { // one-arg auxiliary constructor def this(crustSize: Int) { this(crustSize, Pizza.DEFAULT_CRUST_TYPE) } // one-arg auxiliary constructor def this(crustType: String) { this(Pizza.DEFAULT_CRUST_SIZE, crustType) } // zero-arg auxiliary constructor def this() { this(Pizza.DEFAULT_CRUST_SIZE, Pizza.DEFAULT_CRUST_TYPE) } override def toString = s"A $crustSize inch pizza with a $crustType crust" } object Pizza { val DEFAULT_CRUST_SIZE = 12 val DEFAULT_CRUST_TYPE = "THIN" }
Given these constructors, the same pizza can be created in the following ways:
val p1 = new Pizza(Pizza.DEFAULT_CRUST_SIZE, Pizza.DEFAULT_CRUST_TYPE) val p2 = new Pizza(Pizza.DEFAULT_CRUST_SIZE) val p3 = new Pizza(Pizza.DEFAULT_CRUST_TYPE) val p4 = new Pizza
Discussion
There are several important points to this recipe:
- Auxiliary constructors are defined by creating methods named
this.
- Each auxiliary constructor must begin with a call to a previously defined constructor.
- Each constructor must have a different signature.
- One constructor calls another constructor with the name
this.
In the example shown, all of the auxiliary constructors call the primary constructor, but this isn’t necessary; an auxiliary constructor just needs to call one of the previously defined constructors. For instance, the auxiliary constructor that takes the
crustType parameter could have been written like this:
def this(crustType: String) { this(Pizza.DEFAULT_CRUST_SIZE) this.crustType = Pizza.DEFAULT_CRUST_TYPE }
Another important part of this example is that the
crustSize and
crustType parameters are declared in the primary constructor. This isn’t necessary, but doing this lets Scala generate the accessor and mutator methods for those parameters for you. You could start to write a similar class as follows, but this approach requires more code:
class Pizza () { var crustSize = 0 var crustType = "" def this(crustSize: Int) { this() this.crustSize = crustSize } def this(crustType: String) { this() this.crustType = crustType } // more constructors here ... override def toString = s"A $crustSize inch pizza with a $crustType crust" }
To summarize, if you want the accessors and mutators to be generated for you, put them in the primary constructor.
Although the approach shown in the Solution is perfectly valid, before creating multiple class constructors like this, take a few moments to read Recipe 4.5, “Providing Default Values for Constructor Parameters”. Using that recipe can often eliminate the need for multiple constructors.
Generating auxiliary constructors for case classes
A case class is a special type of class that generates a lot of boilerplate code for you. Because of the way they work, adding what appears to be an auxiliary constructor to a case class is different than adding an auxiliary constructor to a “regular” class. This is because they’re not really constructors: they’re
apply methods in the companion object of the class.
To demonstrate this, assume that you start with this case class in a file named Person.scala:
// initial case class case class Person (var name: String, var age: Int)
This lets you create a new
Person instance without using the
new keyword, like this:
val p = Person("John Smith", 30)
This appears to be a different form of a constructor, but in fact, it’s a little syntactic sugar — a factory method, to be precise. When you write this line of code:
val p = Person("John Smith", 30)
behind the scenes, the Scala compiler converts it into this:
val p = Person.apply("John Smith", 30)
This is a call to an
apply method in the companion object of the
Person class. You don’t see this, you just see the line that you wrote, but this is how the compiler translates your code. As a result, if you want to add new “constructors” to your case class, you write new
apply methods. (To be clear, the word “constructor” is used loosely here.)
For instance, if you decide that you want to add auxiliary constructors to let you create new
Person instances (a) without specifying any parameters, and (b) by only specifying their name, the solution is to add
apply methods to the companion object of the
Person case class in the Person.scala file:
// the case class case class Person (var name: String, var age: Int) // the companion object object Person { def apply() = new Person("<no name>", 0) def apply(name: String) = new Person(name, 0) }
The following test code demonstrates that this works as desired:
object CaseClassTest extends App { val a = Person() // corresponds to apply() val b = Person("Pam") // corresponds to apply(name: String) val c = Person("William Shatner", 82) println(a) println(b) println(c) // verify the setter methods work a.name = "Leonard Nimoy" a.age = 82 println(a) }
This code results in the following output:
Person(<no name>,0) Person(Pam,0) Person(William Shatner,82) Person(Leonard Nimoy,82)
See Also
- Recipe 6.8, “Creating Scala Object Instances Without Using the new Keyword” demonstrates how to implement the apply method in a companion object so you can create instances of a class without having to use the
newkeyword (or declare your class as a case class)
- Recipe 4.5, “Providing Default Values for Scala Constructor Parameters” demonstrates an approach that can often eliminate the need for auxiliary constructors
- Recipe 4.14, “Generating Boilerplate Code with Scala Case Classes” details the nuts and bolts of how case classes work
The Scala Cookbook
This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly:
You can find the Scala Cookbook at these locations:
Add new comment | https://alvinalexander.com/scala/how-to-define-auxiliary-class-constructors-in-scala | CC-MAIN-2019-30 | refinedweb | 986 | 50.77 |
Enabling Time To Live
This section describes how to use the DynamoDB console or CLI to enable Time To Live. To use the API instead, see Amazon DynamoDB API Reference.
Enable Time To Live (console)
To enable Time To Live using the DynamoDB console:
Open the DynamoDB console at.
Choose Tables and then choose the table that you want to modify.
In Table details, next to TTL attribute, choose Manage TTL.
In the Manage TTL dialog box, choose Enable TTL and then type the TTL attribute name.
There are three settings in Manage TTL:
Enable TTL – Choose this to either enable or disable TTL on the table. It may take up to one hour for the change to fully process.
TTL Attribute – The name of the DynamoDB attribute to store the TTL timestamp for items.
24-hour backup streams – Choose this to enable Amazon DynamoDB Streams on the table. For more information about how you can use DynamoDB Streams for backup, see DynamoDB Streams and Time To Live.
(Optional) To preview some of the items that will be deleted when TTL is enabled, choose Run preview.
Warning
This provides you with a sample list of items. It does not provide you with a complete list of items that will be deleted by TTL.
Choose Continue to save the settings and enable TTL.
Now that TTL is enabled, the TTL attribute is marked TTL when you view items in the DynamoDB console.
You can view the date and time that an item will expire by hovering your mouse over the attribute.
Enable Time To Live (CLI)
To enable TTL on the "TTLExample" table:
aws dynamodb update-time-to-live --table-name TTLExample --time-to-live-specification "Enabled=true, AttributeName=ttl"
To describe TTL on the "TTLExample" table:
aws dynamodb describe-time-to-live --table-name TTLExample { "TimeToLiveDescription": { "AttributeName": "ttl", "TimeToLiveStatus": "ENABLED" } }
To add an item to the "TTLExample" table with the Time To Live attribute set using the BASH shell and CLI:
EXP=`date -d '+5 days' +%s` aws dynamodb put-item --table-name "TTLExample" --item '{"id": {"N": "1"}, "ttl": {"N": "'$EXP'"}}'
This example started with the current date and added five days to it to create an expiration time. Then, it converts the expiration time to epoch time format to finally add an item to the "TTLExample" table.
Note
One way to set expiration values for Time To Live is to calculate the number of seconds to add to the expiration time. For example, five days is 432000 seconds. However, it is often preferable to start with a date and work from there.
It is fairly simple to get the current time in epoch time format. For example:
Linux Terminal:
date +%s
Python:
import time; long(time.time())
Java:
System.currentTimeMillis() / 1000L
JavaScript:
Math.floor(Date.now() / 1000) | https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/time-to-live-ttl-how-to.html | CC-MAIN-2018-34 | refinedweb | 468 | 71.04 |
Creating a .NET Core 3.0 F# Console App
Matt Eland
Updated on
・12 min read
Emulating Squirrel Brains in F# with Genetic Algorithms (6 Part Series)
This is part 1 in a new tutorial series on creating a genetic algorithm in F# and .NET Core 3.0.
Learning Goals
This tutorial is focused on creating a new console application and learning some of the basics of F#. By the end of this tutorial you should be able to:
- Understand the basics of F#
- Create a new F# class library
- Create a new F# console application and link it to the class library
- Code a basic console input loop
- Code simple functions and classes in F#
Prerequisites
Before starting, you will need to have Visual Studio 2019 installed (Community edition is free and fine for this purpose). You will also need to make sure the .NET desktop development workload in checked when installing Visual Studio.
Understanding F Sharp
Important Disclaimer: The author is an F# novice. I've read a few books and written a neural net in F# as well as the code from this article series, but otherwise I am very new to the language. This represents my best attempt to share the knowledge I have learned, but does not constitute an authoritative 'best practice' type of model and may include inaccuracies based on incomplete understanding. Still, I want to share what I have learned
F# is a functional programming language that is part of the .NET language family.
The advantage of functional programming languages typically lies in application quality. F# handles nulls better by default and concepts such as discriminated unions and pattern matching make it harder to make mistakes.
Additionally, F#'s syntax is more concise than C#, meaning that it takes significantly fewer lines of code to express the same intent as it does in C# or other languages. Because you have fewer lines of code, it's harder for bugs to hide.
Because F# is part of .NET, it compiles down to IL and runs as part of .NET Framework and .NET Core.
This means that other .NET languages such as C# and VB .NET can interact with F# libraries. This also means that you can mix functional programming and object-oriented programming based on the particular needs of what you're programming.
F# is also not strictly limited to functional programming as F# can be used to create traditional classes following .NET conventions (though the syntax is sometimes very ugly to do so).
Understanding .NET Core 3.0 Console Apps
Console Applications are text-based utilities that run from the command line. They're typically used as part of automated processes or to integrate with other tools.
.NET Core 3.0 console apps operate cross-platform and are not strongly tied to Windows like .NET Framework console apps were.
In this tutorial series, the end result is not going to be a console application, but for the purposes of focusing on the code at first, we'll start with a console application for simplicity.
The Application we'll Create
This is part one of a multi-part series on building a genetic algorithm in F#. The series will feature a 2D game board featuring a squirrel, a dog, an acorn, a tree, and a rabbit. As the series progresses, we'll talk more about what we'll simulate and how it will work as well as what genetic algorithms are.
For now, we'll create a simple console application that generates a game board with a Squirrel somewhere on it, then displays the game board to the user and allows them to regenerate a new game board at random.
In order to do this we'll need:
- A
WorldPostype that stores a 2D location in the game world
- A
Squirrelclass that inherits from an Actor class
- A
Worldclass that arranges the actors in the simulation
- A console application that displays the current
Worldand prompts the user for input, repeating the loop until the user hits
Xto exit
Let's get started.
Setting up the Solution
Our solution will contain two projects initially, a .NET Core console application and a .NET Standard class library.
Create the Console App
In Visual Studio, create a new project. In the new project wizard, change the
Language Type drop down to F# and then look for the
Console App (.NET Core) option as pictured below. Make sure that it lists F# as the language.
Click
Next, name the project whatever you'd like. Whatever name you choose, I recommend you include the name 'ConsoleApp' somewhere in the name to help remember that this is the console application as we will have multiple projects.
Create the .NET Standard Library
In Visual Studio, right click on the solution at the top of your solution explorer and choose
Add > New Project....
From there, select the F# Class Library (.NET Standard) option as pictured below.
Make sure that the option has F# specified as the language and that you select the .NET Standard option, not the .NET Core option.
While .NET Core could work, the advantage of .NET Standard is that it can be referenced from a .NET Framework or .NET Core application. If you ever wanted to add a .NET Framework 4.8 application of some sort, you would be unable to reference a .NET Core class library.
My general rule is that unless I have a compelling reason not to, I will always create .NET Standard class libraries.
Click next and name the class library whatever you'd like, then create it. I recommend ending the name of the library with Logic, Domain, or DomainLogic so that it's clear that this is a class library handling application logic. For the rest of this series, I will refer to this as the domain logic library.
Reference the Class Library from the Console Application
Now that we have our two projects in the same solution, expand the console application in the solution explorer, right click on the
Dependencies node, and click
Add Reference....
On the projects tab, check the box next to the domain logic library you created above and click Ok. This will allow your .NET Core console app to reference logic in the domain logic library.
Adding the Domain Logic
Before we can implement the console application, we'll need to create the classes it references.
Note that in F#, the order of files inside of a project matters. F# will load files from top to bottom, so files at the top cannot reference values defined below them. Visual Studio lets you use alt and the arrow keys to move items up and down in the solution explorer and to add new items above or below an existing project.
The order we'll need for this application is:
- WorldPos
- Actors
- World
Go ahead and create empty F# script files named these things. You may delete or rename the default F# file that the class library starts with.
WorldPos
In the WorldPos file, we'll add the following code:
namespace MattEland.FSharpGeneticAlgorithm.Logic module WorldPos = type WorldPos = {X: int32; Y:int32} let newPos x y = {X = x; Y = y}
Here we're saying that everything belongs in the
MattEland.FSharpGeneticAlgorithm.Logic namespace instead of in a root namespace. This helps keep things organized.
Next, we declare a module called
WorldPos. This will allow other files to open (import) the logic we define here.
Next we define a simple type called
WorldPos that consists of two integer values: X and Y. This compiles down as a simple class, but notice the syntax is incredibly minimal.
Finally, we define a function named
newPos that takes in two parameters named
x and
y. This function will return a new object with an
X and
Y property.
Here's the interesting part: F# interprets return type as a
WorldPos even though no explicit syntax exists declaring this. This is because there is nothing being imported via an open statement that could match the result of
newPos besides the
WorldPos type declared above. If there were, some additional type declarations would be explicitly necessary.
Actors
Next let's look at the actors file:
namespace MattEland.FSharpGeneticAlgorithm.Logic open MattEland.FSharpGeneticAlgorithm.Logic.WorldPos module Actors = [<AbstractClass>] type Actor(pos: WorldPos) = member this.Pos = pos abstract member Character: char type Squirrel(pos: WorldPos, hasAcorn: bool) = inherit Actor(pos) member this.HasAcorn = hasAcorn override this.Character = 'S' let createSquirrel pos = new Squirrel(pos, false)
Like before, we're declaring a namespace and a module, but here we're opening another module, in this case the
WorldPos module we defined earlier.
Next we define an abstract class called
Actor and decorate it with an
AbstractClass attribute telling F# that this type should be implemented abstractly. This style of syntax is frequently needed for object-oriented programming concepts.
We define a constructor on
Actor that takes in a
WorldPos. The class defines a
Pos member that returns the
pos argument from the constructor. Note that
Pos is not implemented as a property and cannot be modified as F# values as defined immutable by default.
Next we define an abstract
Character that will return a .NET
char type.
The Squirrel type works similarly to Actor, but is not abstract. It explicitly inherits
Actor and invokes its constructor. It exposes the
hasAcorn parameter via the
HasAcorn member, and then it overrides the
Character value and represents the squirrel class with the
S character.
For those more familiar with F#, note that I'm choosing to work with abstract classes here instead of the F# concept of a discriminated union because it's easier to have sequences (F# collections) with different types sharing the same base class than it is to have sequences of different members of discriminated unions.
Finally, we expose a
createSquirrel function that creates a new
Squirrel instance at the specified
pos. It is defined without an acorn, which makes the squirrel sad.
World
Okay, so now we're seeing some repetitive patterns in defining members. Let's do something a bit more complex.
namespace MattEland.FSharpGeneticAlgorithm.Logic open System open MattEland.FSharpGeneticAlgorithm.Logic.Actors open MattEland.FSharpGeneticAlgorithm.Logic.WorldPos module World = let getRandomPos(maxX:int32, maxY:int32, random: Random): WorldPos = let x = random.Next(maxX) + 1 let y = random.Next(maxY) + 1 newPos x y let generate (maxX:int32, maxY:int32, random: Random): Actor seq = let pos = getRandomPos(maxX, maxY, random) seq { yield createSquirrel pos } type World (maxX: int32, maxY: int32, random: Random) = let actors = generate(maxX, maxY, random) member this.Actors = actors member this.MaxX = maxX member this.MaxY = maxY member this.GetCharacterAtCell(x, y) = let mutable char = '.' for actor in this.Actors do if actor.Pos.X = x && actor.Pos.Y = y then char <- actor.Character char
Here we start to see a few bits of new syntax.
getRandomPos is defined as a method (note the parentheses). This is important in this instance because otherwise F# will not re-evaluate the results of a call due to a process called memoization. Since we want to get a different random position every time, it's important to include these parentheses.
getRandomPos will declare
x and
y as results of the
System.Random instance, holding on to a location within the game world.
Finally,
getRandomPos will call
newPos to build the position object. Because this is the last line in the method, its return
WorldPos is returned by the method. Note that we do not use explicit
return statements in F#.
generate exposes some new syntax. Instead of single result types, we're now working with sequences, an F# version of an immutable collection that can be iterated over. The
Actor seq syntax indicates that the method will return a sequence of zero to many
Actor instances.
Inside of the
generate method we define a
seq { ... } block. In this block we yield instances of that sequence. For now, we're only including a single Squirrel, but in future parts of this tutorial we will include a wider variety of objects.
Next we define the
World class. This type manages the game board and arrangement of actors within it.
Note that inside of this type definition we declare an
actor variable immediately, then expose that instance via the
Actors member.
The
GetCharacterAtCell method on
World has some interesting syntax.
First,
char is defined as a mutable variable, meaning that it can be assigned a new value to it after its initial assignment. This goes back to F# declaring things as immutable by default and viewing mutability as an anti-pattern to be minimized. The
char <- actor.Character statement later will reassign
char to hold the value to the right of the arrow.
Secondly,
for actor in this.Actors do defines an F# for loop. Note that indentation governs the beginning and ending of the for block and no
end for style syntax is necessary.
Thirdly, we see an example of F# conditional logic in the
if actor.Pos.X = x && actor.Pos.Y = y then statement. This operates very similar to C# other than we do not have parentheses, we use a single
= operator, and the
if statement doesn't include an end-if, just like the
for loop.
Finally, we end the method with a single
char statement to load the
char variable into memory and return it as the last statement in the method.
Building the Console Application
Now that we can see a bit more of how F# logic flows, let's get the console application operational and play around with it.
This is a lot smaller than the domain logic library and will include a collection of helper functions related to dealing with console input and output and a function representing the main entry point in the application and user input loop.
Display Functions
namespace MattEland.FSharpGeneticAlgorithm.ConsoleTestApp open System open MattEland.FSharpGeneticAlgorithm.Logic.World module Display = let printCell char isLastCell = if isLastCell then printfn "%c" char else printf "%c" char let displayWorld (world: World) = printfn "" for y in 1..world.MaxX do for x in 1..world.MaxY do let char = world.GetCharacterAtCell(x, y) printCell char (x = world.MaxX) let getUserInput(): ConsoleKeyInfo = printfn "" printfn "Press R to regenerate or X to exit" Console.ReadKey(true)
printCell is a simple function that will display
char on the console. If
isLastCell is true, then the
printfn method will be used which includes a line break, otherwise
printf will be used which will not move down to the next row. The
"%c" char syntax formats the
char character into the string.
displayWorld uses two nested for loops to loop row by row column by column through the game world by relying on the
MaxX and
MaxY properties on the
World. From there it calls the logic we implemented earlier and then invokes the
printCell method. Note that we enclose
x = world.MaxX in parentheses in order to pass in the boolean result of that evaluation as the
isLastCell parameter.
The
getUserInput method (again, defined as a method to not memoize the results) prompts the user for input, grabs the first key from the keyboard, and returns the result of that call (since it's the last statement of the method).
Main Input Loop
Okay, now the real meat and potatoes of the console application:
open System open MattEland.FSharpGeneticAlgorithm.Logic.World open MattEland.FSharpGeneticAlgorithm.ConsoleTestApp.Display let generateWorld randomizer = new World(8, 8, randomizer) [<EntryPoint>] let main argv = printfn "F# Console Application Tutorial by Matt Eland" let randomizer = new Random() let mutable simulating: bool = true let mutable world = generateWorld(randomizer) while simulating do displayWorld world let key = getUserInput() Console.Clear() match key.Key with | ConsoleKey.X -> simulating <- false | ConsoleKey.R -> world <- generateWorld(randomizer) | _ -> printfn "Invalid input '%c'" key.KeyChar 0 // return an integer exit code
In this final class of ours, we declare a
generateWorld function to keep logic for creating a new
World object in one place.
The
main function is defined as the primary entry point of the application via the
EntryPoint attribute.
Here we define some new variables needed for the core loop, declaring
simulating and
world as mutable as they can change inside the main application loop.
The
while simulating do loop will repeatedly display the state of the world via the
displayWorld function, then grab the user input via
getUserInput, clear the console so that every iteration you only see the world's state.
Finally, we use
match to effectively switch on the
key that was pressed. The
| ... -> syntax indicates a case to match, with logic to execute to the right of the
->.
For example, when the
X key is pressed,
simulating <- false runs which sets
simulating to
false, causing the loop to terminate.
The
| _ -> syntax indicates a default match - matching any case that was not otherwise matched explicitly. In this case, we use it to tell the user they entered something not expected / supported.
The final
0 statement tells the application to terminate and return 0 for a non-error exit code.
The Finished Result
If you run the application, you should be prompted for input and be able to hit
R to regenerate the world and see the position of the squirrel change, or
X to exit the application.
If you can't build or want to look at the source code, check out the
article1 branch on GitHub.
Next Up
Next article I'll expand out the domain logic library to include the other actor types and turn the application into a mini-game where the player controls the squirrel. This will set us up for later articles where we will create a genetic algorithm and neural network to control the squirrel and display the simulation in something nicer than a text-based console application.
Emulating Squirrel Brains in F# with Genetic Algorithms (6 Part Series)
Async, Parallel, Concurrent Explained - Starring Gordon Ramsay
Complex computing concepts simplified
So why are you using f# to do oop style development?
Because I'm still learning. If you look at the next article in the series, I fix a lot of it. | https://dev.to/integerman/creating-a-net-core-3-0-f-console-app-348h | CC-MAIN-2019-51 | refinedweb | 3,022 | 56.35 |
jmorris@intercode.com.au said:> o Fixed fowner race (lockless technique suggested by Alan Cox). This looks broken to me. +static void f_modown(struct file *filp, unsigned long pid,+ uid_t uid, uid_t euid)+{+ filp->f_owner.pid = PID_INVALID;+ wmb();+ filp->f_owner.uid = uid;+ filp->f_owner.euid = euid;+ wmb();+ filp->f_owner.pid = pid;+}@@ -469,6 +491,9 @@ struct task_struct * p; int pid = fown->pid;+ if (!pid || pid == PID_INVALID)+ return;+ This introduces a window within which SIGIO will be dropped. As it stands,this will break UML. Lost SIGIOs will cause UML hangs.If you're determined to avoid spinlocks, why not do something like this:+ if (!pid)+ return;+ while(fown->pid == PID_INVALID) ;maybe with a cpu_relax() in the loop.But that starts looking a lot like a spinlock.Also, shouldn't there be a capable(CAP_KILL) in here rather than a checkfor uid == 0?+static inline int sigio_perm(struct task_struct *p,+ struct fown_struct *fown)+{+ return ((fown->euid == 0) ||+ (fown->euid == p->suid) || (fown->euid == p->uid) ||+ (fown->uid == p->suid) || (fown->uid == p->uid));+}+ Jeff-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2002/8/16/71 | CC-MAIN-2014-52 | refinedweb | 201 | 69.68 |
Types of relationships between classes: is-a, has-a, uses. Examples. Aggregation. Composition
Contents
- 1. What types of relationships exist between classes?
- 2. An example of the simplest type of is-a relationship (inheritance)
- 3. Relationship between classes of has-a type
- Related topics
1. What types of relationships exist between classes?
Two types of relationships are possible between classes:
- 1. Relationship type is-a (is-a relationship). In this case, one class is a subspecies of another class. In other words, one class expands the capabilities of another class. This type is based on the use of the inheritance mechanism.
- 2. Relationships in which one class contains or uses objects of another class. This category is divided into two subtypes:
- 2.1. Relationship of type has-a (has-a relationship). In this case, one or more objects of another class are declared in the class. There is a further division into aggregation and composition. If the nested objects can exist independently of the class (they are not an integral part of it), then this is aggregation. If the nested objects complement the class in such a way that the existence of the class is inconceivable without them, then this is composition (union);
- 2.2. A relation of type uses (class “uses a different class). This is a generalized relation in which different forms of using one class by another are possible. If two classes are declared in a program, then optionally one class must contain an instance of another class. A class can use only some method of another class, a class can access the name of another class (use the name), a class can use the data field of another class, etc. You can read more about using the relationship type uses here.
2. An example of the simplest type of is-a relationship (inheritance)
The example demonstrates the implementation of the is-a relationship. Such a relationship is useful when existing program code (a class) needs to be extended without rewriting it.
Let a class Point be defined that describes a point on the coordinate plane. The following items are implemented in the class:
- internal fields x, y;
- constructor with two parameters;
- a constructor without parameters that initializes the class fields with the coordinates (0; 0);
- X, Y properties to access the internal fields x, y of the class;
- the LengthOrigin() method, which computes the distance from the point (x; y) to the origin;
- method Print(), which displays the value of the fields x, y.
Suppose the Point class now needs to be extended with a new element, color, which defines the color of a point on the coordinate plane. To avoid changing the code of the Point class (sometimes this is impossible), it is enough to implement a new ColorPoint class that inherits (extends) the Point class and adds color to it.
In this example, the inherited ColorPoint class implements elements that complement (extend) the capabilities of the Point class:
- internal hidden field color – color of the point, which is obtained from the Colors enumeration;
- constructor with 3 parameters, initializing the value of the point with coordinates (x; y) and color value;
- property Color that implements access to the internal color field;
- the Print() method, which displays the color value and the x, y coordinates inherited from the Point base class. The method calls the base class method Print() of the same name; since the base method is not virtual, the derived method is declared public with the new modifier so that it correctly hides the base one.
The text of the program is as follows.
using System;
using static System.Console;

namespace ConsoleApp1
{
    // An enumeration defining a color palette,
    // required for use in the ColorPoint class
    enum Colors
    {
        Black = 0, Green = 1, Yellow = 2, Red = 3, Blue = 4
    };

    // Base class Point
    class Point
    {
        // 1. Internal fields of class - coordinates x, y
        private double x, y;

        // 2. Class constructors
        // 2.1. Constructor with 2 parameters - main constructor
        public Point(double x, double y)
        {
            this.x = x;
            this.y = y;
        }

        // 2.2. Constructor without parameters
        public Point() : this(0, 0)
        { }

        // 3. Properties X, Y
        public double X
        {
            get { return x; }
            set { x = value; }
        }

        public double Y
        {
            get { return y; }
            set { y = value; }
        }

        // 4. Method LengthOrigin()
        public double LengthOrigin()
        {
            // Pythagorean theorem
            return Math.Sqrt(x * x + y * y);
        }

        // 5. Method Print()
        public void Print()
        {
            WriteLine($"x = {x}, y = {y}");
        }
    }

    // Inherited class ColorPoint
    class ColorPoint : Point
    {
        // 1. Hidden field - the color of point
        private Colors color;

        // 2. Constructor with 3 parameters
        public ColorPoint(double x, double y, Colors color) : base(x, y)
        {
            this.color = color;
        }

        // 3. Property Color
        public Colors Color
        {
            get { return color; }
            set
            {
                // Validate the assigned value, not the current field
                if (value >= 0)
                    color = value;
                else
                    color = Colors.Black;
            }
        }

        // 4. Method Print().
        // The base Point.Print() is not virtual, so the 'new' modifier
        // is used to hide it; the method must be public to be callable
        // from outside the class.
        public new void Print()
        {
            // Invoke method of base class
            base.Print();

            // Display color
            Write("color = ");
            switch (color)
            {
                case Colors.Black: WriteLine("Black"); break;
                case Colors.Blue: WriteLine("Blue"); break;
                case Colors.Green: WriteLine("Green"); break;
                case Colors.Red: WriteLine("Red"); break;
                case Colors.Yellow: WriteLine("Yellow"); break;
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // Demonstration of Point class
            WriteLine("Demo Point:");
            Point pt = new Point(3, 5);
            pt.Print();

            double len = pt.LengthOrigin();
            WriteLine("LengthOrigin = {0:f2}", len);

            // Demonstration of ColorPoint class
            WriteLine("Demo ColorPoint");
            ColorPoint cp = new ColorPoint(1, 3, Colors.Green);
            cp.Print();
        }
    }
}
The result of the program
Demo Point: x = 3, y = 5 LengthOrigin = 5.83 Demo ColorPoint x = 1, y = 3
⇑
3. Relationship between classes of has-a type
With a has-a relationship, a class contains one or more objects (instances) of another class. There are two varieties of a has-a relationship:
- aggregation. This is the case when one or more nested objects is not part of the class, that is, the class can exist without these objects. A class can contain any number of such objects (even 0). See the examples below;
- composition. In this case, one or more nested objects is part of the class, that is, without these objects, the logical existence of the class itself is impossible.
Examples of classes in which the aggregation approach is implemented:
- the CarPark class can contain arrays (lists) of instances of the classes Car, Motorcycle, Bus. If at any moment in time there will not be a single car in the parking lot, the parking lot will continue to function;
- the Figures class can contain arrays of instances of the classes Rectangle, Triangle, Circle;
- the House class can contain a different number of objects of the Table, TVSet, Bed classes, etc.
Examples of interactions between classes that relate to composition:
- the Car class must contain one instance of the Engine class and four instances of the Wheel class. Instances of the Engine and Wheel classes are an integral part of the Car. If you remove one of these instances, the Car will not function and, as a result, the Car class will not work;
- the House class must contain an instance of the Roof class and four instances of the Wall class;
- the Triangle class (a triangle on the coordinate plane) contains three instances of the Point class.
⇑
3.1. Aggregation for has-a relationship type. Example
In the case of aggregation, a class contains many (one or more) objects of other classes that are not part of this class.
Example. The Figures class contains an array of Point classes and an array of Line classes. The number of elements in arrays can be arbitrary, even equal to 0. This means that the Figures class can exist without existing instances of the Point or Line classes. This type of interaction between classes is called aggregation.
The text of the demo example is as follows.
using System; using static System.Console; namespace ConsoleApp1 { // Aggregation // 1. Class that describes the point class Point { // Internal fields of class public double x; public double y; } // 2. Class that describes a line class Line { public Point pt1 = null; public Point pt2 = null; } // 3. A class that describes an array of figures class Figures { // 1. Internal fields of class public Point[] points; // array of points public Line[] lines; // array of lines // 2. Class constructor public Figures() { points = null; lines = null; } // 3. Method for displaying array items on the screen public void Print() { WriteLine("Array points:"); for (int i = 0; i < points.Length; i++) { WriteLine("x = {0}, y = {1}", points[i].x, points[i].y); } WriteLine("Array lines:"); for (int i=0; i<lines.Length; i++) { WriteLine("pt1.x = {0}, pt1.y = {1}", lines[i].pt1.x, lines[i].pt1.y); WriteLine("pt2.x = {0}, pt2.y = {1}", lines[i].pt2.x, lines[i].pt2.y); } } } class Program { static void Main(string[] args) { // Demonstration of aggregation using the Figures, Point, Line Classes // 1. Create an instance of Figures class Figures fg = new Figures(); // 2. Create an array of 5 points, which are objects of class Point // of instance of Figures class // 2.1. Allocate memory for 5 array items fg.points = new Point[5]; // 2.2. Allocate memory for each array item // and fill it for (int i = 0; i < fg.points.Length; i++) { fg.points[i] = new Point(); fg.points[i].x = i * i; fg.points[i].y = i * i * i; } // 3. Create an array of 3 lines // 3.1. Allocate memory for 3 array items fg.lines = new Line[3]; // 3.2. Allocate memory for each array item // and fill it for (int i = 0; i < fg.lines.Length; i++) { fg.lines[i] = new Line(); fg.lines[i].pt1 = new Point(); fg.lines[i].pt2 = new Point(); fg.lines[i].pt1.x = i; fg.lines[i].pt1.y = i * 2; fg.lines[i].pt2.x = i * 3; fg.lines[i].pt2.y = i * i; } // 4. Display an array of points and lines on the screen fg.Print(); } } }
The result of the program
Array points: x = 0, y = 0 x = 1, y = 1 x = 4, y = 8 x = 9, y = 27 x = 16, y = 64 Array lines: pt1.x = 0, pt1.y = 0 pt2.x = 0, pt2.y = 0 pt1.x = 1, pt1.y = 2 pt2.x = 3, pt2.y = 1 pt1.x = 2, pt1.y = 4 pt2.x = 6, pt2.y = 4
⇑
3.2. Composition for has-a relationship type. Example
Consider the Line class, which describes a line based on two points. Points are described by the Point class. The Line class contains 2 instances of the Point classes. Without these instances (objects), the Line class cannot exist, since both instances form part of the line (extreme points of the line). Thus, both instances of the Point class are part of the Line class. This type of interaction is called a composition or a union.
A fragment of the example is as follows.
... // Composition // 1. Class that describes a point class Point { // Internal fields public double x; public double y; } // 2. A fragment of the example is as follows. class Line { // The internal fields of a class are instances (objects) of the Point class. // Without these fields, the Line class does not make sense, which means // the pt1, pt2 fields complement the Line class (is part of the Line class), // it is a composition. public Point pt1 = null; public Point pt2 = null; } ...
⇑
Related topics
- The relationship between classes of type uses (the class uses another class). Examples
- Inheritance. Basic concepts. Advantages and disadvantages. General form. The simplest examples. Access modifier protected
⇑ | https://www.bestprog.net/en/2020/02/27/c-types-of-relationships-between-classes-is-a-has-a-uses-examples-aggregation-composition/ | CC-MAIN-2022-27 | refinedweb | 1,868 | 67.86 |
Search the Community
Showing results for 'barba'.
GSAP 3 Scroll Trigger Issue With BarbaJS
adamoc posted a topic in GSAPHi there, okay so thank you @ZachSaucier for your advice on what to do next with my situation. So I've made a little test , minimal and I think complete which shows the difference in the reaction of scrollTrigger with and without BarbaJS enabled on the page. So I have a homepage which has horizontal scroll perfectly available and working then I include a button to click the about page , click the button and brought to the about page (same animation but doesn't give you the same result) only able to scroll halfway across and not able to reach the footer. Wondering why this is the case and what can I do to combat the differences in the adjustment of the dimensions of the page made by Barba. Here's the site - Here's the code - index.html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="style.css"> <script src=""></script> <script src=""></script> <script src=""></script> <script defer</script> <title>Document</title> </head> <body> <div class="container"> <div class="page one">One</div> <div class="page two">Two</div> <div class="page three">Three</div> </div> <div class="about_btn_container"> <a href="about.html">About</a> </div> <footer> All Rights Reserved © Adamoc 2020 </footer> </body> </html> about.html <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="style.css"> <script src=""></script> <script src=""></script> <script src=""></script> <script defer</script> <script defer</script> <title>Document</title> </head> <body data- <div class="container" data- <div class="page one">One</div> <div class="page two">Two</div> <div class="page three">Three</div> </div> <div class="about_btn_container"> <a href="about.html">About</a> </div> <footer> All Rights Reserved © Adamoc 2020 </footer> </body> </html> main.js const scrollAnimation = () => { let pages = [...document.querySelectorAll('.page')] gsap.to(pages, { xPercent: -100 * (pages.length - 1), ease: "none", scrollTrigger: { trigger: ".container", pin: true, markers: true, scrub: 1, snap: 1 / (pages.length - 1), // base vertical scrolling on how wide the container is so it feels more natural. end: () => "+=" + document.querySelector(".container").offsetWidth } }); } scrollAnimation() barba.init({ sync: true, transitions: [{ name: 'transition-base', async leave() { const done = this.async(); await delay(1000); done(); }, async enter() { window.scrollTo(0, 0); }, }], views: [ { namespace: 'about', beforeEnter(data) { }, afterEnter() { scrollAnimation(); }, } ], }); Cheers Adam
GSAP 3 Scroll Trigger Issue With BarbaJS
austenhart replied to adamoc's topic in GSAPFor any future readers, I ran into a very similar problem as Adam and was able to solve it with ScrollTrigger’s getAll() and kill() functions. Added to Barba’s afterEnter hook. My ‘cleanGSAP’ function looked like this: const cleanGSAP = () => { let existingScrollTriggers = ScrollTrigger.getAll(); for (let index = 0; index < existingScrollTriggers.length; index++) { const singleTrigger = existingScrollTriggers[index]; singleTrigger.kill(false); } ScrollTrigger.refresh(); window.dispatchEvent(new Event("resize")); };; };
Help with Slide or swipe transitions with GSAP + Barba JS
GreenSock replied to sscash's topic in GSAPI read your question a few times but I still don't quite understand what you're asking. We don't support 3rd party tools like Barba.js in these forums typically (I have zero experience with it), but we'd be happy to answer any GSAP-specific questions.?
Re-running script using Barba.JS
akapowl replied to jakob zabala's topic in GSAPHey @jakob zabala It really depends on some things how to do what best with barba. If you navigation for example is not part of the content that is being exchanged by barba but appears on every other page you won't have to re-initialize it since it will stay in the DOM and will still be accessable by the functions you initialized on page load (or whenever in that realm). This sounds like the right apporoach to me for everything that needs to be re-initialized. These forums really try to stay more focussed on GSAP related questions. But there are quite some threads in combination with barba and ScrollTrigger in these forums - maybe you can take some of those as inspiration (using the search in the upper right area of the page). Here is one of those I also remember one earlier thread where a user has posted his code as an example One other recommendation would be the learning resources by @ihatetomatoes that he linked to himself in this thread here Barba also has a really great and responsive slack-channel where you can find all sorts of help. You can find an invite link on top of that page here: Hope altogether this will help you get further with your barba-project(s) - it sure did help me
Only animate elements visible in the viewport
Perdixo75 posted a topic in GSAPI've been using GSAP for a couple of month now and i really really enjoy it! Right now, i'm working on my portfolio using Barba.js for page transitions. I have a little question that might sound weird for a non-beginner but here it is : Let's say i have a long page with a lot of content. How do you animate only the elements that are visible in the viewport when leaving the page ?
Only animate elements visible in the viewport
akapowl replied to Perdixo75's topic in GSAPHey @Perdixo75 That is really more of a general JS and barba-related question and usually these forums try to stay focussed on things that are directly related to GSAP. What you would have to do before animating is checking for which of the elements that you want to animate are in view currently - for example with a helper function similar to the one explained here That could then look something like this Of course you would have to tweek it to your liking, because as of now it will only trigger on elements that are completely in view. If you have any other questions directly related to GSAP, we'll be happy to help. Happy tweening. Edit: Of course you could also utilize GSAP's ScrollTrigger to handle the in-view-checking like maybe so
ScrollTrigger Not working after Barba Transition
akapowl replied to sixtillnine's topic in GSAPHello @Jloafs Many have had their issues with barba resolved - for example those, that I linked to above. And as also mentioned above: You will [(likely)] [linked to].
ScrollTrigger Not working after Barba Transition
Jloafs replied to sixtillnine's topic in GSAPI'm having the same problem with Barba and scrolltrigger but don't want to abandon Barba. Has anyone had that same problem and found a solution? I'm new to gsap and javascript too btw so not an advanced user by any stretch of the imagination
Custom mouse lags after various page loads
Rocha posted a topic in GSAPHi there, I noticed that after going from one page to another the custom mouse starts to lag. At first it's hard to notice but as you load 4 or 5 pages the mouse doesn't flow as smoothly as on the first page load. Am I missing anything that is making the mouse lags? I'm using Barba JS on the site so I'm not sure if I have to kill and restart the mouse function every time I transition between pages? Here's the link to my project: Thanks
Mouse Hover Image Reveal + Background Color Change
mrntld posted a topic in GSAPHi, guys! I'm relatively new to the GSAP world and this is my first post over here. I'm trying to make something that I feel it's super simple, it would be an effect similar to's hover effect (image reveal, following the mouse position, background color change) + the transition to its internal page (I'm assuming it involves barba.js, right?) Any tips will be much appreciated!
Mouse Hover Image Reveal + Background Color Change
PointC replied to mrntld's topic in GSAPHi @mrntld Welcome to the forum. We have several threads about follow by mouse. Here's a good one. The background color change should be fairly straightforward. If you're just getting started with GSAP, this is the pace to begin. I'm not sure if they're using barba.js, but that's probably a good guess. Our very own @Ihatetomatoes has a bunch of videos on that topic. If you have any GSAP specific questions as you work on your project, we're happy to help. A demo will give you the best chance at a detailed answer. More info about that. Happy tweening and welcome aboard.
- Hi, I was also having the same issue with scrollTrigger markers being pushed down after going back to the original page using barba transition, which leads to undesired behaviors on my animation. What I found is that the elements top position in the barba-container (the one barba switch between transitions) is pushed down, you can see that by logging getBoundingClientRect() in barba hooks and thus, leads to the pushed down markers. I guess scrollTrigger calculates the start and end positions based on these values under the hood (correct me if I'm wrong). One of the possible cause I found in barba docs is that during the transition, both containers of the previous and current page will exist at the same time on the DOM, so the elements in the current page may be pushed down by the elements in the previous page. Barba suggest to use data.current.container.remove() to manually remove the previous container and fix this issues. I used it in the afterLeave hook and it works just fine. Basically, I don't need to call kill() or refresh() before re-initialize the scrollTrigger and also I think it can works if you re-initalize it in afterEnter() hook as well and not neccessary have to be beforeEnter(). The relevant section in barba docs can be found here: Barba. P/s: sorry for my bad English, it is not my first language and I'm still learning it.
- two days later I did it. I wonder what it didn’t work .... maybe I made a mistake somewhere before! I've already seen such a solution on the forum. but can anyone come in handy barba.hooks.beforeLeave(() => { ScrollTrigger.getAll().forEach(trigger => trigger.kill()) }); barba.hooks.after(() => { initScrolling(); });
- sorry. I mean they are not deleted in html if you look in devtool. I've tried all the barba hooks but the triggers remain. that's why I created triggers separately from animation. they are so removed, but not created and there is no control over the animation. anyway, thank you very much for trying to help me.
ScrollTrigger Scrub not work
ZachSaucier replied to Fedya's topic in GSAPWhat do you mean by this? Also keep in mind that this is a GSAP forum. Given the issue seems to be primarily stemming from the usage with Barba, I don't know how much free support we can offer. My guess is that with the right combination of Barba hooks it will work well. I haven't used Barba much.
Reset "X" Variable on Page Load for Pinned ScrollTrigger Width
akapowl replied to pietM's topic in GSAPHey @pietM That's actually hard to tell without seeing the full code, as Zach already mentioned - in your codepen demo you don't have any killing/destroying going on. Have you tried a method similar to what is described in this thread (which is for barba - but I guess the core concept would apply for swup, too) ? I don't think a simple re-initiation as you described in your original post will be enough, you will also have to make sure to kill the old ScrollTriggers - and it would probably also be best to destroy the old locomotive-scroll instance before creating a new one. I'll also link to the recently added section in the most common ScrollTrigger mistakes article, so any future readers who stumble upon this and don't know why that is neccessary can get a quick explenation on that.
FLIP-Plugin: animate across routes in Vue.js 2
ZachSaucier replied to s94QREspJB's topic in GSAPHey @Kyle Craven and welcome to the GreenSock forums. No, we haven't been able to get to a more full tutorial with Barba.js yet (though I started work on it a while back). The closest thing currently is the introduction/overview video in the Flip docs. That along with the Flip how-to pens and showcase should get you started. If you have a specific question please ask! It'd probably be best to start a new thread though
FLIP-Plugin: animate across routes in Vue.js 2
tailbreezy replied to s94QREspJB's topic in GSAPFLIP is pretty new for tutorials, only a few months old. Could be some, but I still don't know about them. As for barba.js you can check out ihatetomatoes. He also have some other nice courses on gsap.
FLIP-Plugin: animate across routes in Vue.js 2
Kyle Craven replied to s94QREspJB's topic in GSAP@ZachSaucier are there any tutorials yet on using Flip with Barba.js?
Help - Page Transitions (GSAP + barba.js + wordpress)
alexlytle replied to clickdeproduto's topic in GSAPPlease help how i can implement WordPress and Barba do you have any source code I can look at? or learning resources?
E.C. Discussion: To jQuery or Not To jQuery
tailbreezy replied to iDad5's topic in GSAPHey, Personally, I don't use jQuerry. I see no reason for it, mostly if not entirely made of syntactic sugar. Using Vue, Nuxt, Tailwind, plain ol' JS (also express, node, mongodb/firebase/sqlite.) Depending on what you need I guess. If I go without vue, I tend to gravitate towards highway.js and barba.js for transitions.
ScrollTrigger doesn't work since i installed webpack...
nightcoder posted a topic in GSAPHello!! I have a quite heavy question I'm working on my portfolio website using ScrollTrigger / horizontal scrolling. Everything was working absolutely fine until i installed webpack in the project recently. I needed to install it because i also use barba.js for transitions and webpack makes it so easier to handle the scss from page to page when clicking a link to go to another page of the site (no need to code for stylesheet injection in the head tag of the new html page when a link is clicked). And indeed it's working! BUT very oddly, when starting the project, now i do not land at the top of the page but somewhere in the middle and it's impossible to scroll! ScrollTrigger seems to be the issue here, because the barba transitions are working fine... What happens is: when landing on the page, i see a short flash of the raw html starting from the top (cool) and then (when the JS file is loaded) i get to a certain portion of the website, depending on the xPercent i put in my horizontal-scrolling function...(xPercent: -100 gets me at the 1st section, -200 at the second, -300 at the middle of the site etc... and impossible to scroll.) I tried everything and cannot figure out why/how to fix this. Would you have an idea why this is happening? Any help/hint/link would be highly appreciated! i learned a lot from this forum sofar, maybe my case can help someone else in the future ^^ Thank you! ps: my html and js files are very long, i hope i'm not too much of a hassle... ps2: here is a link to a tweet i posted in december where you can see the scrolltrigger/horizontal scrolling working smoothly in the project. () | https://greensock.com/search/?q=barba&updated_after=any&page=3&sortby=relevancy | CC-MAIN-2022-05 | refinedweb | 2,653 | 62.58 |
Jul 20, 2012 08:26 AM|paulwilkinson|LINK
Hi Guys,
I wondered if you can help. I am currently in the process of launching. It;s currently live but I'm receving the 500 Internal Server Error.
I have uploaded and resorted Database.Bak file. All settings to allow asp.net in terms of programming function capability for the server is activated. I have adjusted the persmissions for the folders where necessary but still receiving this error.
I have tried adding the custom error configuration to the web.config file to give me a detailed list of the errors but stil not working.
Anyone have any ideas? I have the FTP details if anyone wished to take a look? I'm at a loose end.
Thanks,
Jul 20, 2012 08:34 AM|Rajneesh Verma|LINK
Have you uploaded and tested html pages? If not then upload any index.html page and test that its working or not.
Jul 20, 2012 08:50 AM|Rajneesh Verma|LINK
paulwilkinsonYes, I've just added a index.html page and it works fine. It won't pick up the index.aspx page though? It returns the 500 internal server error
I just tried :
and i got error page.
Update your web.config from:
<system.web> <customErrors mode="Off"/> </system.web>
to:
<system.web> <customErrors mode="RemoteOnly"/> </system.web>
So that we can see the error.
Jul 20, 2012 08:55 AM|paulwilkinson|LINK
When the web.config file is saved as a web.config.txt I recieve the error you mentioned.
Here is my web.config.txt file; I have made the changes as you suggested but I still receive the same error.
<?xml version="1.0"?>
<configuration>
<configSections>
<section name="rewriter" requirePermission="false" type="Intelligencia.UrlRewriter.Configuration.RewriterConfigurationSectionHandler, Intelligencia.UrlRewriter" />
</configSections>
<appSettings>
<add key="WebsiteName" value="Dental Elite" />
<add key="WebsiteUrl" value="" />
<add key="SiteURL" value="" />
<add key="Uploads" value="~\UploadedImages\Images\" />
<add key="DefaultURL" value="" />
<add key="Errors" value="~\UploadedImages\Errors\" />
<add key="adminMailID" value="paul.wilkinson@dentalelite.co.uk" />
<add key="ReferralScheme" value="paul.wilkinson@dentalelite.co.uk" />
<add key="CandidateRegister" value="candidateregistration@dentalelite.co.uk" />
<add key="Bcc" value="anujg@webcreationuk.com" />
<add key="PageTitle" value="Dental Elite" />
<add key="Cvfile" value="../UploadedImages/CVFiles/">
</add>
<add key="description" value="Dental Elite are a leading UK-based dental recruitment agency. We specialise in the placing of dentistry professionals right across the sector." />
<add key="keywords" value="Dental Jobs, Dentist Jobs, Associate Jobs, Locum jobs,Dental Nurse Jobs, Dental Hygienist Jobs, Dental Therapist Jobs, Dental Practice Sales, Buy a Dental Practice, Sell a Dental Practice, Dental Practices for Sales" />
</appSettings>
<connectionStrings>
<add name="ConnectionString" connectionString="server=plesksql2.ehosting.com;database=paulwigoo5699com5607_;uid=dentalpaul; password=paulisanidiot" />
</connectionStrings>
<system.web>
<pages enableViewStateMac="false" viewStateEncryptionMode="Never" enableEventValidation="false">
<namespaces>
<add namespace="System.Data" />
<add namespace="System.Data.SqlClient" />
<add namespace="System.IO" />
<add namespace="System.Configuration" />
<add namespace="ImageProcessor.Drawing" />
</namespaces>
</pages>
<compilation debug="true" >
<assemblies>
<add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
</assemblies>
</compilation>
<system.web>
<customErrors mode="RemoteOnly"/>
</system.web>
<httpRuntime maxRequestLength="20480" executionTimeout="600" />
<httpHandlers>
<add verb="GET" path="CaptchaImage.aspx" type="WebControlCaptcha.CaptchaImageHandler, WebControlCaptcha" />
</httpHandlers>
<httpModules>
<add type="Intelligencia.UrlRewriter.RewriterHttpModule, Intelligencia.UrlRewriter" name="UrlRewriter" />
</httpModules>
</system.web>
<system.net>
<mailSettings>
<smtp>
<network host="localhost" />
</smtp>
</mailSettings>
</system.net>
<rewriter>
<!-- Redirect non www version to www version -->
<!--<if header="host" match="^sitename\.co\.uk">
<redirect url="^(.*)$" to="" permanent="true" processing="stop"/>
</if>-->
<!--<if url="~/(.+).aspx">
<rewrite exists="~/$1.aspx" to="~/$1.aspx" processing="stop"/>
</if>-->
<rewrite url="~/GetImage/(.+).aspx$" to="~/GetImage.ashx?Params=$1" processing="stop" />
</rewriter>
</configuration>
Jul 20, 2012 09:05 AM|Rajneesh Verma|LINK
paulwilkinson<add name="ConnectionString" connectionString="server=xxxx;database=xxxx;uid=xxxx; password=xxxxx" />
</connectionStrings>
Its Always suggested don't share any username and password at the Forum. Always use as above notations.
Also keep your web.config as web.config don't modify it as web.config.txt.
After changing Name as (web.config) Try to browse using page name:
Jul 20, 2012 09:13 AM|Rajneesh Verma|LINK
paulwilkinsonI have changed back to web.config and now receice the 500 internal server error?
Yes i have seen.
Have you checked from control panel that its supports asp.net 3.5 ?
Just check and if disable then enable .net for iis or website.
Also check your Inbox (PM)
All-Star
52733 Points
MVP
Jul 20, 2012 09:47 AM|Ruchira|LINK
Hello,
500 is general error code. You need to get the sub code of the error (500.xx) and provide more details than this to solve this. Can you copy paste the whole error you are getting?
Please 'Mark as Answer' if this post helps youDeveloper Tools Download | Windows 10 Videos | My Tech Blog
Jul 20, 2012 12:47 PM|Rajneesh Verma|LINK
paulwilkinsonI have changed back to web.config and now receice the 500 internal server error?
I fixed the issue.
I did two changes:
1. Rename Index.aspx to Default.aspx (All references).
2. Published website using file system then uploaded using FTP.
Participant
1930 Points
Jul 23, 2012 12:09 AM|dotnetnerd|LINK
Hi Paul,
What is the full error message that you get?
Feb 11, 2013 12:02 AM|Rajneesh Verma|LINK
ANN-TechCoderHi! did you get it working? As I am on the exact same issue. Just testing 1 single page with few controls on it. And getting the same 500 error.
I solved the issue using below changes:
If still you are facing issue, let me know!
Feb 11, 2013 01:07 AM|ANN-TechCoder|LINK
Thanks so much for your reply. I have searched all day yesterday and did not fix this. Nothing I found, which was adding some extra parameters to web.config, helped. I have ASP.Net enabled on my Hosting. I have sent a request to my Hosting so they can see if it is something on their side or on mine. My page is in .NET 4, blank page with 6 controls - just a quick test. Runs fine on my local machine, but shows 500 Error when Published with VS2012 using FTP Publishing.
Files in the directory:
Feb 11, 2013 04:00 AM|Rajneesh Verma|LINK
ANN-TechCoder
Its working, I think index.aspx is not set as Default page.
Feb 11, 2013 04:17 AM|ANN-TechCoder|LINK
Hmm. It does. It was not for all day yesterday and up to about 1 hour ago. But now it works. Strange.
It seems that my Hosting support did something to my hosting settings as they still did not reply to my Trouble ticket i have opened yesterday. I will let you guys know if my hosting had to do something or it just a 1 day delay in my website updating.
Feb 11, 2013 07:13 AM|ANN-TechCoder|LINK
So. My website, that I created the simplest way possible - New Website, named main file index.apsx and added few controls. It was absolutely fine on general settings in VS 2012 - the issue was my Hosting and they solved it for me. I have no idea what they did, but it worked. Here is the message i got from them:
Rajneesh Verma
Hello Juris
Thank you for contacting us.
Our Administrators have fixed the issue and now your website is up and running.
Feel free to reopen this ticket in case you have more questions or you need further assistance. Thank you for choosing our services.
Best regards,
Tailor Hall
Dedicated Support Team
So, if someone still has the issue - contact your Hosting and aske them to investigate it - it worked for me!
16 replies
Last post Feb 11, 2013 07:13 AM by ANN-TechCoder | https://forums.asp.net/t/1826065.aspx?500+Internal+Server+Error | CC-MAIN-2018-39 | refinedweb | 1,291 | 53.68 |
Testing the code in your product by accessing a Plone instance through a web browser is very inefficient and can get quite frustrating at times. You write a small piece of code, then reload it in your zope instance, maybe reinstall through portal_quickinstaller, then manual test the feature in your browser.
I have found that using the testing features of Python and Plone it is very easy to test your code without ever opening a browser. Read on and learn!
Created by paster if using the archetype template. Many products in our svn have testing code in them (uwosh.themebase, Product.UWOshSuccess)
paster create -t archetype uwosh.example
Your product's testing code lives in the tests folder:
tests/
__init__.py
base.py
test_doctest.py
To run all the tests in an egg use "-s egg.name":
Plone/zinstance/bin/instance test -s uwosh.example
To run a specific test module in an egg use "-s egg.name -t module.name":
Plone/zinstance/bin/instance test -s uwosh.example -t test_person
Starting in Plone 4 the testrunner does not come built in. In your buildout.cfg you will need to add test to parts and add a [test] section.
parts =
...
test
[test]
recipe = zc.recipe.testrunner
eggs =
${buildout:eggs}
uwosh.example[test]
To run a test you will then do something like:
Plone/zinstance/bin/test -s uwosh.example
Roadrunner is much faster than the standard Plone method because after it creates Plone instance for testing it reuses it each time the tests are run. You will want to always use Roadrunner because of the time it will save you.
Installing Roadrunner requires adding a recipe to your buildout.cfg, where "packages-under-test" is a list of the eggs that you want to test with Roadrunner:
[roadrunner]
recipe = roadrunner:plone
packages-under-test = uwosh.example
Then just run your tests using the same options as "instance test":
Plone/zinstance/bin/roadrunner -s uwosh.example -t test_exampleperson
pdb is the python debugger. Use it anywhere in your code to see what's going on. Anytime you want to stop the code and have a look around you should use pdb. You can start the debugger at any point by adding the following line of code to your product:
import pdb; pdb.set_trace()
Commands include: continue, up, next, list, skip, step, help
dir() - Use this to see what attributes are available for any object.
>>>>> dir(x)
['__add__', '__class__',.........,'swapcase', 'title', 'translate', 'upper', 'zfill'
filter() - As dir() returns a long list of attributes it is often helpful to search through them:
>>> filter(lambda x: 'split' in x, dir(x))
['rsplit', 'split', 'splitlines']
__doc__ - the __doc__ attribute is the docstring for an object if it has one:
>>> print x.__doc__
str(object) -> string
Return a nice string representation of the object.
If the argument is a string, the return value is the same object.
Before implementing a feature, write a test for it, run the test and make sure it fails, add the feature, then victoriously watch the test pass. You will feel super great and potentially high five everyone in sight.
Best tutorial ever:
Roadrunner:
University of Wisconsin Oshkosh | http://www.uwosh.edu/ploneprojects/docs/developers/browserless-development | crawl-003 | refinedweb | 527 | 74.9 |
Middleware which implements a retryable exceptions
Project Description
This package implements a WSGI middleware filter which intercepts “retryable” exceptions and retries the WSGI request a configurable number of times. If the request cannot be satisfied via retries, the exception is reraised.
Installation
Install using setuptools, e.g. (within a virtualenv):
$ easy_install repoze.retry
Configuration via Python
Wire up the middleware in your application:
from repoze.retry import Retry mw = Retry(app, tries=3, retryable=(ValueError, IndexError))
By default, the retryable exception is repoze.retry.ConflictError (or if ZODB is installed, it’s ZODB.POSException.ConflictError); the tries count defaults to 3 times.
Configuration via Paste
If you want to use the default configuration, you can just include the filter in your application’s pipeline. Note that the filter should come before (to the “left”) of the repoze.tm filter, your pipeline includes it, so that retried requests are first aborted and then restarted in a new transaction:
[pipeline:main] pipeline = egg:Paste#cgitb egg:Paste#httpexceptions egg:repoze.retry#retry egg:repoze.tm#tm egg:repoze.vhm#vhm_xheaders zope2
If you want to override the defaults, e.g. to change the number of retries, or the exceptions which will be retried, you need to make a separate section for the filter:
[filter:retry] use = egg:repoze.retry tries = 2 retryable = egg:mypackage.exceptions:SomeRetryableException
and then use it in the pipeline:
[pipeline:main] pipeline = egg:Paste#cgitb egg:Paste#httpexceptions retry myapp
Reporting Bugs / Development Versions
Visit to report bugs. Visit to download development or tagged versions.
0.9.1 (2008-06-18)
Seek wsgi.input back to zero before retrying a request due to a conflict error.
0.9 (2008-06-15)
Fixed concurrency bug whereby a response from one request might be returned as result of a different request.
Initial PyPI release.
0.8
Added WSGI conformance testing for the middleware.
0.7
Made the retryable exception(s) configurable, removing the hardwired dependency on ZODB3.
0.6
Relaxed requirement for ZODB 3.7.2, since we might need to use the package with other verions.
0.5
Depend on PyPI release of ZODB 3.7.2. Upgrade to this by doing bin/easy_install -U ‘ZODB3 >= 3.7.2, < 3.8.0a’ if necessary.
0.4
Write retry attempts to ‘wsgi.errors’ stream if availabile.
Depend on rerolled ZODB 3.7.1 instead of zopelib.
Add license and copyright, change trove classifiers.
0.3
We now buffer the result of a downstream application’s ‘start_response’ call so we can retry requests which have already called start_response without breaking the WSGI spec (the server’s start_response may only be called once unless there is an exception, and then it needs to be called with an exc_info three-tuple, although we’re uninterested in that case here).
0.2
The entry point name was wrong (it referred to “tm”). Change it so that egg:repoze.retry#retry should work in paste configs..1
Initial release
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/repoze.retry/0.9.1/ | CC-MAIN-2018-09 | refinedweb | 515 | 59.7 |
CodePlexProject Hosting for Open Source Software
I just installed VWD 2010 EE. I created a new wep application project named BlogEngine and dropped the BE 2.5 files into this project's directory. I replaced the default files created by VWD and merged the default folders with the BE 2.5 ones
that were duplicates. Then I included all of the files. Got 300 errors on first build. Adding a reference to the BlogEngine.Core.dll resolved most of this. It got it down to 96 errors.
I now have a similar issue but cant seem to solve it in a similar way. I am running VWD 2010 as Administrator. Most of these 96 errors are still namespace or using statement issues but I can't get things into scope or correct these.
For example, The tags.aspx.cs file in the Admin.Posts namespace has an error for the using statement " using App_Code;" statement at the top of the file. I can't seem to get it to qualify fully by adding a reference. Or at least not any that
I've tried. I tried to add a "using BlogEngine;" at the top of the file. When I did this namespace came into scope but the App_Code folder wasn't an option. The only two options for this namespace were the Account and Core Namespaces. I
can't seem to qualify or identify the namespaces correctly.
I don't understand what's wrong. I can revert to last successful build and it will run. I'm not sure if I messed something up when I merged the folders or what.
// this is from the Tags.aspx.cs code behind file. you can also see the commented out BlogEngine using statemnt I added
namespace Admin.Posts
{
using System;
//using BlogEngine; // added this to try to fix the error
using App_Code; // this is original
public partial class Tags : System.Web.UI.Page
{
protected void Page_Load(object sender, EventArgs e)
{
WebUtils.CheckRightsForAdminPostPages(false);
}
}
}
Try
this doc, if it doesn't help search for "convert blogengine.net to web application" - I've seen several step-by-steps for different versions.
Thanks. I'll give it a try. I don't understand why in the article you recommended, it says "As you can see, current source plays nicely with Web Application project model". I used the web application model for my project. So,
it doesn't play nicely if this is the same issue. I hope it does fix the issue but that would almost make it more confusing.
Web site and web application are different models, you can't drop files from one to the other and expect it to work. Just as you couldn't drop web application files into MVC site - it does require adjustments. BE relatively easy to adopt to WAP, and that
"play nicely" meant that you mostly do standard conversion and don't run into BE-specific issues as in earlier BE versions.
thanks for the help. There were no issues when I dropped the files into a Website project, which I think will be fine for my purpose. It was up and running on the first build. I can always convert it to WAP later if I choose.
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://blogengine.codeplex.com/discussions/286717 | CC-MAIN-2017-34 | refinedweb | 577 | 76.93 |
vector_temporary_type
Expand Messages
- Hi,
Is it correct that vector_temporary_type is not defined for expression types?
Some old code that used matrix_range of an expression does no longer compile.
I think that vector_temporary_type should be added to all expression types.
Here is a small example that does not compile. Unless i did something wrong.
This is my first experience with ublas_pure.
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
#include <iostream>
int main() {
using namespace boost::numeric ;
ublas::matrix<double> m( 5, 4 ) ;
std::cout << (-m) ( ublas::range::all(), ublas::range::all() ) <<
std::endl ;
return 0 ;
}
Thanks,
Karl
- Hi Michael,
I have been sending plenty of messages to the uBLAS mailing list. Let us talk
when you are back. Many of the changes can be committed straight into the
trunk by Toon, but we may have to talk a little first in order to make sure I
did the right thing.
Here is a list of most modifications in my version:
- added vector_temporary_traits and matrix_temporary_traits, used in
matrix_proxy.hhp, vector_proxy.hpp, symmetric.hpp and hermitian.hpp
- use functor/traits for map_capacity in storage_sparse.hpp
- Typoes in mapped_vector
* (nzz_capacity() )
* (ii.first)->first instead of (ii->first).first
- missing include files in detail/iterator.hpp, detail/matrix_assign.hpp,
matrix.hpp
- Added basic_range::operator=( basic_range const& r) in storage.hpp
- type of address1 and address2 has changed in compressed_matrix::find1() and
find2() to make std::min compile
- lots of public/private stuff in expression types.
- Port to IBM for scalar_assign, scalar_plus_assign, scalar_minus_assign (to
be discussed with Toon)
- I also changed same_impl_ex in exception.hpp because sometimes the
size_type's are difference and then same_impl_ex is not found.
That's it.
Regards,
Karl
Your message has been successfully submitted and would be delivered to recipients shortly. | https://groups.yahoo.com/neo/groups/ublas-dev/conversations/topics/2317?xm=1&m=s&l=1 | CC-MAIN-2015-27 | refinedweb | 295 | 50.73 |
Data Miner
Orçamento $100-500 USD
I would like a stand alone program preferably written in VB that would help me prepare data for statistical analysis. I am not looking for statistical analysis software. I want something that will help me prepare the data.
Because I need stock data often, I would like a module that will download stock data from the Yahoo finance site. I know some of these are already available on the market, but the ones I have found do not do everything I like (like automated downloads) and you have to put up with advertising and ugly interfaces. Yahoo provides the data for a click of a button and it is not a difficult thing to program, so I would like to include such a download module as an integrated part of my program.
If I need data from other sources (like interest rate, currency, company cash flow data, etc.) I will get it myself.
The primary function of the program is to prepare data for statistical analysis. For example, to analyze stocks, I might want to have monthly data on prices and merge that data with monthly data on interest rates or currency exchange rates that come from another source. The program would need to merge the interest rate or currency exchange rate data or both by date.
Further, the data may need to be processed before the correct subset of data can be selected. For example, I might want to look only at a subset of stocks whose price has increased (or decreased) X% in the last N days/weeks/months. This would require the creation of calculated fields upon which the data selection is made.
It is possible that I would need to have several calculated fields and even calculated fields based upon calculated fields (e.g., select all stocks whose cash flow is greater than X, whose PE ratio is < average of DOW PE (itself a "nested" calculation), etc.) The calculations should user defined and not hard coded (though the program should "remember" previously devised calculations so I do not have to reinvent the wheel).
##) Stock price data collection module (which downloads stock data from Yahoo!). The module should be capable of downloading a single stock, or multiple stocks, or mutiple stocks from user created lists. The module should prompt for the amount of price history desired. It should check to see if that data has already been downloaded and exists in a database table already and give the options to update only, or to reload all data. It should be configurable so that it can be run manually or automatically on a user defined schedule. It should store the data in an Access or SQL 2000 table.
5) Data Query module that will take data in table or spread sheet form and
a) run summary calculations and create calculated fields. The summary calculations might be something like this: compute average PE ratio of the DOW Industrials stocks. (I anticipate that I would create a field in the table that marks each of the DOW Industrials stocks so all that the program would have to do is make a selection on that field and then compute the average). Likewise I would need calculated fields, such as % change in stock price for day/week/month/year. Both summary calculations and calculated fields should be user definable. The program should provide an option to remember the calculation so that it can be later chosen from a list.
b) select one or more subsets of data using one or multiple queries which may be simple queries or queries based upon the results of the summary calculations and calculated fields. I anticipate going through a process something like this: First run a query selecting only a subset of stocks. This selection might be on a field in the data table or it might be a calculated field. For example, select only those stocks which had a 10% increase in daily price on some day within the last 3 years; or select only the DOW stocks. Then select the stocks from that list whose average weekly continuously compounded rate of return (CCR) from 1 Jan 03 to 31 Mar 03 was > X. Now compute the the CCR from 31 Mar 03 to today for each stock to today. Merge this data for later statistical analysis to determine if there is a correlation between the prior quarter performance and the subsequent daily, weekly, monthly, quarterly performance. It is very important to be able to run multiple rather simple queries against the data to separate the wheat from the chaff because a simple query is easier to create and because it is less prone to errors than a complex SQL query involving multiple joined tables, etc.
c) merge the resulting result sets of the different data tables or spread sheets into a single data table or spreadsheet for further statistical analysis. Export of data should optionally include export of the calculated fields. The program needs to check the merged data to make sure the merge is correct. For example, for time series data the program should check when it merges the data to make sure that the dates are matched (a stock may not have traded on a day or week and therefore one cannot simply assume that data for that day/week/month exists). The program should warn when there is a problem with the merge and provide options such as 1) include the data anyway using the nearest data, 2) exclude the data for that missing date, 3) include the data and include a user defined value for the missing data.
## Platform
Windows 2000 or later; Access, SQL 2000, Excel | https://www.br.freelancer.com/projects/php-visual-basic/data-miner/ | CC-MAIN-2017-34 | refinedweb | 953 | 57.61 |
Install GoLang on Ubuntu and Write Your First Program in Go
GoLang is a very powerful programming language developed by Google. It is a compiled programming language. It means, Go source codes are converted to machine code or commonly known as executable file. Then you can run these executable files on other computers. Unlike Java that converts source code to byte code, then runs these byte codes using JVM (Java Virtual Machine), Go does not use any VM (Virtual Machines). It is not an interpreted language either like Python or PHP. It is very fast and built with concurrency in mind. GoLang is widely used for Web Development because it has many libraries available for such stuff.
In this article, I will show you how to install the GoLang on different versions of Ubuntu operating system and how to write, run and build your first program with Go. Let’s get started.
Installing GoLang:
GoLang is available in the official package repository of Ubuntu. First update the package repository cache of your Ubuntu operating system with the following command:
Your package repository cache should be updated.
Now you can install GoLang from the official repository of Ubuntu.
Ubuntu 16.04 LTS:
On Ubuntu 16.04LTS, you can install GoLang 1.6 from the official repository of Ubuntu. This is the recommended version of GoLang on Ubuntu 16.04 LTS.
To install GoLang 1.6 from the official repository of Ubuntu 16.04 LTS, run the following command:
If you want to install GoLang 1.9 on Ubuntu 16.04 LTS, enable the ‘xenial-backports’ ‘universe’ repository and run the following command:
Ubuntu 17.10:
On Ubuntu 17.10, you can install GoLang 1.7, GoLang 1.8 and GoLang 1.9.
To install GoLang 1.8 on Ubuntu 17.10, you can run the following command:
Or
To install GoLang 1.7 on Ubuntu 17.10, run the following command:
To install GoLang 1.9 on Ubuntu 17.10, run the following command:
I am using Ubuntu 17.10 for the demonstration in this article. I will install GoLang 1.8.
Once you run the command to install the version of GoLang you want, you should see the following prompt. Just press ‘y’ and then press <Enter> to continue.
GoLang should be installed.
Testing GoLang:
Now run the following command to verify that Go commands are accessible:
You should see similar output as shown in the screenshot below. It means Go is working correctly.
Writing your First “Hello World” Program on GoLang:
The very first program that most people write while learning a language is the “Hello World” program. I would say, “It’s the gateway to the heart of the programming language”. It is very simple. All a “Hello World” program does is; it prints “Hello World” to the console or terminal.
Now I am going to write a simple “Hello World” program in Go.
This is the code that I am going to run.
import "fmt"
func main() {
fmt.Println("Hello World");
}
It is saved in ‘~/work/helloworld.go’ file. Remember to save GoLang source files with .go extension.
GoLang can be used like an interpreted language like Python. It means that you can run a GoLang source file directly without manually compiling it first.
To run a go program, run the following command:
In my case GO_SOURCE_FILE is ‘helloworld.go’.
You should be able to see “Hello World!” output on the console as shown in the screenshot below.
The good thing about GoLang is that, you can also build an executable file out of GoLang source code. So it can be executed just as C or C++ programs.
Run the following command to compile Go source code:
In my case, GO_SOURCE_FILE is ‘helloworld.go’.
It should generate an executable file ‘helloworld’ as shown in the screenshot below.
Now you can run the executable as follows:
You should see “Hello World!” on the terminal just like before.
So this is how you install GoLang on Ubuntu and write your first program in GoLang. Thanks for reading this article. | https://linuxhint.com/install-golang-ubuntu/ | CC-MAIN-2019-39 | refinedweb | 680 | 77.74 |
0
My assignment is to develop a C++ program to count the number of capital letters in a given string. String will be entered by user. The idea is to build my knowledge of loops and loop terminations.
I've got the basics down (I think) but I just don't get what functions or commands I need to use to go through the keyboard-input string character by character. We're not supposed to know the isupper function yet, so I'm supposed to use a get. function of some kind.
#include <iostream> #include <string> #include <iomanip> namespace std; int main() { char lettr; int charactrCtr = 0; int length; int count = 0; string userInput; cout << "Enter a stream of characters, both capital and lower case, then press return..." << endl; getline(cin, userInput); length = userInput.length(); cout << length << endl; while (getline(userInput, 3000).good { if ((lettr >= 65) && (lettr <= 90)); charactrCtr = charactrCtr + 1; count = count + 1; } cout << "The number of capital letters in the sequence you entered is " << charactrCtr << endl; return 0; } // end of main
Edited by PTRMAN1: n/a | https://www.daniweb.com/programming/software-development/threads/271753/counting-capitals | CC-MAIN-2017-09 | refinedweb | 178 | 62.88 |
Dynamic JFace XML TableViewer Tutorial
Dynamic JFace XML TableViewer Tutorial
Join the DZone community and get the full member experience.Join For Free
Get the Edge with a Professional Java IDE. 30-day free trial.
The following tutorial will show you how to build a dynamic JFace TableViewer driven from an XML file. The beauty of this solution is that your data model is decoupled from view logic so when the XML data changes you don’t need to change the concrete code.
It’s a pretty simple tutorial which should take around 5 – 10 minutes to complete.
Step 1. Create a new project in eclipse 3.4
Open Eclipse -> Click File -> new Plug-in project -> Next
To keep the examples simple type galang.research in the project name and click next. You can type any name you want though you’ll need to factor this into all the code samples I will provide.
Select Yes in the Rich Client Application section, click next.
Select RCP application with a view, click next.
Select Add Branding.
Click finish, then Yes.
Unzip the file located at the URL below and save the test.xml within the zip file as /temp/test.xml to your local pc.
Right click project and configure build path, link source, then click browse and select the src location to where you have unzipped the file above.
Enter srcExt fo folder name
Your package explorer should look like this.
Modfy View.java in the galang.research package with the following code.
/** * This is a callback that will allow us to create the viewer and initialize * it. */ public void createPartControl(Composite parent) { viewer = new TableViewer(parent, SWT.MULTI | SWT.H_SCROLL | SWT.V_SCROLL); viewer.setContentProvider(new RowContentProvider()); RowLabelProvider labelProvider = new RowLabelProvider(); labelProvider.createColumns(viewer); viewer.setLabelProvider(labelProvider); viewer.setInput(getViewSite()); }
And add the following imports to View.java
import galang.research.jface.RowContentProvider; import galang.research.jface.RowLabelProvider;
Double click plugin-xml and click the Lanch eclipse application.
Congratulations, you’ve just completed this tutorial. Have a play around with the /test/temp.xml and see how it dynamically impacts the RCP application after restarting it.
From.
Get the Java IDE that understands code & makes developing enjoyable. Level up your code with IntelliJ IDEA. Download the free trial.
Published at DZone with permission of Glenn Galang . See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }} | https://dzone.com/articles/dynamic-jface-xml-tableviewer- | CC-MAIN-2018-47 | refinedweb | 411 | 52.97 |
Types of Pattern Matching in Elixir
One of the great features of both Elixir and Erlang is pattern matching. Pattern matching allows you to work with the “shape” of data. Using pattern matching, you can split an implementation up based on the shape of the data. There are various ways to define the shape of the data you expect. This post shows the various forms pattern matching can take in Elixir.
Throughout this post, we will use a simple
greet function as an example. This greet function takes a name and then prints out a “hello” message to the screen. It looks like this:
def greet(name) do
  IO.puts("Hello, #{name}")
end
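For context, def only works inside a module. A minimal sketch (the Greeter module name here is just a placeholder, not from the original post) showing where these clauses live and how they are called:

defmodule Greeter do
  def greet(name) do
    IO.puts("Hello, #{name}")
  end
end

Greeter.greet("Sally")
# prints: Hello, Sally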
Let’s start by discussing the basic pattern matching elements.
Ignore Value
Use an underscore to ignore a value altogether.
def greet(_) do
  IO.puts("Hello, world")
end
The greet function above ignores the given input. If for some reason you want to ignore a value, but still give it a meaningful name, prepend the name with an _.
def greet(_name) do
  IO.puts("Hello, world")
end
What if you want to use the value given? Give it a name without an underscore.
Variable
def greet(name) do
  IO.puts("Hello, #{name}")
end
Now you can use the name given. While you could have used _name, it would have given us a compiler warning. The compiler will warn you if you are using a variable starting with _.
Exact Value
With pattern matching, you can give an exact value as well.
def greet("Benjamin") do IO.puts("Hello, Ben") end
We now have a function for someone named "Benjamin". This works with all primitive values: integers (1), floats (1.0), binaries ("Sally"), bitstrings (<<1, 2, 3>>), tuples ({1, 2}), lists ([1, 2, 3]), and maps (%{a: 1, b: 2}).
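Since several of these clauses can coexist, here is a quick sketch (not from the original post) combining the exact-value clause with the general one. Elixir tries clauses top to bottom, so the more specific clause must come first:

defmodule Greeter do
  def greet("Benjamin") do
    IO.puts("Hello, Ben")
  end

  def greet(name) do
    IO.puts("Hello, #{name}")
  end
end

Greeter.greet("Benjamin")
# prints: Hello, Ben
Greeter.greet("Sally")
# prints: Hello, Sally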
We will go into more depth on lists, tuples, and maps later.
Binary starts with
def greet("Ben" <> _) do IO.puts("Hello, Ben") end
With this form, you can check if a given binary starts with the given characters. We have enhanced our greet Ben function to check for any name that starts with "Ben". Note, you can only match on the beginning of a binary.
Lists
Lists can be pattern-matched in several ways. The first way is by matching on the number of values in the list.
def greet(["Ben" <> _, _last_name]) do IO.puts("Hello, Ben") end
The above expects a 2-element list as input.
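For example (hypothetical calls), only a two-element list whose first element starts with "Ben" matches this clause; anything else raises a FunctionClauseError:

greet(["Benjamin", "Franklin"])
# prints: Hello, Ben
greet(["Benjamin"])
# raises FunctionClauseError: no clause matches a 1-element list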
Head/Tail
def greet([name | _]) do
  IO.puts("Hello, #{name}")
end
This form allows you to get the first value of a list in a variable. It also gets the rest of the elements in a list as a variable. In this case, we only need the first element. You can also match on the first number of elements.
def greet([first_name, last_name | _]) do IO.puts("Hello, #{first_name} #{last_name}") end
Keyword Lists
def greet([first_name: name, last_name: _]) do IO.puts("Hello, #{name}") end
In this example, we are matching elements of a keyword list. This works exactly the same way as list pattern matching (because a keyword list is a list). Remember that order of keys matter here. So if
last_name came first in the list given to this function, it would not match.
Tuples
def greet({"Ben" <> _, _last_name}) do IO.puts("Hello, Ben") end
Tuples are like lists except that they have an arity. For example,
{1, 2} is a tuple with arity 2, or sometimes called a 2-tuple.
{1, 2, 3} is a 3-tuple. In Elixir, you can match on the values with the tuple. In the above example, our
greet function takes a 2-tuple. The first element is a name that starts with
"Ben" and we ignore the second element. Note that this will only match on 2-tuples and not any other arity tuple. If we want to update our function to take a 3-tuple, we could do as follows:
def greet({"Ben" <> _, _middle_name, _last_name}) do IO.puts("Hello, Ben") end
Maps
def greet(%{first_name: name}) do IO.puts("Hello, #{name}") end
This type of pattern matching checks for keys and values within a map. Order does not matter here. As long as the given map has the key(s) and value(s) matching, this will work.
Structs
Structs support pattern matching in a couple of ways. One is matching on the type.
def greet(%Person{} = person) do IO.puts("Hello, #{person.name}") end
This will match only if given a
Person struct.
You can even make the type something to match.
def greet(%x{} = person) when x in [Person] do IO.puts("Hello, #{person.name}") end
The variable,
x, holds the type of the struct passed into the function. We used a guard (explained later) to make sure the type is within a subset of types defined.
Bitstrings
Bitstrings are useful when working with certain types of data. They also have the most variations when it comes to pattern matching. We will discuss one example here, but be sure to read the Bitstring docs for more information.
In the example above where we matched on the beginning of a name, I mentioned that you can only match on the beginning. For that particular form of pattern matching, that is true. With bitstrings, you can match on other parts as long as you know the length of the parts before it.
def greet(<<prefix::binary-size(3), "ja", ending::binary>>) do IO.puts("Hello, #{prefix}js#{ending}") end
Notice we are matching on the size of the first element in the bitstring. We are expecting it to be a binary that is 3 characters long. Skipping the middle element, notice the end does not need to have a size. This is because it is the last element.
Guards
Guards allow you to add an extra constraint onto your pattern. What if we wanted to make sure the name given is a binary (string)?
def greet(name) when is_binary(name) do IO.puts("Hello, #{name}") end
Now only binary values will match the above. Only a small set of functions can be guards. To find out which ones, check out the Elixir Kernel doc. Most of the functions allowed have the comment “Allowed in guard tests” in their docs.
We build digital products to help companies scale, automate workflows, and deliver on innovation faster than their competitors.
We’re always looking for folks who will bring something to the team.
Keep in touch by subscribing to Coding Creativity,
a weekly digest of the product, design, and development news that fuels our industry. | https://revelry.co/resources/development/pattern-matching-elixir/ | CC-MAIN-2021-25 | refinedweb | 1,108 | 76.32 |
)
More User Interface
Alright, so we are currently here:
This is nice, but it doesn't actually lets the user choose between the two buttons, now, does it? We'd need to add some sort of input element to allow the user to pick a button variant.
Since this is an OR relation, i.e. you have to pick one - and exactly one - variant of button, a radio button is a great fit. Shopify actually provides us with a Radio Button component that has all sorts of niceties to it:
<RadioButton label="The text that appears right next to the button" helpText="Greyed-out subtext" checked={"A boolean value to indicate whether the button is checked or not"} id="An HTML element id, used for selection (must be unique)" name="An HTML element name used for identification (can be repeated)" onChange={ "A callback function, that is triggered once the radio button is changed (i.e. marked / unmarked)" } />
Let's talk a little bit about this, since this is the first time we're observing an element that is not very simple. Look at all the props we're providing the componenet (
label,
helpText,
checked etc.) - I've added a small sentence explaing what each of them does. There are two props -
checked and
onChange - that get
{} as inputs and not just text. Inside those
{} we can input whatever JavaScript we want, but they expect to get a boolean and a callback function, respectively. But, hold back one moment. Why do we need a React component for a radio button? We already have
<input type="radio">, right? Let's explore this for a second.
A normal radio button (i.e.
<input type="radio">) already has a
checked attribute and a
change event, that can replace the
checked and
onChange props. We can totally use those without having to wrap the button in a component. But, the Polaris design system would like to standardize the way radio buttons are used in Shopify Apps. Therefore, the
RadioButton component encapsulates all the styles Shopify would like you to use with the button (padding, color of the bullet in the button, color of the surroundinc circle etc.). It also allows for a somewhat more convienent wrapper around features that are often used together (like
label that removes the need for a
<label> tag and
helpText that expands the normal label with optional subtext).
The reason why
onChange is also a
RadioButton property has to do with the way React sees the world. In React, everything is interactive - an action in one element is expected to trigger something in another element, or maybe even in the backend of the application. The
change event (in the original
<input type="radio">) was created for just this purpose - to be the main source of interactivity for your radio button (when its value changes, do something - i.e. trigger a callback function). But, in practice, getting this functionality of events to work across browsers has been historically hard. React created a new type of event, that "wraps around" the original event, and that is why we have a special
onChange property inside the component. This is not the only reason, of course, but to me is the most.... comfortable one. If Dan Abramov ever reads this, and I happen to be wrong (which I sometimes am, it appears) - please accept my sincere apologies and make a comment for me to fix this. :P
Back to business - what do we want to happen when the button changes? Well, we want to first know that it did. Which means we need to store that infomration somewhere. Luckily, we can use state in our
App component to keep track of what's going on inside the page!
A Note on Redux
You will note that I, much like the offical Shopify Tutorial, chose to forego the use of a (very) popular JavaScript library called Redux. Redux allows you to have a central, instead of a distributed, location for your state. A state of a component is some information being kept in it about.... whatever, and is notriously difficult to manage as your apps get more and more complicated.
I can honestly say that the app I'm building here is just not complicated enough to justify the use of Redux, or any other central state management library. Therefore, I "bear" the complexity, and choose to manage the state myself. This might seem like I'm doing a lot of hacks to get the information around, but for the sake of simplicity I think it's the way to go.
So before we add the radio button, let's make sure to add state properties that account for which button was selected. For this, I am going to correct an oversight that any experience React developer will tell you I made (albeit intentionally) in the beginning: I omitted the constructor.
A constructor (as I mentioned in the React Sidestep 3) is a special function in a JavaScript class (and specifically inside React class components) that gets called when an object representing the class is initiated. So let's add it first:
class App extends React.Component { constructor(props) { super(props); } render() { return ( <AppProvider> ... </AppProvider> ); } } export default App;
VSCode might throw a "useless constructor" error at you (well, probably a warning - i.e. yellow squiggly lines, and not an error). This is OK - this constructor indeed doesn't do anything. All it does is call the constructor of the class above it with the props that were provided to it (since every React class component extends
React.Component, its constructor is being called with the pops provided for the current constructor). This is an implementation detail that you shouldn't really care about - it's the way React is built.
The interesting bit comes when we want to provide state to the component. This can happen simply by defining the
state attribute for the current class in the following way:
class App extends React.Component { constructor(props) { this.state = {} super(props); } render() { return ( <AppProvider> ... </AppProvider> ); } } export default App;
We now have a place in our component where we can manage our state. Let's add a property inside our state, one that shows which variant of the button has been selected:
class App extends React.Component { constructor(props) { this.state = { buttonType: "full" }; super(props); } render() { return ( <AppProvider> ... </AppProvider> ); } } export default App;
We define
buttonType to be
"full" upon initialization to provide some sort of default to the user. This means that at first initizliation, the selection box will be the one one with the full button in it. In the future, we will have this value stored in a database, and brought into the application to "remember" the prefence of the user. More about this later.
We also need to create some function that - when the button's status is changed - chages the value in the state of the component. This is a function that is called when
onChange is called on
RadioButton - i.e. a callback fucntion. Let's call this function
handleButtonTypeSelection, since it handles which type of button is used.
This function can go in one of one of 4 places, which can cause a bit of confusion. I'm choosing to add them as arrow functions inside the
render function, like so:
class App extends React.Component { constructor(props) { this.state = { buttonType: "full" }; super(props); } render() { const handleButtonTypeSelection = (changedButtonType) => { ... } return ( <AppProvider> ... </AppProvider> ); } } export default App;
I like this option because it feels, to me, like it's simpler once you figure out how arrow functions work like. For most intents and purposes, an arrow function is just another way to write a function - instead of
funcName(){}, we're writing
const funcName = () => {}. But, there are some places where the arrow function behaves a bit differently than your run-of-the-mill function - and I will warn you about them as they come up. In this case - use the arrow! :)
Our full function needs to accept the type of button that was selected, and change the state of the component's
buttonType accordingly. As you will see in a moment, this will also check the correct button by changing the
checked prop on each
RadioButton component. Let's put our full function in then:
class App extends React.Component { constructor(props) { this.state = { buttonType: "full" }; super(props); } render() { const handleButtonTypeSelection = (changedButtonType) => { this.setState({ buttonType: changedButtonType }); } return ( <AppProvider> ... </AppProvider> ); } } export default App;
This part:
this.setState({ buttonType: changedButtonType });
Changes the value of
buttonType in the state of the component. Specifically, what it's doing is passing a destructured object into the
setState function (which, as you probably guessed, sets the state). Destructuring is a totally awesome (and rather new) concept in JavaScript, that basically allows you to unpack properties from objects, and treat them as simple variables. The statement above, therefore, is exactly like doing:
const newState = { buttonType: changedButtonType; } this.setState(newState)
But the destructuring just saved me an unneccessary variable declaration.
Back to business - we now have our callback function, but still missing our
RadioButton components. Let's finally put them in, and get the following:
import React from "react"; import { Page, AppProvider, Layout, Card, RadioButton } from "@shopify/polaris"; import "@shopify/polaris/styles.css"; class App extends React.Component { constructor(props) { super(props); this.state = { buttonType: "empty", }; } render() { const handleButtonTypeSelection = (changedButtonType) => { this.setState({ buttonType: changedButtonType }); }; return ( <AppProvider> <Page title="Welcome!" subtitle="Please select the type of button you'd like to generate for your site:" > <Layout> <Layout.Section oneHalf secondary> <Card title="Black Button Variant" sectioned> <Card.Section <button>Dummy Full Button</button> </Card.Section> <Card.Section> <RadioButton label="Choose Full Button" helpText="Works well with the default Shopify themes and lighter backgrounds." checked={this.state.buttonType === "full"} <button>Dummy Empty Button</button> </Card.Section> <Card.Section> <RadioButton label="Choose Empty Button" helpText="Works well with darker backgrounds, to create a high-contrast feel." checked={this.state.buttonType === "empty"} id="empty" name="empty-button" onChange={() => handleButtonTypeSelection("empty")} /> </Card.Section> </Card> </Layout.Section> </Layout> </Page> </AppProvider> ); } } export default App;
Which should render like so:
Try checking and unchecking both
RadioButtons, and observe that only one of them can be checked at any given moment. This is due to each of them pulling its
checked prop from the value of the
buttonType state property.
That's enough for today, I think. :) We covered a lot of not-strictly-related ground, but I think it was a good detour into JavaScript and the cool features it has to offer.
An Offer
If you're working on a Shopify app, and your app uses Polaris for the front-end, I want to hear from you. I am willing to sit down and run a debug session / add a new feature with you for your application, if you agree to stream it live with me (or record it and publish it later). It's not easy writing a full-stack JS app, doubly so when you're not from within the ecosystem. Let's do it together and help all the people! :)
Discussion | https://practicaldev-herokuapp-com.global.ssl.fastly.net/redcaptom/shopify-app-from-scratch-12-user-interface-2-3327 | CC-MAIN-2020-50 | refinedweb | 1,845 | 55.24 |
Angular 5 is here
© Shutterstock / dencg.
Angular 5 is here. Don’t know about you but we’re very excited that it’s finally here! It contains a lot of new features and bugfixes but we’re not going to cover them all here.
For the complete list of features and bugfixes, see the changelog.
Build Optimizer
Stephen Fluin, Developer Advocate at Google explained in a blog post announcing Angular 5 that as of version 5.0.0, production builds created with the CLI will now apply the build optimizer by default. developers to perform server-side rendering (SSR) of Angular applications. By rendering your Angular applications on the server and then bootstrapping on top of the generated HTML, you can add support for scrapers and crawlers that don’t support JavaScript, and you can increase the perceived performance of your application, Fluin explained..
The list of highlights is much longer so make sure to check out Stephen Fluin’s blog post.
Update October 31, 2017
Four more bugfixes — the release candidate tap is still open so that means the 10th rc. is here.
Update October 30, 2017
The ninth release candidate is here. I don’t know about you but we were definitely not expecting to see rc.8. Anyway, it’s here and it brings two bugfixes:
- compiler-cli: avoid producing source mappings for host views (#19965) (2d508a3)
- platform-server: add missing packages to the UMD global rollup config (4285b6c)
Update October 27, 2017
There’s still space for one more. rc.7 is here and it brings seven bugfixes. Are we there yet??
Update October 26, 2017
Another day, another candidate release — lucky number seven this time. Obviously, there are only bugfixes (four of them, to be exact) but it’s still a good reason to celebrate: one step closer to Angular 5.
Update October 24, 2017
Release candidates come in twos now: rc.4 and rc.5 were released earlier today — the former brings three bugfixes and the latter brings just one. Is today the day Angular 5 is released? We wish we knew the answer to this question.
Update October 19, 2017
The fourth release candidate only brings bugfixes (14 of them) and you know what that means!
In theory, Angular 5 should arrive next week — it remains to be seen if the schedule is respected but we’re seeing some good signs.
Bug fixes
- animations: always fire inner trigger callbacks even if blocked by parent animations (#19753) (5a9ed2d), closes #19100
- animations: ensure animateChild() works with all inner leave animations (#19006) (#19532) (#19693) (f42d317)
- animations: ensure inner :leave animations do not remove node when skipped (#19532) (#19693) (d035175)
- bazel: fix the output directory for extractor to be genfiles/ instead of bin/ (#19716) (405ccc7)
- common: attempt to JSON.parse errors for JSON responses (#19773) (04ab9f1)
- compiler: generate correct imports for type check blocks (#19582) (60bdcd6)
- compiler: prepare for future Bazel semantics of += (#19717) (836c889)
- compiler-cli: diagnostics file paths relative to cwd, not tsconfig (#19748) (56774df)
- compiler-cli: do not add references to files outside of
rootDir(#19770) (25cbc98)
- router: RouterLinkActive should update its state right after checking the children (#19449) (6f2939d), closes #18983
- service-worker: add missing annotation for SwPush (#19721) (15a8429)
- service-worker: freshness strategy should clone response for cache (#19764) (396c241)
- service-worker: PushEvent.data has to be decoded (#19764) (3bcf0cf)
- service-worker: use posix path resolution for generation of ngsw.json (#19527) (621f87b)
Update October 13, 2017
The third release candidate is here — it contains three bugfixes and four performance improvements.
Performance improvements
- animations: reduce size of bundle by removing AST classes (#19539) (d5c9c5f)
- compiler: only type check input files when using bazel (#19581) (0b06ea1)
- compiler: skip type check and emit in bazel in some cases. (#19646) (a22121d)
- compiler: speed up loading of summaries for bazel. (#19581) (81167d9)
According to the old tentative schedule (the one which said that Angular 5 will become available on September 18 — find it below), there should be four release candidates in total. If this is still valid, there’s only one release candidate left!
Update October 6, 2017
The second release candidate is here — it contains 13 bugfixes and three performance improvements.
Performance improvements
- compiler: don’t emit summaries for jit by default (b086891)
- compiler: fix perf issue in loading aot summaries in jit compiler (fbc9537)
- compiler: only emit changed files for incremental compilation (745b59f)
Update September 29, 2017
The release candidate period has begun! The first release candidate is here — it contains over 30 bugfixes, six features, three performance improvements and one breaking change.
Get ready because Angular 5 will be here before you know it!
Features
- animations: support negative query limit values (86ffacf), closes #19259
- compiler: enabled strict checking of parameters to an
@Injectable(#19412) (dfb8d21)
- compiler: reuse the TypeScript typecheck for template typechecking. (#19152) (996c7c2)
- core: support for bootstrap with custom zone (#17672) (344a5ca)
- platform-server: add an API to transfer state from server (#19134) (cfd9ca0)
- service-worker: introduce the @angular/service-worker package (#19274) (d442b68)
Performance improvements
- compiler: make the creation of
ts.Programfaster. (#19275) (edd5f5a)
- compiler: only use tsickle if needed (#19275) (8f95b75)
- compiler: speed up watch mode (#19275) (6665d76)
Breaking changes
- compiler: The method
ngGetConentSelectors(), deprecated in Angular 4.0, has been removed. Use
ComponentFactory.ngContentSelectorsinstead.
Update September 14, 2017
If the tentative schedule still stands, we should be days away from Angular 5. However, we might have to wait until next month. No matter when Angular 5 is released, one thing is certain: the countdown has begun.
The eighth beta was released yesterday with 11 bugfixes, a couple of breaking changes and three features.
Code refactoring
- router: remove deprecated
RouterOutletproperties (a9ef858)
- update angular to support TypeScript 2.4 (ca5aeba)
Features
- compiler: deprecate i18n comments in favor of
ng-container(#18998) (66a5dab)
- platform-server: provide a way to hook into renderModule* (#19023) (8dfc3c3)
- router: add ActivationStart/End events (8f79150)
Breaking changes
- the Angular compiler now requires TypeScript 2.4.x.
- router:
RouterOutletproperties
locationInjectorand
locationFactoryResolverhave been removed as they were deprecated since v4.
Update September 4, 2017
The Angular team is racing towards the finish line [a.k.a. Angular 5]! The seventh beta contains 21 bugfixes and nine features, as well as two breaking changes. Let’s have a quick look at what beta.6 has to offer:
Features
- http: deprecate @angular/http in favor of @angular/common/http (#18906) (72c7b6e)
- common: accept object map for HttpClient headers & params (#18490) (1b1d5f1)
- common: generate
closure-locale.tsto tree shake locale data (#18907) (4878936)
- compiler: set
enableLegacyTemplateto false by default (#18756) (56238fe)
- compiler-cli: add watch mode to
ngc(#18818) (cf7d47d)
- compiler-cli: add watch mode to
ngc(#18818) (06d01b2)
- compiler-cli: lower metadata
useValueand
dataliteral fields (#18905) (0e64261)
- compiler-cli: lower metadata
useValueand
dataliteral fields (#18905) (c685cc2)
- platform-server: provide a DOM implementation on the server (2f2d5f3), closes #14638
Code refactoring
Breaking changes
- core:
OpaqueTokenhas been removed as it was deprecated since v4. Use
InjectionTokeninstead.
- compiler: the compiler option
enableLegacyTemplateis now disabled by default as the
<template>element has been deprecated since v4. Use
<ng-template>instead. The option
enableLegacyTemplateand the
<template>element will both be removed in Angular v6.
Angular 4.4.0: First release candidate is here
In other news, the first release candidate for Angular 4.4.0 is here. It contains five bugfixes and two features, namely
- compiler: allow multiple exportAs names (#18723) (7ec28fe)
- core: add option to remove blank text nodes from compiled templates (#18823) (b8b551c)
For the complete up to date list of features and bugfixes, check out the changelog.
Update August 30, 2017
The beta season is in full swing! The Angular team released beta.5 and there are plenty of things to look at: it contains 11 bugfixes, eight features and a lot of breaking changes. One thing’s sure — the 6th beta release will keep you busy; in comparison with beta.4, this one has a longer list of breaking changes, features and bugfixes.
Let’s have a quick look at the breaking changes:
- router:
RouterOutletproperties
locationInjectorand
locationFactoryResolverhave been removed as they were deprecated since v4.
- compiler: –
@angular/platform-servernow additionally depends on
@angular/platform-browser-dynamicas a peer dependency.
-
- Breaking change:
- By default Angular now only contains locale data for the language
en-US, if you set the value of
LOCALE_IDto another locale, you will have to import new locale data for this language because we don’t use the intl API anymore.
- Features:
- you don’t need to use the intl polyfill for Angular anymore.
- all i18n pipes now have an additional last parameter
localewhich allows you to use a specific locale instead of the one defined in the token
LOCALE_ID(whose value is
en-USby default).
-after the
CommonModule(the order is important):
import { NgModule } from '@angular/core'; import { CommonModule, DeprecatedI18NPipesModule } from '@angular/common'; @NgModule({ imports: [ CommonModule, // import deprecated module after DeprecatedI18NPipesModule ] }) export class AppModule { }
Don’t forget that you will still need to import the intl API polyfill if you want to use those deprecated pipes.
- Date pipe
- Breaking changes:
- the predefined formats (
short,
shortTime,
shortDate,
medium, …) now use the patterns given by CLDR (like it was in AngularJS) instead of the ones from the intl API. You might notice some changes, e.g.
shortDatewill be
8/15/17instead of
8/15/2017for
en-US.
- the narrow version of eras is now
GGGGGinstead of
G, the format
Gis now similar to
GGand
GGG.
- the narrow version of months is now
MMMMMinstead of
L, the format
Lis now the short standalone version of months.
- the narrow version of the week day is now
EEEEEinstead of
E, the format
Eis now similar to
EEand
EEE.
- the timezone
zwill now fallback to
Oand output
GMT+1instead of the complete zone name (e.g.
Pacific Standard Time), this is because the quantity of data required to have all the zone names in all of the existing locales is too big.
- the timezone
Zwill now output the ISO8601 basic format, e.g.
+0100, you should now use
ZZZZto get
GMT+01:00.
- Features
- new predefined formats
long,
full,
longTime,
fullTime.
- the format
yyyis now supported, e.g. the year
52will be
052and the year
2017will be
2017.
- standalone months are now supported with the formats
Lto
LLLLL.
- week of the year is now supported with the formats
wand
ww, e.g. weeks
5and
05.
- week of the month is now supported with the format
W, e.g. week
3.
- fractional seconds are now supported with the format
Sto
SSS.
- day periods for AM/PM now supports additional formats
aa,
aaa,
aaaaand
aaaaa. The formats
ato
aaaare similar, while
aaaais the wide version if available (e.g.
ante meridiemfor
am), or equivalent to
aotherwise, and
aaaaais the narrow version (e.g.
afor
am).
- extra day periods are now supported with the formats
bto
bbbbb(and
Bto
BBBBBfor the standalone equivalents), e.g.
morning,
noon,
afternoon, ….
- the short non-localized timezones are now available with the format
Oto
OOOO. The formats
Oto
OOOwill output
GMT+1while the format
OOOOwill be
GMT+01:00.
- the ISO8601 basic time zones are now available with the formats
Zto
ZZZZZ. The formats
Zto
ZZZwill output
+0100, while the format
ZZZZwill be
GMT+01:00and
ZZZZZwill be
+01:00.
- Bug fixes
- the date pipe will now work exactly the same across all browsers, which will fix a lot of bugs for safari and IE.
- eras can now be used on their own without the date, e.g. the format
GGwill be
ADinstead of
8 15, 2017 AD.
- Currency pipe
- Breaking change:
- the default value for
symbolDisplayis now
symbolinstead of
code. This means that by default you will see
$4.99for
en-USinstead of
USD4.99previously.
- Deprecation:
- the second parameter of the currency pipe (
symbolDisplay) is no longer a boolean, it now takes the values
code,
symbolor
symbol-narrow. A boolean value is still valid for now, but it is deprecated and it will print a warning message in the console.
- Features:
- you can now choose between
code,
symbolor
symbol-narrowwhich gives you access to more options for some currencies (e.g. the canadian dollar with the code
CADhas the symbol
CA$and the symbol-narrow
$).
- Percent.
- common:
NgForhas been removed as it was deprecated since v4. Use
NgForOfinstead. This does not impact the use of
*ngForin your templates.
- common:
NgTemplateOutlet#ngOutletContexthas been removed as it was deprecated since v4. Use
NgTemplateOutlet#ngTemplateOutletContextinstead.
- core:
Testability#findBindingshas been removed as it was deprecated since v4. Use
Testability#findProvidersinstead.
- core:
DebugNode#sourcehas been removed as it was deprecated since v4.
- router: the values
true,
false,
legacy_enabledand
legacy_disabledfor the router parameter
initialNavigationhave been removed as they were deprecated. Use
enabledor
disabledinstead.
- core:
DifferFactory.createno longer takes ChangeDetectionRef as a first argument as it was not used and deprecated since v4.
- core:
TrackByFnhas been removed because it was deprecated since v4. Use
TrackByFunctioninstead.
- platform-webworker:
PRIMITIVEhas been removed as it was deprecated since v4. Use
SerializerTypes.PRIMITIVEinstead.
- platform-browser:
NgProbeTokenhas been removed from
@angular/platform-browseras it was deprecated since v4. Import it from
@angular/coreinstead.
- core:
ErrorHandlerno longer takes a parameter as it was not used and deprecated since v4.
- compiler: the option
useDebugfor the compiler has been removed as it had no effect and was deprecated since v4.
Just a quick reminder: Angular 5 is fast approaching
For the complete up to date list of features and bugfixes, check out the changelog.
Update August 3, 2017
Angular 5.0 beta is now in testing!
We’re on our way to the latest milestone in the path to Angular 5. As mentioned earlier this year, Angular 5 has a tentative release date in September. And so, as the summer continues, things are starting to move quickly with this upcoming release.
So, what’s up with Angular 5? Some new feature and performance improvements, but mostly a lot of bugfixes.
Features
- compiler: add representation of placeholders to xliff & xmb
- forms: add options arg to abstract controls
- router: add events tracking activation of individual routes
Performance Improvements
- latest tsickle to tree shake: abstract class methods & interfaces
- core: use native addEventListener for faster rendering.
For the complete up to date list of features and bugfixes, check out the changelog.
Update July 17, 2017
What’s new in Angular 4.3?
First things first: Angular 4.3 is a minor release following the announced adoption of Semantic Versioning. This means that there are no breaking changes and that it is a drop-in replacement for 4.x.x, according to the blog post announcing the release. It contains 24 bugfixes and 12 features.
What’s new?
- Say hello to HttpClient, a smaller, easier to use, and more powerful library for making HTTP Requests. More details here.
-.
Update June 29, 2017
It’s beta season!
Angular 4.2 was released in early June and now that the month is almost over, it’s time for the beta phase for 4.3 to begin.
Beta.0 has one feature [core: update zone.js to 0.8.12 (5ac3919)] and 12 bugfixes. There’s not much to tell but now that the beta phase has officially begun, we can look towards the [near] future —in our case, 4.3.
For more details about the bugfixes and feature, see the changelog.
Update June 9, 2017
Angular 4.2 is here and it comes bearing gifts, a.k.a five bug fixes, one feature [compiler-cli: introduce synchronous codegen API (b00b80a)] and two performance improvements:
- animations: do not create a closure each time a node is removed (fe6b39d)
- animations: only apply
:leaveflags if animations are set to run (b55adee)
RC.2 was released earlier this month, shortly after the release of RC.1 and RC.0. It consisted of five bug fixes, three features and one performance improvement.
Features
- compiler: emit typescript nodes from an output ast (#16823) (18bf772)
- compiler-cli: produce template diagnostics error messages (#17125) (230255f)
- tsc-wrapped: always convert shorthand imports (#16898) (ea8a43d)
RC. 1 had seven bug fixes and three features, namely:
- compiler: add location note to extracted xliff2 files (#16791) (08dfe91), closes #16531
- core: update zone.js to 0.8.10 and expose the flush method (#16860) (85d4c4b)
- tsc-wrapped: support template literals in metadata collection (#16880) (6e41add)
RC.0 consisted of 10 bug fixes, six features and one performance improvement.
Features
- animations: introduce a wave of new animation features (16c8167)
- animations: introduce routeable animation support (f1a9e3c)
- add .ngsummary.ts files to support AOT unit tests (547c363)
- introduce
TestBed.overrideProvider(#16725) (39b92f7)
- compiler: support a non-null postfix assert (#16672) (b9521b5)
- core: introduce fixture.whenRenderingDone for testing (#16732) (38c524d)
Update April 11, 2017
Now that Angular 4 is here, it’s time to worry about the next version. All jokes aside, it’s clear that Angular is more mature. Igor Minar, one of the keynoters at the ng-conf 2017 proudly announced that “the growth in Angular is fueled by people migrating from AngularJS to Angular.”
With maturity comes responsibility, so even though most people see major versions as breaking changes, for the Angular team they mean that they achieved a lot and they need extra time — hence the extended RC periods through which they collect feedback, Minar explained.
We also want to make sure it’s very simple for you to upgrade so we’re doing a lot to make sure major versions don’t mean big obstacles.
Angular 5: The countdown has begun
We don’t know a lot about the next version but we now know that it will be released in September/ October this year.
[version 5] is going to be a much better Angular and you’ll be able to take advantage of it much easier.
One of the biggest perks of Angular 4 is the fact that it is smaller, yet faster — changes were made under the hood to what AOT generated code looks like; their aim is to reduce the size of the generated code for users’ components by roughly 60 percent in most cases.
Angular 5 will be even better: Minar promised that it would be faster and smaller than Angular 4, the updates will be smooth and it will become simpler to compile Angular applications. Since the differences between Just-in-Time and Ahead-of-Time compilation can be frustrating, the latter will become the default, thus reducing friction.
Long-Term Support (LTS) for Angular
Minar revealed that all Google apps use the latest pre-release version of Angular and that the team feels very good about the stability of Angular. There are, of course, benefits to using the latest minor version but for those who cannot use it, the answer is Long-Term Support.
Version 4 is the first one to have LTS. For the next six months, the Angular team will be actively working on it [release features, bugfixes].
In October, version 4 will enter the long-term state and from then on only the critical fixes and security patches will be merged and released.
For more details about the road to version 5 and all the details presented at the ng-conf 2017, visit their YouTube channel.
Be the First to Comment! | https://jaxenter.com/road-to-angular-5-133253.html | CC-MAIN-2020-05 | refinedweb | 3,186 | 54.32 |
bortoni Wrote:Guys,
I've been wanting to learn python for a while, so I decided to try my hand at a script that would connect to easynews and allow you to search and browse through the following news groups:
alt.binaries.multimedia
alt.binaries.movies.divx
alt.binaries.movies.xvid
alt.binaries.vcdz
Before searching, fill in the settings with your easynews username and password.
This is my first script so please be gentle!
def ParseNzb(data) :
try :
## Get start/end points ##
Exp = reCompile('(<file.*?</file>)',reDotAll)
Tar = reFindall(Exp,data)
FileData = {}
RARTYPE = None
for FTag in Tar :
Exp = reCompile('<file.*?subject=\"(.*?)\">')
Subject = reSearch('subject=\".*?\">',FTag).span()
Fields = str(FTag[Subject[0]+9:Subject[1]-2]).rsplit(' - ')
FileNm = reSub('"','',Fields[-1])
FileNm = reSub('"','',FileNm)
FileNm = reSub('"','',FileNm)
Chk1 = reSearch(' yEnc| \(',FileNm,reIgnoreCase)
if ( Chk1 ) :
FileNm = FileNm[:Chk1.start()]
Spaces = FileNm.rsplit(' ')
FileNm = Spaces[-1]
FileNm = FileNm.strip()
## Now see what type of file it is.
Record = -1
Type_1 = reSearch('part\d+\.rar$',FileNm)
Type_2 = reSearch('\.r\d+$',FileNm)
if ( Type_1 ) or ( Type_2 ) :
RecPos = reSearch('\d+$',FileNm).span()
Record = int(FileNm[RecPos[0]:RecPos[1]])
if (Type_2) : Record += 2
elif ( reSearch('.rar$',FileNm) ) :
Record = 1
if Record == -1 :
print 'Ignored -- '+ str(FileNm)
elif FileData.has_key(Record) :
print 'Duplicates -- Cannot Sort This File..'
raise Exception
else :
##print 'Processed --'+ str(FileNm) +' - In slot: '+ str(Record)
FileData[Record] = FTag
return FileData
except :
print '--ParseNzb Fail--'
traceback.print_exc()
return None
Nick8888 Wrote:Am I correct in assuming this only works for easynet at the moment? I would love to give it a whirl.
chunk_1970 Wrote:Check out the "Trial Account" link on the left of the main page.
Quote:The Easynews free trial has been discontinued due to an increased use of fraudulent credit cards on the Internet. However, we still would like you experience, risk free, the premium Usenet services that we offer.
We have a new Refund Policy that allows you to test our services for 1 week or 1 gigabyte worth of downloads, whichever comes first.
If you are interested in our services, please check them out by signing up for a regular Easynews account. If you are not satisfied for any reason within the first week and 1 gigabyte of downloads, let us know and we will reverse the signup charge on your credit card and cancel your account. No questions asked. | http://forum.kodi.tv/showthread.php?tid=28077 | CC-MAIN-2017-09 | refinedweb | 392 | 58.89 |
import "github.com/stretchr/testify/mock"
Package mock provides.
const ( // Anything is used in Diff and Assert when the argument being tested // shouldn't be taken into consideration. Anything = "mock.Anything" )
AssertExpectationsForObjects asserts that everything specified with On and Return of the specified objects was in fact called as expected.
Calls may have occurred in any order.
MatchedBy can be used to match a mock call based on only certain properties from a complex struct or some calculation. It takes a function that will be evaluated with the called argument and will return true when there's a match and false otherwise.
Example: m.On("Do", MatchedBy(func(req *http.Request) bool { return req.Host == "example.com" }))
|fn|, must be a function accepting a single argument (of the expected type) which returns a bool. If |fn| doesn't match the required signature, MatchedBy() panics.
AnythingOfTypeArgument is a string that contains the type of an argument for use when type checking. Used in Diff and Assert.
func AnythingOfType(t string) AnythingOfTypeArgument
AnythingOfType returns an AnythingOfTypeArgument object containing the name of the type to check for. Used in Diff and Assert.
For example:
Assert(t, AnythingOfType("string"), AnythingOfType("int"))
Arguments holds an array of method arguments or return values.
Assert compares the arguments with the specified objects and fails if they do not exactly match.
Bool gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type.
Diff gets a string describing the differences between the arguments and the specified objects.
Returns the diff string and number of differences found.
Error gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type.
Get Returns the argument at the specified index.
Int gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type.
Is gets whether the objects match the arguments specified.
String gets the argument at the specified index. Panics if there is no argument, or if the argument is of the wrong type.
If no index is provided, String() returns a complete string representation of the arguments.
type Call struct { Parent *Mock // The name of the method that was or will be called. Method string // Holds the arguments of the method. Arguments Arguments // Holds the arguments that should be returned when // this method is called. ReturnArguments Arguments // The number of times to return the return arguments when setting // expectations. 0 means to always return the value. Repeatability int // Holds a channel that will be used to block the Return until it either // receives a message or is closed. nil means it returns immediately. WaitFor <-chan time.Time // Holds a handler used to manipulate arguments content that are passed by // reference. It's useful when mocking methods such as unmarshalers or // decoders. RunFn func(Arguments) // contains filtered or unexported fields }
Call represents a method call and is used for setting expectations, as well as recording activity.
After sets how long to block until the call returns
Mock.On("MyMethod", arg1, arg2).After(time.Second)
Maybe allows the method call to be optional. Not calling an optional method will not cause an error while asserting expectations
On chains a new expectation description onto the mocked interface. This allows syntax like.
Mock. On("MyMethod", 1).Return(nil). On("MyOtherMethod", 'a', 'b', 'c').Return(errors.New("Some Error"))
go:noinline
Once indicates that that the mock should only return the value once.
Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Once()
Return specifies the return arguments for the expectation.
Mock.On("DoSomething").Return(errors.New("failed"))
Run sets a handler to be called before returning. It can be used when mocking a method (such as an unmarshaler) that takes a pointer to a struct and sets properties in such struct
Mock.On("Unmarshal", AnythingOfType("*map[string]interface{}").Return().Run(func(args Arguments) { arg := args.Get(0).(*map[string]interface{}) arg["foo"] = "bar" })
Times indicates that that the mock should only return the indicated number of times.
Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Times(5)
Twice indicates that that the mock should only return the value twice.
Mock.On("MyMethod", arg1, arg2).Return(returnArg1, returnArg2).Twice()
WaitUntil sets the channel that will block the mock's return until its closed or a message is received.
Mock.On("MyMethod", arg1, arg2).WaitUntil(time.After(time.Second))
IsTypeArgument is a struct that contains the type of an argument for use when type checking. This is an alternative to AnythingOfType. Used in Diff and Assert.
func IsType(t interface{}) *IsTypeArgument
IsType returns an IsTypeArgument object containing the type to check for. You can provide a zero-value of the type to check. This is an alternative to AnythingOfType. Used in Diff and Assert.
For example: Assert(t, IsType(""), IsType(0))
type Mock struct { // Represents the calls that are expected of // an object. ExpectedCalls []*Call // Holds the calls that were made to this mocked object. Calls []Call // contains filtered or unexported fields }
Mock is the workhorse used to track activity on another object. For an example of its usage, refer to the "Example Usage" section at the top of this document.
AssertCalled asserts that the method was called. It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method.
AssertExpectations asserts that everything specified with On and Return was in fact called as expected. Calls may have occurred in any order.
AssertNotCalled asserts that the method was not called. It can produce a false result when an argument is a pointer type and the underlying value changed after calling the mocked method.
AssertNumberOfCalls asserts that the method was called expectedCalls times.
Called tells the mock object that a method has been called, and gets an array of arguments to return. Panics if the call is unexpected (i.e. not preceded by appropriate .On .Return() calls) If Call.WaitFor is set, blocks until the channel is closed or receives a message.
IsMethodCallable checking that the method can be called If the method was called more than `Repeatability` return false
MethodCalled tells the mock object that the given method has been called, and gets an array of arguments to return. Panics if the call is unexpected (i.e. not preceded by appropriate .On .Return() calls) If Call.WaitFor is set, blocks until the channel is closed or receives a message.
On starts a description of an expectation of the specified method being called.
Mock.On("MyMethod", arg1, arg2)
Test sets the test struct variable of the mock object
TestData holds any data that might be useful for testing. Testify ignores this data completely allowing you to do whatever you like with it.
type TestingT interface { Logf(format string, args ...interface{}) Errorf(format string, args ...interface{}) FailNow() }
TestingT is an interface wrapper around *testing.T
Package mock imports 12 packages (graph) and is imported by 4248 packages. Updated 2020-03-06. Refresh now. Tools for package owners. | https://godoc.org/github.com/stretchr/testify/mock | CC-MAIN-2020-16 | refinedweb | 1,181 | 59.8 |
11/07/2012 · Re: taskdef class weblogic.ant.taskdef.management.WLdeployn cannot be found. Kalyan Pasupuleti-Oracle Jul 11, 2012 6:49 AM in response to 948476. Attila Mezei-Horvati Chris, thanks for the reply. It does help however now I get errors like: package org.apache.xmlbeans does not exist when it tries to compile the classes.
I am trying to run the Quartz webapp from Sun Java Studio 8.1 using Tomcat. I am using the ant scripts that come with Quartz webapp. When I try to Run in Sun Studio 8.1, I get. Overview of WebLogic Web Services Ant Tasks. Ant is a Java-based build tool, similar to the make command but much more powerful. Ant uses XML-based configuration files called build.xml by default to execute tasks written in Java.
IT issues often require a personalized solution. With Ask the Experts™, submit your questions to our certified professionals and receive unlimited, customized solutions that work for you. Comp. This maps XJCTask to an Ant task named xjc. For detailed examples of using this task, refer to any of the build.xml files used by the sample applications. Synopsis Environment Variables. ANT_OPTS - command-line arguments that should be passed to the JVM.
Name taskdef Synopsis Adds a task to the current project. This is used to define tasks not already defined in the ant.jar’s default.properties file. Attributes classname all, String,- Selection from Ant: The Definitive Guide [Book]. Morten Mortensen Hi, Try with "jasper-runtime.jar" and "jasper-compiler.jar" - for e.g. a 4.1.27 it solves all problems with missing classes. I think!
22/02/2019 ·. public class Taskdef extends Typedef. Adds a task definition to the current project, such that this new task can be used in the current project. Two attributes are needed, the name that identifies this task uniquely, and the full name of the class including the packages that implements this task. Apache Ant is a Java-based build tool. Contribute to apache/ant development by creating an account on GitHub.
Provides Ant task classes for batch-processing report files. Ant Tasks When the number of different report files that one has to deal with in a project is significant, there is a need for automating repeating or re-occurring tasks that are to be performed on those files. 28/12/2019 · Ant example source code file taskdef.xml This example Ant source code file taskdef.xml is included in the"Java Source Code Warehouse" project. The intent of this project is to help you "Learn Java by Example" TM. taskdef not receiving the classpath properly?. I'm having bizarre problems getting ant to find the class referenced in a taskdef specifically with tomcat, if it matters. The jar shows up in the. If the IntegrationTester Ant task fails, the Ant script will also fail when this flag is set to true. Note: If you are executing Ant with -verbose or -debug and the flag is set to true, a failure will result in a Java exception stack being output. 9 replies For some reason, I can't use taskdef. I have added my jar file to the proper directory /usr/share/java/ant it shows up in the classpath when starting I have tried other jars, with the same effect I have tried ten zillion different taskdef styles with no effect. I've treid 1.4.1 1.5 these are all installed from the rpms tried: my.
09/11/2016 · Hi, I am maintaining a framework that defines an ANT transformation task that among others executes a FOP transformation. After an Oxygen update, I noticed the included FOP distribution had been updated, so I had to update the jar file references. taskdef是Ant内置任务,用于将任务定义添加到当前project中, 以便可以在当前project中使用此任务。taskdef是一种将adapter和adaptto属性分别设置为org.apac. 博文 来自: 荣耀之路.
Join GitHub today. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. In taskdef, il classpathref deve essere un riferimento a un definito in precedenza path. Il percorso dovrebbe includere un archivio jar che contiene la classe che implementa l’attività o dovrebbe puntare alla directory del file system che è il radice della gerarchia di classi. 21/07/2011 · I have not managed to add the
/ | http://casacaymanrealestate.com/taskdef-classname-in-ant | CC-MAIN-2021-04 | refinedweb | 715 | 68.16 |
[The Working Programmer]
Coding Naked: Naked Properties
By Ted Neward | March 2019
Welcome back, NOFers. (I've decided that sounds better than calling those who use naked objects "naked coders.") In the last piece, I started building out the domain model for my conference system, which allowed me to store speakers, but it's a pretty plain-vanilla setup thus far. I haven't done any of the sorts of things you'd normally need to do, like verifying that first or last names aren't empty, or supporting a "topics" field that's one of a bounded set of options, and so on. All of these are reasonable things to want to support in a UI (as well as a data model), so if this "naked" approach is going to be used for real-world scenarios, it needs to be able to do them, as well.
Fortunately, the NOF designers knew all that.
Naked Concepts
Let’s go back and talk about how NOF handles this in the general case before I get into the specifics.
Remember, the goal of NOF is to keep from having to write UI code that could otherwise be signaled using some aspect of the domain objects themselves, and the best way to do that sort of signaling is through the use of custom attributes. In essence, you use NOF custom attributes to annotate various elements of the domain object—properties and methods, for the most part—and the NOF client understands, based on the presence of the attribute, or data contained inside the attribute, that it has to customize the UI for that object in some manner. Note that NakedObjects doesn't need to actually define many of these custom attributes, as they come "for free" from the System.ComponentModel.DataAnnotations namespace in the standard .NET distribution. Reusability!
However, sometimes it’s not quite as simple as “this should always be the case.” For example, if certain properties have to be disabled based on the internal state of the object (such as an “on-parental-leave” property that needs to be disabled if an employee has no spouse or children), then code will need to be executed, and that’s something a custom attribute can’t provide. In those situations, NOF relies on convention: specifically, NOF will look for particularly named methods on the class. If the parental-leave property is named OnLeave, then the method that NOF will execute to determine whether to disable the OnLeave property would be called DisableOnLeave.
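To make the convention concrete, here's a sketch of what such a method could look like. The Spouse and Children properties are hypothetical stand-ins for whatever the real employee model holds; returning null tells NOF the property is enabled, while returning a string disables it and supplies the reason shown to the user:

public string DisableOnLeave()
{
  // Null means "enabled"; a non-null string disables the property
  // and is surfaced to the user as the reason why.
  if (Spouse == null && (Children == null || Children.Count == 0))
    return "Parental leave requires a spouse or children on record";
  return null;
}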
Let’s see how this works out in practice.
Naked Speakers, Redux
Currently, the Speaker class has just three properties on it, FirstName, LastName and Age. (That’s not counting the Id property, which isn’t visible to the user, and the FullName property, which is computed out of the FirstName and LastName properties; because they aren’t user-modifiable, they’re not really of concern here. Yet.) It wouldn’t make sense for this conference system to allow for empty first or last names, and a negative age probably wouldn’t make much sense, either. Let’s fix these first.
Specifying non-zero names is one of the easiest validations to apply, because it’s a static one. There’s no complicated logic required—the length of the strings supplied to each property has to be greater than zero. This is handled by the StringLength attribute on each property, like so:
[StringLength(100, ErrorMessage = "First name must be between 1 and 100 characters",
  MinimumLength = 1)]
public virtual string FirstName { get; set; }

[StringLength(100, ErrorMessage = "Last name must be between 1 and 100 characters",
  MinimumLength = 1)]
public virtual string LastName { get; set; }
That takes care of the empty-names problem.
Age is even easier, because I can use the Range custom attribute to specify acceptable minimum and maximum age ranges. (Would I really consider bringing in a speaker younger than 21? Possibly, because I want to encourage school-age kids to speak, but anyone younger than 13 would probably be a tough sell.) Applying the Range attribute, then, would look like this:
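[Range(13, 90, ErrorMessage = "Age must be between 13 and 90")]
public virtual int Age { get; set; }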
Note that the StringLength and Range attributes also take an ErrorMessageResourceName value, in case error messages are stored in resources (which they should be, for easy internationalization).
Build, and run; notice how the UI will now automatically enforce these constraints. Even better, to the degree possible, the constraints will be enforced in the database, as well. Nifty!
In and of themselves, these attributes act essentially as data model validations, with a small amount of UI to support them. However, you often want to change up elements of the UI that have nothing to do with data validation, as well. For example, currently, the attributes on the Speaker object are displayed in alphabetical order, which doesn’t make a ton of sense. It would be far more realistic (and useful) if the first value displayed was the full name, followed by the individual fields for first name, last name and age (as well as any other demographic information you need to capture and use).
While this could certainly become the “Welp, that was fun, time to break down and build our own UI” moment, it doesn’t need to—this is a common requirement, and NOF has it covered, via the MemberOrder attribute. Using this attribute, I can establish an “order” in which attributes should appear in the UI. So, for example, if I want the FullName attribute to appear first in the UI, I use MemberOrder and pass in the relative ordinal position “1,” like so:
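[MemberOrder(1)]
public virtual string FullName
{
  // Getter body sketched here; FullName is the computed property
  // introduced in the last installment.
  get { return FirstName + " " + LastName; }
}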
Next, I’d like to display first name, last name and age, but here I can begin to run into a problem. As I add new fields to this class over time (say, “middle name” or “email”), trying to keep all the ordinal positions in order can be tricky—if I move LastName to position 5, I have to go find everything 5 (and after) and bump each one to get the right positions. That’s a pain.
Fortunately, MemberOrder has a nifty little trick to it: The position itself can be a floating-point value, which allows fields to be “grouped,” so that now I can mark “FirstName,” “LastName,” and “Age” as ordinal positions “2.1,” “2.2,” and “2.3,” respectively, which essentially means that group “2” can be demographic information, and any new demographic information about the Speaker only requires reshuffling of the members in that particular group, as Figure 1 shows.
Figure 1 Grouping the Demographic Properties with MemberOrder

[MemberOrder(2.1)]
[StringLength(100, ErrorMessage = "First name must be between 1 and 100 characters",
  MinimumLength = 1)]
public virtual string FirstName { get; set; }

[MemberOrder(2.2)]
[StringLength(100, ErrorMessage = "Last name must be between 1 and 100 characters",
  MinimumLength = 1)]
public virtual string LastName { get; set; }

[Range(13, 90, ErrorMessage = "Age must be between 13 and 90")]
[MemberOrder(2.3)]
public virtual int Age { get; set; }
Note that there’s nothing really special about the values themselves—they’re used relative to one another and don’t represent any particular location on the screen. In fact, I could’ve used 10, 21, 22 and 23, if I wanted to. NOF is careful to point out that these values are compared lexicographically—string-based comparisons—and not numerically, so use whichever scheme makes sense to you.
What if users aren’t sure whether Age is in years or in days? It may seem completely obvious to you, but remember, not everybody looks at the world the same way. While it’s probably not a piece of information that needs to be present on the UI overtly, it should be something that you can signal to the user somehow. In NOF, you use the “DescribedAs” attribute to signal how the property should be described, which typically takes the form of a tooltip over the input area. (Remember, though, a given NOF client might choose to use a different way to signal that; for example, if a NOF client emerges for phones, which are touch-centric, tooltips don’t work well for that format. In that situation, that hypothetical NOF client would use a different mechanism, one more appropriate for that platform, to describe the property.)
And Speakers need a bio! Oh, my, how could I forget that—that’s like the one time speakers get to write exclusively about themselves and all the amazing things they do! (That’s a joke—if it’s one thing speakers hate most of all, it’s writing their own bio.) Bio is an easy attribute to add to the class, but most bios need to be more than just a word or two, and looking at the UI generated by NOF so far, all of the other strings are currently single-line affairs. It’s for this reason that NOF provides the “MultiLine” attribute, to indicate that this field should be displayed in a larger area of text entry than the typical string.
But I need to be careful about, in this case, a speaker’s biography, because free-form input offers the possibility for abuse: I might want/need to screen out certain words from appearing lest people get the wrong impression about the conference. I simply can’t have speakers at my show if their biographies include words like COBOL! Fortunately, NOF will allow for validation of input by looking for, and invoking, methods on the Speaker class that match a Validate[Property] convention, like so:
Wrapping Up
NOF has a pretty wide variety of options available to describe a domain object in terms that make it easier to automatically render the appropriate UI to enforce domain limitations, but thus far the model here is pretty simple. In the next piece, I’ll examine how NOF can handle a more complicated topic, that of relationships between objects. (Speakers need to be able to specify Topics they speak on, for example, and most of all, the Talks they propose.) But I’m out of space for the month, so who reviewed this article: Richard Pawson
Richard Pawson’s career in computing began in 1977 when he went to work for a company making pocket calculators; three weeks after he joined, the company announced the world’s first personal computer: the Commodore PET. In the intervening years, Richard has been a technology journalist, robotics engineer, electronic toy designer, management consultant, and software developer. Naked Objects started as his PhD thesis in 2003, and Richard has managed the development of the open source Naked Objects framework since then. In his ‘spare time’ he is restoring a 1939 Daimler Drop Head Coupe that originally belonged to England’s King George VI.
Receive the MSDN Flash e-mail newsletter every other week, with news and information personalized to your interests and areas of focus.
The Working Programmer - Coding Naked: Naked Properties
Ted Neward continues to build out the domain model for his conference system to support important capabilities, such as validating fields, by using Naked Objects Framework custom attributes or, for more complex scenarios, by relying on convention. Re...
Mar 1, 2019 | https://msdn.microsoft.com/en-us/magazine/mt833287 | CC-MAIN-2019-13 | refinedweb | 1,845 | 54.05 |
URL: <> Summary: problems with hyperg_U(a,b,x) for x<0 Project: GNU Scientific Library Submitted by: bjg Submitted on: Wed 21 Jul 2010 09:17:45 PM BST Category: Runtime error Severity: 3 - Normal Operating System: Status: None Assigned to: None Open/Closed: Open Release: 1.14 Discussion Lock: Any _______________________________________________________ Details: Reply-To: address@hidden From: Raymond Rogers <address@hidden> To: address@hidden Subject: Re: [Bug-gsl] hyperg_U(a,b,x) Questions about x<0 and values Date: Thu, 08 Jul 2010 11:49:15 -0500 Brian Gough <address@hidden> Subject: Re: [Bug-gsl] hyperg_U(a,b,x) Questions about x<0 and values of a To: address@hidden Cc: address@hidden Message-ID: <address@hidden> Content-Type: text/plain; charset=US-ASCII At Wed, 07 Jul 2010 10:14:34 -0500, Raymond Rogers wrote: > > > > 1) I was unable to find the valid domain of the argument a when x<0. > > Experimenting yields what seem to be erratic results. Apparently > > correct answers occur when {x<0&a<0& a integer}. References would be > > sufficient. Unfortunately {x<0,a<0} is exactly the wrong range for my > > problem; but the recursion relations can be used to stretch to a>0. If > > I can find a range of correct operation for the domain of "a" of width >1. > | Brian Gough | Thanks for the email. There are some comments about the domain for | the hyperg_U_negx function in specfunc/hyperg_U.c -- do they help? They explain some things, but I believe the section if (b_int && b >= 2 && !(a_int && a <= (b - 2))){} else {} is implemented incorrectly; and probably the preceding section as well. Some restructuring of the code would make things clea\ rer; but things like that should probably done in a different forum: email, blog, etc... I think the switches might be wrong. In any case it seems that b=1 has a hole. Is there a source for this code? Note: the new NIST Mathematical handbook might have better algorithms. I am certainly no expert on implementing mathematical \ functions (except for finding ways to make them fail). Ray Reply-To: address@hidden From: Raymond Rogers <address@hidden> To: address@hidden Subject: [Bug-gsl] Re: hyperg_U(a, b, x) Questions about x<0 and values of a, Date: Sun, 11 Jul 2010 14:43:43 -0500 hyperg_U basically fails with b=1, a non-integer; because gsl_sf_poch_e(1+a-b,-a,&r1); is throwing a domain error when given gamma(0)/gamma(a). Checking on and using b=1 after a-integer is checked is illustrated below in Octave. I also put in recursion to evaluate b>=2. I checked the b=1 expression against Maple; for a few values x<0,a<0,b=1 and x<0,a<0,b>=2 integer. -------------- Unfortunately the routine in Octave to call hyperg_U is only set up for real returns, which was okay for versions <1.14 . Sad to say I am the one who implemented the hyperg_U interface, and will probably have to go back :-( . Integrating these functions into Octave was not pleasant; but perhaps somebody made it easier. I did translate the active parts of hyperg_U into octave though; so it can be used in that way. 
Ray # # # Test function to evaluate b=1 for gsl 1.14 hyperg_U x<0 # function anss=hyperg_U_negx_1(a,b,x) int_a=(floor(a)==a); int_b=(floor(b)==b); #neg, int, a is already taken care of so use it if (int_a && a<=0) anss=hyperg_U(a,b,x); elseif (int_b && (b==1)) #from the new NIST DLMF 13.2.41 anss=gamma(1-a)*exp(x)*(hyperg_U(1-a,1,-x)/gamma(a)-hyperg_1F1(1-a,1,-x)*exp((1-a)*pi*I)); elseif (b>=2) #DLMF 13.3.10 anss=((b-1-a)*hyperg_U_negx_1(a,b-1,x) + hyperg_U_negx_1(a-1,b-1,x))/x; else anss=hyperg_U(a,b,x); endif # endfunction _______________________________________________________ Reply to this item at: <> _______________________________________________ Message sent via/by Savannah | http://lists.gnu.org/archive/html/bug-gsl/2010-07/msg00018.html | CC-MAIN-2018-05 | refinedweb | 662 | 54.02 |
I need a quick checksum (as fast as possilbe) for small strings (20-500 chars).
I need the source code and that must be small! (about 100 LOC max)
If it could generate strings in Base32/64. (or something similar) it would be perfect. Basically the checksums cannot use any "bad" chars.. you know.. the usual (){}[].,;:/+-\| etc
Clarifications
It could be strong/weak, that really doesn't matter since it is only for behind-the-scenes purposes.
It need not contain all the data of the original string since I will be only doing comparison with generated checksums, I don't expect any sort of "decryption".
Quick implementation in C, no copyrights from my side, so use it as you wish. But please note that this is a very weak "checksum", so don't use it for serious things :) - but that's what you wanted, isn't it?
This returns an 32-bit integer checksum encoded as an string containing its hex value.
If the checksum function doesn't satisfy your needs, you can change the
chk += ((int)(str[i]) * (i + 1)); line to something better (f.e. multiplication, addition and bitwise rotating would be much better).
EDIT: Following hughdbrown's advice and one of the answers he linked, I changed the
for loop so it doesn't call
strlen with every iteration.
#include <stdio.h> #include <stdlib.h> #include <string> char* hextab = "0123456789ABCDEF"; char* encode_int(int i) { char* c = (char*)malloc(sizeof(char) * 9); for (int j = 0; j < 4; j++) { c[(j << 1)] = hextab[((i % 256) >> 4)]; c[(j << 1) + 1] = hextab[((i % 256) % 16)]; i = (i >> 8); } c[8] = 0; return c; } int checksum(char* str) { int i; int chk = 0x12345678; for (i = 0; str[i] != '\0'; i++) { chk += ((int)(str[i]) * (i + 1)); } return chk; } int main() { char* str1 = "Teststring"; char* str2 = "Teststring2"; printf("string: %s, checksum string: %s\n", str1, encode_int(checksum(str1))); printf("string: %s, checksum string: %s\n", str2, encode_int(checksum(str2))); return 0; } | https://codedump.io/share/OrFHPjUwhVuS/1/fast-open-source-checksum-for-small-strings | CC-MAIN-2016-50 | refinedweb | 331 | 69.92 |
tobru.guru Newsletter #45
News
Software releases, news articles and other new stuff
[German] Generiere dein Datenauskunftsbegehren | Digitale Gesellschaft
#digiges, #dsg, #generator
Gemäss Datenschutzgesetz hat jede Person das Recht zu erfahren, welche Daten über sie gespeichert sind, und diese – wenn nötig – löschen oder korrigieren zu lassen. Dieses Auskunftsrecht ermöglicht es, die Kontrolle über die eigenen Personendaten zu behalten. Jede Person muss aber selber aktiv werden und dieses Recht wahrnehmen.
Very helpful tool by the Digitale Gesellschaft. I'll gladly make use of it and I'm curious to see what some companies have stored about me.
Release v0.11.0 · ActivityWatch/activitywatch
#release, #activitywatch
After 7 long months, we've finally got the v0.11 release ready, and it comes with a lot of new features, bug fixes, and UX improvements.
ActivityWatch is a very cool tool, and I'm running it since some time on my work laptop to see where I actually spend time. Very welcome upgrade it is. I would have many ideas for data to feed in, if I would just find time to write the data gathering scripts...
Announcing etcd 3.5 | etcd
#release, #etcd
When we launched etcd 3.4 back in August 2019, our focus was on storage backend improvements, non-voting member and pre-vote features. Since then, etcd has become more widely used for various mission critical clustering and database applications and as a result, its feature set has grown more broad and complex. Thus, improving its stability and reliability has been top priority in recent development.
I didn't realize that it has been so long since the last etcd release. And I think this is good as it proves that etcd is absolutely working behind the scenes.
Articles
Interesting articles and blog posts
What happens when you register a domain name? - Afnic
#dns, #domain, #registry, #explanation
The registrar and the registry must communicate with one another. The registrar asks the registry whether the name tested by the user is free and available for registration. The registrar then asks the registry to place the name in the database. This communication follows a standardised protocol called EPP, Extensible Provisioning Protocol.
Very cool insight into a process which I wasn't too familiar about.
The Wondrous World of Discoverable GPT Disk Images
#systemd, #partitions, #gpt, #linux
A number of years ago we started the Discoverable Partitions Specification which defines GPT partition type UUIDs and partition flags for the various partitions Linux systems typically deal with.
Wow, very cool. Linux gets overhauled on many places. The next time I log in to a modern Linux and wonder where `/etc/fstab` has gone, I now know where to look.
The social contract of open source
#opensource, #opinion
Remember that I didn't force you to take the software. The act of taking the software was done under free agency, so getting mad about the free gift of some open source code that you chose to take seems to be more your own problem than mine; you are totally capable of using that free agency again and stop using the source code.
Yep!
Tools
Open Source tools newly discovered
Utopia
#javascript, #react, #vscode, #design
A design and coding environment for React projects and components that runs in the browser. It combines VSCode with a design and preview tool, and full two-way synchronisation: design and code update each other, in real time. And unlike any design tool out there, it uses React code as the source of truth.
Ingress Builder
#kubernetes, #ingress, #helper
Ingresses are Kubernetes resources used to direct HTTP and HTTPS traffic to your cluster services. In order to correctly use an ingress resource a dedicated ingress-controller must be present and running in the host cluster. Ingress Builder allows users to select any annotation from the list of available controllers, to add to the ingress manifest.
nushell/nushell: A new type of shell
#linux, #shell
Nu draws inspiration from projects like PowerShell, functional programming languages, and modern CLI tools. Rather than thinking of files and services as raw streams of text, Nu looks at each input as something with structure. For example, when you list the contents of a directory, what you get back is a table of rows, where each row represents an item in that directory. These values can be piped through a series of steps, in a series of commands called a 'pipeline'.
senthilrch/kube-fledged: A kubernetes add-on for creating and managing a cache of container images directly on the cluster worker nodes, so application pods start almost instantly
#kubernetes, #operator, #image
kube-fledged is a kubernetes add-on for creating and managing a cache of container images directly on the worker nodes of a kubernetes cluster. It allows a user to define a list of images and onto which worker nodes those images should be cached (i.e. pre-pulled). As a result, application pods start almost instantly, since the images need not be pulled from the registry.
slackhq/nebula: A scalable overlay networking tool with a focus on performance, simplicity and security
#network, #overlay, #vpn
Nebula is a scalable overlay networking tool with a focus on performance, simplicity and security. It lets you seamlessly connect computers anywhere in the world. Nebula is portable, and runs on Linux, OSX, Windows, iOS, and Android. It can be used to connect a small number of computers, but is also able to connect tens of thousands of computers.
reactive-tech/kubegres: Kubegres is a Kubernetes operator allowing to create a cluster of PostgreSql instances and manage databases replication, failover and backup.
#kubernetes, #operator, #postgresql
K.
replicatedhq/outdated: Kubectl plugin to find and report outdated images running in a Kubernetes cluster
#kubernetes, #images, #update, #security, #kubectl
The plugin will iterate through readable namespaces, and look for pods. For every pod it can read, the plugin will read the podspec for the container images, and any init container images. Additionally, it collects the content sha of the image, so that it can be used to disambiguate between different versions pushed with the same tag. | https://tobru.ch/newsletter-45/ | CC-MAIN-2022-21 | refinedweb | 1,013 | 51.89 |
07 May 2013 00:06 [Source: ICIS news]
HOUSTON (ICIS)--India-based Jindal Poly Films has agreed to purchase ?xml:namespace>
The deal is valued at $235m (€179m) and should close by the end of July, Jindal Poly Films said in a press release.
The two companies agreed to the framework of the deal in October 2012 and signed a sales agreement on 3 May 2013, the New Delhi-headquartered flexible packaging films producer said.
Five BOPP production locations in the
Also included are a technology centre and sales office in
The acquisition will increase Jindal Poly Films’ global combined BOPP capacity to about 445,000 tons/year, according to the | http://www.icis.com/Articles/2013/05/07/9665396/indias-jindal-poly-films-to-purchase-exxonmobils-bopp.html | CC-MAIN-2014-35 | refinedweb | 111 | 53.55 |
Investors in BorgWarner Inc (Symbol: BWA) saw new options begin trading today, for the May 2020 expiration. One of the key inputs that goes into the price an option buyer is willing to pay, is the time value, so with 249 days until expiration the newly trading contracts represent a potential opportunity for sellers of puts or calls to achieve a higher premium than would be available for the contracts with a closer expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the BWA options chain for the new May 2020 contracts and identified one put and one call BWA, that could represent an attractive alternative to paying $36.49/share today.
Because the $25 2.20% return on the cash commitment, or 3.22% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for BorgWarner Inc, and highlighting in green where the $25.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $40.00 strike price has a current bid of $1.00. If an investor was to purchase shares of BWA stock at the current price level of $36.49/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $40.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 12.36% if the stock gets called away at the May 2020 expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if BWA shares really soar, which is why looking at the trailing twelve month trading history for BorgWarner Inc, as well as studying the business fundamentals becomes important. Below is a chart showing BWA's trailing twelve month trading history, with the $40.00 strike highlighted in red:
Considering the fact that the $40.74% boost of extra return to the investor, or 4.02% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 50%, while the implied volatility in the call contract example is 39%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 250 trading day closing values as well as today's price of $36.49) to be. | https://www.nasdaq.com/articles/interesting-bwa-put-and-call-options-for-may-2020-2019-09-09 | CC-MAIN-2022-05 | refinedweb | 405 | 61.26 |
Fix rewriting invalid shifts to errors
Fixes #16449 (closed).
5341edf3 removed a code in rewrite rules for bit shifts, which broke the "silly shift guard", causing generating invalid bit shifts or heap overflow in compile time while trying to evaluate those invalid bit shifts.
The "guard" is explained in Note [Guarding against silly shifts] in PrelRules.hs.
More specifically, this was the breaking change:
--- a/compiler/prelude/PrelRules.hs +++ b/compiler/prelude/PrelRules.hs @@ -474,12 +474,11 @@ shiftRule shift_op ; case e1 of _ | shift_len == 0 -> return e1 - | shift_len < 0 || wordSizeInBits dflags < shift_len - -> return (mkRuntimeErrorApp rUNTIME_ERROR_ID wordPrimTy - ("Bad shift length" ++ show shift_len))
This patch reverts this change.
Two new tests added:
T16449_1: The original reproducer in #16449 (closed). This was previously casing a heap overflow in compile time when CmmOpt tries to evaluate the large (invalid) bit shift in compile time, using
Integeras the result type. Now it builds as expected. We now generate an error for the shift as expected.
T16449_2: Tests code generator for large (invalid) bit shifts. | https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1021/diffs | CC-MAIN-2022-21 | refinedweb | 170 | 63.8 |
NAMEglob, globfree - find pathnames matching a pattern, free memory from glob()
SYNOPSIS
#include <glob.h>
int glob(const char *restrict pattern, int flags, int (*errfunc)(const char *epath, int eerrno), glob_t *restrict pglob); void globfree(glob_t *pglob);
DESCRIPTIONThe glob() function searches for all the pathnames matching():
- filesystem.) VALUEOn successful completion, glob() returns zero. Other possible returns are:
- GLOB_NOSPACE
- for running out of memory,
- GLOB_ABORTED
- for a read error, and
- GLOB_NOMATCH
- for no found matches.. glob() calls those functions, so we use race:utent to remind users.
CONFORMING TOPOSIX.1-2001, POSIX.1-2008, POSIX.2.
NOTESThe structure elements gl_pathc and gl_offs are declared as size_t in glibc 2.1, as they should be according to POSIX.2, but are declared as int in glibc 2.0.
BUGSThe glob() function may fail due to failure of underlying function calls, such as malloc(3) or opendir(3). These will store their error code in errno.
EXAMPLESOne example of use is the following code, which simulates typing]); | https://man.archlinux.org/man/glob.3.en | CC-MAIN-2022-05 | refinedweb | 164 | 66.23 |
Sample Results
This approach works, but I found that results will vary greatly based on the quality of input.
Transcribing a Reading by My Wife
I asked my wife to read something out loud as if she was dictating to Siri for about 1.5 minutes. She is a native English speaker and we recorded using a microphone on an iPhone 6s.
Which resulted in the following transcript:
00:00:00 this Dynamic Workshop aims to provide up to date information on pharmacological approaches, issues, and treatment in the geriatric population to assist in preventing medication-related problems, appropriately and effectively managing medications and compliance. The concept of polypharmacy parentheses taking multiple types of drugs parentheses will also be discussed, as the
00:00:30 is a common issue that can impact adverse side effects in the geriatric population. Participants will leave with a knowledge and considerations of common drug interaction and how to minimize the effects that limit function. Summit professional education is approved provider of continuing education. This course is offered for 6
00:01:00 . this course contains a Content classified under the both the domain of occupational therapy and professional issues.
I think that Google Cloud Speech API did an amazing job, getting over 95% of the content right. Especially considering that this was not a professional recording and that you can hear my kid saying something in the background 🙂
Transcribing a Radio Broadcast with Few Different Voices
A reader sent me the following audio file recorded from 95.5 Sports Hub radio (broadcast on January 26th 2018), the Toucher & Rich morning show. This, too, turned out better than I expected.
00:00:00 announced that there was going to be a new XXX FL it was going to start in two years and here’s what he had to say that you accept kickoff in 2020 quite frankly we’re going to give the game of football back to fans I’m sure everyone has a lot of questions for me but I also have a lot of questions for you in fact we’re going to ask a lot of questions and listen to players coaches
00:00:30 call experts technology executive members of the media and anyone else who understands and loves the game of football but most importantly we’re going to be listening to someone ask that the will the question of what would you do if you can reimagine the game of professional football would you frenchtons eliminate halftime would you have if you were commercial breaks but the game of foot
00:01:00 I’ll be faster when the rules be simpler can you ask Chef elevated fan Centric with all the things you like to see in the last of the things you don’t and no doubt a lot of Innovations along the way we will put you at a shorter faster-paced family-friendly and easier to understand game don’t get me wrong it’s still football but it’s professional football reimagined Sims 4 launching a 20
00:01:30 hey we have two years which is plenty of time to really get it right so aside from family friendly which I just think means that you have to stand for the national anthem I have no idea because the other one was very sex. That’s why is it either it was the cheerleaders with the super tight outfits and stuff cheerleaders were dressed and I stripped it sounds like a very good idea sounds like he has he has no plan no he does he’s taking everything he does have
00:02:00 and it said all the teams are going to be owned by the same entity he knows that they’re starting with a team and that they’re going to be shorter games with maybe no halftime with inferior Talent no not necessarily interior Town there’s already a saturation of football as is that is the biggest thing that people been complaining about the game what is he thinking you know what he said you ate yesterday you said we’re going to make it short and then we want your ideas no gimmicks all the things that God was just playing around
00:02:30 this does feel like a guy who’s had enormous prefer
Transcribing a Speech by Winston Churchill
I wanted to challenge the script further, so I decided to run in on a famous speech by Winston Churchill, titled The Threat of Nazi Germany.
Here is the audio file:
Which resulted in the following transcript:
00:00:00 many people think that the best way to escape War if the dwelling and then print them DVD for the younger generation they plump the grizzly photographs Before Their Eyes they feel that they dilate of generals and admirals they do not fit the crime I didn’t think they’d father
00:00:30 human strife how old is teaching in preventing us from attacking or invading any other country with the do so how would it help if we were attacked or invaded on stove that is a question we have to ask what did they does contempt of the Lord Beaverbrook
00:01:00 I’ll listen to the impassioned the field by George would they agree to meet that famous South African general identity I have bone responsibilities for the safety of this country in grievance time
00:01:30 we could convince and persuade them to go back play my play it seems to me you are rich we are what we are hungry it would be in Victoria’s we have been defeated you have valuable, we have not you have your name you have had the phone
00:02:00 set up pencil future about all I see are they would say you are weak and we are strong after all my friend your nephew all the way by that railing for nation of nearly 70 million the most educated industrial scientific discipline people in the world loving cup from childhood
00:02:30 all Epic Gloria Texas iron and death in battle at the noblest face for men yeah I need the nation we could have been done in order to augment its Collective Strength yeah definition of a group of preaching a gospel of intolerance and unrestrained by the wall by Parliament
00:03:00 public opinion in that country all packages speeches or morbid Wahlberg off of getting off the press I’m down you cable of Columbus they have a meeting dial shalt not kill it is the plenty of photos and or both now
00:03:30 play Ariana me with the upload speed I’m ready to that end lamentable weapon Javier against which all Navy is no defense and before which women and children so weak and frail capacity of the warriors on the front-line trenches all live equal adding partial patio
00:04:00 play with you but with the new weapon, new method of compelling the submission of racing bike terrorizing and torturing population and worst of all the more
00:04:30 the ball in cricket the structure of its social and economic life some more of those who may make it there praying love you too fat Grim despicable fact and invasive affect ionic again what are we to do
The result is an order of magnitude worse than with my wife's recording. Most likely this is caused by poor audio quality. In addition, Churchill used a lot of words that are no longer commonly used.
If you are still reading, let’s get started.
1. Sign Up for a Free Tier Account
Google Cloud offers a Free Tier plan, which will be used in this tutorial. An account is required to get an API key.
2. Generate an API Key
Follow these steps to generate an API key:
- Click “APIs & Services”
- Click “Credentials”
- Click “Create Credentials”
- Select “Service Account Key”
- Under “Service Account” select “New service account”
- Name service (whatever you’d like)
- Select Role: “Project” -> “Owner”
- Leave “JSON” option selected
- Click “Create”
- Save generated API key file
- Rename the file to api-key.json
Make sure to move the key into the cloned speech-to-text repo, if you plan to test this code.
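As a quick sanity check (optional, and my own suggestion rather than a required step), you can make sure the downloaded key file parses as valid JSON before going further; a malformed key file is a common stumbling block, as some of the comments below show:

import json

# Raises an error if api-key.json is missing or got mangled on download
with open("api-key.json") as f:
    json.load(f)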
3. Convert Audio File to Wav format
I ran into issues when trying to convert my audio file via command-line tools. Instead, I used Audacity (an open source audio editing tool) to convert my file to wav format. Audacity is great and I highly recommend it.
The steps to convert:
- Open file in Audacity
- Click “File” menu
- Click “Save other”
- Click “Export as Wav”
- Export it with default setting
4. Break up audio file into smaller parts:
# Clean out old parts if needed via rm -rf parts/*
ffmpeg -i source/genevieve.wav -f segment -segment_time 30 -c copy parts/out%09d.wav
Where source/genevieve.wav is the name of the input file, and parts/out%09d.wav is the format for output files. %09d indicates that the file numbers will be padded with 9 zeros (i.e. out000000001.wav), allowing files to be sorted alphabetically. This way the ls command returns files sorted in the right order.
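If you want to script the conversion and the splitting together, a rough sketch along these lines should work. This is my untested take, not part of the repo; it assumes ffmpeg is on your PATH, that the source and parts folders already exist, and that source/input.mp3 is a placeholder file name:

import subprocess

def convert_and_split(src):
    # Convert to 16-bit PCM mono WAV first; other encodings can trip up
    # the recognizer (see the pcm_s16le tip in the comments below)
    subprocess.run(["ffmpeg", "-i", src, "-acodec", "pcm_s16le", "-ac", "1",
                    "source/converted.wav"], check=True)
    # Split into 30-second chunks, zero-padded so they sort alphabetically
    subprocess.run(["ffmpeg", "-i", "source/converted.wav", "-f", "segment",
                    "-segment_time", "30", "-c", "copy",
                    "parts/out%09d.wav"], check=True)

convert_and_split("source/input.mp3")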
5. Install required Python modules
I added a requirements.txt file to the example repo with all the needed libraries. It can be used to install them all via:
pip3 install -r requirements.txt
The real hero on this list is the SpeechRecognition library. It does most of the heavy lifting.
The rest of the libraries came with the official google-api-python-client package.
I also used the tqdm module to show progress in the slower version of the script.
6. Running the Code
Finally, we can run the Python script to get the transcript. For example: python3 fast.py.
The slow version
Here is the GitHub link.
This script:
- Loads the API key from step 2 into memory
- Gets a list of files (chunks)
- For every file, calls the speech-to-text API endpoint
- Adds results to a list
- Combines all results and adds a timestamp (every 30 seconds)
- Saves results to transcript.txt
import os
import speech_recognition as sr
from tqdm import tqdm

with open("api-key.json") as f:
    GOOGLE_CLOUD_SPEECH_CREDENTIALS = f.read()

r = sr.Recognizer()
files = sorted(os.listdir('parts/'))

all_text = []

for f in tqdm(files):
    name = "parts/" + f
    # Load audio file
    with sr.AudioFile(name) as source:
        audio = r.record(source)
    # Transcribe audio file
    text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
    all_text.append(text)

transcript = ""
for i, t in enumerate(all_text):
    total_seconds = i * 30
    # Get hours, minutes and seconds from the chunk index
    m, s = divmod(total_seconds, 60)
    h, m = divmod(m, 60)
    # Format time as h:m:s - 30 seconds of text
    transcript = transcript + "{:0>2d}:{:0>2d}:{:0>2d} {}\n".format(h, m, s, t)

print(transcript)

with open("transcript.txt", "w") as f:
    f.write(transcript)
The code works, but it does take a while on longer source files.
Faster version
To speed things up, I added threading to my slow version. I describe the method used in detail in the Simple Python Threading Example post.
Here is the GitHub link.
The main difference is that I moved the processing into a function and added logic at the end to sort the processed results into the right order.
import os
import speech_recognition as sr
from tqdm import tqdm
from multiprocessing.dummy import Pool

pool = Pool(8)  # Number of concurrent threads

with open("api-key.json") as f:
    GOOGLE_CLOUD_SPEECH_CREDENTIALS = f.read()

r = sr.Recognizer()
files = sorted(os.listdir('parts/'))

def transcribe(data):
    idx, file = data
    name = "parts/" + file
    print(name + " started")
    # Load audio file
    with sr.AudioFile(name) as source:
        audio = r.record(source)
    # Transcribe audio file
    text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
    print(name + " done")
    return {
        "idx": idx,
        "text": text
    }

all_text = pool.map(transcribe, enumerate(files))
pool.close()
pool.join()

transcript = ""
for t in sorted(all_text, key=lambda x: x['idx']):
    total_seconds = t['idx'] * 30
    # Get hours, minutes and seconds from the chunk index
    m, s = divmod(total_seconds, 60)
    h, m = divmod(m, 60)
    # Format time as h:m:s - 30 seconds of text
    transcript = transcript + "{:0>2d}:{:0>2d}:{:0>2d} {}\n".format(h, m, s, t['text'])

print(transcript)

with open("transcript.txt", "w") as f:
    f.write(transcript)
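One caveat worth knowing (it comes up in the comments below): recognize_google_cloud raises speech_recognition.UnknownValueError when a chunk contains no recognizable speech, for example a trailing chunk of pure music or silence, and that exception will abort the whole pool. A more forgiving variant of the transcribe function, sketched here as my own suggestion rather than part of the repo, records an empty string for such chunks instead:

def transcribe_safe(data):
    idx, file = data
    name = "parts/" + file
    with sr.AudioFile(name) as source:
        audio = r.record(source)
    try:
        text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
    except sr.UnknownValueError:
        # Chunk had no recognizable speech (music, silence, noise)
        text = ""
    return {"idx": idx, "text": text}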
Conclusion
Results may vary, but there is utility even in poor transcriptions. For example, I had an hour-and-a-half audio recording from a hand-over meeting with my former co-worker. I remembered that he mentioned something at some point, but was dreading listening through a 1.5-hour audio file to find it. I ran the recording through this script and was able to quickly find the needed keywords, and the timestamp pointed me to the right part of the audio file.
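Since every line of transcript.txt starts with a timestamp, even a trivial search is enough to jump to the right spot in a recording ("meeting" below is just a placeholder keyword):

with open("transcript.txt") as f:
    for line in f:
        if "meeting" in line.lower():
            # The timestamp prefix shows where to listen in the audio
            print(line.strip())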
For native English speakers like my wife, Google Cloud Speech API can easily replace a professional transcribing service, at a fraction of the cost.
What if the file contains 4 minutes of audio? I think it's going to be a bit messy; instead of breaking them into many smaller parts, is there any way to break them into 4-minute pieces, e.g. two for an 8-minute audio file? Why does the Google Cloud Speech API only accept files no longer than 60 seconds? If the Google Cloud Speech API could transcribe a large audio file in one shot instead of our splitting it, that would have been easier for us, since we are not tech geeks, though we have the caliber to learn a bit of coding.
Does this API really help me to transcribe both small and large audio files into text format? I'm asking since I am a transcriber.
Hi Alex! Thank you for this article, excellent!!!
I tried to run the script to slice the audio and got the following error:
SyntaxError: invalid syntax
[Finished in 0.9s with exit code 1]
[shell_cmd: python3 -OO -u “/Users/SilvinoDiaz/Desktop/speech-to-text-master/untitled.py”]
[dir: /Users/SilvinoDiaz/Desktop/speech-to-text-master]
[path: /Users/SilvinoDiaz/opt/anaconda3/bin:/Users/SilvinoDiaz/opt/anaconda3/condabin:/Users/SilvinoDiaz/anaconda3/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/anaconda3/bin:/Library/Frameworks/Python.framework/Versions/3.6/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/share/dotnet:/opt/X11/bin:~/.dotnet/tools:/Library/Frameworks/Mono.framework/Versions/Current/Commands]
The IDE is ST3
I don’t know if it has to do with the installation of ‘anaconda’, which causes the failure.
Any idea?
Thank you very much.
Hi, Thanks for this code. For files longer than 10 minutes, chunks number 11 and 12 appear as the second paragraph and this part of the text becomes misplaced. My question is: why is this happening?
Alex, when I try and run ffmpeg to break up the audio file, it keeps giving me an error saying that it couldn’t segment and write the headers, how would I change the command so that ffmpeg creates each wav file as it goes??
Alex, I am getting this error when I try and use ffmpeg to break up my audio file:
How can I change the code so that it creates a new wav file everytime it needs to??
Alex, when I run ffmpeg to try and break up my audio file, it is giving me this error:
It is saying that it failed to open the segment; it seems this might mean that an empty .wav needs to be waiting for each segment??? How can I change the code so that it creates a .wav file when it needs to?
Hi Alex,
I am using your code to convert some voice commands to text, but I run into this error when I run the ‘fast.py’ script.
—
File “/Users/Tony/anaconda3/lib/python3.7/site-packages/speech_recognition/__init__.py”, line 937, in recognize_google_cloud
if “results” not in response or len(response[“results”]) == 0: raise UnknownValueError()
UnknownValueError
I think I’ve followed all the steps correctly, except for step 4, as my files are already smaller than 30 seconds. Have very little coding experience, any insight on this would be greatly appreciated! 🙂
Kind regards,
Tony
Hi, have you thought about implementing a self-hosted audio transcribe server. This would be a great addition to the community as I agree that many of the professional services costs too much for individuals who uses it occasionally (like me!). Thanks for the insightful article.
I have, but it would still need Google Cloud Auth, unless I wanted to pay for it myself. I think it would be fairly simple for somebody to do using the Google Cloud API as outlined in this article, but ultimately I didn’t feel like I wanted to make a business out of it and didn’t have time to work on it as a side project (my free time is fairly limited since I have two little kids).
Alex, probably a duplicate reply here, didn’t save first, my bad. I have made a fork and a couple of enhancements without over-engineering, and didn’t know if you want “forks” or “contributions” to a new branch or master. Sent a Tweet as well.
Hi Alex,
FYI – First, love it, great example of how to get off the ground! Thank you so much for what you have produced and shared!
QUESTION / ACTION REQUESTED: I have a couple of DCR/Issues I found and I have made changes to address them and wanted to know how you would propose integrating them?
My proposals
2a. A new GitHub project branched from yours, since it is the reference for the article
2b. You determine and establish collaboration guidelines on your GitHub project, and I and others like MP below create issues and code check-ins against them (with maybe dev tests 🙂 ) on a separate branch, which you can review and decide if they warrant inclusion in your project based on your goal and scope, and release as a new version
2c. Something better you or MP or others come up with.
Cheers!
Sorry, I don’t think I ever got notified of this. I just changed jobs, and it’s possible that I overlooked it.
I think it’s a great idea and I am happy to make you a co-owner of that if you are interested. Can you ping me on Twitter again or drop me a line here and we can continue the discussion via email.
Now that I think about it, I can just move the article version into a branch and make master a living thing. The repo already has 69 stars, so it would be a shame to give it up 🙂
I also faced the same error. It’s because of the ‘google-api-python-client’ version. Install the google-api-python-client as:
pip install google-api-python-client==1.6.4
Since my previous post I’ve solved all the issues that came about, and reading over the comments, the following function may help others too. I found that reducing the silence blocks, much like what would be useful for podcasts, solved all issues with returning null transcripts.
Silence how-to
remove_silence () {
    tempfile=$(date '+%Y%m%d%H%M%S')
    # Removes short periods of silence
    sox $1 $tempfile.wav silence -l 1 0.1 1% -1 2.0 1%
    # Shortens long periods of silence, ignoring noise bursts
    sox $1 $tempfile.wav silence -l 1 0.3 1% -1 2.0 1%
    mv -v $1 $tempfile'_original_'$1
    mv -v $tempfile.wav $1
}
Hi Alex, I’ve been updating the components for processing larger files, and the fast and slow scripts are pausing on seemingly kosher wav files; the fast script also seems to bring down the network even when I bring down the threads. I was wondering if there were any thoughts on writing out the transcription files more often, so that the whole batch of queries is not lost? And has anyone updated the script to work a little more failsafely over, say, a 10-hour audio chunk? Thanks a bunch, it's nice to have something to use to bring down the cost of online transcription services!
Hi Alex,
I am using a shorter version of the code on a single file:
##############
import speech_recognition as sr

r = sr.Recognizer()

with open("api-key.json") as f:
    GOOGLE_CLOUD_SPEECH_CREDENTIALS = f.read()

test_audio = sr.AudioFile('C://users//me//desktop//page2.wav')

with test_audio as source:
    audio = r.record(source)

r.recognize_google_cloud(audio, language='es-MX',
                         credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
##############
but I am getting two error messages for this snippet. The first is ModuleNotFoundError: No module named ‘oauth2client’. I have pip installed oauth2client as well as oauthlib and google-auth.
The second related error is:
RequestError: missing google-api-python-client module: ensure that google-api-python-client is set up correctly.
I haven’t been able to solve these issues despite troubleshooting at length. Do you have any idea how to fix this?
Sorry, no idea. Try using a virtual environment if you haven’t already, and maybe Python 2 instead of 3. You can control that with the virtual environment as well.
This post is getting kind of old; maybe it’s also a good time to check out Google's official Python client and see if it works better.
Hi Alex,
First off, thank you so much for this code! Now, I don’t know if the below error is an issue from my side or GCloud is being messy, but I would love any help you and this community can provide. Here is my error –
Traceback (most recent call last):
File “C:\Python36\lib\site-packages\speech_recognition\__init__.py”, line 930, in recognize_google_cloud
response = request.execute()
File “C:\Python36\lib\site-packages\oauth2client\_helpers.py”, line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File “C:\Python36\lib\site-packages\googleapiclient\http.py”, line 842, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “fast.py”, line 28, in
all_text = pool.map(transcribe, enumerate(files))
File “C:\Python36\lib\multiprocessing\pool.py”, line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File “C:\Python36\lib\multiprocessing\pool.py”, line 608, in get
raise self._value
File “C:\Python36\lib\multiprocessing\pool.py”, line 119, in worker
result = (True, func(*args, **kwds))
File “C:\Python36\lib\multiprocessing\pool.py”, line 44, in mapstar
return list(map(*args))
File “fast.py”, line 21, in transcribe
text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
File “C:\Python36\lib\site-packages\speech_recognition\__init__.py”, line 932, in recognize_google_cloud
raise RequestError(e)
speech_recognition.RequestError:
I’ve waited for 10 minutes after enabling the API and tried again, but no luck.
Thanks in advance.
Regards,
Rashmil.
Not sure, could be file formatting. Have you tried with sample files?
Hi Alex and Rashmil,
Have you found any solution to this issue? I have the same issue and don't know how to proceed.
Thanks in advance
Best
Ali
Hi Alex,
After changing the sound file I had better results. Still, if google.cloud cannot recognize some parts of the audio, an error pops up. So is there any way to tell the Google client to ignore parts of the audio that are not clear?
Thank you so much for providing this code. I would like to run the code for 100 audio files. How would that be possible?
Not sure, I think if you look at the pull requests in the repo somebody automated file conversion (although I haven’t merged that in yet). From there you may be able to automate it further.
Hi Alex, thanks for sharing your code. I managed to run it as-is and also used different mp3 audio files, which I converted to wav using Audacity. Works perfectly! I will try using a microphone as an audio source.
Once more many thanks.
Gideon
Thank you for this great work. I followed your steps, but I faced this error:
“C:\Program Files (x86)\Python37-32\python.exe” C:/Users/hudad/PycharmProjects/speech-to-text-master/slow.py
0%| | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
File “C:\Users\hudad\AppData\Roaming\Python\Python37\site-packages\speech_recognition\__init__.py”, line 885, in recognize_google_cloud
try: json.loads(credentials_json)
File “C:\Program Files (x86)\Python37-32\lib\json\__init__.py”, line 348, in loads
return _default_decoder.decode(s)
File “C:\Program Files (x86)\Python37-32\lib\json\decoder.py”, line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File “C:\Program Files (x86)\Python37-32\lib\json\decoder.py”, line 355, in raw_decode
raise JSONDecodeError(“Expecting value”, s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File “C:/Users/hudad/PycharmProjects/speech-to-text-master/slow.py”, line 19, in
text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS)
File “C:\Users\hudad\AppData\Roaming\Python\Python37\site-packages\speech_recognition\__init__.py”, line 886, in recognize_google_cloud
except Exception: raise AssertionError("``credentials_json`` must be ``None`` or a valid JSON string")
AssertionError: ``credentials_json`` must be ``None`` or a valid JSON string
Process finished with exit code 1
Please help
Luke, your last audio file is crashing the code because there is no speech to transcribe. Listen to your last file; if it is just music and no voice, delete it and it should work.
Hey Alex,
Thanks for putting together the comprehensive tutorial and code – I’ve managed to transcribe some of my own audio but am running into problems with other files.
I have a collection of files, all of which I’m converting to mono @ 48000hz (doing this to remove variables for debugging) and then running through fast.py.
The problem I’m encountering appears to occur when attempting to process the final 30s audio chunk in the ‘parts’ folder. For example, my current file has been split into 74 parts – all of which were successfully processed apart from #74.
This is the traceback I’m getting:
Traceback (most recent call last):
File “fast.py”, line 28, in
all_text = pool.map(transcribe, enumerate(files))
File “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py”, line 253, in map
return self.map_async(func, iterable, chunksize).get()
File “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py”, line 572, in get
raise self._value
speech_recognition.UnknownValueError
Do you have any suggestions why this might be the case?
Unsure why it’s working fine for some files, but not for others.
Thanks
Luke
Thanks again Alex for this code and your guide.
I am having the same problem as Luke. Some files just keep getting the same error ^^
Try listening to the last track. If there is no speech and just music, or audio without words, delete that track and try again.
Very good job. Thank you.
I tried your code for my country France (World champion ;=)). Excellent
Change in fast.py
1/ text = r.recognize_google_cloud(audio_data=audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS,language=”fr-FR”)
2/ transcript = transcript + “{:0>2d}:{:0>2d}:{:0>2d} {}\n”.format(h, m, s, t[‘text’].encode(‘utf8’))
and it’s OK to have text in french language.
Hi Alex,
Your code is very helpful… can you tell me what the code would be for punctuation at the end of each line?
Please share…
Regards,
Milan
Hello… any update on my problem?? Please share…
Hi,
I am getting the below error:
“Sync input too long. For audio longer than 1 min use LongRunningRecognize with a ‘uri’ parameter.”>”
Which I understand is due to the length of the audio file (more than 1 min). I googled the error and I got the suggestion mentioned in the web link. The above link ultimately takes me to the below sample code:
=======================
def transcribe_gcs(gcs_uri):
    """Asynchronously transcribes the audio file specified by the gcs_uri."""
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types

    client = speech.SpeechClient()
    audio = types.RecognitionAudio(uri=gcs_uri)
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=16000,
        language_code='en-US')

    operation = client.long_running_recognize(config, audio)
    response = operation.result(timeout=90)

    for result in response.results:
        print('Transcript: {}'.format(result.alternatives[0].transcript))
        print('Confidence: {}'.format(result.alternatives[0].confidence))
So does this mean I will have to re-write the code using a different set of modules, or can we adjust the “.long_running_recognize” function somewhere in your code?
amitesh
Hi Alex, does the Google Speech-to-Text API support multi-speaker recognition while transcribing? Also, does it output timestamps for each word or sentence as well? Sorry for shooting so many questions, but my final question is: does it have an offline version that one can use? Thanks.
I don’t know of a way to do this. There is an open GitHub issue if somebody wants to pitch in.
Hello Alex, I tried to generate an API key and it says that I have to create a billing account, which requires credit card information. So, how does it work? Is that free? Do I need to pay to get the script to work? Thanks.
Yes, unfortunately a credit card is required to register, but they do offer a free tier, so you shouldn't be charged anything.
How can we use this google API to convert streaming speech to text? What should be our code be looking like?
Hello Alex,
I am at the very early stage of this activity. i.e. I have installed all the libraries mentioned by you. I am using windows 10 to perform the activity.
I wanted to generate the API key, but I guess I need to pay for that, right? Second, I couldn't locate "API Manager" in the Google Cloud console. All I could see was 3 tiles.
I am not sure. You should be able to do it under the free trial. Re UI, maybe they redesigned it. Seems like other people were able to get it to work. I'll have to check it out later. If anybody knows, please comment.
Hi, Finally, I got the API key generated. I just had to browse around the website a bit more. Thank you.
The ffmpeg command “ffmpeg -i source/genevieve.wav -f segment -segment_time 30 -c copy parts/out%09d.wav” doesn't work when I try to run it:
Guessed Channel Layout for Input Stream #0.0 : mono
Input #0, wav, from ‘source/genevieve.wav’:
Duration: 00:01:10.33, bitrate: 768 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, mono, s16, 768 kb/s
[segment @ 0000021e48be0dc0] Opening ‘parts/out000000000.wav’ for writing
[segment @ 0000021e48be0dc0] Failed to open segment ‘parts/out000000000.wav’
Could not write header for output file #0 (incorrect codec parameters ?): No such file or directory
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Last message repeated 1 times
I don’t know how to fix it or what I am doing wrong.
I’m seeing the same problem… did you find a solution?
Hey José,
The -c copy parts/out%09d.wav part of the code expects there to be a folder in the speech-to-text-master folder called “parts”.
Create this and the parts will be saved there!
Found a way to avoid breaking up a long audio file:
1. Convert the audio file to FLAC (downmix from stereo to mono) — Audacity can export to FLAC, make note of the bitrate
2. Upload FLAC file to Google Cloud Storage — create new bucket if need be, no need to make it public
3. Edit transcribe_async.py — find the bitrate for FLAC and change it accordingly; also update the timeout value to 600 (10m)
4. Run command: python transcribe_async.py gs://bucketname/filename.flac
Hello Alex, thank you very much for your collaboration.
Alex, if I wanted to change the language of the API, for example, the parameter language_code = ‘es-CO’, where should I do it? Thank you
I didn’t have this use case, and I'm not sure that the 3rd party library that I used supports this option.
This example from Google might be helpful, but I did not try it myself:
did you manage to make it work?
The tutorial is great. It is working for me. Nevertheless, my audio files are also non-English. Have you found a solution for setting the language?
I managed to set the language. If you use the slow.py version, you could modify line 19 where the “recognize_google_cloud” function of the library is used like this:
text = r.recognize_google_cloud(audio, credentials_json=GOOGLE_CLOUD_SPEECH_CREDENTIALS, language=”de-DE”)
See the documentation for recognize_google_cloud in the speech_recognition library reference.
Seems to work for me 🙂
Here’s something I tried. I already had WAV recordings I obtained from an MP3 Player.
Hence, I decided to skip the MP3->WAV conversion step.
I ran into multiple errors, mainly due to format inconsistency with the native WAV type.
And so, I’m posting this.
I’ve used “VOICE001.wav” as an example. It works well with MP3 inputs as well.
For MP3, skip step 1.
Converting to the right WAV format
Check for your WAV file’s properties.
ffprobe VOICE001.wav
# Input #0, wav, from ‘VOICE001.wav’:
Duration: 00:01:16.54, bitrate: 128 kb/s
Stream #0:0: Audio: adpcm_ima_wav ([17][0][0][0] / 0x0011), 32000 Hz, 1 channels, s16p, 128 kb/s
Convert & Replace the WAV file to native type using Audacity.
Again
ffprobe VOICE001.wav
# Input #0, wav, from ‘VOICE001.wav’:
Duration: 00:01:16.28, bitrate: 512 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 32000 Hz, 1 channels, s16, 512 kb/s
For remaining WAV files, use the native format details for conversion using ffmpeg.
ffmpeg -i VOICE001.wav -acodec pcm_s16le -ar 32000 VOICE001-win.wav
# Output #0, wav, to ‘VOICE001-win.wav’:
Metadata:
ISFT : Lavf58.3.100
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 32000 Hz, mono, s16, 512 kb/s
Metadata:
encoder : Lavc58.9.100 pcm_s16le
size= 4768kB time=00:01:16.28 bitrate= 512.0kbits/s speed= 246x
video:0kB audio:4768kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.001598%
* Here, the Audio Codec & Sampling Rate fields have been altered to fit the native format settings.
I tried using the code with the source files that you provided (genevieve.wav), however I get the following error:
ValueError: Audio file could not be read as PCM WAV, AIFF/AIFF-C, or Native FLAC; check if file is corrupted or in another format
I did not change any code. Any ideas on what I’m doing wrong here?
Did you generate parts with ffmpeg?
I just re-ran it fresh and it worked for me. I am using Python 3 on macOS.
What system are you on, at what point does it fail?
I’m not sure exactly what I was doing wrong, but it works now. Sorry for the inconvenience.
Hi,
Like @Jamshed, I’m getting that same error when I run on genevieve.wav:
ValueError: Audio file could not be read as PCM WAV, AIFF/AIFF-C, or Native FLAC; check if file is corrupted or in another format
It also includes this in the result:
wave.Error: file does not start with RIFF id.
I checked the file:
$ file out000000002.wav
out000000002.wav: RIFF (little-endian) data, WAVE audio, Microsoft PCM, 16 bit, mono 48000 Hz
$ file -i out000000002.wav
out000000002.wav: regular file
$ mediainfo out000000002.wav
General
Complete name : out000000002.wav
Format : Wave
File size : 966 KiB
Duration : 10 s 302 ms
Overall bit rate mode : Constant
Overall bit rate : 768 kb/s
Writing application : Lavf56.36.100
Audio
Format : PCM
Format settings : Little / Signed
Codec ID : 1
Duration : 10 s 302 ms
Bit rate mode : Constant
Bit rate : 768 kb/s
Channel(s) : 1 channel
Sampling rate : 48.0 kHz
Bit depth : 16 bits
Stream size : 966 KiB (100%)
So I’m wondering if something is wrong with my ffmpeg install? Any advice appreciated, and thank you for sharing all this.
Sorry, not sure. I did mine on Mac OS, with ffmpeg installed via Homebrew. What is your set up?
I solved it. It seemed to be conflicting packages in my python install. I set up a fresh python3 environment, re-installed ffmpeg etc, and it works really really well now. Thanks!
Hi Alex
One issue I found is that if the number of files in the parts folder exceeds the number of pool workers (say you have 20 files in the parts folder and pool = Pool(8)), only the first 8 files are processed in ORDER, and after that all remaining files in the parts folder are processed OUT of sequence. I tried a few things but it is still not working. It seems like even though the map function is supposed to keep the sort order, for some reason the order is only kept for the first 8 files.
Strange, what platform/python version are you using?
Using AWS EC2 Amazon Linux, Python 3.6.
I have a wav file of about 60 MB. I partition the file into 55- or 60-second chunks, which generates about 57 files in the parts folder. I use a pool size of 8; the first 8 files are in order, the remaining are all in mixed order.
I tried to sort the list first and confirmed that it's in order, but after the first 8 files the order is lost. Trying the Google async approach but it's not working yet.
Reading over the code I see that I take an extra step to sort by idx. So the only thing I can think of is that those ids come in the wrong order.
Can you confirm that when you call os.listdir the files show up in the right order?
No, they are not, and what I had done was to apply a sort like: files = sorted(os.listdir('parts/')). If I don’t use the sort, the entire transcript is all over the place, meaning the beginning of the wav file could be transcribed in the middle of the text and so on. Next I applied sorted(os.listdir('parts/')) and confirmed in the shell that all the “files” are sorted. Next I ran the script and I confirmed that ONLY the first batch of the pool (in this case only the first 8 files) is ordered correctly; the next pool worker loses the sort again. Do you know what I mean?
here is the list dir wihtout the sort:
Here is the list dir with sort
But still for some reason only the first batch of the pool workers are in the right order in the transcribe file, starting 0009.wav on wards the transcribe file is no longer in order.
Even though the map function is supposed to keep the order.
Strange
Even if
mapdoesn’t keep them sorted, this line
sorted(all_text, key=lambda x: x['idx']):shoudl re-sort them back in order.
Try to debug this sort/idx and see if something funky happens around there.
I am having the same problem as daz… I added the sort also and it is not sorting correctly.. ( on the fast)
I am testing the slow ( unthreaded verison) to see if it is the threading that is causing the ordering problem.
files =sorted(os.listdir(‘parts/’))
parts/out0000.wav started
parts/out0002.wav started
parts/out0006.wav started
parts/out0010.wav started
parts/out0014.wav started
parts/out0008.wav started
parts/out0004.wav started
The limitation for 60 seconds only applies to synchronous requests (). Is there a reason you didn’t use an asynchronous request rather than splitting up the file?
I just didn’t know that was an option. Thanks for the tip, I’ll have to investigate. May be it was just a limitation of the library I was using.
Tried the google async example but it fails half way through. Do u have a working example with the google async to concert a wav file to text?
Thanks
Is there a way to overcome the 30 sec limitation where I can do the whole file in one try? Or if I have to break the file would it be possible to have the transcript numbered? Like if the input wave file is wave01.wav wave02.wav the output be transcript0102.txt? thanks for the great script.
Sorry I don’t think I follow. I believe it already does both, final transcript is one text file.
Here is the use case: I have multiple wav files, Alex.wav, Vida.wav, Jim.wav. I like to modify the program such that it reads the inputwav folder containing all the wav files (alex.wav, vida.wav, jim.wav) and runs it through the python program to output alex_transcript.txt, vida_transcript.wav, jim_transcript.wav. But I am having difficulty getting it to work. So I ran each files individually. Thanks Alex
Ah I c. Yes, then it goes back to figuring a way to convert file to proper wav programmatically and then calling the split files command (and probably adding a clean up step later).
I didn’t get this far.
Another idea that I didn’t get to do is splitting file by silence around 30 seconds, instead of hard 30 second split, which can cut mid sentence/word.
God luck! Let me know if you figure any of this out.
“ffmpeg -i input.mp3 output.wav” converts the mp4 file to wav file without any compression.
It is better to have a command to do the task instead of a new software if we are we are automating a task
Unfortunately something was off about this type of wav, which I did not dig in to figure out. Transcription did not work with wav created like this. May be it was just something local to my Mac.
I tried the same thing, but for some reason I think it read the wav file backwards, meaning it starts from the end of the file transcribing. Thanks Alex for pointing this out. I go back to using Audacity.
Thanks for writing all this up! It’s been super helpful. Not sure if it’s still an issue, but I had the same problem. It seems like ffmpeg ignores the format when you’re doing the segmentation… Running it in two lines works for me, though there was probably a better way to actually fix the problem. ffmpeg -i db/foo.m4a -c:a pcm_s16le db/stage1.wav | https://www.alexkras.com/transcribing-audio-file-to-text-with-google-cloud-speech-api-and-python/ | CC-MAIN-2020-24 | refinedweb | 7,012 | 73.27 |
29 March 2011 18:20 [Source: ICIS news]
SAN ANTONIO, Texas (ICIS)--MISC Berhad, a subsidiary of Malaysia’s state-owned oil and gas firm Petronas, expects to be done with its fleet expansion by early next year, in time for better demand that should start in the second quarter of 2012, a company official said late on Monday.
In April, MISC expects delivery of two vessels – a 45,000dwt (deadweight tonne) ship and one with a 19,000dwt capacity. In 2012, two more 19,000dwt vessels will be added to the fleet, the official said on the sidelines of the International Petrochemical Conference (IPC).
So far, the company has received eight 38,000dwt vessels, three 45,000dwt vessels and two 19,000dwt vessels, he said.
Some old vessels would be scrapped, the official said, without providing details.
MISC is the largest single owner-operator of LNG carriers in the world, according to its website.
Shipments of LNG into ?xml:namespace>
But overall shipping demand this year has not improved significantly, he said.
Come the second quarter of 2012, operating conditions for the shipping industry is expected to turn better, the official said.
MISC currently has a fleet of more than 100 vessels, including LNG, petroleum, chemical tankers and containerships that operate in more than 40 countries, based on the company’s website.
Hosted by the National Petrochemical & Refiners Association | http://www.icis.com/Articles/2011/03/29/9448185/npra-11-malaysias-misc-to-complete-fleet-expansion-by-q2.html | CC-MAIN-2014-52 | refinedweb | 230 | 51.48 |
Akka actor ask FAQ: Can you share an example that shows how one Akka actor can ask another actor for information?
Sure. Here’s a quick Scala example to demonstrate how one Akka actor can ask another Akka actor for some information and wait for a reply. When using this “ask” functionality, you can either use the
ask method, or the
? operator, and I show both approaches below:
import akka.actor._ import akka.dispatch.Await import akka.dispatch.Future import akka.pattern.ask import akka.util.Timeout import akka.util.duration._ implicit val timeout = Timeout(5 seconds) val future = myActor ? AskNameMessage val result = Await.result(future, timeout.duration).asInstanceOf[String] println(result) // (2) this is a slightly different way to ask another actor val future2: Future[String] = ask(myActor, AskNameMessage).mapTo[String] val result2 = Await.result(future2, 1 second) println(result2) system.shutdown }
The future/await syntax is shown in the Akka documentation, but the part I wasn’t sure about was how my
TestActor should reply to this request. Should it simply return a
String, or should it return a
String to the sender using the
! operator? As you can see, using the
! operator is the correct approach.
It's important to note that this example shows a blocking approach, which may not be good for many cases. But in my real-world problem (not shown here), one of my actors needs to query another actor about its state, and with the current design, I need something like this ask/future/timeout/await/result approach to get that state information.
Also note that there might be better ways to approach this problem, and other ways to “ask” another actor about its state, but for today, this is what I know. | https://alvinalexander.com/scala/scala-akka-actors-ask-examples-future-await-timeout-result/ | CC-MAIN-2022-33 | refinedweb | 293 | 50.63 |
This Instructable will show how to use any infrared remote control with a Arduino..
Teacher Notes
Teachers! Did you use this instructable in your classroom?
Add a Teacher Note to share how you incorporated it into your lesson.
Step 1: Get the Arduino IRremote Library
The first thing you will need to do is get the Arduino IRremote library from this link IRremote library download..
The site has the instructions on how to download and install the library.
The installation of the library is easy, just follow all the instructions to make sure it will run properly, When you unzip the folder into a new folder remember to open the new folder first then copy the IRremote library file!!
After you get the library installed the next step is to find a remote to use and test it to see if it works. If you are not sure if your remote works testing it is simple, if you look at it when you push a button you wont see anything. To see if it is working all you will need is a digital camera, The camera on your phone will work fine. Turn the camera on and point the remote at your camera, when you push one of the buttons on the remote you will see it light up on your camera screen. You can't see IR light but your camera can.
Step 2: Build the Circuit
The parts you will need
IR Sensor (got mine out of a junk DVD player).
100uF capacitor.
Resistor to bring the voltage to the right level for your IR sensor if needed.
Breadboard.
Jumper wires.
Any Arduino board.
The Circuit
The two pictures above show the simple circuit you will need to build, The pin configuration of your IR sensor may be different so please look up the data sheet and modify the construction as needed...
Step 3: Maping Your Remote!
Now that you have the circuit built its time to map your remote. The first thing to do is open the arduino IDE and then go to examples and open IRrecvDemo. After you have it open go down the code till you find the line that is circled in the picture above and delete the ( , HEX), the reason to do this is because it will be easer to work with when you are writing your code. Using HEX numbers will complicate things unless you really under stand how to use them when writing your arduino sketch.
Next its time to get your remote codes, All you have to do is open the serial monitor then point your remote at the IR sensor and push a button. When you push a button a number will come up on the serial monitor, write that number down so you can use it when writing your sketch. If you hold down a button you will notice that a different number will show up and it will be the same for any button you hold down, write this number down two it will be useful if you want to have values change in your sketch if you hold down a button.
Step 4: Using Your IR Codes in a Sketch..
Now that you have mapped your remote its time to use the codes to control something, the following sketch will show you how to control the speed of a old computer cooling fan with your remote.
The following code I wrote to control a 12v fan, you will need to use a transistor and a 12v power supply for this to work. The code is pretty straight forward so I'm not going to explain it, just put your IR codes in where it says too be placed and your good to go..
#include <IRremote.h>
int RECV_PIN = 11; //conect IR receiver output to pin 11
IRrecv irrecv(RECV_PIN);
decode_results i;
int fan = 9; //conect fan to pin 9
int dir =0 ;
int val = 0;
void setup()
{
Serial.begin(9600);
irrecv.enableIRIn(); // Start the receiver
pinMode(fan, OUTPUT);
}
void loop() {
analogWrite(fan, val);
if (irrecv.decode(&i))
{
if (i.value == put your code here && dir == 0) // put the ir code for fan speed + here
{
val = val + 10;
dir = 1;
}
else if (i.value == put your code here && dir == 1) // this is where you put your hold down button code
{
val = val + 10;
dir = 1;
}
else if (i.value == put your code here && dir == 1) // put the ir code for fan speed - here
{
val = val - 10;
dir = 0;
}
else if (i.value == put your code here && dir == 0) // this is where you put your hold down button code
{
val = val - 10;
dir = 0;
}
irrecv.resume(); // Receive the next value
}
val = constrain(val, 0, 255);
Serial.println(val);
}
Discussions | https://www.instructables.com/id/Use-any-IR-Remote-With-Your-arduino/ | CC-MAIN-2019-43 | refinedweb | 791 | 75.13 |
Related
Tutorial
Introduction to Jest Snapshot Testing.
Snapshot testing is a type of testing in Jest which monitors regression in your code and also serves as an integration test. The first means that if you add more code to your project and something small breaks, snapshot testing can catch it. The second means that snapshot testing is a way of making sure an entire component runs the way you intend it to.
The way snapshot testing works is that the very first time you run
jest, snapshots are generated of the DOM. On subsequent runs of your test suite, the DOM that’s constructed gets compared to these snapshots. Since you may have changed your code, your snapshots still matching the ones generated the first time tells you things are still working.
Some questions naturally come up: what if I make a significant change to your program which results in different DOM contents? Jest allows you to generate new snapshots and such a scenario would warrant that. What if there is non-deterministic content on my page? There are multiple ways to handle this, and we’ll see this shortly!
App Setup
We’ll now setup our application. Head over to the setup section of our tutorial on testing Vue using Jest for setting up a simple app for testing. Here’s what your
App.vue file could look like:
<template> <div id="app"> <div> <h3>Let us test your arithmetic.</h3> <p>What is the sum of the two numbers?</p> <div class="inline"> <p>{{ x1 }} + {{ x2 }} =</p> <input v- <button v-on:Check Answer</button> </div> <button v-on:Refresh</button> <p>{{message}}</p> </div> </div> </template> <script> export default { name: 'App', data() { return { x1: Math.ceil(Math.random() * 100), x2: Math.ceil(Math.random() * 100), guess: "", message: "" } }, methods: { check() { if (this.x1 + this.x2 === parseInt(this.guess)) { this.message = "SUCCESS!" } else { this.message = "TRY AGAIN" } }, refresh() { this.x1 = Math.ceil(Math.random() * 100); this.x2 = Math.ceil(Math.random() * 100); } } } </script> <style> #app { font-family: Avenir, Helvetica, Arial, sans-serif; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-align: center; color: #2c3e50; margin-top: 60px; } .inline * { display: inline-block; } img { height: 350px; } </style>
And here’s what our started
app.spec.js looks like:
import { mount } from "@vue/test-utils"; import App from "./../src/App.vue"; describe("App", () => { // Inspect the raw component options it("has data", () => { expect(typeof App.data).toBe("function"); }); }); describe("Mounted App", () => { const wrapper = mount(App); test("is a Vue instance", () => { expect(wrapper.isVueInstance()).toBeTruthy(); }); it("renders the correct markup", () => { expect(wrapper.html()).toContain( "<p>What is the sum of the two numbers?</p>" ); }); // it's also easy to check for the existence of elements it("has a buttons", () => { expect(wrapper.contains("button")).toBe(true); }); it("renders correctly with different data", async () => { wrapper.setData({ x1: 5, x2: 10 }); await wrapper.vm.$nextTick(); expect(wrapper.text()).toContain("10"); }); it("button click without correct sum", () => { expect(wrapper.vm.message).toBe(""); const button = wrapper.find("button"); button.trigger("click"); expect(wrapper.vm.message).toBe("TRY AGAIN"); }); it("button click with correct sum", () => { wrapper.setData({ guess: "15" }); const button = wrapper.find("button"); button.trigger("click"); expect(wrapper.vm.message).toBe("SUCCESS!"); }); });
Keep in mind that
it is just an alias for
test in Jest. Run
npm run test and all tests should pass.
Let’s Start Snapshot Testing!
Run
npm install --save-dev jest-serializer-vue then make the below addition to
package.json
{ ... "jest": { "snapshotSerializers": ["jest-serializer-vue"] }, ... }
Add some code to the second describe block.
it('renders correctly', () => { const wrapper = mount(App) expect(wrapper.element).toMatchSnapshot() })
Run your tests and notice how the first time you run your tests you should see “1 snapshot written”. Notice that a directory called
__snapshots__ has been created next to
app.spec.js.
Feel free to take a look at the snapshot file, which has a file ending with the
.snap extension; you’ll notice that the whole template section of the component has been reproduced, except for attributes that have the prefix
v-.
Run your tests again.
Error!
Why?!
If you examine the snapshot test’s output in your Terminal, it’s clear why: we have randomly generated numbers on our page. You should also be able to see what the numbers in the snapshot are. Go ahead and substitute those into your test by passing
data when you mount your component; the function you pass will be merged into the component’s own
data.
It should look something like this once you have it passing again:
it('renders correctly', () => { const wrapper = mount(App, { data() { return { x1: 37, x2: 99 } } }) expect(wrapper.element).toMatchSnapshot() })
Another approach is to write a mock function for the nondeterministic function in our code. In our case that is
Math.random().
You’d wind up with the something like the following:
it('renders correctly with mock', () => { Math.random = jest.fn(() => .37); const wrapper = mount(App) expect(wrapper.element).toMatchSnapshot() })
Let’s say we wanted to move our header to be above the photo on the page. This is an easy modification to your Vue component so go ahead and do that. Try running your test suite again.
Error!
It failed because our snapshot has the page arranged differently. We must update that part of our snapshot and we can do that by running
npm test -- -u.
Now our tests pass again.
Success!
If you wanted to update snapshots interactively, you can run
npm test -- -i.
Conclusion
Snapshots can be very useful in keeping abreast of any accidental changes to your application’s interface. Snapshots should be checked into Git like any other code. If tests fail, check what has happened before reflexively updating your snapshots.
Snapshot testing should be very useful to you in testing your Vue applications, especially as they get more complex. Good luck out there! | https://www.digitalocean.com/community/tutorials/vuejs-jest-snapshot-testing-in-vue | CC-MAIN-2020-34 | refinedweb | 976 | 58.99 |
Notebooks.azure.com provides a FREE Jupyter notebooks the service allows you to use Jupyter notebooks () and the programming language Python (). The Azure Notebook service is a FREE web application at that allows you to create and share Juypter documents.
The solution allows you to build interactive notebooks which contain live code, LATEX equations, and graphical visualizations and text which allow you to interactive run experiments and assessments on data and can be used in a varity of situations including data cleaning, data transformation, simulation, statistics, modeling, machine learning and much more with no overhead of server, infrastructure or service management.
Azure notebook service
The Azure notebook service provides cloud-based Jupyter notebook environments at. Dr Harry Strange at the University College London and Dr Garth Wells at the University of Cambridge are two academics, I know who are both are extensively using within UG and PG teaching both academics/institutions now operate dedicated Azure Jupyter Notebook for use to their students and courses.
Running and viewing the course material
The way in which the institutions utilise Notebooks is by simply having a dedicated Notebook activities for the course. Students simply click on a notebook to view it. Students then can simply Click 'Clone and Run!' which creates their own copy of the Notebook on Azure and allows them to own a runnable and editable versions of the activity notebooks. This allows students to freely experiment with their own your copy of the activity notebooks and allows them to always return to the master version at any time.
Creating your first Juypter notebook on Notebooks.azure.com
Log into Azure Juypter Notebook server at. title is untitled, to rename your notebook simply click on untitled and enter the new name
When you return to Notebooks.azure.com your notebooks are displayed, to open a notebook simply just click on the title.
Running Jupyter locally or Via an Azure Virtual Machine
You can run Jupyter and Python locally on your own computer if you wish or you can use the Data Science VM on Azure which comes with Juypter.
We currently have Juypter Notebooks available on Azure as service
Or you can have Juypter available on a Windows Virtual Machine
Or a Linux Virtual Machine
What I really like about Cambridge and UCL course is that they are fully supporting Azure within teaching and learning and utilising the Azure hosted Juypter Notebooks environment.
The following are some top Tips for getting started with Notebooks
These following recommendations are from Dr Garth Wells at University of Cambridge who has provided this guidance in his Jupyter Notebooks available at or at Notebooks.azure.com here
The materials and course resources are available under Creative Common’s and MIT licenses and license details available at ().
The following top tips have been adapted from Dr Garth Wells materials to provide a high level insight of how to get started with Juypter Notebooks.
Editing and running notebooks
Jupyter notebooks have text cells and code cells. If you double-click on part of a notebook in a Jupyter environment (see below for creating a Jupyter environment on Azure), the cell will become editable. You will see in the menu bar whether it is a text cell ('Markdown') or a code cell ('Code'). You can use the drop-down box at the top of a notebook to change the cell type. You can use
Insert from the menu bar to insert a new cell.
The current cell can be 'run' using
shift-return (the current cell is highlighted by a bar on the left-hand side of the page). When run, a text cell will be typeset, and the code in a 'code cell' will be executed. Any output from a code cell will appear below the code.
Often you will want to run all cells from the start of a notebook. You can do this with
Kernel -> Restart & Run All from the notebook menu bar. In this case the cells are executed in order (first through to last).
Below is a code cell:
print(2 + 2)
Output
4
Formatting text cells
Text cells are formatted using Markdown, and using LaTeX syntax for mathematics. Make extensive use of text cells to explain what your program does, and how it does it. Use mathematical typesetting to express yourself mathematically.
Markdown
You can find all the details in the Jupyter Markdown documentation. Below is a brief summary.
Headings
Using Markdown, headings are indicated by '
#':
# Top level heading ## Second level heading ### Third level heading
Text style
The Markdown input
Opening passage `A passage of text` *Some more text*
appears as:
Opening passage
A passage of text
Some more text
Lists
You can create bulleted list using:
- Option A - Option B
to show
- Option A
- Option B
and enumerated lists using
1. Old approach 1. New approach
to show
- Old approach
- New approach
Markdown resolves the list number for you.
Code
Code can be typeset using:
```python def f(x): return x*x ```
which produces
def f(x): return x*x
You can include images in Jupyter notebooks - see Jupyter Markdown documentation
LaTeX
Markdown cells support LaTeX syntax for typesetting mathematics. LaTex is the leading tool for technical documents and presenting mathematics, and it is free.
To typeset an inline equation, use:
The term of interest in this case is $\exp(-2x) \sin(3 x^{4})$.
which will appear as:
'The term of interest in this case is exp(−2x)sin(αx4)>exp(−2x)sin(αx4) .'
For a displayed equation, from
We wish to evaluate $$ f(x) = \beta x^{3} \int_{0}^{2} g(x) \, dx $$ when $\beta = 4$.
we get:
'We wish to evaluate
f(x)=βx3∫20g(x)dxf(x)=βx3∫02g(x)dx
when β=4β=4 .'
LaTeX commands for different mathematical symbols. If you see an example of mathematical typesetting in a notebook, you can also double-click it in a Jupyter environment to see the syntax. | https://blogs.msdn.microsoft.com/uk_faculty_connection/2016/11/15/jupyter-notebooks-on-azure-with-notebooks-azure-com/ | CC-MAIN-2019-43 | refinedweb | 989 | 59.13 |
This article assumes that you've read the first two installments of this series and are familiar with the healthcare reservation system and its architectural framework at the heart of this scenario.
In this scenario, remote medical offices completely delegate the function of client office visits to a centralized system that prepares the data for each service provider (medical office), encrypts the confidential information through a WebSphere DataPower XML Security Gateway XS40 box and, thus, sends it to WebSphere Enterprise Service Bus. The complete architectural picture is shown below in Figure 1.
Figure 1. Reservation system
This section provides detailed information about instrumenting WebSphere Enterprise Service Bus to:
- Recognize encrypted data: WebSphere Enterprise Service Bus has strict schema validation features that apply to data coming through its export bindings. If the format of the incoming message doesn't match the definition of the export interface, the message is discarded from WebSphere Enterprise Service Bus and an exception is raised. You must instrument WebSphere Enterprise Service Bus, as far as the encrypted data's structure, providing ad hoc data types and interfaces.
- Perform protocol switching: WebSphere Enterprise Service Bus gets these messages as SOAP/HTTP requests and then forwards them to the Java™ Message Service (JMS) topic using specific import and export bindings. The export binding describes how a client communicates with the mediation module. The import binding, instead, describes how the mediation module communicates with the defined service. Bindings guarantee transparent transport protocol switching capabilities and make it possible for the central reservation system to connect to the medical offices without any communication logic required within the application code.
Figure 2. Protocol switching
- Make use of message selectors: According to JMS specifications, a JMS client can filter by message selectors to understand which messages it should process. Because each medical office would like to receive only the data destined to it, the JMS message should contain the
serviceProviderIdin the header. Starting from the 6.0.2 release, WebSphere Enterprise Service Bus provides a message element setter mediation primitive that you can use for this purpose: When the mediation module gets a request message, it inspects the incoming message format, leverages the message setter primitive to retrieve the service provider ID from the SOAP body, and stores it (through a copy action) in the output message's JMS header (see Figure 3).
Figure 3. Message selectors
To implement the described scenario, we defined a mediation module and wired it to a SOAP/HTTP export binding and a JMS import binding.
The mediation module is composed of four functional pieces:
- An export interface to get the SOAP HTTP message containing sensitive data
- A message selector to augment the JMS message header with selectors information
- An XSLT transformation that matches the import and export interfaces and, thus, shows you how it's possible to provide some mediation logic, even when the message contains encrypted data
- An import interface that sends the sensitive data to the topic as a JMS payload
Figure 4 illustrates the mediation module.
Figure 4. Mediation module
This article shows you how to build all the functional parts that the mediation module is composed of. In the following sections, you can find detailed information to complete the steps needed to do the following:
- Create a new library.
- Define data types.
- Define export and import interfaces.
- Define the mediation module.
As a first step, you should create a new library in which to store both the data types and the import and export interfaces to be used within the mediation module. You do this by following these steps:
- Select File > New > Project. A new window appears.
- Select Library (see Figure 5).
Figure 5. New project
- In the window that opens, type
Asynch-Libraryin the Library Name field, and keep the Use Default check box flagged.
- Click Finish. The wizard takes a second to generate the library (see Figure 6).
Figure 6. New library
If you looked carefully at the previous articles of this series, Part 2 in particular, you should know the format of the SOAP message coming from the central reservation system. It contains all the information concerning the calendar of the reserved slots, as displayed in Listing 1.
Listing 1. SOAP message
As you can see in Listing 1, the sensitive data are included within an
EncryptedData section. Because WebSphere Enterprise
Service Bus performs schema validation on the messages coming through its export
binding, it's clear that it's necessary to make WebSphere Enterprise Service
Bus
aware of the
EncryptedData definition.
If you look carefully at the format of the message, you see that the
EncryptedData element is officially defined in the W3C
Standard XML encryption namespace. You can also see that as far as the
KeyInfo is concerned, its definition is part of the XML
digital signature namespace. For this reason, if you import from the official W3C
site (see the Resources section for the URL), both the
xenc-schema, including the definition of the XML encryption namespace and the
xmldixencg-core schema containing the definition of the XML digital signature
namespace, you should have a good chance to make WebSphere Enterprise Service Bus
aware of the
EncryptedData type definition.
To import these schemas:
- In the IBM WebSphere Integration Developer console, select File > New > Other. In the window that opens, check the Show All Wizards box, then expand the XML folder, and select XML Schema, as shown in Figure 7.
Figure 7. New XML schema
- If a window opens asking you for the enablement of the XML development capabilities, click OK to allow the required capability (see Figure 8).
Figure 8. Confirm enablement
- Type
xmldsig-core-schema.xsdin the File name field, select the previous created library as the schema location, and then click Finish, as shown in Figure 9.
Figure 9. Create XML schema
- In the schema editor that opens, cancel the whole content of the file. Then download the xenc schema from the Download section of this article. Copy the content of this schema, and paste it in the schema editor. Finally, save the file.
- You should get an error message complaining of a duplicated attribute, PGPKeypacket, in the newly created schema. This problem is due to the fact that, as far as the 6.0.2 FP 1 release is concerned, WebSphere Enterprise Service Bus doesn't support choice elements and, thus, sees two PGPKeypackets within the same schema definition (even if the choice oppositely marks these as two alternative paths), as shown in Figure 10.
Figure 10. Mediation module
This bug has been resolved since the new 6.1 release. To solve this problem, cancel one of the two sequence elements within the choice. We decided to remove the second one. Clearly when removing one of the two sequences, the choice construct is useless, so you must remove it, too, from the schema. If you save the schema, the previous error should disappear.
- Following the above steps, create a new XSD file, and name it xenc-schema.xsd (see Figure 11).
Figure 11. XSD core schema
In the XSD schema editor that opens, cancel the whole content of the file. Then download the xenc-schema.xsd schema from the Download section of this article. Copy the content of this schema, and paste it in the XSD schema editor. Save the file; you shouldn't get any error this time.
- When you save the files, the wizard automatically parses the schemas and generates a new data type for each element type defined within the schema. You can easily verify this by expanding Data Types under the Asynch-Library (see Figure 12).
Figure 12. Data types
In particular, the wizard generates an
EncryptedDataType data type that, as far as WebSphere
Enterprise Service Bus's schema validation is concerned, seems to be the natural
counterpart for matching the sensitive data contained within the
EncryptedData section. This said, creating a new export
interface and using an
EncryptedData object of type
EncryptedDataType might let the encrypted data pass
through WebSphere Enterprise Service
Bus. (This isn't totally true, because a light adjustment is needed,
as you'll see in the following section; but you're on the right path.)
Define the export/import interface
Now you're ready to define the interfaces. You need an export interface for the
sendUpdatedAgenda and an import interface for the
publishAgenda asynchronous interaction.
- Right-click the Asynch-Library project, and select New > Interface. Name it
AsynchResSystemExport, and click the Finish button (see Figure 13).
Figure 13. New export interface
- In the window that appears, select the Add One Way Operation icon to add a new operation (remember the use case requires an asynchronous interaction). Name the new operation
sendUpdatedAgenda, and add the following two parameters (using the Add Input icon):
EncryptedDataof type
EncryptedDataType
serviceProviderIdas a
String
Figure 14. Export interface
Click Save, and close the interface dialog.
- Even if this interface definition seems at first glance to perfectly match the SOAP message's format, in reality this isn't completely true. If you open the interface definition by right-clicking it and choosing the open with the XML editor option, you see that the wizard generated with the
EncryptedDataelement in the Asynch-Library namespace doesn't match with the
EncryptedDataelement of the xmlenc namespace. To solve this problem, comment out the element marked in red, and replace it with the element marked in green, which is a real reference to the
EncryptedDataelement of the xmlenc namespace, as shown in Figure 15.
Figure 15. Asynchronous resource system export
- After completing the above step, choose New Import Interface to add a new interface for the import binding, as shown in Figure 16.
- Right-click the Interface folder within the Asynch-Library, and select New > Interface.
- Select Asynch-Library from the Module list, and type
AsynchResSystemImportinto the Name field.
Figure 16. New import interface
- In the interface dialog box that opens, select the Add One Way Operation icon to add a new operation. Name the operation
publishAgenda, and add a new
EncryptedDataparameter of type
EncryptedDataType(see Figure 17).
Figure 17. Publish agenda
- Repeat step 5 to change the
EncryptedDataelement definition in the Import Interface.
Figure 18. Asynchronous resource system import
- Save the WSDL file and close it.
The above modifications aren't the only ones to be performed to make encrypted data correctly pass through WebSphere Enterprise Service Bus and have this data rightly copied as a JMS textual payload. If you look at the SOAP message format, you see that the
KeyInfoelement contains an
EncryptedKeyelement embedding all the information, as far as the encryption algorithm used to encrypt the key and the encrypted form of the key. Being the
KeyInfodefined in the xmldsig namespace, you'd expect to find the
EncryptedKeyelement listed within the complex type definition of the
KeyInfoelement within the schema. However, if you open the xmldsig-core-schema.xsd file through the XML editor and look for the
KeyInfodefinition, you should see something like Figure 19.
Figure 19. xmldsig-core-schema
As you can see, there's no mention of the
EncryptedKeyelement, but the schema assumes that any element can be added to complete the
KeyInfodefinition. As a consequence of this, using this schema makes WebSphere Enterprise Service Bus ignore any
EncryptedKeyinformation within an incoming SOAP message. So if no change is performed, this piece of information is missed and not copied to the JMS payload (you get a better understanding of what this means in the section dealing with the XSLT transformation). To overcome this limitation, you should perform a light change in the xmldsig-core-schema.xsd schema.
- You should comment out the row in blue and add the row in red, as shown in Figure 20.
Figure 20. xmldsig-core-schema
- Moreover, because you are adding an element from a foreign namespace, this namespace must be declared in some way. So you should add the three red lines shown in Figure 21 to the namespace definition section at the beginning of the file.
Figure 21. xmldsig-core-schema
- If some errors appear complaining of duplicated namespace definitions, open the xenc-schema.xsd schema, and modify the import namespace, as shown in Figures 22 and 23.
Figure 22. xenc-schema
Figure 23. xenc-schema
This should clear all the previous errors if any.
Define the mediation module
Now you're ready to create a new mediation module and add the necessary mediation primitives.
- Select File > New > Project. Then choose Mediation Module from the Select a wizard window.
Figure 24. New mediation module
- Type
AsynchResSystemModulein the Module Name field, and keep the defaults as displayed in Figure 25.
Figure25. Add interface
- Click Next. In the Select Required Libraries window that appears, select Asynch-Library to add it to the module and, thus, allow the mediation module to use the data types and interfaces you previously defined within the library.
- When the wizard completes the module's creation, double-click the Assembly Diagram icon to work with the assembly diagram. The Assembly Editor opens.
- Add an import and an export component by dragging the relative icons onto the diagram. Rename them
WS_Exportand
JMS_Import.
- Right-click the export component and select add interface to add the
AsynchResSystemExportinterface to the export component (see Figure 26) and the
AsynchResSystemImportinterface to the import component (see Figure 27).
Figure 26. Add export interface
Figure 27. Add import interface
- Select the Mediation Module component, and perform the following actions:
- Right-click, select the Add Interface option, and add the AsynchResSystemExport interface.
- Right-click, select the Add Reference option, and add the AsynchResSystemImport interface.
- Rename the module to
Asynch_Mediation.
- Select the WS_Export component, right-click, and select the Wire to Existing option.
- Select the JMS_Import component, right-click, and select the Wire to Existing option.
Figure 28. Assembly diagram
- Because the module must get a SOAP/HTTP message and convert it to a JMS payload, you should generate two bindings:
- A Web service binding
- A JMS message binding
- Right-click the WS_Export component, and select the Generate Binding option.
- From the window that appears, select Web Service Binding, and then select the SOAP/HTTP option in the Transport Selection window (see Figure 29).
Figure 29. Transport selection
- Right-click the JMS-Import component, and select the Generate Binding option again.
- Select JMS Message. In the window that appears, select Publish-Subscribe as the messaging domain, then select the Use pre-configured messaging provider resources check box, and provide the topic name and the topic connection factory settings according to the configuration steps already performed in the first article of this series.
Figure 30. Configure JMS import
Remember to select the Business Object XML using JMSTextMessage option as serialization type. This way you ensure that the encrypted body of the SOAP message is passed as it is in an XML form to the topic and then can be parsed and reconstructed using some XML security standard decryption algorithm, as described in Part 2 of this series.
- After being configured, the import and export bindings provide a mediation flow for this module. To do this, right-click the module component, and select Generate Implementation, as shown in Figure 31.
Figure 31. Generate implementation
- In the window that appears, select AsynchResSystemModule, and click OK (see Figure 32). A new mediation flow is generated.
Figure 32. Asynchronous resource system module
- Link the sendUpdateAgenda and publishAgenda operations in the Operation Connections window.
- Add a new message setter primitive, and name it
setMessageSelector.
- Add an XSLT transformation primitive, and name it
adaptInterfaces.
- Order and wire these primitives as shown in Figure 33.
Figure 33. Mediation flow
- Click setMessageSelector, and in the properties window at the bottom of the page, select the Details tab.
- Click the Add button.
- Select Copy from the Type drop-down list. Two CustomXPath buttons appear, giving you the possibility of providing guided XPath expressions for both the Target and the Value fields.
Figure 34. Message selector
- Select JMSType in the Target field and serviceproviderId in the Value field. This way the
messageSettercopies the
serviceProviderIdstring that univocally identifies each service provider, from the message's body to the JMS header, thus acting as a message selector, as explained in the first article of this series.
- Save the mediation flow, then select the adaptInterfaces XSLT transformation primitive.
- In the Properties window at the bottom of the page, select the Details tab, then click the New button to create a new transformation.
- In the window that appears, keep the defaults and click Finish.
Figure 35. XSLT mapping
- In the window that appears next, expand the tree on both sides until you reach the
KeyInfoelement. Expand this element, too. You should see a
ds:KeyNameelement with an arrow on its right. The arrow is displayed as a result of the
KeyInfobeing defined through a Choice construct in the xmldsig schema.
Figure 36. XSLT mapping
- Click the arrow and scroll down until you see the
EncryptedKeyelement, then select it on both sides. Remember that you modified the
KeyInfocomplex type definition to add the
EncryptedKeyelement within the Choice; now you should have a better understanding of the reason. Without this change, in fact, you wouldn't be able to select the
EncryptedKeyelement from the arrow, so all the key-related information, as far as the algorithm used to encrypt the key and the encrypted values, wouldn't be copied to the JMS payload (see Figure 37).
Figure 37. Transformation
- Select the xenc:EncryptedData element on both sides, and right-click the mouse to select the match mapping option. This way you instrument WebSphere Enterprise Service Bus to recursively copy the whole content of the incoming
EncryptedDatasection from the incoming SOAP request's body to the JMS payload.
- Save the transformation, and close the window.
- Finally, save the whole mediation flow.
This article described in detail how to instrument WebSphere Enterprise Service Bus to accept SOAP messages containing encrypted portions of data, perform protocol switching, and then forward these messages to a JMS topic where the service providers are registered. You've learned how to:
- Properly define an export interface to get the SOAP HTTP message containing sensitive data.
- Configure a message selector to augment the JMS message header with selectors information.
- Introduce an XSLT transformation to match the import and export interfaces and, thus, add mediation logic even when the message contains encrypted data.
- Provide an import interface to send the sensitive data to the topic as a JMS payload.
At this point, the whole scenario has been completely covered. The last installment of the series will address:
- Concerns about the client side of the application.
- Getting the JMS messages from WebSphere Enterprise Service Bus.
- Decrypting the sensitive data with the service provider's private key to reconstruct the original message that's been sent from the central reservation system and that has been encrypted through the WebSphere DataPower SOA Appliances box.
This part will be deeply covered in the fourth and final article of this series, so stay tuned!
Information about download methods.
-
- Download XENC Schema, and XMLDSIG CORE Schema.
- Innovate your next development project with IBM trial software, available for download or on DVD.
Discuss
- Get involved in the developerWorks community by participating in developerWorks blogs.. | http://www.ibm.com/developerworks/webservices/library/ws-soa-real3/ | crawl-003 | refinedweb | 3,192 | 52.09 |
Regperf - The followup
By damiencooke on Aug 07, 2007
Background:
Tom Daly and I have been looking at open source solutions in the web application framework area. It complemented our shared interest of open source databases. We noticed there was a large amount of interest around Ruby on Rails. It seemed like an obvious place to start researching where the right place, if such a place existed for frameworks like RoR, was and what type of applications were it's sweet spot. On the way we learned about Grails also, a similar purpose project and we decided to include Grails in our investigation.
Regperf:
Regperf is a trivial application to represent many small web applications that individuals and corporations build to do a simple task, in this case keep details about software registrations. The genre of applications that Regperf belongs to is usually used for very simple, one or more table, applications to Create/Read/Update/Delete simple data for a particular purpose. In it's current state it is a single table application that can be simply generated for each of the three platforms we chose to compare. Java EE 5.0, Ruby on Rails (which includes JRuby on Rails) and Grails. We had intended getting this data up on our blogs earlier but I got busy on other stuff. Recently there has been some activity around our JavaOne presentation Some of the blogs I have seen: Joab Jackson and Grails Team So I thought I better spend the time.
The Ruby Performace Tests:
Before we start we should state that Regperf is not nor can be, in it's current state, considered a benchmark. The purpose was to simply test the characteristics of each of the frameworks compared to each other to determine where the framework was best suited and to make recommendations as to where each might be used. It was later we decided that Regperf using FABAN might be useful to others and decided to extend the application from a single table to a multi table application (Regperf 2.0)
Regperf 1.0
We generated the trivial application and tested it using FABAN on each of the platforms. Running the RoR application standalone and with in Netbeans 6 (JRuby). We ran the Java EE 5.0 version on Glassfish and the Grails version was also run on the Glassfish application Server. Our findings were published in the Java One presentation.
Configuring Regperf 1.0 for Ruby:
Download the following files:
Application
FABAN driver
Sample Database
Start Postgres
Create a regperf user (createuser regperf)
Create a database called regperf ($ createdb regperf)
import the database ($pg_restore -d regperf regperfdb.tar)
To start the application:
cd regperf
script/server
In your web browser point to localhost:3000/subscription The application should be running (email as much detail as possible if you are having problems and need some assistance include postgres server log, screen shots etc) Now all we need is the test suite (Faban Driver) which needs to be expanded in $FABAN_HOME/samples directory for pre-installed FABAN. Now you can conduct your own tests. Instructions that need to be followed are:
Setup Faban and the Faban Regperf driver Download and install Faban. Note Faban has a driver and also a harness for automatically queuing and running multiple benchmarks but in these instructions we are going to run just the driver component and again assuming localhost set the FABAN_HOME environment variable and add $FABAN_HOME/bin to your PATH mkdir $FABAN_HOME/output (this is where the tests results will be stored) Download the Faban Regperf driver for Rails (as indicated above) and unjar it into $FABAN_HOME/samples. Edit the $FABAN_HOME/samples/regperf-grails/config/run.xml, change the value for outputDir to match your $FABAN_HOME/output directory. Modify the number of simulated users by changing the value for <scale>1<scale> you will get 10 users for each scale factor e.g. scale 1 will give 10 simulated users. Note the number of loaded subscribers should match the number of rows you load into the database (this is set initially to 100) refer to the Faban documentation for other values to change in the run.xml
Running the performance using the Rails runtime (webbrick)
cd $FABAN_HOME/samples/regperf-rails/sbin, Assuming that the database and the rails application is still running execute ./loader.sh 10 start the rmi registry (this runs in the background) ./registry.sh &
Start and execute the benchmark
./master.sh Starts up all of the simulated users and writes results of the run into $FABAN_HOME/output/xx directory. The results can be conveniently viewed with a web browser.
We would love to hear what you find with either our app or your own. Let us know if you have any problems and stay tuned for Regperf 2.0 | https://blogs.oracle.com/damien/category/Rails%2CGrails+and+Java+EE | CC-MAIN-2015-48 | refinedweb | 801 | 51.68 |
This page shows you how to perform basic tasks in Cloud Storage using the Google Cloud Console.
Costs that you incur in Cloud Storage are based on the resources you use. This quickstart typically uses less than $0.01 USD worth of Cloud Storage resources.
Before you begin
If you don't already have one, sign up for a new account.
Create a bucket
Buckets are the basic containers that hold your data in Cloud Storage.
To create a bucket:
- Open the Cloud Storage browser in the Google Cloud Console.
Open the Cloud Storage browser
Click Create bucket to open the bucket creation form.
Enter your bucket information and click Continue to complete each step:
Enter a unique Name for your bucket.
Do not include sensitive information in the bucket name, because the bucket namespace is global and publicly visible. next to another period or dash. For example, ".." or "-." or ".-" are not valid in DNS names.
Choose Standard for Storage class and us-east1 (South Carolina) for Location.
Choose Set object-level and bucket-level permissions for Access control model. an object
To download the image from your bucket:
Click the drop-down menu associated with the object.
The drop-down menu appears as three vertical dots to the far right.
Click Download.
The image is saved to your local system.
Share an object publicly
To create a publicly accessible URL for the file:
Click the drop-down menu associated with the file.
The drop-down menu appears as three vertical dots to the far right.
Select Edit permissions from the drop-down menu.
In the overlay that appears, click the + Add item button.
In the row that is added, do the following:
- In the Entity column, select User.
- In the Name column, enter allUsers.
- In the Access column, select Reader.
Click Save.
You should see that the file is publicly accessible and has a link icon. The link icon reveals a shareable URL that looks like:<your-bucket-name>/kitten.png
To stop sharing the file publicly:
Click the drop-down menu associated with the file.
The drop-down menu appears as three vertical dots to the far right.
Select Edit permissions from the drop-down menu.
In the overlay that appears, click the X to the right of the allUsers entry.
Click Save.
You should see that the file no longer has a link icon associated with it.
Create folders
- Click Create folder.
- Enter folder1 for Name and click Create.
You should see the folder in the bucket with an image of a folder icon to distinguish it from objects.
Create a subfolder and upload a file to it:
- Click folder1.
- Click Create folder.
- Enter folder2 for Name and click Create.
- Click folder2.
- Click Upload files.
- In the file dialog, navigate to the screenshot that you downloaded and select it.
After the upload completes, you should see the file name and information about the file, such as its size and type.
Delete objects
- Click.
What's next
- Work through the Cloud Storage Quickstart using the gsutil tool.
- Review the available guides for completing tasks in Cloud Storage.
- Work through the tutorial How to Host a Static Website using Google Cloud Storage. | https://cloud.google.com/storage/docs/quickstart-console?hl=nl | CC-MAIN-2020-05 | refinedweb | 533 | 76.52 |
22 November 2011 08:04 [Source: ICIS news]
SINGAPORE (ICIS)--JX Nippon Oil & Energy has proposed a $30/tonne drop to the paraxylene (PX) Asia Contract Price (ACP) for December, while producer Idemitsu Kosan is eyeing a rollover price from the previous month, customers of the Japanese aromatics makers said on Tuesday.
JX Nippon Oil has nominated a December PX ACP of $1,470/tonne (€1,088/tonne) CFR (cost & freight) Asia, while Idemitsu Kosan has proposed a contract price of $1,500/tonne CFR Asia, according to market sources.
ExxonMobil and ?xml:namespace>
The November PX ACP was fully settled at $1,500/tonne CFR Asia.
Spot PX cargoes for delivery in January was heard to be discussed at around $1,365-1,385/tonne CFR Taiwan and/or CMP (
The buying interest among end-users is weak because of squeezed margins from the downstream purified terephthalic acid (PTA) sector.
($1 = €0.74)
For more on paraxylene | http://www.icis.com/Articles/2011/11/22/9510316/jx-nippon-oil-eyes-dec-px-acp-down-30tonne-idemitsu-eyes.html | CC-MAIN-2014-41 | refinedweb | 158 | 51.72 |
I am passing the given query
in my python script i want to take the 'path' only how can i get it?
i use
def get(self): data = self.request.get('path')
but it not working
I am passing the given query
in my python script i want to take the 'path' only how can i get it?
i use
def get(self): data = self.request.get('path')
but it not working
what module are you using?
self.request.get('path')
means nothing
My php firstpage
<a href="">click</a> My php showcustomerpage
$execute='C:\wamp\bin\apache\Apache2.2.11\cgi-bin\datesname.py'; $start =$_GET["key"]; $end =$_GET["path"]; $go = $start."=".$end."&path=".$end; exec("$execute $go",$out);
My Pythonpage datesname.py
#!c:/Python27/python.exe -u import cgi, cgitb, os, sys import datetime import logging from urlparse import parse_qs cgitb.enable(); # formats errors in HTML sys.stderr = sys.stdout query = os.environ.get('QUERY_STRING') def get(self): data = self.request.get('path') print data
But i didnt get value
Confusion: The token
self is a 'community standard reserved word': It is legal to use as a token, but everybody who codes in Python uses it as the first argument to member functions; and in classes to refer to the class instance it self... and for no other purpose. So your code seems meaningless to those of us who have been programming in Python for more than a few weeks, because we see
self and our habits tell us to look for the rest of the class definition.
Your
query variable holds a string. In Python, strings have member functions, but no attribute named 'request'.
You can always take the part of the string before the ? character...
Look for string.split('?'). | https://www.daniweb.com/programming/software-development/threads/342111/query-string | CC-MAIN-2019-04 | refinedweb | 293 | 66.74 |
Today’s Programming Praxis problem is about cellular automata. Let’s dive in.
As usual, our imports:
import Data.Bits import Data.List
Programming Praxis’ author uses ones and zeros to represent on and off cells. That works, but you run the risk of accidentally having another number somewhere and crashing your program. We Haskell programmers like our type safety, so we’re going to use booleans. First we need a function to get the successor of a group of cells:
successor :: Int -> [Bool] -> Bool successor r bs = testBit r . sum $ zipWith (shiftL . fromEnum) (reverse bs) [0..]
Using that, we can generate the next row as follows:
nextRow :: Int -> [Bool] -> [Bool] nextRow r xs = map (successor r) . take (length xs) . transpose . take 3 . iterate (drop 1) $ [head xs] ++ xs ++ [last xs]
Since lists of booleans are not that easy to read, let’s use spaces and Xs.
displayRow :: [Bool] -> String displayRow = intersperse ' ' . map (\b -> if b then 'X' else ' ')
Each run starts with one active cell and simply consists of repeating nextRow as often as needed.
cells :: Int -> Int -> [String] cells r h = map displayRow . take (h + 1) . iterate (nextRow r) $ replicate h False ++ [True] ++ replicate h False
And finally we test out program. Let’s make a lovely Sierpinski triangle:
main :: IO () main = mapM_ putStrLn $ cells 82 15
Done. 10 lines of code. Not bad.
Tags: automata, cellular, kata, praxis, programming, sierpinski | http://bonsaicode.wordpress.com/2009/05/15/programming-praxis-cellular-automata/ | CC-MAIN-2013-48 | refinedweb | 233 | 68.97 |
Now we're ready to figure out how to do something smarter than using hardwired default connection parameters, such as letting the user specify those values at runtime. The previous client programs have a significant shortcoming: the connection parameters are written literally into the source code.
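To make the shortcoming concrete, here is a minimal sketch of a connection made with hardwired parameters; the host, user, password, and database names are illustrative placeholders:

/* hardwired.c - illustrative sketch: parameters compiled into the program */

#include <my_global.h>
#include <mysql.h>

int main (int argc, char *argv[])
{
    MYSQL *conn = mysql_init (NULL);

    /* every connection parameter is a literal string; pointing the
       program at a different server or account means editing the
       source and recompiling */
    if (mysql_real_connect (conn, "localhost", "sampadm", "secret",
                            "sampdb", 0, NULL, 0) == NULL)
        fprintf (stderr, "mysql_real_connect() failed\n");
    mysql_close (conn);
    exit (0);
}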
For consistency with the standard MySQL clients, our next client program, client3, will accept those same formats. It's easy to do this because the client library includes support for option processing. In addition, our client will have the ability to extract information from option files, which allows you to put connection parameters in ~/.my.cnf (that is, the .my.cnf file in your home directory) or in any of the global option files. Then you don't have to specify the options on the command line each time you invoke the program..")
Before writing client3 itself, we'll develop a couple programs that illustrate how MySQL's option-processing support works. These show how option handling works fairly simply and without the added complication of connecting to the MySQL server and processing queries. so that when you parse the command options, you get the connection parameters as part of your normal option-processing code. The options are added to argv[] immediately after the command name and before any other arguments (rather than at the end), so that any connection parameters specified on the command line occur later than and thus override any options added by load_defaults().
The following is a little program, show_argv, that demonstrates how to use load_defaults() and illustrates how it modifies your argument vector:
/* show_argv.c - show effect of load_defaults() on argument vector */ #include <my_global.h> #include <mysql.h> static const char *client_groups[] = { "client", NULL }; int main (int argc, char *argv[]) { int i; printf ("Original argument vector:\n"); for (i = 0; i < argc; i++) printf ("arg %d: %s\n", i, argv[i]); my_init (); load_defaults ("my", client_groups, &argc, &argv); printf ("Modified argument vector:\n"); for (i = 0; i < argc; i++) printf ("arg %d: %s\n", i, argv[i]); exit (0); }
The option file-processing code involves several components:
client_groups[] is an array of character strings indicating the names of the option file groups from which you want to obtain options. Client programs normally include at least "client" in the list (which represents the [client] group), but you can list as many groups as you want. The last element of the array must be NULL to indicate where the list ends.
my_init() is an initialization routine that performs some setup operations required by load_defaults().
load_defaults() reads the option files. It takes four arguments: the prefix used in the names of your option files (this should always be "my"), the array listing the names of the option groups in which you're interested, and the addresses of your program's argument count and vector. Don't pass the values of the count and vector. Pass their addresses instead because load_defaults() needs to change their values. Note in particular that even though argv is already a pointer, you still pass &argv, that pointer's address.
show_argv prints its arguments twice to show the effect that load_defaults() has on the argument array. First it prints the arguments as they were specified on the command line, and then it calls load_defaults() and prints the argument array again.
To see how load_defaults() works, make sure you have a .my.cnf file in your home directory with some settings specified for the [client] group. (On Windows, you can use the C:\my.cnf file.) Suppose the file looks like this:
[client] user=sampadm password=secret host=some_host
If that is the case, executing show_argv should produce output like this:
% ./show_argv a b Original argument vector: arg 0: ./show_argv arg 1: a arg 2: b Modified argument vector: arg 0: ./show_argv arg 1: --user=sampadm arg 2: --password=secret arg 3: --host=some_host arg 4: a arg 5: b
When show_argv prints the argument vector the second time, the values in the option file show up as part of the argument list. It's also possible that you'll see some options that were not specified on the command line or in your ~/.my.cnf file. If this occurs, you will likely find that options for the [client] group are listed in a system-wide option file. This can happen because load_defaults() actually looks in several option files. On UNIX, it looks in /etc/my.cnf and in the my.cnf file in the MySQL data directory before reading .my.cnf in your home directory. On Windows, load_defaults() reads the my.ini file in your Windows system directory, C:\my.cnf, and the my.cnf file in the MySQL data directory.
Client programs that use load_defaults() almost always specify "client" in the list of option group names (so that they get any general client settings from option files), but you can set up your option file processing code to obtain options from other groups as well. Suppose you want show_argv to read options in both the [client] and [show_argv] groups. To accomplish this, find the following line in show_argv.c:
const char *client_groups[] = { "client", NULL };
Change the line to this:
const char *client_groups[] = { "show_argv", "client", NULL };
Then recompile show_argv, and the modified program will read options from both groups. To verify this, add a [show_argv] group to your ~/.my.cnf file:
[client] user=sampadm password=secret host=some_host [show_argv] host=other_host
With these changes, invoking show_argv again will produce a different result than before:
% ./show_argv a b Original argument vector: arg 0: ./show_argv arg 1: a arg 2: b Modified argument vector: arg 0: ./show_argv arg 1: --user=sampadm option group names are listed in the client_groups[] array. This means you'll probably want to specify program-specific groups after the [client] group in your option file. That way, if you specify an option in both groups, the program-specific value will take precedence over the more general [client] group value. arrange for that yourself by using getenv(). I'm not going to add that capability to our clients, but what follows is a short code fragment that shows how to check the values of a couple of the standard MySQL-related environment variables:
extern char *getenv(); char *p; int port_num = 0; char *socket_name = NULL; if ((p = getenv ("MYSQL_TCP_PORT")) != NULL) port_num = atoi (p); if ((p = getenv ("MYSQL_UNIX_PORT")) != NULL) socket_name = p;
In the standard MySQL clients, environment variable values have lower precedence than values specified in option files or on the command line. If you check environment variables in your own programs and want to be consistent with that convention, check the environment before (not after) calling load_defaults() or processing command line options.
Using load_defaults(), we can get all the connection parameters into the argument vector, but now we need a way to process the vector. The handle_options() function is designed for this. handle_options() is built into the MySQL client library, so you have access to it whenever you link in that library.
The option-processing methods described here were introduced in MySQL 4.0.2. Before that, the client library included option-handling code that was based on the getopt_long() function. If you're writing MySQL-based programs using the client library from a version of MySQL earlier than 4.0.2, you can use the version of this chapter from the first edition of this book, which describes how to process command options using getopt_long(). The first-edition chapter is available online in PDF format at the book's companion Web site at.
The getopt_long()-based code has now been replaced with a new interface based on handle_options(). Some of the improvements offered by the new option-processing routines are:
More precise specification of the type and range of legal option values. For example, you can indicate not only that an option must have integer values but that it must be positive and a multiple of 1024.
Integration of help text, to make it easy to print a help message by calling a standard library function. There is no need to write your own special code to produce a help message.
Built in support for the standard --no-defaults, --print-defaults, --defaults-file, and --defaults-extra-file options. These options are described in the "Option Files" section in Appendix E.
Support for a standard set of option prefixes, such as --disable- and --enable-, to make it easier to implement boolean (on/off) options. These capabilities are not used in this chapter, but are described in the option-processing section of Appendix E.
Note: The new option-processing routines appeared in MySQL 4.0.2, but it's best to use 4.0.5 or later. Several problems were identified and fixed during the initial shaking-out period from 4.0.2 to 4.0.5.
To demonstrate how to use MySQL's option-handling facilities, this section describes a show_opt program that invokes load_defaults() to read option files and set up the argument vector and then processes the result using handle_options().
show_opt allows you to experiment with various ways of specifying connection parameters (whether in option files or on the command line) and to see the result by showing you what values would be used to make a connection to the MySQL server. show_opt is useful for getting a feel for what will happen in our next client program, client3, which hooks up this option-processing code with code that actually does connect to the server.
show_opt illustrates what happens at each phase of argument processing by performing the following actions:
Set up default values for the hostname, username, password, and other connection parameters.
Print the original connection parameter and argument vector values.
Call load_defaults() to rewrite the argument vector to reflect option file contents and then print the resulting vector.
Call the option processing routine handle_options() to process the argument vector and then print the resulting connection parameter values and whatever is left in the argument vector.
The following discussion explains how show_opt works, but first take a look at its source file, show_opt.c:
/* * show_opt.c - demonstrate option processing with load_defaults() * and handle_options() */ } }; my_bool get_one_option (int optid, const struct my_option *opt, char *argument) { switch (optid) { case '?': my_print_help (my_opts); /* print help message */ exit (0); } return (0); } int main (int argc, char *argv[]) { int i; int opt_err; printf ("Original connection parameters: ("Original argument vector:\n"); for (i = 0; i < argc; i++) printf ("arg %d: %s\n", i, argv[i]); my_init (); load_defaults ("my", client_groups, &argc, &argv); printf ("Modified argument vector after load_defaults():\n"); for (i = 0; i < argc; i++) printf ("arg %d: %s\n", i, argv[i]); if ((opt_err = handle_options (&argc, &argv, my_opts, get_one_option))) exit (opt_err); printf ("Connection parameters after handle_options(): ("Argument vector after handle_options():\n"); for (i = 0; i < argc; i++) printf ("arg %d: %s\n", i, argv[i]); exit (0); }
The option-processing approach illustrated by show_opt.c involves the following aspects, which will be common to any program that uses the MySQL client library to handle command options:
In addition to the my_global.h and mysql.h header files, include my_getopt.h as well. my_getopt.h defines the interface to MySQL's option-processing facilities.
Define an array of my_option structures. In show_opt.c, this array is named my_opts. The array should have one structure per option that the program understands. Each structure provides information such as an option's short and long names, its default value, whether the value is a number or string, and so on. Details on members of the my_option structure are provided shortly.
After calling load_defaults() to read the option files and set up the argument vector, process the options by calling handle_options(). The first two arguments to handle_options() are the addresses of your program's argument count and vector. (Just as with load_options(), you pass the addresses of these variables, not their values.) The third argument points to the array of my_option structures. The fourth argument is a pointer to a helper function. The handle_options() routine and the my_options structures are designed to make it possible for most option-processing actions to be performed automatically for you by the client library. However, to allow for special actions that the library does not handle, your program should also define a helper function for handle_options() to call. In show_opt.c, this function is named get_one_option(). The operation of the helper function is described shortly.
The my_option structure defines the types of information that must be specified for each option that the program understands. It looks like this:
struct my_option { const char *name; /* option's long name */ int id; /* option's short name or code */ const char *comment; /* option description for help message */ gptr *value; /* pointer to variable to store value in */ gptr *u_max_value; /* The user defined max variable value */ const char **str_values; /* array of legal option values (unused) */ enum get_opt_var_type var_type; /* option value's type */ enum get_opt_arg_type arg_type; /* whether option value is required */ longlong def_value; /* option's default value */ longlong min_value; /* option's minimum allowable value */ longlong max_value; /* option's maximum allowable value */ longlong sub_size; /* amount to shift value by */ long block_size; /* option value multiplier */ int app_type; /* reserved for application-specific use */ };
The members of the my_option structure are used as follows:
name
The long option name. This is the --name form of the option, without the leading dashes. For example, if the long option is --user, list it as "user" in the my_option structure.
id
The short (single-letter) option name, or a code value associated with the option if it has no single-letter name. For example, if the short option is -u, list it as 'u' in the my_option structure. For options that have only a long name and no corresponding single-character name, you should make up a set of option code values to be used internally for the short names. The values must be unique and different than all the single-character names. (To satisfy the latter constraint, make the codes greater than 255, the largest possible single-character value. An example of this technique is shown in "Writing Clients That Include SSL Support" section later in this chapter.)
comment
An explanatory string that describes the purpose of the option. This is the text that you want displayed in a help message.
value
This is a gptr (generic pointer) value. It points to the variable where you want the option's argument to be stored. After the options have been processed, you can check that variable to see what the option's value has been set to. If the option takes no argument, value can be NULL. Otherwise, the data type of the variable that's pointed to must be consistent with the value of the var_type member.
u_max_value
This is another gptr value, but it's used only by the server. For client programs, set u_max_value to NULL.
str_values
This member currently is unused. In future MySQL releases, it might be used to allow a list of legal values to be specified, in which case any option value given will be required to match one of these values.
var_type
This member indicates what kind of value must follow the option name on the command line and can be any of the following:
The difference between GET_STR and GET_STR_ALLOC is that for GET_STR, the option variable will be set to point directly at the value in the argument vector, whereas for GET_STR_ALLOC, a copy of the argument will be made and the option variable will be set to point to the copy.
arg_type
The arg_type value indicates whether a value follows the option name and can be any of the following:
If arg_type is NO_ARG, then var_type should be set to GET_NO_ARG.
def_value
For numeric-valued options, the option will be assigned this value by default if no explicit value is specified in the argument vector.
min_value
For numeric-valued options, this is the smallest value that can be specified. Smaller values are bumped up to this value automatically. Use 0 to indicate "no minimum."
max_value
For numeric-valued options, this is the largest value that can be specified. Larger values are bumped down to this value automatically. Use 0 to indicate "no maximum."
sub_size
For numeric-valued options, sub_size is an offset that is used to convert values from the range as given in the argument vector to the range that is used internally. For example, if values are given on the command line in the range from 1 to 256, but the program wants to use an internal range of 0 to 255, set sub_size to 1.
block_size
For numeric-valued options, if this value is non-zero, it indicates a block size. Option values will be rounded down to the nearest multiple of this size if necessary. For example, if values must be even, set the block size to 2; handle_options() will round odd values down to the nearest even number.
app_type
This is reserved for application-specific use.
The my_opts array should have a my_option structure for each valid option, followed by a terminating structure that is set up as follows to indicate the end of the array:
{ NULL, 0, NULL, NULL, NULL, NULL, GET_NO_ARG, NO_ARG, 0, 0, 0, 0, 0, 0 }
When you invoke handle_options() to process the argument vector, it skips over the first argument (the program name) and then processes option arguments?that is, arguments that begin with a dash. This continues until it reaches the end of the vector or encounters the special "end of options" argument ('--' by itself). As it moves through the argument vector, handle_options() calls the helper function once per option to allow that function to perform any special processing. handle_options() passes three arguments to the helper function?the short option value, a pointer to the option's my_option structure, and a pointer to the argument that follows the option in the argument vector (which will be NULL if the option is specified without a following value).
When handle_options() returns, the argument count and vector will have been reset appropriately to represent an argument list containing only the non-option arguments.
The following is a sample invocation of show_opt and the resulting output (assuming that ~/.my.cnf still has the same contents as for the final show_argv example in the "Accessing Option File Contents" section earlier in this chapter):
% ./show_opt -h yet_another_host --user=bill x Original connection parameters: host name: (null) user name: (null) password: (null) port number: 0 socket name: (null) Original argument vector: arg 0: ./show_opt arg 1: -h arg 3: yet_another_host arg 3: --user=bill arg 4: x Modified argument vector after load_defaults(): arg 0: ./show_opt arg 1: --user=sampadm arg 2: --password=secret arg 3: --host=some_host arg 4: -h arg 5: yet_another_host arg 6: --user=bill arg 7: x Connection parameters after handle_options(): host name: yet_another_host user name: bill password: secret port number: 0 socket name: (null) Argument vector after handle_options(): arg 0: x
The output shows that the hostname is picked up from the command line (overriding the value in the option file) and that the username and password come from the option file. handle_options() correctly parses options whether specified in short-option form (such as -h yet_another_host) or in long-option form (such as --user=bill).
The get_one_option() helper function is used in conjunction with handle_options(). For show_opt, it is fairly minimal and takes no action except for the --help or -? options (for which handle_options() passes an optid value of '?'):
my_bool get_one_option (int optid, const struct my_option *opt, char *argument) { switch (optid) { case '?': my_print_help (my_opts); /* print help message */ exit (0); } return (0); }
my_print_help() is a client library routine that automatically produces a help message for you, based on the option names and comment strings in the my_opts array. To see how it works, try the following command; the final part of the output will be the help message:
% ./show_opt --help
You can add other cases to get_one_option() as necessary. For example, this function is useful for handling password options. When you specify such an option, the password value may or may not be given, as indicated by OPT_ARG in the option information structure. (That is, you can specify the option as --password or --password=your_pass if you use the long-option form or as -p or -pyour_pass if you use the short-option form.) MySQL clients typically allow you to omit the password value on the command line and then prompt you for it. This allows you to avoid giving the password on the command line, which keeps people from seeing your password. In later programs, we'll use get_one_option() to check whether or not a password value was given. We'll save the value if so, and, otherwise, set a flag to indicate that the program should prompt the user for a password before attempting to connect to the server.
You may find it instructive to modify the option structures in show_opt.c to see how your changes affect the program's behavior. For example, if you set the minimum, maximum, and block size values for the --port option to 100, 1000, and 25, you'll find after recompiling the program that you cannot set the port number to a value outside the range from 100 to 1000 and that values get rounded down automatically to the nearest multiple of 25.
The option processing routines also handle the --no-defaults, --print-defaults, --defaults-file, and --defaults-extra-file options automatically. Try invoking show_opt with each of these options to see what happens.
Now let's strip out from show_opt3.c, is as follows:
/* * client3.c - connect to MySQL server, using connection parameters * specified in an option file or on the command line */ #include <string.h> /* for strdup() */ int ask_password = 0; /* whether to solicit password */ static MYSQL *conn; /* pointer to connection handler */ } }; void print_error (MYSQL *conn, char *message) { fprintf (stderr, "%s\n", message); if (conn != NULL) { fprintf (stderr, "Error %u (%s)\n", mysql_errno (conn), mysql_error (conn)); } } my_bool get_one_option (int optid, const struct my_option *opt, char *argument) { switch (optid) { case '?': my_print_help (my_opts); /* print help message */ exit (0); case 'p': /* password */ if (!argument) /* no value given, so solicit it later */ ask_password = 1; else /* copy password, wipe out original */ { opt_password = strdup (argument); if (opt_password == NULL) { print_error (NULL, "could not allocate password buffer"); exit (1); } while (*argument) *argument++ = 'x'; } break; } return (0); }); } /* ... issue queries and process results here ... */ /* disconnect from server */ mysql_close (conn); exit (0); }
Compared to the client1, client2, and show_opt programs that we developed earlier, client3 does a few new things:
It allows a database to be selected on the command line; just specify the database after the other arguments. This is consistent with the behavior of the standard clients in the MySQL distribution.
If a password value is present in the argument vector, get_one_option() makes a copy of it and then wipes out the original. This minimizes the time window during which a password specified on the command line is visible to ps or to other system status programs. (The window is only minimized, not eliminated. Specifying passwords on the command line still is a security risk.)
If a password option was given without a value, get_one_option() sets a flag to indicate that the program should prompt the user for a password. That's done in main() after all options have been processed, using the get_tty_password() function. This is a utility routine in the client library that prompts for a password without echoing it on the screen. You may ask, "Why not just call getpass()?" The answer is that not all systems have that function?Windows, for example. get_tty_password() is portable across systems because it's configured to adjust to system idiosyncrasies.
client3 connects to the MySQL server according to the options you specify. Assume there is no option file to complicate matters. If you invoke client3 with no arguments, it connects to localhost and passes your UNIX login name and no password to the server. If instead you invoke client3 as shown in the following command, it prompts for a password (because there is no password value immediately following -p), connects to some_host, and passes the username some_user to the server as well as the password you type:
% ./client3 -h some_host -p -u some_user some_db
client3 also passes the database name some_db to mysql_real_connect() to make that the current database. If there is an option file, its contents are processed and used to modify the connection parameters accordingly.
The work we've done so far to produce client3 accomplishes something that's necessary for every MySQL client?connecting to the server using appropriate parameters. The process is implemented by the client skeleton, client3.c, which you can use as the basis for other programs. Copy it and add to it any application-specific details. That means you can concentrate more on what you're really interested in?being able to access the content of your databases. All the real action for your application will take place between the mysql_real_connect() and mysql_close() calls, but what we have now serves as a basic framework that you can use for many different clients. To write a new program, just do the following:
Make a copy of client3.c.
Modify the option-processing loop if you accept additional options other than the standard ones that client3.c knows about.
Add your own application-specific code between the connect and disconnect calls.
And you're done. | http://etutorials.org/SQL/MySQL/Part+II+Using+MySQL+Programming+Interfaces/Chapter+6.+The+MySQL+C+API/Client+3Getting+Connection+Parameters+at+Runtime/ | CC-MAIN-2018-09 | refinedweb | 4,255 | 52.8 |
//uNavDir.h,v 1.9 2007/03/25 20:48:31 leonb Exp $ // $Name: debian_version_3_5_20-7 $ #ifndef _DJVUNAVDIR_H #define _DJVUNAVDIR_H #ifdef HAVE_CONFIG_H #include "config.h" #endif #if NEED_GNUG_PRAGMAS # pragma interface #endif #include "GString.h" #include "GThreads.h" #include "GURL.h" #ifdef HAVE_NAMESPACES namespace DJVU { # ifdef NOT_DEFINED // Just to fool emacs c++ mode } #endif #endif class ByteStream; /** @name DjVuNavDir.h Files #"DjVuNavDir.h"# and #"DjVuNavDir.cpp"# contain implementation of the multipage DjVu navigation directory. This directory lists all the pages, that a given document is composed of. The navigation (switching from page to page in the plugin) is not possible before this directory is decoded. Refer to the \Ref{DjVuNavDir} class description for greater details. @memo DjVu Navigation Directory @author Andrei Erofeev <eaf@geocities.com> @version #$Id: DjVuNavDir.h,v 1.9 2007/03/25 20:48:31 leonb Exp $# */ //@{ //***************************************************************************** //********************* Note: this class is thread-safe *********************** //***************************************************************************** /** DjVu Navigation Directory. This class implements the {\em navigation directory} of a multipage DjVu document - basically a list of pages that this document is composed of. We would like to emphasize, that this is the list of namely {\bf pages}, not {\bf files}. Any page may include any number of additional files. When you've got an all-in-one-file multipage DjVu document (DjVm archive) you may get the files list from \Ref{DjVmDir0} class. The \Ref{DjVuNavDir} class can decode and encode the navigation directory from {\bf NDIR} IFF chunk. It's normally created by the library during decoding procedure and can be accessed like any other component of the \Ref{DjVuImage} being decoded. In a typical multipage DjVu document the navigation directory is stored in a separate IFF file containing only one chunk: {\bf NDIR} chunk. This file should be included (by means of the {\bf INCL} chunk) into every page of the document to enable the navigation. */ 00120 class DjVuNavDir : public GPEnabled { private: GCriticalSection lock; GURL baseURL; GArray<GUTF8String> page2name; GMap<GUTF8String, int> name2page; GMap<GURL, int> url2page; protected: DjVuNavDir(const GURL &dir_url); DjVuNavDir(ByteStream & str, const GURL &dir_url); public: int get_memory_usage(void) const { return 1024; }; /** Creates a #DjVuNavDir# object. #dir_url# is the URL of the file containing the directory source data. It will be used later in translation by functions like \Ref{url_to_page}() and \Ref{page_to_url}() */ 00139 static GP<DjVuNavDir> create(const GURL &dir_url) {return new DjVuNavDir(dir_url);} /** Creates #DjVuNavDir# object by decoding its contents from the stream. #dir_url# is the URL of the file containing the directory source data. */ 00145 static GP<DjVuNavDir> create(ByteStream & str, const GURL &dir_url) { return new DjVuNavDir(str,dir_url); } virtual ~DjVuNavDir(void) {}; /// Decodes the directory contents from the given \Ref{ByteStream} void decode(ByteStream & str); /// Encodes the directory contents into the given \Ref{ByteStream} void encode(ByteStream & str); /** Inserts a new page at position #where# pointing to a file with name #name#. @param where The position where the page should be inserted. #-1# means to append. @param name The name of the file corresponding to this page. The name may not contain slashes. 
The file may include other files. */ void insert_page(int where, const char * name); /// Deletes page with number #page_num# from the directory. void delete_page(int page_num); /// Returns the number of pages in the directory. int get_pages_num(void) const; /** Converts the #url# to page number. Returns #-1# if the #url# does not correspond to anything in the directory. */ int url_to_page(const GURL & url) const; /** Converts file name #name# to page number. Returns #-1# if file with given name cannot be found. */ int name_to_page(const char * name) const; /** Converts given #page# to URL. Throws an exception if page number is invalid. */ GURL page_to_url(int page) const; /** Converts given #page# to URL. Throws an exception if page number is invalid. */ GUTF8String page_to_name(int page) const; }; //@} #ifdef HAVE_NAMESPACES } # ifndef NOT_USING_DJVU_NAMESPACE using namespace DJVU; # endif #endif #endif | http://djvulibre.sourcearchive.com/documentation/3.5.20-7ubuntu2/DjVuNavDir_8h-source.html | CC-MAIN-2018-09 | refinedweb | 630 | 58.08 |
Odoo Help
This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers.
Thanks for you answer. I managed to make the value unique by inheriting the product module. This page explains how to write a costum module.
Here the python file (if someone wants to do the same):
from openerp.osv import fields, osv
class unique_reference(osv.osv):
_inherit = "product.product"
_sql_constraints = [
('uniq_defaut_code', 'unique(default_code)', "A external reference already exists with this name . External reference must be unique!"),
]
unique_reference()
You need to add some custom code, inheriting the product module and adding a constraint to the field. That will make it unique, even with a small message telling the user their input should be unique.
By default I don't think there is any module that does this for you, although I would be happy to proven! | https://www.odoo.com/forum/help-1/question/change-internal-reference-to-unique-64176 | CC-MAIN-2016-44 | refinedweb | 157 | 68.26 |
Details
- Type:
Sub-task
- Status: Open
- Priority:
Minor
- Resolution: Unresolved
- Affects Version/s: 2.1.8
-
- Component/s: Infrastructure
- Labels:None
Description
actual implemention of MultiResourceItemReader does not permit usage with step scope, to get it working the getCurrentResource needs to be added to an interface (itemstream perhaps?) or a new interface has to be introduced
Activity
- All
- Work Log
- History
- Activity
- Transitions Summary
actually i would like an overhaul here, because:
- getCurrentResource works reliable only if the user knows some spring batch internals, e.g. which listener to use (StepExecutionListener would be almost entirely wrong), when open/read/update is called, etc.
- rather strange configuration to get it working with a stepScoped MRIR (..and stepScoped usage seems to be the standard here) and not so trivial coding to extract the target from the scoped proxy, actually i did not find an example for that on spring-core/batch doc
some suggestions:
must-have:
- add more javaDoc to the getCurrentResource(...), right now there is simply none:
- user guide entry for usage of step scoped beans as reference in other beans
nice-to-have:
- extend itemStream interface
- or add multi-resource-reader interface
- and/or change internal implementation to save current.resource.0,1, and so on in executioncontext (loosely following the MultiResourcePartitioner implementation)
Fair enough, but no showstoppers there as far as I can see.
Can you clarify a bit what you mean by knowing which listener to use?
I think you are on the wrong track if you are thinking about how to extract targets from AOP proxies. All you need to do is set proxy-target-class=true on the StepScope and you will be able to inject MultiResourceItemReader as a concrete type, not its interface.
>> Can you clarify a bit what you mean by knowing which listener to use?
to make sense it has to be at least a ChunkListener, StepExecutionListener and its after/beforeStep methods are of no use in combination with MultiResourceItemReader.getCurrentResource (except - and maybe - in case of restart, beforeStep might call reader.open(...))
>> you will be able to inject MultiResourceItemReader as a concrete type, not its interface
can't get that to work, keep getting
java.lang.IllegalStateException: Cannot convert value of type [$Proxy8....
but without proxy-target-class=true i can't extract the target so i'm halfway, is aspectj/cglib needed for this? until now i have none of these on my classpath
If you see "[$Proxy*..." you are not using proxy-target-class=true. You will need CGlib on your classpath - the logs will tell you so if you set proxy-target-class=true and try and start an application that uses step scope.
Honestly the getCurrentResource() method wasn't meant to be public, it's actually an accidentally leaked implementation detail. Given it has found its use cases it would be interesting to hear more about them and perhaps we can improve the MRIR based on that.
is there a working test case for this ? right now i'm getting two stepScope Objects - yes i know don't mix namespace and stepScope bean, but i don't know how to set the proxyTargetClass to true while using the namespace and getting stepScope implicitly
I'd also like to see some better support for accessing the file currently being processed, putting it in the execution context seems pretty reasonable, though in my use case it would seem more direct to get the information from a FlatFileParseException directly.
cglib is on my classpath and I set
<bean class="org.springframework.batch.core.scope.StepScope">
<property name="proxyTargetClass" value="true"/>
</bean>
and
<bean id="reader" class="org.springframework.batch.item.file.MultiResourceItemReader" scope="step" >
but still got errors trying to perform constructor injection of MRIR. Here are the two different errors, with and without setting proxyTargetClass=true. The [$Proxy4 appears in both cases, though the first shows the cglib is being used, so I'm not so sure that seeing [$Proxy is an indicator cglib isn't being used.
with proxyTargetClass=true'; nested exception is java.lang.IllegalStateException: Cannot]: no matching editors or conversion strategy found
without proxyTargetClass=false'; nested exception is java.lang.IllegalStateException: Cannot]: no matching editors or conversion strategy found
@Mark: maybe you are seeing the same problem as Michael (duplicate StepScope beans in the context and the wrong one is being used)?
@Michael: there is a test for StepScope registration (AutoRegistering*Tests), but it doesn't test what happens if there is one already defined I think. Feel free to propose an improvement (but please open another JIRA ticket).
@Mark: I slightly prefer the idea of extending the exception to carry the resource, to that of introducing a new interface to expose a method that (as Robert said) is not supposed to be public. It seems like it could be done, but not in a point release. Hopefully 2.2 will be the next release though, so it might be a good opportunity to do something in the exception imlementation.
I never liked that getter. Anyway, you can still use it with StepScope if you set proxy-target-class=true on the scope instance (should be in the user guide). If you really don't like that then shout, or else I'll resolve this issue. | https://jira.springsource.org/browse/BATCH-1831 | CC-MAIN-2014-10 | refinedweb | 879 | 51.18 |
Tax
Have a Tax Question? Ask a Tax Expert
Hi,
Please don't shoot the messenger here, but, yes, given the information you've provided, she has a right to deduct 1/2/ of the mortgage interest
And taxes
Typically the question we get here is HOW do we split the mortgage interest and real-estate taxes
Here's an excellent article on that:
Given the fact that you had no choice but to file separately, (to file jointly would have required her signature, and the claim that she was not available and that one together would suggest REAL trouble, forgery)
The intuitive thought here is that you should just file an amended return
and make the adjustment ... that would be the cleanest, most efficient, and the solution with the least acrimony
If you wanted to fight the battle, you MIGHT (and this one might be a reach) try to use something called innocent spouse relief and ask to be granted the deductions, (because you had no way of contacting her, and her behavior lead you to believe that shw would abandon her other obligations as well
But UNDERSTAND, that innocent spouse relief is TYPICALLY used to grant relief from joint and several liability on joint returns when you have no control over the situation
Again, a reach
another though, take no action, and then use the aforementioned facts to ask for a first time penalty abatement if you end up having a small penalty for the eventual additional taxes owed
And that is really all that will happen here ... given the facts as you presented them here, you had no intent, so this will not be a tax evasion issue, it would simply reslut on your owing addition taxes and maybe a small late penalty if it isn't resolved fairly quickly
... situation isn't really all that "ugly" If you can't afford the additional taxes (and POSSIBLE penalty), as long as the amount owed is less than $50,000 you can set up an installment agreement online here:
Before we go further, what did you mean by "forgery"?
Just to come full circle, she DOES have the ability to claim the deductions, and you can simply amend your return to reduce them by half
If she is not there..... and a joint return floated in, someone (presumably you) would have to have forged her signature on the joint return
all that is to say ... you had no choice but to file separately
and further, that it would have been reasonable to assume that she was not going to file, given her abandonment of all the other joint obligations ... You may want to have your attorney use that logic to argue that you have a right to the deductions
I am a lawyer. Not a tax lawyer. I still don't understand you reference to forgery. I obediently filed my separate return on time when I couldn't find my wife and asked H&r Block to take any deductions to which I was entitled. I was willing to file a joint return but my wife went off the radar and couldn't be found. If she refuses to pay taxes on time am I not entitled to file in accordance with H&R block's advice. I can't afford to renogiate this issue. If I gave up half the deductions for mortgage interest and real estate taxes it would be suicidal in my financial situation. Since the mortgge inteest was paid from a joint account doesn't that allow me to claim that I paid the mortgage. After all either party could have emptied that account at their will. My lawyer has already overcharged me by thousands for what I consider to be an abominable piece of work. I can't afford to ask him any more questions.
You're over-reacting.
I am agreeing with you ... the forgery is the ONLY WAY YOU COULD HAVE filed a joint retun .... SHE WASN'T THERE
First you say that you don't want to re-negotiate, then you say that your having access to a joint return gives you the right to deduct ALL of the mortgage interest. ... If the money came from a joint account, that only reinforces that she had a right to deduct 1/2
pardon me ... access to a "joint account"
That one's a reach too
My apologies, but YES, she has a right to deduct, especially if the actual payment was made from joint funds, even ore so if her name is XXXXX XXXXX loan
If she (1) is on the loan (2) is on the deed to the securing asset, the home (3) owns 1/2 interest in the property of your checking account, you have no case (Again, hopefully having all the facts will allow you to "see around some corners" here.)
But again, the income tax piece here, is simply the reduction by 1/2 of the property tax and the mortgage interest. (and the cost to you will only be your marginal tax rate x that number) ... if this is a reduction in tax deductions of say 20,000 and you are in the 35% bracket, then the cost will be $7000 in tax
Questions?
I see your options as (1) simply refusing to sign anything saying she has the RIGHT to 1/2, since she wasn't there to file the joint return (they could buckle) ... (2) amend your return and ask for an installment agreement on the additional funds owed ... If she's on the loan, is on the deed and has joint onwership in the account from which the mortgage interest was paid, she DOES have a right to 1/2 of this deductions
Are you still with me here? We've been at this for and hour and 45 minutes, If we go another 15 that makes my pay about $15 per hour
How else can I help you today?
Yes I'm with you but also on the phone with IRS. BaCK IN A MOMENT
Very good, I'll wait ... Thank you
sorry for the delay. IRS can be quite wordy and confusing. She is on the loan but was never on the deed. She and I owned the checking account jointly out of which the mortgage payments were made. The ultimate question is whether (without any knowledge of her finances at all) I should now propose to her lawyers that we amend and file jointly or fight out the fact that she made herself unavaible to file timely and I was forced to file separately and take all deductions H&R block told me were legitimate and proper. If relevant, my income is approximately $30,000 a year from SS and drawing down my IRA and other investment income. She is employed has several pensions and earns approximately $100,000 a year.
Ahhh, it certainly does affect her more financially: Here are the brackets for filing separate:
So, as you can see she has a lot more to lose here, because those deduction are worth at least 33% to her ... and 15% to you
Tat may be something you can use .... BUT, I think you've distilled it well ... maybe you can counter with taking a larger portion of the deduction that would have the effect of equalizing your costs
So, what should I do? I am barely living on the poverty line and using up the last of my investment income. In five to seven years I will be a ward of the State of Nevada. I can't live on SS of $970 a month. Money spent for any purpose at this stage of life simply shortens my life span as I will not permit myself to become a ward of the State of Nevada.
All I can contribute it what the IRS should have told you, that since she has liability for the loan (I think that's what it really turns on) she has as much right to deduct the interest as you ... and I think that the policy consideration of the law here would be that you are supposed to work these thing out ... as you probably remember from your law school days, courts expect married couples to work things out and don't usually enforce contracts ... and please don't think I'm making light here, ...
But I think you've answered your own questions .. If you're not willing to, you need to fight on this one ... as I said very early on here
You had to do what you did because she forced the issue byt leaving... irresponsibly
Further H&R Bolck advised you to take the deductions
What will have to be balanced here is that irresponsibility, with the fact that under NORMAL circumstances she is entitled to the mortgage interest deduction too
I don't think you are making light. My only bargaining leverage seems to be that her lawyer is obsessed with getting a release from me for whatever rights to her pension I may have. My lawyer is too busy billing time to actually act as a responsible, ethical lawyer but his billing system does not depend on accomplishment. Like most lawyers its strictly by the hour which the client can't really monitor.
OK, now I am not a divorce lawyer but given what you said about your respective incomes, you certainly have a case to whatever pension rights are there ... I don't want to presume that I understand everything here, but maybe you sue for divorce and ask for a settlement that reflects what you both brought to the marriage
Have you though about asking a good divorce lawyer (a different one) to look at this on a contingency basis?
Thank you. We've probably gone as far as we can on this subject. I have never known a divorce lawyer to take a contingency case. They are too greedy for that unless you have a slam dunk case; which obviously I don't have. Once I get the separation agreement I will file in Nevada and represent myself. But in Nevada to get a quickie divorce you must have a signed and notarized Marital Separation Agreement. And there is the rub.
One other subject, when I signed onto this website it said the fee was $66. Is that still correct?
Sounds like the fight is right here right now then .. he's trying to scare you with the TAX word, but you've done nothing wrong... My gut say stick to your guns
That's right 33 for me 33 for them
Thank you. I will give you a big smilie. | http://www.justanswer.com/tax/7yb5s-wife-abandoned-home-early-march-2012-not.html | CC-MAIN-2016-26 | refinedweb | 1,786 | 73.1 |
blocksThere:. You will find the class constraint.
3 LaTeX blocks and the Writer mon
5 Packages
LaTeX, in addition to its predefined commands, has a big number of packages that increase its power. HaTeX functions for some of these packages are defined in separated modules, one module per package. This way, you can import only those functions you actually need. Some of these modules are below explained.
5.1 InputencThis package is of vital importance if you use non-ASCII characters in your document. For example, if my name is Ángela, the Á character will not appear correctly in the output. To solve this problem, use the
import Text.LaTeX.Base import Text.LaTeX.Packages.Inputenc thePreamble :: LaTeX thePreamble = documentclass [] article <> usepackage [utf8] inputenc <> author "Ángela" <> title "Issues with non-ASCII characters"
5.2 GraphicxWith the that contributed to it with patches, opinions or bug reports. Thanks.:. | https://wiki.haskell.org/index.php?title=HaTeX_User's_Guide&oldid=45494 | CC-MAIN-2017-26 | refinedweb | 147 | 50.33 |
gfxRGBA has some things that can be cleaned out. Not Used: 265 #ifdef MOZILLA_INTERNAL_API 266 /** 267 * Convert this color to a hex value. For example, for rgb(255,0,0), 268 * this will return FF0000. 269 */ 270 // XXX I'd really prefer to just have this return an nsACString 271 // Does this function even make sense, since we're just ignoring the alpha value? 272 void Hex(nsACString& result) const { 273 nsPrintfCString hex(8, "%02x%02x%02x", PRUint8(r*255.0), PRUint8(g*255.0), PRUint8(b*255.0)); 274 result.Assign(hex); 275 } 276 #endif and 41 #ifdef MOZILLA_INTERNAL_API 42 #include "nsPrintfCString.h" 43 #endif 44 Not implemented, and also not used: 222 /** 223 * Initialize this color by parsing the given string. 224 * XXX implement me! 225 */ 226 #if 0 227 gfxRGBA(const char* str) { 228 a = 1.0; 229 // if aString[0] is a #, parse it as hex 230 // if aString[0] is a letter, parse it as a color name 231 // if aString[0] is a number, parse it loosely as hex 232 } 233 #endif 234 Not used: PACKED_ABGR_PREMULTIPLIED, but PACKED_ARGB_PREMULTIPLIED is used) PACKED_XBGR PACKED_XRGB
Created attachment 419555 [details] [diff] [review] V1: Cleanup of gfxRGBA in gfxColor.h Note, the removal of #include nsPrintfCString.h requires the addition of #include nsString.h in gfxPlatform, as that one uses nsString.
I'd prefer if you got Vlad and/or Stuart to review these changes because they put the unused code in.
Comment on attachment 419555 [details] [diff] [review] V1: Cleanup of gfxRGBA in gfxColor.h Doesn't look like they are going to do their review duty, back to you.
Comment on attachment 419555 [details] [diff] [review] V1: Cleanup of gfxRGBA in gfxColor.h Why not.
Try run for bb4988e00b64 is complete. Detailed breakdown of the results available here: Results (out of 17 total builds): failure: 17 Builds available at
Created attachment 554812 [details] [diff] [review] Patch v2 Turned out that PACKED_XRGB was used after all. I added it back and fixed up a couple of #include problems. Sorry about that.
(It does pass try now: <>.) | https://bugzilla.mozilla.org/show_bug.cgi?id=537223 | CC-MAIN-2017-34 | refinedweb | 348 | 66.64 |
Asked by:
'Add' and no extension method 'Add' accepting a first argument of type 'example.Data.ExamStage.Staff'
Question
Hi,
I am new to .NET, C# when I run my ETL with C# code project, getting the following error:
'example.Data.ExamStage.Staff' does not contain a definition for 'Add' and no extension method 'Add' accepting a first argument of type 'example.Data.ExamStage.Staff' could be found (are you missing a using directive or an assembly reference?) C:\example\ContactInfo\ContactInfoDataLoader.cs
Please do the needful.
Thanks in advance,
Krish
Monday, July 30, 2012 11:45 AM
- Moved by Rob Va - MSFT Tuesday, August 28, 2012 6:34 PM (From:FAST Products (pre-2010))
All replies
Hi Krish,
As this forum is for Fast specific issues, I will be moving this to a forum dealing with .NET/C# questions.
Thanks!
Rob Vazzana | Sr Support Escalation Engineer | US Customer Service & SupportTuesday, August 28, 2012 6:32 PM
Hi Krish,
Welcome to the MSDN Forum.
Based on my understanding, it seems that you need to redesign your class "example.Data.ExamStage.Staff", add one more method "Add".
Best regards,
Ghost,
Call me ghost for short, Thanks
To get the better answer, it should be a better question.Wednesday, August 29, 2012 7:36 AM
- Show us the code for 'Add' and how u r calling it.
Please mark this post as answer if it solved your problem. Happy Programming!Monday, September 3, 2012 2:58 AM
Error is that C# compiler can not resolve method or extension method Add that takes example.Data.ExamStage.Staff as an parameter so you are probably missing one, are not referencing required assembly or have not resolved the correct namespace.
If you are using newest Visual Studio then by hovering over the Add that is not found, click context menu open and look for Resolve. This should resolve problem if you are just missing the using statement of the namespace. If there is no resolve option, then error is somewhere else.
Other option is that you need to add reference to assembly that contains class that has correct Add method you can look under the References section if there is any invalid reference, usually those are reference that can not be resolved what might occur if for example you have copied project or moved the referenced assembly.
Last option is that you need to implement the correct method, but surely you should know if that's the case.Tuesday, September 11, 2012 12:48 PM | https://social.msdn.microsoft.com/Forums/en-US/986dd4d2-24b0-44d6-bcfd-0fa3a200028f/add-and-no-extension-method-add-accepting-a-first-argument-of-type?forum=clr | CC-MAIN-2021-39 | refinedweb | 420 | 61.16 |
- NAME
- SYNOPSIS
- DESCRIPTION
- Operating-Modes
- Runtime configuration
- EMBPERL_FILESMATCH
- EMBPERL_ALLOW (only 1.2b10 and above)
- EMBPERL_MAIL_ERRORS_TO
- EMBPERL_COOKIE_NAME
- EMBPERL_COOKIE_DOMAIN
- EMBPERL_COOKIE_PATH
- EMBPERL_COOKIE_EXPIRES
- EMBPERL_SESSION_CLASSES
- EMBPERL_SESSION_ARGS
- SYNTAX
- Variable scope and cleanup
- Predefined variables
- Session handling
- (Safe-)Namespaces and opcode restrictions
- Utility Functions
- Input/Output Functions
- Inside Embperl - How the embedded Perl code is actually processed
- Performance
- Bugs
- Compatibility
- Support
- References
- Author
- See Also
NAME
HTML::Embperl - Perl extension for embedding Perl code in HTML documents
SYNOPSIS
Embperl is a Perl extension module which gives you the power to embed Perl code directly in your HTML documents (like server-side includes for shell commands).
DESCRIPTION
Operating-Modes
Embperl can operate in one of four modes:
Offline.

As a CGI script. If you define the CGI script (embpcgi.pl) as a handler for a specific file extension or directory, the httpd will invoke the Embperl interpreter directly for all matching documents.
Example of Apache srm.conf:
<Directory /path/to/your/html/docs>
Action text/html /cgi-bin/embperl/embpcgi.pl
</Directory>
NOTE 1: For security reasons, embpexec.pl must no longer be used as a CGI script!
NOTE 2: CGI scripts are not very secure. You should consider using EMBPERL_ALLOW to restrict access to the right documents.
From mod_perl (Apache httpd):
SetEnv EMBPERL_DEBUG 2285

Alias /embperl /path/to/embperl/eg

<Location /embperl/x>
SetHandler perl-script
PerlHandler HTML::Embperl
Options ExecCGI
</Location>
Another possible setup (for Apache 1.3bX see below).
NOTE: Since <Files> does not work the same in Apache 1.3bX as it does in Apache 1.2.x, you need to use <FilesMatch> instead.
<FilesMatch ".*\.epl$">
SetHandler perl-script
PerlHandler HTML::Embperl
Options ExecCGI
</FilesMatch>
By calling HTML::Embperl::Execute (\%param). Embperl can also be invoked from your own Perl module or script by calling the Execute function, for example to process a different file from within an Embperl document.
There are two forms you can use for calling Execute: a short form, which takes only a filename and optional additional parameters, and a long form, which takes a hash reference as its argument. The long form gives you the chance to vary the parameters according to the job that should be done.
(See eg/x/Excute.pl for more detailed examples):
- inputfile. The file which should be used as the source.
- import. A value of zero tells Embperl not to execute the page, but to define all subroutines found inside. This is necessary before calling them with Execute via the sub parameter, or before a later import.
A value of one tells Embperl to define the subroutines inside the file (if not already done) and to import them as Perl subroutines into the current namespace.
See the [$ sub $] metacommand and the section about subroutines for more information.
- cleanup = 1
Immediate cleanup
- param. Can be used to pass parameters to the Embperl document and back; inside the document they are accessible via the array @param.
- uri
The URI of the request. Only needed for the virtlog feature.
- compartment
Same as "EMBPERL_COMPARTMENT" (see below).
- input_func
Same as "EMBPERL_INPUT_FUNC" (see below). Additionaly you can specify an code reference to an perl function, which is used as input function or an array reference, where the first element contains the code reference and further elements contains additional arguments passed to the function.
- output_func
Same as "EMBPERL_OUTPUT_FUNC" (see below). Additionaly you can specify an code reference to an perl function, which is used as output function or an array reference, where the first element contains the code reference and further elements contains, as long as there are any.
Helper functions for Execute
- HTML::Embperl::Init ($Logfile, $DebugDefault)
This function can be used to set up the logfile path and (optionally) the default debug flags.
- HTML::Embperl::ScanEnvironement (\%params)
Scans %ENV and sets up %params for use by Execute. All Embperl runtime configuration options are recognized, except EMBPERL_LOG.
EXAMPLES for Execute:
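A minimal sketch (filenames, parameter values, and the subroutine name are illustrative):

From your own Perl script or module, using the long form:

use HTML::Embperl ;
HTML::Embperl::Execute ({ inputfile => '/path/to/template.html',
                          param     => ['value1', 'value2'] }) ;

From inside an Embperl document, using the short form:

[- Execute ('footer.html') -]

Defining the subroutines of a library file and importing them (import => 1):

[- Execute ({ inputfile => 'mylib.htm', import => 1 }) -]
[+ some_sub_from_mylib () +]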
Runtime configuration

The runtime configuration is done by setting environment variables (under Apache, e.g. with SetEnv or PerlSetEnv directives).

EMBPERL_FILESMATCH

Specifies a regular expression which the filename must match to get processed by Embperl. This is useful if you have Embperl documents and other files (e.g. GIFs) cohabitating in the same directory.

EMBPERL_ALLOW (only 1.2b10 and above)

Specifies a regular expression which the filename/pathinfo/query string must match to get processed by Embperl. Especially in a CGI environment this can be useful to make a server more secure.
EMBPERL_COMPARTMENT
Gives the name of the compartment from which to take the opcode mask. (See the chapter about "(Safe-)Namespaces and opcode restrictions" for more details.)
EMBPERL_ESCMODE
Specifies the initial value for "$escmode" (see below).
EMBPERL_LOG
Gives the location of the log file. This will contain information about what Embperl is doing. How much information is logged depends on the debug flag settings (see EMBPERL_DEBUG below). The default is /tmp/embperl.log.
EMBPERL_PACKAGE
The name of the package where your code will be executed. By default, Embperl generates a unique package name for every file. This ensures that variables and functions from one file can not affect those from another file. (Any package's variables will still be accessible with explicit package names.)

EMBPERL_OPTIONS

A bitmask which specifies various options; add together the values of all options you want to enable:

- optReturnError = 262144

Tells Embperl not to send its own error page in case of failure; instead, the error is returned back to Apache or the calling program. When running under mod_perl this gives you the chance to use the Apache ErrorDocument directive to show a custom error document.
- optRawInput = 16

Tells Embperl not to preprocess the source. Set this option when the source is written with an ASCII editor which doesn't escape HTML special characters automatically (e.g., `<' appears as `<' in the source). If your editor escapes special characters for you (writing `&lt;' when you type `<'), you should not set this option. Embperl will automatically convert the HTML input back to the Perl expressions as you wrote them.
- optEarlyHttpHeader = 64.
- optDisableChdir = 128
Without this option, Embperl changes the currect directory to the one where the script resides. This gives><SELECT><UL>)
- not be insert output data by using the [+ ... +] block, or printing to the filehandle OUT.
-.
-Allow removing of spaces and empty lines from the output. This is usefull for other sources then HTML.
- optOpenLogEarly = 2097152 (only 1.2b5 and above)
This option causeses
Lists>
- dbgDefEval = 16384
Shows every time new Perl code is compiled.
- dbgCacheDisable = 32768
Disables the use of the p-code cache. All Perl code is recompiled every time. (This should not be used in normal operation as it slows down Embperl dramatically.) This option is only here for debugging Embperl's cache handling. There is no guarantee that Embperl behaves the same with and without cache (actually is does not!)
-apces.
EMBPERL_INPUT_FUNC, $mtime, additional parameters...) ;
- $r
Apache Request Record (see perldoc Apache for details)
- $in
a reference to a scalar, to which the input should be returned.
Example: open F, "filename" ; local $\ = undef ; $$in = <F> ; close F ;
- $mtime
a reference to a scalar, to which the modification time should be returned.
Example: $$mtime = -M provides you with the possibility to chain Embperl and other modules together.
EMBPERL_OUTPUT_FUNC...) ;
- $r
Apache Request Record (see perldoc Apache for details)
- $out provides you with the possibility to chain Embperl and other modules together.
EMBPERL_MAILHOST
Specifies which host the MailFormTo function uses as SMTP server. Default is localhost.. Default is none.
EMBPERL_SESSION_CLASSES
Space separted list of object store and lock manager for Apache::Session (see "Session handling")
EMBPERL_SESSION_ARGS
List of arguments for Apache::Session classes (see "Session handling") Example:
PerlSetEnv EMBPERL_SESSION_ARGS "DataSource=dbi:mysql:session UserName=www Password=secret"
SYNTAX.
[+ Perl code +].
[- Perl code -]
Executes the Perl code, but deletes the whole command from the HTML output.
Examples:
[- $a=1 -] Set the variable $a to one. No output will be generated. [- use SomeModule -] You can use other modules. [- .
[! Perl Code !]
Same as [- Perl Code -] with the exception that the code is only executed at the first request. This could be used to define subroutines, or do one-time initialization.
[* Perl code *]
(only version 1.2b2 or higher)
This is similar to [- Perl Code -], the main difference is, while [- Perl Code -], has always it's own scope, all [* Perl code *] blocks runs in the same scope. This gives you the possibilty to define "local" variables with a scope of the whole page. Normaly you don't need to use local, because Embperl takes care of separate namespaces of different documents and cleanup after the request is finished, but in special cases it's necessary. For example if you want recursivly: Since the execution of [- ... -] and metacommands is controlled by Embperl, there is a much better debugging output in the logfile for this two ones. Also no restriction where they can be used apply to meta-commands. You can use them anywhere even inside of html tags that are interpreted by Embperl.
[# Some Text #] (Comments)
.
[$ Cmd Arg $] (Meta-Commands)
Execute an Embperl metacommand. Cmd can be one of the following. (Arg varies depending on <Cmd>).
- if, elsif, else, endif.
- while, endwhile.
- do, until
Executes a loop until the Arg given to until is true.
Example: [- $i = 0 -] [$ do $] [+ $i++ +] <BR> [$ until $i > 10 $]
- foreach, endforeachatis change by the browser if the user doesn't behave exactly as you expect. Users have a nasty habit of doing this all of the time. Your program should be able to handle such situations properly.
- var
The var command declares one or more variables for use within this Embperl document and sets the strict pragma. The variable names must be supplied.
- sub
.
HTML Tags
Embperl recognizes the following HTML tags specially. All others are simply passed through, as long as they are not part of a Embperl command.
- TABLE, /TABLE, TR, /TR the expressions in which $row or $cnt occurs is/are defined.
Embperl repeats all text between <tr> and </tr>, as long the expressions in which $col or $cnt occurs is.
- TH, und
.)
- TEXTAREA, /TEXTAREA
The
TEXTAREAtag is treated exactly like other input fields.
- META HTTP-EQUIV=
"); -]
- A, EMBED, IMG, IFRAME, FRAME, LAYER
The output of perl blocks inside the
HREFattribute of the
ATags and the
SRCattribute of the other Tags are URL escaped instead of HTML escaped. (see also $escmode). Also when inside such a URL, Embperl expands array refernces to URL paramter syntax. Example:
[- %A = (A => 1, B => 2) ; @A = (X, 9, Y, 8, Z, 7) -] <A HREF="?[+ [ %A ] +]"> <A HREF="?[+ \@A +]">
is expanded by Embperl to
<A HREF=""> <A HREF="">
Variable scope and cleanup expections ; } !]
Predefined variables
Embperl has some special variables which have a predefined meaning.
%ENV
Contains the environment as seen from a CGI script.
%fdat informations provied by the CGI.pm uploadInfo function. To get the filename just print out the value of the correspondig are not supported anymore.
) fill the %udat hash from Apache::Session with just the same values as you have stored for that user. (See also "Session handling")
%mdat (only 1.2b2 or higher) just the same values as you have stored within the last request to that page. (See also "Session handling")
$row, $col
Row and column counts for use in dynamic tables. (See "HTML tag table".)
$maxrow, $maxcol
Maxium:
end of table
- $tabmode = 1
End when one of the expressions with $row becomes undefined. The row containing the undefined expression is not displayed. Only those expression are observed which contains an access to the varibale $row.
- $tabmode = 2
End when an expression with $row becomes undefined. The row containing the undefined expression is displayed.
- $tabmode = 4
End when $maxrow rows have been displayed.
end of row
- $tabmode = 16
End when one of the expression with $col becomes undefined. The column containing the undefined expression is not displayed. Only those expression are observed which contains an access to the varibale ).
- $escmode = 3
The result of a Perl expression is HTML-escaped (e.g., `>' becomes `>') in normal text and URL-escaped (e.g., `&' becomes `%26') within of
A,
EMBED,
IMG,
IFRAME,
FRAMEand
LAYERtags.
- $escmode = 2
The result of a Perl expression is always URL-escaped (e.g., `&' becomes `%26').
- $escmode = 1 an location header Embperl will automaticly set the status to 301 (Redirect). Example:
[- $http_headers_out{'Location'} = "" -]
see also META HTTP-EQUIV=
$optXXX $dbgXXX:
- $optDisableVarCleanup
-
- $optSafeNamespace
-
- $optOpcodeMask
-
- $optDisableChdir
-
- $optEarlyHttpHeader
-
- $optDisableFormData
-
- $optAllFormData
-
- $optRedirectStdout
-
- $optAllowZeroFilesize
-
- $optKeepSrcInMemory
-
%CLEANUP
Embperl cleanups up only variables with are defined within the Embperl page. If you want Embperl to cleanup addtional variables you can add them to the hash %CLEANUP, with the key set to the variable name and the value set to one. The other way round you could prevent Embperl from cleaning up some variables, by adding them to this hash, with a values of zero.
Session handling
From 1.2b1 and higher Embperl is able to handle per user sessions for you. You can store any data in the %udat hash and if the same user request again an Embperl document, you will see the same values in that hash again.
From 1.2b2 and higher Embperl is able to handle per module/page persitent data for you. You can store any data in the %mdat hash and if any request comes to the same Embperl document, you will see the same values in that hash again.
To configure Embperl to do session management for you, you must have installed Apache::Session (1.00 or higher) and tell Embperl which storage and locker classes you would like to use for Apache::Session. This is done by setting the environement variable
EMBPERL_SESSION_CLASSES. You may have a startup.pl for your httpd which looks like this:
BEGIN { $ENV{EMBPERL_SESSION_CLASSES} = "DBIStore SysVSemaphoreLocker" ; $ENV{EMBPERL_SESSION_ARGS} = "DataSource=dbi:mysql:session UserName=test" ; } ; use HTML::Embperl ;
For Solaris it's neccessary to set the
nsems Argument if you use SysVSemaphoreLocker
$Apache::Session::SysVSemaphoreLocker::nsems = 16;
You may also put this in the httpd/srm.conf:
PerlSetEnv EMBPERL_SESSION_CLASSES "DBIStore SysVSemaphoreLocker" PerlSetEnv EMBPERL_SESSION_ARGS "DataSource=dbi:mysql:session UserName=test" PerlModule HTML::Embperl ;
EMBPERL_SESSION_ARGS is a space separated list of name/value pairs, which gives additional arguments for Apache::Session classes.
NOTE: The above configuration works only with Embperl 1.2b11 or above. The way Apache::Session was used in earlier versions still works, but I have removed the documentation to avoid confusion. Changes are, that you don't need to load Apache::Session anymore on your own and that Apache::Session 1.00 takes totaly different arguments then Apache::Session 0.17.
Now you are able to use the %udat and %mdat hashs for your user/module sessions. As long as you don't touch %udat or %mdat Embperl will not create any session, also Apache::Session is loaded. As soon as you store any value to %udat, Embperl will create a new session and send a cookie to the browser to maintain it's id, while the data is stored by Apache::Session. (Further version may also be able to use URL rewriting for storing the id). When you store data to %mdat Embperl will store the data via Apache::Session and retrieves it when the next request comes to the same page.
(Safe-)Namespaces and opcode restrictions then one person, it may be neccessary to really.
Utility Functions
AddCompartment($Name)');
MailFormTo($MailTo, $Subject, $ReturnField) sccesfully sent! </BODY> </HTML>
This will send a mail with all fields of the form to webmaster@domain.xy, with the Subject 'Mail form WWW Form' and will set the Return-Path of the mail to the address which was entered in the field with the name 'email'.
NOTE: You must have Net::SMTP (from the libnet package) installed to use this function.
exit (just like always when running under mod_perl).
Input/Output Functions
ProxyInput ($r, $in, $mtime, $src, $dest) be also possible to use two httpd's on different ports. In this configuration, the source and the URI location could be the same.
LogOutput ($r, $out, $basep)
-> +]
1. Remove the HTML tags. Now it looks like
[+ .
2. Translate HTML escapes to ASCII characters> -] muliple [+ ... +] blocks.
7. Send the return value as output to the destination (browser/file)
Now everything is done and the output can be sent to the browser. If you haven't set dbgEarlyHttpHeaders, the output is buffered until the successful completion of document execution of the document,.
Performance, dbgFlushLog or dbgCacheDisable in a production environment. More debugging options are useful for development where it doesn't matter if the request takes a little bit longer, but on a heavily-loaded server they should be disabled. Addtionaly the options optDisableChdir, optDisableHtmlScan, optDisableCleanup have consequences for the performance.
Also take a look at mod_perl_tuning.pod for general ideas about performance.
Bugs.
Compatibility
I have tested Embperl succesfully
on Linux 2.x with
- perl5.004_04
-
- perl5.005_03
-
- apache_1.2.5
-
- apache_1.2.6
-
- apache_1.3.0
-
- apache_1.3.1
-
- apache_1.3.2
-
- apache_1.3.3
-
- apache_1.3.4
-
- apache_1.3.5
-
- apache_1.3.6
-
- apache_ssl (Ben SSL)
-
- Stronghold 2.2
-
- Stronghold 2.4.1
-
- Apache_1.3.3 with mod_ssl 2.0.13
-
- Apache_1.3.6 with mod_ssl 2.3.5
-
I know from other people that it works on many other UNIX systems
on Windows NT 4.0 with
on Windows 95/98 with
Support
CVS
The lastest developements are available from a CVS. Look at "perldoc CVS.pod" for a detailed description.
Author
G. Richter (richter@dev.ecos.de)
See Also
perl(1), mod_perl, Apache httpd
1 POD Error
The following errors were encountered while parsing the POD:
- Around line 1443:
Non-ASCII character seen before =encoding in '-> '. Assuming ISO8859-1 | http://web-stage.metacpan.org/pod/release/GRICHTER/HTML-Embperl-1.2b11/Embperl.pod | CC-MAIN-2019-39 | refinedweb | 2,685 | 57.98 |
javacgi-document@orbits.com.
The software that I wrote to aid in this is called Java CGI. You can get it from. (The version number may have changed.).
g.
There.
There are currently three main classes supported -- CGI, Email and HTML. I am considering adding classes to deal with MIME-formatted input and output -- MIMEin & MIMEout, respectively.
There are also a few support and test classes.
CGI_Test,
HTML_Test are intended to be used to
test your installation.
They can also be used as a starting-point for your own Java programs
which use this class library.
The
Text class is the superclass for both the
HTML classes.
public class CGI
The CGI class holds the ``CGI Information'' -- Environment variables
set by the web server and the name/value sent from a
form when its submit action is selected.
All information is stored in a
Properties class object.
This class is in the ``Orbits.net'' package.
CGI() // Constructor. getNames() // Get the list of names. getValue() // Get form value by specifying name.
CGI_Test.
Constructs an object which contains the available CGI data.
public CGI()
When a CGI object is constructed, all available CGI information is sucked-up into storage local to the new object.
List the names which are defined to have corresponding values.
public Enumeration getKeys ()
Provides the full list of names for which coresponding values are defined.
An
Enumeration of all the names defined.
Retrieves the value associated with the name specified.
public String getValue ( String name )
This method provides the corespondence between the
names and
values sent from an HTML form.
The key by which values are selected.
A
String containing the value.
This class provides both an example of how to use the
CGI class
and a test program which can be used to confirm that the Java CGI
package is functioning correctly.
main() // Program main().
CGI.
Provide a
main() method.
public static void main( String argv[] )
This is the entry point for a CGI program which does nothing but return a list of the available name/value pairs and their current values.
Arguments passed to the program by
the
java.cgi script.
Currently unused.
public class Email extends Text
Messages are built up with the
Text class
add*() methods
and the e-mail-specific methods added by this class.
When complete, the message is sent to its destination.
This class is in the ``Orbits.net'' package.
Email() // Constructor. send() // Send the e-mail message. sendTo() // Add a destination for message. subject() // Set the Subject: for message.
Constructs an object which will contain an email message.
public Email()
Sets up an empty message to be completed by the Email methods.
Text.
Send the e-mail message.
public void send ()
This formats and sends the message. If no destination address has been set, there is no action taken.
Add a destination for this message.
public String sendTo ( String address )
Add
address to.
A destination to send this message to.
Set the subject for this message.
public void subject ( String subject )
This method sets the text for the e-mail's
Subject:
line.
If called more than once, the latest subject set is the one that is used.
The text of this message's
Subject: line.
This class provides both an example of how to use the
main() // Program main().
Provide a
main() method.
public static void main( String argv[] )
This is the entry point for a CGI program which returns
a list of the available name/value pairs and their current values.
It will also send this list to the address specified in the
Arguments passed to the program by
the
java.cgi script.
Currently unused.
public class HTML extends Text.
HTML_Test, Text.
Constructs an object which will contain an HTML message.
public HTML()
Sets up an empty message to be completed by the HTML methods.
Text.
Set the name of the document author.
public void author ( String author )
Set the name of the document author to
author.
The text to use as the author of this message.
title().
Start a definition list.
public void definitionList ().
definitionListTerm(),
endList(),
listItem().
Add a term to a definition list.
public void definitionListTerm ()
Add a term to a definition list.
The text for the term part of the current list entry should be appended
to the message after this method is called and before a corresponding
listItem method is called.
definitionList(),
listItem().
End a list.
public void endList ()
End a list. This method closes out a list. Note that, currently, lists cannot be nested.
definitionList().
Add an entry to a list.
public void listItem ()
public void listItem ( String item )
public boolean listItem ( String term, String item )
Add an entry to a list.
If the first form is used, the text for the current list item should be
appended to the message after this method is called and before any other
list methods are called.
In the second and third forms, the
item text is specified as a
parameter to the method instead of (or in addition to) being appended to
the message.
The third form is specific to definition lists and provides both the
term and the definition of the list entry.
The text of this list entry.
The text of this definition list entry's term part.
definitionList(),
definitionListTerm(),
endList().
Send the HTML message.
public void send ()
Send the HTML message.
Set the text for the document title.
public void title ( String title )
Set the text for the document title.
The text of this message's title.
author().
This class provides both an example of how to use the
HTML class
and a test program which can be used to confirm that the Java CGI
package is functioning correctly.
main() // Program main().
HTML.
Provide a
main() method.
public static void main( String argv[] )
This is the entry point for a CGI program which returns a list of the available name/value pairs in an HTML document, with each name/value pair displayed in a definition list element.
Arguments passed to the program by
the
java.cgi script.
Currently unused.
public abstract class Text
This class is the superclass of the
HTML
classes.
Messages are built up with the methods in this class and completed and
formatted with the methods in subclasses.
This class is in the ``Orbits.text'' package.
Text() // Constructor. add() // Add text to this object. addLineBreak() // Add a line break. addParagraph() // Add a paragraph break.
HTML.
Add text to this item.
public void add ( char addition )
public void add ( String addition )
public void add ( StringBuffer addition )
Add
addition to the contents of this text item.
Text to be added to the text item.
addLineBreak(),
addParagraph().
Force a line break at this point in the text.
public void addLineBreak ()
Add a line break to the text at the current point.
add(),
addParagraph().
Start a new paragaph.
public void add ()
Start a new paragraph at this point in the text flow.
add(),
addLineBreak().
Used when we know how much space the message will need to have allocated.
Add a list of primary destinations to the e-mail message.
Add a Carbon-Copy destination to the e-mail message.
Add a list of Carbon-Copy destinations to the e-mail message.
Add a Blind Carbon-Copy destination to the e-mail message.
Add a list of Blind Carbon-Copy destinations to the e-mail message.
Used when we know how much space the message will need to have allocated.
Start an unordered list.
Start an ordered list.
Start a directory list.
Start a menu list.
Specify an anchor.
Specify a link.
Specify an applet link.
Makefile.
Testclass, which would use every method in this package.
CGI_Test,
HTML_Testbuild on each other to provide incremental tests for debugging purposes.
Orbits.net.*, the support class
Textis in
Orbits.text.Text.
CGItestto
CGI_Test.
CGIclass and
java.cgihad to be modified.
javacgitest.htmldocument is made part of the distribution.
makeupon installation are provided with names that end with -dist. | http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Java-CGI-HOWTO.html | CC-MAIN-2016-30 | refinedweb | 1,321 | 68.87 |
Back
my_string = "C2H6O" a = re.findall("((Cl|H|O|C|N)[0-9]*)", my_string) print(a)
my_string = "C2H6O"
a = re.findall("((Cl|H|O|C|N)[0-9]*)", my_string)
print(a)
The output is [("C2", "C"), ("H6", "H"), ("O", "O")], but I expected ["C2", "H6", "O"].
I understand how the tuple works, but I feel like this code should not have the second element in the tuple ("C2", "C").
I will suggest some ways to solve this issue:-
For your desired output you will have to remove the extra bracket that you are using:-
import remy_string = "C2H6O"a =re.findall("([Cl|H|O|C|N][0-9]*)", my_string)print(a)
import re
my_string = "C2H6O"
a =re.findall("([Cl|H|O|C|N][0-9]*)", my_string)
You are getting that output because your pattern contains capturing groups.
So a Capturing group(regular expression) means that a part of a pattern can be enclosed in parentheses (...).
If you want to get rid of them, use this pattern:- r"(?:Cl|H|O|C|N)[0-9]*
import remy_string = "C2H6O"a=re.findall(r"(?:Cl|H|O|C|N)[0-9]*", my_string)print(a)
a=re.findall(r"(?:Cl|H|O|C|N)[0-9]*", my_string)
So what it does is it removes the (unneeded) outside capture group completely and uses a non-capturing group for the alpha. | https://intellipaat.com/community/913/why-does-this-code-return-a-tuple-with-2-elements | CC-MAIN-2021-43 | refinedweb | 224 | 66.84 |
You need to look for lines matching a given regex in one or more files.
Write a simple grep-like program.
As I've mentioned, once you have a regex package, you can write a grep-like program. I gave an example of the Unix grep program earlier. grep is called with some optional arguments, followed by one required regular expression pattern, followed by an arbitrary number of filenames. It prints any line that contains the pattern, differing from Recipe 4.5, which prints only the matching text itself. For example:
grep "[dD]arwin" *.txt
searches for lines containing either darwin or Darwin in every line of every file whose name ends in .txt.[3] Example 4-5 is the source for the first version of a program to do this, called Grep0. It reads lines from the standard input and doesn't take any optional arguments, but it handles the full set of regular expressions that the Pattern class implements (it is, therefore, not identical with the Unix programs of the same name). We haven't covered the java.io package for input and output yet (see Chapter 10), but our use of it here is simple enough that you can probably intuit it. The online source includes Grep1, which does the same thing but is better structured (and therefore longer). Later in this chapter, Recipe Recipe 4.12 presents a Grep2 program that uses my GetOpt (see Recipe 2.6) to parse command-line options.
[3] On Unix, the shell or command-line interpreter expands *.txt to match all the filenames, but the normal Java interpreter does this for you on systems where the shell isn't energetic or bright enough to do it.
import java.io.*; import java.util.regex.*; /** Grep0 - Match lines from stdin against the pattern on the command line. */ public class Grep0 { public static void main(String[] args) throws IOException { BufferedReader is = new BufferedReader(new InputStreamReader(System.in)); if (args.length != 1) { System.err.println("Usage: Grep0 pattern"); System.exit(1); } Pattern patt = Pattern.compile(args[0]); Matcher matcher = patt.matcher(""); String line = null; while ((line = is.readLine( )) != null) { matcher.reset(line); if (matcher.find( )) { System.out.println("MATCH: " + line); } } } } | https://flylib.com/books/en/2.213.1.67/1/ | CC-MAIN-2020-50 | refinedweb | 366 | 67.65 |
String Manipulation Interview Questions
- 0of 0 votes
Remove 3 or more consecutive characters from a string, repeat until there are no more.
eg.
ABCCCCBBA => ABBBA => AA
- 0of 0 votes
# Given a set of strings, print them in Lexicographic order (dictionary/alphabetical order)
# Example,
# Input:
# “ABCDEF”, “AA”, “BEF”, “A”, “AABB”
# Output:
# “A”, “AA”, “AABB”, “ABCDEF”, “BEF”
-
- 1of 1 vote
Reverse this string 1+2*3-20. Note: 20 must be retained as is.
Expected output: 20-3*2+1
- 0of 0 votes
Perform left and right shift on string
- 1of 1 vote
Reverse the words in string eg. 'The Sky is Blue'. then print 'Blue is Sky The'.
- 2of 2 votes
You are given set of strings, You have return anagrams subsets from it. An anagram set is that one where every string is an anagram of another string. If the subset contains only one string, don't include that in the result.
- 0of 0 votes
Regex matching algorithms
You will be given a string and a pattern string consisting of only '*','?', and small letters. You have to return tree or false based upon the comparisons.
? repersent one char.
* means zero or n number of char for any positive n.
Example
abc, a?c : true
abc, a*?c : true
abc, * : true
abc, ?c : false
- 2of 2 votes
Given an input string and a dictionary of words, find out if the input string can be segmented into a space-separated sequence of dictionary words.
Ex: "bedbathandbeyond" would be "bed bath and beyond" which are all dictionary words.
- 0of 0 votes
You are given two string (like two statements). You have to remove all the words of second string from first string and print the remaining first string. Please maintain the order of the remaining words from the first string. You will be only removing the first word, not all occurrence of a word.
Example: Str1 = "A Statement is a Statement", Str2 = "Statement a"
Output: "A is Statement"
- 0of 0 votes
You are given an array of words. from each word, you make a chain, in that, you remove one char at a time and you remove that char only when the remaining word is present in the input array.
For Example, if the input is {a, b, ab, ac, aba}
then the possible chains are
from 'a', there is no chain, the chain it 'a' itself (of length 1)
similarly, from 'b', the chain length is 1 one (length is defined by the number of words in the chain)
now from 'ab', there are two possibilities which are ({ab -> b when you remove a},{ab -> a when you remove b}). So the max length is 2 here
now from 'ac', we only have one possibility which is ({ac -> a when we remove c}), because, when we remove 'a', we left with 'c' which is not present in the input.
Now, you have to find the length of the biggest such chain.
Input: array of words
Output: length of the biggest such chain.
- 1of 1 vote
Given an input string "aabbccba", find the shortest substring from the alphabet "abc".
In the above example, there are these substrings "aabbc", "aabbcc", "ccba" and "cba". However the shortest substring that contains all the characters in the alphabet is "cba", so "cba" must be the output.
Output doesnt need to maintain the ordering as in the alphabet.
Other examples:
input = "abbcac", alphabet="abc" Output : shortest substring = "bca".
- -5of 5 votes
na
- 0of 0 votes
Given a string, print out all of the unique characters and the number of times it appeared in the string
- 0of 0 votes
You have a string aaabbdcccccf, transform it the following way => a3b2d1c5f1
ie: aabbaa -> a2b2a2 not a4b2
- 4of 4 votes
You are given a scrambled input sentence. Each word is scrambled independently, and the results are concatenated. So:
'hello to the world'
might become:
'elhloothtedrowl'
You have a dictionary with all words in it. Unscramble the sentence.
- 1of 1 vote
Programming Challenge Description:
Develop a service to help a client quickly find a manager who can resolve the conflict between two employees. When there is a conflict between two employees, the closest common manager should help resolve the conflict. The developers plan to test the service by providing an example reporting hierarchy to enable the identification of the closest common manager for two employees. Your goal is to develop an algorithm for IBM to efficiently perform this task. To keep things simple, they just use a single relationship "isManagerOf" between any two employees. For example, consider a reporting structure represented as a set of triples:
Tom isManagerOf Mary
Mary isManagerOf Bob
Mary isManagerOf Sam
Bob isManagerOf John
Sam isManagerOf Pete
Sam isManagerOf Katie
The manager who should resolve the conflict between Bob and Mary is Tom(Mary's manager). The manager who should resolve the conflict between Pete and Katie is Sam(both employees' manager). The manager who should resolve the conflict between Bob and Pete is Mary(Bob's manager and Pete's manager's manager).
Assumptions:
There will be at least one isManagerOf relationship.
There can be a maximum of 15 team member to a single manager
No cross management would exist i.e., a person can have only one manager
There can be a maximum of 100 levels of manager relationships in the corporation
Input:
R1,R2,R3,R4...Rn,Person1,Person2 R1...Rn - A comma separated list of "isManagerOf" relationships. Each relationship being represented by an arrow "Manager->Person". Person1,Person2 - The name of the two employee that have conflict
Output:
The name of the manager who can resolve the conflict Note: Please be prepared to provide a video follow-up response to describe your approach to this exercise.
Test 1:
Test Input
Frank->Mary,Mary->Sam,Mary->Bob,Sam->Katie,Sam->Pete,Bob->John,Bob,Katie
Expected Output
Mary
Test 2:
Test Input
Sam->Pete,Pete->Nancy,Sam->Katie,Mary->Bob,Frank->Mary,Mary->Sam,Bob->John,Sam,John
Expected Output
Mary
- 3of 3 votes
Given an input string and ordering string, need to return true if the ordering string is present in Input string.
input = "hello world!"
ordering = "hlo!"
result = FALSE (all Ls are not before all Os)
input = "hello world!"
ordering = "!od"
result = FALSE (the input has '!' coming after 'o' and after 'd', but the pattern needs it to come before 'o' and 'd')
input = "hello world!"
ordering = "he!"
result = TRUE
input = "aaaabbbcccc"
ordering = "ac"
result = TRUE
-]
- 1of 1 vote
print all the characters present in the given string only once in a reverse order. Time & Space complexity should not be more than O(N).
e.g.
1)Given a string aabdceaaabbbcd
the output should be - dcbae
2)Sample String - aaaabbcddddccbbdaaeee
Output - eadbc
3)I/P - aaafffcccddaabbeeddhhhaaabbccddaaaa
O/P - adcbhef
Answer :
import java.util.Iterator;
import java.util.LinkedHashSet;
import java.util.Scanner;
import java.util.Set;
public class StringQAmazon {
public static void main(String args[]) {
Scanner sc = new Scanner(System.in);
String inputStr = sc.nextLine();
System.out.println(stringManipulation(inputStr));
}
static String stringManipulation(String str) {
if(str.isEmpty())
return "";
else if(str.length()==1)
return str;
else {
str.toLowerCase();
StringBuilder strBuilder = new StringBuilder();
strBuilder.append(str);
strBuilder.reverse();
Set<Character> set = new LinkedHashSet<Character>();
for(int i =0; i<strBuilder.length(); i++){
set.add(strBuilder.charAt(i));
}
Iterator<Character> iter = set.iterator();
strBuilder=new StringBuilder();
while(iter.hasNext()){
strBuilder.append(iter.next());
}
return strBuilder.toString();
}
//return null;
}
}
- 0of 0 votes]
If the chemical string matches more than one symbol, then choose the one with longest length. (ex. 'Microsoft' matches 'i' and 'cro')
My solution:
(I sorted the symbols array in descending order of length and ran loop over chemicals array to find a symbol match(using indexOf in javascript) which worked. But I din't make it through the interview, I am guessing my solution was O(n2) and they expected an efficient algorithm.
- 3of 3 votes
Given a string e.g. ABCDAABCD. Shuffle he string so that no two smilar letters together.
E.g. AABC can be shuffled as ABAC.
- 0of 0 votes
Given a DNA sequence e.g. AAAGTAAGTAAGTGGG.....
Find all the duplicates with length 10.
- 1of 1 vote
Write 2 functions to serialize and deserialize an array of strings. strings can contain any unicode character. Do not worry about string overflow.
-
- 0of 0 votes
Split the string
example:
String: programmingproblem
Pattern: 6 5 7
separator: ;
Result: progra;mming;problem
exception if the string has less or more characters given in the pattern.
the program will take three inputs from the user, first string, second pattern and third pattern separator
- 0of 0 votes
Given a string and array of strings, find whether the array contains a string with one character difference from the given string. Array may contain string of different lengths.
Ex: Given string
banana
and array is
[bana, apple, banaba, bonanza, banamf]
and the outpost should be true as banana and banaba are one character difference.
- 0of 0 votes
A multiset or a bag is a collection of elements that can be repeated. Contrast with a set, where elements cannot be repeated.
Multisets can be intersected just like sets can be intersected.
Input :
A = [0,1,1,2,2,5]
B = [0,1,2,2,2,6]
Output :
A ∩ B = C = [0,1,2,2]
Input :
A = [0,1,1]
B = [0,1,2,3,4,5,6]
Output
A ∩ B = C = [0,1]
Write a function to find the intersection of two integer arrays in that way ? | https://careercup.com/page?pid=string-manipulation-interview-questions | CC-MAIN-2018-39 | refinedweb | 1,579 | 55.74 |
Submitted by Richard Caudle on Tue, 06/10/2014 - 10:38
How To Update Filters On-The-Fly And Build Dynamic Social Solutions.
Start Consuming V1
Now that we have our stream defined, we can compile the definition and start consuming data. In this example we'll use the Pull destination to get resulting data.
For this example I'll use the Python helper library.):
Start Consuming V2
The next step is to start streaming V2 of the stream in parallel with V.
In Conclusion
And so ends my quick tour. I hope this post illustrates how you can switch to new stream definitions on the fly. This capability is likely to be key to real-world solutions you create, and hopefully inspires you to create some truly responsive applications.
To stay in touch with all the latest developer news please subscribe to our RSS feed at
And, or follow us on Twitter at @DataSiftDev.
Submitted by Jason on Fri, 05/02/2014 - 11:10
Facebook Pages Managed Source Enhancements
Taking into account some great customer feedback, on May 1st, 2014 we released a number of minor changes to our Facebook Pages Managed Source.
Potential Breaking Changes
Facebook Page Like and Comment Counts have been Deprecated
The facebook_page.likes_count and facebook_page.comment_count fields have been deprecated from DataSift's output. We found this data became outdated quickly; a better practice for displaying counts of likes and comments in your application is to count like and comment interactions as you receive them.
Format for facebook_page.message_tags has Changed
facebook_page.message_tags fields were previously in two different formats dependant on whether they came from comments, or posts. This change ensures that all message_tags are provided in a consistent format; as a list of objects. An example of the new consistent format can be seen below:
Please ensure that if your application utilizes these fields, it can handle them as a list of objects.
New Output Fields
We have introduced a number of new output fields in interactions from the Facebook Pages Managed Source. You will be able to filter on many of these fields.
New “Page Like” Interactions
By popular request, we have introduced a new interaction with the subtype “page_like” for anonymous page-level likes.
This should now allow you to track the number of likes for a given page over time.
This subtype has two fields, `current_likes` and `likes_delta`. The first represents the current number of likes for a Facebook Page at the time of retrieval. The second represents the difference with the previously retrieved value. We only generate interactions of this type if `likes_delta` is not zero. Also note that `likes_delta` can be negative, when the number of unlikes is greater than the number of likes between two retrievals.
This interaction type should allow visualizing page likes as a time series. In addition, filters on `likes_delta` could be used to detect trending pages.
‘from' Fields now Include a Username Where Available
Where it is provided to us, .from fields in Facebook Pages interactions now contain a .username field.
Please note that in some cases, this field is not returned by Facebook.
New Comment ‘Parent' Field
Objects of type comment include an optional .parent object, which contains a reference to a parent comment. The object structure is self-similar.
This will allow you to tell whether comments are nested or not, and associate them with a parent comment if so.
New ‘From’ Field in Post Objects
Objects of type comment/like include an additional .from field in their .post context object, which contains information about the author of the post they are referring to.
New CSDL Targets
We have introduced 12 new Facebook Pages targets. This includes targets to allow you to filter on the likes count of a page, the parent post being commented on, a Facebook user's username, and more. These new targets can all be found in our Facebook Pages targets documentation.
Other Changes
New Notifications for Access Token Issues
If a case occurs where all tokens for a given source have permanent errors, the source will become “disabled", and you will receive a notification. You should then update the source with new tokens, and restart it.
Note that every error will also be present in the /source/log for that Managed Source.
Summary of Changes
- facebook_page.likes_count and facebook_page.comment_count fields will be deprecated from DataSift's output
- The facebook_page.message_tags output field format is changing to become a list of objects
- We are introducing a new interaction with the subtype “page_like” for anonymous page-level likes
- .from fields in Facebook Pages interactions now contain a .username field where available
- Comment interactions will now include a parent object, referencing the parent comment
- We are introducing a .from field to Facebook Pages .post objects, containing information about the post author
- We are introducing a number of new CSDL targets for Facebook Pages
- You will receive better notifications about issues with your Facebook Access Tokens
Submitted by Richard Caudle on Wed, 04/30/2014 - 11:56
Platform Updates - Content Age Filtering, Larger Compressed Data Deliveries
This is a quick post to update you on some changes we've introduced recently to help you work with our platform and make your life a little easier.
Filtering On Content Age
We aim to deliver you data as soon as we possibly can, but for some sources there can be a delay between publication to the web and our delivery which is out of our control.
In most cases this does not have an impact, but in some situations (perhaps you only want to display extremely fresh content to a user) this is an issue.
For these sources we have introduced a new target, .age, which allows you to specify the maximum time since the content was posted. For instance if you want to filter on blog posts mentioning 'DataSift', making sure that you only receive posts published within the last hour:
blog.content contains "DataSift" AND blog.age < 3600
This new target applies to the Blog, Board, DailyMotion, IMDB, Reddit, Topix, Video and YouTube sources.
Push Destinations - New Payload Options
Many of our customers are telling us they can take much larger data volumes from our system. We aim to please, so have introduced options to help you get more data quicker.
Increased Payload Sizes
To enable you to receive more data quicker from our push connectors, we have upped the maximum delivery sizes for many of our destinations. See the table below for the new maximum delivery sizes.
Compression Support
As the data we deliver to you is text, compression can be used to greatly reduce the size of files we deliver, making transport far more efficient. Although compression rates do vary, we are typically seeing an 80% reduction in file size with this option enabled.
We have introduced GZip and ZLib compression to our most popular destinations. You can enable compression on a destination by selecting the option in your dashboard, or by specifying the output_param.compression parameter through the API.
When data is delivered you can tell it has been compressed in two ways:
HTTP destination: The HTTP header 'X-DataSift-Compression' will have the value none, zlib or gzip as appropriate
S3, SFTP destinations: Files delivered to your destination will have an addition '.gz' extension is they have been compressed, for example DataSift-xxxxxxxxxxxxxxxxxxx-yyyyyyy.json.gz
Here's a summary of our current push destinations support for these features.
Stay Up-To-Date
To stay in touch with all the latest developer news please subscribe to our RSS feed at
And, or follow us on Twitter at @DataSiftDev
Submitted by Richard Caudle on Fri, 04/25/2014 - 10:44
Announcing Tencent Weibo - Broaden Your Coverage Of Chinese Conversation
In a previous post I discussed how we're broadening our reach to help you get the best out of East Asian sources such as using our Chinese tokenization engine.
To build on this momentum, I'm excited to be able to announce a new data source for Tencent Weibo, another huge Chinese network you'll be eager to get your hands on. Now you can build more comprehensive solutions for the Chinese market with ease.
Tencent Weibo - A Key Piece In Your Chinese Social Jigsaw
China has the most active social network community in the world. With over 600 million Internet users on average spending 40% of their online time using social networks, there's an awful lot of conversation out there which no doubt you'd love to harness.
There are a wide variety of social networks used in China, one of the largest is Tencent Weibo. Tencent Weibo gives great coverage of 3rd and 4th tier cities, essentially emerging markets which already have large populations and are experiencing massive growth. To generate full insights, and generate maximum opportunity from Chinese markets it is essential that you listen to these conversations.
Understanding Tencent Weibo Interactions
Tencent Weibo is modelled largely on Twitter. Just like Twitter users can use up to 140 characters for a post, and can share photos and videos. As a result Tencent Weibo lends itself to similar use cases you may already have set up with Twitter.
We expose as much data as possible to you through targets. A full list of the Tencent Weibo targets can be found in our documentation. Here are a few highlights to get you started though.
Types of Interaction
Tencent also has it's own types of activity which are very similar to Twitter. A 'post' is the equivalent of a tweet, and a 'repost' is the equivalent of retweet.
A reply is slightly different however. If you reply on Twitter, you mention the user you are replying to. On Tencent Weibo when you reply you are actually continuing a specific thread and do not need to mention the user you are replying to.
To distinguish between these types you can use the tencentweibo.type target.
Thread ID
As I mentioned above Tencent Weibo runs a threaded conversation model. You can filter to certain conversations by using the thread ID, exposed by the tencentweibo.thread_id target.
This is very useful because you can for example pick up a first post which discusses a topic you're interested in, then you can make a note of the thread ID and track any replies which follow.
Author's Influence
Frequently you'll want to know a little more about the content's author. Three useful pieces of metadata you can work with are:
- tencentweibo.author.followers_count: The number of followers a user has
- tencentweibo.author.following_count: The number of users the user follows
- tencentweibo.author.statuses_count: The number of posts the user has created
Commonly we use similar features to identify spam on Twitter. For example we might filter out content from users who follow a high number of users, but themselves have few followers, as this is a common signature for a bot.
Tencent In Action
Ok, so you've decided that you want to tap into the world of Tencent Weibo conversation. How does this work in practice? Let's look at a quick example.
A common use of the new data source will be brand monitoring, so let's write some CSDL that picks out well-known brands from Tencent Weibo chatter. For this example I'm going use the targets I discussed above to filter down to influential authors who are posting original content, this will give us the more pertinent data for our use case.
To filter to influential users I can use the tencentweibo.author.followers_count target:
tencentweibo.author.followers_count >= 10000
To filter to original posts (so exclude replies and reposts) I can use the tencentweibo.type target:
tencentweibo.type == "post"
To filter to a list of brands I'm interested in (Coca-Cola, Walmart, etc.):
tencentweibo.text contains_any [language(zh)] "可口可乐, 谷歌, 沃尔玛, 吉列, 亚马逊, 麦当劳, 联合利华, 葛兰素史克, 路虎, 维珍航空"
Trust me for now on the translations! Things will get clearer in a minute.
The expression I've used here uses the tencentweibo.text target, which exposes the text content of the post. Following this I make use of Chinese tokenization, using the [language(zh)] switch as explained in my previous post to ensure accurate matching of my brand names.
My finished filter becomes:
So now I have a stream of original content from influential authors discussing my collection of brands. In just a few minutes and extremely powerful definition.
A Helping Hand From VEDO
Honestly, I struggle when working with Chinese data, because I can't speak a word of Mandarin or Cantonese. (I did once spend a month in China and picked up my Chinese nickname of 'silver dragon', but unfortunately I got no further.) Fortunately I can make use of VEDO tagging to help me understand the data.
I can write a simple tag to pick out each brand mention, for example "Coca-Cola", as follows:
tag.brand "Coca-Cola" { tencentweibo.text contains [language(zh)] "可口可乐" }
Notice that tag.brand is part of VEDO tagging, this declares a namespace for the "Coca-Cola" tag which follows. The braces that follow the tag contain an expression, which if matched for an interaction will cause the tag to be applied to the interaction. When the data arrives at my application the data is tagged with the brand name in English and therefore makes it much easier for me to work with.
Remember that VEDO tags are applied to data that has been first filtered by a filter wrapped in the return clause. In my final definition I'll add a line for each brand.
For a refresher on VEDO, please take a look at my earlier posts.
Putting It All Together
I can put my filter together with my tags by wrapping the filter in a return clause. My completed CSDL is as follows:
Running this stream in preview you can see that conversation on Tencent Weibo is being nicely categorised so it can be much more easily understood.
Over To You...
This concludes my whirlwind introduction to Tencent Weibo. Technology aside, it's definitely worth emphasising again that Tencent Weibo is a vital source if you want to maximise opportunities in Chinese marketplaces.
For a full reference on Tencent Weibo targets, please see our technical documentation.
To stay in touch with all the latest developer news please subscribe to our RSS feed at.
Submitted by Richard Caudle on Wed, 03/26/2014 - 12:29
Chinese Tokenization - Generate Accurate Insight From Chinese Sources Including Sina Weibo
Pages
| http://dev.datasift.com/blog | CC-MAIN-2015-06 | refinedweb | 2,428 | 54.42 |
This is the mail archive of the archer@sourceware.org mailing list for the Archer project.
Hi Sami, sending a followup with patch I got. Regards, Jan
--- Begin Message ---
- From: Michael Matz <matz at suse dot de>
- To: Jan Kratochvil <jan dot kratochvil at redhat dot com>
- Cc: Richard Guenther <rguenther at suse dot de>
- Date: Wed, 13 May 2009 16:44:53 +0200 (CEST)
- Subject: Support bogus global namespace emitted by g++ 4.1Hi, (it's my understanding that Richard already mentioned this, but anyway) due to some old G++ emit an explicit namespace declaration for the global namespace for some decls. This is bogus and indeed confuses gdb (not the current CVS it seems, but the archer branch at least). Unfortunately we now have to deal with this in gdb as the bogus debug info is already out there. So, here's a patch for that. I saw two options: (1) hacking determine_prefix to return "" when it was just about to return "::" for namespaces. (2) not even linking that bogus namespace DIE into its children It seems to me that option (2) is slightly more clear, so that's what I've chosen. It works with the simple testcases I have using global decls. Let me know what you think. Ciao, Michael. -- Index: gdb-6.8.50.20090302/gdb/dwarf2read.c =================================================================== --- gdb-6.8.50.20090302.orig/gdb/dwarf2read.c 2009-05-13 15:53:22.000000000 +0200 +++ gdb-6.8.50.20090302/gdb/dwarf2read.c 2009-05-13 16:40:44.000000000 +0200 @@ -6540,6 +6540,13 @@ load_partial_dies (bfd *abfd, gdb_byte * /* We'll save this DIE so link it in. */ part_die->die_parent = parent_die; + /* Special hack for bogus global namespace that is emitted as an + explicit namespace with the name '::' in g++ 4.1, for some decls. */ + if (parent_die && parent_die->name && parent_die->die_parent == NULL + && parent_die->name[0] == ':' + && parent_die->name[1] == ':' + && parent_die->name[2] == 0) + part_die->die_parent = NULL; part_die->die_sibling = NULL; part_die->die_child = NULL;
--- End Message --- | https://www.sourceware.org/ml/archer/2009-q2/msg00089.html | CC-MAIN-2017-51 | refinedweb | 327 | 65.52 |
>
Might sound petty of me, but I struggle with C# naming convention. It is fine when I only have a private member and a public accessor as follow:
private int size;
public int Size { get { return size;} }
but it gets confusing when you use an enum. Clearly, I can't name my accessor the same as my enum :
public enum Size { Tiny = 0, Small = 1, Medium = 2, Large = 4}
private Size size;
public Size Size { get { return size;} }
By convention I shouldn't use plural enum names unless they are bitwise which it isn't. The following could solve it, but I doubt that's coding friendly when it is only a simple getter.
public Size GetSize(){ return size; }
Is there other alternatives?
Answer by Jeshira
·
Jan 25, 2018 at 06:14 PM
After a bit more research, I managed to find a bit of counter-intuitive information. The suggested structure is to pull the enum out of my class :
public class Item {
Size size;
public Size Size { get { return size; } }
}
public enum Size {
tiny = 0,
small = 1,
medium = 2,
large = 4
}
This way you are not conflicting with naming convention. Problem I see with this is you must not have more than one Size enum in the namespace.
public class Weapon : Item {
Type type;
public Type Type { get { return type; } }
}
public class Armor : Item {
Type type;
public Type Type { get { return type; } }
}
We'll agree armor and weapon can't have the same type. Can't nest enum due to original issue to not conflict with naming convention. So only solution left would be to rename Type into ArmorType and WeaponType, and if we are renaming them it won't conflict with accessor name right? :
public class Weapon : Item {
public enum WeaponType {
ranged,
melee
}
WeaponType type;
public WeaponType Type { get { return type; } }
}
But how fun will it be to type :
Weapon.WeaponType type = Weapon.WeaponType.ranged;
So no nesting :
public class Weapon : Item {
WeaponType type;
public WeaponType Type { get { return type; } }
}
public enum WeaponType {
ranged,
melee
}
Or you can look at it the way that you changed the name from Size to ItemSize.
In general, I'm avoiding giving too generic names, even if being within a class makes it clearer, but in the end it is nicer to have the enum independently of the class it will mostly be used, or even in a separate .cs file.
Size
ItemSize
I agree. Why I say it is counter-intuitive is that it would make sense to have Item.Size.tiny, and as you say best is to avoid generic names, but Size or Type are and more are difficult to avoid without making it confusing with synonyms.
Item.Size.tiny
Absolutely, that's what I always tried to go for! But by now I've ran so many times into the problem of the getter being named the same as the enum type, that I don't do it anymore. Well, that's C#...
I also tend to keep my namespaces (especially the global one) as clean as possible. However you should only nest types if they really only have a meaning to the outer class. If you may use the nested type elsewhere / outside the outer class it's better to declare it seperately. By convention it's actually recommended to use a seperate file which makes it easier to find it in the folder structure.
Answer by unity_3-HQNOBbN6qQng
·
Jan 25, 2018 at 06:08 PM
put a underscore _ before your private filed. it's very common in C#. like this:
public enum Size { Tiny = 0, Small = 1, Medium = 2, Large = 4}
private Size _size;
public Size Size { get { return _size;} }
Adding an underscore changes nothing to the problem unless you plan to rename accessor with lowercase, but doing so you conflict with naming convention of accessors.
Right, the problem is not the private field but the fact that the class has two "things" named "Size", one nested type and one2 People are following this question.
I can access the field but it wont update the value?
3
Answers
how to save time and take the next variable in enum?
1
Answer
My public enum isn't functioning like I want it to
0
Answers
Enum comparison C#
1
Answer
C# missingReferenceException when comparing enums
0
Answers | https://answers.unity.com/questions/1459815/c-naming-related-enum-private-member-and-public-ac.html | CC-MAIN-2019-26 | refinedweb | 721 | 66.47 |
{-# LANGUAGE CPP #-}
#ifdef __HASTE__
import Haste.DOM
import Haste.Events
#else
import System.Console.Readline
#endif
import Control.Arrow
import Control.Monad
import Data.Char
import Data.Function
import Data.List
import Data.Tuple
import Text.ParserCombinators.Parsec

data Kind = Star | Kind :=> Kind deriving Eq

data Type = TV String
          | Forall (String, Kind) Type
          | Type :-> Type
          | OLam (String, Kind) Type
          | OApp Type Type

data Term = Var String
          | App Term Term
          | Lam (String, Type) Term
          | Let String Term Term
          | TLam (String, Kind) Term
          | TApp Term Type

instance Show Kind where
  show Star = "*"
  show (a :=> b) = showA ++ "->" ++ show b where
    showA = case a of
      _ :=> _ -> "(" ++ show a ++ ")"
      _       -> show a

showK Star = ""
showK k    = "::" ++ show k

instance Show Type where
  show ty = case ty of
    TV s -> s
    Forall (s, k) t -> '\8704':s ++ showK k ++ "." ++ show t
    t :-> u -> showL ++ " -> " ++ showR where
      showL = case t of
        Forall _ _ -> "(" ++ show t ++ ")"
        _ :-> _    -> "(" ++ show t ++ ")"
        _          -> show t
      showR = case u of
        Forall _ _ -> "(" ++ show u ++ ")"
        _          -> show u
    OLam (s, k) t -> '\0955':s ++ showK k ++ "." ++ show t
    OApp t u -> showL ++ showR where
      showL = case t of
        TV _     -> show t
        OApp _ _ -> show t
        _        -> "(" ++ show t ++ ")"
      showR = case u of
        TV _ -> ' ':show u
        _    -> "(" ++ show u ++ ")"

instance Show Term where
  show (Lam (x, t) y) = "\0955" ++ x ++ showT t ++ showB y where
    showB (Lam (x, t) y) = " " ++ x ++ showT t ++ showB y
    showB expr           = '.':show expr
    showT (TV "_") = ""
    showT t        = ':':show t
  show (TLam (s, k) t) = "\0955" ++ s ++ showK k ++ showB t where
    showB (TLam (s, k) t) = " " ++ s ++ showK k ++ showB t
    showB expr            = '.':show expr
  show (Var s)   = s
  show (App x y) = showL x ++ showR y where
    showL (Lam _ _) = "(" ++ show x ++ ")"
    showL _         = show x
    showR (Var s)   = ' ':s
    showR _         = "(" ++ show y ++ ")"
  show (TApp x y) = showL x ++ "[" ++ show y ++ "]" where
    showL (Lam _ _) = "(" ++ show x ++ ")"
    showL _         = show x
  show (Let x y z) = "let " ++ x ++ " = " ++ show y ++ " in " ++ show z

instance Eq Type where
  t1 == t2 = f [] t1 t2 where
    f alpha (TV s) (TV t)
      | Just t' <- lookup s alpha           = t' == t
      | Just _ <- lookup t (swap <$> alpha) = False
      | otherwise                           = s == t
    f alpha (Forall (s, ks) x) (Forall (t, kt) y)
      | ks /= kt  = False
      | s == t    = f alpha x y
      | otherwise = f ((s, t):alpha) x y
    f alpha (a :-> b) (c :-> d) = f alpha a c && f alpha b d
    f alpha _ _ = False
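To see the pretty-printers above in action, here is a small example of our own (not part of the original program): the polymorphic identity type, whose base kinding is left unprinted by showK:

idType :: Type
idType = Forall ("X", Star) $ TV "X" :-> TV "X"
-- show idType == "∀X.X -> X"   (the ::* kinding of X is not printed)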
Type operators
In Haskell, Map Integer String describes a map of integers to strings. Thus Map is an example of a type operator, because it takes 2 types and returns a type.
GHC has a syntactic-sugar extension called “type operators”. We use the term differently; for us, a type operator is a type-level function.
We introduce simply-typed lambda calculus at the level of types. We have operator abstractions and operator applications. We say kind for the type of a type-level lambda expression, and define the base kind * for proper types, that is, the types of (term-level) lambda expressions.
For example, the Map type constructor has kind * -> * -> *. No term has type Map. The Integer and String types both have kind *, so Map Integer String has kind * and it is therefore a proper type. Another example of a proper type is (String -> Int) -> String.
Definitions
Our Type and Term data types both have their own variables, abstractions, and applications. The new Kind data type holds typing information for Type values, and as before, Type holds typing information for Term values.
Because we’re extending System F, we also have Forall, TLam, and TApp for functions that take types and return terms; without these, we obtain a system known as \(\lambda\underline{\omega}\). [I don’t know much about \(\lambda\underline{\omega}\), but because types and terms undergo beta reduction in their own separate worlds, I sense it’s only a minor upgrade for simply-typed lambda calculus.]
The kinding ::* is common, so we elide it.
Parsing
With 3 different abstractions, we must tread carefully. Different conventions exist for denoting them:
We use the notation in first column to avoid the uppercase lambda.
Writing \x:X y. was previously equivalent to \x:X.\y. but now X y is parsed as an operator application. One solution is write more lambdas.
We add the typo expression, which is a type-level let expression.
data FOmegaLine = Empty | Typo String Type | TopLet String Term | Run Term deriving Show line :: Parser FOmegaLine line = (((eof >>) . pure) =<<) . (ws >>) $ option Empty $ typo <|> (try $ TopLet <$> v <*> (str "=" >> term)) <|> (Run <$> term) where typo = Typo <$> between (str "typo") (str "=") v <*> typ term = letx <|> lam <|> app letx = Let <$> (str "let" >> v) <*> (str "=" >> term) <*> (str "in" >> term) lam0 = str "\\" <|> str "\0955" lam1 = str "." lam = flip (foldr ($)) <$> between lam0 lam1 (many1 bind) <*> term where bind = (&) <$> v <*> option (\s -> TLam (s, Star)) ( (str "::" >> (\k s -> TLam (s, k)) <$> kin) <|> (str ":" >> (\t s -> Lam (s, t)) <$> typ)) typ = olam <|> fun olam = flip (foldr OLam) <$> between lam0 lam1 (many1 vk) <*> typ fun = oapp `chainr1` (str "->" >> pure (:->)) oapp = foldl1' OApp <$> many1 (forallt <|> (TV <$> v) <|> between (str "(") (str ")") typ) forallt = flip (foldr Forall) <$> between fa0 fa1 (many1 vk) <*> typ where fa0 = str "forall" <|> str "\8704" fa1 = str "." vk = (,) <$> v <*> option Star (str "::" >> kin) kin = ((str "*" >> pure Star) <|> between (str "(") (str ")") kin) `chainr1` (str "->" >> pure (:=>)) app = termArg >>= moreArg termArg = (Var <$> v) <|> between (str "(") (str ")") term moreArg t = option t $ ((App t <$> termArg) <|> (TApp t <$> between (str "[") (str "]") typ)) >>= moreArg v = try $ do s <- many1 alphaNum when (s `elem` words "let in forall typo") $ fail "unexpected keyword" ws pure s str = try . (>> ws) . string ws = spaces >> optional (try $ string "--" >> many anyChar)
Type-level lambda calculus
In System F, for type-checking, we needed a beta-reduction which substitued a given type variable with a given type value.
This time, this routine is used to build a type-level evaluation function that returns the weak head normal form of a type expression, which in turn is used to compute its normal form.
newName x ys = head $ filter (`notElem` ys) $ (s ++) . show <$> [1..] where s = dropWhileEnd isDigit x tBeta (s, a) t = rec t where rec (TV v) | s == v = a | otherwise = TV v rec (Forall (u, k) v) | s == u = Forall (u, k) v | u `elem` fvs = let u1 = newName u fvs in Forall (u1, k) $ rec $ tRename u u1 v | otherwise = Forall (u, k) $ rec v rec (m :-> n) = rec m :-> rec n rec (OLam (u, ku) v) | s == u = OLam (u, ku) v | u `elem` fvs = let u1 = newName u fvs in OLam (u1, ku) $ rec $ tRename u u1 v | otherwise = OLam (u, ku) $ rec v rec (OApp m n) = OApp (rec m) (rec n) fvs = tfv [] a tEval env (OApp m a) = let m' = tEval env m in case m' of OLam (s, _) f -> tEval env $ tBeta (s, a) f where _ -> OApp m' a tEval env term@(TV v) | Just x <- lookup v (fst env) = case x of TV _ -> x _ -> tEval env x tEval _ ty = ty tNorm env ty = case tEval env ty of TV _ -> ty m :-> n -> rec m :-> rec n Forall sk t -> Forall sk (rec t) OApp m n -> OApp (rec m) (rec n) OLam sk t -> OLam sk (rec t) where rec = tNorm env tfv vs (TV s) | s `elem` vs = [] | otherwise = [s] tfv vs (x :-> y) = tfv vs x `union` tfv vs y tfv vs (Forall (s, _) t) = tfv (s:vs) t tfv vs (OLam (s, _) t) = tfv (s:vs) t tfv vs (OApp x y) = tfv vs x `union` tfv vs y tRename x x1 ty = case ty of TV s | s == x -> TV x1 | otherwise -> ty Forall (s, k) t | s == x -> ty | otherwise -> Forall (s, k) (rec t) OLam (s, k) t | s == x -> ty | otherwise -> OLam (s, k) (rec t) a :-> b -> rec a :-> rec b OApp a b -> OApp (rec a) (rec b) where rec = tRename x x1
Kind checking
We require type lambda expressions to be well-kinded to guarantee strong normalization. Much of the code is similar to type checking for simply typed lambda calculus. A few checks verify that proper types have base type *.
kindOf :: ([(String, Type)], [(String, Kind)]) -> Type -> Either String Kind kindOf gamma t = case t of TV s | Just k <- lookup s (snd gamma) -> pure k | otherwise -> Left $ "undefined " ++ s t :-> u -> do kt <- kindOf gamma t when (kt /= Star) $ Left $ "Arr left: " ++ show t ku <- kindOf gamma u when (ku /= Star) $ Left $ "Arr right: " ++ show u pure Star Forall (s, k) t -> do k' <- kindOf (second ((s, k):) gamma) t when (k' /= Star) $ Left $ "Forall: " ++ show k' pure Star OApp t u -> do kt <- kindOf gamma t ku <- kindOf gamma u case kt of kx :=> ky -> if ku /= kx then Left ("OApp " ++ show ku ++ " /= " ++ show kx) else pure ky _ -> Left $ "OApp left " ++ show t OLam (s, k) t -> (k :=>) <$> kindOf (second ((s, k):) gamma) t
Type checking
For App and TApp, we find the weak head normal form of the first argument to check it is a suitable abstraction. In the case of App, we compare the normal form of the type of the abstraction binding against the normal form of the type of the second argument to check that the application can proceed.
typeOf :: ([(String, Type)], [(String, Kind)]) -> Term -> Either String Type typeOf gamma t = case t of Var s | Just t <- lookup s (fst gamma) -> pure t | otherwise -> Left $ "undefined " ++ s App x y -> do tx <- rec x ty <- rec y case tEval gamma tx of ty' :-> tz | tNorm gamma ty == tNorm gamma ty' -> pure tz _ -> Left $ "App: " ++ show tx ++ " to " ++ show ty Lam (x, t) y -> do k <- kindOf gamma t if k == Star then (t :->) <$> typeOf (first ((x, t):) gamma) y else Left $ "Lam: " ++ show t ++ " has kind " ++ show k TLam (s, k) t -> Forall (s, k) <$> typeOf (second ((s, k):) gamma) t TApp x y -> do tx <- tEval gamma <$> rec x case tx of Forall (s, k) t -> do k' <- kindOf gamma y when (k /= k') $ Left $ "TApp: " ++ show k ++ " /= " ++ show k' pure $ tBeta (s, y) t _ -> Left $ "TApp " ++ show tx Let s t u -> do tt <- rec t typeOf (first ((s, tt):) gamma) u where rec = typeOf gamma
Evaluation
We again erase types as we lazily evaluate a given term.
Because this system is getting complex, it may be better to treat type substitutions as part of the computation to verify our code works as intended. For now, we leave this as an exercise.
eval env (Let x y z) = eval env $ beta (x, y) z eval env (App m a) = let m' = eval env m in case m' of Lam (v, _) f -> eval env $ beta (v, a) f _ -> App m' a eval env (TApp m _) = eval env m eval env (TLam _ t) = eval env t eval env term@(Var v) | Just x <- lookup v (fst env) = case x of Var v' | v == v' -> x _ -> eval env x eval _ term = term beta (v, a) f = case f of Var s | s == v -> a | otherwise -> Var s Lam (s, _) m | s == v -> Lam (s, TV "_") m | s `elem` fvs -> let s1 = newName s fvs in Lam (s1, TV "_") $ rec $ rename s s1 m | otherwise -> Lam (s, TV "_") (rec m) App m n -> App (rec m) (rec n) TLam s t -> TLam s (rec t) TApp t ty -> TApp (rec t) ty Let x y z -> Let x (rec y) (rec z) where fvs = fv [] a rec = beta (v, a) fv vs (Var s) | s `elem` vs = [] | otherwise = [s] fv vs (Lam (s, _) f) = fv (s:vs) f fv vs (App x y) = fv vs x `union` fv vs y fv vs (Let _ x y) = fv vs x `union` fv vs y fv vs (TLam _ t) = fv vs t fv vs (TApp x _) = fv vs x rename x x1 term = case term of Var s | s == x -> Var x1 | otherwise -> term Lam (s, t) b | s == x -> term | otherwise -> Lam (s, t) (rec b) App a b -> App (rec a) (rec b) Let a b c -> Let a (rec b) (rec c) TLam s t -> TLam s (rec t) TApp a b -> TApp (rec a) b where rec = rename x x1 norm env@(lets, gamma) term = case eval env term of Var v -> Var v -- Record abstraction variable to avoid clashing with let definitions. Lam (v, _) m -> Lam (v, TV "_") (norm ((v, Var v):lets, gamma) m) App m n -> App (rec m) (rec n) Let x y z -> Let x (rec y) (rec z) TApp m _ -> rec m TLam _ t -> rec t where rec = norm env
User Interface
Our user interface code grows uglier still, because to support let expressions, we now must maintain three association lists in the environment: one for terms, one for types, and one for kinds.
#ifdef __HASTE__ main = withElems ["input", "output", "evalB", "resetB", "resetP", "churchB", "churchP"] $ \[iEl, oEl, evalB, resetB, resetP, churchB, churchP] -> do let reset = getProp resetP "value" >>= setProp iEl "value" >> setProp oEl "value" "" run (out, env) (Left err) = (out ++ "parse error: " ++ show err ++ "\n", env) run (out, env@(lets, types, kinds)) (Right m) = case m of Empty -> (out, env) Run term -> case typeOf (types, kinds) term of Left msg -> (out ++ "type error: " ++ msg ++ "\n", env) Right t -> (out ++ show (norm (lets, types) term) ++ "\n", env) Typo s typo -> case kindOf (types, kinds) typo of Left msg -> (out ++ "kind error: " ++ msg ++ "\n", env) Right k -> (out ++ "[" ++ show (tNorm (types, kinds) typo) ++ " : " ++ show k ++ "]\n", (lets, (s, typo):types, (s, k):kinds)) TopLet s term -> case typeOf (types, kinds) term of Left msg -> (out ++ "type error: " ++ msg ++ "\n", env) Right t -> (out ++ "[" ++ s ++ ":" ++ show t ++ "]\n", ((s, term):lets, (s, t):types, kinds)) reset resetB `onEvent` Click $ const reset churchB `onEvent` Click $ const $ getProp churchP "value" >>= setProp iEl "value" >> setProp oEl "value" "" evalB `onEvent` Click $ const $ do es <- map (parse line "") . lines <$> getProp iEl "value" setProp oEl "value" $ fst $ foldl' run ("", ([], [], [])) es #else repl env@(lets, types, kinds) = do let redo = repl env ms <- readline "> " case ms of Nothing -> putStrLn "" Just s -> do addHistory s case parse line "" s of Left err -> do putStrLn $ "parse error: " ++ show err redo Right Empty -> redo Right (Run term) -> case typeOf (types, kinds) term of Left msg -> putStrLn ("type error: " ++ msg) >> redo Right ty -> do putStrLn $ "[type = " ++ show ty ++ "]" print $ norm (lets, types) term redo Right (Typo s typo) -> case kindOf (types, kinds) typo of Right k -> do putStrLn $ "[" ++ show (tNorm (types, kinds) typo) ++ " : " ++ show k ++ "]" repl (lets, (s, typo):types, (s, k):kinds) Left m -> do putStrLn m redo Right (TopLet s term) -> case typeOf (types, kinds) term of Left msg -> putStrLn ("type error: " ++ msg) >> redo Right t -> do putStrLn $ "[type = " ++ show t ++ "]" repl ((s, term):lets, (s, t):types, kinds) main = repl ([], [], []) #endif
Applications
Type operators make System F less unbearable, though in our example the savings are miniscule. We do get to write List X once, which is nice.
Brown and Palsberg describe a representation of System Fω terms which powers a self-interpreter and more, though still stops short of a self-reducer.
Haskell’s type constructors are a restricted form of type operators. In practice, the full power of type operators is rarely needed, so we limit them to simplify type checking.
Above, we saw 3 sorts of abstraction. We’re only missing a way of feeding a term to a function and getting a type, namely dependent types. We can add these while still preserving decidable type checking and strong normalization.
However, real programming languages often support unrestricted recursion and hence it is undecidable whether a term normalizes. Adding dependent types to such a language would lead to undecidable type checking. System Fω is about as far as we can go if we want unrestricted recursion and decidable type checking. | http://crypto.stanford.edu/~blynn/lambda/typo.html | CC-MAIN-2017-43 | refinedweb | 2,667 | 56.76 |
Now that d3 is in a stable state and has achieved near-ubiquity amongst data visualization people, gears shift to document: how selections work, via reimplementation, presentations, and so on.
Most of this focuses on the ‘hard part’ of d3, which is the concept of the selection and data join. This is about what you might call an easier part, or an even-lower-level one: SVG.
You can use d3 without SVG - you can use it with Canvas or just with HTML. But SVG’s the most popular output, since it works well with d3’s idea of selections and can express most kinds of graphics.
SVG, short for Scalable Vector Graphics, is a standard for vector drawing that integrates with HTML and is implemented by most browsers. Unlike Canvas, it’s not raster-based but rather preserves the structure of your drawing and is much like HTML in terms of events, updates, and being-a-tree. Unlike Canvas, it’s old, big, and complex.
Just like math, we tend to use a subset of SVG for most drawings: here’s that.
There are a few basic shapes available in SVG:
<circle> is a circle: you’ll always want to set its attribute
r for radius.
Circles are positioned with
cx and
cy, or with transforms.
<rect> is a rectangle: it requires a
width and
height attribute to show up,
and is positioned with
x and
y attributes, or transforms.
<path> is the most versatile kind of shape: filled, it can look like a polygon, unfilled
it can look like a line. You can shape it into a circle or a rectangle, or use
it as the path for text to shape around. Even though SVG also has a
<polygon> and
<polyline> element, most of the maps
and drawings you see in d3 are made of paths, since paths can express all
of the same shapes.
dAttribute
The path data attribute is a source of much confusion for new users of
d3, since it’s confusing. That is, for the purposes of efficiency, it’s very
concise and has optional syntax: sometimes coordinates are separated with
,
but they don’t have to be, and saying
L10 10 L10 10 is equivalent to saying
L10 10 10 10.
The basic parts of path data are commands, like
L, which say ‘draw
a line from here to there’ or ‘start a new line’ or ‘draw a squiggly line’
or ‘close this line by adding a segment back to the first point’.
Then there are coordinates, like
10 10, which are X and Y positions
of places to go. Path data is just coordinates and commands, strung together
to tell lines where to start, go, and end.
This railroad diagram may or may not bring you some sense of higher enlightenment about path data.
You may be familar with CSS Transforms - they’re an efficient way to move around HTML elements without causing browser reflows, and with more flexibility than usual old CSS positioning.
SVG has its own transforms, with a very similar syntax, which you can use to position any element. And unlike HTML, there are no ‘float’ or ‘inline’ ideas in SVG - the position of each element doesn’t affect the position of elements outside of its subtree.
On the downside, SVG transforms are sometimes not 3D accelerated when HTML transforms are - so they aren’t necessarily fast.
There are also has positioning attributes -
you can set the position of a
<rect> with
x and
y attributes, and of a circle
with
cx and
cy. Sometimes these are useful - especially with text,
when you can set position relative to the baseline with positioning attributes
and then positioning in the page with a
transform.
<g>element
A bit odd from the perspective of HTML is the
<g> element, but it’s incredibly
useful, for a variety of reasons.
The
g element is a group: meaning, you can put elements into
g and those
elements are transformed when
g is transformed, they’re removed when it’s
removed, and so on.
SVG currently doesn’t support z-indexes order of elements on a page is their
order in rendering. Thus, it makes sense to have logical groups of elements
in the page - for instance, the iD editor uses
several
<g> elements to stack roads, buildings, and rivers on top of
each other in the right order.
SVG also doesn’t use nesting as much as HTML, so
<g> is used
to group items that are really just stacked on top of each other, like
if you need a multi-circle stack for a bullseye.
The
<g> element is often used in a very similar fashion to Canvas’s idea of transformations -
by using a
<g>, you can shift the contents of a graph so that there’s a margin
around its boundary, or scale groups of shapes all at once.
Finally,
<g> is really useful for d3 because it provides a DOM way of
expressing subselections - you can use a
<g> element for each series of a
multiple series chart or for each chapter of a book, and easily bind the group
of data to the
<g> and the subsets to its contents.
SVG can be styled with CSS, but uses different properties to do roughly equivalent
styling. It’s basically more unified - to set the fill of a circle, you set
its
fill, and the same thing to set the fill of text or a path.
/* css for HTML */ #foo { background: #eee; border: 1px solid #000; } /* css for SVG */ #foo { fill: #eee; stroke: #000; stroke-width: 1; }
d3 protects us from the oddity of interacting with SVG elements through some built-in magic. If you used d3 in the olden days, you would see code like
<!-- why do we have to write svg: --> svg.append("svg:text") <!-- instead of just --> svg.append("text")
Herein lies a little detail: HTML and SVG use XML Namespaces
so that both SVG and HTML can have tags named
text without a conflict.
SVG uses the namespace, so instead of simply
calling
document.createElement, you’ll need to use its cousin
document.createElementNS.
To create an SVG element with a circle of radius 5, you would write:
A little dirty, right? Anyway, d3 simplifies this since 2.6
by automatically using the namespace to create
the
<svg> element, and then selections
automatically use that namespace for each new insertion. | https://macwright.org/2013/06/25/just-enough-svg.html | CC-MAIN-2018-09 | refinedweb | 1,073 | 65.56 |
- milestone: --> Trunk
- assigned_to: nobody --> kimmov
WinMerge returns "1" (instead of "0" = "OK") upon closing, if files compared were read-only. (Files were not modified - just compared.)
The issue does not occure if the cursor was not placed in the file text body (either manually or by going through differences).
I do not see any reason why WinMerge should return anything but "successful" on file comparison. The exception should only be technical problems, but none of the user actions that the user is aware of (e.g. purposly not saving changes, etc.).
Logged In: YES
user_id=631874
Originator: NO
Thanks for reporting this. It is not actually about files being read-only. WinMerge simply tries to return compare status.
This "feature" was added in back for selftests or some such and unfortunately nobody really thought about it.
I'll fix this for next stable release to return 0 unless there is an error.
Logged In: YES
user_id=631874
Originator: NO
I'll fix this bug with this simple patch which makes WinMerge always to return 0.
So the approach I'll take is start with always returning zero. And add possible error returns when we really need those. At the moment I don't know situation where we must return something other than 0. Some merge scripts etc probably would want return value if the merge succeeded, but that is not easy as user can open another files or folders and close original files so we don't even know which file's return value we should return (ones given from command-line? last opened?...)
--- Merge.cpp (revision 4956)
+++ Merge.cpp (working copy)
@@ -447,7 +447,7 @@
charsets_cleanup();
delete m_mainThreadScripts;
CWinApp::ExitInstance();
- return m_nLastCompareResult;
+ return 0;
}
static void AddEnglishResourceHook()
Logged In: YES
user_id=631874
Originator: NO
Committed to SVN trunk:
Completed: At revision: 4970
Closing as fixed.
Log in to post a comment. | https://sourceforge.net/p/winmerge/bugs/1626/ | CC-MAIN-2018-26 | refinedweb | 312 | 64.2 |
Inheritance and abstraction
What's inheritance?
Classes in Java code exist in hierarchies. Classes above a given class in a hierarchy are superclasses of that class. That particular class is a subclass of every class higher up. A subclass inherits from its superclasses. The class
Object is at the top of every class's hierarchy. In other words, every class is a subclass of (and inherits from)
Object.
For example, suppose we have an
Adult class that looks like this:
public class Adult {
    protected int age = 0;
    protected String firstname = "firstname";
    protected String lastname = "lastname";
    protected String gender = "MALE";
    protected int progress = 0;

    public Adult() {
    }

    public void move() {
        System.out.println("Moved.");
    }

    public void talk() {
        System.out.println("Spoke.");
    }
}
Our
Adult class implicitly inherits from
Object. That's assumed for any class, so you don't have to type
extends Object in the class definition. But what does it mean to say that our class inherits from its superclass(es)? It simply means that
Adult has access to the exposed variables and methods in its superclasses. In this case, it means that
Adult can see and use the following from any of its superclasses (we have only one at the moment):
- public methods and variables
- protected methods and variables
- Package-protected methods and variables (that is, those without an access specifier), if the superclass is in the same package as Adult
Constructors are special. They aren't full-fledged OO members, so they aren't inherited.
If a subclass overrides a method or variable from the superclass -- if the subclass implements a member with the same name, in other words -- that hides the superclass's member. To be accurate, overriding a variable hides it, and overriding a method simply overrides it, but the effect is the same: The overridden member is essentially hidden. You can still get to the members of the superclass by using the
super keyword:
super.hiddenMemberName
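Here's a minimal sketch of hiding and super access; these two classes are hypothetical, not part of the tutorial's hierarchy:

class Animal {
    protected String name = "generic animal";

    public void speak() {
        System.out.println("Some sound.");
    }
}

class Dog extends Animal {
    protected String name = "dog";   // hides Animal's name

    public void speak() {            // overrides Animal's speak()
        super.speak();               // the hidden method is still reachable
        // Plain name reads Dog's field; super.name reads the hidden one.
        System.out.println("Woof! I am a " + name + ", not a " + super.name + ".");
    }
}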
In the case of
Adult, all it inherits at the moment are the methods on
Object (
toString(), for example). So, the following code snippet is perfectly acceptable:
Adult anAdult = new Adult(); anAdult.toString();
The
toString() method doesn't exist explicitly on
Adult, but
Adult inherits it.
There are "gotchas" here that you should keep in mind. First, it's very easy to give variables and methods in a subclass the same name as variables and methods in that class's superclass, then get confused when you can't call an inherited method. Remember, when you give a method the same name in a subclass as one that already exists in a superclass, you've hidden it. Second, constructors aren't inherited, but they are called. There is an implicit call to the superclass constructor in any subclass constructor you write, and it's the first thing that the subclass constructor does. You must live with this; there's nothing you can do to change it. For example, our
Adult constructor actually looks like this at runtime, even though we didn't type anything in the body:
public Adult() { super(); }
That line in the constructor body calls the no-argument constructor on the superclass. In this case, that's the constructor of
Object.
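One consequence worth knowing: if the superclass defines only constructors that take arguments, the implicit super() call won't compile, and the subclass must call a superclass constructor explicitly. A quick hypothetical sketch:

class Named {
    Named(String name) {
        // ... no no-argument constructor exists in this class
    }
}

class Employee extends Named {
    Employee() {
        super("unknown");   // required: there is no Named() for the implicit call
    }
}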
Defining a class hierarchy
Suppose we have another class called
Baby. It looks like this:
public class Baby {
    protected int age = 0;
    protected String firstname = "firstname";
    protected String lastname = "lastname";
    protected String gender = "MALE";
    protected int progress = 0;

    public Baby() {
    }

    public void move() {
        System.out.println("Moved.");
    }

    public void talk() {
        System.out.println("Spoke.");
    }
}
Our
Adult and
Baby classes look very similar. In fact, they're almost identical. That kind of code duplication makes maintaining code more painful than it needs to be. We can create a superclass, move all the common elements up to that class, and remove the code duplication. Our superclass could be called
Person, and it might look like this:
public class Person {
    protected int age = 0;
    protected String firstname = "firstname";
    protected String lastname = "lastname";
    protected String gender = "MALE";
    protected int progress = 0;

    public Person() {
    }

    public void move() {
        System.out.println("Moved.");
    }

    public void talk() {
        System.out.println("Spoke.");
    }
}
Now we can have
Adult and
Baby subclass
Person, which makes those two classes pretty simple at the moment:
public class Adult extends Person {
    public Adult() {
    }
}

public class Baby extends Person {
    public Baby() {
    }
}
Once we have this hierarchy, we can refer to an instance of each subclass as an instance of any of its superclasses in the hierarchy. For example:
Adult anAdult = new Adult();
System.out.println("anAdult is an Object: " + (anAdult instanceof Object));
System.out.println("anAdult is a Person: " + (anAdult instanceof Person));
System.out.println("anAdult is an Adult: " + (anAdult instanceof Adult));
System.out.println("anAdult is a Baby: " + (anAdult instanceof Baby));
This code will give us three
true results and one
false result. You can also cast an object to any type higher up in its hierarchy, like so:
Adult anAdult = new Adult();
Person aPerson = (Person) anAdult;
aPerson.move();
This code will compile without problems. We can cast an
Adult to type
Person, then call a
Person method on it.
Because we have this hierarchy, the code on our subclasses is simpler. But do you see a problem here? Now all
Adults and all
Babys (excuse the bad plural) will talk and move in the same way. There's only one implementation of each behavior. That's not what we want, because grownups don't speak or move like babies. We could override
move() and
talk() on the subclasses, but then we've got essentially useless "standard" behavior defined on our superclass. What we really want is a way to force each of our subclasses to move and talk in its own particular way. That's what abstract classes are for.
Abstraction
In an OO context, abstraction refers to the act of generalizing data and behavior to a type higher up the hierarchy than the current class. When you move variables or methods from a subclass to a superclass, you're abstracting those members.
Those are general terms, and they apply in the Java language. But the language also adds the concepts of abstract classes and abstract methods. An abstract class is a class that can't be instantiated. For example, you might create a class called
Animal. It makes no sense to instantiate such a class: In practice, you'd only want to create instances of a concrete class like
Dog. But all
Animals have some things in common, such as the ability to make noise. Saying that an
Animal makes noise doesn't tell you much. The noise it makes depends on the kind of animal it is. How do you model that? You define the common stuff on the abstract class, and you force subclasses to implement concrete behavior specific to their types.
You can have both abstract and concrete classes in your hierarchies.
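To make this concrete, here's a sketch of the Animal example just described (the class names are illustrative):

abstract class Animal {
    public abstract void makeNoise();   // no body: each subclass supplies its own
}

class Dog extends Animal {
    public void makeNoise() {
        System.out.println("Woof!");
    }
}

A line like new Animal() won't compile, because abstract classes can't be instantiated, but new Dog().makeNoise() works and prints Woof!.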
Using abstraction
Our
Person class contains concrete method behavior that we don't actually want every subclass to share. Let's remove it and force subclasses to implement that behavior polymorphically. We can do that by defining the methods on
Person to be abstract. Then our subclasses will have to implement those methods.
public abstract class Person {
    ...
    abstract void move();
    abstract void talk();
}

public class Adult extends Person {
    public Adult() {
    }

    public void move() {
        System.out.println("Walked.");
    }

    public void talk() {
        System.out.println("Spoke.");
    }
}

public class Baby extends Person {
    public Baby() {
    }

    public void move() {
        System.out.println("Crawled.");
    }

    public void talk() {
        System.out.println("Gurgled.");
    }
}
What have we done in this listing?
- We changed Person to make the methods abstract, forcing subclasses to implement them.
- We made Adult subclass Person, and implemented the methods.
- We made Baby subclass Person, and implemented the methods.
When you declare a method to be
abstract, you require subclasses to implement the method, or to be abstract themselves and pass along the implementation responsibility to their subclasses. You can implement some methods on an abstract class, and force subclasses to implement others. That's up to you. Simply declare the ones you don't want to implement as
abstract, and don't provide a method body. If a subclass fails to implement an abstract method from a superclass, the compiler will complain.
Now that both
Adult and
Baby subclass
Person, we can refer to an instance of either class as being of type
Person.
Refactoring to abstract behavior
We now have
Person,
Adult, and
Baby in our hierarchy. Suppose we wanted to make the two subclasses more realistic by changing their
move() methods, like this:
public class Adult extends Person {
    ...
    public void move() {
        this.progress++;
    }
    ...
}

public class Baby extends Person {
    ...
    public void move() {
        this.progress++;
    }
    ...
}
Now each class updates its instance variable to reflect some progress being made whenever we call
move(). But notice that the behavior is the same again. It makes sense to refactor the code to remove this code duplication. The most likely refactoring is to move
move() to
Person.
Yes, we're adding the method implementation back to the superclass we just took it out of. This is a simplistic example, so this back-and-forth might seem wasteful. But what we just experienced is a common occurrence when you write Java code. You often see classes and methods change as your system grows, and sometimes you end up with code duplication that you can refactor to superclasses. You might even do that, decide it was a mistake, and put the behavior back down in the subclasses. You simply can't know the right place for all behavior at the beginning of the development process. You learn the right place for behavior as you go.
Let's refactor our classes to put
move() back on the superclass:
public abstract class Person {
    ...
    public void move() {
        this.progress++;
    }

    public abstract void talk();
}

public class Adult extends Person {
    public Adult() {
    }

    public void talk() {
        System.out.println("Spoke.");
    }
}

public class Baby extends Person {
    public Baby() {
    }

    public void talk() {
        System.out.println("Gurgled.");
    }
}
Now our subclasses implement their different versions of
talk(), but share the same behavior for
move().
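A quick driver shows the polymorphism at work; this main() method is a sketch, assuming the three classes above:

public static void main(String[] args) {
    Person[] people = { new Adult(), new Baby() };
    for (int i = 0; i < people.length; i++) {
        people[i].move();   // shared implementation from Person: bumps progress
        people[i].talk();   // polymorphic: prints "Spoke." then "Gurgled."
    }
}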
When to abstract ... and when not to
Deciding when to abstract (or create a hierarchy) is a hotly debated topic in OO circles, especially among Java language programmers. Certainly there are few right and wrong answers about how to structure class hierarchies. This is an area where conscientious and skilled practitioners can (and often do) disagree. That said, there are some good rules of thumb to follow with regard to hierarchies.
First, don't abstract first. Wait until the code tells you that you should. It is almost always better to refactor your way to an abstraction than to assume that you need it at the outset. Don't assume that you need a hierarchy. Many Java programmers overuse hierarchies.
Second, resist the use of abstract classes when you can. They're not bad; they're just restrictive. We used an abstract class to force our subclasses to implement certain behavior. Would an interface (which we'll discuss in Interfaces ) be a better idea? Quite possibly. Your code will be easier to understand if it isn't made up of complex hierarchies with a mix of overridden and implemented methods. You might have a method defined three or four classes up the chain. By the time you use it in a sub-sub-sub-subclass, you might have to hunt to discover what that method does. That can make debugging frustrating.
Third, do use a hierarchy and/or abstract classes when it's smart to do so. There are many coding patterns that make use of the Java language abstract method and abstract class concepts, such as the Gang of Four Template Method pattern (see Resources).
Fourth, understand the price you pay when you use a hierarchy prematurely. It really can lead you down the wrong path quickly, because having the classes there, named as they are, with the methods they have, makes it very easy to assume all of that should be as it is. Maybe that hierarchy made sense when you created it, but it might not make sense anymore. Inertia can make it resistant to change.
In a nutshell, be smart about using hierarchies. Experience will help you be smarter, but it won't make you all-wise. Remember to refactor.
Interfaces
What's an interface?
The Java language includes the concept of an interface, which is simply a named set of publicly available behaviors and/or constant data elements for which an implementer of that interface must provide code. It doesn't specify behavior details. In essence (and to the Java compiler), an interface defines a new data type, and it's one of the more powerful features of the language.
Other classes implement the interface, which means that they can use any constants in that interface by name, and that they must specify behavior for the method definitions in the interface.
Any class in any hierarchy can implement a particular interface. That means that otherwise unrelated classes can implement the same interface.
Defining interfaces
Defining an interface is straightforward:
public interface interfaceName {
    final constantType constantName = constantValue;
    ...
    returnValueType methodName(arguments);
    ...
}
An interface declaration looks very similar to a class declaration, except that you use the
interface keyword. You can name the interface anything you want, as long as the name is valid, but by convention interface names look like class names. You can include constants, method declarations, or both in an interface.
Constants defined in an interface look like constants defined in classes. The
public and
static keywords are assumed for constants defined in an interface, so you don't have to type them. (
final is assumed as well, but most programmers type it out anyway).
Methods defined in an interface look different (generally speaking) from methods defined in classes, because methods in an interface have no implementation. They end with semicolon after the method declaration, and they don't include a body. Any implementer of the interface is responsible for supplying the body of the method. The
public and
abstract keywords are assumed for methods, so you don't have to type them.
You can define hierarchies of interfaces just as you can define hierarchies of classes. You do this with the
extends keyword, like so:
public interface interfaceName extends superinterfaceName, ... { interface body... }
A class can be a subclass of only one superclass, but an interface can extend as many other interfaces as you want. Just list them after
extends, separated by commas.
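For instance, here's a hypothetical sketch of an interface extending two others; any implementer of HumanLike would have to supply both methods:

interface Mover {
    void move();
}

interface Talker {
    void talk();
}

interface HumanLike extends Mover, Talker {
}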
Here's an example of an interface:
public interface Human {
    final String GENDER_MALE = "MALE";
    final String GENDER_FEMALE = "FEMALE";
    void move();
    void talk();
}
Implementing interfaces
To use an interface, you simply implement it, which means providing behavior for the methods defined in the interface. You do that with the
implements keyword:
public class className extends superclassName implements interfaceName, ... { class body }
By convention, the
extends clause (if there is one) comes first, followed by the
implements clause. You can implement more than one interface by listing the interface names separated by commas.
For example, we could have our
Person class implement the
Human interface (saying "implements
Human" means the same thing) like so:
public abstract class Person implements Human {
    protected int age = 0;
    protected String firstname = "firstname";
    protected String lastname = "lastname";
    protected String gender = Human.GENDER_MALE;
    protected int progress = 0;

    public void move() {
        this.progress++;
    }
}
When we implement the interface, we provide behavior for the methods. We have to implement those methods with signatures that match the ones in the interface, with the addition of the
public access modifier. But we've implemented only
move() on
Person. Don't we have to implement
talk()? No, because
Person is an abstract class, and the
abstract keyword is assumed for methods in an interface. That means any abstract class implementing the interface can implement what it wants, and ignore the rest. If it does not implement one or more methods, it passes that responsibility on to its subclasses. In our
Person class, we chose to implement
move() and not
talk(), but we could have chosen to implement neither.
The instance variables in our class aren't defined in the interface. Some helpful constants are, and we can reference them by name in any class that implements the interface, as we did when we initialized
gender. It's also quite common to see interfaces that contain only constants. If that's the case, you don't have to implement the interface to use those constants. Simply import the interface (if the interface and the implementing class are in the same package, you don't even have to do that) and reference the constants like this:
interfaceName.constantName
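For example, any class could do the following without implementing Human at all (a minimal sketch, assuming the Human interface above):

String defaultGender = Human.GENDER_MALE;   // just a constant reference, no implements clause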
Using interfaces
An interface defines a new reference data type. That means that you can refer to an interface anywhere you would refer to a class, such as when you cast, as illustrated by the following code snippet from a
main() method you can add to
Adult:
public static void main(String[] args) {
    ...
    Adult anAdult = new Adult();
    anAdult.talk();
    Human aHuman = (Human) anAdult;
    aHuman.talk();
}
Both calls to
talk() will display
Spoke. on the console. Why? Because an
Adult is a
Human once it implements that interface. You can cast an
Adult as a
Human, then call methods defined by the interface, just as you can cast
anAdult to
Person and call
Person methods on it.
Baby also implements
Human. An
Adult is not a
Baby, and a
Baby is not an
Adult, but both can be described as type
Human (or as type
Person in our hierarchy). Consider this code somewhere in our system:
public static void main(String[] args) {
    ...
    Human aHuman = getHuman();
    aHuman.move();
}
Is the
Human an
Adult or a
Baby? We don't have to care. As long as whatever we get back from
getHuman() is of type
Human, we can call
move() on it and expect it to respond accordingly. We don't even have to care if the classes implementing the interface are in the same hierarchy.
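The tutorial never shows getHuman()'s body, so here's one hypothetical implementation; any factory that returns a Human would do:

static Human getHuman() {
    // Either branch satisfies the caller, which only cares about the Human type.
    if (Math.random() < 0.5)
        return new Adult();
    return new Baby();
}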
Why interfaces?
There are three primary reasons for using interfaces:
- To create convenient or descriptive namespaces.
- To relate classes in different hierarchies.
- To hide underlying type details from your code.
When you create an interface to collect related constants, that interface gives you a descriptive name to use to refer to those constants. For example, you could have an interface named
Language to store constant string names for languages. Then you could refer to those language names as
Language.ENGLISH and the like. This can make your code easier to read.
The Java language supports single inheritance only. In other words, a class can only be a subclass of a single superclass. That's rather limiting sometimes. With interfaces, you can relate classes in different hierarchies. That's a powerful feature of the language. In essence, an interface simply specifies a set of behaviors that all implementors of the interface must support. It's possible that the only relationship that will exist between classes implementing the interface is that they share those behaviors that the interface defines. For example, suppose we had an interface called
Mover:
public interface Mover { void move(); }
Now suppose that Person implemented that interface. That means that any subclass of Person would also be a Mover.
Adult and
Baby would qualify. But so would
Cat, or
Vehicle. It's reasonable to assume that
Mountain would not. Any class that implemented
Mover would have the
move() behavior. A
Mountain instance wouldn't have it.
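Here's what that might look like for classes outside the Person hierarchy; Cat and Vehicle are hypothetical:

class Cat implements Mover {
    public void move() {
        System.out.println("Padded silently.");
    }
}

class Vehicle implements Mover {
    public void move() {
        System.out.println("Drove.");
    }
}

Both are Movers, even though neither has anything else in common with Adult or Baby.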
Last, but not least, using interfaces lets you ignore the details of specific types when you want to. Recall our example of calling
getHuman(). We didn't care what underlying type we got back; we just wanted it to be something we could call
move() on.
All of these are good reasons to use interfaces. Using one simply because you can is not.
Nested classes
What's a nested class?
As its name suggests, a nested class in the Java language is a class declared within another class. Here's a simple example:
public class EnclosingClass { ... public class NestedClass { ... } }
Typically, good programmers define nested classes when the nested class only makes sense within the context of the enclosing class. Some common examples include the following:
- Event handlers within a UI class
- Helper classes for UI components within those components
- Adapter classes to convert the innards of one class to some other form for users of the class
You can define a nested class as
public,
private, or
protected. You also can define a nested class as
final (to prevent it from being changed),
abstract (meaning that it can't be instantiated), or
static.
When you create a
static class within another class, you're creating what is appropriately called a nested class. A nested class is defined within another class, but can exist outside an instance of the enclosing class. If your nested class isn't
static, it can exist only within an instance of the enclosing class, and is more appropriately called an inner class. In other words, all inner classes are nested classes, but not all nested classes are inner classes. The vast majority of the nested classes you will encounter in your career will be inner classes, rather than simply nested ones.
Any nested class has access to all of the members of the enclosing class, even if they're declared
private.
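The distinction matters at instantiation time. A short sketch, using the EnclosingClass example above:

EnclosingClass outer = new EnclosingClass();
EnclosingClass.NestedClass inner = outer.new NestedClass();   // an inner class needs an enclosing instance

// If NestedClass were declared static, no enclosing instance would be needed:
// EnclosingClass.NestedClass nested = new EnclosingClass.NestedClass();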
Defining nested classes
You define a nested class just as you define a non-nested class, but you do it within an enclosing class. For a somewhat contrived example, let's define a
Wallet class inside
Adult. While in real life you could have a
Wallet apart from an
Adult, it wouldn't be all that useful, and it makes good sense that every
Adult has a
Wallet (or at least something to hold money, and
MoneyContainer sounds a little odd). It also makes sense that
Wallet wouldn't exist on
Person, because a
Baby doesn't have one, and all subclasses of
Person would inherit it if it existed up there.
Our
Wallet will be quite simple, since it only serves to illustrate the definition of a nested class:

protected class Wallet {
    protected ArrayList bills = new ArrayList();

    protected void addBill(int aBill) {
        bills.add(new Integer(aBill));
    }

    protected int getMoneyTotal() {
        int total = 0;
        Iterator iterator = bills.iterator();
        while (iterator.hasNext()) {
            Integer bill = (Integer) iterator.next();
            total += bill.intValue();
        }
        return total;
    }
}
We'll define that class inside
Adult, like this:
public class Adult extends Person {
    protected Wallet wallet = new Wallet();

    public Adult() {
    }

    public void talk() {
        System.out.println("Spoke.");
    }

    public void acceptMoney(int aBill) {
        this.wallet.addBill(aBill);
    }

    public int moneyTotal() {
        return this.wallet.getMoneyTotal();
    }

    protected class Wallet {
        ...
    }
}
Notice that we added
acceptMoney() to let an
Adult accept more money. (Feel free to expand the example to force your
Adult to give some up, which is a more common event in real life.)
Once we have our nested class and our new
acceptMoney() method, we can use them like this:
Adult anAdult = new Adult(); anAdult.acceptMoney(5); System.out.println("I have this much money: " + anAdult.moneyTotal());
Executing this code should report that
anAdult has a money total of 5.
Simplistic event handling
The Java language defines an event handling approach, with associated classes, that allows you to create and handle your own events. But event handling can be much simpler than that. All you really need is some logic to generate an "event" (which really doesn't have to be an event class at all), and some logic to listen for that event and then respond appropriately. For example, suppose that whenever a
Person moves, our system generates (or fires) a
MoveEvent, which we can choose to handle or not. This will require several changes to our system. We have to:
- Create an "application" class to launch our system and illustrate using the anonymous inner class.
- Create a MotionListener that our application can implement, and then handle the event in the listener.
- Add a List of listeners to Adult.
- Add an addMotionListener() method to Adult to register a listener.
- Add a fireMoveEvent() method to Adult so that it can tell listeners when to handle the event.
- Add code to our application to create an Adult and register itself as a handler.
This is all straightforward. Here's our
Adult with the new stuff:
public class Adult extends Person {
    protected Wallet wallet = new Wallet();
    protected ArrayList listeners = new ArrayList();

    public Adult() {
    }

    public void move() {
        super.move();
        fireMoveEvent();
    }
    ...

    public void addMotionListener(MotionListener aListener) {
        listeners.add(aListener);
    }

    protected void fireMoveEvent() {
        Iterator iterator = listeners.iterator();
        while (iterator.hasNext()) {
            MotionListener listener = (MotionListener) iterator.next();
            listener.handleMove(this);
        }
    }

    protected class Wallet {
        ...
    }
}
Notice that we now override
move(), call
move() on
Person first, then call
fireMoveEvent() to tell listeners to respond. We also added
addMotionListener() to add a
MotionListener to a running list of listeners. Here's what a
MotionListener looks like:
public interface MotionListener { public void handleMove(Adult eventSource); }
All that's left is to create our application class:
public class CommunityApplication implements MotionListener {
    public void handleMove(Adult eventSource) {
        System.out.println("This Adult moved: \n" + eventSource.toString());
    }

    public static void main(String[] args) {
        CommunityApplication application = new CommunityApplication();
        Adult anAdult = new Adult();
        anAdult.addMotionListener(application);
        anAdult.move();
    }
}
This class implements the
MotionListener interface, which means that it implements
handleMove(). All we do here is print a message to illustrate what happens when an event is fired.
Anonymous inner classes
Anonymous inner classes allow you to define a class in place, without naming it, to provide some context-specific behavior. It's a common approach for event handlers in user interfaces, which is a topic beyond the scope of this tutorial. But we can use an anonymous inner class even in our simplistic event-handling example.
You can convert the example from the previous page to use an anonymous inner class by changing the call to
addMotionListener() in
CommunityApplication.main() like so:
anAdult.addMotionListener(new MotionListener() {
    public void handleMove(Adult eventSource) {
        System.out.println("This Adult moved: \n" + eventSource.toString());
    }
});
Rather than having
CommunityApplication implement
MotionListener, we declared an unnamed (and thus anonymous) inner class of type
MotionListener, and gave it an implementation of
handleMove(). The fact that
MotionListener is an interface, not a class, doesn't matter. Either is acceptable.
This code produces exactly the same result as the previous version, but it uses a more common and expected approach. You will almost always see event handlers implemented with anonymous inner classes.
Using nested classes
Nested classes can be very useful. They also can cause pain.
Use a nested class when it would make little sense to define the class outside of an enclosing class. In our example, we could have defined
Wallet outside
Adult without feeling too badly about it. But imagine something like a
Personality class. Do you ever have one outside a
Person instance? No, so it makes perfect sense to define it as a nested class. A good rule of thumb is that you should define a class as non-nested until it's obvious that it should be nested, then refactor to nest it.
Anonymous inner classes are the standard approach for event handlers, so use them for that purpose. In other cases, be very careful with them. Unless anonymous inner classes are small, focused, and familiar, they obfuscate code. They can also make debugging more difficult, although the Eclipse IDE helps minimize that pain. Generally, try not to use anonymous inner classes for anything but event handlers.
Regular expressions
What is a regular expression?
A regular expression is essentially a pattern to describe a set of strings that share that pattern. For example, here's a set of strings that have some things in common:
- a string
- a longer string
- a much longer string
Each of these strings begins with "a" and ends with "string." The Java Regular Expression API helps you figure that out, and do interesting things with the results.
The Java language Regular Expression (or regex) API is quite similar to the regex facilities available in the Perl language. If you're a Perl programmer, you should feel right at home, at least with the Java language regex pattern syntax. If you're not used to regex, however, it can certainly look a bit weird. Don't worry: it's not as complicated as it seems.
The regex API
The Java language's regex capability has three core classes that you'll use almost all the time:
- Pattern, which describes a string pattern.
- Matcher, which tests a string to see if it matches the pattern.
- PatternSyntaxException, which tells you that something wasn't acceptable about the pattern you tried to define.
The best way to learn about regex is by example, so in this section we'll create a simple one in
CommunityApplication.main(). Before we do, however, it's important to understand some regex pattern syntax. We'll discuss that in more detail in the next panel.
Pattern syntax
A regex pattern describes the structure of the string that the expression will try to find in an input string. This is where regular expressions can look a bit strange. Once you understand the syntax, though, it becomes less difficult to decipher.
Here are some of the most common pattern constructs you can use in pattern strings:

- x? matches x, once or not at all
- x* matches x, zero or more times
- x+ matches x, one or more times
- . matches any single character
- \d matches any digit (and \D any non-digit)
- \s matches any whitespace character (and \S any non-whitespace character)
- \w matches any word character (and \W any non-word character)
- [abc] matches a, b, or c; [a-z] matches any character from a through z
- (x) matches x and captures it as a group
The first few constructs here are called quantifiers, because they quantify what comes before them. Constructs like
\d are predefined character classes. Any character that doesn't have special meaning in a pattern is a literal, and matches itself.
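A few one-line sanity checks illustrate these; String.matches() compiles and applies a pattern in one step:

System.out.println("abc".matches("a.c"));     // true: . matches any single character
System.out.println("ac".matches("ab*c"));     // true: b* allows zero b's
System.out.println("ac".matches("ab+c"));     // false: b+ requires at least one b
System.out.println("42".matches("\\d+"));     // true: one or more digits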
Matching
Armed with our new understanding of patterns, here's a simple example of code that uses the classes in the Java Regular Expression API:
Pattern pattern = Pattern.compile("a.*string");
Matcher matcher = pattern.matcher("a string");
boolean didMatch = matcher.matches();
System.out.println(didMatch);
int patternStartIndex = matcher.start();
System.out.println(patternStartIndex);
int patternEndIndex = matcher.end();
System.out.println(patternEndIndex);
First, we create a
Pattern. We do that by calling
compile(), a static method on
Pattern, with a string literal representing the pattern we want to match. That literal uses regex pattern syntax, which we can understand now. In this example, the English translation of the pattern is: "Find a string of the form 'a', followed by zero or more characters, following by 'string'".
Next, we call
matcher() on our
Pattern. That call creates a
Matcher instance. Creating the Matcher doesn't search anything by itself; the searching happens when we call one of its matching methods, such as matches(). As you know, every Java language string is an indexed collection of characters, starting with 0 and ending with the string length minus one. When we call matches(), the Matcher parses the string, starting at 0, and looks for matches against the pattern.
After that process completes, the
Matcher contains lots of information about matches found (or not found) in our input string. We can access that information by calling various methods on our
Matcher:
- matches() simply tells us if the entire input sequence was an exact match for the pattern.
- start() tells us the index value in the string where the matched string starts.
- end() tells us the index value in the string where the matched string ends, plus one.
In our simple example, there is a single match starting at 0 and ending at 7. Thus, the call to
matches() returns
true, the call to
start() returns 0, and the call to
end() returns 8. If there were more in our string than the characters in the pattern we searched for, we could use lookingAt() or find() instead of matches(). lookingAt() tries to match the pattern against the beginning of the input, but unlike matches(), it doesn't require the entire input to match; find() goes further and looks for a match anywhere in the input. For example, consider the following string:

Here is a string with more than just the pattern.

If we used matches() with the pattern a.*string, it would return false, because there's more to the string than just what's in the pattern. lookingAt() would also fail here, because the match doesn't begin at the first character, but find() would locate the a string substring.
Complex patterns
Simple searches are easy with the regex classes, but much greater sophistication is possible.
You might be familiar with a wiki, a Web-based system that lets users modify pages to "grow" a site. Wikis, whether written in the Java language or not, are based almost entirely on regular expressions. Their content is based on string input from users, which is parsed and formatted by regular expressions. One of the most prominent features of wikis is that any user can create a link to another topic in the wiki by entering a wiki word, which is typically a series of concatenated words, each of which begins with an uppercase letter, like this:
MyWikiWord
Assume the following string:
Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.
You can search for wiki words in this string with a regex pattern like this:
[A-Z][a-z]*([A-Z][a-z]*)+
Here's some code to search for wiki words:
String input = "Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.";
Pattern pattern = Pattern.compile("[A-Z][a-z]*([A-Z][a-z]*)+");
Matcher matcher = pattern.matcher(input);
while (matcher.find()) {
    System.out.println("Found this wiki word: " + matcher.group());
}
You should see the three wiki words in your console.
Replacing
Searching for matches is useful, but we also can manipulate the string once we find a match. We can do that by replacing matches with something else, just as you might search for some text in a word processing program and replace it with something else. There are some methods on
Matcher to help us:
- replaceAll(), which replaces all matches with a string we specify.
- replaceFirst(), which replaces only the first match with a string we specify.
Using these methods is straightforward:
String input = "Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.";
Pattern pattern = Pattern.compile("[A-Z][a-z]*([A-Z][a-z]*)+");
Matcher matcher = pattern.matcher(input);
System.out.println("Before: " + input);
String result = matcher.replaceAll("replacement");
System.out.println("After: " + result);
This code finds wiki words, as before. When the
Matcher finds a match, it replaces the wiki word text with
replacement. When you run this code, you should see the following on the console:
Before: Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.
After: Here is replacement followed by replacement, then replacement.
If we had used
replaceFirst(), we would've seen this:
Before: Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.
After: Here is a replacement followed by AnotherWikiWord, then SomeWikiWord.
Groups
We also can get a little fancier. When you search for matches against a regex pattern, you can get information about what you found. We already saw some of that with the
start() and
end() methods on
Matcher. But we also can reference matches via capturing groups. In each pattern, you typically create groups by enclosing parts of the pattern in parentheses. Groups are numbered from left to right, starting with 1 (group 0 represents the entire match). Here is some code that replaces each wiki word with a string that "wraps" the word:
String input = "Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.";
Pattern pattern = Pattern.compile("[A-Z][a-z]*([A-Z][a-z]*)+");
Matcher matcher = pattern.matcher(input);
System.out.println("Before: " + input);
String result = matcher.replaceAll("blah$0blah");
System.out.println("After: " + result);
Running this code should produce this result:
Before: Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord.
After: Here is a blahWikiWordblah followed by blahAnotherWikiWordblah, then blahSomeWikiWordblah.
In this code, we referenced the entire match by including
$0 in the replacement string. Any portion of a replacement string that takes the form
$<some int> refers to the group identified by the integer (so
$1 refers to group 1, and so on). In other words,
$0 is equivalent to this:
matcher.group(0);
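One subtlety worth noting: when a group repeats, as in the wiki-word pattern, group(1) holds only the last repetition it captured. A small sketch:

Matcher m = Pattern.compile("[A-Z][a-z]*([A-Z][a-z]*)+").matcher("SomeWikiWord");
if (m.find()) {
    System.out.println(m.group(0));   // SomeWikiWord -- the whole match
    System.out.println(m.group(1));   // Word -- the last "hump" the group matched
}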
We could've accomplished the same replacement goal by using some other methods, rather than calling
replaceAll():
matcher.reset();   // start over; the earlier matching left the Matcher at the end of the input
StringBuffer buffer = new StringBuffer();
while (matcher.find()) {
    matcher.appendReplacement(buffer, "blah$0blah");
}
matcher.appendTail(buffer);
System.out.println("After: " + buffer.toString());
We get these results again:
Before: Here is a WikiWord followed by AnotherWikiWord, then SomeWikiWord. After: Here is a blahWikiWordblah followed by blahAnotherWikiWordblah, then blahSomeWikiWordblah.
A simple example
Our
Person hierarchy doesn't offer us many opportunities for handling strings, but we can create a simple example that lets us use some of the regex skills we've learned.
Let's add a
listen() method:
public void listen(String conversation) { Pattern pattern = Pattern.compile(".*my name is (.*)."); Matcher matcher = pattern.matcher(conversation); if (matcher.lookingAt()) System.out.println("Hello, " + matcher.group(1) + "!"); else System.out.println("I didn't understand."); }
This method lets us address some
conversation to an
Adult. If that string is of a particular form, our
Adult can respond with a nice salutation. If not, it can say that it doesn't understand.
The
listen() method checks the incoming string to see if it matches a certain pattern: one or more characters, followed by "my name is ", followed by one or more characters, followed by a period. We use
lookingAt() to search a substring of the input for a match. If we find one, we construct a salutation string by grabbing what comes after "my name is", which we assume will be the name (that's what group 1 will contain). If we don't find one, we reply that we don't understand. Obviously, our
Adult isn't much of a conversationalist at the moment.
This is an almost trivial example of the Java language's regex capabilities, but it illustrates how they can be used.
Clarifying expressions
Regular expressions can be cryptic. It's easy to get frustrated with code that looks very much like Sanskrit. Naming things well and building expressions can help.
For example, here's our pattern for a wiki word:
[A-Z][a-z]*([A-Z][a-z]*)+
Now that you understand regex syntax, you should be able to read that without too much work, but our code would be much easier to understand if we declared a constant to hold the pattern string. We could name it something like
WIKI_WORD. Our
listen() method would start like this:
public void listen(String conversation) { Pattern pattern = Pattern.compile(WIKI_WORD); Matcher matcher = pattern.matcher(conversation); ... }
Another trick that can help is to define constants for the parts of patterns, then build up more complex patterns as assemblies of named parts. Generally speaking, the more complicated the pattern, the more difficult it is to decipher, and the more prone to error it is. You'll find that there's no real way to debug regular expressions other than by trial and error. Make life simpler by naming patterns and pattern components.
Collections
Introduction.
List implementations():
public void spendMoney(int aBill) { this.wallet.removeBill(aBill); }
This method calls
removeBill() on
Wallet:
protected void removeBill(int aBill) { Iterator iterator = bills.iterator(); while (iterator.hasNext()) { Integer bill = (Integer) iterator.next(); if (bill.intValue() == aBill) iterator.remove(); } }:
protected void removeBill(int aBill) { bills.remove(new Integer(aBill)); }:
public void spendMoney(List bills) { this.wallet.removeBills(bills); }
We need to add
removeBills() to our
wallet to make this work. Let's try this:
protected void removeBills(List billsToRemove) { this.bills.removeAll(bills); }
This is the most straightforward implementation we can use. We call
removeAll() on our
List of bills, passing in a
Collection. That method then removes all the elements from the list that are contained in the
Collection. Try running this code:
List someBills = new ArrayList(); someBills.add(new Integer(1)); someBills.add(new Integer(2)); Adult anAdult = new Adult(); anAdult.acceptMoney(1); anAdult.acceptMoney(1); anAdult.acceptMoney(2); List billsToRemove = new ArrayList(); billsToRemove.add(new Integer(1)); billsToRemove.add(new Integer(2)); anAdult.spendMoney(someBills); System.out.println(anAdult.wallet.bills);:
protected void removeBills(List billsToRemove) { Iterator iterator = billsToRemove.iterator(); while (iterator.hasNext()) { this.bills.remove(iterator.next()); } }
This code removes single matches only, rather than all matches. Remember to be careful with
removeAll().
Set implementations:
protected Set nicknames = new HashSet();
Then we add a method to add nicknames to the
Set:
public void addNickname(String aNickname) { nicknames.add(aNickname); }
Now try running this code:
Adult anAdult = new Adult(); anAdult.addNickname("Bobby"); anAdult.addNickname("Bob"); anAdult.addNickname("Bobby"); System.out.println(anAdult.nicknames);
You'll see only a single occurrence of
Bobby in the console.
Map implementations:
protected Map creditCards = new HashMap();
Then we add a method to add a credit card to the
Map:
public void addCreditCard(String aCardName) { creditCards.put(aCardName, new Double(0)); }:
public double getBalanceFor(String cardName) { Double balance = (Double) creditCards.get(cardName); return balance.doubleValue(); }
All that's left is to add the
charge() method, which allows us to add to our balance:
public void charge(String cardName, double amount) { Double balance = (Double) creditCards.get(cardName); double primitiveBalance = balance.doubleValue(); primitiveBalance += amount; balance = new Double(primitiveBalance); creditCards.put(cardName, balance); }
Now try running this code, which should show you
19.95 in the console:
Adult anAdult = new Adult(); anAdult.addCreditCard("Visa"); anAdult.addCreditCard("MasterCard"); anAdult.charge("Visa", 19.95); adAdult.showBalanceFor("Visa");).
The Collections class:
List source = new ArrayList(); source.add("one"); source.add("two"); List target = new ArrayList(); target.add("three"); target.add("four"); Collections.copy(target, source); System.out.println(target);:
List strings = new ArrayList(); strings.add("one"); strings.add("two"); strings.add("three"); strings.add("four"); Collections.sort(strings); System.out.println(strings);
You'll get
[four, one, three, two] in the console. But how can you sort classes you create? We can do this for our
Adult. First, we make the class mutually comparable:
public class Adult extends Person implements Comparable { ... }
Then we override
compareTo() to compare two
Adult instances. We'll keep the comparison simplistic for our example, so it's less work:
public int compareTo(Object other) { final int LESS_THAN = -1; final int EQUAL = 0; final int GREATER_THAN = 1; Adult otherAdult = (Adult) other; if ( this == otherAdult ) return EQUAL; int comparison = this.firstname.compareTo(otherAdult.firstname); if (comparison != EQUAL) return comparison; comparison = this.lastname.compareTo(otherAdult.lastname); if (comparison != EQUAL) return comparison; return EQUAL; }:
assert this.equals(otherAdult) : "compareTo inconsistent with equals.";
The other approach to comparing objects is to extract the algorithm in
compareTo() into a object of type
Comparator, then call
Collections.sort() with the collection to be sorted and the
Comparator, like this:
public class AdultComparator implements Comparator { public int compare(Object object1, Object object2) { final int LESS_THAN = -1; final int EQUAL = 0; final int GREATER_THAN = 1; if ((object1 == null) ;amp;amp (object2 == null)) return EQUAL; if (object1 == null) return LESS_THAN; if (object2 == null) return GREATER_THAN; Adult adult1 = (Adult) object1; Adult adult2 = (Adult) object2; if (adult1 == adult2) return EQUAL; int comparison = adult1.firstname.compareTo(adult2.firstname); if (comparison != EQUAL) return comparison; comparison = adult1.lastname.compareTo(adult2.lastname); if (comparison != EQUAL) return comparison; return EQUAL; } } public class CommunityApplication { public static void main(String[] args) { Adult adult1 = new Adult(); adult1.setFirstname("Bob"); adult1.setLastname("Smith"); Adult adult2 = new Adult(); adult2.setFirstname("Al"); adult2.setLastname("Jones"); List adults = new ArrayList(); adults.add(adult1); adults.add(adult2); Collections.sort(adults, new AdultComparator()); System.out.println(adults); } }.
Using collections:
Adult adult1 = new Adult(); Adult adult2 = new Adult(); Adult adult3 = new Adult(); List immutableList = Arrays.asList(new Object[] { adult1, adult2, adult3 }); immutableList.add(new Adult());
This code throws an
UnsupportedOperationException, because the
List returned by
Arrays.asList() is immutable. You cannot add a new element to an immutable
List. Keep your eyes open.
Dates
Introduction
The Java language gives you lots of tools for handling dates. Some of them are much more frustrating than the tools available in other languages. That said, with the tools that the Java language provides, there is almost nothing you can't do to create dates and format them exactly how you want.
Creating dates
When the Java language was young, it contained a class called
Date that was quite helpful for creating and manipulating dates. Unfortunately, that class did not support internationalization very well, so Sun added two classes that aimed to help the situation:
Calendar
DateFormat
We'll talk about
Calendar first, and leave
DateFormat for later.
Creating a
Date is still relatively simple:
Date aDate = new Date(System.currentTimeMillis());
Or we could use this code:
Date aDate = new Date();
This will give us a
Date representing the exact date and time right now, in the current locale format. Internationalization is beyond the scope of this tutorial, but for now, simply know that the
Date you get back is consistent with the geography of your local machine.
Now that we have an instance, what can we do with it? Very little, directly. We can compare one
Date with another to see if the first is
before() or
after() the second. We also can essentially reset it to a new instant in time by calling
setTime() with a
long representing the number of milliseconds since midnight on January 1, 1970 (which is what
System.currentTimeMillis() returns). Beyond that, we're limited.
Calendars
The
Date class is now more confusing than useful, because most of its date processing behavior is deprecated. You used to be able to get and set parts of the
Date (such as the year, the month, etc.). Now we're left having to use both
Date and
Calendar to get the job done. Once we have a
Date instance, we can use a
Calendar to get and set parts of it. For example:
Date aDate = new Date(System.currentTimeMillis()); Calendar calendar = GregorianCalendar.getInstance(); calendar.setTime(aDate);
Here we create a
GregorianCalendar and set its time to the
Date we created before. We could have accomplished the same goal by calling a different method on our
Calendar:
Calendar calendar = GregorianCalendar.getInstance(); calendar.setTimeInMillis(System.currentTimeMillis());
Armed with a
Calendar, we can now access and manipulate components of our
Date. Getting and setting parts of the
Date is a simple process. We simply call appropriate getters and setters on our
Calendar, like this:
calendar.set(Calendar.MONTH, Calendar.JULY); calendar.set(Calendar.DAY_OF_MONTH, 15); calendar.set(Calendar.YEAR, 1978); calendar.set(Calendar.HOUR, 2); calendar.set(Calendar.MINUTE, 15); calendar.set(Calendar.SECOND, 37); System.out.println(calendar.getTime());
This will print the formatted output string for July 15, 1978 at 02:15:37 a.m. (there also are helper methods on
Calendar that allow us to set some or almost all of those components simultaneously). Here we called
set(), which takes two parameters:
- The field (or component) of the
Datewe want to set.
- The value for that field.
We can reference the fields with named constants in the
Calendar class itself. In some cases, there is more than one name for the same field, as with
Calendar.DAY_OF_MONTH, which can also be referenced with
Calendar.DATE. The values are straightforward, except perhaps the ones for
Calendar.MONTH and the one for
Calendar.HOUR. Months in Java language dates are zero-based (that is, January is 0), which really makes it wise to use the named constants to set them, and can make it frustrating to display dates correctly. The hours run from 0 to 24.
Once we have an established
Date, we can extract parts of it:
System.out.println("The YEAR is: " + calendar.get(Calendar.YEAR)); System.out.println("The MONTH is: " + calendar.get(Calendar.MONTH)); System.out.println("The DAY is: " + calendar.get(Calendar.DATE)); System.out.println("The HOUR is: " + calendar.get(Calendar.HOUR)); System.out.println("The MINUTE is: " + calendar.get(Calendar.MINUTE)); System.out.println("The SECOND is: " + calendar.get(Calendar.SECOND)); System.out.println("The AM_PM indicator is: " + calendar.get(Calendar.AM_PM));
Built-in date formatting
You used to be able format dates with
Date. Now you have to use several other classes:
DateFormat
SimpleDateFormat
DateFormatSymbols
We won't cover all the complexities of date formatting here. You can explore these classes on your own. But we will talk about the basics of using these tools.
The
DateFormat class lets us create a locale-specific formatter, like this:
DateFormat dateFormatter = DateFormat.getDateInstance(DateFormat.DEFAULT); Date aDate = new Date(); String formattedDate = dateFormatter.format(today);
This code creates a formatted date string with the default format for this locale. On my machine, it looks something like this:
Nov 11, 2005
This is the default style, but it's not all that is available to us. We can use any of several predefined styles. We also can call
DateFormat.getTimeInstance() to format a time, or
DateFormat.getDateTimeInstance() to format both a date and a time. Here is the output of the various styles, all for the U.S. locale:
Customized formatting
These predefined formats are fine in most cases, but you can also use
SimpleDateFormat to define your own formats. Using
SimpleDateFormat is straightforward:
- Instantiate a
SimpleDateFormatwith a format pattern string (and a locale, if you wish).
- Call
format()on it with a
Date.
The result is a formatted date string. Here's an example:
Date aDate = new Date(); SimpleDateFormat formatter = new SimpleDateFormat("MM/dd/yyyy"); String formattedDate = formatter.format(today); System.out.println(formattedDate);
When you run this code, you'll get something like the following (it will reflect the date that's current when you run the code, of course):
11/05/2005
The quoted string in the example above follows the pattern syntax rules for date formatting patterns. Java.sun.com has some excellent summaries of those rules (see Resources). Here are some helpful rules of thumb:
- You can specify patterns for dates and times.
- Some of the pattern syntax isn't intuitive (for example,
mmdefines a two-digit minute pattern; to get an abbreviated month, you use
MM).
- You can include text literals in your patterns by placing them in single quotes (for example., using
"'on' MM/dd/yyyy"above produces
on 11/05/2005).
- The number of characters in a text component of a pattern dictates whether its abbreviated or long form will be used (
"MM"yields
11, but
"MMM"yields
Nov, and
"MMMM"yields
November).
- The number of characters in a numeric component of a pattern dictates the minimum number of digits.
If the standard symbols of
SimpleDateFormat still don't meet your custom formatting needs, you can use
DateFormatSymbols to customize the symbols for any component of a
Date or time. For example, we could implement a unique set of abbreviations for months of the year, like this (using the same
SimpleDateFormat as before):
DateFormatSymbols symbols = new DateFormatSymbols(); String[] oddMonthAbbreviations = new String[] { "Ja","Fe","Mh","Ap","My","Jn","Jy","Au","Se","Oc","No","De" }; symbols.setShortMonths(oddMonthAbbreviations); formatter = new SimpleDateFormat("MMM dd, yyyy", symbols); formattedDate = formatter.format(now); System.out.println(formattedDate);
This code calls a different constructor on
SimpleDateFormat, one that takes a pattern string and a
DateFormatSymbols that defines the abbreviations used when a short month appears in a pattern. When we format the date with these symbols, the result looks something like this for the
Date we saw above:
No 15, 2005
The customization capabilities of
SimpleDateFormat and
DateFormatSymbols should be enough to create any format you need.
Manipulating dates
You can go forward and backward in time by incrementing and decrementing dates, or parts of them. Two methods let you do this:
add()
roll()
The first lets you add some amount (or subtract by adding a negative amount) of time to a particular field of a
Date. Doing that will adjust all other fields of the
Date accordingly based on the addition to a particular field. For example, assume we begin with November 15, 2005 and increment the day field by 20. We could use code.add(Calendar.DAY_OF_MONTH, 20); System.out.println("After: " + formatter.format(calendar.getTime()));
The result looks something like this:
Before: Nov 15, 2005 After: Dec 05, 2005
Rather simple. But what does it mean to roll a
Date? It means you are incrementing or decrementing a particular date/time field by a given amount, without affecting other fields. For example, we could roll our date from November to December.roll(Calendar.MONTH, true); System.out.println("After: " + formatter.format(calendar.getTime()));
Notice that the month is rolled up (or incremented) by 1. There are two forms of
roll():
roll(int field, boolean up)
roll(int field, int amount)
We used the first. To decrement a field using this form, you pass
false as the second argument. The second form of the method lets you specify the increment or decrement amount. If a rolling action would create an invalid date value (for example, 09/31/2005), these methods adjust the other fields accordingly, based on valid maximum and minimum values for dates, hours, etc. You can roll forward with positive values, and backward with negative ones.
It's fine to try to predict what your rolling actions will do, and you can certainly do so, but more often than not, trial and error is the best method. Sometimes you'll guess right, but sometimes you'll have to experiment to see what produces the correct results.
Using Dates
Everybody has a birthday. Let's add one to our
Person class. First, we add an instance variable to
Person:
protected Date birthdate = new Date();
Next, we add accessors for the variable:
public Date getBirthdate() { return birthdate; } public void setBirthdate(Date birthday) { this.birthdate = birthday; }
Next, we'll remove the
age instance variable, because we'll now calculate it. We also remove the
setAge() accessor, because
age will now be a derived value. We replace the body of
getAge() with the following code:
public int getAge() { Calendar calendar = GregorianCalendar.getInstance(); calendar.setTime(new Date()); int currentYear = calendar.get(Calendar.YEAR); calendar.setTime(birthdate); int birthYear = calendar.get(Calendar.YEAR); return currentYear - birthYear; }
In this method, we now calculate the value of
age based on the year of the
Person's
birthdate and the year of today's date.
Now we can try it out, with this code:
Calendar calendar = GregorianCalendar.getInstance(); calendar.setTime(new Date()); calendar.set(Calendar.YEAR, 1971); calendar.set(Calendar.MONTH, 2); calendar.set(Calendar.DAY_OF_MONTH, 23); Adult anAdult = new Adult(); anAdult.setBirthdate(calendar.getTime()); System.out.println(anAdult);
We set
birthdate on an
Adult to March 23, 1971. If we run this code in January 2005, we should get this output:
An Adult with: Age: 33 Name: firstname lastname Gender: MALE Progress: 0
There are a few other housekeeping details that I leave as an exercise for you:
- Update
compareTo()on
Adultto reflect the presence of a new instance variable.
- Had we implemented it, we would have to update
equals()on
Adultto reflect the presence of a new instance variable.
- Had we implemented
equals(), we would have implemented
hashCode()as well, and we would have to update
hashCode()to reflect the presence of a new instance variable.
I/O
Introduction
The data that a Java language program uses has to come from somewhere. More often than not, it comes from some external data source. There are many different kinds of data sources, including databases, direct byte transfer over a socket, and files. The Java language gives you lots of tools with which you can get information from external sources. These tools are mostly in the
java.io package.
Of all the data sources available, files are the most common, and often the most convenient. Knowing how to use the available Java language APIs to interact with files is a fundamental programmer skill.
In general, the Java language gives you a wrapper class (
File) for a file in your OS. To read that file, you have to use streams that parse the incoming bytes into Java language types. In this section, we will talk about all of the objects you'll typically use to read files.
Files
The
File class defines a resource on your filesystem. It's a pain in the neck, especially for testing, but it's the reality Java programmers have to deal with.
Here's how you instantiate a
File:
File aFile = new File("temp.txt");
This creates a
File with the path
temp.txt in the current directory. We can create a
File with any path string we want, as long as it's valid. Note that having this
File object doesn't mean that the underlying file actually exists on the filesystem in the expected location. Our object merely represents an actual file that may or may not be there. If the underlying file doesn't exist, we won't know there's a problem until we try to read from or write to it. That's a bit unpleasant, but it makes sense. For example, we can ask our
File if it exists:
aFile.exists();
If it doesn't, we can create it:
aFile.createNewFile();
Using other methods on
File, we also can delete files, create directories, determine whether a file system resource is a directory or a file, etc. The real action occurs, though, when we write to and read from the file. To do that, we need to understand a little bit about streams.
Streams
We can access files on the filesystem using streams. At the lowest level, streams allow a program to receive bytes from a source and/or to send output to a destination. Some streams handle all kinds of 16-bit characters (types
Reader and
Writer). Others handle only 8-bit bytes (types
InputStream and
OutputStream). Within these hierarchies are several flavors of streams (all found in the
java.io package). At the highest level of abstraction, there are character streams and byte streams.
Byte streams read (
InputStream and subclasses) and write (
OutputStream
and subclasses) 8-bit bytes. In other words, byte streams could be considered a more raw type of stream. As a result, it's easy to understand why the Java.sun.com tutorial on essential Java language classes (see Resources) says that byte streams are typically used for binary data, such as images. Here's a selected list of byte streams:
Character streams read (
Reader and its subclasses) and write (
Writer and its subclasses) 16-bit characters. Subclasses either read to or write from data sinks, or process bytes in transit. Here's a selected list of character streams:
Streams are a large topic, and we can't cover them in their entirety here. Instead, we'll focus on the recommended streams for reading and writing files. In most cases, these will be the character streams, but we'll use both character and byte streams to illustrate the possibilities.
Reading and writing files
There are several ways to read from and write to a
File. Arguably the simplest approach goes like this:
- Create a
FileOutputStreamon the
Fileto write to it.
- Create a
FileInputStreamon the
Fileto read from it.
- Call
read()to read from the
Fileand
write()to write to it.
- Close the streams, cleaning up if necessary.
The code might look something like this:
try { File source = new File("input.txt"); File sink = new File("output.txt"); FileInputStream in = new FileInputStream(source); FileOutputStream out = new FileOutputStream(sink); int c; while ((c = in.read()) != -1) out.write(c); in.close(); out.close(); } catch (Exception e) { e.printStackTrace(); }
Here we create two
File objects: a
FileInputStream to read from the source file, and a
FileOutputStream to write to the output
File. (Note: This example was adapted from the Java.sun.com
CopyBytes.java
example; see Resources.) We then read in each byte of the input and write it to the output. Once done, we close the streams. It may seem wise to put the calls to
close() in a
finally block. However, the Java language compiler will still require that you catch the various exceptions that occur, which means yet another
catch in your
finally. Is it worth it? Maybe.
So now we have a basic approach for reading and writing. But the wiser choice, and in some respects the easier choice, is to use some other streams, which we'll discuss in the next panel.
Buffering streams
There are several ways to read from and write to a
File, but the typical, and most convenient, approach goes like this:
- Create a
FileWriteron the
File.
- Wrap the
FileWriterin a
BufferedWriter.
- Call
write()on the
BufferedWriteras often as necessary to write the contents of the
File, typically ending each line with a line termination character (that is,
\n).
- Call
flush()on the
BufferedWriterto empty it.
- Close the
BufferedWriter, cleaning up if necessary.
The code might look something like this:
try { FileWriter writer = new FileWriter(aFile); BufferedWriter buffered = new BufferedWriter(writer); buffered.write("A line of text.\n"); buffered.flush(); } catch (IOException e1) { e1.printStackTrace(); }
Here we create a
FileWriter on
aFile, then we wrap it in a
BufferedWriter. Buffered writing is more efficient than simply writing bytes out one at a time. When we're done writing each line (which we manually terminate with
\n), we call
flush() on the
BufferedWriter. If we didn't, we wouldn't see any data in the file, which defeats the purpose of all of this file writing effort.
Once we have data in the file, we can read it with some similarly straightforward code:
String line = null; StringBuffer lines = new StringBuffer(); try { FileReader reader = new FileReader(aFile); BufferedReader bufferedReader = new BufferedReader(reader); while ( (line = bufferedReader.readLine()) != null) { lines.append(line); lines.append("\n"); } } catch (IOException e1) { e1.printStackTrace(); } System.out.println(lines.toString());
We create a
FileReader, then wrap it in a
BufferedReader. That allows us to use the convenient
readLine() method. We read each line until there are none left, appending each line to the end of our
StringBuffer. When reading from a file, an
IOException could occur, so we surround all of our file-reading logic with a
try/
catch block.
Wrap-up
Summary
We've covered a significant portion of the Java language in the "Introduction to Java programming" tutorial (see Resources) and this tutorial, but the Java language is huge and a single, monolithic tutorial (or even several smaller ones) can't encompass it all. Here's a sampling of some areas we did not explore at all:
Resources
Learn
- Take Roy Miller's "Introduction to Java programming" for the basics of Java prgramming. (developerworks, November 2004)
- The java.sun.com Web site has links to all things Java programming. You can find every "official" Java language resource you need there, including the language specification and API documentation. You can also can find links to excellent tutorials on various aspects of the Java language, beyond the fundamental tutorial.
- Go to Sun's Java documentation page, for a link to API documentation for each of the SDK versions.
- Check out John Zukowski's excellent article on regex with the Java language here. This is just one article in his Magic with Merlin column.
- The Sun Java tutorial is an excellent resource. It's a gentle introduction to the language, but also covers many of the topics addressed in this tutorial. If nothing else, it's a good resource for examples and for links to other tutorials that go into more detail about various aspects of the language.
- The java.sun.com Web site has some excellent summaries of the date pattern rules here and more on the
CopyBytes.javaexample here.
- The developerWorks New to Java technology page is a clearinghouse for developerWorks resources for beginning Java developers, including links to tutorials and certification resources.
- You'll find articles about every aspect of Java programming in the developerWorks Java technology zone.
- Also see the Java technology zone tutorials page for a complete listing of free Java-focused tutorials from developerWorks.
Get products and technologies
- Download the intermediate.jar that accompanies this tutorial.
- You can download Eclipse from the Eclipse Web site. Or, make it easy on yourself and download Eclipse bundled with the latest IBM Java runtime (available for Linux and Windows).
Discuss
- Participate. | http://www.ibm.com/developerworks/java/tutorials/j-intermed/j-intermed.html | CC-MAIN-2015-06 | refinedweb | 10,567 | 57.16 |
(Random observation: Hmmm, strange, in the Data.Map version of primes above, we are missing 5 primes?) Hi Chaddai, Your algorithm does work significantly better than the others I've posted here :-) So much so, that we're going for a grid of 10000000 to get the timings in an easy-to-measure range. Here are the results: J:\dev\haskell>ghc -O2 -fglasgow-exts -o PrimeChaddai.exe PrimeChaddai.hs J:\dev\haskell>primechaddai number of primes: 664579 30.984 J:\dev\test\testperf>csc /nologo primecs.cs J:\dev\test\testperf>primecs number of primes: 664579 elapsed time: 0,859375 So, only 30 times faster now, which is quite a lot better :-D Here's the full .hs code: module Main where import IO import Char import GHC.Float import List import qualified Data.Map as Map import Control.Monad import System.Time import System.Locale..]] calculateNumberOfPrimes max = length $ takeWhile ( < max ) primes gettime :: IO ClockTime gettime = getClockTime main = do starttime <- gettime let( show(secondsfloat) ) return () On 7/15/07, Chaddaï Fouché <chaddai.fouche at gmail.com> wrote: > > Or if you really want a function with your requirement, maybe you > could take the painful steps needed to write : > let numberOfPrimes = length $ takeWhile (< 200000) primes > ? > -------------- next part -------------- An HTML attachment was scrubbed... URL: | http://www.haskell.org/pipermail/haskell-cafe/2007-July/028913.html | CC-MAIN-2014-15 | refinedweb | 212 | 68.77 |
Singleton instance of QSplashScreen
how to use QSplashScreen in an application of 100 files and each file has to change the message of QSplashScreen using showMessage.
i.e., singleton instance of splash screen.
pls any one show me light.
You can make wrapper class for working with QSplashScreen and use it as singleton.
since i'm a newbie ,could you please explain how to may use of wrapper class.
Just create your own custom singleton-class which will have private QSplashScreen field and two public methods: one for setting this field (from place where you are creating splashscreen) and second for setting message.
Thanks for ur reply..
I have created the singleton class SplashScreen
i'm using this
SplashScreen::getInstance->setMessage();
to set the message at different files.
But still i have a doubt where to initialize QApplication.
Mmm, QApplication is commonly initialized in main() method. Maybe you mean QSplashScreen? If yes, than I think it will be ok for you to init it in main() too.
@class MySplash : public QSplashScreen
{
Q_OBJECT
public:
explicit MySplash(QObject parent = 0);
static MySplash instance();
private:
MySplash* _inst;
}@
.cpp
@
MySplash MySplash::_inst = 0;
MySplash* MySplash::instance()
{
if ( !_inst )
_inst = new MySplash();
return _inst;
}@
@class SplashScreen : public QSplashScreen{
public:
static SplashScreen* getInstance();
void destroyInstance();
void SetSplashScreen();
void SetMessage(const std::wstring& message);
QString *splashMessage;
CString progressText;
private:
QSplashScreen *hostSplash_;
SplashScreen() {}
~SplashScreen(){}
static SplashScreen* m_pInstance;
};@.h file
@SplashScreen* SplashScreen::m_pInstance = NULL;
SplashScreen* SplashScreen::getInstance() {
if(NULL == m_pInstance ) {
m_pInstance = new SplashScreen();
}
return m_pInstance;
}
void SplashScreen::destroyInstance() {
delete m_pInstance;
m_pInstance = NULL;
}
void SplashScreen::SetMessage(const std::wstring& message ) {
progressText = message.c_str();
char pC = (char)(LPCTSTR)progressText ;
splashMessage->append(QString::fromAscii(pC));
hostSplash_->showMessage(*splashMessage);
}
void SplashScreen::SetSplashScreen() {
hostSplash_ = new QSplashScreen(QPixmap(":Resource Files/splash.jpg"));
}@
prakash02, and?
This is .cpp file.
I initialized the QApplication in main function, but my doubt does i can use the SplashScreen instance other than from main class.
Any help is appreciable.Thanks in advance.
prakash02, @SplashScreen* SplashScreen::m_pInstance = NULL;@ it's C style, it's better to use C++ style:
@SplashScreen* SplashScreen::m_pInstance = 0;@
And why do you use std::wstring instead QString?
I defined resource file of strings,now i want to use those messages to be display on QSplashScreen when the program execution calls that function which can be on different namespace or on different class.
Why do you use CString and std::wstring for setting/storing text? Why not QString? Also using CString and LPCTSTR type will break cross-platform compatibility.
Denis Kormalev. :)
2moderators of this section: please move this thread to more correspondent category (Desktop I think)
prakash02, how do you defiine resource file of strings?
Yes i can use QString, but previously those are used to display on the native windows dialog.Now I want to change those to QT to improve the look and feel.
try to use QObject::tr("") for strings instead string table in resource file. | https://forum.qt.io/topic/1011/singleton-instance-of-qsplashscreen/18 | CC-MAIN-2019-09 | refinedweb | 482 | 56.66 |
Re: DSN-less connection for Informix database
- From: "jeff" <jhersey at allnorth dottt com>
- Date: Tue, 24 Apr 2007 04:58:09 -0700
no, build an odbc for using the specificed driver ... my case in this
example ... Pervasive ODBC client Interface ... I know every user's machine
will have this DRIVER installed. But what I do not want to rely on is
having to SETUP an odbc connection profile .. dsn ... for each user or for
each machine.
So, what I did was created an ODBC System DSN ... opened the Regedit ...
navigate to ...
HKEY_LOCAL_MACHINE
SOFTWARE
ODBC
ODBC.INI
Find the DSN profile I just created...
Look at the KEY and VALUE list...
Dup this in the parameter string you see below..
For example, here is one that will connect to an access database ... ODBC
....
'Driver={Microsoft Access Driver
(*.mdb)};UID=admin;PWD=;DBQ=C:\Test.MDB',ConnectOption
='SQL_DRIVER_CONNECT,SQL_DRIVER_NOPROMPT;'
I assumed you question was how to use an ODBC driver to connect to a
database without needing a User or System DSN profile. Is this correct.
If so, the above should work for you ... building your connection string.
Does this help?
If you are useing Informix ... and your DSN is jeco ...
Dim FactorJaco As String = "dsn=jaco;catalog=factor;uid=user;pwd=passord"
in your connection string, replace the DNS=jaco ... with a string you build
using the information you get from the registry entry.
the example I gave you, was for a PERVASIVE dsn'less connection string ...
instead of me using a NAMED dsn connection profile ...
DNS='myDNSProfile';uid=MyUserName;pwd=MyPassword ...
I use ...
Driver={Pervasive ODBC Client Interface}; ServerDSN=purchasingdata;
ServerName=10.10.10.7.1583; TCPPort=1583; TransportHint=TCP:SPX ;
uid=MyUserName'pwd=MyPassword
If I look at myDSNProfile in the ODBC.ini section of the registry, they will
be the following keys...
ServerDSN=purchasingData
ServerName=10.10.10.7.1583
....
all the information that is required by the Pervasive ODBC Client Interface
driver ...
....
remember, the registry only contains parameter / value lists for the
driver, nothing else. So, if you can duplicate the parameter string, you do
not need to rely on a predefined DSN. I can not give you the specific
connection string for you INFORMIX connection becuase I do not have your
driver loaded on my machine - which would allow me to create a dns profile,
and review the parameters in the registry.
Jeff
"Bill Nguyen" <billn_nospam@xxxxxxxx> wrote in message
news:%236oCX4hhHHA.2332@xxxxxxxxxxxxxxxxxxxxxxx
Jeff;
Where do you go to get this info in the registry?
Are you talking about ODBC for Informix database?
Thanks
Bill
"jeff" <jhersey at allnorth dottt com> wrote in message
news:OkT5vkahHHA.4296@xxxxxxxxxxxxxxxxxxxxxxx
To create dsn'less ODBC connections, I create a DSN on my machine and
look at the registry to see what data elements it creates, I then simply
create a connection string. FOr example, here is on I use to connect
Timberline Software - P/O system.
Driver={Pervasive ODBC Client Interface}; ServerDSN=purchasingdata;
ServerName=10.10.10.7.1583; TCPPort=1583; TransportHint=TCP:SPX
I then add all the other connection data to it - user name, password and
so on. This string above replaces the need for me to create a DSN
connection profile on each client machine or on the citrix server.
Jeff.
"Bill Nguyen" <billn_nospam_please@xxxxxxxx> wrote in message
news:eT0MI24gHHA.1240@xxxxxxxxxxxxxxxxxxxxxxx
I have this connection string using ODBC DSN
Dim FactorJaco As String =
"dsn=jaco;catalog=factor;uid=user;pwd=passord"
This requires an ODBC DSN (jaco) at every client PC. I need to use
DSN-less connection for terminal server setup.
Any help is greatly appreciated.
Bill
.
- Follow-Ups:
- Re: DSN-less connection for Informix database
- From: Bill Nguyen
- References:
- DSN-less connection for Informix database
- From: Bill Nguyen
- Re: DSN-less connection for Informix database
- From: jeff
- Re: DSN-less connection for Informix database
- From: Bill Nguyen
- Prev by Date: Re: strings and input files
- Next by Date: Re: microsoft.visualbasic namespace obsolete?
- Previous by thread: Re: DSN-less connection for Informix database
- Next by thread: Re: DSN-less connection for Informix database
- Index(es): | http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.languages.vb/2007-04/msg01594.html | crawl-002 | refinedweb | 685 | 58.89 |
ULiege - Aerospace & Mechanical Engineering
Due to historical reasons, the preprocessor of SAMCEF (called BACON) can be used to generate a mesh and geometrical entities. The module
toolbox.samcef defines functions that convert a
.fdb file (
fdb stands for “formatted database”) to commands that Metafor understands. If a
.dat file (i.e. a BACON script file) is provided, Metafor automatically runs BACON first as a background process in order to create the missing
.fdb file. In this case, a SAMCEF license is required.
The example simulation consists of a sheared cube (lower and upper faces and fixed and moved one with respect to another). The figure below shows the geometry:
The BACON input file named
cubeCG.dat (located in
apps/ale) contains the commands used to create the geometry, the mesh and node groups (selections).
Geometry
The geometry is created with the following BACON commands (see BACON manual for details):
.3POIfor the 8 points.
.3DROfor the 12 edges.
.CONfor the 6 wires.
.FACEfor the 6 sides.
.PLANto create a surface (required for the 2D domain, see below).
.VPEAUfor the skin.
.DOMfor the domain (one 3D domain - the cube - and one 2D domain - the lower side which will be meshed and extruded).
Mesh Generation
.GEN.
MODIFIE LIGNE”.
transfini”) and extruded with the command “
EXTRUSION”.
Choice of Element Type
.HYP VOLUMEis used (in 2D and 3D).
Definition of node/element groups (selections)
.SELis used
.RENis used to change the numbering of nodes and mesh elements, it should be done before the
.SEL.
Manual Creation of the
.fdb file
BACON can be started manually with the command:
samcef ba aleCubeCg n 1
From the
.dat file, a
.fdb is created with BACON with the commands:
INPUT .SAUVE DB FORMAT .STO
Summary: What can be imported from BACON?
Line-
Arc(points generated automatically with a negative numbed in Metafor are shifted (
numPoint = numMaxPoints-numPoint)
Plan-
Ruled-
Coons(Only planes defined using three numbers are imported (other planes generate cpoints outside db) ).
MultiProjSkinobjects are created, it is best to create them in the Python data set.
MeshedPoints)
nodeOnLine, …)
Reading the BACON file from Metafor
In Metafor, the file
.dat is converted thanks to a conversion module named
toolbox.samcef (see
apps.ale.cubeCG):
import toolbox.samcef bi = toolbox.samcef.BaconImporter(domain, os.path.splitext(__file__)[0]+'.dat' ) bi.execute()
where
domain is the domain that should be filled with the converted mesh and geometry. The second argument corresponds to the full path to the file
cubeCG.dat (it is computed from the full path of the python input file).
If all goes well, a file
cubeCG.fdb is then created in the folder
workspace/apps_ale_cubeCG
Element Generation in Metafor
The BACON attributes are converted to
Groups in Metafor. For example, if attribute #99 has been used when generating mesh in BACON, all the elements are stored in
groupset(99) in Metafor.
app = FieldApplicator(1) app.push(groupset(99)) interactionset.add(app)
Boundary conditions in Metafor
Selections in BACON are translated into
Groups with the same number. Boundary conditions such as prescribed displacements or contact can be thus defined easily in Metafor. For example, a selection such as
.SEL GROUPE 4 NOEUDS can lead to the following command in the input file: | http://metafor.ltas.ulg.ac.be/dokuwiki/doc/user/geometry/import/tuto2 | CC-MAIN-2021-43 | refinedweb | 535 | 60.61 |
Well, then "what name is in which namespace" is relative to which function we're considering. In my example above, imagine aa has x declared, bb has y, and cc has z. Then y and z are in the same namespace when we look from the perspective of aa, but they are not in the same namespace from the perspective of bb. Even worse, cc sees z but doesn't see y. How can they be in the same namespace then?
I always thought about namespaces as mappings from names to objects, independent of perspective. Whether two names are in the same namespace, should be a question with an objective answer. | https://bugs.python.org/msg373068 | CC-MAIN-2021-21 | refinedweb | 111 | 81.02 |
Eddie.
Eddie is written from scratch and is developed using my experiences of porting Mark Pilgrim's Feedparser to perl. With that knowledge in hand Eddie is designed to be as cleanly designed as is possible. The parser has been developed using Feedparser's excellent documentation and their extensive test cases.
100% Java
Parses RSS 0.90, Netscape RSS 0.91, Userland RSS 0.91, RSS 0.92, RSS 0.93, RSS 0.94, RSS 1.0, RSS 2.0, Atom 0.3, Atom 1.0 feeds.
Passes 97% of the 3502 FeedParser unit tests.
Parses non-wellformed feeds
Open Source
The intention is to add support for improved date parsing support in the very near future. The library should be able to pass 100% of the FeedParser unit tests. In the medium term I intend to add a ROME compatibility layer to allow you to use it as a drop in replacement for ROME.
Released 0.2, which has massively improved character encoding support, support for CDF, numerous bug fixes, support for input other than files and support for using TagSoup to sanitize the input. You can download the new release in the Download section.
Eddie is licensed under the GNU Public License a license certified by the Open Source Initative (),.
Basically, if your program is Open Source, you may use Eddie. If you wish to use Eddie in a commercial application, contact me
You can download Eddie 0.2 using the links below
Eddie has very few dependencies. It relies on features of Java 1.5. It also uses a number of external projects. It uses Xerces () for XML parsing, Log4J () for logging, Apache Commons-Codecs () for Base64 decoding and Jython ( as part of the testing framework. There is also optional support for TagSoup () to sanitize entries.
To parse a feed, create a Parser object, optionally pass it some HTTP headers and then call parse().
import uk.org.catnip.eddie.parser.Parser; import uk.org.catnip.eddie.Feed; public class Main { public static void main(String[] args) { Parser parser = new Parser(); Feed feed = parser.parse(args[0]); System.out.println(feed); } } | http://www.davidpashley.com/projects/eddie.html | crawl-001 | refinedweb | 353 | 60.61 |
Hi All -
I’m trying to use my Arduino as an I2C slave, where a master can poll the Arduino and read the current state of its digital and analog inputs. I’d like the I2C master to be able to poll either single parts of the current status, or just grab the whole ‘current state’.
As a first step, I’ve programmed my Arduino to use two different data sets and flip between the two sets every thirty seconds. I’m having trouble making the Arduino respond correctly to multi-byte reads, though. The ‘request’ callback only gets called once on a multi-byte read, and I have no idea how to tell in the request handler how many bytes were requested.
Here’s my Arduino code:
#include <Wire.h> byte datasetA[] = {0xaa, 0x55, 0x01, 0xff}; // test pattern byte datasetB[] = {0x33, 0x31, 0x34, 0x32}; // Pi, more or less int ptr = 0; boolean state = HIGH; int counter = 0; void setup(){ Wire.begin(0x50); // write address 0xa0, read address 0xa1 Wire.onRequest(requestHandler); Wire.onReceive(receiveHandler); Serial.begin(9600); Serial.println("Initialized"); } void loop(){ counter++; counter = counter % 30; if (counter == 0){ state = !state; } delay(1000); // yeah, I realize this will drift slightly as loop() takes > 1 second to complete } void requestHandler(){ if (state){ Wire.send(datasetA[ptr]); }else{ Wire.send(datasetB[ptr]); } Serial.println("Request handled"); ptr++; ptr = ptr % 4; } void receiveHandler(int count){ Serial.print("Receive "); Serial.print(count, HEX); Serial.println(" bytes"); }
The symptom is that if I request a single byte from the Arduino, I get a correct byte. If I issue multiple single-byte reads, I get the complete data set and it repeats just like it should, and changes every 30 seconds just like it should. If I issue a n-byte read to the Arduino, I get one byte of the current dataset, followed by (n-1) bytes of 0xFF.
I spent a little while digging around the Wire library source but wasn’t able to figure out what to do. It doesn’t look like I can tell from ‘userspace’ (what I call the arduino Wire class) that the bus master is clocking the Arduino several times and not sending a NACK.
I did try another thing – I hard-coded my requestHandler to respond with all 4 bytes – and this works…but it isn’t really what I want, because I’d like the I2C master to be able to request one byte (or any arbitrary number of bytes, really) at a time sometimes.
If anybody has any clues to get me started, I’d be all eyes.
Thanks for any insights,
Reid | https://forum.arduino.cc/t/arduino-as-i2c-slave/21743 | CC-MAIN-2022-05 | refinedweb | 439 | 69.21 |
Utilities to list log files generated by CANedge loggers
Project description
CANedge Browser - List Log Files (Local, S3)
This package lets you easily list CANedge CAN data log files. Simply specify the source (local disk or S3 server) and the start/stop period. The listed log files can then be used with other packages such as
mdf_iter and
can_decoder.
Key features
1. Extract a subset of log files between a start/stop date & time
Installation
Use pip to install the
canedge_browser module:
pip install canedge_browser
Dependencies
fsspec(required)
mdf_iter(required)
Module usage example
In the below example, we list log files between two dates from a MinIO S3 server:
import canedge_browser import s3fs from datetime import datetime, timezone fs = s3fs.S3FileSystem( key="<key>", secret="<secret>", client_kwargs={ "endpoint_url": "", }, ) devices = ["<bucket>/23AD1AEA", "<bucket>/86373F4D"] start = datetime(year=2020, month=8, day=4, hour=10, tzinfo=timezone.utc) stop = datetime(year=2020, month=9, day=9, tzinfo=timezone.utc) log_files = canedge_browser.get_log_files(fs, devices, start_date=start, stop_date=stop) print("Found a total of {} log files".format(len(log_files))) for log_file in log_files: print(log_file)
Regarding timezone
NOTE: All time inputs into the library must include a timezone. If in doubt, set this to UTC (+00:00).
Regarding S3 server types
If you need to connect to e.g. an AWS S3 server, simply use the relevant endpoint (e.g.). Similarly, for MinIO servers, you would use the relevant endpoint (e.g.).
HTTP vs. HTTPS
To connect to a MinIO S3 server where TLS is enabled via a self-signed certificate, you can connect by adding the path to your public certificate in the
verify field in the
setup_fs_s3 function.
Regarding path syntax
Note that all paths are relative to the root
/. For POSIX systems, this will likely follow the normal filesystem structure. Windows systems gets a slightly mangled syntax, such that
C:\Some folder\a subfolder\the target file.MF4 becomes
/C:/Some folder/a subfolder/the target file.MF4.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/canedge-browser/0.0.6/ | CC-MAIN-2021-49 | refinedweb | 350 | 56.76 |
Today we will discuss about one of the most useful design patterns.
The Factory Method
It's a creational design pattern meaning that it helps us create objects efficiently.
Efficient object creation is a regular problem, that's why Factory Method is one of the most widely used design patterns.
By the end of this article, you will:
- Understand the core concepts of Factory Method
- Recognize opportunities to use Factory Method
- Learn to create a suitable implementation of Factory Method to suit your application
Definition
“One advantage of static factory methods is that, unlike constructors, they have names.” ― Joshua Bloch, Effective Java
Factory Method is a creational design pattern that provides an interface to create objects to a parent-class but allows child classes to modify the type of object that will be created.
Problem
Let's imagine that you have a shipping company. When it first started you only supported trucks.
In terms of code, it's gonna look like this:
- Shipping class — manages the shipping mechanism.
- Truck class — class that represents a truck.
class Truck: def __init__(self, truck_id): self.truck_id = truck_id pass def ship(self): # Logic for shipping pass; class Shipping: def __init__(self, info): self.truck = Truck(info["truck_id"]) self.status = "Packaged" self.package = info["package"] pass def send(self): self.status = "Pending" self.truck.ship() # Initializing a shipping mission data = { "truck_id": 1, "package": "some_package" } mission = Shipping(info=data) mission.send()
As you can see, the
shipping class directly create a
truck object, making them very coupled to one another. The
shipping class would not work without a truck object.
Now, imagine that your shipping company became popular and clients are wanting to ship their products overseas. So naturally, you introduce a new type of transportation, the plane.
But, how will we be able to add that to our code?
A naive solution will be to check if the
shipping class uses a
truck or a
plane.
class Shipping: def __init__(self, info, type): self.type = type if self.type == "truck": self.truck = Truck(info["truck_id"]) self.status = "Packaged" self.package = info["package"] else: self.plane = Plane(info["plane_id"]) self.status = "Packaged" self.package = info["package"] def send(self): self.status = "Pending" if self.type == "truck": self.truck.ship() else: self.plane.ship()
This can get messy very quickly:
- The more the types of transportation, the more conditional code we have to write.
- Due to more code, we have added complexity.
- This breaks the single responsibility principle, the
shippingclass handles shipping of many different types of transportation.
- This breaks the open-closed principle because every time we have to add a new type of transportation we have to edit existing code.
- Changing in any of the code in the
truckor
planeclass will require changes to the
shippingclass.
Our problem here is that we don't exactly know what types of objects we will have, sometimes it's a truck, plane, or even a new type of transportation.
This is what the factory method pattern fixes.
Solution
The factory method pattern suggests that it's better to instantiate objects outside the constructor to a separate method and let subclasses control which objects get created.
So what we can do to our shipping class is:
- Remove the object instantiation from the constructor
- Make the
Shippingclass an abstract class
- Add an abstract method
setTransportthat will set the appropriate transport
- Create two new classes
TruckShippingand
PlaneShippingthat inherit from
Shippingclass.
- Implement the
setTransportfunction in the new classes.
import abc # Python Library that allows us to create abstract classes class Shipping(metaclass=abc.ABCMeta): def __init__(self, package): self.status = "Packaged" self.package = package self.transport = None @abc.abstractmethod def setTransport(self, data): pass class TruckShipping(Shipping): def setTransport(self, id): self.transport = Truck(id=id) class PlaneShipping(Shipping): def setTransport(self, id): self.transport = Plane(id=id)
But what about the
send method we had in the first version of the
Shipping class. The first version of the
send method directly accessed the
truck object and ran the method
ship inside it.
Factory method pattern encourages us to create an interface or abstract class to our object types, to make them consistent and interchangeable.
Let's do another refactor:
- Create an abstract class called
Transportwith an abstract method
ship.
- Create two new classes that inherit from
Transportcalled
Truckand
Plane.
class Transport(metaclass=abc.ABCMeta): def __init__(self, id): self.id = id @abc.abstractmethod def ship(self): pass class Truck(Transport): def ship(self): # Shipping instructions for trucks pass class Plane(Transport): def ship(self): # Shipping instructions for planes pass
Finally in our Shipping abstract class, we can create a method called
send that calls the
ship method of a transport.
def send(self): self.transport.ship()
When to use the pattern?
Design patterns should always be used within reason, as software engineers we need to be able to detect these opportunities.
For factory method pattern there are a couple of reasons why you might want to use it:
- When you don't know the exact types of objects your class will be interacting with. In our example, we could've had one transport system, or a hundred. You just don't know.
- Use factory method pattern when you want your users to extend the functionality of your library or framework.
- The factory method can also be used to save memory usage. Instead of creating a new object every time, you can save it in some sort of caching system.
Pros & Cons
Pros
- Removes tight coupling between classes.
- Increases readability.
- Allows us to construct subclasses with different properties than the parent class without using the constructor.
- Follows the single responsibility principle, making the code easier to support.
- Follows the open-closed principle, if you want to add a new type of transport, you can simply create a new subclass, without modifying existing code.
- Easier to test.
Cons
- The code becomes more complicated because you introduced many new subclasses.
Conclusion
Factory method is one of the most widely used design pattern, and one that you just might use next time at work!
In this article, you learned:
- What the factory method is, and it's components.
- How to refactor code, into the factory method pattern.
- Situations where factory method would be useful.
- Pros and cons of factory method
Further Reading
If you want to learn more about the design patterns, I would recommend Diving into Design Patterns. It explains all 23 design patterns found in the GoF book, in a fun and engaging manner.
Another book that I recommend is Heads First Design Patterns: A Brain-Friendly Guide, which has fun and easy-to-read explanations.
Discussion (1)
I have already been using the factory method but it is with your article that I can you know about its nunaces. Also, just a suggestion for cleaner code, in your naive solution example, you may want to take package and status initialisation out of if-else block. | https://dev.to/tamerlang/design-patterns-factory-method-4icj | CC-MAIN-2021-31 | refinedweb | 1,153 | 58.48 |
Things used in this project
Story
The Idea
Recently Hackster.io is flooding with OLED based projects. OLEDs are cool little display which we already know. I was looking for a programmable power supply for powering my prototypes and testing components. So, I decided to make one.
It has the following features:
- Programmable
- Rechargeable
- Portable
- Step Variable
- Voltage/Current/Power Meter
- Protected Relay
- Customizable, Compact and Cute
- Cool OLED User Interface
- Push Button User Control and Menu-based Navigation
- Firmware Upgradable for more features!
And most versatile power supply for low power electronics projects.
Live Action!
Spec of the Device
The device has following specification:
- Max Output DC Load Current: 400 mA
- Voltage Range: 2.0 Volts - 12.0 Volts
- Voltage Step: 0.1 Volts Approx
- Best Efficiency: 75 %
- Current Measurement Accuracy: +/- 1 mA
- Voltage Measurement Accuracy: +/- 0.02 Volts
Please note that this device is a quick prototype. It is possible to make 0-30, even Negative Supply and more Output Current by using additional batteries more electronics and upgraded design.
Working Principle
The design itself is hardware intensive. Lots of stuffs happening here. A crude block diagram of the system is something like this:
Source of power is the 3.7 V Li-Po Battery which is USB rechargeable. Using a XL6009 DC-DC boost module first we make 15.6 volts from the Li-Po. To run the MCU we also make a 5 Volt using 7805 Regulator.
The Arduino UNO clone Atmega328P is connected with 2 Interrupt based User Input Switch, an Elegant OLED Output Display. Rx/Tx/DTR firmware (sketch) upload port through USB/Serial from PC. (module 1)
The Heart of the project is the MCP4131 Digital Potentiometer (Digipot) + LM 358 OpAmp based Step Voltage Generator. This voltage is the Control Voltage of LM317 Adjustable Regulator. (module 2)
Digipot is controlled from the Arduino through Pseudo-SPI like command. LM317 is designed such a way that the Output Pin Voltage is always 1.25 Volts higher than the Adjust Pin Voltage provided that the INPUT Pin's Voltage is high enough (here 15.6 volts). (module 3)
The step voltage is fed to the Adjust Pin to create variable Output from the Arduino as needed by the User.
The ADC measures all the voltages associated with supervision and protection; battery voltage, boosted voltage, charging sense voltage and output voltage are conditioned through voltage divider network for feeding the ADC range, which is 0-1.1 volts here. I have used the INTERNAL REFERENCE of Arduino which creates an reference voltage of 1.1 volts.
For the current sensing, the return (Load Gnd) from the Output Load is connected in series with 1 Ohms Current Sense Resistor to the System Gnd. When current flows through the external loads, there is also a voltage drop in this sense resistor. This voltage is amplified through OP07 Precision Operation Amplifier and fed to one of the ADC pin.
Lastly, for the battery charging, 5 volts from the USB is connected in series with a 4007 diode and a 5 ohms current limiting resistor to the Li-Po Battery. This is a crude charging method, not the best for Li-Po charging.
Operation Summery: The MCP4131 Digital Potentiometer creates step voltages with in 0-5 Volts range in step of about 40mV (7-bit 10K Digipot has 129 steps 5V/128 = 0.40 mV), which is then 2.5 times amplified by the LM358 that gives 0-12.5 volts control voltage range with steps of 0.1 volt. This amplified step voltage signal is fed to the Adjust Pin of LM317. LM317 generates an output voltage of V_Step+1.25 Volts which is supplied to the external loads. The return/ground of the external load is connected to the internal ground through 1 Ohms Current Sense resistor. Suppose: x mA current is flowing to external load, it will create x mV drop (Ohms Law V=I*R) on the 1 Ohms Current Sense Resistor. This small voltage signal is fed to Low Offset (10uV) OpAmp OP07 configured with 2.5X gain, which will generate 2.5x mV Output. The Arduino ADC is configured with 1.1 Volts internal reference so that voltages form 0 -1100 mV can be sensed in step of about 1mV (1100/1023). Output of OP07 is connected to Arduino ADC for current sensing. This is why the current limit is 400mA. It can be increased/reduced by changing the gain of OP07. Similarly output voltage range can be changed by changing the boost voltage & gain of LM358. Other voltages are measured with resistive voltage divider network attenuating voltages to fit the ADC Range. The latch relay has 2 coils. By applying momentary power to any of the coil, relay contacts can be switched. Once switched it remains there, so the coil is powered off immediately.
Building the Project
First we start with a single switch socket box, and make necessary cuts and alignments for placing the battery, USB charging port, power switch, etc.
Next, heat sink is made with copper tape and coin for the DC-DC boost module.
The boost module is placed inside the socket box:
Using the above parts, the following 3 modules are made:
- Arduino + I/O + Control Module
- Step Voltage and Adjustable Regulator Module
- Current Sensing Module
Finally the spider web connections among all the boards are connected and soldered.
After using the hot glue as a filler, finally we have it:
Developing the Firmware & Operating Procedure
The firmware (Arduino Sketch) is right now 1.0.2 Beta. Not all features are available right now. But most important features like controlling voltage, connecting/disconnection relay, viewing information are enabled. In the
void setup()
there are few initialization functions to warm up the Arduino pins associated to different external hardware.
INPUT: There are 2 interrupt-based input button for increasing/decreasing output voltage, access menu (not available on this version). INT0 & INT1 on Arduino Pin 2 and 3 are coded for FALLING EDGE INTERRUPT. You will see 2 capacitors in parallel with mechanical switches for de-bouncing. Code is written to trigger interrupts when user presses these switches to turn on/off output through relay or increase/decrease voltage (Beta).
OUTPUT: The 1306 OLED shows output information acquiring data from ADC, internal timer (for device up time) and flag variables to inform user about Output enable/disable status. Based on U8G library the OLED prints info as text and numerical. I have plans for using graphical (Analog type) representation.
5 digital pins of SSD1306 (OLED from Waveshare) clk,din,cs,d/c,res are connected to Arduino 10, 9, 11, 13, 12 pins and programmed accordingly. In the main loop
update_display() function is called every time to update the info on the OLED.
Internal Timer 1 of Atmega328P in configured to periodically trigger every 1 sec to keep track of time.
CONTROL: The MCP 4131 Digital Potentiometer is controller with
increment_digipot() &
decrement_digipot() functions where data is shifted out with proper clocking and delay using Pin 6, 7, 8 as CS, Clk, Data Pins. It's like slow soft SPI. Since I have already used up Hardware SPI pins somewhere else, this was the only solution then.
Two digital pins 4 & 5 are used to control the latching relay. A short high pulse is fed to the relay driving transistors to energize the 2 coils to flip the relay. It happens both automatically (during overload/short circuit) or manually by the user .
ADC: The
calc_VI() function in the main loop performs
analogRead
to get 20 times averaged Voltage and Current information and updated the variable for new information which is then printed on the display
The sketch is written in multiple tabs to organize code for different functions associated to different operations. There are ADC, Digipot,
Display_Fn, Interrupt, Relay and Timer tabs arranging all the user defined functions. I will try too add more comments explaining all the functions, but you should not find it hard to understand because those functions are based on multiple Arduino functions doing certain tasks.
Conclusion & Limitation
This programmable power supply will help me to make projects/prototypes more efficiently. The OLED is cool, loving it!
Measurement of voltage current power will help for precognitive analysis for other projects.
There are some serious limitations of this device:
- Voltage can't go below 2.0 V
- Voltage output is stepped not continuous
- Current measurement creates ground shifting for high current
- ADC measurement has low resolutions
- Efficiency is the worst in class at low voltage high current loading
- Non-standard, slightly unsafe Li-Po charging
But it does the job, because good enough is perfect. It's my perfect Aibo!
Schematics
Code
Arduino Programmable Portable Power SupplyArduino
// Pin Reset, D0 & D1 for uploading Sketch // Pin D9,D10,D11,D12,D13 for controlling OLED Display // ADC A0 Pin for Sensing V_boost // ADC A2 Pin for Sensing V_batt (LiPo) // ADC A3 Pin for Sensing I_Output (Load) // ADC A4 Pin for Sensing V_USB (Charging) // ADC A5 Pin for Sensing V_Output (Load) // Latch Relay's 2 Coils Driving Pin D4 &D5 #define RC1 4 #define RC2 5 // User Input Switchs connected to Pin D2 & D3 #define SW1 2 #define SW2 3 // Pin D6,D7,D8 for Digital Pot Control Pins #define CS_PIN 6 #define CLK_PIN 7 #define DATA_PIN 8 volatile uint8_t Switch1 = 1; volatile uint8_t Switch2 = 1; float V_Out = 0.0; float I_Out = 0.0; float V_Bat = 0.0; float V_Bst = 0.0; float V_Chg = 0.0; uint32_t time = 0; #include "U8glib.h" // OLED Display Control Pins //SSD1306 oled waveshare(clk,din,cs,d/c,res); // THIS FOR WAVESHARE U8GLIB_SSD1306_128X64 u8g(10, 9,11, 13,12); void setup(void) { // flip screen, if required analogReference(INTERNAL); u8g.setRot180(); button_init(); relay_init(); init_timer1(); digipot_init(); } void loop(void) { update_display(); calc_VI(); if (Switch1==0) { rc1_latch(); Switch1=1; increment_digipot(); } if (Switch2==0) { rc2_latch(); Switch2=1; decrement_digipot(); } delay(100); }
Code Ver 1.0.1 BetaC/C++
No preview (download only).
Code Ver 1.0.2 BetaC/C++
Bug Fix for Overload Trip
Few more bugs will be fixed on next release
No preview (download only).
Credits
Replications
Did you replicate this project? Share it!I made one
Love this project? Think it could be improved? Tell us what you think! | https://www.hackster.io/PSoC_Rocks/programmable-pocket-power-supply-with-oled-display-3f9d36 | CC-MAIN-2017-26 | refinedweb | 1,711 | 64.41 |
SPH Movie Pointillism?
In this tutorial we will learn how to make the frames for a modified version of the above movie (frames were then combined into a movie using Quicktime). Make sure you are set up correctly first and you have read the first AstroBlend tutorial which discusses the general setup of python scripts in Blender and the second AstroBlend tutorial which shows how to use AstroBlend to load data and render images.
Get the Data
We are starting from the blend file "usual.blend" which has an Image editor, python console and 3D viewer in its layout. This can be found on the file download page.
Before proceeding further, you need to download the data we'll be using for this example here. As of this tutorial, all SPH data needs to be formatted in a text file in five columns which are:
(1) the particle identifier number (or a place holder number if there is none), (2)-(4) the x/y/z coordinates of the particle in whatever units you'd like, (5) a number specifying the "type" of particle (more on this in a bit).This huge text file is then read into Blender in serial. This is terribly inefficient, and the above movie took about 1.5 days on 8 cores to render. Ick. Therefore, in this tutorial we'll be using a subset of the data consisting of 1/100th of the particles (so, about 10,000 particles). Updates to AstroBlend will read in parallel, but we're just not there yet.
Also, its worth noting that some SPH codes are supported with direct reading from files via yt. More on that in a subsequent tutorial.
Render A Single Image
Before going about the business of making a whole movie, perhaps it would be best to start with reading in a single data file and figuring out which set of particles we want to render, where to place the camera, etc.
First, lets do the usual setting up of the script as discussed in the first tutorial and the camera and the rendering as discussed in the second tutorial which can be done by copying and pasting the following code into Blender's python console:
import science # import AstroBlend library # set up camera cam = science.Camera() cam.location = (2.0,0,0) cam.pointing = (0,0,0) # initialize render render_directory = '/Users/jillnaiman/blenderRenders/' render_name = 'mysphrender_' render = science.Render(render_directory, render_name) # initialize lighting light = science.Lighting('EMISSION') # light by surface emission
From this typical setting up of stuff we will see the following camera tracked to an empty:
Next, lets choose a file to read in:
ddir = '/Users/jillnaiman/data/sphdec/' # directory where text files are stored dfile = ddir + 'outputs_dec_005.txt' # create filename from directory + name of a single file
In this particular simulation, the following particle types are used:
0 = Gas Particles 1 = Dark Matter Halo Particles 2 = Origional Disk Stars 3 = Origional Buldge Stars 4 = Newly Formed Stars 5 = Central Supermassive Black Holes (only 2 of these particles)
So, lets choose what colors to make the different particle types by specifying their RGB triplets:
colors = [(1,1,0), (0,0,1), (1,0,0), (1,0,0), (1,1,1), (0,1,0)]
This set of simulation files has distances in units of kpc, which will put things at large distances in the Blender volume. Since the camera in Blender will only resolve a specific distance scale (more about camera clipping here), we want to rescale the imported distance coordinates:
scale = (0.1, 0.1, 0.1) # rescale all x/y/z coords by this
The last thing to do before importing the data and rendering is to figure out the size "halo" we want to give each particle. Loading many particles into Blender as individual spheres would be too memory intensive for large scale simulations, so we will be applying a modified form of Dr. Kent's particle loader, optimized for easy importing and deleting. In this scheme, each particle of a certain type is a vertex in a larger "type object", and each vertex is given a little glowing halo to make it look like a particle during the render. We need to choose the best size for our little halos by more or less trial and error - too large and all of the particles will smear together, too small and they won't be rendered. For this simulation, I've found the following to work for the six different particle types:
halo_sizes = (0.0008,0.0008,0.0008,0.0008,0.0008,0.008)
We can now read in the particle data with the simple command:
myobject = science.Load(dfile, scale = scale, halo_sizes = halo_sizes, particle_num=6, particle_colors=colors)
Now you should see all the data loaded into Blender like so:
where the particle types are labeled "output_dec_005particle##" in the Object Selector panel on the right (by default the last particle type imported is selected, in this case "particle05").
So, this looks a bit more like a swarm of particles and less like two galaxies merging, no? That is because the particles we are seeing are the dark matter halo particles, and not the particles making up the galaxies embedded within. To see the galaxy particles, we need to "hide" the dark matter particles with the following lines:
# hide the 1st particle part_hide = (False, True, False, False, False, False) myobject.particle_hide = part_hide
If we hit the render button we see the following in the UV editor window:
which is totally not a galaxy. What went wrong? Well, we can clearly see the central
black holes from our simulation (green dots), but you'll recall we made the halos for
these particular sorts of particles an order of magnitude larger than the rest of the
particles. This is where the camera clipping stuff comes into play - the camera is
simply not able to render the smaller haloed particles. To fix this we can play with
the camera clipping (
cam.clip_begin) the halo sizes (
myobject.halo_sizes),
or simply move the camera in, which is the route we will take here:
cam.location = (1.0, 0.0, 0.0)
If we like this view and want to save this render to file, all we need to do is the usual render command:
render.render()
Again, one must be careful when deleting objects particle set as Blender will leak memory like crazy if you aren't careful. To delete your particle set do:
science.delete_object(myobject) # delete all particles in set
Before moving onto the movie making portion of this tutorial, I'll just note here that the full script which loads and renders a single image is the "rendersphframe.py" script on the List of Scripts page.
Make A Movie
Now that we know how to render a frame, rendering a whole movie is trivial. All we need to change in our script is to create a list of files to load in instead of a single text file, and we need to call the SPH movie renderer which will iteratively call import, render, and delete particle data. This is encapsulated in the following script (called "sphmovie.py" on the List of Scripts page):
import science # import AstroBlend library # set up camera cam = science.Camera() cam.location = (1.0,0,0) cam.pointing = (0,0,0) # initialize render render_directory = '/Users/jillnaiman/blenderRenders/' render_name = 'mysphrender_' render = science.Render(render_directory, render_name) # initialize lighting light = science.Lighting('EMISSION') # light by surface emission # generate the list of files to iterate over ddir = '/Users/jillnaiman/data/sphdec/' # where data is stored df = 'outputs_dec_' # base name of data files dfiles = [] nf = 200 ns = 1 # create list of files for i in range(ns,nf): num = "%03d" % (i) # create strings from numbers, with 3 digits dfiles.append(ddir + df + num + '.txt') # various color and scale parameters colors = [(1,1,0), (0,0,1), (1,0,0), (1,0,0), (1,1,1), (0,1,0)] scale = (0.1, 0.1, 0.1) # rescale all x/y/z coords by this halo_sizes = (0.0008,0.0008,0.0008,0.0008,0.0008,0.008) # hide the 1st particle part_hide = (False, True, False, False, False, False) for i in range(0,nf-1): myobject = science.Load(dfiles[i], scale = scale, halo_sizes = halo_sizes, particle_num=6, particle_colors=colors) myobject.particle_hide = part_hide render.render() science.delete_object(myobject)
This little low resolution movie took about 20 minutes for all the frames to render, again emphasising the need for me to implement parallel particle data importing. Stay tuned for fuller SPH support in the future!
Beginning Camera MotionsPrevious Tutorial
yt+AstroBlend SurfacesNext Tutorial | http://www.astroblend.com/tutorials/tutorial_simpleSphMovie.html | CC-MAIN-2021-39 | refinedweb | 1,435 | 58.42 |
Image Drag with Mouse in JavaFX
By vaibhavc on Nov 29, 2008
So, we got a discussion here. Last week we(me, Subrata and Vikram, both my office colleagues) are discussing about dragging an image with mouse pointer in JavaFX.
So, this was the first code. Point is to drag an image from the same place where we first hit the mouse, like it happens when we drag a folder :
package sample.input.MouseEvent; import java.lang.System; var x: Number; var y: Number; var im = Image { url: "{__DIR__}im2.PNG" }; var temp1:Number = 0; var temp2: Number = 0; var count: Integer = 1; Stage { title: "Application title" width: 250 height: 280 scene: Scene { content: [ ImageView { x: bind x - temp1 y: bind y - temp2 image: Image { url: "{__DIR__}im2.PNG" } onMouseDragged: function( e: MouseEvent ):Void { x = e.x; y = e.y; if(count <= 1) { temp1 = e.x; temp2 = e.y; } count++; } } ] } }You can see those patches of counts and flags which makes the code so unstable. And a bug, when you leave the mouse once, it cant grip the image from your mouse point again.
Subrata has written a cleaner code which works correct and here it is :
package mousedrag;import javafx.stage.Stage; import javafx.scene.Scene; import javafx.scene.image.ImageView; import javafx.scene.image.Image; import javafx.scene.input.MouseEvent;/\*\* \* @author Subrata Nath \*/var imgX : Number = 20; var imgY : Number = 20; var startX : Number; var startY : Number ; var distX : Number; var distY : Number ;Stage { title: "Mouse smooth drag" width: 250 height: 280 scene: Scene { content: [ ImageView { x : bind imgX y : bind imgY image: Image {url: "{__DIR__}Mail.png" } onMousePressed: function( e: MouseEvent ):Void { startX = e.x; startY = e.y; // Calculate the distance of the mouse point from the image top-left corner // which will always come out as positive value distX = startX - imgX; distY = startY - imgY; } onMouseDragged: function( e: MouseEvent ):Void { // Find out the new image postion by subtracting the distance part from the mouse point. imgX = e.x - distX; imgY = e.y - distY; } } ]} }
I actually did this awhile back (in June 2007) -
all the syntax has changed. I wanted to update it so that when you (Mouse)clicked on the image it would change to a different image today. It used to be you could do "url: bind pix" - now it complains "url has script-only (default) bind in javafx.scene.image.image". Any ideas ?
Posted by Charles Ditzel on November 30, 2008 at 03:27 AM IST #
O yes, little modification will work for you :)
image: bind Image {
url: pix
}
in place of
image: Image {
url: bind pix
}
They made it script-only maybe because changing url will change the image itself, so bind the image in place of url.
This is your code for Catcing the mouseEvent example :
/\*
\* Main.fx
\*
\* Created on Nov 30, 2008, 3:20:48 PM
\*/
package sample.transform.\*;
import javafx.scene.input.MouseEvent;
var pix = "";
var switchit = false;
Stage {
title: "Catching MouseEvents"
width: 250
height: 280
scene: Scene {
content: [
ImageView {
image: bind Image {
url: pix
}
scaleX: 0.75
scaleY: 0.75
// Mouse click changes the initial image
onMouseClicked: function( e: MouseEvent ):Void {
if (switchit == false) {
pix = "";
switchit = true;
}
else {
pix = "";
switchit = false;
}
}
}
]
}
}
Posted by Vaibhav on November 30, 2008 at 07:28 AM IST #
Thanks for the info!
Posted by Charles Ditzel on November 30, 2008 at 04:01 PM IST #
Again, Thanks. I have blogged an update :
Posted by Charles Ditzel on November 30, 2008 at 06:14 PM IST #
Thanks Charles and you are welcome too :)
Posted by Vaibhav Choudhary on December 01, 2008 at 03:02 07:38 AM IST #
thanks Alanna !
Posted by Vaibhav on March 12, 2009 at 02:14 AM IST #
Hello Vaibhav
I have a larger than frame image in my webpage that I would like to see piece by piece when I click and drag the mouse.
Is there a simple java script for this? thanks!
Posted by Natalia on March 14, 2009 at 12:13 PM IST #
no idea about JS but in FX you can do that by clip feature. I guess in JS also there must be some feature in which you can clip a big image into a clip size (defined by you) and then mouse move can help it going here and there.
Posted by Vaibhav on March 14, 2009 at 12:18 PM IST #
Excellent post. Thanks for sharing.
Posted by clipping path on April 07, 2009 at 08:03 AM IST #
Posted by watchgy on December 20, 2009 at 12:35 PM IST #
Thanks a lot! this works for me (a lot better than:)
Posted by Peter on April 16, 2010 at 03:13 PM IST #
Again me :-/
I wanted to drag not only an image. This was not possible with this solution so I went back to the javafx gettingstarted example I mentioned before. And if you download the provided example as a NB project you want get an exception like I did when I was working with the explanation in the tutorial:
AssignToBoundException: Cannot assign to bound variable
So you can use the getting started example but download the project (at the very top)!
Posted by Peter on April 18, 2010 at 10:33 AM IST # | https://blogs.oracle.com/vaibhav/entry/image_drag_with_mouse_in | CC-MAIN-2016-22 | refinedweb | 872 | 78.59 |
Originally published November 2018. Updated September 2020. This article describes the features and functionality of TypeScript 4.0.
While TypeScript is very simple to understand when performing basic tasks, having a deeper understanding of how its type system works is critical to unlocking advanced language functionality. Once we know more about how TypeScript really works, we can leverage this knowledge to write cleaner, well-organized code.
If you find yourself having trouble with some of the concepts discussed in this article, try reading through the Definitive Guide to TypeScript first to make sure you’ve got a solid understanding of all the basics.
Behind the
class keyword
In TypeScript, the
class keyword provides a more familiar syntax for generating constructor functions and performing simple inheritance. It has roughly the same syntax as the ES2015
class syntax, but with a few key distinctions. Most notably, it allows for non-method properties, similar to this Stage 3 proposal. In fact, declaration of each instance method or property that will be used by the class is mandatory, as this will be used to build up a type for the value of
this within the class.
But what if we couldn’t use the
class keyword for some reason? How would we make an equivalent structure? Is it even possible? To answer these questions, let’s start with a basic example of a TypeScript class:
class Point { static fromOtherPoint(point: Point): Point { // ... } x: number; y: number; constructor(x: number, y: number) { // ... } toString(): string { // ... } }
This archetypical class includes a static method, instance properties, and instance methods. When creating a new instance of this type, we’d call
new Point(, ), and when referring to an instance of this type, we’d use the type
Point. But how does this work? Aren’t the
Point type and the Point constructor the same thing? Actually, no!
In TypeScript, types are overlaid onto JavaScript code through an entirely separate type system, rather than becoming part of the JavaScript code itself. This means that an interface (“type”) in TypeScript can—and often does—use the same identifier name as a variable in JavaScript without introducing a name conflict. (The only time that an identifier in the type system refers to a name within JavaScript is when the typeof operator is used.)
When using the
class keyword in TypeScript, you are actually creating two things with the same identifier:
- A TypeScript interface containing all the instance methods and properties of the class; and
- A JavaScript variable with a different (anonymous) constructor function type
In other words, the example class above is effectively just shorthand for this code:
// our TypeScript `Point` type interface Point { x: number; y: number; toString(): string; } // our JavaScript `Point` variable, with a constructor type let Point: { new (x: number, y: number): Point; prototype: Point; // static class properties and methods are actually part // of the constructor type! fromOtherPoint(point: Point): Point; }; // `Function` does not fulfill the defined type so // it needs to be cast to <any> Point = <any> function (this: Point, x: number, y: number): void { // ... }; // static properties/methods go on the JavaScript variable... Point.fromOtherPoint = function (point: Point): Point { // ... }; // instance properties/methods go on the prototype Point.prototype.toString = function (): string { // ... };
TypeScript also has support for ES6 Class expressions.
Adding type properties to classes
As mentioned above, adding non-method properties to classes in TypeScript is encouraged and required for the type system to understand what is available on the class.
class Animal { species: string; color: string = 'red'; id: string; }
In this example,
className,
color, and
id have been defined as being properties that can exist on the class. However by default, className and id have no value. TypeScript can warn us about this with the
--strictPropertyInitialization flag, which will throw an error if a class property is not assigned a value directly on the definition, or within the constructor. The value assigned to color is not actually assigned directly to the
prototype. Instead, its value is assigned inside the constructor in the transpiled code, meaning that it is safe to assign non-primitive types directly without any risk of accidentally sharing those values with all instances of the class.
A common problem in complex applications is how to keep related sets of functionality grouped together. We already accomplish this by doing things like organizing code into modules for large sets of functionality, but what about things like types that are only applicable to a single class or interface? For example, what if we had an
Animal class that accepted an
attributes object:
export class Animal { constructor(attributes: { species: string; id: string; color: string; }) { // ... } } export default Animal;
In this code, we’ve succeeded in defining an anonymous type for the
attributes parameter, but this is very brittle. What happens when we subclass
Animal and want to add some extra properties? We’d have to write the entire type all over again. Or, what if we want to reference this type in multiple places, like within some code that instantiates an
Animal? We wouldn’t be able to, because it’s an anonymous type assigned to a function parameter.
To solve this problem, we can use an
interface to define the constructor arguments and export that alongside the class.
export interface AnimalProperties { species?: string; id?: string; color?: string; } export class Animal { constructor(attributes: AnimalProperties = {}) { for (let key in attributes) { this[key] = attributes[key]; } } } export default Animal;
Now, instead of having an anonymous object type dirtying up our code, we have a specific
AnimalProperties interface that can be referenced by our code as well as any other code that imports
Animal. This means that we can easily subclass our attributes parameter while keeping everything DRY and well-organized:
import Animal, { AnimalProperties } from './Animal'; export interface LionProperties extends AnimalProperties { roarVolume: string; } // normal class inheritance… export class Lion extends Animal { // replace the parameter type with our new, more specific subtype constructor(attributes: LionProperties = { roarVolume: 'high' }) { super(attributes); } } export default Lion;
As mentioned earlier, using this pattern, we can also reference these types from other code by importing the interfaces where they are needed:
import Animal, { AnimalProperties } from './Animal'; import Lion from './Lion'; export function createAnimal< T extends Animal = Animal, K extends AnimalProperties = AnimalProperties >(Ctor: { new (...args: any[]): T; }, attributes: K): T { return new Ctor(attributes); } // w has type `Animal` const w = createAnimal(Animal, { species: 'rodent' }); // t has type `Lion` const t = createAnimal(Lion, { species: 'feline', roarVolume: 'massive' });
As of TypeScript 4.0, class property types can be inferred from their assignments in the constructor. Take the following example:
class Animal { sharpTeeth; // <-- no type here! 😱 constructor(fangs = 2) { this.sharpTeeth = fangs; } }
Prior to TypeScript 4.0, this would cause
sharpTeeth to be typed as any (or as an error if using a strict option). Now, however, TypeScript can infer that
sharpTeeth is the same type as
fangs, which is a number.
Note that more complex initialization code, such as using an initialization function, will still require manual typing. In the following example, Typescript will not be able to infer types, and you will have to manually type the class properties.
class Animal { sharpTeeth!: number; constructor() { this.initialize(); } initialize() { this.sharpTeeth = 2; } }
Access Modifiers
Another welcome addition to classes in TypeScript is access modifiers that allow the developer to declare methods and properties as
public,
private,
protected, and
readonly. As of TS 3.8, ECMAScript private fields are also supported via the
# character resulting in a hard private field. Note that access modifiers cannot be used on hard private fields.
class Widget { class: string; // No modifier implies public private _id: string; #uuid: string; readonly id: string; protected foo() { // ... } }
If no modifier is provided, then the method or property is assumed to be
public which means it can be accessed internally or externally. If it is marked as
private then the method or property is only accessible internally within the class. This modifier is only enforceable at compile-time, however. The TypeScript compiler will warn about all inappropriate uses, but it does nothing to stop inappropriate usage at runtime.
protected implies that the method or property is accessible only internally within the class or any class that extends it but not externally. Finally,
readonly will cause the TypeScript compiler to throw an error if the value of the property is changed after its initial assignment in the class constructor.
Getters and Setters
Class properties can have getters and setters. A getter lets you compute a value to return as the property value, while a setter lets you run arbitrary code when the property is set.
Consider a class that represents a simple two-dimensional vector.
class Vector2 { constructor(public x: number, public y: number) {} } const v = new Vector2(1, 1);
Now say we wanted to give this vector a length property. One option is to add a property that is kept up to date whenever the x or y values change. We can monitor the x and y values using a setter.
class Vector2 { private _x = 0; private _y = 0; length!: number; get x() { return this._x; } get y() { return this._y; } set x(value: number) { this._x = value; this.calculateLength(); } set y(value: number) { this._y = value; this.calculateLength(); } private calculateLength() { this.length = Math.sqrt(this._x ** 2 + this._y ** 2); } constructor(x: number, y: number) { this._x = x; this._y = y; this.calculateLength(); } } const v = new Vector2(1, 1); console.log(v.length);
Now, whenever x or y changes, our
length is recalculated and ready to be used. Although this works, this is not a very practical solution. Recalculating the vector length whenever a property changes could potentially result in a lot of wasted computations. If we aren’t using the
length property in our code, we don’t need to perform this calculation at all!
We can craft a more elegant solution using a getter. Using a getter, we’ll define a new readonly property,
length, that is calculated on the fly, only when requested.
class Vector2 { get length() { return Math.sqrt(this.x ** 2 + this.y ** 2); } constructor(public x: number, public y: number) {} } const v = new Vector2(1, 1); console.log(v.length);
This is much nicer! Not only do we have less overall code here, but our length
computation is only run when we need it.
Abstract Classes
TypeScript supports the
abstract keyword for classes and their methods, properties, and accessors. An abstract class may have methods, properties, and accessors with no implementation, and cannot be constructed. See Abstract classes and methods and Abstract properties and accessors for more information.
Mixins and Compositional Classes
TypeScript 2.2 made some changes to make it easier to implement mixins and/or compositional classes. This was achieved by removing some of the restrictions on classes. For example, it’s possible to extend from a value that constructs an intersection type. They also changed the way that signatures on intersection types get combined.
Symbols, Decorators, and more
Symbols
Symbols are unique, immutable identifiers that can be used as object keys. They offer the benefit of guaranteeing safety from naming conflicts. A symbol is a primitive value with the type of “symbol” (
typeof Symbol() === 'symbol').
// even symbols created from the same key are unique Symbol('foo') !== Symbol('foo');
When used as object keys, you don’t have to worry about name collision:
const ID_KEY = Symbol('id'); let obj = {}; obj[ID_KEY] = 5; obj[Symbol('id')] = 10; obj[ID_KEY] === 5; // true
Strong type information in TS is only available for built-in symbols.
See our ES6 Symbols: Drumroll please! article for more information about Symbols.
Decorators
Please note that decorators were added to TypeScript early and are only available with the
--experimentalDecorators flag because they do not reflect the current state of the TC39 proposal. A decorator is a function that allows shorthand in-line modification of classes, properties, methods, and parameters. A method decorator receives 3 parameters:
target: the object the method is defined on
key: the name of the method
descriptor: the object descriptor for the method
The decorator function can optionally return a property descriptor to install on the target object.
function myDecorator(target, key, descriptor) { } class MyClass { @myDecorator myMethod() {} }
myDecorator would be invoked with the parameter values
MyClass.prototype,
'myMethod', and
Object.getOwnPropertyDescriptor(MyClass.prototype, 'myMethod').
TypeScript also supports computed property names and Unicode escape sequences.
See our TypeScript Decorators article for more information about decorators.
Interface vs. Type
Typescript has both
interface and
type aliases but they can often be used incorrectly. One of the key differences between the two of these is that an Interface is limited to describing
Object structures whereas
type can consist of Objects, primitives, unions types, etc.
Another difference here is their intended use. An interface primarily describes how something should be implemented and should be used. A type on the other hand is a definition of a type of data.
// union type of two species type CatSpecies = 'lion' | 'tabby'; // interface defining cat shape and using the above type interface CatInterface { species: CatSpecies; speak(): string; } class Cat implements CatInterface { constructor(public species: CatSpecies) { } speak() { return this.species === 'lion' ? 'ROAR' : 'meeeooow'; } } const lion = new Cat("lion"); console.log(lion.speak()); // ROAR
One benefit of types is you can use computed properties via the
in keyword. This programmatically generates mapped types. You can take this example further and combine it with the use of a generic to define a type that requires the keys of the generic passed in to be specified.
type FruitColours<T> = { [P in keyof T]: string[] }; const fruitCodes = { apple: 11123, pear: 33343, banana: 33323 }; // This object must include all the keys present in fruitCodes. // If you used this type again and passed a different generic // then different keys would be required. const fruitColours: FruitColours< typeof fruitCodes > = { apple: ['red', 'green'], banana: ['yellow'], pear: ['green'] };
In conclusion
Hopefully, this post has helped to demystify parts of the TypeScript type system and given you some ideas about how you can exploit its advanced features to improve your own TypeScript application structure. If you have any other questions or want some expert assistance writing TypeScript applications, get in touch to chat with us today! | https://www.sitepen.com/blog/advanced-typescript-concepts-classes-and-types | CC-MAIN-2021-10 | refinedweb | 2,346 | 55.13 |
On Mon, Oct 28, 2002 at 11:54:54PM -0500, Branden Robinson wrote: > On Wed, Oct 23, 2002 at 08:10:43PM -0400, Matt Zimmerman wrote: > > I don't think that existing .config handling necessarily needs to change > > at this point, unless we want to provide a standard way to suppress all > > attempts at automatic configuration for a particular config file. > > Well, Joey Hess is of the opinion that "manage this config file with > debconf"-style questions are Evil and Wrong. > > So, assuming I want to get with the Debconf orthodoxy, I would be changing > my config scripts to eliminate this sort of prompting. I can't speak for joeyh, but I think his beef is that maintainers use this kind of question to be lazy about preserving changes. I'm guilty of this myself, because I do not want to parse some arbitrary config format using shell commands to seed debconf with the appropriate responses. We're all about preserving changes here, so the light of the One True Way should guide us to salvation. I was thinking of things more similar to --force-confnew/--force-confold than to "manage this config file with debconf" questions. > > In other maintainer scripts, we need to be able to say "I have generated a > > configuration file /tmp/blah as a possible replacement for /etc/foo/bar." At > > that point, the maintainer script is done with the file, and passes control > > to us. > > Here we run into a problem: > > * For most packages, the "other maintainer script" is going to be the > postinst. In fact it's difficult for me to imagine a scenario when > anything but the postinst would be generating a config file.[1] >From my perspective, the tool doesn't care from which maintainer script it's called. postinst would seem to be the only useful place to call it, I agree. > * The postinst, by definition, runs during the configuration phase. > Your proposal is going to pull us farther away from the utopia of > being able to handle all interactivity before packages are even > unpacked, a la dpkg-preconfigure. While dpkg's conffile > prompts already violate this, they *can* be replaced with pre-unpack > prompting, because dpkg-preconfigure can suck new conffiles out of a > package just as well as it can config and templates files. > > Non-conffile config files cannot enjoy this luxury, because they don't > exist within the package. Right. As we discussed on IRC, there seems to be a fundamental conflict between preconfiguration and generated configuration files because the package is by definition not configured yet (for the new version) at preconfiguration time. Packages with simple requirements could theoretically generate a configuration in .config, but that is not the right place for it, and packages with simple requirements can use conffiles. I had not considered that one day dpkg could prompt about conffiles ahead of time. That weakens my "no worse than conffiles" argument considerably. Preconfiguration should remain unchanged in (IMO) the two most important cases: initial installation, and novice user. In the former case, we will never need to prompt in postinst, and in the latter, we should have a reasonable default for them. > * On the other hand, if we're doing an upgrade instead of an install, > the tool(s) we use to generate the config file may already be on the > system at "pre-configure time". 
However, if those tools change, and > a package's .config script needs to be able to use the > config-generation tool that's in the corresponding version of the > package, you'd need to have a way of declaring this requirement so > that config file generation could be deferred to package > configuration. Not to mention the fact that the runtime dependencies > of the tools used to generate the configuration files would need to be > present at pre-configure time. Oy vey. Yes, this would be a mess. Upgrades from one Debian release to the next would be fragile and difficult to maintain and test effectively. > > We check again whether the file has been modified since last time by > > comparing it to a saved copy or checksum (the copy is optional, but gives > > much more flexibility than storing only a checksum). > > Why not just a checksum? Do you have a specific application in mind, or > do you think the copy is a good idea for the sake of people cleverer > than we? :) The copy makes it possible to calculate the merge. Since I am pretty confident that we will want merges, I think we should go ahead and use the copy for the comparison as well. It might be marginally more efficient to cache checksums for the comparison, but I doubt it will be worth worrying about. > > [prompts for various cases] > > > > In the common cases, this should be possible with a single prompt, though it > > could be split into two phases or selected from either a "simple" or > > "advanced" method, or even suppressed entirely for novice users if a sane > > default action sequence could be decided (always preserve? merge, and if > > that fails, preserve? warn?). > > As you said, this is really two different questions: > > 1) generated file only / existing file only / merge > 2) if merge: > prompt for review on problems / always prompt for review > > I think the latter is a prime candidate for a shared template that is used > system-wide. Agreed. I can also envision global preferences like "always mail me diffs" which could be implemented down the road. > If someone wants to implement object-based debconf templates with > inheritance, then we might want to consider making it package specific. > :) I think that db_register and db_subst give us pretty much everything that we would need to implement package-specific prompting. We would need to reserve our own namespace under the package's template namespace (package/configtool_blahblah), but I don't see many problems beyond that. Now that I think about it, since we're prompting in the postinst anyway, we could probably even share the templates by depending on a common package which provides them. It should be possible to ensure that they are in the debconf database that way. > [generalizing the tools] > > However, for trial-run purposes in XFree86, I'll probably just encapsulate > all of this inside my own package. If the concept flies, it will be > broken out into its own package. This seems reasonable to me. > > - What should happen at preconfiguration time to minimize interaction for > > novice users? > > > > - What sort of preferences would be useful, either at a global scope or a > > per-package scope? "always leave my foobar config alone" "always merge > > my changes if there are no merge conflicts" > > As debconf is itself configured using debconf questions (default > frontend and question priority), so too should this config file handler > be configured when it is installed, giving the user the chance to set a > global policy for configuration file handling. 
Hmm...how will this fit with including the infrastructure in the xfree86 package? Will we have a separate binary package for it, or include it in each binary package? > Since package maintainers will not be expected to have to manage this > sort of thing on their own -- we won't have to trust them to write their > if statements correctly, in other words :) -- I propose we have shared > templates like this. > > * Template: shared/mdz-config/merge-policy > Type: select > Choices: always keep existing config, \ > always use generated config, \ > always merge existing and generate configs, \ > always prompt > > The last choice effectively means "handle on a per-package (actually > per-config-file) basis. Choosing a default here will be difficult, and this strikes me as the kind of question that should not be shown to inexperienced users. Of course, they're not supposed to edit config files, so they shouldn't need a policy like this, but it has been established that they do anyway, for better or for worse. The first two choices are never interactive, the third might be interactive sometimes, and the fourth always interactive. Certainly a non-interactive choice should be the default if it can be made to do the right thing often enough. The user who pastes in a config snippet someone gave them, without knowing what it does, will certainly not be able to handle a merge conflict. If we try to merge silently, we need to handle the failure silently as well. It should work in most cases to keep the existing config and send a note, except where, for example, the configuration has become incompatible, or the maintainer's changes are needed in order for a package to continue to work correctly with other packages. I have not thought about whether it would be feasible to have a mechanism for tracking this kind of incompatibility. It seems excessivevly ambitious at this point. > * Template: shared/mdz-config/diff-viewing-policy > Type: select > Choices: always view diffs between existing and generated config, \ > only view diffs when merging > > I'm not sure we should have an option for "never view diffs even if there > is a problem merging", as that could cause the system to break. > > I'm also not sure it's worthwhile having an "always prompt" choice as we > do with the merge-policy. That just seems excessively tedious. Since we're saving config files anyway, it'd be easy to provide some simple tools for the admin to use to pull up the diffs later anyway, so we only need a couple of convenient options here. I had imagined that the interactive merge process (intended exclusively for people who can understand the 'diff' and 'merge' concepts) would display the diff immediately, and then present the options for how to proceed. 
Ideally, these would be on the same screen, but I don't think that this is possible with current debconf frontends, so we would need something like: Choices: proceed with the proposed merge, \ discard the proposed merge and keep the existing configuration, \ discard the proposed merge and use the newly-generated configuration, \ view the diff again So on a per-config file basis, I see these interfaces: shared/mdz-config/show-diff shared/mdz-config/merge shared/mdz-config/merge-conflict-warning shared/mdz-config/merge-conflict-resolution The "expert" process would go something like this: - if conflicts, do merge-conflict-warning - show-diff (which would implicitly highlight conflicts) - if conflicts, merge-conflict-resolution, else merge "merge" presents options like those above, while merge-conflict-resolution does not even offer the possibility of using the broken configuration, and instead provides options like: Choices: let me resolve the conflicts interactively, discard the proposed merge and use the existing configuration, discard the proposed merge and use the generated configuration The first option would, say, write out the config with conflict markers to a temporary file, display a confirmation question which is displayed while the admin edits in another terminal (throw away the merge attempt, or install it). Alternatively, we could just dump the attempted merge somewhere and let them edit and install it later. Telling them to go edit it somewhere else is awkward. The more I see it, the more I think we need clearer terminology when talking about the various config files involved (the existing, generated, merged, merged-with-conflict-markers, etc.). Also, all of these templates should have soothing words about how nothing bad is going to happen and all of the configuration files involved will be saved for later review. > > Implementation issues: > > > > - How to store the saved config files, so that it is intuitively obvious to > > which package they belong, and their installed location, so that they are > > conveniently available to the admin? This should be a public interface. > > Well, there are two ways you could do it: > > 1) Make it package specific: /var/lib/mdz-config/packagename, I guess. > This gets chancy if there are many config files and they are in a > weird hierarchy in /etc, though, and especially tricky if there are > multiple config files in the same package with the same basename. > > 2) Make it mirror /etc. I.e., /var/lib/mdz-config/X11 would "mirror" > /etc/X11. This solves the problems in 1) but IMO it's ugly. > > There may not be enough packages going hog-wild with generate config > files for 1) to be a problem, though. The requirements for configuration files are at least: - intuitive mapping to installed config files - intuitive versioning: it should be clear where each file came from, and what it is (generated, backup of locally edited version, etc.) I had thought a little about the two options that you propose (and share your concerns in both cases), and also a third option with its own tradeoffs: storing the alternate versions next to the installed config files as is done with conffiles. My biggest quarrel with that is that if we want intuitive versioning, we end up with big ugly filename extensions like foo.conf.new-1.2.4-1 and foo.conf.local-1.2.3-2 that will infuriate people like me who like to have at least three columns of ls(1) output. I am concerned that if we choose 1), there will definitely one day be a naming conflict. 
When that does happen, the convention will not have a logical way to resolve the conflict while preserving the mapping. This would suck. > [adding templates to the package] > Oh, wait, I know what you mean. We can't just blindly append the > mdz-config templates to each package's templates file every time the > package is built. Hmm, yup. Going to need Joey Hess's assistance on > that problem, but it's not a problem for prototyping this in > xserver-xfree86. Right, we'll just be including the templates in the usual way for this experiment. -- - mdz | https://lists.debian.org/debian-x/2002/10/msg00520.html | CC-MAIN-2014-15 | refinedweb | 2,296 | 56.79 |
Using the Object Synchronization Feature
This appendix describes the object synchronization feature, which you can use to synchronize specific tables in databases that are on “occasionally connected” systems.
Introduction to Object Synchronization
Object synchronization is a set of tools available with InterSystems IRIS® objects that allows application developers to set up a mechanism to synchronize databases on “occasionally connected” systems. By this process, each database incorporates the object changes made in the other. Object synchronization offers functionality complementary to the InterSystems IRIS system tools that provide high availability. Object synchronization is not designed to provide support for real-time updates; rather, it is most useful for a system that needs updates at discrete intervals.
For example, a typical object synchronization application would be in an environment where there is a master copy of a database on a central server and secondary copies on client machines. Consider the case of a sales database, where each sales representative has a copy of the database on a laptop computer. When Mary, a sales representative, is off site, she makes updates to her copy of the database. When she connects her machine to the network, the central and remote copies of the database are synchronized. This can occur hourly, daily, or at any interval.
Object synchronization between two databases involves updating each of them with data from the other. However, InterSystems IRIS does not support bidirectional synchronization as such. Rather, updates from one database are posted to the other; then updates are posted in the opposite direction. For a typical application, if there is a main database and one or more local databases (as in the previous sales database example), it is recommended that updates are from the local to the main database first, and then from the main database to the local one.
For object synchronization, the idea of client and server is by convention only. For any two databases, you can perform bidirectional updates; if there are more than two databases, you can choose what scheme you use to update all of them (such as local databases synchronizing with a main database independently).
This section describes GUIDs, how updates work, the SyncSet and SyncTime objects, and how to modify your classes to support synchronization.
The GUID
To ensure that updates work properly, each object in a database should be uniquely distinguishable. To provide this functionality, InterSystems IRIS gives each individual object instance a GUID — a globally unique ID. The GUID makes each object universally unique.
The GUID is optionally created, based on the value of the GUIDENABLED parameter. If GUIDENABLED has a value of 1, then a GUID is assigned to each new object instance.
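For example, a persistent class opts in to GUID assignment by setting this parameter in its class definition. The following is a minimal sketch; the class and property names are illustrative:

Class Sample.Person Extends %Persistent
{

/// Assign a GUID to each new instance of this class.
Parameter GUIDENABLED = 1;

Property Name As %String;

}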
Consider the following example. Two databases are synchronized and each has the same set of objects in it. After synchronization, each database has a new object added to it. If the two objects share a common GUID, object synchronization considers them the same object in two different states; if each has its own GUID, object synchronization considers them to be different objects.
How Updates Work
Each update from one database to another is sent as a set of transactions. This ensures that all interdependent objects are updated together. The content of each transaction depends on the contents of the journal for the “source” database. The update can include one or more transactions, up to all transactions that have occurred since the last synchronization.
Resolution of the following conditions is the responsibility of the application:
If two instances that share a unique key have different GUIDs. This requires determining if the two records describe a single object or two unique objects.
If two changes require reconciliation. This requires determining if the two changes were to a common property or to non-intersecting sets of properties.
The SyncSet and SyncTime Objects
When two databases are to be synchronized, each has transactions in it that the other lacks. This is illustrated in the following diagram:
Here, database A and database B have been synchronized at transaction 536 for database A and transaction 112 for database B. The subsequent transactions for each database need to be updated from each to the other. To do this, InterSystems IRIS uses what is called a SyncSet object. This object contains a list of transactions that are used to update a database. For example, when synchronizing database B with database A, the default contents of the SyncSet object are transactions 547, 555, 562, and 569. Analogously, when synchronizing database A with database B, the default contents of the SyncSet object are transactions 117, 124, 130, and 136. (The transactions do not use a continuous set of numbers, because each transaction encapsulates multiple inserts, updates, and deletes — which themselves use the intermediate numbers.)
Each database holds a record of its synchronization history with the other. This record is called a SyncTime table. For database A, its contents are of the form:
Database   Namespace   Last Transaction Sent   Last Transaction Received
--------   ---------   ---------------------   -------------------------
B          User        536                     112
The numbers associated with each transaction do not provide any form of a time stamp. Rather, they indicate the sequence of filing for transactions within an individual database.
Once database B has been synchronized with database A, the two databases might appear as follows:
Because the transactions are being added to database B, they result in new transaction numbers in that database.
Analogously, the synchronization of database B with database A results in 117, 124, 130, and 136 being added to database A (and receiving new transaction numbers there):
Note that the transactions from database B that have come from database A (140 through 162) are not updated back to database A. This is because the update from B to A uses a special feature that is part of the synchronization functionality. It works as follows:
Each transaction in a database is labeled with what can be called “a database of origin.” In this example, transaction 140 in database B would be marked as originating in database A, while its transaction 136 would be marked as originating in itself (database B).
The SyncSet.AddTransactions() method, which bundles a set of transactions for synchronization, allows you to exclude transactions that originate in a particular database. Hence, when updating from B to A, AddTransactions() excludes all transactions that originate in database A — because those have already been added to the transaction list for database B.
This functionality prevents creating infinite loops in which two databases continually update each other with the same set of transactions.
Modifying the Classes to Support Synchronization
Object synchronization requires that the sites have data with matching sets of GUIDs. If you are starting with an already existing database that does not yet have GUIDs assigned for its records, you need to assign a GUID to each instance (record) in the database, and then make sure there are matching copies of the database on each site. In detail, the process is:
For each class being synchronized, set the value of the OBJJOURNAL parameter to 1.
Parameter OBJJOURNAL = 1;
This activates the logging of filing operations (that is, insert, update, or delete) within each transaction; this information is stored in the ^OBJ.JournalT global. An OBJJOURNAL value of 1 specifies that the property values that are changed in filing operations are stored in the system journal file; during synchronization, data that needs to be synchronized is retrieved from that file.
Note:
OBJJOURNAL can also have a value of 2, though the possible use of this value is restricted to special cases. It is never used for classes using the default storage mechanism (%Storage.Persistent). A value of 2 specifies that property values that are changed in filing operations are stored in the ^OBJ.Journal global; during synchronization, data that needs to be synchronized is retrieved from that global. Also, storing information in the global increases the size of the database very quickly.
Optionally also set the value of the JOURNALSTREAM parameter to 1.
Parameter JOURNALSTREAM = 1;
By default, object synchronization does not support synchronization of file streams. The JOURNALSTREAM parameter controls whether or not streams are journaled when OBJJOURNAL is true:
If JOURNALSTREAM is false and OBJJOURNAL is true, then objects are journaled but the streams are not.
If JOURNALSTREAM is true and OBJJOURNAL is true, then streams are journaled. Object synchronization tools will process journaled streams when the referencing object is processed.
For each class being synchronized, set the value of its GUIDENABLED parameter to 1; this tells InterSystems IRIS to allow the class to be stored with GUIDs.
Parameter GUIDENABLED = 1;
Note that if this value is not set, the synchronization does not work properly. Also, you must set GUIDENABLED for serial classes, but not for embedded objects.
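For example, a persistent class prepared for synchronization might declare the parameters like this (a minimal sketch; the class and property names are illustrative):

Class Sample.Person Extends %Persistent
{

Parameter OBJJOURNAL = 1;

Parameter JOURNALSTREAM = 1;

Parameter GUIDENABLED = 1;

Property Name As %String;

}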
Recompile the class.
For each class being synchronized, give each object instance its own GUID by running the AssignGUID() method:
Set Status = ##class(%Library.GUID).AssignGUID(className,displayOutput)
where:
className is the name of class whose instances are receiving GUIDs, such as "Sample.Person".
displayOutput is an integer where zero specifies that no output is displayed and a nonzero value specifies that output is displayed.
The method returns a %Status value, which you should check.
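For example, assuming the Sample.Person class shown earlier:

Set Status = ##class(%Library.GUID).AssignGUID("Sample.Person", 1)
If $System.Status.IsError(Status) {
    Do $System.Status.DisplayError(Status)
}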
Put a copy of the database on each site.
Performing the Synchronization
This section describes how to perform the synchronization. The database providing the updates is known as the source database; and the database receiving the updates is the target database. To perform the actual synchronization, the process is:
Each time you wish to synchronize the two databases, go to the instance with the source database. On the source database, create a new SyncSet using the %New() method of the %SYNC.SyncSet class:
Set SrcSyncSet = ##class(%SYNC.SyncSet).%New("unique_value")
The argument to %New(), unique_value, should be an easily identified, unique value. This ensures that each addition to the transaction log on each site can be differentiated from the others.
Call the AddTransactions() method of the SyncSet instance:
Do SrcSyncSet.AddTransactions(FirstTransaction,LastTransaction,ExcludedDB)
Where:
FirstTransaction is the first transaction number to synchronize.
LastTransaction is the last transaction number to synchronize.
ExcludedDB specifies a namespace within a database whose transactions are not included in the SyncSet.
This method collects the synchronization data and puts it in a global, ready for export.
Or, to synchronize all transactions since the last synchronization, omit the first and second arguments:
Do SrcSyncSet.AddTransactions(,,ExcludedDB)
This gets all transactions, beginning with the first unsynchronized transaction to the most recent transaction. The method uses information in the SyncTime table to determine the values.
ExcludedDB is a $LIST created as follows:
Set ExcludedDB = $ListBuild(GUID,namespace)
Where:
GUID is the system GUID of the target system. This value is available through the %SYS.System.InstanceGUID class method; to invoke this method, use the ##class(%SYS.System).InstanceGUID() syntax.
namespace is the namespace on the target system.
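For example, if the target's instance GUID has already been retrieved into a variable (by running the method on the target), and the target namespace is USER (illustrative values):

// targetGUID was obtained by running ##class(%SYS.System).InstanceGUID() on the target
Set ExcludedDB = $ListBuild(targetGUID, "USER")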
Call the ErrCount() method to determine how many errors were encountered. If there have been errors, the SyncSet.Errors query provides more detailed information.
Export the data to a local file using the ExportFile() method:
Do SrcSyncSet.ExportFile(file,displaymode,bUpdate)
Where:
file is the file to which the transactions are being exported; it is a name with a relative or absolute path.
displaymode specifies whether or not the method writes output to the current device. Specify “d” for output or “-d” for no output.
bUpdate is a boolean value that specifies whether or not the SyncTime table is updated (where the default is 1, meaning True). It may be helpful to explicitly set this to 0 at this point, and then set it to 1 after the source receives assurance that the target has indeed received the data and performed the synchronization.
Move the exported file from the source machine to the target machine.
Create a SyncSet object on the target machine using the SyncSet.%New() method. Use the same value for the argument of %New() as on the source machine — this is what identifies the source of the synchronized transactions.
Read the SyncSet object into the InterSystems IRIS instance on the target machine using the Import() method:
Set Status = TargetSyncSet.Import(file,lastSync,maxTS,displaymode,errorlog,diag)
Where:
file is the file containing the data for import.
lastSync is the last synchronized transaction number (by default, taken from the SyncTime table).
maxTS is the last transaction number in the SyncSet object.
displaymode specifies whether or not the method writes output to the current device. Specify “d” for output or “-d” for no output.
errorlog provides a repository for any error information (and is called by reference to provide information for the application).
diag provides more detailed diagnostic information about what is happening during the import.
This method puts data into the target database. It behaves as follows:
Important:
If the method detects that the object has been modified on both the source and target databases since the last synchronization, it invokes the %ResolveConcurrencyConflict() callback method; like other callback methods, the content of %ResolveConcurrencyConflict() is user-supplied. (Note that this can occur if either the two changes both modified a common property or the two changes each modified non-intersecting sets of properties.) If the %ResolveConcurrencyConflict() method is not implemented, then the conflict remains unresolved.
If, after the Import() method executes, there are unsuccessfully resolved conflicts, these remain in the SyncSet object as unresolved items. Be sure to take the appropriate action regarding the remaining conflicts; this may involve resolution, leaving the items in an unresolved state, and so on.
The Import() method returns a status value but that status value simply indicates that the method completed without encountering an error that prevented the SyncSet from being processed. It does not indicate that every object in the SyncSet was processed successfully without encountering any errors. For information on synchronization error reporting, see Import() in the class reference for %SYNC.SyncSet.
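A minimal sketch of the target-side sequence (the file name and variable names are illustrative):

Set TargetSyncSet = ##class(%SYNC.SyncSet).%New("unique_value")
Set Status = TargetSyncSet.Import("/sync/updates.sync",,,"-d",.errorlog)
If $System.Status.IsError(Status) {
    Do $System.Status.DisplayError(Status)
}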
Once the first database updates the second database, perform the same process in the other direction so that the second database can update the first one.
Translating Between GUIDs and OIDs
To determine the OID of an object from its GUID or vice versa, there are two methods available:
%GUIDFind() is a class method of the %GUID class that takes a GUID of an object instance and returns the OID associated with that instance.
%GUID() is a class method of the %Persistent class that takes an OID of an object instance and returns the GUID associated with that instance; the method can only be run if the GUIDENABLED parameter is TRUE for the corresponding class. This method dispatches polymorphically and determines the most specific type class if the OID does not contain that information. If the instance has no GUID, the method returns an empty string.
Manually Updating a SyncTime Table
To perform a manual update on the SyncTime table for a database, invoke the SetlTrn() method, which sets the last transaction number:
Set Status=##class(%SYNC.SyncTime).SetlTrn(syncSYSID, syncNSID, ltrn)
Where:
syncSYSID is the system GUID of the target system. This value is available through the %SYS.System.InstanceGUID class method; to invoke this method, use the ##class(%SYS.System).InstanceGUID() syntax.
syncNSID is the namespace on the target system, which is held in the $Namespace variable.
ltrn is the highest transaction number known to have been imported. You can get this value by invoking the GetLastTransaction() method of the SyncSet.
The SetlTrn() method sets the highest transaction number synchronized on the target system instead of the default behavior (which is to set the highest transaction number exported from the source system). Either approach is fine and is a choice available during application development. | https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=GOBJ_objsync | CC-MAIN-2021-49 | refinedweb | 2,573 | 53 |
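For example (the namespace and variable names are illustrative):

Set ltrn = TargetSyncSet.GetLastTransaction()
Set Status = ##class(%SYNC.SyncTime).SetlTrn(targetGUID, "USER", ltrn)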
Hi,
when I compile the attached program with openCC, the resulting program reports the string to be empty, although it certainly is not, as compiling the same program with g++ and running it proves.
g++ creates the following code:
if (a.empty())
1e3: 48 8d 7d e0 lea -0x20(%rbp),%rdi
1e7: e8 00 00 00 00 callq 1ec <main+0x64>
1ec: 84 c0 test %al,%al
1ee: 74 2a je 21a <main+0x92>
openCC does:
if (a.empty())
d6: 48 8d bd 60 fe ff ff lea -0x1a0(%rbp),%rdi
dd: e8 00 00 00 00 callq e2 <main+0x80>
e2: 85 c0 test %eax,%eax
e4: 74 33 je 119 <main+0xb7>
i.e. openCC expects string.empty() to set the full 32 bits of %eax to zero, while g++ only requires the lower 8 bits (%al) to be set to zero.
Since the return type of empty() is bool, and sizeof(bool) is 1 for openCC, the code should most certainly do the same as g++ and only test %al.
Best,
Gert
#include <iostream>
#include <string>
#include <sstream>

using namespace std;

int main(int argc, char *args[])
{
    const string a("1.0*ssd");
    if (a.empty())
        cout << "boned: the string '" << a << "' is considered to be empty\n";
    else
    {
        cout << "Live is good\n";
        return 0;
    }
    bool second_try = a.empty();
    if (second_try)
        cout << "Well, at least the compile sticks to his opinion.\n";
    else
        cout << "Arrg, now its correct\n";
    return 0;
}
Thanks for reporting this issue, I see the same problem with SLES11, which comes with gcc 4.3.2.
Note that our next release will support SLES11, but not by using the default g++/gcc library includes, since our front end is only 4.2 based. We will be supplying our own include files and C++ standard libraries that are 4.2 based.
Note that we reported a similar ABI issue in the release notes:
ReleaseNotes.txt:
8. Known Issues and Limitations
===============================
...
o The x86 Open64 compiler ABI is designed to be compatible with
the GNU ABI. However, recently an ABI incompatibility
has been uncovered.
The 64-bit ABI used by gcc assumes that the high order 32
bits in %rax are undefined for a function that returns a
32-bit quantity; thus the caller is responsible for
converting %eax into a 64-bit value when the result is
used as an operand to a 64-bit operation.
The ABI used by the Open64 compiler assumes that a
function that returns a 32-bit value will produce a
64-bit extended result in %rax; thus no conversion is
needed when the result is used in a 64-bit operation.
In practice this incompatibility will produce bad results
in rare situations. This issue will be addressed in a
future release of the x86 Open64 compiler.
--------------------------------------
As your example shows, it is not just 32-bit values that are not being properly handled when later versions of g++ are being used.
Note that we don't see this error on SLES11 SP2 or RHEL 5.3, since the library code for empty() zero-extends the result:
(gdb) x/5i _ZNKSs5emptyEv
0x316089b5e0 <_ZNKSs5emptyEv>: mov (%rdi),%rax
0x316089b5e3 <_ZNKSs5emptyEv+3>: cmpq $0x0,-0x18(%rax)
0x316089b5e8 <_ZNKSs5emptyEv+8>: sete %al
0x316089b5eb <_ZNKSs5emptyEv+11>: movzbl %al,%eax
0x316089b5ee <_ZNKSs5emptyEv+14>: retq
You're running on Ubuntu 9, right? What version of g++ are you using?
Doug | https://community.amd.com/thread/120382 | CC-MAIN-2019-13 | refinedweb | 562 | 68.4 |
Introduction
The objective of this post is to explain how to create a file in MicroPython. The code was tested on both the ESP32 and the ESP8266. The prints shown here were from the tests on the ESP32.
The file will be created on MicroPython’s internal file system, which uses a FAT format and is stored in the FLASH memory [1].
The code
First of all, we will start by opening the file for writing. Note that the file doesn't need to be previously created. To do so, we call the open function, passing as input the name of the file to create and the mode. We will create a file named "myTestFile.txt" and, as the second argument, we pass the string "w", indicating that we want to write content.
This call will return an object of class TextIOWrapper, which we will store in a variable called file. We will need this object to write the actual content to the file. Just to confirm, we will also print the type of the object returned.
file = open("myTestFile.txt", "w")
print(type(file))
Check the expected output at figure 1. As can be seen, the object returned by the call is of class TextIOWrapper.
Figure 1 – Opening a file for writing with MicroPython.
Now, to write the actual content, we will call the write method on the file object, passing as input the content we want to write. This call will return the number of bytes written [1]. So, let’s write some content with the command shown bellow.
file.write("Writing content from MicroPython")
You should get an output similar to figure 2, which indicates 32 bytes were written to the file.
Figure 2 – Writing content to a file.
In the end, we need to close the file, by calling the close method on our file object.
file.close()
We can confirm the creation of the file by listing the existing files in the current directory. To do so, we just need to import the os module and call the listdir function.
import os
os.listdir()
Check the expected result in figure 3, which shows our created “myTestFile.txt“. Note that there is a pre-existing file called boot.py, which is a special file from MicroPython that runs when the board boots.
Figure 3 – Listing the files in the current directory.
Just to finish our tutorial, we will rename the newly created file from "myTestFile.txt" to just "testFile.txt". To do so, we will use the rename function of the os module. This function receives the current file name as its first argument and the new file name as its second. After this method call, we will print the contents of the directory again.
os.rename('myTestFile.txt', 'testFile.txt')
os.listdir()
You can check bellow at figure 4 the expected result.
Figure 4 – Renaming the created file with the rename function.
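Putting the individual steps together, the whole flow can be run as a single script (a minimal sketch using the same file names as above):

import os

# create the file and write some content
file = open("myTestFile.txt", "w")
print(file.write("Writing content from MicroPython"))  # prints 32
file.close()

# confirm creation, then rename it
print(os.listdir())
os.rename("myTestFile.txt", "testFile.txt")
print(os.listdir())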
References
[1]
NAME
Net::TiVo - Perl interface to TiVo.
SYNOPSIS
use Net::TiVo;

my $tivo = Net::TiVo->new(
    host => '192.168.1.25',
    mac  => 'MEDIA_ACCESS_KEY',
);

for ($tivo->folders()) {
    print $_->as_string(), "\n";
}
ABSTRACT
Net::TiVo provides an object-oriented interface to TiVo's REST interface. This makes it possible to enumerate the folders and shows, and dump their meta-data.
DESCRIPTION
Net::TiVo has a very simple interface, and currently only supports the enumeration of folder and shows using the REST interface. The main purpose of this module was to provide access to the TiVo programmatically to automate the process of downloading shows from a TiVo.
Net::TiVo does not provide support for downloading from TiVo. There are several options available, including LWP, wget, and curl. Note: I have used wget version >= 1.10 with success. wget version 1.09 appeared to have an issue with TiVo's cookie.
BUGS
One user has reported 500 errors when using the library. He was able to track the bug down to LWP and Net::SSLeay. Once he switched from using Net::SSLeay to Crypt::SSLeay the 500 errors went away.
CACHING
Net::TiVo is slow due to the amount of time it takes to fetch data from TiVo. This is greatly sped up by using a cache.
Net::TiVo's new method accepts a reference to a Cache object. Any type of caching object may be supported as long as it meets the requirements below. There are several cache implementations available on CPAN, such as Cache::Cache.
The following example creates a cache that lasts for 600 seconds.
use Cache::FileCache;

my $cache = Cache::FileCache->new(
    namespace          => 'TiVo',
    default_expires_in => 600,
);

my $tivo = Net::TiVo->new(
    host  => '192.168.1.25',
    mac   => 'MEDIA_ACCESS_KEY',
    cache => $cache,
);
Net::TiVo uses positive caching, errors are not stored in the cache.
Any Cache class may be used as long as it supports the following method signatures.

# Set a cache value
$cache->set($key, $value);

# Get a cache value
$cache->get($key);
METHODS
- folders()
Returns an array in list context or array reference in scalar context containing a list of Net::TiVo::Folder objects.
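For example, to dump every folder and the shows inside it (a sketch; the shows() accessor is assumed from Net::TiVo::Folder, see its documentation):

# $tivo created as in the SYNOPSIS above
for my $folder ($tivo->folders()) {
    print $folder->as_string(), "\n";
    for my $show ($folder->shows()) {   # shows() assumed from Net::TiVo::Folder
        print "  ", $show->as_string(), "\n";
    }
}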
SEE ALSO
Net::TiVo::Folder, Net::TiVo::Show
AUTHOR
Christopher Boumenot, <boumenot@gmail.com> | https://metacpan.org/pod/Net::TiVo | CC-MAIN-2019-13 | refinedweb | 368 | 58.89 |
I have to create an array from an entered value. I also need to display array contents, which I have done and that performs to my liking. I am just having some issues trying to have the program count the amount of digits and display them "digits detected" I also need to multiply the original entered value (number) by 11 and display it at the end of my program.
Problems I need help debugging:
1.) Counting and displaying how many digits was entered
2.) Multiplying first entered number by 11.
#include <stdio.h>

int main()
{
    int number, count, mult = 11;
    int numbers[9];

    printf("Please enter a number:\n>");
    scanf("%d", &number);

    for(count = 4; count >= 0; count--)
    {
        if(number <= 0)
            numbers[count] = 0;
        numbers[count] = number % 10;
        number /= 10;
    }

    for(count = 0; count < 5; count++)
        printf("Digit Value: %d\t\n", numbers[count]);

    while(number != 0)
    {
        number /= 10;
        ++count;
    }

    printf("Digits Detected: %d\n", number);

    return 0;
}
Your code is a mess.
1. You only take 1 number, what's the array for?
2. Why negative numbers set to 0?
3. Your array is size of 9 but your loops only loop through 5 elements.
4. You overwrite your numbers array with digits and zeroes because there is no stop at last digit.
5. While loop does not make sense.
6. At the end of all this you lost user input long ago and have nothing to multiply.
Should go something like this (pseudocode): | http://programmersheaven.com/discussion/437401/count-number-of-digits-in-array-and-display-them-plus-multiply-it-by-given-number | CC-MAIN-2017-34 | refinedweb | 263 | 73.98 |
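A sketch of what the points above describe (simplified; assumes non-negative input):

#include <stdio.h>

int main(void)
{
    int digits[10];          /* enough for any 32-bit int */
    int number, count = 0;

    printf("Please enter a number:\n> ");
    scanf("%d", &number);

    int original = number;   /* keep the input so we can multiply it later */

    /* peel off digits until none are left - this also counts them */
    do
    {
        digits[count++] = number % 10;
        number /= 10;
    } while (number != 0);

    /* digits were stored last-first, so print them in reverse */
    for (int i = count - 1; i >= 0; i--)
        printf("Digit Value: %d\n", digits[i]);

    printf("Digits Detected: %d\n", count);
    printf("%d * 11 = %d\n", original, original * 11);

    return 0;
}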
NAME
iconv - perform character set conversion
SYNOPSIS
#include <iconv.h>
size_t iconv(iconv_t cd, char **inbuf, size_t *inbytesleft, char **outbuf, size_t *outbytesleft);
DESCRIPTION
The iconv() function converts a sequence of characters in one character encoding to a sequence of characters in another character encoding. The cd argument is a conversion descriptor, previously created by a call to iconv_open(3); the conversion descriptor defines the character encodings that iconv() uses for the conversion. The inbuf argument is the address of a variable that points to the first character of the input sequence; inbytesleft indicates the number of bytes in that buffer. The outbuf argument is the address of a variable that points to the first byte available in the output buffer; outbytesleft indicates the number of bytes available in the output buffer.
RETURN VALUE
The iconv() function returns the number of characters converted in a nonreversible way during this call; reversible conversions are not counted. In case of error, it sets errno and returns (size_t) -1.
ERRORS
The following errors can occur, among others:

E2BIG   There is not sufficient room at *outbuf.

EILSEQ  An invalid multibyte sequence has been encountered in the input.

EINVAL  An incomplete multibyte sequence has been encountered in the input.
VERSIONS
This function is available in glibc since version 2.1.
ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).
The iconv() function is MT-Safe, as long as callers arrange for mutual exclusion on the cd argument.
CONFORMING TO
POSIX.1-2001, POSIX.1-2008.
NOTES
In each series of calls to iconv(), the last should be one with inbuf or *inbuf equal to NULL, in order to flush out any partially converted input.
Although inbuf and outbuf are typed as char **, this does not mean that the objects they point to can be interpreted as C strings or as arrays of characters: the interpretation of character byte sequences is handled internally by the conversion functions. In some encodings, a zero byte may be a valid part of a multibyte character.
The caller of iconv() must ensure that the pointers passed to the function are suitable for accessing characters in the appropriate character set. This includes ensuring correct alignment on platforms that have tight restrictions on alignment.
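EXAMPLES
The following minimal illustration (not part of the original page) converts a short ISO-8859-1 string to UTF-8, flushing with a NULL inbuf as described above; error handling is abbreviated.

#include <iconv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    if (cd == (iconv_t) -1) {
        perror("iconv_open");
        return 1;
    }

    char in[] = "caf\xe9";            /* "café" in ISO-8859-1 */
    char out[64];
    char *inp = in, *outp = out;
    size_t inleft = strlen(in), outleft = sizeof(out);

    if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t) -1)
        perror("iconv");

    /* flush any partially converted input */
    iconv(cd, NULL, NULL, &outp, &outleft);

    fwrite(out, 1, sizeof(out) - outleft, stdout);
    putchar('\n');

    iconv_close(cd);
    return 0;
}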
SEE ALSO
iconv_close(3), iconv_open(3), iconvconfig(8)
COLOPHON
This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at. | https://manpages.debian.org/bullseye/manpages-dev/iconv.3.en.html | CC-MAIN-2021-49 | refinedweb | 391 | 51.38 |
PBS 27 of x – JS Prototype Revision | HTML Forms
Filed Under Computers & Tech, Software Development on December 31, 2016 at 3:28.
Solution to PBS 26 Challenge
At the end of the previous instalment, we had created a third iteration of our world clock API. This iteration was object oriented, allowed for clocks to be automatically created when the page loads without the need to write any JavaScript, and it allowed the timezone to be altered at any stage, even after the page was loaded.
The challenge was to use that as a starting point, and add the ability to customise each clock by specifying whether it should show 12 or 24 hour time, whether the separators should blink, and whether or not to show seconds. Each of these options, like the timezone in our starting code, should be configurable in three ways – via the constructor, via an accessor method, and via a data attribute.
Finally, the option was given to use a namespace of your own choosing instead of pbs.
I wrote my solution as a finished API that I’ve released as open source on GitHub – bartificer.worldclock.js.
I decided to take things a little further, and make even more things about the clocks configurable – the length of time any animations should happen over, the opacity the separators should be shown when they are ‘on’ and ‘off’, and whether or not to show AM or PM when showing the time in 12 hour format.
In the example API, only span elements could be transformed into clocks. My API allows spans, divs, paragraphs (p), and headings (h1 … h6) to be transformed into clocks.
I also added support for showing a clock in the timezone of the visitor's browser by setting the timezone to the special string LOCAL.
Finally, I added some additional functions to allow clock objects to be queried for the current time in their configured timezone in various formats, including ISO 8601.
The entire API is available on GitHub, along with detailed documentation. The code is also heavily commented. Rather than work through it all, I just want to draw your attention to three aspects of my solution.
Firstly, I’d like to highlight the approach I took to items that may or may not be visible like the seconds, their separator, and the AM/PM region. I decided to create a span for every element that might possibly be needed, and to hide any spans that were not needed. That way, when the user changes their mind, and decides they do want seconds after all, it’s merely a matter of showing the hidden spans. The render function also always writes the seconds to the seconds span, even when it’s hidden. The alternative was to use jQuery’s
.after() and
.remove() functions to add and remove the seconds region as needed, but that seemed a lot more work than simply hiding and showing pre-existing elements as needed.
Secondly, rather than dealing with each possible option individually by adding an argument to the constructor for each, and adding a separate accessor method for each, I chose to collapse all the options into a single argument and a single accessor method (.option()). I did this by altering the constructor so it accepts a single plain object as the second argument (the first argument remains a reference to a jQuery object representing the element to be transformed into a clock). Users can then pass as many or as few options as they want in this single optional second argument.
All the options can be accessed via a single .option() accessor function that behaves a lot like jQuery's .css() function. The first argument it expects is a string with the name of the option to be accessed. If there is no second argument, the function returns the current value of the option; if there is a second argument, the value of the option is updated to the value of that second argument.
Rather than using a long cascade of if statements (or a long complex switch statement) within the .option() function to deal with every possible option, I defined a private object named optionDetails to store the information about all the supported options.
For each possible option three key-value pairs are always defined – description (an English-language description of what makes a value valid), default (the value to use for the option when none is provided by the user), and validator (a reference to a function to validate values for the option). The definition for the timezone option is a nice example of this:
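The listing itself is in the project on GitHub; in outline, the entry looks something like this (the validation details here are simplified):

var optionDetails = {
    timezone: {
        description: 'a non-empty string containing a valid timezone name',
        default: 'LOCAL',
        validator: function(tz){
            // simplified - the real code checks against the supported timezones
            return typeof tz === 'string' && tz.length > 0;
        }
    }
};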
Two optional additional key-value pairs are also supported – coerce (a reference to a function to transform invalid values for the option into valid ones), and onChange (a reference to a function which should be executed each time the value of the option changes).
The constructor makes use of the defaults specified in the optionDetails data structure, and both the constructor and the accessor method make use of the rest of the information. Both the constructor and the accessor method use the validator and coercion function references (or callbacks if you prefer) to validate all values that come from the user. There is only one coercion used – a function that turns any value into true or false based on its truthiness. This allows the boolean options to be more forgiving to users, while still ensuring the values stored inside the objects are always actual booleans.
The onChange callbacks are called once at the end of the constructor to make sure everything is correct when a new clock is created, and each time the value of an option is altered using the .option() accessor method. It's these callbacks that do things like hide and show the seconds or the AM/PM region as needed.
The big advantage to this approach is that the logic related to each option is collected together into just two regions within the code – the optionDetails data structure, and the renderClock() function. Adding a new option does not require either the constructor or the accessor method to be altered in any way. This makes the code much easier to maintain, and, to expand and enhance with additional options in the future.
The final thing I want to draw your attention to is the way in which both the .option() accessor function and the constructor invoke the onChange callbacks defined in the optionDetails data structure. The callbacks are invoked in such a way that within them, the special this variable will be a reference to the clock object on which the value of the option is being altered. This allows the code within the callbacks to access all the data within the object being updated, including the value of all other options, and references to all the spans that make up the clock. For example, the onChange callback for the use12HourFormat option accesses the value of the showAmpm option, and sets the visibility of the span containing the AM/PM part of the time:
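Again in outline (the span and option names here are reconstructions, not the exact listing):

var optionDetails = {
    use12HourFormat: {
        // ...description, default & validator omitted...
        onChange: function(newVal){
            // 'this' is the clock object whose option just changed
            if(newVal && this.option('showAmpm')){
                this._$ampm.show(); // _$ampm: assumed name for the AM/PM span
            }else{
                this._$ampm.hide();
            }
        }
    }
};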
We have come across this kind of behaviour before – within callbacks passed to jQuery's .each() function, the special this variable is always a reference to the HTML element currently being iterated over. This very useful behaviour is achieved using the .call() function, which is part of JavaScript's Function prototype.
When we first learned about functions we learned that in JavaScript, functions are objects. We didn't dwell on that fact, but it bears a closer look now. JavaScript functions are not just objects, they are objects with the prototype Function. That prototype brings with it a number of functions that can be applied to any/every function. One of those functions is .call(). As its name suggests, .call() calls (or invokes) the function, but it does so in a very useful way – the first argument you pass to .call() will be used as the special this variable within the function as it executes. The second argument to .call() will be passed as the first argument to the function, the third argument to .call() as the second argument to the function, and so on.
So – given all that, this is how the .option() accessor method actually sets the new value of an option:
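The key lines, reconstructed in outline:

// inside .option(), once the new value has passed the validator
// (and the coerce function, if one is defined)
this._options[optName] = newVal; // _options & optName: illustrative names
if(optionDetails[optName].onChange){
    optionDetails[optName].onChange.call(this, newVal); // 'this' is the clock
}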
Within the .option() function, where the above code snippet exists, this is a reference to the clock object whose option is being altered. By passing this as the first argument to .call(), the this variable within the callback also becomes a reference to the clock whose option is being updated.
Introducing Web Forms
User input within web pages is collected into groups known as forms. A single form contains one or more inputs of one kind or another, and a single page can contain arbitrarily many forms. Forms can’t be nested within each other, so each input belongs to exactly one form.
The HTML tag to represent a form is simply <form></form>.
Our First Form (Doing it Wrong)
To get the ball rolling on user input, let’s create a simple web page containing a very naive and simple form. It will contain just one text box, and one button. The text box will allow you to enter your name, and when you click the button, a paragraph will be appended to the end of the page saying hello to you.
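Reconstructed in outline (the IDs and handler are illustrative, and the page boilerplate and jQuery include are omitted; the original listing is in the ZIP file):

<form>
  Name: <input type="text" id="name_tb" />
  <button type="button" id="greet_btn">Say Hello</button>
</form>
<script>
  $(function(){
    $('#greet_btn').click(function(){
      $('body').append('<p>Hello ' + $('#name_tb').val() + '!</p>');
    });
  });
</script>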
You’ll find the above code in the instalment’s ZIP file as
pbs27-1a.html. Try loading this page in your browser, typing your name, and then clicking on the button. At first glance this page appears to work as expected.
We can see that the HTML tag for representing a text box is <input type="text" />. Note that input is a void tag, so there is no closing input tag. We can also see that from a jQuery perspective we can access the value of the text box with the .val() function. In typical jQuery fashion, .val() is both a getter and a setter. If you call .val() on a jQuery object representing a text box without any arguments it will return the contents of the text box as a string. If you call .val() on a jQuery object representing a text box with a string as the first argument, it will put the given string into the text box. You can see this in action by opening a web console on the example page above, and executing the following JavaScript:
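For example (using the ID from the listing above):

// read the current contents of the text box
$('#name_tb').val();

// replace the contents of the text box
$('#name_tb').val('Alice');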
We can also see that the HTML tag for a button is <button type="button">Some Text</button>. Unfortunately, the default value for the button tag's type attribute is submit. We'll see why we want to override that default with the rather dumb looking type="button" in a moment. We can also see that you can add click handlers to buttons in exactly the same way we added them to things like paragraphs many instalments ago – by using jQuery's .click() function.
So, what’s wrong with this naive first form? Well – let’s break it by doing something very simple and natural – type some text into the text box, and, while your cursor is still in the text box, hit
enter.
Huh? What just happened there?
It may not be immediately obvious, but the page reloaded when you hit enter. Why? That would be because the form tag comes with some serious historical baggage.
The form Tag's action Attribute
When the Word Wide Web was born, there was no such thing as client-side scripting, hence, no JavaScript. However, even in these early days, there were web forms. The way all forms worked originally is that one URL contained the
form tag, all its inputs, and a submit button. The user would fill in the form, and when they were done, they would press the submit button. This would cause the browser to submit the form data to a URL. The web server would receive that data, and respond with a whole new web page. So, clicking submit would always result in a page change or reload.
On the modern web, few forms still behave like this. Instead, we click on buttons, and without the browser browsing to another URL or reloading the current URL, something happens. This is because most modern web pages use JavaScript to process the forms within our browsers (i.e. on the client side) rather than submitting them to a server to be dealt with by some kind of server-side scripting.
BTW – we will be covering server-side scripting much later in this series, but for now, we will be exclusively using JavaScript to process our forms on the client side.
While most forms are now processed by JavaScript, the form tag's default behaviour has not changed – if you don't explicitly specify that a form should not submit to a URL, it will. In addition to the form tag's out-dated default, many browsers also implement keyboard shortcuts that submit forms. This is what caused the strange behaviour we saw with our naive first form. When your cursor is active within a text box, hitting enter will trigger the browser to submit the form the text box belongs to.
So, if forms were initially designed to submit their data to a URL – how do we specify this URL, and what's the default? Where a form submits is controlled by its action attribute. Whatever value you place in a form tag's action attribute will be interpreted as a URL by the browser. The default value of this attribute is an empty string. When you interpret an empty string as a URL, what you get is the relative URL to the current page. So, by default, submitting a form will refresh the current URL.
Now that we know why hitting enter caused the page to refresh, how do we stop this unwanted behaviour? We simply give the action attribute the special value javascript:void(0);. For now, always write your form tags like so:
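<form action="javascript:void(0);">
  <!-- inputs go here -->
</form>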
A Better First Form
Given what we’ve just learned, let’s update our first form so its
form tag has an
action of
javascript:void(0);:
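Reconstructed in outline again, the only change is the action attribute:

<form action="javascript:void(0);">
  Name: <input type="text" id="name_tb" />
  <button type="button" id="greet_btn">Say Hello</button>
</form>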
You’ll find this code in the instalment’s ZIP file as
pbs27-1b.html.
If you load this page in your browser you’ll find you can no longer make the form act strangely by hitting
enter while the text box is focused.
At this stage you might think we’re done, but there’s still something wrong with this form. On well written web forms, you can focus an input of any kind by clicking on its name. This is especially useful for checkboxes and radio buttons, but it’s always helpful. More importantly, a form that does not explicitly map textual labels to the inputs they describe is simply not accessible. Screen readers depend on developers to properly label form inputs.
Labeling Form Inputs
We can explicitly label a form input using the <label> tag. This tag can be used in one of two ways.
The simplest usage is to wrap both the text describing the input, and the input itself in a single <label> tag like so:
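<label>Name: <input type="text" /></label>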
There are often reasons why you may want to separate out the input and the matching label within your HTML. In these situations, the solution is to give the input an ID, and then use the <label> tag's for attribute to map the label to the input by its ID:
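<label for="name_tb">Name:</label>
<input type="text" id="name_tb" />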
Let’s put it all together and create a final, good, version of our first form:
If you load this page in your browser you’ll find that you can now focus the text box by clicking on its label (the word Name).
A Challenge
Because we have not yet learned enough about web forms to set a meaningful challenge, I’m going to use this opportunity for some revision.
Feedback from readers/listeners suggests that many are still struggling a little with JavaScript prototypes, which we learned about back in instalment 17, and re-visited in the Complex Number challenge in instalment 19.
I believe the only way to get comfortable creating your own prototypes is to create your own prototypes – so let’s get some practice in!
I’ve chosen dates and times as the basis for these challenges because everyone understands what they are without my needing to explain them. JavaScript has its own prototypes for dealing with these things, and those prototypes are infinitely superior to those we will be creating here. The point here is not to create better date or time prototypes than JavaScript provides, but to practice creating prototypes.
To avoid clashing with the names of JavaScript's built in prototypes, we'll be working inside the pbs namespace, and we'll go back to using the PBS JavaScript playground.
Before we get stuck in, let’s remind ourselves what prototypes are for, and how we create our own.
A prototype defines a kind of object. All objects with a given prototype will contain certain named pieces of data, and provide certain functions. All prototypes have a constructor, which is used to initialise the data within objects of the prototype, and it's good practice to implement an accessor method for each piece of data, and to implement a .toString() function for generating a string representation of objects with your prototype. So – whenever you want to build a prototype, you need to work through the following steps:
- Gather your requirements, specifically, what data do your objects need to store, and what functions do they need to provide.
- Set up your namespace.
- Create the constructor.
- Create the accessor methods.
- Write the functions you need to provide.
- Provide a .toString() function.
Let’s put this algorithm into use with a simple example – we’ll write a prototype named
pbs.Name to represent a person’s name.
Step 1 (gather requirements) – for our purposes, name objects will contain two pieces of data – a first name and last name, and provide just two functions (on top of the accessors and
.toString());
.fullName() and
.initials(), which will both return strings.
Step 2 (set up your namespace etc.): Normally, when not in the playground, the first step would look like this:
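// init the namespace if it doesn't exist yet
var pbs = pbs ? pbs : {};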
However, because of how the PBS playground works, your code will not work if you include the first line in the above snippet. So, when working in the playground, and ONLY when working in the playground, comment it out:
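// var pbs = pbs ? pbs : {}; // commented out - the playground provides pbs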
Step 3 (Create the Constructor): Remember that your constructor is a function with the same name as your prototype – in our case, pbs.Name. Also remember that you can avoid sloppy code duplication by making use of the accessor functions you know you will be writing later from within your constructor:
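// the constructor (a reconstruction of the step's listing - names are illustrative)
pbs.Name = function(fName, lName){
    this.firstName(fName);
    this.lastName(lName);
};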
Step 4 (Create the Accessor Methods):
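// get or set the first name
pbs.Name.prototype.firstName = function(fn){
    if(arguments.length){
        this._firstName = String(fn);
    }
    return this._firstName;
};

// get or set the last name
pbs.Name.prototype.lastName = function(ln){
    if(arguments.length){
        this._lastName = String(ln);
    }
    return this._lastName;
};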
Step 5 (Create the Needed Functions):
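// the full name as a string
pbs.Name.prototype.fullName = function(){
    return this.firstName() + ' ' + this.lastName();
};

// the initials as a string
pbs.Name.prototype.initials = function(){
    return this.firstName().charAt(0) + '.' + this.lastName().charAt(0) + '.';
};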
Step 6 (Provide a .toString() Function): In this case, we don't need to do much work here. A sane way to convert a name to a string would be to return the full name as a string. We already have a function that does that (.fullName()) – so why not just re-use it?
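// re-use .fullName() for the string representation
pbs.Name.prototype.toString = function(){
    return this.fullName();
};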
We have now created a prototype to represent a name. Below is some sample code that makes use of our prototype to create some actual objects with it:
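// (illustrative sample - console.log used for output here)
var me = new pbs.Name('Bart', 'Busschots');
console.log(me.fullName());    // Bart Busschots
console.log(me.initials());    // B.B.
console.log('Hello ' + me + '!'); // .toString() is called automatically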
Putting it all together – the namespace setup, the constructor, the accessors, the two functions, and .toString() – gives the complete prototype, ready for running within the playground.
Challenge 1 – A Simple Time prototype
Create a prototype named pbs.Time to represent arbitrary times. Each time object will contain three pieces of data – the number of hours (in 24 hour format), the number of minutes, and the number of seconds. The prototype should implement the following functions: .time12(), which will return the time as a string in 12 hour format, and .time24(), which will return it in 24 hour format.
You can test your prototype with the following code:
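// (illustrative test code - expected output shown in comments)
var lunchTime = new pbs.Time(13, 0, 0);
console.log(lunchTime.time24()); // 13:00:00
console.log(lunchTime.time12()); // 1:00:00pm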
Challenge 2 – A Simple Date prototype
Create a prototype named pbs.Date to represent arbitrary dates. Each date object will contain a day, month, and year. The prototype should implement the functions .american() and .european() to return the date in MM/DD/YYYY format and DD/MM/YYYY format respectively.
You can test your prototype with the following code:
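// (illustrative test code - expected output shown in comments)
var xmas = new pbs.Date(25, 12, 2016);
console.log(xmas.american()); // 12/25/2016
console.log(xmas.european()); // 25/12/2016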
Challenge 3 – A Simple Date/Time prototype
Make use of your previous two prototypes to create a third prototype named pbs.DateTime to represent arbitrary times on arbitrary dates.
Each Date/Time will have a date, and a time. The prototype should provide the following functions: .american12Hour(), .american24Hour(), .european12Hour(), and .european24Hour(), which will return the date/time as an appropriately formatted string.
You can test your prototype with the following code:
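// (illustrative test code - exact formatting is up to you)
var newYearsSec = new pbs.DateTime(new pbs.Date(1, 1, 2017), new pbs.Time(0, 0, 1));
console.log(newYearsSec.american24Hour()); // e.g. 01/01/2017 00:00:01
console.log(newYearsSec.european12Hour()); // e.g. 01/01/2017 12:00:01am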
Conclusions
In this instalment we made a start on the basics of using web forms with JavaScript. We learned that inputs go inside form tags, and that inputs should be labeled with label tags. We've also seen how to create basic text boxes and buttons, and how to interact with them in basic ways using jQuery. This is just the beginning – we need to learn a lot more about buttons and text boxes, then we need to learn about other inputs like checkboxes, radio buttons, dropdowns, date pickers etc., and we need to learn a lot more about how jQuery can interact with forms. That's what we'll be spending the next few instalments on.
[…] On this week’s continuing series Programming By Stealth, Bart introduces us to HTML forms in order to take user input. It’s a pretty basic installment so not as head bendy as they have been lately. He also gives us some more repetitive homework to get more practice creating and using prototypes and accessor methods. The full written tutorial can be found at bartbusschots.ie/…. […]
[…] PBS 27 of x – Introducing HTML Forms […] | https://www.bartbusschots.ie/s/2016/12/31/pbs-27-of-x-introducing-html-forms/ | CC-MAIN-2017-17 | refinedweb | 3,642 | 68.91 |