I installed Windows Management Framework 3.0 on a Windows 7 machine and tried to use the New-SMBShare cmdlet but when I ran get-command new-smb*, no cmdlets were found. How can I use this new cmdlet in Windows 7 along with any other new cmdlets?
The SMB commands (and several other modules) are specific to Windows 8 and Server 2012. You can use them from Windows 7 via remoting, but they do not target Windows 7 or Server 2008 R2.
The SMB commands target a new WMI namespace (ROOT/Microsoft/Windows/SMB) that does not exist on earlier versions of the OS.
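The remoting workaround mentioned above can be sketched concretely. This is a hedged example of invoking the Windows 8 / Server 2012-only cmdlet from a Windows 7 machine via PowerShell remoting; the computer name, share name, path, and group below are placeholder values:

```powershell
# Run New-SmbShare on a remote Windows 8 / Server 2012 machine,
# since the cmdlet cannot target Windows 7 itself.
# "server2012" and the share details are placeholders.
Invoke-Command -ComputerName server2012 -ScriptBlock {
    New-SmbShare -Name "Data" -Path "C:\Data" -FullAccess "DOMAIN\Admins"
}
```

This requires PowerShell remoting to be enabled on the remote machine (Enable-PSRemoting).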
-------------------------------------------------------------------------------
Dojo Demos - used on demos.dojotoolkit.org
-------------------------------------------------------------------------------
Project state: varies.
-------------------------------------------------------------------------------
Project description

Demos which show off all of Dojo, potentially including a server-side
component. Each demo resides in a sub-directory. Demos should not depend on
other demos for code, but may depend on things in the top-level resources
directory or in the dojo, dijit, dojox, and util namespaces.

These demos are not included in releases, and unlike tests inside of Dijit,
they may depend on DojoX components or not be fully accessible, etc.
Additionally, icons, images, and other resources not licensable for use in
the mainline toolkit may be included in the demos/ directory so long as their
license status is noted in demo-specific LICENSE files.

Server-side components are recommended to be written in PHP 5 and may use the
Zend Framework should an MVC framework be required for a particular demo.
-------------------------------------------------------------------------------
Dependencies:

Dojo Core
Dijit
DojoX
-------------------------------------------------------------------------------
Submission Instructions

All these demos follow a simple pattern. Each demo has a "name", which is the
name of the folder it lives in, e.g.: demos/myDemo/

The root demo file is to be named `demo.html`. It should include a <script>
tag pointing to ../../dojo/dojo.js, and a src.js "layer" as a sibling to
`demo.html`. Styles should be external and located in a file `demo.css`. A
README file (similar to this document) should also be present.

The most basic of demos should look like:

demos/myDemo/
    demo.html
    demo.css
    src.js

If multiple modules are needed, they should appear in a /src/ folder, making
src.js a rollup, only requiring other modules:

demos/myDemo/
    src.js
    src/
        Module.js

Proper dojo.provide calls for the src.js and Module.js files should be
issued:

    dojo.provide("demos.myDemo.src.Module");

About the README

The README format is a slightly-fragile custom format, which follows the same
design as this one. The second and last lines are parsed off for meta
information. Line 2 should be a hyphen-separated description:

    1. --------------------------------------------------
    2. Short Title - Longer Description about the purpose
    3. --------------------------------------------------

The last line should be made up of @tag:value pairs. These tags can be
arbitrary, and some go unused (though may be implemented at a later date).
The most important tag is @rank, providing a way to weight the demo in the
index. @rank:-999 will mark the demo as experimental and not complete.
@rank:15 will give a +15 ranking to the demo, adjusting the index. The higher
the value, the higher in the list it will appear.

Additional Resources

The thumbnail that appears in the demo index can be placed in:

    demos/resources/images/myDemo.png

Providing a 128x128 png icon bumps the rank value slightly.

Building the Demos

Each demo should add itself to the profiles/ folder in the util repository:

    util/buildscripts/profiles/demos-all.profile.js

Each demo should create a layer making src.js the layer target, dependent on
demos.myDemo.src. This way, each demo has 100% of the required resources
available in the rollup layer and requires no work to shift between built and
unbuilt states.

Create the tree by running a build:

    ./build.sh action=release profile=demos-all cssOptimize=comments.keepLines version=1.x.x
-------------------------------------------------------------------------------
@rank:-999
This article is concerned with how to manage the .NET assemblies in your project or in the GAC (Global Assembly Cache). All .NET programs are constructed from assemblies, and almost everything you do in .NET leads to the creation of an assembly of some kind. Every program runs on a layer of software and hardware abstraction called the CLR (Common Language Runtime). The CLR cannot directly convert code to the hardware platform (binary form); it first has to perform specific checks covering version information, security permissions, properties, etc. The file or programming unit that satisfies all these needs of the CLR is called an assembly. Furthermore, assemblies are classified into two main types: private assemblies and shared assemblies.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
public class strong
{
public string Greeting(string name)
{
return ("Your assembly says Hi : " + name);
}
static void Main(string[] args)
{
}
}
Now assign the strong name to the assembly by using sn.exe (if you are a Visual Studio user). Express Edition users can sign it by going to the Properties page, then the Signing tab, and supplying the key file name without password protection. Here we are creating an assembly that will be shared by external developers, so assigning a password creates sharing conflicts; hence do not make the assembly password protected, since we want to use it for global purposes.
Note = Express Edition users must set the output type of the application to Class Library to get the assembly, so click on Properties=>Application=>Output Type and set it to Class Library.
Edit the AssemblyInfo.cs file in the Properties folder of our project directory, i.e., StrongFile=>Properties=>AssemblyInfo.cs, and add the following line:
[assembly: AssemblyKeyFile("FileKey.snk")]
Now build the project; this will create the signed StrongFile.dll in bin=>Release=>StrongFile.dll. Drag it to the GAC. Now we can use our StrongFile assembly whenever we want. Just reference it!
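As a quick illustration of "just reference it", a minimal console project that references StrongFile.dll might look like the sketch below. The project setup itself is assumed; the class and method names come from the code shown earlier:

```csharp
using System;

class Program
{
    static void Main()
    {
        // "strong" and Greeting() are defined in the referenced StrongFile.dll.
        var s = new strong();
        Console.WriteLine(s.Greeting("World"));
        // Prints: Your assembly says Hi : World
    }
}
```

Because StrongFile is strong-named and installed in the GAC, the runtime resolves it there at load time rather than from the application's folder.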
You can change the version by simply going to Project=>"Project" Properties=>Application=>Assembly Information=>Assembly Version. Build the project. Don't forget to put the new assembly in the GAC.
Note: You may notice that after the project build there is no assembly version change visible in Windows Explorer, but don't worry; still place this new assembly in the GAC, and there you can see the new version of StrongFile. Be sure to make changes only in the assembly version, NOT AssemblyFileVersion. For the changes to take effect, change something in our StrongFile code, as I do here:
public class strong
{
public string Greeting(string name)
{
return ("Your 'NEW' assembly says Hi : " + name);
}
static void Main(string[] args)
{
}
}
After placing the new Assembly in GAC:
There is a minor difference between a traditional DLL and an assembly. Traditional DLLs could be copied to the client's directory for the client to use, and no other programs had to know about them (that is one reason some games and software can still be cracked). Assemblies retain this feature. If only a few applications will ever use your library assembly, you can put it in the relevant client application's directory instead of in the GAC. This saves you the time spent strong-naming the assembly, editing AssemblyInfo.cs, etc. Assemblies that can be put in the application's directory and that do not need a strong name are called "private assemblies". Private assemblies can be installed and uninstalled without any fear of breaking the application. You may notice that our shared assembly StrongFile has its Copy Local property set to false; this means the assembly is not copied locally to the application's directory. A private assembly has this property set to "true" by default, i.e., it is copied locally.
Another important concept related to assemblies is "delayed signing", which we are going to study in the next section. Delayed Signing = It is possible that you come to a situation where you have to give your assembly to an external person/developer for further modification, but you cannot give him the private key of the assembly. Then how could he work? This problem is solved by the delayed signing technique. With it, we skip signing the assembly with the private key and turn off assembly verification, so the assembly is signed with the public key only. Now you can give the developer the assembly and the public key; there is no need for the private key. This can be done as follows; we have already created the "StrongFile" assembly.
With the help of sn.exe, we can extract the public key into a file:

Syntax : sn -p [infile] [outfile]

Extracts the public key from the key pair in [infile] and exports it to [outfile].
sn /p FileKey.snk PublicKey.snk
This will create the file "PublicKey.snk" which contains the public key of our assembly StrongFile.
Now it's time to edit the AssemblyInfo.cs like this:
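The screenshot showing this edit did not survive in the article; the standard delay-signing attributes (using the PublicKey.snk produced above) are:

```csharp
using System.Reflection;

// Sign with the public key only and defer the real signature until later.
[assembly: AssemblyDelaySign(true)]
[assembly: AssemblyKeyFile("PublicKey.snk")]
```

These lines go in AssemblyInfo.cs, replacing the AssemblyKeyFile entry that pointed at the full key pair.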
Build Assembly StrongFile again.
Note = If you delay signing of the assembly via the Properties Signing tab, then you need not edit AssemblyInfo.cs.
Now we have to turn off the verification process in this way:
sn /Vr StrongFile.dll
Then, you will get the following message:
Now you can send the developer your assembly with the public key file only, i.e., the StrongFile.dll assembly with PublicKey.snk. After he is done with his work, you can re-sign the assembly with the private key like this:
sn /R StrongFile.dll FileKey.snk
NOTE
Games and many wares have been cracked in this way: malicious users replace the original unsigned DLLs with tampered ones. Even highly engineered engines have failed to detect these changes.
To use these utilities from any directory, add the SDK's bin folder to your PATH environment variable.

Now we are moving to the last section of this article, i.e., manifests.
An assembly is a self-describing programming unit. How is each assembly different from another, and where do assemblies store their information? The answer is "manifests". The manifest identifies the assembly. It lists the other assemblies it depends on, all types and resources exposed or used by the assembly, and its security requirements. Manifests can be viewed with the IL Disassembler - Intermediate Language Disassembler (ildasm.exe).
Using IL DASM = Go to the Visual Studio 2008 command prompt and type ildasm - a window appears like this:
Here I open StrongFileApp.exe. Click on the MANIFEST and you will see:
You can see all the assemblies that our application references. If you further expand the StrongFileApp node, you can clearly see that all the information about the application is revealed: namespaces, methods, buttons, etc.
ILDASM is used in debugging and testing of applications.
For now, we have covered all the functions related to assemblies. In my next article, we will cover topics related to assemblies such as assembly manipulation, probing, code base, etc. I hope this article helps Visual Studio as well as Express Edition users.
I can't seem to determine what a model's field type is from within a template. I'm iterating through all rows and fields and want to implement special handling for certain field types, but it doesn't work. Here's how my object looks in models.py:
from django.db import models

class MyModel(models.Model):
    Field1 = models.CharField(max_length=30)
    Field2 = models.DateTimeField()
And this is what is in views.py:
def MyView(request):
    entries = serializers.serialize("python", MyModel.objects.all()[:10])
    return render_to_response('MyTemplate.html', entries,
                              context_instance=RequestContext(request))
And finally, my template file:
<table>
{% for entry in entries %}
    <tr>
    {% for field, value in entry.fields.items %}
        <td>
        {% if field|field_type == "DateTimeField" %}
            hooray!, this is a DateTimeField
        {% else %}
            boo! no this is not a DateTimeField!
        {% endif %}
        </td>
    {% endfor %}
    </tr>
{% endfor %}
</table>
You'd think that I'd end up with one "hooray" and one "boo", but I'm getting all "boo"s. I've tried "field.type" and a number of other ways, but nothing seems to return what I'm looking for. Can anyone offer help on this?
Sorry, not "scripting dictionary". Anyhow.......
Nobody responded, but I did find a solution, so I'll post it here for the sake of being polite in case someone else has the same problem in the future. I decided to just use a nested dictionary instead of going with the serialized solution previously posted. So my model is the same, but here's my new view:))
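The view code itself did not survive in the post, but the nested-dictionary idea can be sketched in plain Python. The data below is made up for illustration; in a real Django view the type names would come from `field.get_internal_type()` on `MyModel._meta.fields`:

```python
# Each row maps field name -> (type name, value); stand-ins for model instances.
rows = [
    {"Field1": ("CharField", "hello"), "Field2": ("DateTimeField", "2012-01-01")},
    {"Field1": ("CharField", "world"), "Field2": ("DateTimeField", "2012-01-02")},
]

# Build {row_index: {field_name: {"type": ..., "value": ...}}},
# which is the shape the template below iterates over.
MyDictionary = {}
for i, row in enumerate(rows):
    MyDictionary[i] = {
        name: {"type": ftype, "value": value}
        for name, (ftype, value) in row.items()
    }

print(MyDictionary[0]["Field2"]["type"])  # DateTimeField
```

With this shape, the template can test `field.type` directly because the type name is stored as ordinary data rather than recovered from the model at render time.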
....and here's my template:
<table cellpadding="1px" cellspacing="1px" border="1px">
{% for key, row in MyDictionary.items %}
    <tr>
    {% for key2, field in row.items %}
        <td>
        {% if field.type == "DateTimeField" %}
            hooray!, this is a DateTimeField
        {% else %}
            boo! no this is not a DateTimeField!
        {% endif %}
        </td>
    {% endfor %}
    </tr>
{% endfor %}
</table>
Since someone else writes the var names with a lot of our stuff, I make pretty heavy use of stuff like
<p>{{ field | pprint }}</p>
in my pages. I do this a lot for objects/dictionaries where I'm not sure what all properties they have, or I know they have some kind of property but not its name.
Or, if I'm looking at the app running in a terminal, in the .py file I can print the field to see it in the terminal, but prettyprint in Jinja2 makes a nice filter.
This is a quick poor-man's debugging/development tool, but it'll give you a better idea of what your objects actually have in them, and what to call individuals inside.
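For reference, here is a small stand-alone sketch of what that filter produces; Jinja2's built-in `pprint` filter is a thin wrapper around the standard library's `pprint.pformat`:

```python
from pprint import pformat

# A dictionary whose properties we want to inspect, as in the template example.
field = {"name": "Field2", "type": "DateTimeField", "value": "2012-01-01"}

pretty = pformat(field)
print(pretty)  # one readable look at everything the object holds
```

The same call works on any object with a useful `repr`, which is why it is handy for poking at unfamiliar context variables.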
Comment on Tutorial - Vector example in Java By Grenfel
Comment Added by : mangai
Comment Added at : 2011-07-07 09:46:10
Good example
The Key West Citizen, November 15, 1935
Associated Press Day Wire Service
For 55 Years Devoted to the Best Interests of Key West
Key West, Florida, has the most equable weather in the country, with an average range of only 14° Fahrenheit
VOLUME LVI. No. 272. KEY WEST, FLORIDA, FRIDAY, NOVEMBER 15, 1935. PRICE FIVE CENTS
Hitler Silence Called Ominous As British Leaders Warn Nation

CRIMINAL COURT TO MEET MONDAY
JURORS SUMMONED BY DEPUTY SHERIFF TO REPORT

Reconstruction Finance Chief Favors Building Of Bridges

LAWS PASSED BY LEGISLATURE ARE PUBLISHED TODAY

Roosevelt's Adviser On Consumer Problems, Capital's Mystery Man
Unchallenged, Mussolini 1"1'111111'1.:1 (by Aswoelatea Pros) 1'IAT.lIII"I"A
PORT 10 O'CLOCK WASHINGTON, Nov.: 15.-Jesse Jones, Re- FULL LIST OF VARIOUS Worry Over Cost Of liv-
Holds Spotlight PRINCEPSIn
Stage; I TO WINTER HEREIn ,,I construction Finance chief, said today that fed- MEASURES ENACTED -AT. ing Puts Hamilton In
Shown To Be FocusedOn Summons have been served on LAST SESSION APPEARS 'IN! yesterday's editorial ia
Key West today i i. all available citizens who were eral agencies were studying plans to finance THE The Citizen congratulatingthe Spotlight Of Washington -
CITIZEN
RomeBy Jonathan Latimer widely named on the regular and special I newly elected city officials -
read feature writer who will venires for cirmmal court, which construction of the Overseas Highway bridges and mentioning their Activities
later I be>> joined by his motherto were drawn from the jury box at iSS-:; names, that of Jim Robert
ROGER D. GREENE spend the winter season. the short session of court held from Miami\ to Key West. Today's The Citizen I was inadvertently. omitted
Mr Auwlalrd Pma Mr. Latimer has had a wide Monday morning by Judge J. Vin- contains 26 pages twenty-two from the list of councilmea. (nr A .cla
( "It's I will work out for
LONDON Nov 15.-Almost experience in the writing ing Harris. a thing hope they which comprise supplements carrying As a matter of record Sunny WASHINGTON Nov 15.-Dr.
field. He wa at one time There were 18 names on the certainly need it down there/* Jones said. a full list of the new lawsof Jim lead the list with the Walton H. Hamilton, the Prc.i.dent' -
overnight amid the storm raging associated with Secretary regular venire and 22 names on Florida, which were enacted at handsome total of 1,338. If
Harold Ickes in publicity the recent session of the State anyone of this number failedto adviser on. consumer problems -
over Europe from the Italo-Ethi- the special. Of the regular ven- lie indicated his organization might pur-
work, and has been on the ire 16 have been summoned and Legislature. notify the editor of the is Washington' new "my.
opine war, Reichfuehrer Adolf reportorial and editorial The new measures are shownto error, let him forwardto
of the special venire 15 have been chase bonds by which sponsors of the projects come **
Hitler has become a silent dictator staff of the Chicago Tri called. cover most every activity rela- make the list complete.The tery man.
bune.When Chief Deputy Bernard Waite could produce 55 percent of the bridge cost to tive to various business concerns editorial writer the He has been a special presidential -
ominously silent according to I not engaged In feature including corporations, and all proof-reader and the compositor -
warnings voiced by Winston I and editorial work Mr. said that 10 o'clock instructions Monday were morningin to report obtain a 45 percent grant by the Public Works other syndicates operated for each passed the buck>> I adviser since June 30, but
Latimer devotes his time to I the county court house when I profit, and also that dealing with to the other in order to be i only since the arrival on WaahI -
ChurchilL mystery and detective stories, Administration. individuals in different capacities absolved from the stigma of .I
court will be called in session.
coming under state iagton desks of pamphlet called
Today .unchallenged Mussolini many of which have been jurisdiction !I. not having X'd the leader of a
I
--- ------ --- with provisions made for observ-
published in book form and the pack and so to be impartial "The Consumer" edited ia his di-
hold the stage. The world spotlight in JAMES RUSSELL ance of the laws in all respects. each
magazines. to a stinging rebuke
so *ong focused on Berlin, I Among the many stories Ii FIVE HUNDRED MEN WILL BE PUT trhcih Many have of been the measures encated into adopted law, for an unpardonable I vision has the fickle Washington
I which have been consideredmasterpieces omission. spotlight turned to him.
has turned to Rome. DIED YESTERDAYFUNERAL are of much interest to many citi
of this able
I I TO WORK MONDAY IN WPA PROGRAMWith 1 zens and residents in general, It doe so just as rumbles ofdiscontent
Hitler has dropped out. A writer are "Murder in aMadhouse" I !I and all persons in this community '.I.1'.lI.1JI
and "Headed for the
I over rising pricoof
shadowy figure offstage in the I! a He.r*e." SERVICES WILL BE i approximately five sons were urged this morning I.I are urged to read the law as DEMERIT
1 published and familiarize them- BANDS bread reach the capitaL
new drama that has seen the na-I .I'.I'.AT 1.8" .A CONDUCTED SUNDAY i hundred men called for workon by WPA officials to observe .1 selves with any changes that may I {ICommentators examine" tw..j
have been made in enactment of
tions leagued against Mussolini i AFTERNOON I the board and if their I NUMEROUS BIRDS facts and wonder: the rising cost
WPA projects, every i the new measures by which the I
of living and the direct entre to
SEVEN ARRIVE the lists I people of the state of Floridaare I Ii
they were allied against Kaiser :, available employabl e man names appear on now governed. the White House granted HamIlton -
Wilhelm two decades ago, Hitler! !t James Russell 66, died 5:15 report: immediately for i SUPERINTENDENT OF LIGHTHOUSE whose particular worry i" the
ABOARD PL4NE o'clock yesterday afternoon in his in Key West probably will work. rising cost of living.
has become virtually the "forgotten TENDERVORKING\ DEPT. TAGS TOTAL Staff Gather Facts
t I home in Knowles' Lane. Funeral
/ |: be .
working by Monday.The
His include these objectives
man" of Europe. j I services will be held 4 o'clock Most of those called are OF 421 I program -
I RESERVATIONS MADE FOR i 1'i Sunday afternoon at the Baptist list of men called. being placed on the sewer ON \WEST COAST where "trouble: inquiries into pnu< ,-.
British Heed spots" keep t p
Hitler's Aide J [ church, where the body will be i
Yet Britain uneasily wonders PASSENGERS ON OUTWARD placed 2 o'clock. Rev. James S. was posted late yesterday' project, although other projects product out of the consume :
I II : Since July 26 when he started reach; education of the public t 10>
when he will become the "big TRIP THIS AFTERNOONOne Day will.officiate. afternoon at the WPA will be increased with ,: i I I standards h
use quality and to
J I POINCIANA WILL COME TO banding birds at Key West William a-
noise" again--and while the Ger. 'Funeral arrangements are in of Simonton their full, i : for grade labeling advice to ,
I garage, corner man-power to W. Demeritt, representing ; ix <>-
dictator's seldom, charge of Lopez Funeral Home. !'I THIS PORT WITHIN i pie buying co-operatively! and t-\.
man name of the smaller, seven passengers i! Mr. Russell was a member of i and Greene streets. All per I quota. I i the United States biological sur- ; ,
figures in the headlines, yet display -, planes of the Pan American -J|: Dade Lodge 14,tF. and A. M., Independent -I -- - -- NEXT FEW WEEKS I: vey of the department of agriculture -:.I pansion For some of consumer time Hamilton council-!,star
is given to the words of one Airways arrived at Key i Order of Red Men, i has placed on the legs of ,
of six economists; has been gatheing i-
of his aides General von Epp J West this morning with mail and I Knights of the Golden Eagle r BULL LINE STEAMER RUNNING : 421 doves the sign which shows: facts about key fndusti'>; -
governor of Bavaria. Discussingthe seven passengers. I Spanish War Veterans and Key Tender Poinciana. now work they had been caught and released I women's dresses, shoe tiu.
of Germany's I Arrivals were: Clifford G.' West Fire Department. Pallbearers ing on the west coast is expectedto at Key West.
sore-spot question auto, ice and paper. When the
lost colonies, General von! Hicks, Vernon Uanept Rafph will, be selected! from these or- I AGROUND WILL BE ABANDONED arrive in port within _the next Among other birds which hare : surveys are done the. facts will be-
Coleman, John Rolle Edith Mof-. ganizations. two or three weeks. The tender been caught and banded are]I,
.. __. .. .... ..l. {t made available to its urer .
Epp was-qnotetl assaying: ""- \ is-snrvivexHSy-hJs f- -- -" -
t faf AKnaTmimaway; Eugene t) :yhg-deceased f* now en route -sooth frees-Eg- marsh hawks-end dttck-h wi(,r-Of J r iicf'tW --
"Der Fuehrer is examining the i Brandon. widow :Mrs. Fay Russell one I The Bull Line Steamship Eliza ments made to light the ship at! mont Key with creosoted pilling the latter species, two fine specimens -I I them.
question. He will do what is'' Reservations for five passengers daughter, Mrs. Violet Athey and beth which was driven ashore at night in the event the department and will stop for work at Sara- were sent to J. Meredith of Hamilton believes
making
the I' have been made for the return three grandchildren Mary, James
when time is fact. public. He wonders fin
ripe"Perhaps : m-
necessary trip to Miami 4 o'clock this aft- R., and Nathan Athey, of New cane of November 4,. is .bej! do so at this time. bas Pass, Coon Key' and Indian receipt o, the hawks; wfa-ch ha: i stance, how who.tan' ..
by coincidence,,shortly ernoon. York.FARLEY'S j'' abandoned. : J" : .1|l Captain P. L. Cosgrove in court Key Pass. tj*. trains for the ancieif.;i sport of i I factoriii Jo mike many w9 opal**en'fe drreS --
after that statement was published .._- ----- This information way conyeedto ,i mand of the Tender Ivy which is Tender #y :'which, has been falconry. t.OJ knofc'tJjffevWage' .life of suet; i,
Winston Churchill sidetracked Superintendent William' :W.'ii !I working in the vicinity of Miami, restoring lights':and structures on j! Among the relent kapturej in I rb
I i. Demerit of.the Seventh toghi-j I was instructed by the the intra.coastal water is Mr. Demeritt's Ivae tt plan klythree J*a"'"
superintendent
the main issue of the foreign !' way. ex- | a catbird *
i ELECTION 'POST MORTEM'CALLED He iritis'! \ over th*ffeaustry don!
i I!house District thus morning a I I to install today a red 70> petted to arrive in rt this evening -'. which was 'c: bJht! banded t I
the house tdering
affairs debate in of II' along with machines whirr"
telegram from Captain M, Wil-':!candle power light flashing every or tomorrow morning take!I and released in January ol this
thunder the challenge I .I are 30 years old on the ave'iisr; ::<
commons to SHREWD\ POLITICAL MOVE I[ hams port captain for the Bull'! second. Port Captain Williams,|on material and leave for work in j[year. It was cught in the traps. He thinks they could sell mou, .alI
: Line in Miami. j i of the Bull Line, was advised of. other sections |I Sunday and released
"The whole population of Germany I The telegram informed Mr. De- : the action. I I I I lower prices if they had kept 11 11-
'----1 to-date.
is being trained from childhood (Dy As_*elatc 4 Press) meritt of the owners' intention of The Elizabeth is a vessel 328,
up to war. A; mighty armyis By HERBERT PLUMMER I abandoning the ship and that the! feet long 46 feet beam and depth!I 'Just An AdviserBut
coming into being. All Ger-I WASHINGTON Nov. 15.-t Over in New Jersey Mayor I II United States district engineer's:! of 23 feet, and is of 3482 gross i BOB 'TREE TOAD' D AVIS TELLS TALE I he points oat, he has no
is armed The j ) I power to go in and say, "YOU'I"gathering "
many an camp. [Frank Hague's record victory office at Jacksonville was also being tonnage. She was built in 1919]
German industries are mobilizedfor I' Politician here say "Big Jim"i!I I i Hudson county-bulwark of-j advised. at Wilmington, Delaware and is'I OF HIS BOYHOOD\ ITB BROTHER BILL moss. Junk it." He ,i-,
war to an extent ours were Farley robbed>> Republican of !1 I I Jersey Democracy-of 137,0001 Remainder of the crew of the I i owned by the Bull Steamship' just an adviser.. Bat he belioi-
not mobilized even a year after majority for his assembly candi-j ship is to be removed and arrange-]. Company of New Jersey. I in advice."NBA
their victory goodness knows mad
much effectiveness ia !
the great war started. I dates, would keep the state in the i ____ II J (Br AMoeiatrd Press) ,
'hut
enough mistakes he it
says
Voice Solemn
Warning chant resulting from elections Democratic column if repeated in I By JOHN SELBY
"The Italo-Ethiopian conflict," 1936. In 1932 the Democrats, OPERETTA CAST BENJAMIN TREVOR J NEW YORK, Nov. 15.-Theunmistakable later he had been stripped painted did make business mm think ,m
he warned, with solemn emphasis, in. New York state by a I larger terms; about whole m<1u--
rolled -
117,000-vote majority
a beautiful to his "
up I a even
green
.
tries instead of
"is a small matter indeed flavor of boyhood, single plants.
very shrewdly-worded statement on in Hudson enough to offset .
eyelids, and
persuaded that he I
compared to the dangers I have TO MEET MONDAY INJURED IN FALL:I I He grins at the cynical sug!:?!:' --
I Republican victories in 17 of the flavor that make Huck Finn' was actually invisible. ,,
just described." I I the night returns were counted. i I tion that facts may not force tl> <'
the 21 counties and to carry the Brother Bill pointed to the
Members of parliament fumed Newspapers of the morning I state for Roosevelt by 30,000.In 1 -=-- I! and Tom Sawyer live through'warard sickle pear tree: I inefficient manufacturer to
While Churchill spoke, a back-bencher was heard to say: "Nobody cares about Hitler now. He's quiet. Mussolini and sanctions are the big dangers." But others, in addition to Churchill, feel not so easy in their minds about this "quiet" bystander in the turbulent European family of nations.

"The most significant thing in the heaving Europe of today is the silence of Germany," says A. G. Gardiner, a noted commentator on foreign affairs. "It is as important for us to know what Hitler is thinking as it is to know what Mussolini is doing." Hitler watches and waits while Germany's former foes quarrel and the fate of the League of Nations hangs in the balance.

'Sword Sharpened in Peace'

If the League is sabotaged, he will be left with a disrupted Europe, filled with bitter resentments which will supersede memories of the World war and make his friendship the most formidable asset on the continent.

The preponderantly Republican stronghold of Philadelphia, where the Democrats haven't elected a mayor since 1881, gave the G. O. P. candidate victory over the Democrat by only 47,000 votes.

Politicians Impressed

After the election, along with announcements that Republicans had registered victories elsewhere, the press recorded "Big Jim" as saying, in his capacity as chairman of both the national and New York state Democratic committees, that results of the scattered November 5 elections "were entirely satisfactory to the Democrats," which is what impresses political analysts. There can be no doubt, they point out, that taken as a whole whatever crowing is to be done can be justified only by Republicans and not Democrats. The results show unmistakably a trend away from the "new deal" as compared to 1932 and 1934.

"The federal administration was sustained (in New York state) by more than 500,000 majority," the statement in the New York Sun said, "which ought to be a sufficient answer to the question of the continued popularity of Franklin D. Roosevelt."

How He Figured

As returns became more complete, however, political analysts settled down to the serious business of figuring out what really happened, and they were quick to see what Farley had in mind. The total vote cast for members of the assembly in New York actually showed that the Democrats piled up one-half million votes more than the Republicans, although at the same time losing control of the state assembly. Those who scanned election headlines the morning after were at a loss to understand the Farley mental processes. Farley's happy thought, however, of immediately calling attention to the fact that the popular vote for the Democrats in the larger cities did much to offset Republican total gains, has the effect of placing G. O. P. leaders in the position of defending their victory.

Furthermore, "Happy" Chandler's gubernatorial victory in the border state of Kentucky is viewed as offsetting to a large extent whatever gains Republicans can boast thus far in the east. It's generally conceded that the major political engagement of '36 will be centered in the sectors of the border and western states.

TO ARRANGE FOR PRESENTATION OF THE PIRATES OF PENZANCE

The first meeting of the cast of the Pirates of Penzance will be held at 7:45 o'clock Monday night in the Colonial Hotel's main dining room on the second floor. George White Mills, a member of the cast who will direct the operetta this year, expressed the hope that all members of last year's cast will be back with the same buoyant enthusiasm displayed last year. An appeal is also made for persons who were not in the cast last year to attend the meeting Monday. Leading parts are to be given to those most capable of performing the parts to be filled. There will be only one rehearsal held next week, but it will be necessary to hold two rehearsals weekly as the time draws nearer to the night of the performance. Good male voices are needed for the chorus. It is also planned to rehearse other standard chorus numbers besides those to be used in the Pirates of Penzance, with hopes of holding concerts during the winter months.

ACCIDENT HAPPENED YESTERDAY WHILE WORKING AT LAUNDRY PLANT

Benjamin D. Trevor, owner and executive head of the Columbia Steam Laundry, received painful injuries yesterday afternoon when he fell while making an adjustment to some of the plant machinery. Placing two of the green-painted clothes boxes on top of one another on the machine known in laundry parlance as a mangle, Mr. Trevor mounted on top of these to perform the task. The baskets toppled and Mr. Trevor fell about 15 feet onto some of the plant's equipment on the floor, sustaining severe bruises about the right side. He was removed to a local hospital, where this afternoon he was reported resting as comfortably as can be expected.

The new boy is named Tree Toad, although he is known more formally as Robert H. Davis, probably the most widely traveled newspaperman in the world. The total is 700,000 miles, and the man is known to the world as author, editor and raconteur. "Tree Toad" is Bob Davis himself 60 years ago.

It began when the pears were ripe in Nebraska, he explains in "Tree Toad," just published, and when the young man's mother stepped out on the porch, a bright green toad dropped into her lap, and she explained the mysteries of protective coloring in nature. Invisible Bob! "Go and help yourself. Climb up and eat your fill. No one can see you; God is takin' care of you. Bring some back to me, ripe ones."

On the way to the tree Bob indulged a desire he had had for a long time: he thumbed his nose at his father, who was an Episcopal clergyman. Mr. Davis did not notice, which proved that the boy really was invisible. He climbed into the tree.

Father Could See

Once there Bob decided to hiss his father, and that was fatal. This dialogue ensued:

"Can you see me?" I enquired, hoping to clear the atmosphere.

"Plainly."

"Where?"

"There you are!" Somewhat bewildered, he prodded me with the sickle and the grass rake. "Right there!" he chortled out loud.

The feeling of security along with my faith began to ooze out of me. The young man came down, was tubbed with various paint removers, given other significant treatment and put to bed. The next day was Brother Bill's birthday. Brother William, older than Bob, gave Bob a present: the nickname Tree Toad. It stuck, through prank after prank and page after page.

Hamilton can't remember when he started being an economist, but he has studied the subject in four colleges and taught it as well. He is on leave from Yale university, and right now the smell of autumn on the air makes him a little homesick for New Haven, where his wife still maintains their home. The Hamiltons have three children, but it's Edward they grin about. He turned up his nose at the suggestion of college, in the face of his father's degrees. Edward wants to build boats, so he's building boats. Presidential Advisor Hamilton says there was nothing he could do about it, and that the boy may be right.

This week two of the largest organizations in the country offered us the free use of their books. We will study one, set to find out the average wage earner's credit and how he uses it.

Regular Saturday Nite DANCE TOMORROW NIGHT. Habana-Madrid Club. Script $1.00.

Toilet Sets, Boudoir Sets, Compacts, Etc. GARDNER'S PHARMACY. Phone 177. Free Delivery. WHISKIES AND GINS.
WHENEVER PEOPLE LIKE REALLY GOOD BEER, YOU'LL FIND THAT WAGNER BEER IS THEIR FIRST CHOICE. PERFECT BREW. TRY IT.
PAGE TWO THE KEY WEST CITIZEN FRIDAY, NOVEMBER 15, 1935.
The Key West Citizen
Published by THE CITIZEN PUBLISHING CO.
L. P. ARTMAN, President.
JOE ALLEN, Assistant Business Manager.
From The Citizen Building, Corner Greene and Ann Streets.
Only Daily Newspaper in Key West and Monroe County.
Entered at Key West, Florida, as second class matter.
FIFTY-SIXTH YEAR
Member of the Associated Press: The Associated Press is exclusively entitled to use for republication of all news dispatches credited to it or not otherwise credited in this paper, and also the local news published here.
Member, National Editorial Association, 1935.
SUBSCRIPTION RATES: One Month, $.85; Weekly, $.20.
ADVERTISING RATES: Made known on application.
SPECIAL NOTICE: All reading notices, cards of thanks, resolutions of respect, obituary notices, etc., will be charged for at the rate of 10 cents a line. Notices for entertainments by churches from which a revenue is to be derived are 5 cents a line. The Citizen is an open forum and invites discussion of public issues and subjects of local or general interest, but it will not publish anonymous communications.

IMPROVEMENTS FOR KEY WEST ADVOCATED BY THE CITIZEN
1. Water and Sewerage.
2. Bridges to complete road to Mainland.
3. Free Port.
4. Hotels and Apartments.
5. Bathing Pavilion.
6. Airports, Land and Sea.
7. Consolidation of County and City Governments.

A CALL FOR HELP

The annual roll call of the American Red Cross began on Armistice Day. Once again the greatest organized relief agency in the world seeks funds with which to affirm, throughout the succeeding year, the brotherhood of man in a practical, helpful way.

When hurricanes and storms strike unguarded cities, suffering is certain to follow. When great rivers overflow and flood vast areas, misery and want abide after the water recedes. When sudden disaster maims and kills unsuspecting people there is immediate need for outside assistance.

To whom do the American people trust the duty of being prepared for emergencies? The American Red Cross, chartered by the American government, demonstrates its value as a relief organization many times every year.

Certainly, if disaster overtook Key West tomorrow, causing untold suffering, destroying life and property, and leaving in its dread wake an injured population of men, women and children, the telegraph wires would carry an appeal for urgent relief. The conscious citizens of Key West would expect a prompt and adequate response from America, because the people of our country do not ignore such appeals. When the inevitable response to the call came, the people of Key West would thank God for the presence of the Red Cross, an organization organized to appeal for necessary help, trained and equipped to provide such assistance.

Let us hope no call for relief will issue to the nation from Key West, but let us do our part in keeping up the alert and ready organization of nurses, doctors and relief workers of the American Red Cross.

The call this month is not from any stricken section but from the Red Cross itself, asking you to join it and, through the payment of $1, $5, or $10 as a membership fee, to participate in all the splendid work that this organization will be called upon to perform.

Few persons do enough good turns to make them dizzy.

Japan has an idea that this is a good time to get the piece of China that she wants.

It seems that the upkeep of a pretty face is about as expensive as that of a homely one.

Politics and football are both rough games, but there's far more sportsmanship in the latter.

America in ruins, if that's where she is, still seems to be in better shape than any other country.

Every community has doers and undoers. To which class do you belong? You know the answer.

Ethiopian war correspondents fill space by denying today what other correspondents said yesterday.

There has been a decrease in suicides lately. Folks are sticking around waiting for the Townsend plan to get going.

Don't forget the Red Cross roll call: be a member and help in the work of mercy that means so much to afflicted people.

An old time shoemaker complains that hides are not properly tanned any more. But perhaps his loss is little Willie's gain.

Customs of old Egypt would have cramped the style of modern politicians. There the bull was held sacred and never shot.

Ethiopia will no doubt prove an outlet for some of Italy's surplus population, but it might be less painful if Mussolini could have them jump into the Mediterranean.

If Omar and Homer came face to face with the real problems that confront the unfortunate, they would not hesitate to subscribe to any fund that tries to make life better.

A drowning person continues his struggles until exhausted. Even if your business is on the point of being submerged, revivification, through advertising, can be accomplished.

When the Constitution was being adopted there was some opposition; a few wanted a monarchy. There are a few at the present time who would like to see the next thing to it, a dictatorship.

OBJECTS OF EDUCATION

In a series of newspaper articles Dr. Glenn Frank, president of the University of Wisconsin, some time ago discussed a few of the laws which he believes underlie a thoroughly modern education. Coming from such a distinguished source they are worthy of serious consideration.

Among the principles set forth are these: We learn by action rather than absorption; learning is specific rather than general; the best things to learn are those which are important in our life and work; we should study the things that will most directly contribute to our efficiency and happiness.

Like many other progressive educators of today, Dr. Frank does not believe that a college education is either necessary or desirable for everyone. The four years spent in college might often be better employed in gaining practical experience in a workshop or business. It depends largely on the natural talents and inclinations of a young person whether he should go to college.

For one who is of a studious disposition and aspires to enter one of the learned professions a college education is indispensable, of course. For those who go to college merely to be able to say that they have gone to college, it is a waste of valuable time.

As Dr. Frank very truthfully says: "We cannot prove that a college education guarantees a big income, or that it makes us happier men and women. Most of the old argument for college education is in the ash-can."

CAN YOU-DO YOU-READ?

The average citizen of Key West does not take advantage of this wonderful age of printing. There is hardly a subject, frivolous or serious, about which one cannot find excellent books, extremely reasonably priced, each year; yet most of us read only a few.

Each family in this city would be better off if it made a practice of buying not less than one good book every month, and reading it. There are well written stories, excellent popular treatments of the sciences, fine biographies, and discussions and reminiscences without end. By exercising a little judgment the average citizen could add wonderfully to his knowledge and at the same time greatly enjoy the process.

KEY WEST IN DAYS GONE BY: Happenings Here Just 10 Years Ago Today As Taken From The Files Of The Citizen

W. L. Bates, of the Gulf Realty company, has received congratulations by James R. Nicholson, past grand exalted ruler, B. P. O. Elks, and an acknowledgement of funds sent from Key West for the Old Ironsides fund. The amount collected in Key West was $125. In his letter Chairman Nicholson expresses his thanks to the local committee and appreciation of their splendid activity and cooperation.

The Coast Guard Cutter Saukee will take the Marine eleven to Havana Friday, November 20, to play the University of Havana players. Two games have been arranged there, one the day following their arrival and again the next day following. The next Monday the locals will return.

Mr. and Mrs. George H. Hoyt, of Bethel, Conn., arrived by Steamer Henry R. Mallory Sunday and will spend the winter in Key West. Mr. Hoyt served in the army during the war between the states and will no doubt find a number of comrades in the city.

Barge Pat Harrison was towed into port yesterday by Tug Harry G. Lytle. The barge has on board 1,115,000 feet of lumber. Work of unloading began this morning and it will take about 10 days to finish. From Key West the lumber is to be sent to various cities on the east coast, while much of it is to remain in Key West.

Included in the group returning from the north on the Steamship Henry R. Mallory are Mrs. George W. Allen, daughter Miss Lilla Allen, Mrs. Allen's grandson Billy Warren, Mrs. Allen's niece Mrs. Donald Stuart and two children, and Miss Mary Phillips.

Annual memorial exercises of B. P. O. Elks 551 of Key West will be held on December 6. Exalted Ruler G. N. Goshorn has appointed the following committee on arrangements: W. L. Bates, chairman; Joseph Beaver, Edward Freyburg, L. R. Warner and LeRoy Blackwell. An unusually beautiful program will be arranged, the committee announces.

Seven hundred delegates of the American Association of Railroad Ticket Agents arrived on a special train from Miami this morning. They have been attending the annual convention in St. Petersburg. They embarked on the Governor Cobb and steamships of the P. and O. company and sailed for Havana, where they will spend several days. Only a portion of the delegates came to Key West, as about 300 members remained in Miami.

All contracts for construction work at Martello Towers have been let. This announcement was made by the manager of the Burbank Realty company, E. A. Strunk, and others. The White Way will be in this week. The sewer pipe and water pipe are on the ground and sidewalks are in place. Paving will be completed in two weeks. The first building in Martello Towers has been completed. The Florida Bond and Mortgage company is erecting a suburban office in the subdivision which is beautiful in every detail.

Editorial comment: Either a man must be competent or he is being carried on somebody's back.

TODAY IN HISTORY

1763-Charles Mason and Jeremiah Dixon of England landed in Philadelphia to survey the disputed boundary-line between Maryland and Pennsylvania, later to become famous as the line separating the free and slave States.

1777-Continental Congress adopted the Articles of Confederation for the general government of the colonies, finally ratified by the States in 1781.

1864-Sherman starts his famous march through Georgia to the sea; it was then harvest time in one of the South's richest States.

1918-New German Government appeals to Wilson to save people from starvation; first order for demobilization of American Army in Europe and camps here; and Allied armies begin marching into Germany.

1920-Assembly of the League of Nations first met in Geneva.

1928-The Grand Council of the National Fascist Party made an integral part of the Italian Government.

The first white man to behold the Grand Canyon was Garcia Lopez de Cardenas, who had been sent from Zuni, N. M., to find a river far to the west of which natives had spoken.

WEATHER
Temperatures: Highest, 77; Lowest, 67; Mean, 72; Normal Mean, 74.
Rainfall: Yesterday's Precipitation, T. Ins.; Normal Precipitation, .07 Ins.
Tomorrow's Almanac: Sun rises 6:45 a. m.; sun sets 5:39 p. m.; moon rises 11:25 p. m.; moon sets 12:03 p. m.
Tomorrow's Tides: High, 1:14; Low, 8:23 and 7:30.
Barometer, 8 a. m. today: Sea level, 30.00.

WEATHER FORECAST (Till 8 p. m., Saturday)
Key West and Vicinity: Partly cloudy tonight and Saturday; not much change in temperature; gentle to moderate northerly winds.
Florida: Partly cloudy tonight and Saturday; somewhat colder Saturday afternoon in extreme north portion.
Jacksonville to Florida Straits and East Gulf: Moderate winds, mostly northerly, and partly overcast weather tonight and Saturday.

WEATHER CONDITIONS
Pressure is low this morning over the North Pacific States, with a disturbance off the coast of Washington, Tatoosh Island, 29.58 inches, and over Cuba, while a strong high pressure area, crested over central Canada, overspreads all other sections of the country, Cochrane, Ontario, 30.86 inches. Rain has occurred during the last 24 hours on the north Pacific coast and in northern and central Texas, and snow or rain in the central and southern Plains States. There has also been light rain in many sections from the middle Mississippi Valley and lower Lake region eastward to the Atlantic. Temperatures have fallen in Minnesota and most of the Dakotas, Devils Lake, N. D., reporting a minimum this morning of 14 degrees below zero, and readings are somewhat higher in southeastern districts and over the far northwest.
Officer in Charge.

TODAY'S HOROSCOPE
This day bestows a strong attachment to the home and parents. The life may be narrow, perhaps confined, but not, on the whole, unhappy. The native is restless and a little too impulsive, with not quite enough foresight; it may be used for great success, but with reasonable precaution the facilities of modern education should prevent any failure.

CONDENSED STATEMENT OF CONDITION OF THE FIRST NATIONAL BANK OF KEY WEST, as at the close of business November 1, 1935, Comptroller's Call.
RESOURCES
Loans and Investments ................ $304,746.10
Overdrafts ........................... 89.14
Banking House, Furniture and Fixtures  33,920.76
Stock of the Federal Reserve Bank .... 4,500.00
Temporary Federal Deposit Insurance Fund 1,954.01
United States Government Obligations, direct and fully guaranteed $602,788.13
Marketable Bonds and Other Securities  178,668.49
Cash and due from Banks .............. 277,578.38 ... 1,059,035.00
Total ................................ $1,404,245.01
LIABILITIES
Capital .............................. $100,000.00
Surplus and Undivided Profits ........ 64,207.16
Deposits ............................. 1,240,037.86
Total ................................ $1,404,245.01
Member of Federal Reserve System. Member of Federal Deposit Insurance Corporation.

USE ICE: IT LASTS LONGER! ICE REFRIGERATORS, made of all metal, equipped with WATER COOLERS. They're Economical. 100 Per Cent Refrigeration Satisfaction. Priced at $30 and $35. EASY TERMS; 10 DAYS FREE TRIAL. THOMPSON'S ICE CO., INC. Phone No. 8.

16-PIECE SET, consisting of 4 Plates, 4 Cereals, 4 Cups, 4 Saucers, while they last, at, per set, $1.00. South Florida Engineering & Contracting Co. Phone 595. White and Eliza Streets. "Your home is worthy of the best."

Subscribe to The Citizen: 20c weekly.
NOVEMBER 15, 1935. THE KEY WEST CITIZEN PAGE THREE
SPORTS

BIG TEN OF SOCIAL LEAGUE (By JOVE)

The ten leading hitters in the Social Diamondball League follow:

Player        AB   R   H   Ave.
J. Roberts    30  10  19   .633
J. Lopez      16   9   9   .562
Soldano       14   5   9   .542
J. Navarro    34   9  17   .500
H. Wickers    34  10  17   .500
M. Lopez      34  13  15   .441
Blackwell     25   5  10   .400
Johnson       15   6   6   .400
Papy          10   1   4   .400
H. Sands      15   1   6   .400

OTHER DATA
Most times at bat: J. Navarro, Lopez and Wickers; most runs scored: M. Lopez; most hits: J. Navarro and Wickers; most stolen bases: Pellicier 5, Lopez and Lewis 3 each; most doubles: Matthews 6; Whitmarsh, Archer, Lopez, J. Navarro and Barker 3 each; most triples: J. Navarro 2; most home runs: Arias, R. Roberts, J. Lopez, Lewis, Stanley, B. Demeritt, Soldano and E. Roberts 1 each; most strike outs: B. Pinder 7; most walks: Whitmarsh and Blackwell 5 each.

DIAMONDBALL LEAGUE STANDINGS

SENIOR LEAGUE (Beginning of Second-Half)
Club                 W   L   Pct.
Lopez Funeral Home   1   0   1.000
Bakers               0   0   .000
Firemen              0   0   .000
Bar-B-Q              0   1   .000

SOCIAL LEAGUE (End of Second-Half)
Administration       9   3   .750
Funeral Home         4       .630
High School          5       .415
Bayview Park         4   7   .360

End of First-Half
Lighthouse           8   2   .800
Service              7   3   .700
Administration       5   5   .500
Bayview Stars        4   6   .400

LOPEZ PLAYERS WON ONE-SIDED GAME: OVERWHELMING SCORE OF 13 TO 2 MARKED AGAINST BAR-B-Q OUTFIT IN GAME AT BAYVIEW PARK (By JOVE)

In the first game of the second-half of the Senior Diamondball League's schedule, the Lopez ten defeated the Bar-B-Q outfit. The final score was 13 to 2. A very loose game on the part of the losers was played. The Embalmers scored six runs in the very first inning. After that the Sandwich boys played a somewhat better game, but the damage had already been done. The win automatically places the Lopez ten in first place, until tonight's game.

At bat, the leaders were: Ingraham, with a double and two singles in four times up; Cates, with the same average; and Arias, with two out of three. Jones, Castro and Bazo played well for the losers in the field, as did Ingraham, Nodine and Sterling for the winners.

Score by innings:           R. H. E.
Bar-B-Q    000 011 000 -  2  6  3
Lopez      610 200 13x - 13 15  2
Batteries: Rosam, Castro and Jones; C. Gates and Ingraham.

GAME TONIGHT (By JOVE)

It will be a "hot" game tonight at Bayview Park. The winner of the first-half of the Senior League schedule, the team that has won three straight games, will tackle the diamondball champions of Key West. In other words, the Firemen referred to in the first part of this paragraph will meet the Bakers. One of the youngest and most outstanding pitchers in the city, Johnnie Walker, Jr., will be in the box for the Fire laddies, and M. Acevedo will be behind the plate. The veteran and reliable Ward will toss 'em over for the Bakers, with Gabriel the backstop. It will not be surprising if the game goes into extra innings. Lights will go on at 6:45 p. m. and the contest will get underway at 7:30 o'clock.

FERA TEN CHAMPS OF SECOND-HALF (By JOVE)

The Administration ten defeated the High School players yesterday and won the second-half title of the Social Diamondball League's schedule. The game ended with the score reading 9 to 0. This afternoon the FERA outfit will meet the Lighthouse crew, winners of the first-half. This will be the first of a three-game series to determine the champions of the league.

BASEBALL GAME AT NAVY FIELD SUNDAY (By JOVE)

Sunday afternoon at the Navy Field, the Dean Stars will play a baseball game with a club composed of some of the best players in the city, to be managed by Mario Sanchez. The batteries will probably be Sanchez and Gabriel for the picked aggregation and Fischer and McIntosh for the Dean clan. The game will be called at 3:30 o'clock in order to allow time for the swimming meet to be over before the contest starts.

DOINGS AROUND THE GOLF LINKS (By GRAVY)

It seems that a slight error was made in the results of last Sunday, in that Handsome Horace was charged with a loss when as a matter of fact he beat Charlie Hogeboom and Doc Lund. Bill Malone was against Otto Kirchheiner and beat Otto by one stroke with 47-50. Another recipe for playing in the 30's, according to Mr. Pious Hokey Pokey, is to start on No. 6 and play the round. He was the one who objected to the procedure, but from now on that will be his starting place. And he is to have his score card authenticated by a notary.

The Japan Air Transport company has started weekly airmail service between Kyushu and Formosa, covering a four-day steamer route in 10 hours.

HIGH COURAGE

SYNOPSIS: Anne Farnsworth learns that she was actually taken from an orphan home, though never adopted. She learns after the death of her "parents" that she does not even have a share in their large fortune, and goes to Astoria to learn something about her past. She goes to the home of Tecla, her former nurse, who seems likely to help. Anne has just been put to bed in a little room which belongs to her daughter Milna.

Chapter 19: NEW NAME

Anne sat up, folded the pillow into a hump, thumped it, and lay down again. A fresh rainstorm was blowing up. It pattered on the roof like the feet of tiny mice. It reminded her of a camping trip she had taken with Luke and Lucinda years before. They had stopped at a cabin in the hills, and after they had retired a pack rat and his family had scampered back and forth across the thin roof. The rain had a homey, comforting sound. Tecla was pretty when she smiled. She had dimples high in her cheeks. John Neuman's eyes were so blue, sailor blue. He had nice hands, firm and strong, and broad shoulders. Wasn't there a song about rain on the roof? No need to worry now or think. She slept.

Once she awakened, heard footsteps tiptoeing past the door, heard the far away rattle of stove lids.

"Mom says it'd be better if you had one to start off with. She's telling the rest of the kids that that's who you are, and only Aunt Liisa will know the truth. Aunt Liisa lives here with us, you know. She's pa's sister. When he went back to the old country and forgot to come back, she moved in here and she's been helping keep things going. You'll like her. She's so cranky she's funny. She's book-keeper down at the Canneries," at the Farnsworth Canneries, she finished, lamely.

Nikki Neilsen. Anne rather liked it. It was so different from the other. Perhaps she would bob her hair, let it bleach out; the wave would come naturally. Only Yvonne knew the trouble she took to keep it dark, because Lucinda's hair had been dark. Sunshine and wind turned it tawny gold.

LATER, draped around her shoulders, Anne sat before the mirror. Above her stood Milna, lips pursed in a tight line as she lifted a sharp pair of barber's scissors. Clip, clip. Anne felt that her past life was being cut from her. Clip, clip. She felt a frantic desire to stay.

"Mom says you'd better not go down to town for a few days, till we could get Violet Johansen to give you a permanent, if you could afford it."

TODAY'S ANNIVERSARIES

1708-William Pitt, Earl of Chatham, English statesman who laid the foundations of Britain's Empire, father of an illustrious son, born. Died May 11, 1778.

1730-Baron von Steuben, Prussian major-general under Washington, whose services in disciplining the Army of the Revolution were invaluable, born. Remunerated by a grant of land, he died Nov. 28, 1794.

1787-Eliza Leslie, Philadelphia's popular writer on domestic economy and short-story authoress of her day, born in Philadelphia. Died Jan. 1, 1858.

1807-Peter H. Burnett, Missouri lawyer, Oregon pioneer and jurist, California 49'er and her first governor, born in Nashville, Tenn. Died in San Francisco May 17, 1895.

1815-Edward L. Davenport, a leading American actor of his day, whose five daughters and two sons all achieved stage success, born in Boston. Died Sept. 1, 1877.

1833-William F. Durfee, engineer-inventor who superintended the making of the first Bessemer steel here in the 1860's, born at New Bedford, Mass. Died Nov. 14, 1899.

CLASSIFIED COLUMN

Advertisements under this head will be inserted in The Citizen at the rate of 1c a word for each insertion, but the minimum for the first insertion in every instance is 25c. Payment for classified advertisements is invariably in advance, but regular advertisers with ledger accounts may have their advertisements charged. Advertisers should give their street address as well as their telephone number if they desire results. With each classified advertisement The Citizen will give free an Auto-Strop Razor Outfit. Ask for it.

WANTED-Second hand dining room suit in good condition. Write Box E, care Citizen. nov13-3tx

FOR RENT-Lower furnished apartment; 2 bed rooms, conveniences, porches. 519 Elizabeth Street. nov13

FURNISHED APARTMENT-Modern conveniences. 628 White street, or call at Gains Barber Shop. nov2

FURNISHED APARTMENT with electric ice box. 1001 Eaton; phone 879-J. nov9-1mox

FURNISHED HOUSE-Modern conveniences; three bed rooms, garage. Apply 610 White Street. nov9-5tx

NICELY FURNISHED APARTMENT with garage. Apply 827 Duval street. oct9

FOR SALE-Second sheets, 500 for 50c. The Artman Press. aug7

PERSONAL CARDS-100 printed cards, $1.25. The Artman Press. aug7

OLD PAPERS FOR SALE-In bundles, for 5c. The Citizen Office. oct6

TYPEWRITING PAPER-500 sheets, 75c. The Artman Press. aug7

BENJAMIN LOPEZ FUNERAL HOME. Serving Key West Half Century. 24 Hour Ambulance Service. Licensed Embalmer. Phone 135; Night 69-W.

SPEND YOUR VACATION THIS YEAR IN FLORIDA. HEALTH.

Subscribe to The Citizen: 20c weekly.

LEGALS

IN THE COUNTY JUDGE'S COURT IN AND FOR MONROE COUNTY, FLORIDA. IN PROBATE. In re: Estate of JOAQUIN DeARMAS, deceased. NOTICE TO CREDITORS. To all creditors and all persons having claims or demands against said Estate: You and each of you are hereby notified and required to present any claims and demands which you, or either of you, may have against the estate of Joaquin DeArmas, deceased, late of Monroe County, Florida, to the Hon. Hugh Gunn, County Judge of Monroe County, at his office
fragrance of wood smoke, as the Miina's hand. She was acting too Fripp et aL So "Pretty Boy" the in the County Courthouse in
wind whisked it Jnto _the room. A hastily ._ is Monroe County Florida within one -.
column apologizes and sorryto from the date of the first publication -
child's laugh sonnikd. be hushed Yon look better already," declared have wronged you. Somethingelse year h reof. All claims and demands
abruptly. When she opened her eyes Miina.stepping back and surveying not presented within the
again an oblong of sunlight lay her.I left it kind of long but has come to the attention of time and In the manner prescribed i I FIRMS
the column. It snake that herein shall be barred as provided
was a
across the plain pine floor, like a yellow the curl will take It up. And If you'lllet by law I I
rug. me fix your brows like mine* Charlie Salas was killing and not Dated September JO, A. D IU$.
Later that day after the obliging his ball being knocked out of the ANA MIRA DeARMAS, I
She arose and went to the window I As Administratrix of the Estate of
and looked out.The storm had blown Violet bad ministered to her.: Anne rough. It seems Mr. Salas met Joaquin DeArmas, Deceased. I.I
turned to the mirror and with 3-ft. rattlesnake near octi-11-18-25: ; novlS1I $-::
again a
up
over. Below, the roofs of Onion I Who Rush To Give You Service-Patronize Them
stared An elfin face .
,Town were steaming In the warmthof In surprise. fo.. 9 green and yesterday every f
THE
the peered out from a mass of curls: the ball that was knocked into the THE CIRCUIT COURT OP
early Of
spring sun. Beyond the SIT JUDICIAL CIRCUIT
bay and the far waters of the Pat straight black heavy brows had I rough remained there. So if you I FLORIDA. IV AND FOR MOV-!
cine were glinting, tossing foam- given away to thin half archer want to find a good ball just look LORAN ROE COUNTY H. PREVO IV and ClIAXCERT.
capped naves. She was. pretty now. as Shsrlee I any place on the golf course. RUATHA: PREVO, his wife Star American Coffee
had been pretty, but she bad lost Complainant, ROSES
The wind was chill[ so Anne that distinctiveness which bad set va Aa-reeaieBt. F.r Deed JOHN C. PARK TIFT'SCASH
closed the window. She pawed Some of the boys were playing F.reclOT>r. Has a tat and aroma that
'her from the average girl.
apart
through her bag for a robe and had anda golf for the fun of it and so : HENRY ENKEMEIER, and is only found IB food coffee .
face .
And she A
was glad. new KNKEMEIER. his wife FLORAL PIECES A
donned It when Mima rapped at the new name, at this time meant every man was for himself. Cookie|; whose Christian name Is 328 SIMONTON ST.aad you can he sure that it's GROCERY
door. more of a chance for peace. Mesa took one of his co-workers< unknown Defendants. fresh because it's blended in SPECIALTY
"Coffee" she announced, comingIn The children came In from school, (CrnicVshank: ) and played along j ORDER FOR PUBLICATIONIt Key West. 1101 Division Street
with a tin tray in her band. surveyed her decided she was all with the Duke of Rock Sound and; appearing by affidavit appended PLUMBINGDURO
to the bill of complaint filed in LARGO
"Thank you Milna." Anne smiled I, right, and chatted with her half in Louie L Pierce After playing 27 j the above stated cause that the defendants It 18c; PLANTS and VINES PHONE 29 '
at her then, pouring cold water intoa English,half In Finnish,much to her holes "Old Bye" called it a day Henry Enkemeier and STAR lb 25c;
china bowl gasped as she rinsed 'bewilderment But she learned much and quit. Cookie as usual had a.; tian Enkemeler name ,is his unknown wife, ,whose are residents chris- V. & S. ft ISc. Staple and Fancy-
i SOUTH FLORIDA
her face In it. She thought of of the household Into which she had pretty good score only he didn't j of a state or country other than the d PUMPS
'
Yvonne and :Mate; of Florida and that their residence -
the warm bathroom: forced her way. GrocarieaPLUMBING
ell Some sort of
: me. an argument
is unknown that there is no
thought of her again as she brushed There were signs of rigid economy was going on on number nine person In the State; of Florida the SUPPLIES STAR COFFEE MILL
her long hair honey-brown In ,the : explained by Milna In her Aitwood's service of subpoena upon whom NURSERY
hole too because voice 1
and Street Complete Line Fresh
512 Greene
sunlight f frank manner. "Pa just up and left: would bind said defendants
carried over those treetops like that they are each over the age of
"Gee you'd be pretty If you cut : left ma with the bouse and all the i twenty-one years. PHONE 348 Phone 256 Fruits and Vegetables
business.
that off and got you a permanent" kids and until we were old enough to nobody's I IT IS THEREFORE ORDERED, PHONE 597
offered Milna. "You'd never know help she had a pretty hard .time. I,I that and said they non-resident are hereby required defendantsbe to
yourself" and then she blushed. Aunt Liisa's salary helps. Both the It seems that Fred Ayala was j I appear to the Bill of Complaint
don't mean you're not pretty now, big boys George and Ore! are boat going to play with Ike Russell I filed in said cause on or beforeMonday SELECT, SEA tOODSREAD- HAPPY1'DAYS ARE HERB
but you'd look like oh, like Joan pullers. They'd like to have a boat and give Pete Taylor to Bascom I A. D. 19IS, the, otherwise and day of December allegations Jewfiso, 2 Ibs. 35e INSURANCE
Crawford, maybe. of their own then Len could help Grooms but it was no go on ac- of said bill wilt be eta as
when he's out of school. In that way, count of once before that arrange confessed IS FURTHER by said defendants.IT ORDERED. that Yellowtail Steak 2 Ibs. ..;. 35c THE KEY WEST 1to
T.1 EVER know myself repeated with me working in the cannery. ment was made and a couple of I I this order be published once a week Yellowtail o. Bone, 2 tbs. 25c Offices 319 Dnval Street A-
Anne"that's an Idea." Mom could stay at home. But" and for free meaL for four consecutive weeks in the '
gentlemen paid a I Grouper, 2 11>>.. 25cSaapper AD.
West Citizen published -
Key a newspaper
She crawled back into bed and she hunched her shoulders in a futile SUNDAY sr'JtI\
-
in said County and State. }I 2 Ibs. 2Sc
,
accepted the tray, and as she sippedthe : gesture "we won't ever get William Penababe Kemp was This October 24th. A. D. 1S35. t TELEPHONE NO. 1
hot black liquid. Milna talked.We'ye enottgh to buy a boat and an outfit. I (CIRCUIT COURT SEAL,) I, Mutton Fun, 2 Ibft 25
having good time yesterday be- ROSS C. SAWYER
a
Per Year
$2
Subscription
got a name for yon" she "Do they cost so much! Anne remembered Clerk of Circuit Court.
cause he had 43-43 which is as FRESH SHRIMP
began. i a little of what Luke had By FLORENCE E. SAWYER Try T..r Meal At
"A name for me!" questionedAnne 'said that night In Lee's library. good a score as he has had in a Deputy Clerk. Large Select Oysters Key West's Only Sunday -THE-
"Oh they could start five hundred long time. Se he teamed with KURTZ & REED Fresh Crab Meat ia A 65c .
and then she remembered. on Solicitors for Complainants. >> >> cans Paper Delmomco Restaurant
"What, Is It!" I The cooperative cannery Melvin (Cupid) Russell and play I octIS; noTl-S-lS-21 FREE PROMPT DELIVERY PORTERALLENCOMPANY
"Your name." Milna said, hugging would let them start with that, then ed against< Bob Spottswood and Business Office Citizen Buowe!..r Beer ........ lie
her knees as she !sat at the other take the payments out of their hanL" Curry Harris. The first round IV IV:THE AND:" COlTVTr FOR MONROE JUDGE'S: COUNTY COURT: LOWE FISH COMPANY Building Six c...... DiMer, --.-__
end of the bed" Is Nikki Neilsen. (Copyright DSS. ly Jeanne Eatcman) was very rosy for Doe and Mel STATE OF FLORIDA.
Like It?* [ but the last was awful and the IB re Estate of PHONE 151 PHONE 51 ------ JOe, 7Sc, and 85.
Anne learns the worst. tomr. MORENO.Deceased..
LAURA CURRY
i
"NIkkl !-:ensenrepe"ILted Anne.Today's row, from Eve; Portland, carss-: final was too many for Bob &: i
Curry. And Willie made up for To all Creditors, Legatees, Dls-
- ------ '--
- all Persons Having
tributees and
AARON McCONNELL
........................ of New York City columnistau.I lost skins he got five. Claims or Demands Against saidEstate Our Reputation is Wrap. PRITCHARD
: 536 Fleming Street
( Birthdays I thor born in Chicago, 54 years Charley Ketchum and Timo-I by Yon notified, and and each required of you to are presentany here- ped in every package: JENNIE B. DE BOER
I ago. theus Pittman got together claims or demands which yon FUNERAL HOME
.............eaa.5..e.5 .I neither would tell you who won.! or either of you, may have against of
i I the estate of Laura Curry ).Moreno
Prof Felix Frankfurter of the Maybe both did. I deceased late of Monroe County '
Vincent Astor of New York :::I:: -
Harvard Law School, born in head of the house in America, l J Monroe Florida,County to the County Florida, at Judge his of ol PRINTINGDONE I Dlplfied, Syapatbtie
Vienna, 53 years ago. born in New York 44 years ago. Now boy if you want to playa flee in the Court House of said C.*rtiy
i I good golf game just get a growing -, County Florida, at Key within West twelve Monroe. month County BY US NOTARY PUBLIC
Dr. S. Josephine Baker of New youngster in the family. The Ifroin' the time of the first publication EMBAUffEBAvbrnlaa
York noted child hygienist bornat Lewis Stone, actor, born at best score of his career was ob- of this notice to-wit, October WATCHMAKER JEWELE& -THE- LICENSED
Poughkeepsie, N. Y, 62 years "Worcester Mass? 56 years ago. i tained by William Wesley Wat- must 4. !>!>be; said duly claims sworn to or and demand presented AND ENGRAVER .
-
ap. kins. It seems Willie started! to the said County Judge ai t See Him For Your Neit Wed ARTMAtTPRESSCitizen ** 3.rvle.
with a birdie and when he finished aforesaid, or same will be barred
Gerhart famed I by limitation. SEDUCED
Mr Bertha M. Sinclair-Cowan Hanptmann nine holes he had a 39. And Datea this 4th day of October, A. ALL PRICES Bid LADY ATTENDANT. .
German dramatist-author born *.
,
("D. M. Bower"), of De Poe Bay. when he was through seven in D. 1935. Howrst 9 t. 12-1 to- Citizen OfficeFRIDAY
73 B. CURRY MORENO
Oref., noted novelist, born at years ago.Subscribe such good shape he said he needed As Administrator of the Estate ol PHONE 51 Hen
Ope. Sahrrmay Nigbta' Sleep
Cleveland. Mien., 60 years ago. two pars for a 39 and proceededto Laura Carry ),Moreno deceased. I n.ua
I to The Citizen-2 make same. Not only that but lw. CURRY,. for HARRIS Administrator. t
Franklin P. Adams ("F. P, A."), weekly, while he was getting that score ectl-ll-lJ-25; -15-SS-2 .
-
THE KEY WEST CITIZEN, PAGE FOUR
:-: SOCIETY :-:

JUNIOR CLUB NAMES COMMITTEE MEMBERS
Committee members of the Junior Woman's Club of Key West for the Club Year 1935-36 were named at a very interesting business meeting held this week. Five committees were named as follows:
Finance committee: Mrs. Dorothy McCarthy, chairman; Miss Florrie Retchings, Mrs. Horner Herrick.
Membership committee: Miss Nellie L. Russell, chairman; Mrs. Earl Julian, Miss Xenia Hoff.
House committee: Mrs. George Gomez, chairman; Miss Marguerite Goshorn, Mrs. Dumont Huddleston, Mrs. Edgar Pangle.
Entertainment committee: Miss Alice Curry, chairman; Mrs. Robert Dopp, Mrs. William Norman.
Publicity committee: Miss Florence Sawyer.

ARRANGE DANCE AT RAUL'S CLUB
Dell Woods' Orchestra is once again complete, having recently added to their line-up Albert Manucci, who will appear with the band at the dance at Raul's Club tomorrow night. The affair will start at 10:00 p. m. and last until 3:00 a. m. The new saxophone player has more than filled the vacancy created by Tommy Thompson's departure to Miami, it is said. The orchestra played with one man short last Saturday night, due to the fact that the new member was detained by "Wedding Bells." For Saturday night, Ida has prepared a luncheon to be served in the atmosphere of Raul's. A new library of Broadway releases will be rendered for the first time in Key West, to the best of the orchestra's ability.

WHEEL CLUB PLANS RIDE FOR TONIGHT
Members of the Key West Coral Wheel Club will enjoy one of their regular rides tonight, starting from the Fred Kirtland home on South street at 8 o'clock. All members of the club are expected to be present, and other wheel enthusiasts who desire to become members are invited.

ARRANGE DANCE AT HABANA-MADRID
Big things are planned for tomorrow night at the Habana-Madrid's big weekly dance. The affair will start at the usual hour of 10:00 o'clock. The Habana-Madrid's orchestra will play an unusually good program of dance music, and Manager Carbonell will be on hand to see that everyone has a good time.

DANCE TONIGHT AT CUBAN CLUB
Members of the Senior Class of the Convent of Mary Immaculate will sponsor a big dance at the Cuban Club tonight, starting at 9:00 o'clock. John Pritchard's Orchestra will be there to furnish a good program of dance music for the occasion.

RECORD NUMBER OF DISASTERS IN YEAR; RELIEF GIVEN IN 128 BY RED CROSS
(Photo captions: Left, Red Cross worker aids family in New York state floods; injured father tells how mother and children were trapped in flooded house until rescued by Red Cross. Right, terrain stripped of homes and verdure by tornado in Gloster, Mississippi. Insert, Admiral Cary T. Grayson, new chairman of Red Cross, who directs relief work.)
OLD Mother Nature visited a number of varied cataclysms on her children during the past year, causing distress over all the nation to many thousands of men, women and children. Her repertoire of disastrous occurrences included drought and dust storms in the midwest, and explosions, fires, floods, epidemics of disease, shipwrecks, tornadoes and hurricanes in many sections.
As a result, the American Red Cross reports that this year the organization gave relief in the greatest number of catastrophes in any one single year in its history. Relief was carried to victims of 128 disasters, which occurred in 37 of the 48 states and Alaska. Food, housing, clothing and medical aid were given to 110,000 persons in 306 counties, or in almost one-tenth of the territory of the nation.
Two disasters which called into the field every available worker of the relief forces of the Red Cross were the floods in New York state in August and the Florida hurricane in September, both of which claimed a heavy toll of life and caused great property damage. In New York state the Red Cross had more than 5,000 families listed for rehabilitation aid after the storm wreckage was cleared away. In Florida the Red Cross prepared to aid a thousand families, and also to act in problems of the dependents of more than 300 world war veterans in road construction camps in the keys who lost their lives or were among the missing in the hurricane.
The work of mercy for these many sufferers was directed personally by Admiral Cary T. Grayson, new chairman of the Red Cross. Funds are provided for this type of work by memberships in the Red Cross and by special relief funds raised in a restricted area. Memberships in the Red Cross are sought each year at the annual roll call period, and support both the local chapter work and the national disaster, public health nursing, war veteran and other work of the society.

VIEWS AND REVIEWS
What They Say, Whether Right Or Wrong
Howard Winner, cameraman in Ethiopia: "Hotel accommodations are lousy, the bugs and fleas have me covered with bites and sores, and the food is enough to poison a goat."
Laura Momttock, dietician: "A person can do a much better day's work on a good breakfast."
Morris Fishbein, doctor: "Long-haired dogs develop rabies less frequently than short-haired dogs."
Mrs. O. D. Oliphant, Legion Auxiliary official: "The first pacifist of this land is Mrs. Franklin D. Roosevelt."
Mrs. Franklin D. Roosevelt, wife of the President: "Perhaps the first and most practical step that nations could take towards peace would be to buy out the munitions makers."
Ed Howe, Kansas author: "My hope is to go to bed one night, after a hard day's work, and never waken. That would be the absolute triumph."
William E. Borah, fighting Republican, on the Old Guard: "Heaven knows the Old Guard has little to offer in the way of a program except repentance, and no one would accept their professions."
Howard C. Naffziger, doctor, of California: "The skull is the one place where you can remove the tissue that harbors a tumor, sterilize the bone and replant it."
Ralph C. Epstein, business school executive: "For a plentiful distribution of the wisdom of economic 'doctors' ... suspect (with reason) that their permanent cures may be worse than the diseases."

CARD OF THANKS
To my loyal supporters in Tuesday's election: I wish to thank and assure you that mere words are inadequate to express to you my sincere appreciation for the support given me. And to those who did not vote for me, I want to say that there is no malice within me against you; your opinion and privilege will always be respected and cannot change the friendship which I hold for you. Politics is one thing, and I hope friendship is another. Yours sincerely and truly, AMBROSE W. CLEARE. nov15-1t

THANKS, VOTERS!
I, Joe Cabrera, your Councilman for the past two years, defeated for re-election, have endeavored to personally thank my ardent friends and supporters, and to those that I have not been able to see in person I now extend my sincere thanks and appreciation for their efforts and recognition of my past services for the welfare of the city. I will leave my duties with that satisfaction that I served faithfully, honest and impartial, and regret that the majority of the voters failed to weigh and endorse my record as a whole. I hope that I will be able to repay each and everyone in the future for their kindness, and hope that they will join me in congratulating the newly elected officials and cooperate for the best interest of our city. Believe me sincerely, and once a friend, always one. JOE CABRERA. nov15-1t

APPRECIATION
Although unopposed in Tuesday's election, I want to convey to the citizens of Key West my sincere appreciation of the handsome complimentary vote given me. In the future, as in the past, I shall in every way endeavor to justify the confidence reposed in me by faithful administration of the office of city treasurer. WILLIAM T. ARCHER. nov15-1t

BEAUTIFICATION UNIT ASSEMBLES WEEKLY
CLASS HELD MEETING HERE THIS MORNING
Another of the weekly classes in beautification and plant cultivation being carried on by the beautification department of the Key West Administration was held this morning, when all foremen and supervisors of beautification projects met in the Key West Administration Building. Ralph E. Gunn, in charge of beautification work, is conducting the weekly classes, taking one subject at each session and outlining the principles involved in the particular operation being discussed. The sessions are turning into highly educational classes. During them, foremen and supervisors ask questions relative to their work, learn of the latest developed methods of plant cultivation and treatment of plants, and have a round table discussion of various problems which confront them during the week.

NEW ARRIVAL IN ZACHRY FAMILY
Mr. and Mrs. J. W. Zachry, Panama, Canal Zone, announce the birth of a boy weighing 7 1/2 pounds. Mother and son are reported as doing nicely. Mrs. Zachry is a former Key Wester, Miss Violet Roker, daughter of Mr. and Mrs. John Roker, who are now making their home in Riviera, Fla., from where news of their new grandson was sent to The Citizen today.

PERSONAL MENTION
Clifford G. Hicks, WPA assistant supervisor of construction projects in this district, who was on a business trip to Miami, returned by plane this morning.
Harry Baldwin, first assistant keeper at Carysfort lighthouse, arrived this week from the light to spend his quarterly vacation with his family.
Mrs. Victor Moffat, who was visiting for several weeks in Miami, was a returning passenger on the morning plane today.

Haile Selassie warns his warriors to keep their "shammas" (shirts) dirty, to prevent attracting the attention of Italians. We'd give a pretty penny to be a laundryman in Ethiopia if the soldiers wore white pants.

SUNDAY DINNER
By ANN PAGE
PORK is really cheaper, not just a special cut, but loins, fresh hams, spareribs and sausage. Other meat prices remain about as last week. White eggs are a little cheaper and butter again somewhat higher. Green beans should be plentiful, barring accidents, and lower priced than in weeks. Spinach and cauliflower are unusually attractive in quality and price. Idaho baking potatoes and sweet potatoes are worthy of special mention. Apple Week continues, with this most popular of fruits available in many varieties at low cost. Grapefruit are taking an important place in the fruit market. Florida oranges are arriving in greater volume and improving in quality. Here are three menus planned to suit graded budgets.
Low Cost Dinner: Shoulder Lamb Chops, Baked Potatoes, Creamed Spinach, Bread and Butter, Brown Betty, Tea or Coffee, Milk.
Medium Cost Dinner: Roast Pork, Baked Yams, Buttered Onions, Bread and Butter, Rice Pudding, Tea or Coffee, Milk.
Very Special Dinner: Tomato Juice, Stuffed Roast, Apple Sauce, Mashed Yams, Rutabaga Turnips, Green Beans, Rolls and Butter, Baked Caramel Custard, Coffee.

STEAMER OZARK COMES TO PORT
BRAZOS DUE TO ARRIVE TONIGHT FROM GALVESTON
Steamship Ozark, of the Clyde-Mallory Lines, arrived in port 9 o'clock this morning from Jacksonville, discharged heavy cargo, and sailed this afternoon for New Orleans.
Steamship Brazos is due to arrive tonight from Galveston.
Power Boat Sea Belle was in port today, took on a cargo of containers for the Doxsie Sea Food company, and sailed for Caxambas, Fla.
Steamship Granada, of the Standard Fruit and Steamship company, from Philadelphia to Frontera, Mexico, is due in port today for fuel oil at the Porter Dock company. Steamship Ceiba, of the same company, is due tomorrow from New York for fuel oil at the Porter Dock.

REV. PHILLIPS VISITING HERE
Among the visitors in Key West today is Rev. J. A. Phillips, superintendent of Latin Missions in Florida, with headquarters in Tampa. Rev. Phillips will preach at Salvador Church Sunday evening at 7:30 o'clock, and will also hold the quarterly conference of the institution.

WILL ROGERS MEMORIAL FUND
Local Committee for Key West
To the Editor: Wishing to have a part in perpetuating the memory of one of our most beloved and useful citizens, I enclose herewith my contribution of $______ to the Will Rogers Memorial Fund. I understand that this gift will be added to others from Key West and will go, without any deductions whatsoever, to the National Fund, to be used as the Memorial Committee may determine.
Name ______
Address ______

MONROE THEATER: Joan Blondell and Glenda Farrell in WE'RE IN THE MONEY. Matinee: Balcony, 10c; Orchestra, 15-20c. Night: 15-25c.

PALACE: JOHN WAYNE in THE NEW FRONTIER. Serial and Comedy. Matinee: 5-10c; Night: 10-15c.

DANCE Tonight From 9 Till ? CUBAN CLUB. Auspices Convent Senior Class. Gents, 45c; Ladies Free.

THE P. & O. STEAMSHIP CO. United States Fast Mail Routes for Port Tampa, Havana, West Indies, Miami. Leave Key West for Havana Mondays and Thursdays. Leave Key West for Port Tampa on Tuesdays and Fridays. Leave Key West for Miami Wednesdays, 10:00 P. M. Sale of tickets for Havana will stop at 8:00 A. M. For further information call Phone 71. J. H. COSTAR, Agent.

ACCREDITED AGENCY For The Following Nationally Advertised Lines: HART SCHAFFNER & MARX CLOTHES; NUNN-BUSH SHOES; EDGERTON SHOES; CHENEY CRAVATS; MANHATTAN SHIRTS, PAJAMAS, UNDERWEAR AND SPORTSWEAR; MONITO SOCKS; MALLORY HATS; BELTS AND SUSPENDERS BY PIONEER; HARTMANN LUGGAGE; COOPER'S UNDERWEAR. MENDELL'S EXCLUSIVE MEN'S SHOP, 517 Duval Street.

666: Liquid, Tablets, Salve, Nose Drops. Checks COLDS and FEVER first day, HEADACHES in 30 minutes.

KANTOR'S, INC., The House of Blue Serges. French Blue Oswego Serge Suits for men and students, guaranteed fast colors, specially priced, $15.50 to $19.50. HATS: New Fall Hats, new colors, smart styles, silk-lined, only $1.95. SHOES: To close out a big lot of Peters Shoes, $1.95 up.

KEY WEST COLONIAL HOTEL. In the Center of the Business and Theater District. First Class. Fireproof. Sensible Rates. COFFEE SHOP NOW OPEN. Garage. Elevator. Popular Prices.

LEE BAKER'S MEAT MARKET. Heavy Western STEER MEAT; Veal; Spring Lamb; Pork Hams; Pork Shoulder; Sugar Cured Hams; HENS and FRYERS; All Ingredients for Souse. Remember, we carry the best meats, at money saving prices. 822 FLEMING STREET. PHONE 695. We Open and Make Deliveries Sunday Morning.

WAVES: Two Permanents for $5.00. Better Waves, $5.00 and up. MRS. MILLER, 407 South Street. Phone 574.

Medicated with ingredients of Vicks VapoRub.

THE SEMINOLE, Jacksonville, Florida. A human, home-like institution where you will find your individual comfort a matter of great importance. A steel fireproof building located in the heart of the city. Every room with combination tub and shower bath, radio, electric ceiling fan, slat door for summer ventilation, comfortable beds with mattresses of inner-spring construction, and individual reading lamps. Rooms with private bath; slight increase for double occupancy.

OVERSEA TRANSPORTATION CO. Owned and operated by Thompson Fish Co., Inc., Key West, Florida. Regular and reliable freight service between Key West and Miami. Now making deliveries at Key West on Tuesday and Friday mornings. We furnish pick-up and delivery service. OFFICE: 813 CAROLINE STREET. TELEPHONES 68 AND 92.

FOR SALE: Furnished two-story house and lot at 1307 Whitehead street. In exclusive neighborhood. Beautiful view of the sea and overlooking Coral Park. For price and terms apply to L. P. ARTMAN, The Citizen Office, or residence, 1309 Whitehead Street.

"Make Your Paper Pay You Dividends!" Watch THE KEY WEST CITIZEN'S advertisers closely for special savings on your household necessities. The most interesting news in the paper today is the advertising, where you can save many dollars on the family budget by patronizing those merchants who advertise their goods. THESE PRICES ARE BRINGING THE COST OF LIVING DOWN TO THE LEVEL OF EVERYBODY'S PURSE. A merchant who advertises his prices charged a customer 20 cents for an article that a non-advertising merchant charged 40 cents for. Why this big difference? By patronizing the merchants who present their prices frankly and openly each week, you help to KEEP OLD HIGH COST OF LIVING OUT OF KEY WEST. Make your paper pay you dividends.

Subscribe to The Citizen: 20c weekly.
89cb90c28e857f1ab7aed6b486053517
11161ed2fe716e97699cdb6d7e2e9a797e1ccb97
describe
'100683' 'info:fdaE20090414_AAABRDfileF20090414_AACTFA' 'sip-files00257.QC.jpg'
12d42e8167953749d459dfe724b2b0c5
3e6b51e86f3d7041bc8b8bb5b55677a0656ce88f
describe
'9430042' 'info:fdaE20090414_AAABRDfileF20090414_AACTFB' 'sip-files00257.tif'
bd6d490c632e047dbcb589404d04a621
aa8b36610af84118225e289699556ac64dbf5a51
describe
Value offset not word-aligned: 293
Value offset not word-aligned
'55913' 'info:fdaE20090414_AAABRDfileF20090414_AACTFC' 'sip-files00257.txt'
fb3f36248033df4908ab3ecb02e91a81
a8e5e4515e3c301cf35208d413c8d9d35ce32611
'2018-05-21T13:58:16-04:00'
describe
Invalid character
Not valid first byte of UTF-8 encoding
Invalid character
Not valid first byte of UTF-8 encoding
'22482' 'info:fdaE20090414_AAABRDfileF20090414_AACTFD' 'sip-files00257thm.jpg'
d3e26b7b48363abbba09b9a0f20d49d7
84a0f783417db5faa158ecf866c5a99f21e095ad
describe
'1156549' 'info:fdaE20090414_AAABRDfileF20090414_AACTFE' 'sip-files00258.jp2'
df15466c025f91c43b7f0c585a672340
aced86c8220b0a2ab50a652b32575e248445fb16
describe
'929730' 'info:fdaE20090414_AAABRDfileF20090414_AACTFF' 'sip-files00258.jpg'
ca419dbecec7d7e942524424f9412983
5f8a6ab8b1865d075b5b071382121ba84ebdb699
describe
'98987' 'info:fdaE20090414_AAABRDfileF20090414_AACTFG' 'sip-files00258.QC.jpg'
7c3489a131e61df251016a100225413d
0ee23c9626f099b8c58af2a78a257846ebe29c30
describe
'9245507' 'info:fdaE20090414_AAABRDfileF20090414_AACTFH' 'sip-files00258.tif'
44a6e536b5a25ac26b02de5a2f4c2938
c193b25a57f1e06644a44e34c4a376a4ae809712
describe
Value offset not word-aligned: 293
Value offset not word-aligned
'45957' 'info:fdaE20090414_AAABRDfileF20090414_AACTFI' 'sip-files00258.txt'
efc9adc80fa4bc99a0514d3903abbdb2
7c8f599a248755ec2a20015194cb22ad91f37916
describe
Invalid character
Not valid first byte of UTF-8 encoding
Invalid character
Not valid first byte of UTF-8 encoding
'22668' 'info:fdaE20090414_AAABRDfileF20090414_AACTFJ' 'sip-files00258thm.jpg'
3a3da6b2139db5894b77d659e06401b0
d68a21802a21108fd77887ee1df8e6378b87c405
describe
'10168' 'info:fdaE20090414_AAABRDfileF20090414_AACTFK' 'sip-files1935111501.xml'
7532069031acf5a92b34b59c6dddf1e9
f6c9e59b865a57da353d7198326b26f577922fb4
describe
The element type "div" must be terminated by the matching end-tag "
".
'2018-05-21T13:58:22-04:00' 'mixed'
xml resolution
BROKEN_LINK schema
The element type "div" must be terminated by the matching end-tag "".
'11676' 'info:fdaE20090414_AAABRDfileF20090414_AACTFN' 'sip-filesUF00048666_00352.mets'
4540130c96139b16212f49ddce81cee2
c20cc58cac8d6e622c49042f9c1ee5123216d77d
describe
TargetNamespace.1: Expecting namespace '', but the target namespace of the schema document is ''.
'2018-05-21T13:58:21-04:00'
xml resolution
TargetNamespace.1: Expecting namespace '', but the target namespace of the schema document is ''.
'11425' 'info:fdaE20090414_AAABRDfileF20090414_AACTFQ' 'sip-filesUF00048666_00352.xml'
3c7d91427ec32469bf9890b66a2f9a7a
d05d64ca5396696e693a7e3a9006ba6ad9be9f02 15, | https://ufdc.ufl.edu/UF00048666/00352 | CC-MAIN-2021-25 | refinedweb | 14,800 | 75.2 |
Investors in Liberty Global plc - Class A Ordinary Shares (Symbol: LBTYA) saw new options begin trading this week, for the November 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the LBTYA options chain for the new November 15 LBTYA, that could represent an attractive alternative to paying $24.80/share today.
Because the $20 97%..86% annualized — at Stock Options Channel we call this the YieldBoost.
Below is a chart showing the trailing twelve month trading history for Liberty Global plc - Class A Ordinary Shares, and highlighting in green where the $20.00 strike is located relative to that history:
Turning to the calls side of the option chain, the call contract at the $25.00 strike price has a current bid of 70 cents. If an investor was to purchase shares of LBTYA stock at the current price level of $24.80/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $25.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 3.63% if the stock gets called away at the November 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if LBTYA shares really soar, which is why looking at the trailing twelve month trading history for Liberty Global plc - Class A Ordinary Shares, as well as studying the business fundamentals becomes important. Below is a chart showing LBTYA's trailing twelve month trading history, with the $25.00 strike highlighted in red:
Considering the fact that the .82% boost of extra return to the investor, or 21.01% annualized, which we refer to as the YieldBoost.
The implied volatility in the put contract example is 57%, while the implied volatility in the call contract example is 44%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $24.80). | https://www.nasdaq.com/articles/interesting-lbtya-put-and-call-options-for-november-15th-2019-09-27 | CC-MAIN-2021-17 | refinedweb | 345 | 61.36 |
Combining LESS with ASP.NET
Everyone knows how cool LESS is. If you’re not, then here’s the elevator speech. LESS extends CSS with dynamic behavious such as variables, mixins, namespaces and functions, it makes CSS easy to work with. Now, it’s important to remember it doesn’t code your CSS for you – it isn’t a magic CSS editor. You still need to know how to work with CSS. It allows you to write CSS once and use it in multiple places. This is something I’ve wanted for a long time and now it’s here.
There are other libraries which perform similar functions, such as SASS, but I’ll focus on LESS as that’s what I’m familiar with. I’m going to be concentrating on how to use this with ASP.NET.
I’m going to be using Visual Studio 2010 for this demonstration, as I had a few issues using LESS with Visual Studio 11.
Running LESS On the Client
LESS can be run purely on the client or from the server. To run it on the client is a simple three step process.
- Add a reference to the LESS JavaScript file
- Update the rel value in the LINK tag to rel=”stylesheet/less”
- Add a new .less file to your project and reference that in your LINK tag
Updating the rel value to stylesheet/less is necessary because the LESS library looks for this value. Once it’s found it processes that file. Your page should look like this now.
<link href="styles/my.less" rel="stylesheet/less" /> <script src=""></script>
I’m referencing a file called my.less, so let’s define some LESS code to ensure this is working.
@back-color: #000; @font-color: #fff; body { background-color: @back-color; font-size: .85em; font-family: "Trebuchet MS", Verdana, Helvetica, Sans-Serif; margin: 0; padding: 0; color: @font-color; }
I’ll skip over the syntax for now, but if you run the website and look in the developer tools, you’ll see that LESS code is being served as valid CSS.
Running LESS On the Server
There are several ways to install LESS on the server. The easiest approach is via NuGet. There’s a package called dotless. Install it with the following command inside Visual Studio.
Once installed, you can remove the JavaScript reference from your page. Also make sure you update the LINK tag and remove the /less from the rel attribute.
<link href="styles/my.less" rel="stylesheet" />
The package has also added some entries to your web.config file. There’s a new configSection defined.
<configSections> <section name="dotless" type="dotless.Core.configuration.DotlessConfigurationSectionHandler, dotless.Core" /> </configSections/> <dotless minifyCss="false" cache="true" web="false" />
And a new HTTP handler has been added to cater for .less requests.
<system.webServer> <handlers> <add name="dotless" path="*.less" verb="GET" type="dotless.Core.LessCssHttpHandler,dotless.Core" resourceType="File" preCondition="" /> </handlers> </system.webServer>
The nice feature about dotless is that it can automatically minify the CSS for you via the minifyCss attribute. If you update that to true and run the website now, you’ll see the minified CSS.
That’s it. LESS is now running on the server.
When To Use It?
I think LESS is great for development, but when you need your site to run as fast as possible, you don’t want to transform each .less request on the fly. This is why I’d recommend using this only during development. The good news is when you install dotless, it installs the dotless compiler. This can be in the packagesdotless1.3.0.0Tool folder in your website folder.
You can add this to your pre-build event from the build properties tab.
“$(SolutionDir)packagesdotless.1.3.0.0tooldotless.Compiler.exe” “$(ProjectDir)contentmy.less” “$(ProjectDir)contentmy.css”
This way you get the best of both worlds.
Before moving away from Visual Studio, there are extensions you can install that gives you the familiar syntax highlighting. LessExtension seems like one of the better ones.
LESS Syntax
I haven’t covered any of the syntax in this article. I wanted to focus on LESS with ASP.NET. Ivaylo Gerchev has a good article on the syntax and that can be found here.
I think LESS is a must tool to have during development. It will make your life easier when coding CSS.
- Ediz
- USPaperchaser
- Wyatt Barnett
- Nick G
- Brian | http://www.sitepoint.com/combining-less-with-asp-net/ | CC-MAIN-2014-41 | refinedweb | 740 | 67.65 |
NAMEI(9) MidnightBSD Kernel Developer’s Manual NAMEI(9)
NAME
namei, NDINIT, NDFREE, NDHASGIANT — pathname translation and lookup operations
SYNOPSIS
#include <sys/param.h>
#include <sys/proc.h>
#include <sys/namei.h>
int
namei(struct nameidata *ndp);
void
NDINIT(struct nameidata *ndp, u_long op, u_long flags, enum uio_seg segflg, const char *namep, struct thread *td);
void
NDFREE(struct nameidata *ndp, const uint flags);
int
NDHASGIANT(struct nameidata *ndp);
DESCRIPTION
The namei facility allows the client to perform pathname translation and lookup operations. The namei functions will increment the reference count for the vnode in question. The reference count has to be decremented after use of the vnode, by using either vrele(9) or vput(9), depending on whether the LOCKLEAF flag was specified or not. If the Giant lock is required, namei will acquire it if the caller indicates it is MPSAFE, in which case the caller must later release Giant based on the results of NDHASGIANT().
The NDINIT() function is used to initialize namei components. It takes the following arguments:
ndp
The struct nameidata to initialize.
op
The operation which namei() will perform. The following operations are valid: LOOKUP, CREATE, DELETE, and RENAME. The latter three are just setup for those effects; just calling namei() will not result in VOP_RENAME() being called.
flags
Operation flags. Several of these can be effective at the same time.
segflg
UIO segment indicator. This indicates if the name of the object is in userspace (UIO_USERSPACE) or in the kernel address space (UIO_SYSSPACE).
namep
Pointer to the component’s pathname buffer (the file or directory name that will be looked up).
td
The thread context to use for namei operations and locks.
NAMEI OPERATION FLAGS
The namei() function takes the following set of ‘‘operation flags’’ that influence its operation:
LOCKLEAF
Lock vnode on return. This is a full lock of the vnode; the VOP_UNLOCK(9) should be used to release the lock (or vput(9) which is equivalent to calling VOP_UNLOCK(9) followed by vrele(9), all in one).
LOCKPARENT
This flag lets the namei() function return the parent (directory) vnode, ni_dvp in locked state, unless it is identical to ni_vp, in which case ni_dvp is not locked per se (but may be locked due to LOCKLEAF). If a lock is enforced, it should be released using vput(9) or VOP_UNLOCK(9) and vrele(9).
WANTPARENT
This flag allows the namei() function to return the parent (directory) vnode in an unlocked state. The parent vnode must be released separately by using vrele(9).
MPSAFE
With this flag set, namei() will conditionally acquire Giant if it is required by a traversed file system. MPSAFE callers should pass the results of NDHASGIANT() to VFS_UNLOCK_GIANT in order to conditionally release Giant if necessary.OBJ
Do not call vfs_object_create() for the returned vnode, even though it meets required criteria for VM support.
NOFOLLOW
Do not follow symbolic links (pseudo). This flag is not looked for by the actual code, which looks for FOLLOW. NOFOLLOW is used to indicate to the source code reader that symlinks are intentionally not followed.
SAVENAME
Do not free the pathname buffer at the end of the namei() invocation; instead, free it later in NDFREE() so that the caller may access the pathname buffer. See below for details.
SAVESTART
Retain an additional reference to the parent directory; do not free the pathname buffer. See below for details.
ALLOCATED ELEMENTS
The nameidata structure is composed of the following fields:
ni_startdir
In the normal case, this is either the current directory or the root. It is the current directory if the name passed in does not start with ‘/’ and we have not gone through any symlinks with an absolute path, and the root otherwise.
In this case, it is only used by lookup(), and should not be considered valid after a call to namei(). If SAVESTART is set, this is set to the same as ni_dvp, with an extra vref(9). To block NDFREE() from releasing ni_startdir, the NDF_NO_STARTDIR_RELE can be set.
ni_dvp
Vnode pointer to directory of the object on which lookup is performed. This is available on successful return if LOCKPARENT or WANTPARENT is set. It is locked if LOCKPARENT is set. Freeing this in NDFREE() can be inhibited by NDF_NO_DVP_RELE, NDF_NO_DVP_PUT, or NDF_NO_DVP_UNLOCK (with the obvious effects).
ni_vp
Vnode pointer to the resulting object, NULL otherwise. The v_usecount field of this vnode is incremented. If LOCKLEAF is set, it is also locked.
Freeing this in NDFREE() can be inhibited by NDF_NO_VP_RELE, NDF_NO_VP_PUT, or NDF_NO_VP_UNLOCK (with the obvious effects).
ni_cnd.cn_pnbuf
The pathname buffer contains the location of the file or directory that will be used by the namei operations. It is managed by the uma(9) zone allocation interface. If the SAVESTART or SAVENAME flag is set, then the pathname buffer is available after calling the namei() function.
To only deallocate resources used by the pathname buffer, ni_cnd.cn_pnbuf, then NDF_ONLY_PNBUF flag can be passed to the NDFREE() function. To keep the pathname buffer intact, the NDF_NO_FREE_PNBUF flag can be passed to the NDFREE() function.
FILES
src/sys/kern/vfs_lookup.c
SEE ALSO
uio(9), uma(9), VFS(9), VFS_UNLOCK_GIANT(9), vnode(9), vput(9), vref(9)
AUTHORS
This manual page was written by Eivind Eklund 〈eivind@FreeBSD.org〉 and later significantly revised by Hiten M. Pandya 〈hmp@FreeBSD.org〉.
BUGS
The LOCKPARENT flag does not always result in the parent vnode being locked. This results in complications when the LOCKPARENT is used. In order to solve this for the cases where both LOCKPARENT and LOCKLEAF are used, it is necessary to resort to recursive locking.
Non-MPSAFE file systems exist, requiring callers to conditionally unlock Giant.
MidnightBSD 0.3 September 21, 2005 MidnightBSD 0.3 | http://www.midnightbsd.org/documentation/man/NDINIT.9.html | CC-MAIN-2014-15 | refinedweb | 953 | 56.15 |
This is your resource to discuss support topics with your peers, and learn from each other.
12-28-2010 10:41 PM
My app has a timer which is started when the user clicks a button. But when the app is minimized, the timer pauses. I would like it to keep going so that a user can minimize the app, perform another task and then return to the app and have the timer be up to date, so to speak.
We've seen that videos and songs will keep playing when the apps that control them are minimized. Can this be done for a Timer in an AIR app?
Thanks,
David
12-29-2010 07:33 AM
Hi,
What I think would work for you is to get the current time in milliseconds when your app is minimized, then again when it is resumed. A simple calculation will find out how long it was 'asleep' and what needs to be done to continue as if it hadn't stopped.
Harry
12-29-2010 07:55 AM - edited 12-29-2010 08:03 AM
interesting, i haven't arrived yet at working with the Activate and Deactivate events for my application, but i was under the impression that we were responsible for stopping active processes during a deactivation in order to save on system resources, that the application would continue running normally in a minimized state.
try preventing the default action (a pause of your application) when the application becomes minimized, or try continuing your application from where it would normally become deactivated:
NativateApplication.nativeApplication.addEventList
ener(Event.DEACTIVATE, callback) private function callback(evt:Event):void { evt.preventDefault(); //or //continue application run }ener(Event.DEACTIVATE, callback) private function callback(evt:Event):void { evt.preventDefault(); //or //continue application run }
12-29-2010 07:59 AM
David, you seem to be asking for two unrelated things. You want a "timer" (as in an instance of the Timer class? or something else?) which is "up to date" when the user returns to the app. What's that actually mean? The system date, for example, will certainly be up to date. A periodic Timer would start firing again when the app was reactivated. What exactly do you want there?
You also say you want behaviour similar to the video player that runs while minimized... but that's an entirely different thing. It means the app doesn't actually get paused, but keeps running, with full access to the system even while minimized.
I don't think the two things are the same, and it's not clear which, or what, you actually want.
12-29-2010 08:08 AM
@peter: its clear David wants a function of his app to continually run in the background no matter what state his app is in (activated or deactivated).
@david: i think its best to take what harry and darkin said. since ur timer starts and keeps going and stops when it is deactivated, what you can do is listen for the deactivate event. then record the time when the app was deactivated and then onces the app is reactivated record the current time. subtract the two numbers and you'll get the total amount of time your app was deactivated and just add that time to the timer counter and it should be "up to date".
its not real time but manipulating the timer in such a way with events can get you the effect. good luck!
12-29-2010 08:24 AM
JRab wrote:
@peter: its clear David wants a function of his app to continually run in the background no matter what state his app is in (activated or deactivated).
Thanks, Joynal, but when he says "... a user can minimize the app, perform another task and then return to the app and have the timer be up to date, so to speak." it most certainly is not clear (to me) that he expects dynamic behaviour from his app while minimized.
I agree that's a likely interpretation. Just not a certain one. As I've said, I try not to read too much into what people really want, after many years of assisting with requirements definition, problem reports, and such. (In this case, on the balance of probabilities I agree you're likely right about the interpretation, but that doesn't mean my comments won't help David be clearer in future posts, saving him and all of us wasted time.)
12-29-2010 08:34 AM
i just ran the following code thru the simulator and the application did not pause when it was minimized or when i switched to another application.
the debugger continued to trace random numbers from 0 to 10k, albeit a touch slower when the application was deactivated, but i feel that might have something to do with the simulator.
package { import flash.display.Sprite; import flash.events.Event; public class PlayBookTest extends Sprite { public function PlayBookTest() { init(); } private function init():void { stage.addEventListener(Event.ENTER_FRAME, enterFrameEventHandler); } private function enterFrameEventHandler(evt:Event):void { trace(Math.random() * 10000); } } }
12-29-2010 08:37 AM
hey peter,
sorry didnt mean to make it sound like you werent helping. was justs stating the OP's intention. although i agree that the OP could have been more clear (a lot of times they could be) but sometimes its difficult for them to convey what they want. and if we're wrong in assuming what they want im sure they'll reply with a correction.
but i absolutely agree that your post will help them convey their message better in the future.
12-29-2010 09:01 AM - edited 12-29-2010 09:05 AM
TheDarkIn1978 wrote:
... albeit a touch slower when the application was deactivated, but i feel that might have something to do with the simulator.
Kudos for the quick test. By the way, I've read recently (here's a similar article though it's not the one I read) that applications that are deactivated have their frame rate decreased to about 3 frames per second. Presumably there will be APIs to modify that behaviour, but of course nothing official has been said about it yet. Anyway, that could explain the apparent decrease in update rate that you saw.
12-29-2010 11:24 AM
Thanks to all for the thoughtful replies.
To clarify, what I am referring to in my original post as "a timer" was an instance of the Timer class. The Timer is being used as a game timer, like a timer for increasing blinds in Texas Hold 'Em Poker. As such, it's unfortunate that there's no apparent way of keeping the timer going when the app is minimized.
The best solution seems to be to catch the current time in miliseconds using Date() when the app is minimized/deactivated and then to catch the current time in miliseconds again when it is reactivated, and then use the difference to update the Timer. Until a better solution is found, this may be what I'll have to do.
Thanks again to all,
David | https://supportforums.blackberry.com/t5/Adobe-AIR-Development/How-to-keep-timer-going-when-app-is-minimized/m-p/709327 | CC-MAIN-2016-44 | refinedweb | 1,186 | 62.27 |
In the last topic, we studied different types of loops. We also saw how loops can be nested.
Normally, if we have to choose one case among many choices,
if-else is used. But if the number of choices is large,
switch..case makes it a bit easier and less complex.
Let's control Case Wise
switch...case is another way to control and decide the execution of statements other than
if/else. This is used when we are given a number of choices (cases) and we want to perform a different task for each choice.
Let's first have a look at its syntax.
switch(expression)
{
case constant1:
statement(s);
break;
case constant2:
statement(s);
break;
/* you can give any number of cases */
default:
statement(s);
}
In
switch...case, the value of expression enclosed within the brackets ( ) following switch is checked. If the value of the
expression matches the value of
constant in any of the
case, the
statement(s) corresponding to that case are executed.
If expression does not match any of the constant values, then the statements corresponding to
default are executed.
Let's see an example.
#include <stdio.h> int main() { char grade ; printf("Enter your grade\n"); scanf(" %c" , &grade); switch(grade) { case 'A': printf("Excellent!\n"); break; case 'B': printf("Outstanding!\n"); break; case 'C': printf("Good!\n"); break; case 'D': printf("Can do better\n"); break; case 'E': printf("Just passed\n"); break; case 'F': printf("You failed\n"); break; default: printf("Invalid grade\n"); } return 0; }
D
Can do better
break is used to break or terminate a loop whenever we want and is also used with
switch.
In this example, the value of 'grade' is 'D'. Since the value of the constants of the first three cases is not 'D', so
case 'D' will be executed and 'Can do better' will be printed. Then
break statement will terminate the execution without checking the rest of the cases.
If there is no
break in any statement, then after execution of the correct case, every case will also be executed. Look at the following code for an example.
#include <stdio.h> int main() { char grade ; printf("Enter your grade\n"); scanf(" %c" , &grade); switch(grade) { case'A': printf("Excellent!\n"); case 'B': printf("Outstanding!\n"); case 'C': printf("Good!\n"); case 'D': printf("Can do better\n"); case 'E': printf("Just passed\n"); case 'F': printf("You failed\n"); default: printf("Invalid grade\n"); } return 0; }
D
Can do better
Just passed
You failed
In the above example, value of grade is 'D', so the control jumped to
case 'D'. Since there is no
break statement after any case, so all the statements after case 'D' also get executed.
As you can see, all the cases after case D have been executed.
breakstatement.
Now let's see an example with the expression value as an integer.
#include <stdio.h> int main() { int i = 2; switch(i) { case 1: printf("Number is 1\n"); break; case 2: printf("Number is 2\n"); break; default: printf("Number is greater than 2\n"); } return 0; }
Using break with loops
We can also terminate a loop in the middle of its execution using
break. Just type
break; after the statement after which you want to break the loop.
As simple as that!
Let's consider an example.
#include <stdio.h> int main() { int a; for(a = 1; a <= 10; a ++) { printf("Hello World\n"); if(a == 2) { //loop will now stop break; } } return 0; }
Hello World
In this example, after the first iteration of the loop,
a++ increases the value of 'a' to 2 and 'Hello World' got printed. Since the condition of
if satisfies this time,
break will be executed and the loop will terminate.
Continue
The continue statement works similar to break statement. The only difference is that
break statement
terminates the loop whereas
continue statement passes control to the conditional test i.e., where the condition is checked, skipping the rest of the statements of the loop.
#include <stdio.h> int main() { int a ; for(a = 1; a <= 10; a ++) { printf("Hello World\n"); if (a == 2) { //this time further statements will not be executed. Control will go to for continue; } printf("a is not 2\n"); } return 0; }
a is not 2
Hello World
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Hello World
a is not 2
Notice that at the second time, 'a is not 2' is not printed. It means that when 'a' was 2, then 'continue' got executed and control went to for loop without executing further codes. | https://www.codesdope.com/c-decide-and-loop/ | CC-MAIN-2021-39 | refinedweb | 803 | 70.73 |
This tutorial gets you up and running with a simple chat bot for Twitch channel.
Who's this tutorial for?
Beginners to coding and experienced coders new to Python.
Contents
We'll start out by setting up the accounts, getting the secrets, and installing the softywares. Then we'll setup and code the bot. By the end of this tutorial you should be ready to start adding your own custom commands.
BUT FIRST... we need to make sure we have our credentials in order. 👏
Papers, please!
📢 Glory to Arstotzka!
- Make an account on Twitch (for your bot) or use the one you stream with. Make it something cool like
RealStreamer69😎
- Request an oauth code. You'll need to login and give the app permissions to generate it for you.
- Register your app with Twitch dev and request a client-id (so you can interface with Twitch's API)
Keep the oauth and client id somewhere handy but not public. You'll need them later to configure the bot.
💡 PROTIP™ -- Keep it secret. Keep it safe.
Install all the things! 🧹
- Install Python 3.6 or 3.7 -- Windows // Linux // OS X
- Install PIPENV. In the console, run ⇒
pip install pipenv
Create a cozy home for the bot to live in
Virtual environments require a couple extra steps to set up but make developing Python apps a breeze. For this tutorial, we'll use PIPENV which marries pip and venv into a single package.
- In the console, navigate to your working directory
- Run ⇒ `pipenv --python 3.6` or `pipenv --python 3.7`
- This is going to create `Pipfile` and `Pipfile.lock`. They hold venv info like Python version and libraries you install.
- Then run ⇒ `pipenv install twitchio`
Configuring and authorizing the bot
Create 2 files in your working directory. One called `bot.py` and another called `.env` (no file name, just the extension - it's weird, I know).
/.env
Your secrets will live inside the `.env` file. Add the oauth token and client-id from above after the `=` in the file. Fill in the other vars as well.
```
# .env
TMI_TOKEN=oauth:
CLIENT_ID=
BOT_NICK=
BOT_PREFIX=!
CHANNEL=
```
/bot.py
Inside `bot.py`, import the libraries we'll need and create the bot obj that we'll start in the next step.
```python
# bot.py
import os  # for importing env vars for the bot to use
from twitchio.ext import commands

bot = commands.Bot(
    # set up the bot
    irc_token=os.environ['TMI_TOKEN'],
    client_id=os.environ['CLIENT_ID'],
    nick=os.environ['BOT_NICK'],
    prefix=os.environ['BOT_PREFIX'],
    initial_channels=[os.environ['CHANNEL']]
)
```
💡 PROTIP™ -- When we run `bot.py` using PIPENV, it first loads the variables from the `.env` file into the virtual environment and then runs `bot.py`. So, inside this venv, we have access (on an instance-basis) to these variables. We're going to use Python's `os` module to import them, just like we would import environment variables in our native environment.
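If `os.environ` is new to you: it behaves like a dictionary holding the current process's environment variables. A quick standalone sketch (the variable names below are just examples, not part of the bot):

```python
import os

# Pretend PIPENV already loaded this from .env into the environment:
os.environ['BOT_NICK'] = 'RealStreamer69'

print(os.environ['BOT_NICK'])                     # RealStreamer69 (KeyError if missing)
print(os.environ.get('MISSING_VAR', 'fallback'))  # fallback -- .get() takes a default
```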
At the bottom of the file, we need to make sure the bot runs when we call `bot.py` directly using `if __name__ == "__main__":`
```python
# bot.py
if __name__ == "__main__":
    bot.run()
```
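If the `__main__` guard is unfamiliar: the indented block only runs when the file is executed directly (`python bot.py`), not when another module imports it. A tiny standalone illustration (hypothetical `demo.py`, nothing bot-specific):

```python
# demo.py -- hypothetical file, only here to show the guard
def main():
    print("ran directly")
    return "ran directly"

if __name__ == "__main__":
    # __name__ equals "__main__" only for `python demo.py`;
    # `import demo` from another file skips this block.
    main()
```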
WAKE BOT UP (wake bot up inside!)
Let's test the bot and make sure it can connect to Twitch.
- In the console, run ⇒ `pipenv run python bot.py`
If it worked, you shouldn't get any errors - that means the environment variables loaded correctly and your bot successfully connected to Twitch!
If you got errors, check out the next section before moving on.
Error: Can't wake up. [save me]
A wild
Request to join the channel has timed out. Make sure the channel exists. appears. You evade with..
Make sure you have the right tokens (oauth and client-id) in the .env file and that they're in the same directory/folder as
bot.py
Your directory structure at this point should look like this...
working-directory/ ├─ .env ├─ bot.py ├─ Pipfile └─ Pipfile.lock
If that still doesn't fix it, comment below and we'll sort it out for ya!
Adding some functionality to the bot
Greet the chat room!
Back to
bot.py.... Below the bot object, let's create a function called
event_ready with the decorator
@bot.event. This function will run once when we start the bot. It then reports to terminal and chat that it successfully connected to Twitch.
# bot.py, below bot object !")
Go ahead and test the bot again. It should greet chat when it comes online now.
Respond to messages in chat
Next up, we're going to add a function that's run every time a message is sent in your channel. You can add all sorts of logic here later, but we'll start out with making sure the bot ignores itself.
# bot.py, below event_ready @bot.event async def event_message(ctx): 'Runs every time a message is sent in chat.' # make sure the bot ignores itself and the streamer if ctx.author.name.lower() == os.environ['BOT_NICK'].lower(): return
After that, we'll drop in a line of code that will annoyingly echo back every message sent in chat. ᴷᵃᵖᵖᵃ
# bot.py, in event_message, below the bot-ignoring stuff await ctx.channel.send(ctx.content)
Restart the bot and check it out!
💡 PROTIP™ -- Comment out that line now cuz it's actually really annoying.
Making a chat command
Any command you make needs to follow this format when defining them..
- Decorated with
@bot.command(name='whatever')
- Be asynchronous functions with names that match the
namevariable in the decorator
- Pass the message context in through the function
How the function works and what it does is all up to you. For this example, we'll create a command called
!test that says
test passed! in chat when we call it.
# bot.py, below event_message function @bot.command(name='test') async def test(ctx): await ctx.send('test passed!')
Before this can work, we need to make sure that the bot knows to listen for commands coming through.
Add this just below the ignore bot code in
event_message:
#bot.py, in event_message, below the bot ignore stuffs await bot.handle_commands(ctx)
Alright! Time to test it out. Reboot the bot and send
!test in chat!
Responding to specific messages
Tell my bot I said... "Hello."
You can respond to specific messages in your chat too, they don't have to be
!commands. Let's write some code that says hi when people say hello.
# bot.py, at the bottom of event_message if 'hello' in ctx.content.lower(): await ctx.channel.send(f"Hi, @{ctx.author.name}!")
Go ahead and test it out! You've got the framework to start buildng your bot and adding commands.
Here's what you should have when you're done
/bot.py
import os from twitchio.ext import commands # set up the bot bot = commands.Bot( irc_token=os.environ['TMI_TOKEN'], client_id=os.environ['CLIENT_ID'], nick=os.environ['BOT_NICK'], prefix=os.environ['BOT_PREFIX'], initial_channels=[os.environ['CHANNEL']] ) !") @bot.event async def event_message(ctx): 'Runs every time a message is sent in chat.' # make sure the bot ignores itself and the streamer if ctx.author.name.lower() == os.environ['BOT_NICK'].lower(): return await bot.handle_commands(ctx) # await ctx.channel.send(ctx.content) if 'hello' in ctx.content.lower(): await ctx.channel.send(f"Hi, @{ctx.author.name}!") @bot.command(name='test') async def test(ctx): await ctx.send('test passed!') if __name__ == "__main__": bot.run()
And ofc your
.env with your secrets and a
pipfile and
Piplock.
I've uploaded the files to a github repo too, if that's your thing.
Congrats!! 🥳🎉
You've made it this far.. You know what that means? Time to celebrate by clicking this GitKraken referral link and signing up so I can get free socks (or maybe a Tesla? 😎).
Also, feel free to check out the Live Coding Stream recap where we developed this tutorial. Shoutouts to everyone in chat that collaborated!
What do you want to do next?
Questions? Comments? Ideas? Let me know in the comments below!
I'll be following up to this post soon with some tips on how to get the most out of the TwitchIO library -- more stuff that's not really well-documented and was a PITA to figure out, like how to get the author, using badges for permissions, etc.
Discussion (48)
I have a problem at the point where i have to wake the bot up with "pipenv run python bot.py". It keeps saying "Loading .env environment variables…" but nothing more. Its just stuck at this point and doesnt even give me an error.
I would really appreciate an answer since im stuck here for several hours and its driving me nuts. Thank you in advance
The reason "Loading .env environment variables..." will stay, is because there is nothing to print to the console afterwards. In the "event_ready" function, if you print something, then there will be output.
I'm so sorry that it took someone more than a year to find this but this is a super easy fix that was not very clear in the guide.
(the following example has a couple extra spaces to avoid formatting)
the:
if __ name __ == "__ main __":
bot.run()
must go at the bottom of your bot.py file
if there are functions after it, it will not load.
Hi I am having the same problem and can't get around it. I am stuck on this error. Did anyone get a fix? The name/channel seems to be key but I can't get a working combination. I don't understand the user who said the nick is the channel.
Not sure if its actually an issue to be stuck on "Loading .env environment variables…" because I can continue with the rest of the code, and my bot does go online.
This happens when my Bot_Nick and Channel are the same. Before making my Bot_Nick the same as my channel name, I was having the "channel does not exist" error. If I make my Bot_Nick my channel name, it doesn't seem to matter what the Channel input is, because the bot still goes online as my channel name.
Would love to see this fixed because it's very easy to get running otherwise!
Bot_Nick is not a nickname, it is the name of the channel.
Same problem, eventually when trying to abort it prints this error:
C:\Users\User.virtualenvs\Chatbot_work--fd3mIVH\lib\site-packages\twitchio\websocket.py:618: RuntimeWarning: coroutine 'WebSocketCommonProtocol.close' was never awaited
self._websocket.close()
If it says nothing more and your still at beginning of tutorial, then you did it right, it's in the channel, do the next step and it should put a message in chat like it says in tutorial
Hey y'all! Like most of you I ended up here and had trouble getting the channel to connect.
What you want to do is lead the channel name with a "#" in your env file.
For instance:
CHANNEL="#space_cdt"
After this I was able to connect!
Could you be a little more specific? Doing this still leads to having it not set the correct channel name,
This is more likely an error with you TMI token or Client ID. Make sure both are generated using your bot's Twitch account and not your personal Twitch account.
Hi all! After some headaches with this yesterday, I wanted to make some clarifications that would have helped me when I started trying to make my bot!
1) The TMI token and Client ID should both be generated using the Twitch account you set up for your bot. If you are having the "channel does not exist" error, try re-registering your app with Twitch dev and make sure you save the changes and get the updated Client ID! If you are using a personal computer as opposed to a web server, make sure for the entry box "OAuth Redirect URLs" you are entering "localhost" (no quotes).
2) The
.envfile is a way to keep your TMI token and Client ID secret even if you use version control software like Git to store your code somewhere like GitHub. You need to have a Python package called
dotenvto load these variables into your
bot.pyfile. To download this package, you can do it using
pipwith the command
pip install python-dotenv. You then need to import the function
load_dotenv()from the package at the top of your file. This can be done by including
from dotenv import load_dotenvwith all your other package imports at the top of the file. Then you need to call the
load_dotenv()function at the top of your
bot.pyfile (below the imports but above the
os.environcalls. This allows
osto properly import those variable from the
.envfile. If you don't want to download
dotenv, you can list these variables directly in
bot.commands(), but realize that if you do this, you shouldn't share this code publicly because someone else getting their hands on the TMI token and Client ID would allow them to control your bot.
3) The variables in the .env file should be as follows:
TMI_TOKEN=oauth:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
CLIENT_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
BOT_NICK=BotChannelName
BOT_PREFIX=!
CHANNEL=ChannelNameOfChatToEnter
Important note: BOT_PREFIX refers to the prefix of the commands that the bot will respond to. Most Twitch bots use "! " (e.g. !discord), so it's probably a good idea to keep it as a single exclamation mark. A lot of comments say to use you bot's channel name, but this will make it so that your bot doesn't respond to commands!
4) When testing your bot, you may want to edit the code in your
bot.pyfile. To do this, you must stop the code from executing and re-execute the code to update the bot with the new info in the saved
bot.pyfile. To do this, go to the command line that your prompts are run from and hit Crtl + C. This will stop the execution and allow you to re-execute.
I hope this helps anybody that might be confused in ways similar to how I was!
It keeps saying "Make sure channel exists. I have tried the url type, twitch.tv/channelname, and just the channel name. I have made a new oauth, and tried everything I can. EDIT: Just looked at the other comments, and figured out that bot_nick needs to be the channel name, so what do I put at CHANNEL=?
BOT_NICKis the channel name of the bot account,
CHANNELis the channel name of the channel whose chat you'd like the bot to join. Just the usernames, don't use the "twitch.tv/" prefix.
you need to define CHANNEL in your .env file like this:
CHANNEL="#channel_name"
I keep trying to run the bot but get the error: KeyError: 'jordo1'.
asyncio.exceptions.TimeoutError: Request to join the "jordo1" channel has timed out. Make sure the channel exists.
My channel certainly exists. I tried it with both Python 3.7 and 3.8. My code is basically the same, but I made my own class inheriting Bot instead.
I regenerated my oauth and reconfirmed my client ID.
Please help. Driving me nuts for hours.
Edit: Just realized the bot is working but this error still appears despite joining the channel. Strange. The nick isn't working though, it's just using my twitch username.
I'm getting the same error and resolution. And I can't get any of the bot.command functions to work, even with copy/pasting the code given here. If I fold it inside the event_message function it works, but not as its own thing.
To everyone with the error:
self._websocket.close()
If you scroll up there should be an earlier error saying:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1097)
To resolve this I used this: gist.github.com/marschhuynh/31c937...
Simply copy this into another python file called install_ssl_certs.py and run it with
pipenv run python install_ssl_certs.py
After doing this I was able to connect and got the message: Ready | frenchtoastbot_
i get this error
Traceback (most recent call last):
File "B:\ChatLoyalty\ChatLoyalty.py", line 7, in
irc_token=os.environ['TMI_TOKEN'],
File "C:\Users\user\AppData\Local\Programs\Python\Python38\lib\os.py", line 675, in getitem
raise KeyError(key) from None
KeyError: 'TMI_TOKEN'
what am i doing wrong?
I figured this out. What you want to do is put in what is in these quotations for each of the environments into bot.py instead of .env.
"os.environ['TMI_TOKEN']=
os.environ['CLIENT_ID']=
os.environ['BOT_NICK']=
os.environ['BOT_PREFIX']=
os.environ['CHANNEL']=
"
For some reason, the OP didn't specify how to call the .env file before it tries to run the rest of bot.py, but doing that eliminates the need for .env
this wound up working great! ty for the start :) what section of the api docs did you find these?
EDIT:nvm... i just figured it out... you need to use the Account name as the NICK not the botname... might help if you make that a bit more clear in the instructions.
I'm having the same issue as Jordan and Cai...
I think its safe to assume it is connecting to twitch API but something about the channel name variable is not working.
I created a new twitch account just for the bot and registered the application on that account.
I have tried with a different channel (a friends) to no avail.
I have checked and double checked my oAuth and Client-ID and they're both correct.
My (4) files are all in the same directory. (changing the channel name changes name in the error so pipenv is definitely working from the correct .env and other files...)
could you maybe post a "throw away" screen cap of what its supposed to look like all filled out? maybe its a simple syntax derp that i'm missing... i'm currently not using " marks or anything else after the = in the .env file.
-error return below-
PS C:\Users\thato\Documents\Python Files> pipenv run python bot.py=TimeoutError('Request to join the "chreaus" channel has timed out. Make sure the channel exists.')>
Traceback (most recent call last):
File "C:\Users\thato.virtualenvs\Python_Files-vA_Pk7W6\lib\site-packages\twitchio\websocket.py", line 280, in _join_channel
await asyncio.wait_for(fut, timeout=10)
File "c:\users\thato\appdata\local\programs\python\python37\lib\asyncio\tasks.py", line 449, in wait_for
raise futures.TimeoutError()
concurrent.futures._base.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\thato.virtualenvs\Python_Files-vA_Pk7W6\lib\site-packages\twitchio\websocket.py", line 228, in auth_seq
await self.join_channels(channels)
File "C:\Users\thato.virtualenvs\Python_Files-vA_Pk7W6\lib\site-packages\twitchio\websocket.py", line 271, in join_channels
await asyncio.gather([self._join_channel(x) for x in channels])
File "C:\Users\thato.virtualenvs\Python_Files-vA_Pk7W6\lib\site-packages\twitchio\websocket.py", line 285, in _join_channel
f'Request to join the "{channel}" channel has timed out. Make sure the channel exists.')
concurrent.futures._base.TimeoutError: Request to join the "chreaus" channel has timed out. Make sure the channel exists.
It was working properly an then got this error
Traceback (most recent call last):
File "bot.py", line 11, in
initial_channels=[os.environ['CHANNEL']]
TypeError: init() missing 1 required positional argument: 'token'
I'm having trouble registering an app on the dev.twitch.tv/console/apps/create, using localhost gives me Cartman.GetAuthorizationToken: 401: {"code":401,"status":"invalid oauth token"}, any idea what I'm supposed to do to get it to work?
Keep getting this error, have looked at the comments and everything people say seems to be correct. Please help cause this is confusing me very much.
Task exception was never retrieved
future: exception=TimeoutError('Request to join the "josh_liv3" channel has timed out. Make sure the channel exists.')>
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/twitchio/websocket.py", line 280, in _join_channel
await asyncio.wait_for(fut, timeout=10)
File "/usr/lib/python3.8/asyncio/tasks.py", line 501, in wait_for
raise exceptions.TimeoutError()
asyncio.exceptions.TimeoutError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/twitchio/websocket.py", line 228, in auth_seq
await self.join_channels(channels)
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/twitchio/websocket.py", line 271, in join_channels
await asyncio.gather([self._join_channel(x) for x in channels])
File "/opt/virtualenvs/python3/lib/python3.8/site-packages/twitchio/websocket.py", line 284, in _join_channel
raise asyncio.TimeoutError(
asyncio.exceptions.TimeoutError: Request to join the "josh_liv3" channel has timed out. Make sure the channel exists.
Still says channel not found for me, do I put my channel name in the "channel" variable?
Make sure you are using the TMI token and Client ID for your bot channel and not your personal Twitch account.
BOT_NICKis the channel name of your bot and
CHANNELis the channel name of the account whose chat you'd like the bot to join.
you need to define CHANNEL in your .env file like this:
CHANNEL="#channel_name"
FOR THOSE HAVING "Make sure channel exists." make sure your "BOT_NICK" is the name of your bot account. Also make sure the "CHANNEL" is the name of the channel you're trying to join. For "BOT_PREFIX" I just put the same name as my bot_NICK. If this works let me know! I had spent some time on this issue because of the confusion on this discussion post.
For anybody reading,
BOT_PREFIXshould be the prefix used for your bot commands. Most often on Twitch, bot commands start with "!" which I believe is why the original author has the line
BOT_PREFIX=!
hey, my bot runs and displays the arrival message (in twitch chat)
but it doesn't execute any commands I've tried printing ctx.author.name.lower() before the check to see if the message was from the bot and it only prints out a name when the bot types the arrival command.
I had this issue too! My problem was that I was replacing the line
BOT_PREFIX=!in the file
.envwith an incorrect value. This line tells the bot to look for commands that start with the value passed, so if you are trying to execute a command that starts with an exclamation mark (e.g. !discord), don't change this line!
Why does
event_ready()use
send_privmsg()instead of
channel.send()?
The method documentation says privmsg "should only be used directly in rare circumstances where a
twitchio.abcs.Messageableis not available."
I apologize in advance for maybe not being as clear as someone more advanced, I do not have advanced programming knowledge. I would like to add pygame functionality to this bot. Where I'm caught up is in basically having to do with the main loop. Either my pygame stuff works one time then everything is limited to the bot or my pygame stuff is looped without the bot loading. Anyone able to provide any guidance on this matter?
Did you solve your problem? Sounds like a cool use for a bot! Since the loops are getting you caught up, maybe it would be better to take an OOP approach similar to the example used in the official docs: twitchio.readthedocs.io/en/latest/...
What do I put in the "channel" section in the .env?
If I put my channel name it says.
asyncio.exceptions.TimeoutError: Request to join the "[my channel]" channel has timed out. Make sure the channel exists.
Traceback (most recent call last):
File "bot.py", line 11, in
initial_channels=[os.environ['CHANNEL']]
TypeError: init() missing 1 required positional argument: 'token'
Facing pipenv run python bot.py
I´m having the issue
File "bot.py", line 2
from twitchio.ext
^
SyntaxError: invalid syntax
I already Installed twitchio and i can find it in site-packages, have i to put it into another folder?
Lg Red
Ps: If you need more informations, I will give it to you...
How do you add more then 1 channel?
It says that the twitchio module isn't found. I also looked in the directory I ran the commands in, and there's no pipfile.lock
Install the TwitchIO module using pip on Windows with the command
py -m pip install twitchio. To install on Mac/Linux, I believe simply
pip install twitchioshould do the trick.
Got this working today, wow. This is beautiful. Thank you!
how do I add this bot to multiple channels ??
Just wanted to ask, what do you fill in for the 'CHANNEL' variable in the .env file?
Can I change the name of the bot? My friend doesent want it to be called after me!
I had to make a new Twitch account for my bot in order to change the name per the advice I found here: discuss.dev.twitch.tv/t/twitch-cha...
The bot will use the name that the OAuth token is assigned to.
how to send whispers?
How would I daemonize this script. I'd like to run a few at a time to manage different channels.
Currently I have each run detached using screen. | https://dev.to/ninjabunny9000/let-s-make-a-twitch-bot-with-python-2nd8 | CC-MAIN-2021-49 | refinedweb | 4,216 | 68.47 |
Welcome to Cisco Support Community. We would love to have your feedback.
For an introduction to the new site, click here. If you'd prefer to explore, try our test area to get started. And see here for current known issues.
I cant seem to connect my cisco 881W to a 3COM smart switch. The switch does not seem to receive any dhcp IP address that is given by the router when devices are conneceted to the switch. However if I use an unmanaged switch, i can get an Ip address. I am wondering if there is a onfiguration that is needed to ensure that the router can be used with a smart switch that already is set to be used with DHCP.
These are the current config
Any help would be grateful.
version 15.2
service timestamps debug datetime msec
service timestamps log datetime msec
no service password-encryption
!
hostname yourname
!
boot-start-marker
boot-end-marker
!
!
logging buffered 51200 warnings
!
no aaa new-model
!
!
!
ip dhcp excluded-address 10.10.10.51 10.10.10.254
!
ip dhcp pool ccp-pool
import all
network 10.10.10.0 255.255.255.0
default-router 10.10.10.1
dns-server 202.188.0.133 8.8.8.8
lease infinite
!
!
!
ip name-server 202.188.0.133
ip name-server 8.8.8.8
ip cef
no ipv6 cef
!
!
interface FastEthernet0
no ip address
!
interface FastEthernet1
no ip address
!
interface FastEthernet2
no ip address
!
interface FastEthernet3
switchport mode trunk
no ip address
!
interface FastEthernet4
ip address dhcp
ip nat outside
ip virtual-reassembly in
duplex auto
speed auto
!
interface Wlan-GigabitEthernet0
description Internal switch interface connecting to the embedded AP
no ip address
!
interface wlan-ap0
description Service module interface to manage the embedded AP
ip unnumbered Vlan1
!
interface Vlan1
ip address 10.10.10.1 255.255.255.0
ip directed-broadcast
no ip proxy-arp
ip nat inside
ip virtual-reassembly in
ip tcp adjust-mss 1452
!
ip forward-protocol nd
ip http server
ip http authentication local
ip http secure-server
ip http timeout-policy idle 60 life 86400 requests 10000
!
ip nat inside source list 199 interface FastEthernet4 overload
!
access-list 199 permit ip any any
no cdp run
!
!
!
line con 0
login local
no modem enable
line aux 0
line 2
no activation-character
no exec
transport preferred none
transport input all
stopbits 1
line vty 0 4
privilege level 15
login local
transport input telnet ssh
line vty 5 15
access-class 23 in
privilege level 15
login local
transport input telnet ssh
!
scheduler allocate 20000 1000
!
end
Does the 3com have vlans configured on it? The only vlan that needs to be on there is vlan 1, and it needs to be untagged.
HTH,
John
*** Please rate all useful posts ***
Thanks for the reply John,
But the 3COM switch is already on Vlan 1, Untagged. I have restored the configuration of the switch to factory default, but the problem still exist.
However i think that it is a VLAN issue as an unmanged switch does not seem to have a problem.
On the router, must vlan be turned on? It seems so. The interface for FE0,1,2,3 are assigned to vlan 1 with the DHCP Pool. Is there away i can disable this. On my cisco configuration pro software, the FE LAN 0,1,2,3 does not have an ip address. But i can use the internet. I think the VLAN is assignning it to them.
Any help would be appreciated.
Are you saying that hosts connected to the switch don't get an address from your Cisco router? The configuration of the router is fa0 - 3 are members of vlan 1, and interface fa4 is a wan port, so the configuration is correct for that part of it. If you connect a normal switch, dhcp works? What port are you connecting the switch to from the 3com?
HTH,
John
*** Please rate all useful posts ***
Yes, that is correct. Any computers connected to the 3COM switch does not get an ip adrress from the cisco router. However if i connect a normal oswitch which is an unmanaged Dlink switch, i can get an IP address and use the internet as normal. Im connecting the 3COM switch to port 1. And the unmanaged to port 3.
Any help would be appreciated
Bryan
Can you post "sh int fa1" and can you post the configuration of the switchport of the 3com that you're connected to? Is this 3com a l2 or l3 switch? Do you have a model number?
HTH,
John
*** Please rate all useful posts ***
What do you mean by sh int fa 1?
Again what do you mean the switchport configuration of the 3COM that youre connected to. Do you mkean the configuration of the 3COM itself or what?
This is a l2 switch. Its a 3COM baseline 2226 SFP Plus.
Thanks.
Are you telnetted or consoled into the router? If so, run that command "sh int fa1" and copy/paste the results. If you're using a web browser to manage the router, I'm not versed in that enough to help walk you through it unfortunately.
Each port will have a configuration on the switch and the router. I wanted to see the configuration of the 3com switch to see how it was configured.
HTH,
John
*** Please rate all useful posts ***
Yes, I am connected via telnet. This is the results for that command
FastEthernet1 is up, line protocol is down
Hardware is Fast Ethernet, address is c067.af1c.4bc7 (bia c067.af1c.4bc7)
MTU 1500 bytes, BW 100000 Kbit/sec, DLY 100 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Auto-duplex, Auto-speed
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output never, pause output
0 output buffer failures, 0 output buffers swapped out
The 3COM switch is set to factory default.
Im sorry for asking some basic questions. I am new to this. Everyhelp is appreciated.
Thank you
No problem. The interface is up/down at the moment. Can you check the port of the 3com to make sure it's in Auto/Auto? Also, do you have a crossover cable that you can use to connect this switch to the router? The 3com may not support auto-mdix and needs a crossover cable to make a connection. You did have it connected at the time you do the above command correct?
HTH,
John
*** Please rate all useful posts ***
Yes, the 3COM switch are all auto MDI/MDIX ports. And i also tried using a cross over but still it does not work.
No, to get the results from the command i took off the 3COM switch and plugged it directly to the router. If i went through the switch i cant get a connection as there is no IP.
Im totally confused why this is happening as my prvious cisco was a 871W. and it worke dperfectly fine. The only difference was that last time i did not need to set a vlan to the FE 0,1,2,3 and the FE0,1,2,3 has an ip directly. Not too sure what the difference is here.
The 3Com switch is set to a static ip of 169.254.166.34. using a DHCP address also does not work. But if i connected straight from my cable modem to the 3Com switch it works. Currently my set up is Cable modem > Cisco Router > 3Com Switch which does not work but previously on the 871W it did. But if i did cable modem > 3Com switc. it will work.
Any help would be appreciated.
Thanks
Do you have another 3com that you could test? It's not a vlan issue with the Cisco or you wouldn't get an address when connected directly to it or with the unmanaged switch.
Out of curiosity, what port are you connecting to on the 3com? Do you at least get link lights when connected?
HTH,
John
*** Please rate all useful posts ***
No i dont have another 3COM switch. Because on the 871W it works, but on the new 881W it deosnt. And it will work if connected directly do a DLINK cable modem. Only doesnt work when connected to the cisco.
Im connecting FA1 to port 1 of the 3COM. Ive swapped to diiferent ports also but it still does not work. And yes. I get the activity lights when connected. Only not an IP address.
This is extremely weird.
Any help would be appreciated.
Thanks
Bryan. | https://supportforums.cisco.com/t5/lan-switching-and-routing/cisco-881w-with-smart-switch-issue/td-p/2364865 | CC-MAIN-2017-39 | refinedweb | 1,446 | 76.22 |
This is just another example of a problem I decided to attempt as a break at work, trying to form the solution before the cookie on the Project Euler site ran out..
Python Solution
This could be written in C or Cython to perform much more quickly, and I’m sure there are other optimizations that could be made to this code, but this was written and executed in a total of about three minutes using the Sage Notebook.
import time start = time.time() def numDigits(n): r""" return a sorted list of digits in a number """ L = [] while True: L.append(n % 10) n /= 10 if n == 0: break L.sort() return L i = 1 while True: a = numDigits(i) for j in range(1,7): b = numDigits(i * j) if a != b: break if j == 6: print i i = 0 if i == 0: break i += 1 elapsed = time.time() - start print "result found in %s seconds" % elapsed
When executed, we find the following result.
142857 result found in 1.92401385307 seconds | http://code.jasonbhill.com/2013/07/ | CC-MAIN-2017-13 | refinedweb | 173 | 78.69 |
Inventors of Unix Win Japan Prize
jbrodkin writes "The inventors of Unix and the C programming language, one of whom also created the first master-level chess-playing machine, have been awarded the prestigious Japan Prize for their work in building the Unix operating system in 1969. Ken Thompson, who is now a distinguished engineer at Google, and Dennis Ritchie, who is retired, were researchers at Bell Labs four decades ago when they 'developed the Unix operating system which has significantly advanced computer software, hardware and networks over the past four decades, and facilitated the realization of the Internet,' the Japan Prize Foundation said Tuesday in awarding them the 2011 prize. The pair join previous winners such as Vint Cerf and Tim Berners-Lee. In addition to developing Unix, Thompson also played a key role in building Belle, the first chess-playing computer to achieve a master-level rating and five-time winner of the now-defunct North American Computer Chess Championship in the 1970s and 1980s. Ritchie and Thompson have also been credited with developing the C programming language, a process that occurred in conjunction with the development of Unix."
mad props (Score:3)
and congrats... 40 years later their influence is still amazing.
Re: (Score:1)
Indeed, that this OS is still vibrant and alive after so long is a real achievement.
As a matter of curiosity, could someone please answer why Unix and the various derivatives are still so strong? Why are there so few new OSs that match this one in terms of security etc.? Did these guys create the best OS it was possible to make on the first try, or are there better, new OSs waiting in the wings? As you can probably tell, I don't know too much about this, so please be gentle....
Re:mad props (Score:4, Informative)
You apparently never used Unix during the 70s and 80s. Unix "security" was a constant joke at least until the mid 90s.
Re: (Score:2)
You apparently never used Unix during the 70s and 80s. Unix "security" was a constant joke at least until the mid 90s.
Blasphemy!
Re:mad props (Score:4, Informative)
UNIX was designed to be as scalable, robust, and secure (relative to standards in those days) as they could possibly build it.
Redirection, Pipes, shells, heck the whole IO structure of UNIX was/is IMHO a great work of art.
Then other people started adding stuff to UNIX and eventually Linux that just kept making it better and better, like PERL, Apache, X, and many more.
UNIX is just and has always been good stuff.
Re: (Score:1)
lol, perl
everything is a file, even on remote nodes (Score:1)
There is a reason NFS actually stands
Re: (Score:1)
Beyond what JWW typed, I assume not designing for low-end desktops helped too, if there were any at the time.
Since the machines it was built for had more capability, maybe that helped it last until even the simplest machines had as much or more capability.
failure enables (Score:3)
Unix succeeded so well because it encourages failure. The C language is Spartan and direct. There are no safety nets. People who can't keep their pointers straight soon find themselves working in a different profession, such as programming in Java. This is the same dynamic described by Adam Smith for the free market.
Re:mad props (Score:5, Insightful)
It's not so much about security as it is about flexibility and a new way of doing things. At the time Unix was created, most operating systems were huge, ugly and complex beasts, developed in a bureaucratic way by enormous corporations. Software development was done similarly to the way processors are designed. It was a land of engineers, not a land of hackers. Unix was simpler, more elegant, modular and hacker friendly. At the time, OSs were written in assembly, almost no exceptions. Have you ever seen a mainframe sysadmin? Those guys were running the circus back then. Then this bunch of hippies came in and wrote an OS in a high-level language, and it turned out to be awesome. Unix was the software-world response to the social events and revolutions during the 60's.
At first, it wasn't as evolved or secure as other systems, and it was ridiculed because of that. But Unix is like Lego, and there was a huge amount of young people in computing that related to this concept, and could do awesome things with the building blocks provided by Unix.
It was the first OS to change the way things were done and introduce metaphors in computing. People think that Apple and M$ started the metaphor-in-computing trend, with icons, menus and folders. That's just not true. "Everything is a file" was a revolution. The simple, short commands, pipes, advanced interactive shells, all of that made Unix the choice of a new generation. And it still is; anyone serious about software development is on some kind of Unix variant. It wasn't the technical merits of Unix, it was the philosophy that made it so huge.
I once asked RMS if he could imagine the Free Software world as it is today, developing something like the Incompatible Timesharing System. Of course, this is RMS and I didn't really get a straight answer; he just rambled about how it wasn't a valid question because the Incompatible Timesharing System wasn't modern enough to be usable nowadays. But I know the answer is NO. The Unix model and Free Software have a LOT in common, and Unix helped pave the way for the way the world works right now. Whether the usual suspects like it or not, Free Software runs most of the Internet, and the world as we know it wouldn't exist without the Internet. Unix has always been the man behind the curtain, but it's been more relevant in the last 40 years of history than many think. Even now, it's still obscure: think, for instance, how everyone has a Unix OS in their pocket (Android phones/tablets and other devices, iPods/iPhones/iPads), and most don't even know about it. It was about damn time that it got some mainstream recognition.
Re: (Score:2)
As a matter of curiousity, could someone please answer why Unix and the various derivatives are still so strong?
I think that initially the primary strength of Unix was fork(). It allowed incredibly easy process creation and management. The file system was also incredible. Continued popularity was due to its penetration of the university market followed eventually by the availability of open source versions.
Re: (Score:3, Informative)
The UNIX-HATERS Handbook [simson.net] [pdf]
Foreword
By Donald A. Norman
Re: (Score:1)
"As for me? I switched to the Mac. No more grep, no more piping, no more SED scripts."
I'm using a Mac right now, almost entirely because underneath all the shiny widgets, I can pull up a terminal window with the shell of my choice (zsh of course; but bash, csh, ksh, sh and tcsh are available straight out of the box) and still use sed, awk, pipes and all those other useful toys to get my
Re: (Score:2)
and congrats... 40 years later their influence is still amazing.
Indeed. If a certain unnamed church recognizes them within the next 359 years it will beat their recognition of Galileo ;-)
Re: (Score:2)
"Écraser l'infâme" ("crush the infamy") - Voltaire
Thanks to Unix (Score:5, Funny)
...you can download all the Japanese anime tentacle pr0n you ever wanted!
Instrumental in creating commercial Internet (Score:5, Informative)
Here is the actual Al Gore quote:
During my service in the United States Congress, I took the initiative in creating the Internet. I took the initiative in moving forward a whole range of initiatives that have proven to be important to our country's economic growth and environmental protection, improvements in our educational system.
Clumsy and self serving wording, yes. Claims to have invented the Internet? No, not at all. He was just saying that his policies helped create the Internet as we know it today, which is somewhat true. What he REALLY did was cosponsor the Information Infrastructure and Technology Act of 1992 which opened the Internet to commercial traffic.
So, we can really thank Gore for pop-up ads and spam, not the whole Internet.
Re: (Score:1)
Thanks to Unix you can download all the Japanese anime tentacle pr0n you ever wanted!
Amazingly that's also what inspired them to write it in the first place!
Re: (Score:1)
...you can download all the Japanese anime tentacle pr0n you ever wanted!
No, you host the pr0n on Unix. You download and view it using Windows. So really, both operating systems have been instrumental in creating the life we enjoy today.
Re: (Score:2)
>You download and view it using Windows
Maybe _you_ do
What is the Japan Prize? (Score:2)
The Japan prize is actually ONE HUNDRED DARA!!! You win ONE HUNDRED DARA for invent Unix operating system!!! You big winna!!! *insert loud obnoxious noises and strange mascot here* *insert crazy cheering audience here*
Ken Thompson is also at Google? (Score:1)
Google sure has an impressive number of cool people working there...
Re: (Score:2)
His current work has mostly been on a new programming language called Go [golang.org] (for those who have not heard of it). A young, but thus far impressive systems programming language.
The real story (Score:5, Funny)
Ken actually used his nifty hack [bell-labs.com] of the C compiler and the login program to break into the computer that stored the committee's votes and flipped his and Steve Ballmer's vote.
Re:The real story (Score:5, Insightful)
No flame-war yet? (Score:1)
I'm surprised to see that no programming-language flame war has started yet.
Oh wait, it's still early.
Re: (Score:2, Funny)
I'm surprised to see that no programming-language flame war has started yet.
Oh wait, it's still early.
COBOL I tell you! It can do anything even grate cheese to a fine shredding! It will also clean your toilet! No other programming languages can do that. HA!
Re: (Score:2)
Wanna start one? They are so much fun...
You want a flame war? (Score:1)
Re: (Score:2, Flamebait)
Flame war? Flame wars are built around personal preferences. You're stating facts.
:D
Re: (Score:2)
You jest, but it's actually true. There is no denying that, both on code quality and user interface, vi is vastly superior to any other editor out there. Some people might not like it, some may not want to invest a modicum of time to learn how the tool they use works, but basic fact is that for editing text there is nothing that surpasses vi.
And what's wrong with Notepad? Not only is it simple and elegant (and free as in beer), it also takes one or two minutes to learn and runs on the world's most popular operating system.
And don't forget, it's the program that is used to code our own well-loved Slashdot,
Re: (Score:2)
Best keybindings, maybe
the rest of the story (Score:5, Funny)
Thompson and Ritchie invented Unix and C because they needed a decent programming environment for the PDP-7 to develop the game "Space Travel". To my knowledge, the Bell Labs Space Travel title still hasn't shipped, thus inaugurating the tradition of galactic video game vaporware that continues to this day.
While Unix is great, I looooooove C =) (Score:5, Interesting)
After struggling for years with a dozen programming languages I instantly fell in love with C because I could write tight code which compiled tiny and executed swiftly. Libraries were friendly (compared to Fortran, PL/1, Cobol, etc.) and who could not love linked lists? I liked it so much I bought two copies of The C Programming Language by Dennis Ritchie & Brian Kernighan - one copy for work and one for home.
It's sad to see the crap I have to code in now. =(
Re:While Unix is great, I looooooove C =) (Score:5, Interesting)
The highest accolade for C came from my Computer Music professor, Paul Lansky [wikipedia.org]. He did stuff with FORTRAN, which he described as a "clunky" language, and then started moving to C. I can't remember the precise words that he used, but he seemed to get across that programming in C was like composing music for him.
A music professor? Programming in C? Yep, that happens.
Re: (Score:2, Funny)
He writes symphonies in C, why not write code in it too?
Re: (Score:2)
That seems a dangerous road, I'm sure he writes symphonies in C# as well...
Re: (Score:2)
What languages do you find yourself programming in now? C++, C#, or ?
Thompson can't check-in code at Google because... (Score:5, Interesting)
Re: (Score:3)
Compliance is definitely not aligned with invention; I'm not saying that non-compliance is sufficient for invention, but it seems to me to be necessary.
Re:Thompson can't check-in code at Google because. (Score:5, Interesting)
Besides, he's since gone on to work on Go for them, so I'm guessing he did feel a need to be able to check code in, and probably just took the test.
Re: (Score:2)
Anyone here actually using Go? It seems like a sweet little language, basically an update of C that is true to the original spirit of the language (small, close to the hardware). When C was created, garbage collection wasn't a mature technology; now it is, so it makes sense to have it built in. However, Go seems pretty raw, and there are other carefully designed C-like languages (D, objective C) that have a huge head start. It's also a drag that Go's binary interface isn't compatible with C's, and I'm not a
Re: (Score:2, Informative)
"Prove his mettle" is not exactly correct. I took the Google C++ coding test. It's not to test that you can code well; it's just to test that you are aware of Google's internal style guidelines (things like indentation, variable naming conventions, and the like). It's a good way to emphasize the importance of stylistically consistent code.
Incidentally, I love this approach because I HATE having to go through messy code. Ugh. For some bizarre reason, master's students with several years of industry expe
One word too many in the title (Score:1)
Article would have been way more awesome without the word "Prize"
Re: (Score:1)
Free Tibet
When you purchase one at regular price
Re:Yeah, they got it right. (Score:4, Informative)
OS X is Unix with an Aqua graphical user interface/theme.
Re: (Score:2, Interesting)
I know this, but it's still intrusive. I suppose you kill aqua and live in console most of the time on your Mac, right? That's what I thought. I've tried a lot of window managers; from fast light to evilwm to olvm to fvwm2 to mwm to enlightenment to whatever, and three big DEs and OS X is more intrusive than any of them. The interface/theme is so tightly woven to the user experience that, without it, OS X would be an also-ran. To push its Unix guts as if that was the central power feature is a bit of a red
Re: (Score:2)
That's because most of the Unix-compatible environments honestly... stink. Whether it's because of X or other reasons, I'm not sure.
Which is also why OS X is not just Unix with a pretty face, but is Unix with a pretty well integrated environment. it's a flavor of Unix with some pretty unique attributes
If you want OS X without the UI, get Darwin - it's all the open-source bits. It works and it gets you to the console alright, and without all the Aqua stuff you hate. It also runs on any PC, too. But then agai
Re: (Score:1)
I have no idea what point you were trying to make. You can boot OS X into run level 3 and do everything from a console if you want. I was just pointing out that of the 3 things he listed, 2 of them were the same. I use my OS X MacBook almost identically to how I use my Linux laptop. I open up Firefox, Thunderbird and a terminal and I do all of my "computer stuff" through the terminal.
The next one will go to BS. (Score:5, Insightful)
Bjarne Stroustrup, that is. After all, C++ has those ++ over C...
Re: (Score:2)
Yeah but they gotta recognize Thompson first as Stroustrup's contributions only increment after C is parsed.
Re: (Score:1)
Bjarne Stroustrup, that is. After all, C++ has those ++ over C...
It's one awesomer than C.
Multics? (Score:5, Interesting)
Multics was heavily influential in the development of Unix. The inventor(s) of Multics perhaps deserve as much credit.
Re: (Score:2)
Only up to a point: Many Unnecessarily Large Tables In Core Simultaneously.
Re: (Score:1)
yes. they showed the unix team how not to do things.
Re: (Score:2)
yes. they showed the unix team how not to do things.
Mostly in terms of implementation, not concept. Unix was largely an attempt to keep the good ideas of Multics but without the bloat. (However, if they had waited a while, hardware would have caught up to the bloat.)
Platform neutral (Score:5, Insightful)
The C programming language was designed with the same platform-abstracting ideas in mind. Unfortunately later C libraries (past those of ANSI/ISO C) started becoming more and more platform specific (mostly as a result of vendors either doing it "their way" or deliberately tying people to their platform). Later on, Java would grow for the same reason again, but with far more extensive standardized libraries covering what people wanted to do in the Internet Age (sockets, HTTP, multi-threading, platform-independent GUI [Swing with Nimbus looks great and performs well ever since rendering was fully hardware accelerated in 1.6.0_u10]).
Unfortunately we're at the stage where vendors are seeking to close things out again. Apple makes wonderful hardware but their walled garden approach is counterproductive from a global industry perspective (and why they will arguably 'fail' to set the standards for software a second time around, for the same reasons, but will make a colossal amount of money anyway). Google's Android is better, but is still a little bit of a walled garden. Hopefully innovation in profit will move elsewhere ('standardization' of one sort or another eventually comes to almost all technologies) and allow things to settle down in the phone space - and allow the cross-platform ideals of UNIX to once again return. One day I hope that phones are sufficiently powerful (processing and energy/battery life) that developing for them is as simple as for the embedded, desktop and server spaces (which have specialized libraries but are essentially the same these days [if you are using Java]).
Re:Platform neutral (Score:5, Informative)
Re: (Score:2)
The philosophy behind Apple's way here is that their API is tied to their specific user interface concept anyways. They enabled developers to use C++ for the backend if they so choose (with all the downsides that come with it) by extending gcc (and llvm). It's perfectly possible to write games without ever touching an OS-specific API using libraries like GLUT, SDL and Ogre3D.
Cross platform user interfaces are a stupid idea that only programmers could have come up with (I'm saying that as a programmer myself
Re: (Score:2)
This makes no sense whatsoever. Just because you might not be able to construct a good user interface doesn't mean others can't (not just "rich clients", but "filthy rich clients" can and are cross-platform, efficient, and intuitive to use - if you know what you are doing).
hello, world (Score:2, Funny)
If Ritchie had had any clue about how universal his "hello, world" program would become in the world of programming, maybe he and his book's co-author would've spent an extra afternoon kicking around the possibilities:
#include <stdio.h>

int main()
{
    printf( "I'm here on the inside, and you're not.\n" );
    return 0;
}
Huzzah! (Score:1)
Ah, awards (Score:2)
Not that these two don't deserve it, but I sometimes wonder if accepting awards is like a full-time job for them.
(Currently taking a break from writing C code. In Unix.)
Summary for real nerds: (Score:1)
Skip useless introductions.
Prediction (Score:1)
Inventors of Unix Win (Score:1)
Re: (Score:1)
The Phelps clan found slashdot? There goes the neighborhood.
Re: (Score:1)
4. It has the world's worst - and I do mean worst - text editor - VI. VI requires 3 or 4 keystrokes that better editors can do in 1 or 2. Yes I know you can use others.
As far as I know, 1 or 2 keystrokes with broken finger joints are way slower than 3 or 4 normal keystrokes. | http://tech.slashdot.org/story/11/01/26/2213210/inventors-of-unix-win-japan-prize | CC-MAIN-2015-06 | refinedweb | 3,625 | 71.65 |
Introduction
The objective of this post is to explain how to connect to a WiFi network using MicroPython on the ESP32. The procedure shown here is based on the guide provided for the ESP8266, on the MicroPython documentation website, which I encourage you to read.
We are going to execute the Python code by sending commands to the prompt, so we can see things step by step. If you haven’t yet configured MicroPython on the ESP32, please check this previous post.
I will be using PuTTY to establish the serial connection to the Python prompt, but you can use other software that allows this kind of connection.
The code
First of all, we will need to import the network module, in order to access all the functions needed for establishing the connection to the WiFi Network.
import network
Once you import the module, some information is printed on the console, as shown in figure 1. It won’t be needed for this tutorial, but I will leave it here for illustration purposes.
Figure 1 – Importing the MicroPython network module.
Since we are going to connect to a WiFi network, our device will operate in station mode. So, we need to create an instance of the station WiFi interface [1]. To do it, we just need to call the constructor of the WLAN class and pass as its input the identifier of the interface we want. In this case, we will use the network.STA_IF interface.
station = network.WLAN(network.STA_IF)
Now we will activate the network interface by calling the active method on our station object and passing True as input, since it accepts Boolean values.
station.active(True)
Again, after executing this command, you should get an output on the command line, indicating that we are in station mode and that the interface was started, as shown in figure 2.
Figure 2 – Activating station mode.
Finally we will use the connect method to connect to the WiFi network. This method receives as input both the SSID (network name) and the password.
station.connect("YourNetworkName", "YourNetworkPassword")
It will again print some information to the console. Take into consideration that the connection may take a while. Also note that the ">>>" prompt is not printed once the connection is established, so it may seem that the device is still processing after the last message is printed, as shown in figure 3. You can hit enter to continue normally.
Figure 3 – Connecting to the WiFi network.
Finally, we will confirm the connection by calling the isconnected method, which returns true if the device is connected to a WiFi network [2]. We are also going to call the ifconfig method, which returns the IP address, subnet mask, gateway and DNS as output parameters [2].
station.isconnected()
station.ifconfig()
Check bellow in figure 4 the final output for these commands, which indicate that we are correctly connected to a WiFi network.
Figure 4 – Confirming the connection to the WiFi network.
Note that the IP assigned to the ESP32 is local, so we can't use it to receive connections from outside our network without setting up port forwarding on the router.
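Putting the steps above together: the following sketch (adapted from the commands in this post; the SSID and password are placeholders) wraps the whole procedure in a single function that waits until the connection is established. It is MicroPython code, meant to run on the ESP32 itself rather than on a desktop Python:

```
import network
import time

def connect_to_wifi(ssid, password):
    station = network.WLAN(network.STA_IF)
    station.active(True)
    if not station.isconnected():
        station.connect(ssid, password)
        # connect() returns immediately; poll until the link is up
        while not station.isconnected():
            time.sleep(0.5)
    print("Connected, network config:", station.ifconfig())

connect_to_wifi("YourNetworkName", "YourNetworkPassword")
```

Dropping a function like this into boot.py makes the board reconnect automatically at startup.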
References
[1]
[2]
Call Python function from another Python file
In this article, you will know how to call a function of other Python files using the import keyword. Also, you will get to know how to import a single class, not the whole file.
Building software requires organizing a network of code across many files in a systematic way. That network is created by calling functions from one file in another.
Python makes it easy to use the functions of other Python files: just import the file with the import keyword, optionally giving it an alias. For example, suppose the same directory contains two Python files, baseFile.py and callerFile.py, each with its own functions. The code below shows how to use the functions of baseFile.py inside callerFile.py.
baseFile.py
def intro():
    return 'This is baseFile'

def secFun():
    return 'This is second function'
callerFile.py
import baseFile as b

print(b.intro())
Output:
This is baseFile
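To try the example above without creating the two files by hand, here is a self-contained sketch that writes baseFile.py to a scratch directory and then imports it. The importlib call is used only so the whole demonstration fits in one script; it has the same effect as the article's import baseFile as b:

```python
import importlib
import os
import sys
import tempfile

# Create baseFile.py on disk, exactly as in the article.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "baseFile.py"), "w") as f:
    f.write("def intro():\n    return 'This is baseFile'\n")

sys.path.insert(0, workdir)              # make the scratch directory importable
b = importlib.import_module("baseFile")  # same effect as: import baseFile as b
print(b.intro())  # This is baseFile
```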
Import only class you want
Sometimes you need only some of the classes or functions from a file, not the whole thing. In that case there is no need to import the whole file; you can import just the specific class you want, as shown below.
baseFile.py
class First:
    # note: no self parameter, because the article calls these
    # functions on the class itself rather than on an instance
    def firstFun():
        return 'This is First class'

class Second:
    def secFun():
        return 'This is Second class'
callerFile.py
from baseFile import Second as s

print(s.secFun())
Output:
This is Second class
Import all classes, functions and variables using *
* is a wildcard symbol that imports all the classes, functions and variables present in the Python file, much like the wildcard in SQL selects all the content of a table.
from baseFile import *

print(Second.secFun())
I hope you got the idea of using functions, functions of a class from other Python files. | https://www.codespeedy.com/call-python-function-from-another-python-file/ | CC-MAIN-2020-34 | refinedweb | 319 | 74.9 |
#include <jevois/Core/VideoBuf.H>
A V4L2 video buffer, to be held in a shared_ptr.
Requests an MMAP'ed memory area from the given file descriptor at construction, and unmaps it at destruction. VideoBuf is used to pass MMAP'ed video buffers from Camera and Gadget drivers to application code, via RawImage. The actual memory allocation is performed by the kernel driver. Hence, VideoBuf pixel arrays cannot be moved from one memory location to another.
Definition at line 29 of file VideoBuf.H.
Construct and allocate MMAP'd memory.
Mostly for debugging purposes (supporting VideoDisplay), if fd is -1 then we perform a regular memory allocation instead of mmap.
Definition at line 27 of file VideoBuf.C.
References length(), and PLFATAL.
Destructor unmaps the memory.
Definition at line 49 of file VideoBuf.C.
Get the number of bytes used, valid only for MJPEG images.
Definition at line 123 of file VideoBuf.C.
Get a pointer to the buffer data.
Definition at line 105 of file VideoBuf.C.
Get the dma_buf fd associated with this buffer, which was given at construction.
Definition at line 129 of file VideoBuf.C.
Get the allocated memory length.
Definition at line 111 of file VideoBuf.C.
Referenced by VideoBuf().
Set the number of bytes used, eg, for MJPEG images that application code compressed into the buffer.
Definition at line 117 of file VideoBuf.C.
Sync the data.
This may be useful in some cases to avoid cache coherency issues between DMA-capable driver and CPU.
Definition at line 85 of file VideoBuf.C.
References clearcache(), and LERROR. | http://jevois.org/doc/classjevois_1_1VideoBuf.html | CC-MAIN-2022-27 | refinedweb | 262 | 62.24 |
The problem is to count all the possible paths from the top left to the bottom right of an m x n matrix, with the constraint that from each cell you can move only right or down. Each cell contains a zero or a one: 0 means the cell is open and a path through it is possible, and 1 means it is an obstacle with no path through it.
We will explore three approaches:
- Brute force which takes O(N^N) time complexity
- Efficient Dynamic Programming approach O(N * M) time complexity
- A graph based approach in O(N * M) time complexity
Brute force 【O(N^N)】
One approach is to generate all paths and then determine which paths are valid.
The keys involved are:
- path length will be M + N
- There are M * N vertices/ cells
- The number of paths will be in the order of O((M * N)^(M+N)) that is O(N^N) if M=N
- There will be a few valid paths which we can determine by checking:
- if two cells in the path are adjacent or connected
- if the cells are available (0)
This will take exponential time O(N^N)
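As a concrete illustration (this code is not from the article), the brute force can be realized as a direct recursion that tries both moves from every cell; paths die out when they hit an obstacle or leave the grid, so only valid paths reach the destination and get counted:

```java
public class BruteForcePaths {
    // Exponential-time recursion: from (i, j), try moving down and right.
    static int count(int[][] grid, int i, int j) {
        int n = grid.length, m = grid[0].length;
        if (i >= n || j >= m || grid[i][j] == 1) return 0; // off-grid or obstacle
        if (i == n - 1 && j == m - 1) return 1;            // reached bottom right
        return count(grid, i + 1, j) + count(grid, i, j + 1);
    }

    public static void main(String[] args) {
        int[][] grid = {{0, 1, 0}, {0, 0, 0}, {1, 0, 0}};
        System.out.println(count(grid, 0, 0)); // prints 2
    }
}
```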
Dynamic Programming 【O(M * N)】
We have to look for subproblems in order to apply the dynamic programming approach. Can this problem be divided into subproblems so that each of them can be solved easily? The answer is yes: you can find the number of paths to reach a particular cell using the number of paths to reach the cell above it and the cell to its left.
- Optimal Substructure
- there are two possibilities for every cell X: either X is open or it is an obstacle. If X is open, the total number of paths to reach X is the sum of the total paths to reach the cell above it and the cell to its left. If X is an obstacle, we can never reach that cell, so the number of paths to it is zero.
- let PATHS(X) indicate the number of paths to reach cell X.
PATHS(X) = PATHS(X.UP) + PATHS(X.LEFT)   if X is zero
         = 0                             if X is one
here X.UP indicates upper cell of cell X and X.LEFT indicates cell to the left of X.
- Overlapping Subproblems
While solving the subproblems, we find that the solutions of the same subproblems are needed again and again. To calculate the total number of paths to reach a particular cell, we need the number of paths to the cell to its left and to the cell above it. The number of paths to reach a given cell is thus needed multiple times, i.e. when we are calculating the number of paths for the cell to its right and for the cell below it. This shows that overlapping subproblems exist.
Since Optimal Substructure and Overlapping Subproblems exist, we can use Dynamic Programming to derive an efficient solution.
Complexity
For an n x m matrix:
- Time complexity: Θ(mn)
- Space complexity: Θ(mn)

(The space can be reduced to Θ(m) by keeping only the previous row of the dp table.)
Implementation
import java.util.Scanner;

public class Solution {
    public static void main(String[] args) {
        Scanner s = new Scanner(System.in);
        int n = s.nextInt();
        int m = s.nextInt();
        int mat[][] = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                mat[i][j] = s.nextInt();

        int dp[][] = new int[n][m];
        // First column: reachable until the first obstacle.
        for (int i = 0; i < n; i++) {
            if (mat[i][0] == 0) dp[i][0] = 1;
            else break;
        }
        // First row: reachable until the first obstacle.
        for (int i = 0; i < m; i++) {
            if (mat[0][i] == 0) dp[0][i] = 1;
            else break;
        }
        // Each open cell sums the paths from its upper and left neighbours.
        for (int i = 1; i < n; i++) {
            for (int j = 1; j < m; j++) {
                if (mat[i][j] != 1)
                    dp[i][j] += dp[i - 1][j] + dp[i][j - 1];
            }
        }
        System.out.println(dp[n - 1][m - 1]);
    }
}
Example
Consider the following matrix, 1 shows an obstacle.
Input: 3 3 0 1 0 0 0 0 1 0 0 Output: 2
For this input, the final dp array is constructed as:

1 0 0
1 1 1
0 1 2

The bottom-right entry, 2, is the number of paths.
Graph based approach 【O(M * N)】
The idea is to:
- Create a graph with all cell as different vertices
- Edges in the graph will denote if movement between two cell is possible
- Once the graph is ready, each node will hold a 1D matrix such that:
- matrix[v1] denotes the number of paths from vertex v1 to the current vertex
- While traversal using DFS or BFS, moving from vertex v1 to v2 will be such that:
- matrix[S] of v2 is the sum of matrix[S] of all directly connected predecessor vertices
- Finally, on reaching the destination vertex, the answer is matrix[S] where S is the source vertex
The actual complexity is O(E), where E is the number of edges.
In an M * N matrix there are at most about 2 * M * N edges (each cell has edges only to its right and bottom neighbours), and hence the complexity is O(M * N).
On close inspection, you will see that:
- This graph based approach is same as the dynamic programming approach (for this problem)
- This approach can solve a wider problem with less restrictions on movement
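A sketch of the idea (names are illustrative, not from the article): because every edge in this graph points right or down, scanning cells in row-major order visits each vertex only after both of its predecessors, so it is a valid topological order. Accumulating the per-vertex path counts in that order is exactly where this graph approach collapses into the dynamic programming table:

```java
public class GraphPathCounter {
    // paths[i][j] plays the role of matrix[S] for the vertex (i, j):
    // the number of paths from the source (0, 0) to that vertex.
    static int count(int[][] grid) {
        int n = grid.length, m = grid[0].length;
        if (grid[0][0] == 1) return 0;
        int[][] paths = new int[n][m];
        paths[0][0] = 1;
        // Row-major order is a topological order of the right/down DAG.
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                if (grid[i][j] == 1) { paths[i][j] = 0; continue; }
                if (i > 0) paths[i][j] += paths[i - 1][j]; // edge from the cell above
                if (j > 0) paths[i][j] += paths[i][j - 1]; // edge from the left cell
            }
        }
        return paths[n - 1][m - 1];
    }

    public static void main(String[] args) {
        int[][] grid = {{0, 1, 0}, {0, 0, 0}, {1, 0, 0}};
        System.out.println(count(grid)); // prints 2
    }
}
```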
Further reading
An extension to this problem is to allow moves in any of the four directions. We can't solve this using dynamic programming because current state depends not only on left and upper cells but also on right and bottom cells. This can be solved using:
- Dijkstra's Algorithm
- the graph approach we mentioned | https://iq.opengenus.org/count-paths-from-top-left-to-bottom-right-of-a-matrix/ | CC-MAIN-2022-27 | refinedweb | 928 | 62.31 |
Using the sine wave example from the home page how-to, I've been able to get gnuplot to generate the plot, but it immediately terminates, flashing only briefly on the screen. I bypassed the which method and hardcoded the executable in gnuplot.rb, starting at line 44:
def Gnuplot.gnuplot( persist = true )
  #cmd = which( ENV['RB_GNUPLOT'] || 'gnuplot' )
  #cmd += " -persist" if persist
  cmd = "C:\\Gnuplot\\pgnuplot.exe" + " -persist" if persist
  cmd
end
It appears that the -persist flag is not being passed to the command line. Any thoughts on how to get the -persist flag to function and keep the graph active long enough for me to view it?
Thanks,
-j | https://www.ruby-forum.com/t/ruby-gnuplot-on-windows/60627 | CC-MAIN-2022-21 | refinedweb | 111 | 70.84 |
memkind man page
memkind — Heap manager that enables allocations to memory with different properties.
This header exposes an EXPERIMENTAL API, except for the STANDARD API placed in the LIBRARY VERSION section. The API standards are described below in this man page.
Synopsis
#include <memkind.h>

Link with -lmemkind

ERROR HANDLING:
    void memkind_error_message(int err, char *msg, size_t size);

HEAP MANAGEMENT:
    void *memkind_malloc(memkind_t kind, size_t size);
    void *memkind_calloc(memkind_t kind, size_t num, size_t size);
    void *memkind_realloc(memkind_t kind, void *ptr, size_t size);
    int memkind_posix_memalign(memkind_t kind, void **memptr, size_t alignment, size_t size);
    void memkind_free(memkind_t kind, void *ptr);

KIND MANAGEMENT:
    int memkind_create_pmem(const char *dir, size_t max_size, memkind_t *kind);
    int memkind_check_available(memkind_t kind);

DECORATORS:
    void memkind_malloc_pre(memkind_t *kind, size_t *size);
    void memkind_malloc_post(memkind_t kind, size_t size, void **result);
    void memkind_calloc_pre(memkind_t *kind, size_t *nmemb, size_t *size);
    void memkind_calloc_post(memkind_t kind, size_t nmemb, size_t size, void **result);
    void memkind_posix_memalign_pre(memkind_t *kind, void **memptr, size_t *alignment, size_t *size);
    void memkind_posix_memalign_post(memkind_t kind, void **memptr, size_t alignment, size_t size, int *err);
    void memkind_realloc_pre(memkind_t *kind, void **ptr, size_t *size);
    void memkind_realloc_post(memkind_t kind, void *ptr, size_t size, void **result);
    void memkind_free_pre(memkind_t *kind, void **ptr);
    void memkind_free_post(memkind_t kind, void *ptr);

LIBRARY VERSION:
    int memkind_get_version();
Description
memkind_error_message() converts an error number err returned by a member of the memkind interface to an error message msg where the maximum size of the message is passed by the size parameter.
HEAP MANAGEMENT:
The functions described in this section define a heap manager with an interface modeled on the ISO C standard APIs, except that the user must specify the kind of memory with the first argument to each function. See the Kinds section below for a full description of the implemented kinds.
memkind_malloc() allocates size bytes of uninitialized memory of the specified kind. The allocated space is suitably aligned (after possible pointer coercion) for storage of any type of object. If size is 0, then memkind_malloc() returns NULL.
memkind_calloc() allocates space for num objects each size bytes in length in memory of the specified kind. The result is identical to calling memkind_malloc() with an argument of num*size, with the exception that the allocated memory is explicitly initialized to zero bytes. If num or size is 0, then memkind_calloc() returns NULL.
memkind_realloc() changes the size of the previously allocated memory referenced by ptr to size bytes of the specified kind. The contents of the memory remain unchanged up to the lesser of the new and old sizes; upon success, a pointer to the (possibly moved) allocated memory is returned.
Note: memkind_realloc() may move the memory allocation, resulting in a different return value than ptr.
If ptr is NULL, the memkind_realloc() function behaves identically to memkind_malloc() for the specified size. The address ptr, if not NULL, must have been returned by a previous call to memkind_malloc(), memkind_calloc(), memkind_realloc(), or memkind_posix_memalign() with the same kind as specified to the call to memkind_realloc(). Otherwise, if memkind_free(kind, ptr) was called before, undefined behavior occurs.
memkind_posix_memalign() allocates size bytes of memory of a specified kind such that the allocation's base address is an even multiple of alignment, and returns the allocation in the value pointed to by memptr. The requested alignment must be a power of 2 at least as large as sizeof(void *). If size is 0, then memkind_posix_memalign() returns NULL.
memkind_free() causes the allocated memory referenced by ptr to be made available for future allocations. This pointer must have been returned by a previous call to memkind_malloc(), memkind_calloc(), memkind_realloc(), or memkind_posix_memalign(). Otherwise, if memkind_free(kind, ptr) was already called before, undefined behavior occurs. If ptr is NULL, no operation is performed. The value of MEMKIND_DEFAULT can be given as the kind for all buffers allocated by a kind that leverages the jemalloc allocator. In cases where the kind is unknown in the context of the call to memkind_free() 0 can be given as the kind specified to memkind_free() but this will require a look up that can be bypassed by specifying a non-zero value.
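A minimal round trip through these functions, using the always-available MEMKIND_DEFAULT kind, might look as follows (an illustrative sketch, not part of the man page; link with -lmemkind):

```c
#include <memkind.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    char *buf = memkind_malloc(MEMKIND_DEFAULT, 128);
    if (buf == NULL) {
        fprintf(stderr, "memkind_malloc failed\n");
        return 1;
    }
    strcpy(buf, "hello");

    /* Grow the allocation; note the pointer may move. */
    buf = memkind_realloc(MEMKIND_DEFAULT, buf, 4096);
    printf("%s\n", buf);

    memkind_free(MEMKIND_DEFAULT, buf);
    return 0;
}
```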
KIND MANAGEMENT:
There are built-in kinds that are always available, and these are enumerated in the Kinds section. The user can also create their own kinds of memory. This section describes the API's that enable the tracking of the different kinds of memory and determining their properties.
memkind_create_pmem() is a convenience function used to create a file-backed kind of memory. It allocates a temporary file in the given directory dir. The file is created in a fashion similar to tmpfile(3), so that the file name does not appear when the directory is listed and the space is automatically freed when the program terminates. The file is truncated to a size of max_size bytes and the resulting space is memory-mapped.
Note that the actual file system space is not allocated immediately, but only on a call to memkind_pmem_mmap() (see memkind_pmem(3)). This allows creating a pmem memkind of a fairly large size without having to reserve the corresponding file system space for the entire heap in advance. The minimum max_size value allowed by the library is defined in <memkind_pmem.h> as MEMKIND_PMEM_MIN_SIZE. Calling memkind_create_pmem() with a smaller size will return an error. The maximum allowed size is not limited by memkind, but by the file system specified by the dir argument. The max_size passed in is the raw size of the memory pool, and jemalloc will use some of that space for its own metadata.
memkind_check_available() returns zero if the specified kind is available, or an error code from the Errors section if it is not.
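A sketch of creating and using a file-backed kind per memkind_create_pmem() above (the directory and pool size below are arbitrary illustrative choices; the pool must be at least MEMKIND_PMEM_MIN_SIZE):

```c
#include <memkind.h>
#include <stdio.h>

int main(void)
{
    memkind_t pmem_kind;

    /* 32 MiB pool backed by an unlinked temporary file in /tmp. */
    int err = memkind_create_pmem("/tmp", 32 * 1024 * 1024, &pmem_kind);
    if (err) {
        char msg[128];
        memkind_error_message(err, msg, sizeof(msg));
        fprintf(stderr, "memkind_create_pmem: %s\n", msg);
        return 1;
    }

    void *p = memkind_malloc(pmem_kind, 1024);
    memkind_free(pmem_kind, p);
    return 0;
}
```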
DECORATORS:
The memkind library enables the user to define decorator functions that can be called before and after each memkind heap management API. The decorators that are called at the beginning of the function are named after that function with _pre appended to the name, and those that are called at the end of the function are named after that function with _post appended to the name. These are weak symbols, and if they are not present at link time they are not called. The memkind library does not define these symbols, which are reserved for user definition. These decorators can be used to track calls to the heap management interface or to modify parameters. The decorators that are called at the beginning of the allocator pass all inputs by reference, and the decorators that are called at the end of the allocator pass the output by reference. This enables the modification of the input and output of each heap management function by the decorators.
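For instance, a user could supply a pair of logging decorators like the following sketch (these definitions are illustrative and live in user code, not in the library; the signatures come from the Synopsis above):

```c
#include <memkind.h>
#include <stdio.h>

/* Called automatically around every memkind_malloc() once linked in. */
void memkind_malloc_pre(memkind_t *kind, size_t *size)
{
    /* Inputs arrive by reference, so they could also be modified here,
       e.g. rounding *size up to a cache-line multiple. */
    fprintf(stderr, "malloc of %zu bytes requested\n", *size);
}

void memkind_malloc_post(memkind_t kind, size_t size, void **result)
{
    fprintf(stderr, "malloc(%zu) -> %p\n", size, *result);
}
```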
LIBRARY VERSION
The memkind library version scheme consists of major, minor and patch numbers separated by dots. Combining those numbers, we get the following representation:
major.minor.patch, where:
-major number is incremented whenever API is changed (loss of backward compatibility),
-minor number is incremented whenever additional extensions are introduced, or behavior has been changed,
-patch number is incremented whenever small bug fixes are added.
The memkind library provides a numeric representation of the version by exposing the following API:
int memkind_get_version() returns the version number represented by a single integer, obtained from the formula:
major * 1000000 + minor * 1000 + patch
Note: major < 1 means unstable API.
API standards:
-STANDARD API, API is considered as stable
-NON-STANDARD API, API is considered as stable, however this is not a standard way to use memkind
-EXPERIMENTAL API, API is considered unstable and subject to change
Return Value
memkind_calloc(), memkind_malloc() and memkind_realloc() return the pointer to the allocated memory, or NULL if the request fails. memkind_free() and memkind_error_message() do not have return values. All other memkind APIs return 0 upon success, and an error code defined in the Errors section upon failure. The memkind library avoids setting errno directly, but calls to underlying libraries and system calls may set errno.
Kinds
The available kinds of memory
- MEMKIND_DEFAULT
Default allocation using standard memory and default page size.
- MEMKIND_HUGETLB
Allocate from standard memory using huge pages. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_GBTLB (DEPRECATED)
Allocate from standard memory using 1GB chunks backed by huge pages. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_INTERLEAVE
Allocate pages interleaved across all NUMA nodes with transparent huge pages disabled.
- MEMKIND_HBW
Allocate from the closest high bandwidth memory NUMA node at time of allocation. If there is not enough high bandwidth memory to satisfy the request errno is set to ENOMEM and the allocated pointer is set to NULL.
- MEMKIND_HBW_ALL
Same as MEMKIND_HBW except decision regarding closest NUMA node is postponed until the time of first write.
- MEMKIND_HBW_HUGETLB
Same as MEMKIND_HBW except the allocation is backed by huge pages. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_HBW_ALL_HUGETLB
Combination of MEMKIND_HBW_ALL and MEMKIND_HBW_HUGETLB properties. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_HBW_PREFERRED
Same as MEMKIND_HBW except that if there is not enough high bandwidth memory to satisfy the request, the allocation will fall back on standard memory.
- MEMKIND_HBW_PREFERRED_HUGETLB
Same as MEMKIND_HBW_PREFERRED except the allocation is backed by huge pages. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_HBW_GBTLB (DEPRECATED)
Same as MEMKIND_HBW except the allocation is backed by 1GB chunks of huge pages. Note that size can take on any value, but full gigabyte pages will be allocated for each request, so the remainder of the last page will be wasted. This kind requires the huge pages configuration described in the System Configuration section.
- MEMKIND_HBW_PREFERRED_GBTLB (DEPRECATED)
Same as MEMKIND_HBW_GBTLB except that if there is not enough high bandwidth memory to satisfy the request, the allocation will fall back on standard memory. Note: This kind requires huge pages configuration described in System Configuration section.
- MEMKIND_HBW_INTERLEAVE
Same as MEMKIND_HBW except that the pages that support the allocation are interleaved across all high bandwidth nodes and transparent huge pages are disabled.
- MEMKIND_REGULAR
Allocate from regular memory using the default page size. Regular means general purpose memory from the NUMA nodes containing CPUs.
Errors
- memkind_posix_memalign()
returns one of the POSIX standard error codes EINVAL or ENOMEM as defined in <errno.h> if an error occurs (these have positive values). If the alignment parameter is not a power of two, or is not a multiple of sizeof(void *), then EINVAL is returned. If there is insufficient memory to satisfy the request, then ENOMEM is returned.
All functions other than memkind_posix_memalign() which have an integer return type return one of the negative error codes as defined in <memkind.h> and described below.
- MEMKIND_ERROR_UNAVAILABLE
Requested memory kind is not available
- MEMKIND_ERROR_MBIND
Call to mbind(2) failed
- MEMKIND_ERROR_MMAP
Call to mmap(2) failed
- MEMKIND_ERROR_MALLOC
Call to jemalloc's malloc() failed
- MEMKIND_ERROR_ALLOCM
Call to jemalloc's allocm() failed
- MEMKIND_ERROR_ENVIRON
Error parsing environment variable (MEMKIND_*)
- MEMKIND_ERROR_INVALID
Invalid input arguments to memkind routine
Environment
- MEMKIND_HOG_MEMORY
Controls the behavior of memkind with regard to returning memory to the underlying OS. Setting MEMKIND_HOG_MEMORY to "1" causes memkind not to release memory to the OS, in anticipation of the memory being reused soon. This will improve the latency of 'free' operations but increase memory usage.
- MEMKIND_DEBUG
Controls the logging mechanism in memkind. Setting MEMKIND_DEBUG to "1" enables printing messages such as errors and general information about the environment to stderr.
- MEMKIND_HEAP_MANAGER
Controls heap management behavior in memkind library by switching to one of the available heap managers.
Values:
JEMALLOC – sets the jemalloc heap manager
TBB – sets the Intel Threading Building Blocks heap manager. This option requires the Intel Threading Building Blocks library to be installed.
If MEMKIND_HEAP_MANAGER is not set, then the jemalloc heap manager will be used by default.
System Configuration
Interfaces for obtaining 2MB (HUGETLB) page information can be found here:
Static Linking
When linking statically against memkind, libmemkind.a should be used together with its dependencies libnuma and pthread. Pthread can be linked by adding /usr/lib64/libpthread.a as a dependency (the exact path may vary). Typically libnuma will need to be compiled from sources to use it as a static dependency. libnuma can be reached on github:
See Also
memkind_default(3), memkind_arena(3), memkind_hbw(3), memkind_hugetlb(3), memkind_pmem(3)
Referenced By
hbwallocator(3), hbwmalloc(3), memkind_arena(3), memkind_default(3), memkind_hbw(3), memkind_hugetlb(3), memkind_pmem(3). | https://www.mankier.com/3/memkind | CC-MAIN-2019-13 | refinedweb | 1,996 | 52.6 |
CodePlex Project Hosting for Open Source Software
I am using the help file WCFWebAPI_Con.chm to follow the section "Write a Web API Client in C#" of the tutorial "Getting Started with WCF Web API". When I compile the project, I get the following error
on the ListAllContacts() method shown below:
Error:
<<'System.Net.Http.HttpContent' does not contain a definition for 'ReadAs' and no extension method 'ReadAs' accepting a first argument of type 'System.Net.Http.HttpContent' could be found (are you missing a using directive or an assembly
reference?)>>
Method:
using System.Net.Http;
using System.Web;
using ContactManager;
using ContactManager.Resources;
static void ListAllContacts()
{
HttpClient client = new HttpClient();
HttpResponseMessage resp = client.GetAsync("").Result;
if (resp.IsSuccessStatusCode)
{
List<Contact> contacts = resp.Content.ReadAs<List<Contact>>();
//System.Threading.Tasks.Task<List<Contact>> contacts = resp.Content.ReadAsAsync<List<Contact>>();
foreach (Contact c in contacts)
{
Console.WriteLine("{0}: {1}", c.ContactId, c.Name);
}
}
}
Please help. Thanks.
Saf
Sorry, the docs in the .chm are a bit stale. In the latest Preview the HttpClient only supports async operations. So try ReadAsAsync<T>(…).
Daniel Roth
Dan, I had tried ReadAsAsync<T>(...), as you may have noticed in the line commented out below the ReadAs operation in my original post. But ReadAsAsync<T>(...) returns an object that does not support the foreach loop. How can I loop through the
contacts?
I could not find any documentation for httpContent class for .Net 4.0 (the framework that the tutorial is using) and this
MSDN link for the .NET 4.5 does not indicate any such method (ReadAsAsync<T>(...) for this class.
Thanks,
The easiest thing to do is add ".Result" just like you did on your GetAsync call.
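Putting the two answers together, the loop body from the original post would become something like this (a sketch against the preview-era WCF Web API, where ReadAsAsync<T> is the async replacement for ReadAs<T>):

```csharp
// Block until the async read completes, then enumerate as before.
List<Contact> contacts = resp.Content.ReadAsAsync<List<Contact>>().Result;
foreach (Contact c in contacts)
{
    Console.WriteLine("{0}: {1}", c.ContactId, c.Name);
}
```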
Hudson-ci/Using Hudson/Installing Hudson
Prerequisites
Hudson only needs a Java 5 or newer runtime.
WAR file
After you download [1], you can launch it by executing java -jar hudson.war. This is mostly useful for testing purposes. For production we recommend using native packages for a simplified install, or deployment in a servlet container that supports Servlet 2.4/JSP 2.0 or later, such as Glassfish, Tomcat 5, JBoss, Jetty 6, etc. See #Containers for more about container-specific installation instructions.
Once the war file is exploded, run chmod 755 hudson in the exploded hudson/WEB-INF directory so that you can execute this shell script.
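For a quick smoke test of the WAR-file route described above (assuming hudson.war has been downloaded to the current directory; the port flag is optional and 8080 is the default):

```shell
# Run Hudson standalone for evaluation; production installs should use
# the native packages or a servlet container instead.
java -jar hudson.war --httpPort=8080
```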
Unix/Linux Installation
The Hudson project provides native packages for various Linux distributions. These are the simplest way to run Hudson in production, since the packages set up user, service and all other configuration as well as integrate with the native upgrade mechanism of the operating system.
- Installing Hudson on Ubuntu
- Installing Hudson on Debian
- Installing Hudson on Oracle Enterprise Linux
- Installing Hudson on Red Hat Enterprise Linux
- Installing Hudson on CentOS
- Installing Hudson on Fedora
- Installing Hudson on openSUSE
For other operating systems check out the following pages for help.
- Installing Hudson as a Unix daemon
- Installing Hudson on OpenSolaris
- Installing Hudson on Gentoo
- Installing Hudson on FreeBSD
- Installing Hudson on FreeBSD 4.9
- Installing Hudson as a Solaris 10 service
- Installing Hudson as a Unix daemon, if your flavor of Unix isn't any of the ones above.
Alternatively, if you have a servlet container that supports Servlet 2.4/JSP 2.0, such as Glassfish v2 or Tomcat 5 (or any later versions), you can run them as services, and deploy hudson.war as you would any other war file. Container-specific documentation is available if you choose this route.
Windows Installation
If you're running on Windows you might want to run Hudson as a service so it starts up automatically without requiring a user to log in. The easiest way is to follow Installing Hudson as a Windows service. Alternatively, you can install a servlet container like GlassFish or Tomcat, which can run as a service by itself, and then deploy Hudson to it.
Since Hudson needs a shell to run shell build steps, make sure one is on your PATH, and copy sh.exe to C:\bin\sh.exe (or whichever drive you use) to make shebang lines work. This should get you going.
Another way is to first install Tomcat as a service and then deploy Hudson to it in the usual way. Yet another is to use the Java Service Wrapper. However, there may be problems using the service wrapper, because the Main class in Hudson in the default namespace conflicts with the service wrapper main class. Deploying inside a service container (Tomcat, Jetty, etc.) is probably more straightforward, even for developers without experience with such containers.
- Installing Hudson as a Windows service
I have an ‘account_controller.rb’ in an ‘admin’ module
I list the accounts with:
and get my table
trying to sort it, I use the follwoing sort_link_helper :
def sort_link_helper(text, param)
  key = param
  key += "_reverse" if @params[:sort] == param
  options = {
    :url => {:action => 'list', :params => @params.merge({:sort => key, :page => nil})},
    :update => 'table',
    :before => "Element.show('spinner')",
    :success => "Element.hide('spinner')"
  }
  html_options = {
    :title => "Sort by this field",
    :href => url_for(:action => 'list', :params => @params.merge({:sort => key, :page => nil}))
  }
  link_to_remote(text, options, html_options)
end
but the url_for generates the href:
even if I try to insert a :controller => '/admin/account', nothing
changes
@params : "{"action"=>"list", "controller"=>"admin/accounts"}"
which seems correct, but the href is not… <a
href="/admin/admin/accounts/list?sort=login"
I believe my problem is in the routing map, I have presently only
map.connect ':controller/:action/:id', :controller => 'user', :action => 'welcome'
maybe I should have a map identifying the admin module… but I don’t
know how to write it ?
thanks for any help
kad | https://www.ruby-forum.com/t/url-for-controller-routing-error/67718 | CC-MAIN-2021-25 | refinedweb | 168 | 55.54 |
Hello Chris, On Monday 17 of October 2016 03:00:56 Chris Johns wrote: > I have built and tested these patches with no issues and 0 spurious > interrupts. > > The changes look fine to me.
Thanks much for testing. I have pushed the changes to master.

I have spent considerable time over the weekend debugging the i386 SMP build, but without success. I have found and fixed some problems; I get to the state where two CPUs run under QEMU and even IPIs are exchanged, but then execution finishes with different kinds of fatal exceptions. On the other hand, an SMP build with only one CPU enabled runs stable under QEMU.

Some found issues and corrections, tested with

  CONFIGURE_SMP_MAXIMUM_PROCESSORS 2

According to more sources, INIT and SIPI duplication is not required on modern CPUs. INIT is ignored, so no problem, but if the second SIPI is delayed too much, then the first one causes a complete start of the second core, reaching RTEMS secondary_cpu_initialize(), and then the next SIPI resets the core and it starts again, so you count the CPU twice. The specification is strict about 200 usec between the two SIPIs, but according to some discussions the second SIPI is not required on modern systems. The next modification tries to bring up the second core with a single SIPI, and if the core does not reach RTEMS code in a really long time, it assumes an old system and retries the second SIPI. Probably the LAPIC version can be used there to switch this workaround off for newer cores entirely.

diff --combined c/src/lib/libbsp/i386/shared/smp/smp-imps.c
--- a/c/src/lib/libbsp/i386/shared/smp/smp-imps.c
+++ b/c/src/lib/libbsp/i386/shared/smp/smp-imps.c
@@@ -301,12 -301,12 +334,21 @@@
   */
  if (proc->apic_ver >= APIC_VER_NEW) {
--    int i;
--    for (i = 1; i <= 2; i++) {
++    int retry = 2;
++    while (retry) {
++      int wait_for_ap = 100;
++
++      printk("boot_cpu sending SIPI\n");
      send_ipi(apicid, LAPIC_ICR_DM_SIPI | ((bootaddr >> 12) & 0xFF));
--      UDELAY(1000);
++      do {
++        UDELAY(10);
++        if (acked_ap_cpus != _Atomic_Load_uint(&imps_ap_cpus_acked, ATOMIC_ORDER_SEQ_CST))
++          retry = 0;
++      } while(wait_for_ap--);
    }
  }
++  printk("boot_cpu secondary acked\n");

  /*
   * Generic CPU startup sequence ends here, the rest is cleanup.
Another strange thing for SMP is that secondary CPUs continue to run with the temporary GDT set up by the CPU SIPI trampoline code at 0x70000. So I have added a function to set up on the secondary CPU the same GDT as is used on the primary:

diff --combined c/src/lib/libbsp/i386/shared/smp/smp-imps.c
--- a/c/src/lib/libbsp/i386/shared/smp/smp-imps.c
+++ b/c/src/lib/libbsp/i386/shared/smp/smp-imps.c
@@@ -785,7 -785,7 +829,7 @@@ static void secondary_cpu_initialize(vo
 {
   int apicid;

--  asm volatile( "lidt IDT_Descriptor" );
++  _load_segments_secondary();

   apicid = IMPS_LAPIC_READ(LAPIC_SPIV);
   IMPS_LAPIC_WRITE(LAPIC_SPIV, apicid|LAPIC_SPIV_ENABLE_APIC);
@@@ -794,6 -794,6 +838,8 @@@
   enable_sse();
 #endif

++  _Atomic_Fetch_add_uint(&imps_ap_cpus_acked, 1, ATOMIC_ORDER_SEQ_CST);
++
   _SMP_Start_multitasking_on_secondary_processor();
 }

+++ b/c/src/lib/libbsp/i386/pc386/startup/ldsegs.S
++
++  .p2align 4
++
++  PUBLIC ( _load_segments_secondary)
++SYM (_load_segments_secondary):
++
++  lgdt SYM(gdtdesc)
++  lidt SYM(IDT_Descriptor)
++
++  /* Load CS, flush prefetched queue */
++  ljmp $0x8, $next_step_secondary
++
++next_step_secondary:
++  /* Load segment registers */
++  movw $0x10, ax
++  movw ax, ss
++  movw ax, ds
++  movw ax, es
++  movw ax, fs
++  movw ax, gs
++  ret

As for the GDT, I am not sure if one GDT for all CPUs can work. The problem is the TSS. RTEMS runs in ring 0 only, so the TSS is not required to switch between ring 3 and ring 0, but I am not sure there is not some situation where the TSS is accessed anyway. An additional TSS is required for sure if we want to catch the double fault exception. To be on the safe side, it would be worth moving the user descriptors to an LDT, leaving the global GDT to point to the LDT and an area for the primary CPU's TSS, and allocating a new GDT and TSS for each CPU during bringup.

Generally, I am not an expert on x86 and do not like this area much, so if some student or somebody else wants to exercise on these remains of the dark age, I would be happy and would help with some hints.
On the other hand, RTEMS should provide reasonable x86 SMP support for testing, be it under Jailhouse or other interesting hypervisor-based divisions into RT and generic OS domains. Another option is to leave i386 SMP support unsolved and add x86_64 support, which skips many of these oddities like the TSS and GDT. (OK, a GDT is required, but only the simple short-lived one in the trampoline is enough.)

Best wishes,

Pavel

_______________________________________________
devel mailing list
devel@rtems.org
How to access desktopcouch contacts from outside evolution?
I'd like to be able to query my desktopcouch contacts list from mutt or emacs; this is because, while I use evolution on my full-powered system, on my small bring-it-everywhere laptop I run a minimal X session and emacs. I'd like to have my contacts available on both, though. Can you guys point me to a resource somewhere that documents how i could perform a query on the db externally, say from bash or python; i could then hook into mutt or emacs using their own interfaces.
thanks, i'm really looking forward to desktopcouch!
matt
Question information
- Language:
- English Edit question
- Status:
- Solved
- Assignee:
- Stuart Langridge Edit question
- Solved by:
- Matt Price
- Solved:
- 2009-10-15
- Last query:
- 2009-10-15
- Last reply:
- 2009-10-15
- Whiteboard:
- Stuart, Can you answer this question for Matt? Thanks!
Matt: those docs should be on your machine as well as /usr/share/
Nicola and Stuart, thanks for the pointers. Looking at the
seem entirely straightforward to query the database, say, looking for a
name or an email. so, for instance, i have this couchdb record in my
contacts database (ugly text copied from the html view):
_id "pas-id-
description
address
if i know this in advance i can access the individual fields like this:
>>> db = CouchDatabase(
>>> fetched = db.get_
>>> print fetched[
but what if i want to search for people named 'me', or addresses
containing "gmail"? Will i need to create a design document, then write
a view, and a function that iterates over the rows returned?
And do the map/reduce functions need to be written in
javascript?
by evolution -- that i can query directly from python? (i noticed the
web interface doesn't think there are any permanent design docs in the
contacts db - i'm wondering also if that's one reason why email-address
completion is currently so slow in evolution).
thanks much for your help. i'll keep mining the web for more info, but
any further assistance you guys can give is much appreciated.
Matt
hey sorry for the lousy formatting on that last response, copied and
pasted from evolution after my last email was rejected...
Yep; the way to query the database is by writing a view to do so. In your example, your view would look like:
function(doc) {
emit(
}
and then you access that view by name
result = db.execute_
print result["me"]
On Thu, 2009-10-15 at 18:06 +0000, Stuart Langridge wrote:
> Your question #85917 on desktopcouch changed:
> https:/
>
>"]
>
since you're generous enough to answer,
this works great, thusly:
>>> map_js = "function(doc) {emit (doc.last_
>>> db.add_
>>> result=
>>> print result[
[<Row id='pas-
'me', 'last_name': 'gmail', '_rev':
'1-74cb956983d1
'http://
'_id': 'pas-id-
{'eed38e30-
'address': '<email address hidden>'}}}>]
print result just gave me the object description:
<ViewResults <PermanentView
'_design/
since what i really want is a completion function, i think i need to
return something like a list of dictionaries, or maybe just of lists:
[['Matt Price', '<email address hidden>'],['Matt
Price','<email address hidden>
What do you think would be the best way to get this information? i
thought from the way you responded last time that there's an efficient
way of grabbing the data directly from couchdb, rather than getting a
big old python object and then massaging it slowly in my python code.
thanks for being patient with me. it's fun to learn this stuff.
matt
--
Matt Price
>"]
getting closer to understanding this. if i do this:
result=
then result.rows returns a tuple of row objects that look like this:
<Row id='pas-
but if instead i define a very simple mapping function:
map_js = "function(doc) {emit (doc.last_
db.add_
result=
then the tuple returned looks like this:
<Row id='pas-
and the key is now a useful value. nonetheless i can't just execute
print result["me"]
and get the values i'm looking for. instead i have a somewhat
ugly-feeling selection logic:
for x in result:
if x.key == "me":
for y in x.value[
print x.value[
this seems a little clumsy to me and i'm wondering whether i'm missing a
class somewhere that would make my code less convoluted and ultimately
more reusable by other people.
it seems a bit odd to keep posting to this question, and i know you're
all busy with the release coming up; so just ignore this if it's the
wrong forum.
thanks for all the help,
matt
I'm returning to this after a while and I have some very simple code
that works OK for a query on last name:
[oops, hit ctrl-return]
here's the code:
#!/usr/bin/python
import sys
from desktopcouch.
from desktopcouch.
# define the search string
searchString=""
if len(sys.argv) > 1:
searchString= sys.argv[1]
# initialize db
db = CouchDatabase(
#create view
design_doc = "matts_cli_client"
map_js = "function(doc) {emit (doc.last_
db.add_
# results=
results=
# print matching results
for x in results:
if searchString.
print x.value[
if "email_addresses" in x.value:
for y in x.value[
-------
This seems pretty functional for now, and I can figure out the best way
to format the output for emacs or mutt a little later. First, though,
I'd like to improve the search a little. This very simple function lets
me search on last name only, but it would probably be better if it
checked for matches in a variety of fields -- say first, last, and all
the email addresses. It'd be great if my key, in the view, were a
combination of those fields (so that key looked like, say, "Matt Price
<email address hidden>"). In fact, if I could just get the map/reduce
functions to produce a sequence of such keys, that'd really be all i need.
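For the mutt side, its query_command convention expects an informational header line followed by one tab-separated address<TAB>name line per match; a self-contained sketch of that formatting (the sample dict below is a made-up stand-in for what the view rows carry):

```python
def format_for_mutt(contacts):
    """Render contact dicts as mutt query_command output:
    an informational first line, then one 'address<TAB>name'
    line per email address."""
    lines = ["Searching... %d matching contacts" % len(contacts)]
    for c in contacts:
        name = ("%s %s" % (c.get("first_name", ""),
                           c.get("last_name", ""))).strip()
        addrs = sorted(a["address"] for a in
                       c.get("email_addresses", {}).values())
        for addr in addrs:
            lines.append("%s\t%s" % (addr, name))
    return "\n".join(lines)

# made-up record shaped like the couchdb contact shown earlier
sample = [{"first_name": "Matt", "last_name": "Price",
           "email_addresses": {
               "uuid-1": {"address": "matt@example.org"},
               "uuid-2": {"address": "mprice@example.org"}}}]
print(format_for_mutt(sample))
```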
So I just wondered whether you could tell me where the evolution
contact-search design document is, so I could usei t as a model -- I
assume it's somewhere, but i didn't find it in a quick search through
the source of evolution-couchdb and desktopcouch.
Thanks as always!
Matt,
You may not know that a map function can emit more than one document. So:
import sys
from desktopcouch.
from desktopcouch.
# define the search string
searchString=""
if len(sys.argv) > 1:
searchstring= sys.argv[1]
# initialize db
db = CouchDatabase(
#create view
design_doc = "matts_cli_client"
map_js = """function(doc) {
if (doc.last_name) emit(doc.
if (doc.first_name) emit(doc.
for (k in doc.email_
}
}"""
db.add_
# results=
results=
# print matching results
for contact in results[
print "%(first_name)s %(last_name)s" % contact.value
if "email_addresses" in contact.value:
for eml in contact.
print " %s" % contact.
Hi Matt, there are some initial Python API docs in the desktopcouch/
records/ doc directory in the tarball or branch, try and see if they make sense.
Once you have any integration working with mutt or emacs, it'd be great to have a look at it! | https://answers.launchpad.net/desktopcouch/+question/85917 | CC-MAIN-2015-32 | refinedweb | 1,170 | 70.53 |
Closed Bug 600280 Opened 12 years ago Closed 12 years ago
[gfxInfo] Driver version and date are empty or null for some graphic cards under Windows 2000/XP
Categories
(Core :: Graphics, defect)
Tracking
()
mozilla2.0b11
People
(Reporter: scoobidiver, Assigned: tete009+bugzilla)
References
(Blocks 1 open bug)
Details
Attachments
(5 files, 3 obsolete files)
Build : Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100928 Firefox/4.0b7pre Driver version and date are empty in the graphic section of "about:support" page.
Do they show up with gfxbot?
Ftr, "Adapter RAM : Unknown" is bug 591787.
FYI, gfxInfo's mDeviceID gets the following string on my system: PCI\VEN_1002&DEV_4150&SUBSYS_0200174B&REV_00
I tried creating a patch by reference to the following document: "Adding a PnP Device to a Running System" approach: 1. Get "Driver" value from HKLM\System\CurrentControlSet\Enum\<enumerator>\<deviceID>\<instanceID>. 2. Open HKLM\System\CurrentControlSet\Control\Class\<DriverValue> and get DriverVersion and DriverDate. Worrisome point is Microsoft has listed the registry's Enum branch for debugging purposes only. When using SetupAPI, it seems that we can avoid the direct access of Enum key, but I think Windows CE 6 may not include SetupAPI. :-(
(In reply to comment #6)
> approach:
> 1. Get "Driver" value from
> HKLM\System\CurrentControlSet\Enum\<enumerator>\<deviceID>\<instanceID>.
I think this approach needs the instance ID of the device which displays the primary desktop, but I haven't found a way to get it...
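A rough illustration of that two-step registry walk (an untested sketch with error handling trimmed; the instance path argument is a placeholder, since finding the instance that drives the primary desktop is exactly the open problem here):

```cpp
#include <windows.h>
#include <stdio.h>

/* Step 1: read the "Driver" value under the device's Enum key.
 * Step 2: read DriverVersion under Control\Class\<that value>.
 * instancePath would look like
 * "PCI\\VEN_1002&DEV_4150&SUBSYS_0200174B&REV_00\\<instance>". */
static BOOL
GetDisplayDriverVersion(const wchar_t *instancePath,
                        wchar_t *version, DWORD versionBytes)
{
  wchar_t path[512], driver[256];
  DWORD cb = sizeof(driver);
  HKEY key;
  BOOL ok = FALSE;

  /* Step 1: HKLM\System\CurrentControlSet\Enum\<instance>\Driver */
  _snwprintf(path, 512, L"System\\CurrentControlSet\\Enum\\%s", instancePath);
  if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0, KEY_QUERY_VALUE, &key))
    return FALSE;
  if (RegQueryValueExW(key, L"Driver", NULL, NULL,
                       (LPBYTE)driver, &cb) == ERROR_SUCCESS)
    ok = TRUE;
  RegCloseKey(key);
  if (!ok)
    return FALSE;

  /* Step 2: HKLM\System\CurrentControlSet\Control\Class\<Driver value> */
  _snwprintf(path, 512, L"System\\CurrentControlSet\\Control\\Class\\%s", driver);
  if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0, KEY_QUERY_VALUE, &key))
    return FALSE;
  ok = RegQueryValueExW(key, L"DriverVersion", NULL, NULL,
                        (LPBYTE)version, &versionBytes) == ERROR_SUCCESS;
  RegCloseKey(key);
  return ok;
}
```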
Summary: [gfxInfo] Driver version and date are empty for some graphic cards under Windows 2000/XP → [gfxInfo] Driver version and date are empty or null for some graphic cards under Windows 2000/XP
(In reply to comment #6) > Worrisome point is Microsoft has listed the registry's Enum branch for > debugging purposes only. > When using SetupAPI, it seems that we can avoid the direct access of Enum key, > but I think Windows CE 6 may not include SetupAPI. :-( Do we support Windows CE at all? Does it matter for the present purpose of checking desktop graphics cards drivers?
(In reply to comment #8) > Do we support Windows CE at all? Does it matter for the present purpose of > checking desktop graphics cards drivers? I don't own Windows CE and fixing this bug is beyond my knowledge...
I wouldn't worry about supporting Windows CE at all. If needed we can always #ifdef stuff out.
So, this bug is really important and we should review Tetsuro's patch. Will do ASAP...
My patch has a bug. I think, it is necessary to insert the following line before calling RegQueryValueExW: dwcbData = sizeof(value);
Indeed, I had to do something similar to fix another bug recently (bug 621393) Will apply this fix to your patch. Let me keep the r? flag, it ensures I dont forget to review.
When unused device's informations are left in the registry's Enum branch, my current approach may get a wrong information. So I submit a new patch using Setup API and obsolete my current patch.
Comment on attachment 505046 [details] [diff] [review] use Setup API rv1.0 The patch looks good!
Attachment #505046 - Flags: review?(bjacob) → review+
Comment on attachment 505046 [details] [diff] [review] use Setup API rv1.0 Handing this over to Jeff who has more comments.
Attachment #505046 - Flags: review+ → review?(jmuizelaar)
Comment on attachment 505046 [details] [diff] [review] use Setup API rv1.0 for our sanity, please check all 4 function pointers using this if condition (w && x && y && z): >+ if (setupGetClassDevs) { >+ if (devinfo != INVALID_HANDLE_VALUE) { >+ HKEY key; >+ LONG result; >+ WCHAR value[255]; >+ DWORD dwcbData; >+ SP_DEVINFO_DATA devinfoData; >+ DWORD memberIndex = 0; >+ >+ devinfoData.cbSize = sizeof(devinfoData); reference mark A >+ while (setupEnumDeviceInfo(devinfo, memberIndex++, &devinfoData)) { >+ if (setupGetDeviceRegistryProperty(devinfo, please align arguments to first argument, i.e. ^ here: >+ &devinfoData, >+ SPDRP_DRIVER, >+ NULL, >+ (PBYTE)value, >+ sizeof(value), >+ NULL)) { this should be split into an NS_NAMED_LITERAL_STRING (at reference mark A above) and an nsAutoString here. >+ nsAutoString driverKey(L"System\\CurrentControlSet\\Control\\Class\\");
Attachment #505046 - Flags: review-
Comment on attachment 505046 [details] [diff] [review] use Setup API rv1.0 A couple of quick comments (I haven't looked into too much detail yet): - Please drop the WINCE stuff. - It would be nice if the SetupAPI calls were commented more. It's not that easy to follow what they're doing. - Does this cause us to load setupapi.dll when wouldn't otherwise? and will it hurt startup performance?
Attachment #505046 - Flags: review?(jmuizelaar) → review-
* Checking all 4 function pointers of SetupAPI. * Aligned arguments to first argument. * Dropped the WINCE stuff, except the part of "#include <setupapi.h>". * Added comments about SetupAPI calls. I measured elapsed time from LoadLibraryW to FreeLibrary, using QueryPerformanceCounter. Tested under x86 Windows 7 (CPU: Athlon X2 5050e, HDD: IC35L090AVV207-0, Mem: 2GB): cold startup: 2.89452 ms 2.82560 ms 2.84988 ms 2.82544 ms 2.84692 ms hot startup: 1.56996 ms 1.55084 ms 1.55232 ms 1.52980 ms 2.81968 ms I got almost the same results under Windows 2000 SP4.
Attachment #505046 - Attachment is obsolete: true
Attachment #505416 - Flags: review?(jmuizelaar)
Comment on attachment 505416 [details] [diff] [review] use SetupAPI rv1.1 >+#ifndef WINCE >+#include <setupapi.h> >+#endif one WINCE left. >+ /* create a device information set composed of the current display device */ >+ HDEVINFO devinfo = setupGetClassDevs(NULL, >+ mDeviceID.BeginReading(), this should use PromiseFlatString(mDeviceID).get() >+ NULL, >+ DIGCF_PRESENT | DIGCF_PROFILE | DIGCF_ALLCLASSES); >+ Other than that, this looks good. Thanks a lot for getting the timing information!
Attachment #505416 - Flags: review?(jmuizelaar) → review+
Attachment #505416 - Attachment is obsolete: true
Attachment #505753 - Flags: review+
Does this patch need superreview? If necessary, who should I request superreview to? I'm afraid I'm not used to patch-review-checkin process...
No need for superreview, as far as I know.
Comment on attachment 505753 [details] [diff] [review] use SetupAPI rv1.2 Nope, it just needs approval which I recommend.
Attachment #505753 - Flags: approval2.0?
I think this should wait until beta10 is cut to minimize the risk of it breaking things right before the freeze.
Status: NEW → RESOLVED
Closed: 12 years ago
Resolution: --- → FIXED
Assignee: nobody → tete009+bugzilla
Target Milestone: --- → mozilla2.0b11 | https://bugzilla.mozilla.org/show_bug.cgi?id=600280 | CC-MAIN-2022-33 | refinedweb | 1,015 | 60.51 |
I.
When, for example, one side says “Open XML normatively refers to MS’ proprietary WMF” and the other side says “Err, where? Not in the Normative Refences sections” and the first side says “Err, then there is an *implied* normative reference because a mention is made of it elsewhere as a possible kind of graphic that may come in from the clipboard” and the other side says “The ISO usage of ‘normative’ revolves around indispensibility: isn’t ‘possible’ the opposite of ‘indispensible’?…” disinterested observers may think Surely there is a more constructive approach? These silly examples are distractions from serious concerns.
So here is what I suggest, for national bodies reviewing Open XML: adopt a set of general principles and apply them (to Open XML, ODF, and whatever). When someone raises a specific issue, verify that the issue indeed is as claimed, find the general principle, and base your responses on that, with the particular flaw as an examplar. The tactic adopted by some activists is to read the draft text, think of the worst possible interpretation and ramification, then insist it is the case: the “normative reference” example is a good case of this. The trouble with this approach is that it won’t work; impartial reviewers will note that there is some kind of concern but that the actual issue raised does is not a problem. The result will be frustration and a lack of a “meeting of the minds”. Indeed the legitimate issues that underly some of the anti-OpenXML comments risk being unaddressed.
What kind of principles would there be? Here are a few off the top of my head:
Principle 1: A schema must allow standard data notations for atomic, embedded data fields, where the standards exists, and may also allow local, common, optimised or legacy notations.
Applying this to Open XML, for example, it would mean that where DrawingML uses EMUs coordinates, it also should allow inches, cm and points. And where Spreadsheet ML allows numbers for date indexes, it also should direct ISO 8601 dates. Do you see the difference between saying “Open XML should be banned because it uses EMU” and “Open XML should be improved to allow more than EMU”. The most important thing is that this is a superficial change to the exchange language, not to the underlying model: it doesn’t force MS to adopt a different model or require them to generate standard units. (That is a different issue: the issue of profiles or application conformance.)
Principle 2: A schema should allow direct representation of data fields, and may allow optimised forms as well
Applying this to Open XML, we see that the string approach taken by SpreadsheetML conforms: you can have text directly or index to a shared string table. Adopting this principle lets a National Body vet the issues: if someone says “This doesn’t look like HTML! Therefore it is bad!” the NB can say “We adopt the principle that optimized references can be allowed as long as literal content is allowed too”.
Principle 3: A schema language for compound documents should support an indirect or over-riding reference mechanism for entities or resource, and may disallow a direct mechanism.
SGML and XML DTDs have a mechanism called Entities that allow indirect references. This is really important for maintance of large documents, because it disconnects references from names: you can update a graphics file and a single reference. Applying this to Open XML, OPC meets the criterion. OASIS catalogs would also probably fit the bill.
Following from principle 1 and 2, an indirect reference mechanism should allow the standard notation (IRIs) but may also allow a local or optimized form. Applying this to Open XML, this principle would mean double checking that IRIs are allowed (I will check this sometime) in OPC; I don’t think that OPC uses a local, optimized or legacy form (I will check this sometime.)
Principle 4: Notations for legacy or obsolescent technologies may be included in a standard, but should be in an informative part, clause, namespace or annex.
Applying this to Open XML, the sections on VML would be marked “informative”.
Principle 5: A standard should be arranged as a modular, simply layered container, to allow plurality and evolution
I am not sure of the ramifications for Open XML: I need to check the part 5 of the standard, which deals with extenions and future-proofing. Certainly the use of MIME types in OPC follows this principle, but it goes more than that: could DrawingML be augmented or replaced by SVG for example? (I will check this sometime)
Principle 6: A standard core should be platform-neutral and may allow optional platform-dependent extensions, in a separate annex, namespace or clause where appropriate
I think Open XML is OK in this regard: it allows Word macros, Java, and other scripts, but these are not required and IIRC partitioned.
Principle 7: A standard should address a market requirement, and the availability of a standard for one market or set of standards does not preclude the development of a standard for a different market or set of requirements
In other words, no standard should be denied merely on the grounds of “My requirements are more important than yours”. In the case of Open XML, it means that “don’t ignore the elephant in the room” arguments —that the needs for level-playing field basic document exchange by governments and suite vendors (ODF’s supposed sweet spot) trump the needs of integrators, archivists, and so on for Office’s format to be standardized— would be rejected. (Not rejected from all consideration of course, but relegated to their proper place, which is for legislators, regulators, CIO policy makers, and profile makers, not ISO.)
Whither Interoperabilty
When a standard followed the kinds of principles above, it allows both full-fidelity (the main principle behind the design of Open XML) to meet round-tripping/API-replacement/archiving requirements, and it sets the stage for interoperability between different systems: this is where in addition to the broad requirements of the standard, specific limitations are imposed so that all the different kinds of local, legacy, optimized, common-but-non-standard, and platform-dependent notations, media types, scripts and so on are avoided. ODF has just as much need for these kinds of profiles as Open XML does, as far as document interchange goes, by the way.
It is a kind of paradox: an “open” data format must be extensible, but the more that extensions are used, the more that a closed range of applications will be able to use the document; a document format that is “open” in the sense of having a fixed definition that allows guranteed document interchange is actually must be a “closed” (non-extensible) format! The solution? The long-standing policy of SC34 is to standardise “enabling technologies” and to leave profiles to user groups and industry consortia: XML itself is an example of this. ISO SGML allows many different delimiters; the industry consortium W3C picked a particular set of delimiters and features, added some internationalization features, and re-branded their profile “XML” which gives simpler interoperabilty.
In the absense of these kinds of principles, what we have is a line of argument that reduces to “Microsoft is bad, therefore anything they do or make is bad”, even when Microsoft is forced to backflip and to start doing the opposite of what they previously did: in this case, abandoning closed, binary formats. Ten years ago, Bill Gates was saying they would be crazy to open up their file formats, now they are doing it. If users and, most importantly, system integrators, keep on encouraging them to further open up and adopt a more modular architecture, it bodes well for where we will be in ten years time. The future is mix and match.
Good approach, Rick. Fair and future-proof.
I love it when the pros are able to articulate positions that enable the standard to both function and breathe.
++1
len
[blcokquote]could DrawingML be augmented or replaced by SVG for example? (I will check this sometime) [/blockquote]
Probably DrawingML has several more features than SVG does. Even in ODF they have altered the standard SVG for use in Office documents.
I do like your describing of a method. Many a anti-ooxml zealot might want to read up and that and look at the standard proposal as something which can be improved to a very good standard and not as something to slander at as being a poor standard just because MS was involved in producing it.
Hi Rick,
I found your appraisal here quite interesting, and overall quite reasonable.
I certainly agree that many objections to OpenXML could be fixed with minor changes to OpenXML, but isn't that missing the point? Surely these changes need to be be made before it is accepted as a standard? My understanding is that the ISO bodies are being asked to vote on OpenXML as it currently stands, not on what it could be with some number of improvements. And if they are so simple, why haven't these changes been made already, so the bodies could vote on that? In reality, the standard will be whatever is ratified in the vote. So if OpenXML is currently broken in various areas, and is accepted as an ISO standard, then all we have as a result is a broken standard.
I also don't understand your argument that "the needs of integrators, archivists, and so on for Office's format to be standardized" requires making OpenXML an ISO standard.
Surely integrators and archivists will need to convert existing documents from their legacy formats into the new standard format, whether the legacy format is Word, WordPerfect, or whatever, and the new standard be OpenXML, ODF, or something completely different?
I am strongly opposed to OpenXML becoming a standard, primarily because I don't see "legacy support" as a valid reason for crippling a new standard. As a single example, the part of the OpenXML standard which specifies incorrect date interpretations for a small period of time in 1900, for the sake of legacy Microsoft Documents and applications.
Surely the new standard should provide for consistent and non-broken representations, and rely on the conversion process from legacy to the new standard to take care of correcting flaws in the legacy representation?
I really believe that a new standard should represent current best-practice and be forward-looking, rather than represent past mistakes and be backward-looking.
Thanks for listening.
Cheers!
Nik
Nik: I think your understanding is indeed wrong. There are various options for voting: the one that is most often used when there is a lot of interest in a specification is "no with comments" which means "yes if these improvements are made". It is not a simple up/down vote. If there are not enough simple yes votes to get accepted (which I don't expect and would not welcome) but a certain number of "no with comments", then there is a ballot resolution meeting (BRM) by in which editing instructions are developed based on the "no with comments" votes. If the instructions can be adopted (and the resulting spec is still acceptable to the proposing body Ecma) then after another vote (at the BRM) the thing is accepted.
The BRM is the big chance to fix problems. Where did you get the idea that there would be no forum for fixing up problems?
I would expect any standard to have changes during the process and then to have other changes (corrigenda and addenda) even after adoption. Standards that are not maintained or improved are dead standards.
Please spare me the cut-and-pasted boilerplate talking point about Open XML. The two month period more than a century ago in which there can be out by 1 error if dumb formatting is used in one of the spreadsheet date formats is simply not a big enough problem to prevent ISO standardization. It is a trivial edge case, of academic interest, not a showstopping flaw.
On the issue that a standard should represent best-practice, the trouble is that there are many other places in Open XML where there are best practices: so should we have a trade-off so that one flaw cancels one best practice, or should we judge how trivial or important the flaw is in context?
Hi Rick,
Thanks for your thoughtful response, and for clarifying the voting and editing process. However, I still fail to understand why ECMA couldn't have addressed the problems before making the proposal and asking for a fast-track decision. Why the rush?
You state that you expect that the vote will be "no with comments". What happens if that isn't the case, and the broken proposal is accepted as it stands? This surely is a possibility? And why would ECMA put this proposal forward in its current (broken) state unless they wanted it to be accepted in this state?
You state: "I would expect any standard to have changes during the process...". Why? And perhaps more to the point: which part of the process?.
I understood the purpose of all the working groups (the formulating part of the process) to be the place to sort out the obvious problems in the proposal. This is where the industry experts are supposed to get involved to ensure the standard does represent best practice and isn't broken. This part of the process would seem to be over, and we are now in the part where the standards body is being asked to accept the proposal. I don't understand why you would expect this part of the process to be responsible for fixing problems that are already apparent.
You then go on to say: "Standards that are not maintained or improved are dead standards." But this is a completely different thing to fixing a standards proposal before it is a standard. I agree that standards should be maintained once they have become a standard, but that doesn't mean that standards bodies should be asked to accept broken proposals (which is what ECMA is asking) or to fix broken proposals in the voting process.
My apologies for what you consider the "boilerplate" flaw I used as an example. I cited a single example which I felt clarified my point - I never claimed that this point alone should be a show-stopper.
You then state: "so should we have a trade-off so that one flaw cancels one best practice, or should we judge how trivial or important the flaw is in context". I don't understand how you conclude these are the only alternatives. Why can't we have a standard with only best practice, and no flaws?
There is a lot in OpemXML which is already supported in the existing ODF standard. The primary points of difference in OpenXML are all the lagacy support for the Microsoft formats. In terms of best practice, surely the best way forward is to take the existing ODF standard as a starting point, add whatever best-practice parts of OpenXML that are not supported by the existing ODF standard, and discard all the legacy parts of OpenXML.
I also didn't find any response from you to my question as to why there is a requirement to make OpenXML a standard at all. Did you answer that point, and I just failed to understand that, or am I correct that you didn't answer that point?
Thanks again for your response and clarifications.
Cheers!
Nik
Nik: I think there is absolutely no chance the existing text will not have many improvements, either from "no with comments" or even "yes with comments" (which I only found out was available today: it would be used for non-showstopping fixes). The process is there to help make sure the issues that national bodies raise get addressed one way or another.
Which part of the process? Well, for fast track, this is the only place where changes can be made: not changes to the semantics of the technology (that flies in the face of reason: this kind of standard is useful to the extent that it accurately reflects the external reality not because it dictates reality) though. Even in the normal standards process, you would expect comments right up to the final vote: it is not an easy process to make a good standard.
Why should it become a standard? Because there is a market requirement (e.g. integrators who current work with .DOC binary formats, archivists, and remember that it was the EU that asked MS to submit their formats for international standardization in the first place). Because the final text will be of an acceptable quality and the IP issues will be sorted out (though I expect there are people who will never be satisfied in both areas). Because it doesn't conflict with ODF: the drivers for ODF adoption are different from the drivers for Open XML adoption, and the latter will not cancel out the former. Because Open XML is good for the open source world and open-source-using developers (such as myself, where my company uses Java on Linux for a large part), allowing better reach into MS-dominated sites.
And because the underlying data models of Open XML and ODF are different enough both at the heart and in their details that they are not substitutes for each other in their current or short-term forms. Check out ODF editor Patrick Durusau's recent comments at INCITs in this regard.
Rick,
Thanks for your answers. It's nice to get a clear response rather than zealotry.
However, I still don't understand how OpenXML represents a benefit to integrators and archivists that ODF does not. Surely if existing .DOC documents were stored in ODF, integrators such as yourself as well as archivists etc, would get all the same benefits you attribute to OpemXML.
I had understood from your original post that you felt integrators and archivists needed OpenXML, but you didn't say why. In your response you have repeated the assertion, but still haven't actually said why. I understand from the rest of your response that you believe that an existing .DOC document could not be converted to ODF with the same fidelity as with OpemXML. Have I correctly understood your position?
I would find that surprising given the success I've had converting to and from .DOC documents (and particularly old ones) using OpenOffice, but at this stage I would have to bow to your greater knowledge on this topic. I obviously need to research this further. On that point, thank you for the cited reference to Patrick Durusau.
One question though, in the hope you know the answer: Does the perceived ability of OpenXML to represent .DOC documents with greater fidelity than ODF rely on those parts of the OpenXML stadard which don't state the behaviour explicitly, but instead state that the new application must replicate the behaviour of the old Microsoft application?
The reason I ask is because I would expect that those parts of OpenXML are most likely to feature in the "no with comments" responses, and so if they are removed from the standard, then does the original claim that OpenXML can represent a .DOC document with greater fidelity than ODF still hold true?
One other interesting point: You state: "[...]remember that it was the EU that asked MS to submit their formats for international standardization in the first place". Is that actually the case? From the reports I read, I understood that Microsoft had only been asked to open up their formats - in other words to publish the details. I didn't read anything about EU asking that they be made a standard. In addition, I understood that the directive (I thought it was a directive, not a request) applied to the existing binary formats (eg Word6, Word95) as much or more than a new XML format.
Thanks again for your helpful responses.
Cheers!
Nik.
Iceberg: The trouble is that stripping out things in Open XML that are not in ODF leaves you pretty much with...ODF. It is the differences from ODF, or at least, the completeness of Open XML, that is the value of ISO Open XML.
For reviewers, it becomes difficult to say on one hand that the specification needs to be complete ("wee need more") but also on the other hand that it needs to have obsolescent parts like VML entirely removed ("we need less").
I don't think that people realize it, but MS was pushed on the Open XML path by the EU, who asked them to open up their existing formats as XML and to submit them to an international standards body. In part, they are opening up in order to mollify the Europeans, it seems. | http://www.oreillynet.com/xml/blog/2007/05/reasonable_principles_for_revi.html | crawl-002 | refinedweb | 3,499 | 56.89 |
Originally posted by Pratik R Patel:
I have a license for IDEA and was excited to use it for Grails/Groovy development - but I find it to be sluggish.
[ July 06, 2008: Message edited by: Pratik R Patel ]
Originally posted by dema rogatkin:
Where can I get information about RoR? I'm working on own framework, but maybe I'm wasting time.
Originally posted by Mark Citizen:
Based on my own experience Spring is better for simple web applications (quick deployment, relatively easy to code), while EJB is better for B2B services, and complex distributed apps.
One thing: if you know you need to use EJBs, I wouldn't recommend mixing them with Spring, since you'd need to maintain both application descriptors and Spring xml config files (not to mention the coding part).
Regardless of what Spring framework website says, it's not a walk in the park.
quote:All skeptics aren't wrong.
They aren't right, either. That is, of course they have all the rights to be skeptic. But whether their fears actually would manifest once they try whatever they are skeptic about - who knows...
So what are we supposed to do?
quote:I find that one falls into the zealot trap when the technique becomes the One True Way and no alternatives can even be discussed and criticism is not allowed.
Does that really happen? In the Agile community? I can't remember such an incident - example please?
Originally posted by Ilja Preuss:
Does that really happen? In the Agile community? I can't remember such an incident - example please?
Originally posted by garfild Baram:
Hi,
I need to retrieve SystemRoot variable value withing a java class.
I couldnt find a way to do it.
Can you help me here?
Thanks
Yossi
Originally posted by Stan James:
I have trouble when it swings the other way - the enthusiast insists that no other way can possibly work. To hear/read some agile fans (or OO or whatever the latest thing is) criticize the status quo you'd find it hard to believe any useful software was ever built before they came along, that any company ever had a IT advantage that made them wealthy, that anyone ever had any fun meeting customer needs. I'm approaching 28 years on the job and did NOT spend 25 of them in abject misery until XP saved my soul.
Originally posted by Ilja Preuss:
On the one hand, Agile *always* was broader than XP, naturally. Scrum, for example, basically being a project management method, has always been used for non-software projects, as far as I know.
And on the other hand, all the processes have become broader with increasing experience at applying them in different situations. XP, for example, has been shown to work in much larger projects than initially expected by its creators.
Originally posted by Sam Gehouse:
Is debuggable statement package a part of J2SE? If yes, which version? If no, what is the link to third party jar?
Originally posted by Jeffrey Ye:
i'v installed the jdbc driver to the following:
D:\mysql-connector-java-3.0.17-ga
and i'v set the classpath in property in computer
i'v alse started the mysql server;
the code :
import java.sql.*;
public class Dbconnection {
Connection connector;
public Dbconnect(){
try{
Class.forName("org.gjt.mm.mysql.Driver");
}catch(ClassNotFoundException e){
System.out.println(e.getMessage());
}
try{
connector=DriverManager.getConnection("jdbc:mysql://localhost/test","root","");
}catch(SQLException c){
System.out.println(c.getMessage());
}
}
public static void main(String[]args){
Dbconnection con=new Dbconnection();
}
}.
Originally. | https://coderanch.com/u/109331/Michael-Duffy | CC-MAIN-2020-50 | refinedweb | 598 | 56.96 |
However, I am wondering how exactly a double linked list would be able to be implemented to make moving from room to room a bit easier? It seems like it would make my code look much cleaner and stop my headache. Could someone explain to me how I would use it in concurrence with this code? Or maybe point me in the direction of a good example for double linked lists? Thank you.
import java.util.Scanner; public class HauntedMansion { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub String choice; @SuppressWarnings("resource") Scanner user_in = new Scanner(System.in);//This creates an 'opening' for user input System.out.println("Welcome to the Haunted Mansion. You see an expansive staircase ahead of you and rooms to your left and right."+"\n"+"Type 'upstairs' and hit enter to go up the staircase, 'right' to go to the room on your right,"+"\n"+"or 'left to go to the room on your left."); choice = user_in.nextLine();//this allows the user to input and continue on if(choice.equals("upstairs")){ System.out.println("You are now upstairs."+"\n"+"There is a musky and dark hallway to your left and a door in front of you."+"\n"+"Type 'left' and hit enter to go down the hallway or 'door' to open the door in front of you."); choice = user_in.nextLine(); if(choice.equals("left")){ System.out.println("As you turn to go down the hallway, you notice a dim light floating at the end of the hallway. The rest of the hall is nearly black."+"\n"+" With each step, the floorboard creaks and dust falls from the beams overhead."+"\n"+"The dim light grows brighter the closer you get to it."+"\n"+"'continue' to go on or 'back' to return to the staircase."); choice = user_in.nextLine(); if(choice.equals("continue")){ System.out.println("The glowing light appears to be coming from a cellphone on the floor."+"\n"+"Pick it up? 'Y' or 'N'"); choice = user_in.nextLine(); if(choice.equals("Y")){ System.out.println("There is a new text message on the cellphone."); choice = user_in.nextLine(); }else if(choice.equals("N")){ System.out.println("The cellphone keeps flashing that there is a new message. You notice that the battery is nearly empty. This could serve as a good light if you could find a charge for it. You pick it up anyways. 
Do you read the message?"+"\n"+"'Y' or 'N'"); choice = user_in.nextLine(); } }else if(choice.equals("back")){ System.out.println("As you turn your back to the light you feel a cold breeze flow through the room. A clammy hand wraps itself around your neck..."+"\n"+"GAME OVER"); } } else if(choice.equals("door")){ System.out.println("You reach your hand out and turn the door handle. There is a slight rattle as you push open the door."); } } else if(choice.equals("right")){ System.out.println("You are now in the kitchen."); } else if(choice.equals("left")){ System.out.println("You are now in the dining room"); } else{ System.out.println("That is not a valid answer."); } } } | http://www.dreamincode.net/forums/topic/288303-question-on-implementing-a-double-linked-list-in-a-text-game/page__pid__1681037__st__0 | CC-MAIN-2016-07 | refinedweb | 516 | 68.67 |
Search Type: Posts; User: ManoloComo
Search: Search took 0.02 seconds.
- 19 Feb 2013 1:14 PM
- Replies: 0, Views: 1,059
Hi,
I'm using GXT 2.2.5 and the JAWS screen reader.
My problem is that JAWS doesn't read the combobox list content (the options).
I think I am not appending the store list correctly because the...
- 17 Jan 2013 2:49 PM
- Replies: 6, Views: 2,261
Thanks for your reply, but I still have the problem.
I don't know how to create my own implementation of ValueProvider.
My model:
public class User extends GenericObj implements Serializable {...
- 17 Jan 2013 8:36 AM
I have to create a basic grid:
DataProperties dp = GWT.create(DataProperties.class);
List<ColumnConfig<Data, ?>> ccs = new LinkedList<ColumnConfig<Data, ?>>();
ccs.add(new ColumnConfig<Data,...
CodePlex: Project Hosting for Open Source Software
I'd like my theme to require that the user has Url Alternates enabled.
I tried to add this to Theme.txt:
Dependencies: Orchard.DesignTools.UrlAlternates
But when I tried to activate the theme, I got an error.
It's Designer not Design. ;)
I tried:
Dependencies: Orchard.DesignerTools.UrlAlternates
and I still got the same error.
Using:
Dependencies: Orchard.DesignerTools
almost works, but doesn't require the UrlAlternates feature.
Try just:
Dependencies: UrlAlternates
Looking at the Module.txt in designer tools, there's no namespace for the UrlAlternates feature.
Bingo!
Thanks Pete!
R is an amazing tool to perform advanced statistical analysis and create stunning visualizations. However, data scientists and analytics practitioners do not work in silos, so these analyses have to be copied and emailed to senior managers and partner teams. Cut-copy-paste sounds great, but if it is a daily or periodic task, it is more useful to automate the reports. So in this blogpost, we are going to learn how to do exactly that.
The R-code uses specific library packages to do this:
- RDCOMClient – to connect to Outlook and send emails. In most offices, Outlook is still the defacto email client, so this is fine. However, if you are using Slack or something different it may not work.
- r2excel – To create an excel output file.
The screenshot below shows the final email view:
As seen in the screenshot, the email contains the following:
- Custom subject with current date
- Embedded image
- Attachments – 1 Excel and 1 pdf report
Code Explanation:
The code and supporting input files are available here, under the Projects page under Nov2018. The code has 4 parts:
- Prepare the work space.
- Pull the data from source.
- Cleaning and calculations
- Create pdf.
- Create Excel file.
- Send email.
Prepare the work space
I always set the relative paths and working directories at the very beginning, so it is easier to change paths later. You can replace the link with a shared network drive path as well.
Load library packages and custom functions. My code uses the r2excel package which is not directly available as an R-cran package. So you need to install using devtools using the code below.
It is possible to do something similar using the “xlsx” package, but r2excel is easier.
library(devtools)
install_github("kassambara/r2excel")
library(r2excel)
Some other notes:
- you need the first 2 lines of code only for the first time you installation. From the second time onwards, you only need to load the library.
- r2excel seems to work only with 64-bit installations of R and Rstudio.
- you do need Java installed on your computer. If you see an error about java namespace, then check the path variables. There is a very useful thread on Stackoverflow, so take a look.
- As always, if you see errors Google it and use the Stack Overflow conversations. In 99% of cases, you will find an answer.
Pull the data from source
This is where we connect to an Excel CSV (or text) file. In practice, most people connect to a database of some kind. The R-script I am using connects to a .csv file, but I have added the code to a connect to a SQL database.
That code snippet is commented out, so feel free to substitute your own SQL database links. The code will also work for an Amazon EC2 cluster.
Some points to keep in mind:
- If you are using sqlquery() then please note that if your query has an error then R sadly shows only a standard error message. So test your query on SQL server to ensure that you are not missing anything.
- Some queries do take a long time, if you are pulling from a huge dataset. Also the time taken will be longer in R compared to SQL server direct connection. Using the Sys.time() command before and after the query is helpful to know how long the query took to complete.
- If you are only planning to pull the data randomly, it may make sense to pull from SQL server and store locally. Use the fread() function to read those files.
- If you are using R desktop instead of R-server, the amount of data you can pull may be limited by your system configuration.
- ALWAYS optimize your query. Even if you have unlimited memory and computation power, only pull the data you absolutely need. Otherwise you end up unnecessarily sorting through irrelevant data.
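For reference, the database pull described above can be sketched with the RODBC package. The DSN name, credentials, query, and table below are placeholders, not from the original post:

```r
# Hypothetical example: pull data from a SQL database instead of a CSV.
# "my_dsn", the credentials and the query are placeholders -- substitute your own.
library(RODBC)

ch <- odbcConnect("my_dsn", uid = "user", pwd = "password")

t0 <- Sys.time()                       # time the query, as suggested above
df <- sqlQuery(ch, "SELECT App, Category, Reviews FROM playstore_apps")
print(Sys.time() - t0)                 # how long did the pull take?

odbcClose(ch)                          # always close the connection when done
```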
Cleaning and calculations
For the current data, there are no NAs, so we don’t need to account for those. However, the read.csv() command creates factors, which I personally do not like, as they sometimes cause issues while merging.
Some of the column names have “.” where R converted the space in the names. So we will manually replace those with an underscore using the gsub() function.
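As a sketch, the dot-to-underscore renaming can be done in one line. The file name below is illustrative:

```r
# read.csv() turns spaces in header names into dots, e.g. "Content Rating" -> "Content.Rating".
apps <- read.csv("googleplaystore.csv", stringsAsFactors = FALSE)  # avoid factors

names(apps) <- gsub("\\.", "_", names(apps))  # "Content.Rating" -> "Content_Rating"
```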
We will also rank the apps based on categories of interest, namely:
- Most Popular Apps – by number of Reviews
- Most Popular Apps – by number Downloads and Reviews
- Most Popular Categories – Paid Apps only
- Most popular apps with 1 billion installations.
Create pdf
We are going to use the pdf() function to paste all graphs to a pdf document. Basically what this function does is write the graphs to a file rather than show on the console. So the only thing to remember is that if you are testing graphs or make an incorrect graph, everything will get posted to the pdf until you hit the “dev.off()” function. Sometimes if the graph throws an error you may end up with a blank page, or worse, with a corrupt file that cannot be opened.
Currently, the code I am only printing 2 simple graphs using ggplot() and barplot() functions, but you can include many other plots as well.
Create Excel file.
The Excel is created in the sequence below:
- Specify the filename and create an object of type .xlsx This will create an empty Excel placeholder. It is only complete when you save the Workbook using the saveWorkbook() at the end of the section.
- Use the sheets() function to create different worksheets within the Excel.
- The xlsx.addHeader() adds a bold Header to each sheet which will help readers understand the content on the page. The r2excel package has other functions to add more informative text in smaller (non-header) font as well, if you need to give some context to readers. Obviously, this is optional if you don’t want to add them.
- xlsx.addTable() – this is the crucial function that adds the content to Excel, the main “meat” of what you need to show.
- saveWorkbook() – this function will save the Excel to the folder.
- xlsx.openFile() – this function opens the file so you can view contents. I typically have the script running on automated mode, so when the Excel opens I am notified that the script completed.
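The Excel sequence above, as a compact sketch using r2excel. The sheet name, header text, and data frame are placeholders:

```r
library(r2excel)

wb <- createWorkbook(type = "xlsx")              # step 1: empty in-memory workbook
sheet <- createSheet(wb, sheetName = "Top Apps") # step 2: a worksheet

# step 3: a bold header so readers understand the content on the page
xlsx.addHeader(wb, sheet, value = "Most Popular Apps", level = 1)

# step 4: the main "meat" of what you need to show
xlsx.addTable(wb, sheet, data = head(apps, 20))

saveWorkbook(wb, "app_report.xlsx")              # step 5: write the file to disk
xlsx.openFile("app_report.xlsx")                 # step 6: open it, signalling the script finished
```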
Send email
The email is sent using the following functions:
- OutApp() – creates an Outlook object. As I mentioned earlier, you do need Outlook and need to be signed in for this to work. I use Outlook for work and at home, so I have not explored options for Slack or other email clients.
- outMail[["To"]] – specify the people in the "To" field. You could also read email addresses from a file and pass the values here.
- outMail[["Cc"]] – similar concept, for the Cc field.
- outMail[["Subject"]] – I have used the paste0() function to add the current date to the subject, so recipients know it is the latest report.
- outMail[["HTMLBody"]] – I used the HTML body so that I can embed the image. If you don't know HTML programming, no worries! The code is pretty intuitive, you should be able to follow what I've done. The image basically is an attachment which the HTML code is forcing to be viewed within the body of the email. If you are sending the email to people outside the organization, they may see a small box instead of the image with a cross on the top left (or right) of the box. Usually, when they hover the mouse near the box and right click, it will ask them to download images. You may have seen similar messages in Gmail, along with a link to "show images" or "always show images from this sender". You obviously cannot control what the recipient selects, but testing by sending to yourself first helps smooth out potential aesthetic issues.
- outMail[["Attachments"]] – function to add attachments.
- outMail$Send() – until you run this command, the mail will not be sent. If you are using this in the office, you may get a popup asking you to do one of the following. Most of these will generally go away after the first use, but if they don't, please look up the issue on Stack Overflow or contact your IT support for firewall and other security settings.
- popup to hit “send”
- popup asking you to “classify” the attachments (internal / public/ confidential) Select as appropriate. For me, this selection is usually “internal”
- popup asking you to accept “trust” settings
- popup blocker notifying you to allow backend app to access Outlook.
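Putting the email steps together, here is a hedged sketch with RDCOMClient. The addresses, file paths, and image name are placeholders:

```r
library(RDCOMClient)

OutApp <- COMCreate("Outlook.Application")   # needs Outlook installed and signed in
outMail <- OutApp$CreateItem(0)              # 0 = a plain mail item

outMail[["To"]] <- "[email protected]"
outMail[["Cc"]] <- "[email protected]"
outMail[["Subject"]] <- paste0("App report - ", Sys.Date())  # current date in subject

# HTML body; the <img> tag forces the attached image to display inline via its cid
outMail[["HTMLBody"]] <- paste0(
  "<p>Hi team,</p><p>Please find the latest report attached.</p>",
  "<img src='cid:plot.png'>"
)

outMail[["Attachments"]]$Add("C:/reports/plot.png")        # embedded image
outMail[["Attachments"]]$Add("C:/reports/app_report.xlsx") # Excel report

outMail$Send()                               # nothing is sent until this line runs
```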
That is it – and you are done! You have successfully learned how to send an automated email report.
Some Core Concepts That can take away all your Confusion about React JS
In this blog, I'll try to clear up all your confusion about React JS by walking through what I think are its key concepts. Hope you'll like it. Let's get right into it.
What is React? Is it a framework?
React is a JavaScript library; it's not exactly a framework. Generally, it's used for building user interfaces. It is maintained by Facebook and a community of individual developers and companies. React can be used as a base in the development of single-page or mobile applications.
React Virtual DOM.
React Performance
ReactJS is known to be a great performer. This feature makes it much better than other frameworks out there today. The reason behind this is that it manages a virtual DOM. The DOM is a cross-platform programming API that deals with HTML, XML, or XHTML. The DOM exists entirely in memory. Due to this, when we create a component, we do not write directly to the DOM. Instead, we write virtual components that React turns into the DOM, leading to smoother and faster performance.
How React rendering works
Every setState() call informs React about state changes. Then, React calls the render() method to update the representation of the components in memory (Virtual DOM) and compares it with what’s rendered in the browser. If there are changes, React does the smallest possible update to the DOM.
Child components know that they need to re-render because their props changed.
I often compare that to a diff mechanism in Git. There are two snapshots of component tree that React compares and just swaps what needs to be swapped.
React adopts Javascript
To generate markup dynamically in React you also use JS. Consider the following example:
const [value, setValue] = useState('');
// declare a useState hook to store the user's input

const handleChange = (e) => {
  setValue(e.target.value);
};
// declare a function that catches user input and stores it in state

<select value={value} onChange={handleChange}>
  {someArray.map(element => (
    <option value={element.value}>{element.text}</option>
  ))}
</select>
In the above example, the someArray array is mapped using a map function to a list of <option> elements. The only deviation from normal HTML here is a value on the <select> element that sets the selected attribute for you.
React JSX
JSX stands for JavaScript XML. JSX is an XML/HTML like extension to JavaScript. Consider the following example:
const element = <h1>Hello World!</h1>
As you can see above, JSX is neither JavaScript nor HTML. JSX is an XML syntax extension to JavaScript that also comes with the full power of ES6. Just like HTML, JSX tags can have a tag name, attributes, and children. If an attribute is wrapped in curly braces, the value is a JavaScript expression. Also note that JSX does not use quotes around HTML text strings.
React is Declarative
It means that you don’t implement any processes or procedures to render content to the browser; you just describe what you want to show and React handles it for you.
A non-declarative example that doesn’t use React might look something like this:
const div = document.createElement("div"); const text = document.createTextNode("Hello, world!"); const root = document.getElementById("root"); div.appendChild(text); root.appendChild(div);
With React this is simply:
const App = () => <div>Hello, world!</div>
We just declare that we want a div with some text and React implements the individual steps to create and append elements for us.
State in React
The state of a component is an object that holds some information that may change over the lifetime of the component. We should always try to make our state as simple as possible and minimize the number of stateful components. Let’s create a user component with the message state:
import { useState } from 'react';

const User = () => {
  const [message, setMessage] = useState('Hello World');

  return (
    <div>
      <h1>{message}</h1>
    </div>
  );
};

export default User;

Props in React

Props (short for "properties") are read-only inputs that let you pass data from a parent component into a child component. Props provide the following component functionality:
- Pass custom data to your component.
- Trigger state changes.
- Use via this.props.reactProp inside component’s render() method.
For example, let us create an element with reactProp property:
<Element reactProp={'hello'} />
This reactProp name then becomes a property attached to React’s native props object which originally already exists on all components created using React library.
props.reactProp
React event handling
In HTML, the event name should be in lowercase. Consider the following example:
<button onclick="handleClick()">
Whereas React it follows camelCase convention:
<button onClick={handleClick}>
In HTML, you can return false to prevent default behavior:
<a href="#" onclick="console.log('The link is clicked.'); return false" />
Whereas in React you must call preventDefault() explicitly:
function handleClick(event) {
  event.preventDefault();
  console.log('The link is clicked.');
}
In HTML, you invoke the function by appending (), whereas in React you should not append () after the function name.
using System;
using System.Diagnostics;

/// <summary>
/// EventProviderTraceListener:
///
/// Trace Switch:
///
/// </summary>
class ETWSample
{
    static void Main(string[] args)
    {
        TraceSource myTraceSource = new TraceSource("TraceSource.
Thanks Thottam for sharing this.
I've been experimenting with the new ETW classes as well some time back and was very disappointed with the documentation. But ETW seems to be very powerful, and I'm happy that there is a first step to make it available to the .NET world. Please keep us updated on your experiences!
Cheers,
Volker
You saved a lot my time on this topic.
Your comment on the state of help on the topic is apt.
ETW is indeed very powerful. I saw some performance numbers for ETW and they were very impressive. This is a good first step in the right direction, but needs some simplicity to be sprinkled on it.
Thanks for the pointers…
BTW, your links to MSDN in the post are actually pointing to some OWA site.
Have you tried xperf? xperf has a viewer that analyzes ETL files.
Can we use ETW tracing to capture events from an application running on some other machine in the network?
Mathfx
From Unify Community Wiki
Revision as of 01:39, 18 December 2005
Description
The following snippet provides short functions for floating point numbers. See the usage section for individualized information.
Usage
- Hermite - This method will interpolate while easing in and out at the limits.
- Lerp - Short for 'linearly interpolate', this method is equivalent to Unity's Mathf.Lerp, included for comparison.
C# - Mathfx.cs
using UnityEngine;
using System;

public class Mathfx
{
    // Eased (smoothstep-style) fraction of 'value' between 'start' and 'end',
    // clamped to the [0, 1] range.
    public static float Hermite(float start, float end, float value)
    {
        float tt = Mathf.Min(Mathf.Max((value - start) / (end - start), 0.0f), 1.0f);
        return tt * tt * (3.0f - 2.0f * tt);
    }

    // Linear interpolation, equivalent to Unity's Mathf.Lerp.
    public static float Lerp(float start, float end, float value)
    {
        return ((1.0f - value) * start) + (value * end);
    }
}
Setup
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Introduction
Masking is a way to tell sequence-processing layers that certain timesteps in an input are missing, and thus should be skipped when processing the data.
Padding is a special form of masking where the masked steps are at the start or the end of a sequence. Keras provides a utility function to truncate and pad Python lists to a common length:
tf.keras.preprocessing.sequence.pad_sequences.
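As a side note, the padding-plus-mask idea itself needs nothing from TensorFlow. A pure-Python sketch (the function name is illustrative, not a Keras API) looks like this:

```python
def pad_and_mask(sequences, pad_value=0):
    """Post-pad variable-length lists to a common length and build a boolean mask.

    True marks real timesteps, False marks padding (same convention as Keras).
    """
    maxlen = max(len(seq) for seq in sequences)
    padded = [seq + [pad_value] * (maxlen - len(seq)) for seq in sequences]
    mask = [[True] * len(seq) + [False] * (maxlen - len(seq)) for seq in sequences]
    return padded, mask

padded, mask = pad_and_mask([[711, 632, 71], [73, 8, 3215, 55, 927]])
print(padded)  # [[711, 632, 71, 0, 0], [73, 8, 3215, 55, 927]]
print(mask)    # [[True, True, True, False, False], [True, True, True, True, True]]
```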
raw_inputs = [
    [711, 632, 71],
    [73, 8, 3215, 55, 927],
    [83, 91, 1, 645, 1253, 927],
]

# By default this pads with 0s (configurable via the "value" parameter).
# "post" padding (at the end) is recommended when working with RNN layers.
padded_inputs = tf.keras.preprocessing.sequence.pad_sequences(raw_inputs, padding="post")
print(padded_inputs)
[[ 711  632   71    0    0    0]
 [  73    8 3215   55  927    0]
 [  83   91    1  645 1253  927]]

Boolean masks are produced by mask-generating layers: an Embedding layer configured with mask_zero=True, or a Masking layer.

embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)

masking_layer = layers.Masking()
unmasked_embedding = tf.cast(
    tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)

tf.Tensor(
[[ True  True  True False False False]
 [ True  True  True  True  True False]
 [ True  True  True  True  True  True]], shape=(3, 6), dtype=bool)
tf.Tensor(
[[ True  True  True False False False]
 [ True  True  True  True  True False]
 [ True  True  True  True  True  True]], shape=(3, 6), dtype=bool)

Layers that can handle masks (such as the LSTM layer) have a mask argument in their __call__ method, and layers that produce a mask (e.g. Embedding) expose a compute_mask(input, previous_mask) method. Thus, you can pass the output of the compute_mask() method of a mask-producing layer to the __call__ method of a mask-consuming layer.
Here is an example of a
TemporalSplit layer that needs to modify the current mask.
class TemporalSplit(keras.layers.Layer):
    """Split the input tensor into 2 tensors along the time dimension."""

    def call(self, inputs):
        # Expect the input to be 3D and the mask to be 2D; split the input
        # tensor into 2 subtensors along the time axis (axis 1).
        return tf.split(inputs, 2, axis=1)

    def compute_mask(self, inputs, mask=None):
        # Also split the mask into 2 if it is present.
        if mask is None:
            return None
        return tf.split(mask, 2, axis=1)

first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)

tf.Tensor(
[[ True  True  True]
 [ True  True  True]
 [ True  True  True]], shape=(3, 3), dtype=bool)
tf.Tensor(
[[False False False]
 [ True  True False]
 [ True  True  True]], shape=(3, 3), dtype=bool)
Here is another example of a
CustomEmbedding layer that is capable of generating a
mask from input values:
class CustomEmbedding(keras.layers.Layer):
    def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
        super(CustomEmbedding, self).__init__(**kwargs)
        self.input_dim = input_dim
        self.output_dim = output_dim
        self.mask_zero = mask_zero

    def build(self, input_shape):
        self.embeddings = self.add_weight(
            shape=(self.input_dim, self.output_dim),
            initializer="random_normal",
            dtype="float32",
        )

    def call(self, inputs):
        return tf.nn.embedding_lookup(self.embeddings, inputs)

    def compute_mask(self, inputs, mask=None):
        if not self.mask_zero:
            return None
        return tf.not_equal(inputs, 0)

That is all you need to know about padding & masking in Keras. To recap:

- When using layers in a standalone way, you can pass the mask arguments to layers manually.
- You can easily write layers that modify the current mask, that generate a new mask, or that consume the mask associated with the inputs.
C# is a simple and powerful programming language which is used to develop many robust applications. It is pronounced as C Sharp.
C# is supported by a large number of frameworks. C# can be used to develop a lot of web-based applications with the .NET Framework which is a software development platform developed by Microsoft. There are other frameworks like Mono or Xamarin which can be used to develop mobile apps, games or iOS and Android applications.
There are many features of C# which have made it one of the most widely used programming languages. It is an object-oriented language which is compatible with other .NET languages. It is type safe because C# code can access only the parts of memory it has permission to access and execute, thus increasing the safety of the program.
Applications of C#
C# is used to develop a wide range of applications. Following are some of the applications developed using C# :
- Windows Client Applications → It is used to build Windows client applications using Windows Forms and WPF, which are application templates provided by Microsoft Visual Studio.
- Web Applications → It is used to build modern web applications using ASP.NET combined with JavaScript and other libraries and APIs. ASP.NET is one of the most widely used technology for building web applications and is supported by templates developed by Microsoft Visual Studio.
- Console Applications → It is used to build applications that run in a command-line interface.
- Azure Cloud Applications → It is used to build cloud-based applications for Windows Azure, which is an operating system of Microsoft for cloud computing and hosting.
- iOS and Android Mobile Apps → It is used for cross-platform native mobile app development via Xamarin, which can be used to create platform-specific UI code layer.
- Game Development → It is a quite popular language to build games because it is fast and has much control over memory management. It has a rich library for designing graphics. C# is used in many game engines like Unity Engine and Unreal.
- Internet of Things Devices → It is used for building IoT devices because it doesn't need much processing power.
- Artificial Intelligence → C# has extensive libraries for basic deep learning and embedded systems. As a result, it is used in the fields of Computer Vision and Artificial Intelligence.
How to start C#?
In this entire C# tutorial, you will be learning about writing C# programs. But what after that?
After writing any program, we need to compile and run it to see the output of the program.
So, what is meant by compiling?
When we write our program in C# language, it needs to be converted to machine language (which is binary language consisting of 0s and 1s) so that computer can understand and execute it. This conversion is known as compiling a program. We compile a code with the help of a compiler.
Integrated Development Environment (IDE)
To write and compile our C# program, we have been provided with many IDEs.
An IDE consists of a Text Editor as well as a Compiler. We type our program in the text editor which is then compiled by the compiler.
Text Editor
We write our program in a text editor. The file containing our program is called source file and is saved with a .cs extension.
C# Compiler
After saving our program in a file with the .cs extension, we need to compile it to convert it into machine language that the computer can understand.
Now let's see how to compile and execute a C# program on different operating systems. We are only going to show how to run a C# program, the working of the code is explained in the next chapter.
Run C# on Windows
There are many IDEs available for editing and compiling C# programs in Windows like Microsoft Visual Studio and NetBeans. We will see how to run a C# code using Visual Studio and Command Prompt.
Visual Studio
We can edit and compile programs using Visual Studio which allows us to write, compile and run our program. Let's see how to do it.
Before learning to write a program in Visual Studio, first, make sure it is installed in your computer. If not, then download and install it from here.
1. On opening Visual Studio, you will get a window. Click on File → New → Project.
2. The New Project dialog box will appear. Expand Installed, then expand Visual C#, then select Windows Desktop, and then select Console Application.
3. Give the project name in the Name text box, and then click on OK button.
4. A file named Program.cs will get opened in the Code Editor. Replace its content with the C# program given below.
using System; class Program { static void Main(string[] args) { Console.WriteLine("Hello World"); } }
5. To run the project, click on Debug → Start Without Debugging or press the
Ctrl+F5 keys.
6. A Command Prompt window will appear with "Hello World" written.
If you want to use the debugger, then put the breakpoint on the last line as shown below. Then to run the project, click on the Start button or press the F5 key.
Command Prompt
1. Write your C# program (code given below) in a text editor like Notepad and save it with a filename having
.cs extension. In our case, we named it as hello.cs.
using System; class HelloWorld { static void Main(string[] args) { Console.WriteLine("Hello World"); } }
2. Open the Command Prompt window (search for cmd).
3. To check if the path of the C# compiler is set as Environment Variable, type
csc and press Enter. You should get something as shown below.
If you are getting this message - 'csc' is not recognized as an internal or external command, operable program or batch file - then you need to add the path of the C# compiler as an Environment Variable.
4. To set the path as Environment Variable, right click on My Computer and select Properties.
5. Go to the Advanced System Settings tab.
6. Click on the Environment Variables button.
7. In the System Variables box, select Path and click on the Edit button.
8. An Edit Environment variable window will appear. Let's assume that csc.exe is located in C:\Windows\Microsoft.NET\Framework64\v3.5 directory. This directory is the path to the compiler.
Click on the New button and add the compiler path in the text box created as shown below. Then click on Ok.
9. Now open a new Command Prompt window. Use the command cd followed by a directory name to change your working directory. For example, if you are operating in C:\Users\John\Project and want to get to C:\Users\John\Project\prog, then you need to type cd prog and press Enter. (This new directory should contain your previously created C# file (hello.cs in our case)).
10. Once you are in the correct directory in which you have your program, then you can compile and run your program.
To compile, type
csc filename.cs and press Enter. Here, our filename is hello.cs, so we will write
csc hello.cs. If it shows any error, then try to remove that error from your program and then again save and compile your program.
11. Once the program is compiled, run the program by typing
filename (
hello in our case) and then pressing Enter. We will get the output as shown below.
Run C# on Linux
For Linux, you can write your C# program in various text editors like Vim (or vi), Sublime, Atom, etc. To compile and run our C# program in Linux, we will use Mono which is an open-source implementation of the .NET framework.
So let's see how to create and run a C# program on Linux.
1. Open Terminal (
ctrl+alt+T ).
2. Type the command
sudo apt install mono-complete to install mono-complete.
3. Open a text editor (we are going to use Gedit) and save the Hello World program shown earlier with a .cs extension. We are going to name our file hello.cs, so we will create and open the file by typing gedit hello.cs in the terminal.

4. To compile the program, type mcs hello.cs and press Enter. Mono's compiler will produce an executable named hello.exe.

5. To run the program, type mono hello.exe and press Enter. You will see Hello World printed in the terminal.
Run C# on Mac
In Mac, you can use any text editor of your choice, we are going to use Atom. To compile and run our C# program in Mac, we will use Mono which is an open-source implementation of the .NET framework.
So let's see how to create and run a C# program on Mac.
1. Download and install Mono.
2. Open Terminal
3. Open a text editor (we are going to use Atom) and save the Hello World program shown earlier with a .cs extension. We are going to name our file hello.cs, so we will open the file by typing atom hello.cs in Terminal.

4. To compile the program, type mcs hello.cs in Terminal and press Enter. This produces an executable named hello.exe.

5. To run the program, type mono hello.exe and press Enter. You will see Hello World printed in Terminal.
| https://www.codesdope.com/course/c-sharp-introduction/ | CC-MAIN-2021-25 | refinedweb | 1,463 | 66.64 |
Portable !!! Not only Linux/Unix but also on OpenVMS, Mac, Windows ...
In principle I like to do portable programming. That is, your program not only runs on Linux/Unix but also on OpenVMS, Mac, Windows ... I mention OpenVMS because of my background. In scientific applications OpenVMS is still strong.
But how to do this ?
For the GUI I use Qt. Although many people may argue that other really free GUI libraries are available, I have seen no library supporting all the mentioned OSs. With Qt it is possible.
But what about the low-level functions?
Up to now I have tried to wrap this functionality in my own library. If there already is a library for this purpose, please tell me.
Would it be interesting to set up a project for this purpose (eventually LGPL)? Help welcome.
I know, I didn't use STL, didn't use namespaces, used printf() in C++ and all these bad things!!! Maybe I'm an old C hacker who can't be helped at all. My idea is NOT to discuss design or licensing issues. If someone is really interested in such a project, we can talk about it.
Differing in breadth, other notable attempts at portable libraries include:
I know of two highly portable "support" libraries that include what you need:
Both are fairly mature, and have been ported and tested on more platforms that you could shake a stick at. Really...
There are also tons of other libraries that work well on a limited range of platforms (which generally means Windows + Unix), like:
One big problem in my opinion with any large and highly portable projects is in their build systems. There are two common cases here:
there is one "big" Makefile for the developer's favorite platform and compilers. Since these are really atrocious things to write, they're generally generated with other tools like, *choke* Automake. These are maintained pretty frequently
there are also a few "small" Makefiles or project files for specific platforms and compilers. However, they need to be updated "by hand" by volunteers, which is a pain in the ass. As a consequence, they're very rarely updated and the library is not correctly tested or even compiled on platforms that differ from the "favorite" one too much. And nobody's going to volunteer to port your library to something like OS/2 or Netware since rewriting the build control files from scratch is too hard.
GLib is a good example of a very good general purpose library that is hard to compile on anything besides Unix and Windows, and that's not because of the sources, but the build system.
There are many ways to avoid these two pitfalls, and all of them require you to get rid of "make" for something that allows you to describe your project in higher-level terms in control files while not being specific to a given project (e.g. Jam, Boost.Jam, Aegis which includes Cook, and many others).
I would suggest you to carefully choose your build system before anything else if you really want to write highly portable code. Otherwise, the chances that it will be easily ported to different architectures is ridiculously low, unless you've got a set of dedicated system experts who are ready to do the work for you consistently, which I doubt :-)
Why not use Qt (or the TinyQ subset) for the low-level stuff as well? There is no need to use a second library.
Right now Qt provides abstractions for TCP (QSocket), sockets in general (QSocketDevice), threads (QThread), mutexes (QMutex), time (QTime/QDate) and many other things (QNetworkProtocol, QFile, QXml*, QDom*, QLibrary...). You can also find many missing pieces, like ini files, in kdelibs. And at least I would be interested in having a KDE-like library that adds server-specific stuff to Qt, as I am already working on server code with Qt.
While not broken out into really useful libraries, I know that Tcl (maybe Python too?) has pretty good cross platform support in its C library. Might be worth a look, as it's also under a very liberal license.
There are many GUI frameworks that are as good as Qt in terms of portability. See
for details.
There are POSIX libs for all of the above platforms, including OpenVMS. As mentioned above, just use Qt.
ACE looks good. Also Qt offers some portable functions.
But since I have gone this far with my library I will stick to it for my own projects.
I already have the functionality I want and I have full control over the library.
In Qt for example, there are only posix calls, no SYS$ or LIB$* calls.
This will limit the use of Qt at least under OpenVMS.
But may be my idea is not good enough for a new project.
I will just use it as a supplement for
OK, you said you don't want license discussions. But you may still want to know this. Your license statements say that
* This program is free software; you can redistribute it and/or modify * * it under the terms of the GNU General Public License as published by * * the Free Software Foundation; either version 2 of the License, or * * (at your option) any later version. * * * * I make one exception to the above statement: * * You can use this software for commercial purposes * * if you purchase a license * * You will not be allowed to make changes to the softwareIt is confusing. Maybe you should say "this sofware is licensed under the GNU General Public License..." (standard text)
(For these people who cannot follow the GPL) Alternatively you can purchase the commercial license which allows modifications for use in proprietary software...
I said: eventually LGPL !!!
Of course you found what you are showing. But that material should be a base for a discussion about the library. That license statement is obsolete.
You get native widgets on platforms like Win32 (unlike Qt), and as far as utility functions it supports about half of your bullet points.
The documentation is great and the project in general gives off an aura of being very thoughtfully designed and well! | http://www.advogato.org/article/626.html | CC-MAIN-2014-15 | refinedweb | 1,040 | 71.55 |
in reply to
Re^2: Functions (Return V Print)
in thread Functions (Return V Print)
If you're returning title are you assigning it to a variable on it's return?
#!/opt/perl/bin/perl
use strict;
use warnings;
my $title="name";
print "$title\n";
$title = &function64("name");
print "$title\n";
sub function64 {
my ($title) = @_;
$title="$title " x 3;
return ($title);
}
[download]
Ignore me, you are, just re-read your original post.
Errr, ignore my ignore, you've not shown enough code. Could you show us where you're calling the function?
Updated: 20110204 11:53 GMT
Perl Cookbook
How to Cook Everything
The Anarchist Cookbook
Creative Accounting Exposed
To Serve Man
Cooking for Geeks
Star Trek Cooking Manual
Manifold Destiny
Other
Results (145 votes),
past polls | http://www.perlmonks.org/index.pl?node_id=886187 | CC-MAIN-2014-41 | refinedweb | 128 | 57 |
Forked from.
Contents
Using it
In your requirements.txt add:
-e git://github.com/lsst-sqre/lsst_dd_rtd_theme.git@master#egg=lsst_dd_rtd_theme
In your conf.py file:
import lsst_dd_rtd_theme html_theme = "lsst_dd_rtd_theme" html_theme_path = [lsst_dd_rtd_theme.get_html_theme_path()]
Changelog will revert to static positioning. To disable the sticky nav altogether change the setting in conf.py.
Contributing or modifying the theme
The lsst_dd dependencies,
pip install sphinx
- Install sass
gem install sass
- Install node, bower and grunt.
//:
grunt
This default task will do the following very cool things that make it worth the trouble.
- It’ll install and update any bower dependencies.
- It’ll run sphinx and build new docs.
- It’ll watch for changes to the sass files and build css from the changes.
- It’ll rebuild the sphinx docs anytime it notices a change to .rst, .html, .js or .css files.
Download Files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages. | https://pypi.org/project/lsst-dd-rtd-theme/ | CC-MAIN-2017-39 | refinedweb | 159 | 69.68 |
Marked as answer by G3N3RAL PALLAS Monday, October 25, 2010 12:13 AM Unmarked as answer by G3N3RAL PALLAS Monday, October 25, 2010 12:13 AM Sunday, October 24, 2010 10:07 PM Reply All you need to do is to add a Timer and set its Enabled Property to True: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; Bu videoyu Daha Sonra İzle oynatma listesine eklemek için oturum açın Ekle Oynatma listeleri yükleniyor... On the otherhand, its also nice of CMS to provide the answer here so lazy developers dont have to search all over Google to find the same answer. :o) –BerggreenDK Apr Check This Out
Dawisko1 537 görüntüleme 20:52 Vertical ProgressBar & Coloring it With c# VisualStudio2013 - Süre: 8:16. Bu videoyu bir oynatma listesine eklemek için oturum açın. To make it work - create a console app and add RX-Main NuGet package. What about disposal of the management objects? Clicking Here
That's enough time to reduce your DB calls by 98% (vs querying on each measurement) and still be responsive to changes. –RobH Jan 29 '16 at 8:51 add a comment| up Tried running as administrator but it doesn't work.I have spent hours searching the Web for a fix, but cannot find anything conclusive. Are people of Nordic Nations "happier, healthier" with "a higher standard of living overall than Americans"? I think the better solution here is just to use an alternative method in WMI.
Now I just need to figure out how to remove the decimal places. video tube 1.427 görüntüleme 8:16 (In English) How to Access Any Performance Counter, Category in Net, C#, VB, etc.. - Süre: 9:47. Movie about a girl who had another different life when she dreamed Should we eliminate local variables if we can? C# Get Ram Usage Yükleniyor...
Here are some fake implementations to make test work: class Notifier : INotifier { public void Notify(bool critical) => Console.WriteLine(critical); } class CpuCounter : IPerformanceCounter { public double Value => Math.Sin(DateTime.Now.Millisecond Now let's define some infrastructure: interface IIntervals { TimeSpan Measurement { get; } // how often to measure TimeSpan Notification { get; } // notification interval for high load TimeSpan CriticalNotification The following WMI code snippet I found can be used to get the CPU core usage values: //Get CPU usage values using a WMI query ManagementObjectSearcher searcher = new ManagementObjectSearcher("select * Some time later...
Can we have a vb.net code for this?Thank you, June 9, 2016 at 5:30 AM Post a Comment Newer Post Older Post Home About Me Allen Conway I am a Magenic C# Get Cpu Usage Remote Machine See also: Stack Overflow question checklist" – Andrew BarberIf this question can be reworded to fit the rules in the help center, please edit the question. 1 stackoverflow.com/questions/4679962/… –SwDevMan81 Oct Marked as answer by G3N3RAL PALLAS Monday, October 25, 2010 12:14 AM Sunday, October 24, 2010 9:49 PM Reply | Quote 0 Sign in to vote You could also try to This API version does not include the CPU usage of threads (the code is very similar to that of process code); I was too lazy to write it.
You have 1200 characters left. Now Javascript is disabled. 0 Comments (click to add your comment) Comment and Contribute Your name/nickname Your email WebSite Subject (Maximum characters: 1200). C# Get Cpu Usage Of Process Düşüncelerinizi paylaşmak için oturum açın. C# Get Cpu Usage Of Current Process Marked as answer by G3N3RAL PALLAS Monday, October 25, 2010 12:13 AM Sunday, October 24, 2010 10:43 PM Reply | Quote All replies 1 Sign in to vote You can use
This looks cool in theory, unfortunately the code doesn't work. his comment is here otherwise you just get 100% * number of cores. –steve cook Mar 24 '14 at 2:17 add a comment| up vote 12 down vote It's OK, I got it! c# cpu-usage system.diagnostics share|improve this question edited Oct 28 '15 at 9:42 Wai Ha Lee 4,207102639 asked Oct 28 '15 at 9:34 Buda Gavril 8,3222276122 It is fluctuating on I have thought about setting a timer and calling SettingsService.GetSettings() after a set amount of time but this seems like a smelly workaround. C# Get Total Cpu Usage
Both of the executables and their source code throw 'Access is Denied' error. Sign In·ViewThread·Permalink Re: Anybody has the complete code? Perfect solution for what I needed.Best,Dax December 16, 2015 at 9:01 AM Anonymous said... Differential high voltage measurement using a transformer Encryption - How to claim authorship anonymously?
This might lead you to think that inserting cpuCounter.NextValue() before the return line would fix the problem however this is not the case. C# Performancecounter Cpu Usage How did Adebisi make his hat hanging on his head? How did Adebisi make his hat hanging on his head?
Well, I don't know why for some reason Microsoft decided not to allow gathering any information about the system idle process (which was allowed in .NET 1.1). Arun Yadav 3.477 görüntüleme 9:47 CPU Temperature check Visual Basic - Süre: 12:03. share|improve this answer answered Jan 27 '16 at 11:35 Dmitry Nogin 2,192118 Hm, I've never used Reactive Extensions. C# Performancecounter Process Cpu Usage Not anyone I'm guessing, which mean this code is a part of a page reload or top level refresh in your app that is occurring often.
public class Form1 { int totalHits = 0; public object getCPUCounter() { PerformanceCounter cpuCounter = new PerformanceCounter(); cpuCounter.CategoryName = "Processor"; cpuCounter.CounterName = "% Processor Time"; cpuCounter.InstanceName = "_Total"; // will always memcache and then cache the settings for 1 minute. I am able to get the free RAM but the CPU usage it's not correct, compared with the value from task manager. navigate here So 40% utilization means work 40ms and sleep 60ms if (watch.ElapsedMilliseconds > percentage) { Thread.Sleep(100 - percentage); watch.Reset(); watch.Start(); } } } public static void TimerElapsed(object source, ElapsedEventArgs e) { float
This function gets us four parameters CreationTime, ExitTime, KernelTime and UserTime. It appears the 1 second value was not arbitrary either and is required in order for the reading to refresh the value. I need it to be able to work on computers with single core processor and with multi-core procesors. Full code is given below using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Text; using System.Windows.Forms; using System.Diagnostics; namespace WindowsApplication5 { public partial class Form1 : Form {
Hi folks,look into stackoverflow were very close to the correct solution. Parking lot supervisor Install Homebrew package with all available options Can I make a woman who took a picture of me in a pub give the image to me and delete Dawisko1 10.841 görüntüleme 3:31 Circular Progressbar using C# - Süre: 22:21. | http://juicecoms.com/cpu-usage/c-get-cpu-usage-of-process.html | CC-MAIN-2017-51 | refinedweb | 1,162 | 56.76 |
allegro 4.0.5+5.2.0
D binding to the Allegro5 game development library
To use this package, run the following command in your project's root directory:
Manual usage
Put the following dependency into your project's dependences section:
This is a D binding to the Allegro 5 library:
Should work fine with Allegro 5.2. Pretty much all of the cross-platform functions are bound. Non cross-platform functions are absent, but they will be added eventually.
Tested with LDC and DMD compilers on Linux 64 bit and DMD on Windows 32 bit (XP). See general notes below to see how to use this binding in your D program.
Installation
You have two options here. You can copy all the modules into your project, and just use them like that. Alternatively, you can compile the binding into a static library for convenience:
Linux:
If you have LDC:
./buildlibldc.sh ./buildexampleldc.sh
If you have DMD:
./buildlibdmd.sh ./buildexampledmd.sh
Windows:
If you have MinGW 4.5 (or earlier) compiled libraries:
buildlibdmd.bat -version=ALLEGROMINGW4_5
If you have MSVC/DMC/MinGW 4.6+ compiled DLLs:
buildlibdmd.bat
Compiling the example
Linux:
If you have LDC:
./buildexampleldc.sh
If you have DMD:
./buildexampledmd.sh
Try the example by running:
./example
It should run and not crash.
Windows:
buildexampledmd.bat
Try the example by running:
example.exe
It should run and not crash.
Unstable API
If you want to use the unstable API, set the
ALLEGRO_UNSTABLE version.
General Notes
Using Allegro in a D program is a little bit different than using it in a C/C++ program. Specifically, you must run your code through the alrunallegro function like so:
import allegro5.allegro; void main() { al_run_allegro( { al_init() //your code goes here return 0; }); }
alrunallegro will block until your code returns. On some platforms it will run your code in a different thread (you generally don't need to worry about this). This is done for cross-platform (specifically OSX) compatibility. Note that alinit/alinstallsystem should be called inside the delegate you pass to alrun_allegro.
This binding is equipped with pragma(lib) constructs that should allow you to not have to specify which Allegro libraries to link to. It only works this way in D2, however. It expects libraries to be named the same way they are named by the build process or generated by the import library generator script (Windows only). Monolith libraries are not supported. This mechanism can be overriden by using the version ALLEGRONOPRAGMA_LIB.
The module allegro5.dutil contains some utility functions for converting between D strings and ALLEGROUSTR.
Windows Notes
- You will need to generate the import libraries for Allegro's dll files using implib.exe that you can download here:
You can then use the createimportlibs.bat to create the import libraries. The script expects the Allegro DLLs to be in the same directory as the script. It also expects the implib to be callable from the command line (place it in PATH or into the directory alongside the script). E.g. if your directory had these DLL's in it:
allegro-5.2.dll allegro_primitives-5.2.dll
Running the script will produce these import libraries:
allegro.lib allegro_primitives.lib
If you're using dmd then when compiling you must pass the following linker flags to it for things to work properly:
-L/SUBSYSTEM:WINDOWS:4.0
If you want console output (this will also spawn the console whenever you run the program outside the command prompt) then use these flags:
-L/SUBSYSTEM:CONSOLE:4.0
- Note that if you are using MinGW 4.5 (or earlier) or DMC compiled DLLs you will need to set the version to ALLEGROMINGW4_5 when compiling your own programs as well as when building the library.
- You can obtain pre-built libraries at
- Registered by SiegeLord
- 4.0.5+5.2.0 released 7 months ago
- SiegeLord/DAllegro5
- github.com/SiegeLord/DAllegro5
- Zlib
- Authors:
-
- Dependencies:
- none
- Versions:
- Show all 16 versions
- Download Stats:
0 downloads today
0 downloads this week
7 downloads this month
1450 downloads total
- Score:
- 2.1
- Short URL:
- allegro.dub.pm | https://code.dlang.org/packages/allegro | CC-MAIN-2022-33 | refinedweb | 686 | 58.38 |
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
On 19.12.19 18:37, Michał Górny wrote:
> We have a better alternative that lets us limit the impact on the
> users. Why not use it?

Which one? The CMake bootstrap copy? The adding to stage3 one?
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hey!

On 19.12.19 17:03, Michał Górny wrote:
>> B) Introduce USE flag "system-expat" to CMake similar to existing
>> flag "system-jsoncpp", have it off by default, keep reminding
>> CMake upstream to update their bundle
>>
>> [..]
>
> It violates the policy on bundled libraries.

Same for the dev-util/cmake-bootstrap approach, right?

> What's worse, the awful USE flags solution means that most of the
> Gentoo devs end up using bundled libraries just because people are
> manually required to figure out what to do in order to disable them.

I didn't say that it's perfect :)  It's the same approach that we have
with the system-jsoncpp USE flag already, so that was considered good
enough at some point in the past. I guess we want the same for Expat
and jsoncpp?

Which alternative do you see as better than a new flag system-expat?

Best

Sebastian
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hey!

Thanks everyone for your thoughts so far!

From what I heard, these two options seem realistic to me:

A) Ask the KDE team for help with teaming up on a new package
   dev-util/cmake-bootstrap, keep it in sync with dev-util/cmake,
   make sure both packages co-exist with full disjoint operation,
   i.e. zero file conflicts + zero cross package file usage (tricky?).

B) Introduce USE flag "system-expat" to CMake similar to existing
   flag "system-jsoncpp", have it off by default, keep reminding
   CMake upstream to update their bundle.

I favor (B) by more than just a bit. Does anyone have strong concerns
against moving in the dev-util/cmake[-system-expat] (B) direction?
Is it acceptable if I make those changes to the CMake ebuild myself?

Thanks again

Sebastian
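[Editorial sketch, not part of the original mail] Option (B) would boil down to an ebuild fragment modeled on the existing system-jsoncpp handling. The flag names and the mapping onto CMake's bootstrap `--system-<lib>`/`--no-system-<lib>` switches below are assumptions for illustration, not committed ebuild code:

```shell
# Sketch of option (B), modeled on the existing "system-jsoncpp" flag in
# dev-util/cmake.  Flag and switch names are assumptions, not final code.
IUSE="system-expat system-jsoncpp"

# CMake's bootstrap accepts --system-<lib>/--no-system-<lib> switches;
# an ebuild would map each USE flag onto one of them, roughly like this:
cmake_system_arg() {
    local flag=$1
    if use_enabled "${flag}"; then
        printf -- '--system-%s\n' "${flag#system-}"
    else
        printf -- '--no-system-%s\n' "${flag#system-}"
    fi
}

# Stand-in for portage's "use" helper so this sketch runs outside ebuild
# context; here only system-jsoncpp is pretend-enabled:
use_enabled() { [ "$1" = "system-jsoncpp" ]; }

cmake_system_arg system-expat    # off by default: keep the bundled copy
cmake_system_arg system-jsoncpp
```

With system-expat off by default, users keep the bundled (hopefully current) copy and the expat <-> CMake cycle never forms; flipping the flag on restores the system library.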
Re: [gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
On 19.12.19 14:32, Rolf Eike Beer wrote:
> These things _are_ updated regularly

To be fair they update because I keep opening update requests:
[gentoo-dev] Needs ideas: Upcoming circular dependency: expat <> CMake
Hi all,

I noticed that dev-util/cmake depends on dev-libs/expat and that
libexpat upstream (where I'm involved) is in the process of dropping
GNU Autotools altogether in favor of CMake in the near future,
potentially the next release (without any known target release date).

CMake bundles a (previously outdated and vulnerable) copy of expat, so
I'm not sure if re-activating that bundle — say with a new use flag
"system-expat" — would be a good thing to resort to for breaking the
cycle, with regard to security in particular.

Do you have any ideas how to avoid a bad circular dependency issue for
our users in the future? Are you aware of similar problems and
solutions from the past?

Thanks and best

Sebastian
[gentoo-dev] Packages up for grabs: Gimp and related (gegl, babl, mypaint)
Hello,

I need to admit that I don't have enough time to keep up with
maintaining the Gimp-related packages well enough in Gentoo. The latest
Babl and Gegl ebuilds are using Meson by now, Gimp is up next, and
2.10.14 is just out the door.

These packages are up for grabs now:

media-gfx/gimp
media-libs/babl
media-libs/gegl
media-gfx/mypaint (not mine but maintainer-needed and related)
media-gfx/mypaint-brushes
media-libs/libmypaint

Best

Sebastian
[gentoo-dev] Last rites: sys-fs/pytagsfs
# Sebastian Pipping (22 May 2019)
# Masked for removal in 30 days (bug #686562)
# Unfixed bug, dead upstream, not relevant enough
sys-fs/pytagsfs
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi Alec,

On 27.01.2018 22:58, Alec Warner wrote:
>> I noticed that we have 7 packages on Fedora wallpapers with names that
>> only explain themselves to Fedora insiders:
>
> So traditionally we follow upstream package naming. If we aim to
> deviate, I'd prefer we have strong reasons for it.

Good point.

>>.
>
> Why not just make x11-themes/fedora-backgrounds, a metapackage that
> includes all of the packages?

With one file and use flags for each version or with one ebuild file
per slot?

Fedora 21 was the last release with a release name so if we package
22+ later, their ebuilds would be non-meta in nature. I'm not sure how
to blend that into the use-flag version (yet for a meta package all
these files seem overkill too).

Do you have some third option in mind?

Best

Sebastian
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
On 27.01.2018 19:06, Sebastian Pipping wrote:
> 11-solar
> 12-constantine
> 13-goddard
> 14-laughlin
> 15-lovelock
> 16-verne

Correction:

10-solar
11-leonidas
12-constantine
13-goddard
14-laughlin
15-lovelock
16-verne
Re: [gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi,

On 27.01.2018 17:32, Michael Orlitzky wrote:
> If you do merge them, then it might be better to use flags for the
> different sub-packages rather than slots. There's no place to describe
> what a slot is for, but having a local USE=solar with a corresponding
> description in metadata.xml is (relatively) discoverable.

Use flags work well if we make a single ebuild offering to install one
to all of these. That would be natural if the goddard/13 source rpm
included files of constantine/12 and solar/11 as well and so on, but
it doesn't seem to. I would rather go with one ebuild per major
release number of Fedora: needs less use flag configuration as well.

About slot names, if "12" is not good enough we could use

11-solar
12-constantine
13-goddard
14-laughlin
15-lovelock
16-verne

or so for SLOT to have a mapping to names?

Best

Sebastian
Re: [gentoo-dev] time to retire
Stefan, thanks for your work on Gentoo! All the best Sebastian
[gentoo-dev] Merge 7 Fedora wallpapers packages to single one with slots?
Hi!

I noticed that we have 7 packages on Fedora wallpapers with names that
only explain themselves to Fedora insiders:

# eix background | fgrep -B3 Fedora
* x11-themes/constantine-backgrounds
     Available versions:  12.1.1.4-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/goddard-backgrounds
     Available versions:  13.0.0.3-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/laughlin-backgrounds
     Available versions:  14.1.0.3-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/leonidas-backgrounds
     Available versions:  11.0.0.2-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/lovelock-backgrounds
     Available versions:  14.91.1.1-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/solar-backgrounds
     Available versions:  0.92.0.5-r1
     Homepage:
     Description:         Fedora official background artwork
--
* x11-themes/verne-backgrounds
     Available versions:  (~)15.91.0.1-r1
     Homepage:
     Description:         Fedora official background artwork

Any objections?

Best

Sebastian
Re: [gentoo-dev] [RFC] Addition of a new field to metadata.xml
On 01.06.2017 23:18, Jonas Stein wrote:
> 2. Specification
>
> A space separated list of the corresponding debian packages should be
> written in the field
>
> It should be NONE, if debian has no corresponding package.
> UNSET or no field, if the creator of the ebuild did not set the field
> (yet).

Please pick NONE or require absence eventually, but not multiple
options. Else we're asking for inconsistent data from the beginning.

> example:
> app-arch/tar/metadata.xml
> tar
>
> app-office/libreoffice-bin/metadata.xml
> libreoffice libreoffice-base libreoffice-base
> libreoffice-dev libreoffice-dmaths libreoffice-draw
> libreoffice-evolution libreoffice-impress

Since the difference between source and binary packages has already
been brought up, please adjust "" in some way to indicate whether the
text content is a source or a binary package (even if we don't end up
supporting both) to be 100% clear. Otherwise people will mix it up,
and may not even notice.

Best

Sebastian
Re: [gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi!

Just a quick note for the record: 2.2.0-r2 has these changes now, no
need to have that wait for the next release:

Best

Sebastian
Re: [gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi,

On 31.05.2017 21:16, Michał Górny wrote:
>> How do you evaluate these options:
>>
>> a) Keep libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE
>>
>> b) Drop libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE
>
> Does any other distribution use libexpatu.so? If not, then there's
> probably no point in keeping it.

I found none but CoreOS, which is derived from Gentoo (..).

>> )"
>
> I'd dare say the feature is 'arc4random', then that should be the name
> of the flag.

Good point.

Best

Sebastian
[gentoo-dev] [rfc] dev-libs/expat[unicode] and dev-libs/libbsd dependency
Hi!

The next release of dev-libs/expat is not far away and there are two
things that I would appreciate input with, before the next bump in
Gentoo:


-DXML_UNICODE_WCHAR_T issues and Gentoo/Debian mismatch
=======================================================

With USE=unicode, on Gentoo two extra libraries are built:

* libexpatu.so (with CPPFLAGS=-DXML_UNICODE)
* libexpatw.so (with CPPFLAGS=-DXML_UNICODE_WCHAR_T)

However, -DXML_UNICODE_WCHAR_T has only ever worked with 2-byte
wchar_t, while 4-byte wchar_t seems mainstream on Linux (and GCC
-fshort-wchar would require libc to have the same, if you actually
wanted to pass those wchar_t strings to wprintf and friends). So
libexpatw.so in Gentoo is not functional at the moment.

To make things worse, Debian has libexpatw.so with
CPPFLAGS=-DXML_UNICODE, which corresponds to current libexpatu.so in
Gentoo, rather than libexpatw.so.

How do you evaluate these options:

a) Keep libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE

b) Drop libexpatu.so + change libexpatw.so to CPPFLAGS=-DXML_UNICODE


Depend on dev-libs/libbsd
=========================

The next release is very likely to add (optional but helpful) support
for arc4random_buf that dev-libs/libbsd provides (especially on
systems with glibc prior to 2.25) [1].

I wonder if Expat's proximity to @system has any strong implications
on whether

)"

C) libbsd could even go into DEPEND and RDEPEND directly, or
   RDEPEND="dev-libs/libbsd"

D) libbsd should not become any kind of future dependency of
   dev-libs/expat.

Thanks for your time!

Best

Sebastian

[1]
[gentoo-dev] Last rites: media-gfx/drqueue
# Sebastian Pipping <sp...@gentoo.org> (08 Oct 2016)
# Dead upstream for years, ebuild needs work, 5 open bugs
# Masked for removal in 30 days.
media-gfx/drqueue
Re: [gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
On 05.01.2016 20:35, Michael Orlitzky wrote:
> I just pushed a new revision with this fix. In eselect-php-0.8.2-r1,
> we ship both the new 70_mod_php.conf and the old 70_mod_php5.conf. The
> latter comes with a big warning at the top of it, stating that it is
> for backwards compatibility only.

Cool, sounds like a great idea to me. I guess we don't need a news
item any more then?

Sebastian
Re: [gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
On 04.01.2016 11:45, Lars Wendler wrote:
> Hi Sebastian,
>
> to be honest I was very upset when I first stumbled upon this
> problem. And yes I only found about it when my apache webserver
> started to deliver php source code instead of the real sites.

Exactly the same with me.

> Doing such a change without getting in contact with me as apache
> maintainer before the change was done is very... eh... impolite at
> best.

Just for the record, it wasn't me :)

Best

Sebastian
[gentoo-dev] News item: Apache "-D PHP5" needs update to "-D PHP"
Hi!

Better late than never. Posting 72 hours from now at the earliest, as
advised by GLEP 42. Feedback welcome as usual.

===
Title: Apache "-D PHP5" needs update to "-D PHP"
Author: Sebastian Pipping <sp...@gentoo.org>
Content-Type: text/plain
Posted: 2016-01-04
Revision: 1
News-Item-Format: 1.0
Display-If-Installed: app-eselect/eselect-php[apache2]

With >=app-eselect/eselect-php-0.8.1, to enable PHP support for Apache
2.x file /etc/conf.d/apache2 no longer needs to read

  APACHE2_OPTS="... -D PHP5"

but

  APACHE2_OPTS="... -D PHP"

, i.e. without "5" at the end. This change is related to unification
in context of the advent of PHP 7.x. With that change, guard
"<IfDefine PHP>" in file /etc/apache2/modules.d/70_mod_php.conf has a
chance to actually pull in PHP support.

Without updating APACHE2_OPTS, websites could end up serving PHP code
(including configuration files with passwords) unprocessed to website
visitors!

The origin of this news item is:
===

Best

Sebastian
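[Editorial aside, not part of the news item] The rename amounts to a one-line edit of the options variable. The sketch below demonstrates it on a scratch copy; on a real system the file is /etc/conf.d/apache2 and you would back it up first:

```shell
# Illustration only: the APACHE2_OPTS rename is a one-line sed edit.
# Demonstrated on a scratch copy, not the real /etc/conf.d/apache2.
f=/tmp/apache2.confd.example
printf 'APACHE2_OPTS="... -D PHP5"\n' > "$f"
sed -i 's/-D PHP5/-D PHP/' "$f"
cat "$f"
```

After the edit the line reads `APACHE2_OPTS="... -D PHP"`, matching the guard in 70_mod_php.conf so that PHP requests are processed again instead of being served as source.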
[gentoo-dev] Re: [gentoo-commits] repo/gentoo:master commit in: media-libs/gegl/
On 09.12.2015 07:41, Michał Górny wrote:
> On Tue, 8 Dec 2015 21:54:44 + (UTC) "Sebastian Pipping"
> <sp...@gentoo.org> wrote:
>
>> commit:     a1ea06b430e14f68b5b7bf1947a681215157c034
>> Author:     Sebastian Pipping gentoo org>
>> AuthorDate: Tue Dec 8 21:49:31 2015 +
>> Commit:     Sebastian Pipping gentoo org>
>> CommitDate: Tue Dec 8 21:54:00 2015 +
>> URL:
>>
>> media-libs/gegl: Fix ffmpeg/libav dependency (bug #567638)
>>
>> Package-Manager: portage-2.2.26
>>
>> media-libs/gegl/gegl-0.3.4.ebuild | 10 ++
>> 1 file changed, 6 insertions(+), 4 deletions(-)
>>
>> diff --git a/media-libs/gegl/gegl-0.3.4.ebuild
>> b/media-libs/gegl/gegl-0.3.4.ebuild
>> index 764b6c9..c2b9409 100644
>> --- a/media-libs/gegl/gegl-0.3.4.ebuild
>> +++ b/media-libs/gegl/gegl-0.3.4.ebuild
>> @@ -18,7 +18,7 @@ if [[ ${PV} == ** ]]; then
>> +
>
> ...which change is put silently under 'dependency fix' with no
> explicit warning, and effectively breaks ~ia64 reverse
> dependencies:
>
> media-gfx/gimp

There is for that now.

If I don't hear from ia64 and/or sparc until tomorrow night, I will
drop those keywords from Gimp as well. If it's more urgent, I'm happy
with anyone else doing that before me.

I hope that's okay for everyone. Else, please let me know.

Best,

Sebastian
Re: [gentoo-dev] repoman adding Package-Manager: portage-2.2.20.1 to every single commit
On 19.08.2015 18:33, hasufell wrote:
> I don't want to start a lot of bikeshed, but I think this information
> is practically useless. If there has been a problem with a commit,
> ask the developer about his repoman version (which I believe was the
> reason for this, unless you want me to add Package-Manager:
> paludis-2.4.0 to every commit ;). Let's just remove it.

With that line removed, how do we notice that people are committing
without repoman or not running repoman checks at least? There is quite
a risk of things going straight into stable by mistake when repoman is
not used.

Best,

Sebastian
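[Editorial aside, not part of the original mail] The "how do we notice" question above can at least be answered mechanically: as long as repoman appends the footer, commits made without it are exactly those whose messages lack it. A sketch (needs git >= 2.4 for --invert-grep), demonstrated on a throwaway repository rather than the real tree:

```shell
# Sketch: spot commits made without repoman by filtering the log for
# messages lacking a Package-Manager footer.  Demonstrated on a
# throwaway repository; needs git >= 2.4 for --invert-grep.
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'cat/pkg: version bump

Package-Manager: portage-2.2.20.1'
git -C "$repo" -c user.name=demo -c user.email=demo@example.org \
    commit -q --allow-empty -m 'cat/pkg: manual edit without repoman'
# Only the commit without the footer shows up:
missing=$(git -C "$repo" log --format='%s' \
    --grep='Package-Manager' --invert-grep)
echo "$missing"
```

This is exactly the audit that becomes impossible once the footer is dropped, which was the point of the reply above.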
Re: [gentoo-dev] [RFC] Rebooting the Installer Project
On 20.07.2015 10:51, Michał Górny wrote:
> [..]
> I think we'd really benefit from having some kind of helper scripts /
> checklist of tasks to be done prior to/after install. For example,
> you'd run 'check-my-install' script and it'd tell you what you likely
> forgot to set up :).

+1

Sebastian
[gentoo-dev] Problems updating Qt from 4.8.6 to 4.8.7
Hi there!

I'm having trouble updating Qt:4 (dev-qt/qt*-4.8*:4) from 4.8.6 to
4.8.7. Looking at the ebuilds, they require some 4.8.7 versions to be
installed already that in turn cannot be installed because other
ebuilds require 4.8.6 while not yet upgraded.

I am running the latest version of portage. Is there some trick I
should know about or am I stuck with Qt 4.8.6 on that box forever?
How did you update?

Thanks for your help, best,

S
Re: [gentoo-dev] Problems updating Qt from 4.8.6 to 4.8.7
On 05.07.2015 20:44, Alexandre Rostovtsev wrote:
> What I usually end up doing is listing my installed dev-qt/qt*
> ebuilds, and updating all of them together explicitly: emerge -1
> qtcore:4 qtgui:4 qtsql:4 etc.

That's what I tried but it doesn't seem to work with this update.
Looking at the dependencies of qtgui:

dev-qt/qtgui-4.8.6-r4  DEPEND  ~dev-qt/qtcore-4.8.6
dev-qt/qtgui-4.8.7     DEPEND  ~dev-qt/qtcore-4.8.7

I really wonder if there is any update path from having

dev-qt/qtcore-4.8.6-r2
dev-qt/qtgui-4.8.6-r4

installed before to

dev-qt/qtcore-4.8.7
dev-qt/qtgui-4.8.7

after. Right now, it looks like I have to use emerge -C .. to
un-install them completely, temporarily breaking Qt and installing
4.8.7 fresh. I'm still hoping for some way to not needing to do that.

> Alternatively, just try emerge --update --deep world - it probably
> should work if you have a consistent, complete and updateable world
> set.

That's where I'm coming from. It doesn't stop complaining because of
Qt.

Best,

Sebastian
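[Editorial aside, not from the thread] The "update them all together" suggestion can be scripted so no installed dev-qt package is forgotten. On a real box the list would come from `qlist -IC 'dev-qt/*'` (app-portage/portage-utils); it is faked below so the pipeline itself can be shown:

```shell
# Sketch of scripting Alexandre's suggestion: rebuild every installed
# dev-qt package in one transaction so the ~4.8.7 deps move together.
# Real input would be: qlist -IC 'dev-qt/*'  (faked here for the demo).
installed='dev-qt/qtcore
dev-qt/qtgui
dev-qt/qtsql'
atoms=$(printf '%s\n' "$installed" | sed 's|$|:4|' | tr '\n' ' ')
echo "emerge --oneshot --verbose $atoms"
```

The :4 slot suffix keeps the operation confined to the Qt 4 slot; the printed emerge command is what you would then actually run.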
Re: [gentoo-dev] Re: Review: Apache AddHandler news item
Hello Duncan,

On 06.04.2015 06:53, Duncan wrote:
> Sebastian Pipping posted on Mon, 06 Apr 2015 01:29:19 +0200 as
> excerpted:
>
>> Published a slightly improved version now:
>> - apache-addhandler-addtype
>>
>> If there's anything wrong with it, please mail me directly (or put
>> me in CC) so there is zero chance of slipping through. Thanks!
>
> [also mailing sp...@gentoo.org as requested]

thanks!

> $ echo Apache AddHandler/AddType vulnerability protection | wc -c
> 51
>
> GLEP 42 says max title length 44 chars. 51-44=7 chars too long.

Actually, echo prints a newline that is also counted. So it's 50 and 6
characters too much but you still have a point :)

> Off the top of my head, maybe just s/vulnerability/vuln/ ? That'd cut
> 9 chars for 42, leaving two to spare. Anyone with a better idea?

I made it say exploit now:

# echo -n 'Apache AddHandler/AddType exploit protection' | wc -c
44

I hope that's correct enough in terms of security language. The fix
protects against exploits of the related vulnerability.

> That's the big one. Here's a couple more minor English usage change
> suggestions as well. (Changes denoted in caps here, obviously
> lowercase them):
>
> Line 25, add also:
>   may be helpful. Unfortunately, it can ALSO be a security threat.

Fixed.

> Line 74 s/at/in/:
>   You may be using AddHandler or AddType IN other places,

Fixed.
Re: [gentoo-dev] Review: Apache AddHandler news item
Published a slightly improved version now: If there's anything wrong with it, please mail me directly (or put me in CC) so there is zero chance of slipping through. Thanks! Best, Sebastian
[gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
Hi!

For the current Gentoo Git setup I found these methods working for accessing a repository, betagarden in this case:

  git://anongit.gentoo.org/proj/betagarden.git
  (git://git.gentoo.org/proj/betagarden.git)
  (git://git.overlays.gentoo.org/proj/betagarden.git)
  ()
  git+ssh://g...@git.gentoo.org/proj/betagarden.git
  (git+ssh://g...@git.overlays.gentoo.org/proj/betagarden.git)

Those without parentheses are the ones announced at the repository's page [1]. My concerns about the current set of supported ways of transfer are:

 * There does not seem to be support for https://. Please add it.
 * Why do we serve Git over git:// and http:// if those are vulnerable to man-in-the-middle attacks (before having waterproof GPG protection for whole repositories in place)? Especially with ebuilds run by root, we cannot afford MITM.

So I would like to propose that

 * support for Git access through https:// is activated,
 * Git access through http:// and git:// is deactivated, and
 * the URLs on gitweb.gentoo.org and the Layman registry are updated accordingly. (Happy to help with the latter.)

Thanks for your consideration.

Best, Sebastian

[1]
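If the https:// access proposed here becomes available, existing checkouts can switch without re-cloning. A demonstration in a throw-away repository — the https URL below is an assumption; whatever URL infra actually announces should be used instead:

```shell
# Demonstrate swapping a git:// remote for https:// in place.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo
cd demo
git remote add origin git://anongit.gentoo.org/proj/betagarden.git
git remote set-url origin https://anongit.gentoo.org/proj/betagarden.git
git remote get-url origin   # now reports the https:// URL
```

`git remote set-url` only rewrites the configured URL; the next fetch then goes over TLS, which is the point of the proposal.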
Re: [gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
On 29.03.2015 19:39, Andrew Savchenko wrote: On Sun, 29 Mar 2015 18:41:33 +0200 Sebastian Pipping wrote: So I would like to propose that * support for Git access through https:// is activated, * Git access through http:// and git:// is deactivated, and Some people have https blocked. http:// and git:// must be available read-only. They would not do online banking over http, right? Why would they run code with root privileges from http? Best, Sebastian
Re: [gentoo-dev] Current Gentoo Git setup / man-in-the-middle attacks
On 29.03.2015 19:56, Diamond wrote:

  Doesn't git:// use SSH, which is secure? I think that was on GitHub.

git:// is the git protocol [1] with absolutely no authentication and no encryption. GitHub does not support git:// but only secure protocols (HTTPS, SSH), see [2].

Best, Sebastian

[1] [2]
Re: [gentoo-dev] Review: Apache AddHandler news item
Next round:

 * Recipe for handling \.(php|php5|phtml|phps)\. manually added
 * AddType (with similar problems) mentioned, too
 * Typo momment fixed
 (* Internal revision bump to 3, will be committed as revision 1)
 (* Date bumped to today)
 (* Links renumbered due to new link [2])

Title: Apache AddHandler/AddType vulnerability protection
Author: Sebastian Pipping sp...@gentoo.org
Content-Type: text/plain
Posted: 2015-03-30
Revision: 3
News-Item-Format: 1.0
Display-If-Installed: www-servers/apache

Apache's directives AddHandler [1] (and AddType [3] document a multi-language website as a context where that behavior may be helpful. Unfortunately, it can be a security threat. Combined with (not just PHP) applications that support file upload, the AddHandler/AddType. Shipping automatic protection for this scenario is not trivial, but you could manually install protection based on this recipe:

  <FilesMatch "\.(php|php5|phtml|phps)\.">
    # a) Apache 2.2 / Apache 2.4 + mod_access_compat
    #Order Deny,Allow
    #Deny from all

    # b) Apache 2.4 + mod_authz_core
    #Require all denied

    # c) Apache 2.x + mod_rewrite
    #RewriteEngine on
    #RewriteRule .* - [R=404,L]
  </FilesMatch>

 * You may be using AddHandler (or AddType) at other places, including off-package files. Please have a look.
 * app-admin/eselect-php is not the only package affected. There is a dedicated tracker bug at [4].

As of the moment,, Michael Orlitzky and Marc Schiffbauer.

[1] [2] [3] [4]
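To see why the recipe matters, the FilesMatch pattern can be exercised against a few file names with grep — note how an uploaded double-extension name is caught while a normal script is not (the file names here are hypothetical):

```shell
# Exercise the FilesMatch regex from the recipe above with grep -E.
pattern='\.(php|php5|phtml|phps)\.'
for f in avatar.php.png index.php notes.txt; do
    if printf %s "$f" | grep -Eq "$pattern"; then
        echo "$f: matched (would be blocked)"
    else
        echo "$f: not matched"
    fi
done
```

Only avatar.php.png matches, because the pattern requires a further dot after the PHP-ish extension — exactly the AddHandler/AddType case the news item warns about.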
Re: [gentoo-dev] Should Gentoo do https by default?
On 27.03.2015 15:33, Hanno Böck wrote: I think defaulting the net to HTTPS is a big step for more security and I think Gentoo should join the trend here. Yes please! Sebastian
Re: [gentoo-dev] Re: Review: Apache AddHandler news item
Hi! I was wondering about the same thing, too. I can commit it as revision 1 for a workaround. If you have some time, please take this question/issue further with the related software and people. Thanks in advance, Sebastian
[gentoo-dev] Review: Apache AddHandler news item
Hi!

In this context, mjo and I agreed that a portage news item would be a good idea. Please review my proposal below. Thank you!

Best, Sebastian

===

Title: Apache AddHandler vulnerability protection
Author: Sebastian Pipping sp...@gentoo.org
Content-Type: text/plain
Posted: 2015-03-26
Revision:] [1] [2] [3]
Re: [gentoo-dev] Review: Apache AddHandler news item
On 26.03.2015 18:02, Michael Orlitzky wrote:

  The most important reason is missing =) If you are relying on the
  AddHandler behavior to execute secret_database_stuff.php.inc, then once
  the change is made, Apache will begin serving up your database
  credentials in plain text.

Good point. Changes:

 * Revision bump
 * Add section on .php.inc
 * Add thanks line

Title: Apache AddHandler vulnerability protection
Author: Sebastian Pipping sp...@gentoo.org
Content-Type: text/plain
Posted: 2015-03-26
Revision: and Michael Orlitzky. [1] [2] [3]
Re: [gentoo-dev] Review: Apache AddHandler news item
On 26.03.2015 20:50, Marc Schiffbauer wrote: * Sebastian Pipping schrieb am 26.03.15 um 19:15 Uhr: As of the momment, affected packages include: ^ Typo Thanks. Fixed in my local copy. No need to re-paste, I believe. Best, Sebastian
Re: [gentoo-dev] Naming of repositories: gentoo-x86 edition, bike shedding wanted
On 14.03.2015 23:25, Robin H. Johnson wrote:

  Trying to explain to a new user that the Portage tree refers to the
  collection of ebuilds used by a PMS-compliant package manager (eg
  Portage) is problematic.

Full ack. Let's limit portage to the piece of software, please.

  Questions: 0. What names for the tree/repository.

It's not a tree. Ideally, it would be a directed acyclic graph (DAG); there may even be some loops. I would therefore object to any name that has tree in it. Since there are other Gentoo-based distros, I would say the word gentoo should be in there. Plain gentoo may work. I would be happy with any of gentoo-{core,main,master} as well, if plain gentoo causes trouble for a name in some context.

  1. We have some namespaces in Git: proj, dev, priv, data, sites, exp;
  should the tree be in one of those namespaces, a new namespace, or be
  without a namespace? git://anongit.gentoo.org/NEW-NAME.git.

Not in any of those namespaces, please. If in any, make it repos or repositories, please.

Thanks for your consideration.

Best, Sebastian
Re: [gentoo-dev] Naming of repositories: gentoo-x86 edition, bike shedding wanted
On 15.03.2015 10:48, Ulrich Mueller wrote: If we want a separate repo/ namespace, we would probably need to consider moving other repositories there -- at least the official ones. Of course, it would be a nice result, having everything hosted on git.g.o as git.g.o/repo/${repo_name}.git. Isn't repo fairly redundant? Everything there is a repository. There are Git repositories up there that do not contain ebuilds. So repo is not redundant if it is meant in the overlays sense of the word. Two examples: Best, Sebastian
Re: [gentoo-dev] [rfc] Rendering the official Gentoo logo / Blender,2.04, Python 2.2
On 07.06.2011 11:15, Mario Bodemann wrote: Hi folks, Sebastian told me about the problem of not being able to render the logo in recent blender versions. So this is where I stepped in: I tried it and used the geometries from the old .blend file, and the yellowish reflecting image. The problem was to recreate the exact representation of the original logo, by new means of rendering and relighting. I tried to solve this by creating a new material for the g and carefully adjusting the parameters. I also added a new modifier for the geometry to get rid of the ugly seam at the sharp edge. (This does not modify the geometry, only the rendering of it.) However, here are my preliminary results: - the modified .blend file [1] (tested with blender 2.57b) - new rendered logo image [2] - original logo image (for comparison) [3] What do you think? Greetings, Mario. [1];a=blob_plain;f=g-metal.blend;hb=master [2];a=blob_plain;f=g-metal.png;hb=images [3];a=blob_plain;f=g-metal-orig.png;hb=images For the record, I have resurrected that repository at now. For the images [2][3], have a look at the images branch: Best, Sebastian
Re: [gentoo-dev] collab herd for cooperative pkg maintenance
On 23.03.2015 18:22, Tim Harder wrote: With that in mind, I think it would be an interesting experiment if we had a collaborative herd (probably named collab) that signals the status that anyone is generally free to fix, bump, or do sane things to the pkgs with the caveat that you fix what you break. +1 (to any other non-herd marker in metadata.xml to achieve the same effect, too) Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
On 23.02.2015 23:34, Daniel Campbell wrote: Can't the logo be remade in a more recent version of Blender? Assuming you can run two separate Blender instances, it would mostly be copying the poly/vertex values from one to the other. I'm not versed in 3-D but it would surprise me if there wasn't a standard mesh format. There was an attempt to port to a recent version of Blender. When comparing renderings, the result is close, but not 1:1. Please check Mario's reply of 2011 in this very thread (). Best, Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
Hi!

Please excuse bringing up a topic as old as this, again. Only bringing up half, actually.

On 05.05.2011 07:36, Sebastian Pipping wrote:.

While doing digital cleaning over here I ran into my patches to make ancient Blender compile again for Gentoo logo rendering. I have streamlined those into ebuilds and a dedicated overlay and filled the void in the Gentoo wiki with a few words and pointers. The ebuilds are:

  dev-lang/python/
    python-2.2-r8.ebuild
  media-gfx/blender/
    blender-2.26.ebuild
    blender-2.31a.ebuild

So whoever needs to render Blender files from 2003 again at some point should find working ebuilds to do that. Feel free to join keeping them in good installable shape.

Thanks and best, Sebastian
Re: [gentoo-dev] CPU_FLAGS_X86 gentoo repository migration complete
On 01.02.2015 23:17, Michał Górny wrote: Hi, developers. Just a quick note: the CPU_FLAGS_X86 conversion of the Gentoo repository is complete now. Cool! Thanks for fixing the freeverb3 ebuild, too. Best, Sebastian
Re: [gentoo-dev] arm64
Thanks! The issue and fix are clear by now (for details:). So I don't need shell access any more, at least not in this context. Best, Sebastian On 25.01.2015 18:49, Tom Gall wrote: At least speaking for myself, I can help you out starting Feb 15th, presuming all the stars are in alignment. If someone else doesn’t help you before, please mark it on your calendar and bug me again then cause I’m sure I’ll forget! Best, Tom On Jan 25, 2015, at 11:43 AM, Sebastian Pipping sp...@gentoo.org wrote: Hi! I got a bug report for arm64 against the test suite of uriparser. If I could get a temporary arm64 shell somewhere, that could help me understand the issue. Best, Sebastian
Re: [gentoo-dev] arm64
Hi! I got a bug report for arm64 against the test suite of uriparser. If I could get a temporary arm64 shell somewhere, that could help me understand the issue. Best, Sebastian
[gentoo-dev] Where to install Grub2 background images to?
Hi! Debian is putting Grub2 background (or splash) images into /usr/share/images/grub/ [1] but we do not have an /usr/share/images/ folder. (I'm not referring to full themes, just background images.) If I were to make media-gfx/grub-splashes:2, where would it install to? Thanks, Sebastian [1]
Re: [gentoo-dev] debootstrap equivalent for Gentoo?
Hi! On 28.12.2014 11:26, Johann Schmitz (ercpe) wrote: I wrote gentoo-bootstrap () some time ago to automate the creation of Gentoo Xen DomU's at work. It can do a lot of more things (e.g. installing packages, overlays, etc.). Interesting tool! (shame on me) it doesn't do signature verification yet. I've opened a ticket for it on GitHub now: Best, Sebastian
[gentoo-dev] debootstrap equivalent for Gentoo?
Hi!

I'm wondering if there is an equivalent of Debian's debootstrap anywhere. By equivalent I mean a tool that

 * I can run like command FOLDER, ending up with a chroot-able Gentoo system in FOLDER, and
 * that does, for both stage3 and portage tarballs:
   - Downloading the tarball
   - Downloading the signature file
   - Verifying the signature
   - Extracting

Has anyone seen something like that?

Thanks, Sebastian
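The steps above can be sketched in a few lines of shell. This is a hedged sketch only: the mirror URL, the releases/ directory layout, and the .asc signature file name are my assumptions modelled on Gentoo's usual autobuilds tree, not something the mail specifies — verify them before relying on this:

```shell
# Hedged sketch of the requested debootstrap-like tool (stage3 part).
MIRROR=https://distfiles.gentoo.org

stage3_url() {  # stage3_url ARCH STAMP -> tarball URL (assumed layout)
    printf '%s/releases/%s/autobuilds/%s/stage3-%s-%s.tar.xz\n' \
        "$MIRROR" "$1" "$2" "$1" "$2"
}

bootstrap() {   # bootstrap ARCH STAMP FOLDER -> chroot-able system in FOLDER
    url=$(stage3_url "$1" "$2")
    wget -O stage3.tar.xz     "$url"          # download tarball
    wget -O stage3.tar.xz.asc "$url.asc"      # download detached signature
    gpg --verify stage3.tar.xz.asc stage3.tar.xz || return 1   # verify
    mkdir -p "$3" && tar xpf stage3.tar.xz -C "$3"             # extract
}

stage3_url amd64 20141225   # dry run: just print the composed URL
```

A real implementation would also need the Gentoo release signing key in the gpg keyring and the same four steps for the portage tarball.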
Re: [gentoo-dev] Last rites: dev-php/{adodb-ext,eaccelerator,pecl-apc,pecl-id3,pecl-mogilefs,pecl-sca_sdo,suhosin}
Hi Brian,

On 02.10.2014 20:29, Brian Evans wrote:

  # Brian Evans grkni...@gentoo.org ( 1 Oct 2014 )
  # Masked for removal in 30 days.
  # Broken on >=dev-lang/php-5.4. No replacements known.
  [..]
  dev-php/suhosin

is that true for suhosin? Upstream reads has been tested with PHP 5.4 and 5.5 [1], and there is a bug titled dev-php/suhosin: version bump - 0.9.36 is available which has been tested with PHP 5.4 and 5.5, too.

So at least to me, this looks too early or potentially not even needed. If it is broken for 5.4/5.5, please share details on why it really is, and please update the bug mentioned above, too.

Thanks!

Best, Sebastian

[1]
[gentoo-dev] Packages up for grabs
Hello!

Below are some packages that I fail to take care of as needed and have not been using myself for a while. Please take over whatever you have interest in:

                            Latest          Open bugs
  app-text/xmlstarlet       yes             no
  media-libs/libmp3splt     yes             yes
  media-sound/mp3splt-gtk   yes             yes
  media-sound/mp3splt       yes             yes
  dev-python/html2text      no/2014.9.8     no
  dev-python/inotifyx       no/0.2.2        no
  games-action/openlierox   no/0.59_beta10  no
  media-gfx/drqueue         yes             yes
  media-libs/opencollada    no/?            yes
  sys-fs/pytagsfs           yes             yes

Many thanks, Sebastian
Re: [gentoo-dev] New Python eclasses -- a summary and reminder
Looks like great work so far.

On 11.02.2013 01:20, Michał Górny wrote:

  Secondly, I'd like to make it clear that the old python.eclass is
  'almost' deprecated. We're in process of converting the in-tree
  packages to use the new eclasses but that's a lot of work [3].
  [..]
  [3]:

I wonder what would be the best way to help with conversion for devs with a few hours to contribute?

 - Where does one go for peer review and how many eyes should be on related commits?
 - Should package owners be contacted in all cases?
 - Are there any conversion sprints already or in the future?

Best, Sebastian
Re: [gentoo-dev] What did we achieve in 2012? What are our resolutions for 2013?
Coming to my mind: There have been continued regular releases of genkernel integrating patches from various people:;a=tags And there has been a constant stream of people asking for user overlay hosting or getting an existing overlay being added to the layman registry that we could serve. Ben, I hope you have the time to make a news post from this thread's collection? Best, Sebastian
Re: [gentoo-dev] Is /var/cache the right place for repositories?
On 20.12.2012 19:14, Ciaran McCreesh wrote: The tree is a database. It belongs in /var/db/. I don't see /var/db in the latest release of the Filesystem Hierarchy Standard: I would prefer something that blends with FHS. Best, Sebastian
Re: [gentoo-dev] Is /var/cache the right place for repositories?
On 20.12.2012 18:27, Ulrich Mueller wrote: Now I wonder: After removal of e.g. the Portage tree from a system, it is generally not possible to restore it. (It can be refetched, but not to its previous state.) Same is true for distfiles, at least to some degree. They may have vanished upstream or from mirrors. Maybe /var/lib would be a better choice? It would also take care of the issue with fetch-restricted files. Thanks for bringing it up. What you address above is the exact reason why Layman's home was moved to /var/lib/layman/ eventually. It has a cache aspect, but it's not a true cache. Best, Sebastian
Re: [gentoo-dev] Lastrites: net-proxy/paros, net-misc/ups-monitor, app-emulation/mol, net-wireless/fsam7400, net-wireless/acx, net-wireless/acx-firmware, net-wireless/linux-wlan-ng-modules, net-wirele
On 11/24/2012 10:12 PM, Pacho Ramos wrote:

  # Pacho Ramos pa...@gentoo.org (24 Nov 2012)
  # Upstream dead and no longer runs (#402669).
  # Removal in a month
  app-cdr/dvd95

Bug fixed. I just ripped a DVD with dvd95 successfully.

  + 02 Dec 2012; Sebastian Pipping sp...@gentoo.org package.mask:
  + Keep dvd95 since bug #402669 is now fixed
  +

  # Pacho Ramos pa...@gentoo.org (24 Nov 2012)
  # Fails to build with gcc-4.7 and maintainer is ok with dropping (#424723).
  # Removal in a month.
  app-shells/localshell

FYI bug fixed, removal prevented by robbat2.

Best, Sebastian
[gentoo-dev] Last rites: app-admin/smolt
# Sebastian Pipping sp...@gentoo.org (27 Nov 2012)
# Masked for removal in 30 days.
# Server and software development discontinued upstream (bug #438082)
app-admin/smolt
[gentoo-dev] Last rites: app-admin/profiler
# Sebastian Pipping sp...@gentoo.org (27 Nov 2012)
# Masked for removal in 30 days.
# Licensing issues, turned out not distributable (bug #444332)
app-admin/profiler
Re: [gentoo-dev] License groups in ebuilds
Re: [gentoo-dev] gtk3 useflag and support of older toolkits. Best, Sebastian
[gentoo-dev] Re: [gentoo-dev-announce] Lastrite: 4suite, amara and testoob (mostly for security)
On 05/16/2012 10:40 AM, Samuli Suominen wrote:

  # Samuli Suominen ssuomi...@gentoo.org (16 May 2012)
  # Internal copy of vulnerable dev-libs/expat wrt #250930,
  # CVE-2009-{3720,3560} and CVE-2012-{0876,1147,1148}.
  #
  # Fails to compile wrt bug #368089
  # Bad migration away from dev-python/pyxml wrt #367745
  #
  # Removal in 30 days.
  dev-python/4suite

Can I veto 4suite (without fixing things myself yet)?

Thanks, Sebastian
Re: [gentoo-dev] Arbitrary breakage: sys-fs/cryptsetup
On 03/22/2012 03:20 PM, Alexandre Rostovtsev wrote:

  [1] For one, genkernel should bomb out if it can't comply with a
  command-line arg instead of just putting non-alert text up.

There is already a bug open about this issue:

With that bug fixed by now, is there still need for a news entry?

Best, Sebastian
[gentoo-dev] Re: [gentoo-dev-announce] Last rites: Various horde packages
Hello!

Would it make sense to move these ebuilds to a dedicated overlay? I can think of one ISP that uses both Gentoo and Horde [1] (though I'm not sure which version and if in combination). I imagine that a dedicated overlay could be both a service to people who still rely on Horde and at the same time encourage them to step up to maintenance. With a dedicated overlay we wouldn't need to worry about write access as much as with the main tree.

[1]

On 03/28/2012 06:26 PM, Alex Legler wrote:

  Up for removal in 4 weeks:

  # Alex Legler a...@gentoo.org (28 Nov 2010)
  # Not maintained, multiple security issues.
  # Use the split horde ebuilds instead.

While I don't use horde from a Gentoo perspective, I'm curious: which remaining split ebuilds are you referring to? For Horde 3 webmail: which ebuild would a user need now?

  www-apps/horde-webmail
  www-apps/horde-groupware

  # Alex Legler a...@gentoo.org (28 Mar 2012)
  # Leftover packages from a packaging attempt of Horde-4
  # These can be readded when someone picks the package up
  dev-php/Horde_ActiveSync
  [..]
  dev-php/Horde_Yaml

For those interested, a diff says that these Horde packages remain:

  www-apps/horde
  www-apps/horde-chora
  www-apps/horde-dimp
  www-apps/horde-gollem
  www-apps/horde-hermes
  www-apps/horde-imp
  www-apps/horde-ingo
  www-apps/horde-jeta
  www-apps/horde-kronolith
  www-apps/horde-mimp
  www-apps/horde-mnemo
  www-apps/horde-nag
  www-apps/horde-passwd
  www-apps/horde-pear
  www-apps/horde-turba

Best, Sebastian
Re: [gentoo-dev] [rfc] Which ebuild category should these ebuilds go into?
On 02/01/2012 09:42 AM, ScytheMan wrote:

  Take a look at g15daemon (useful for some logitech keyboards).
  There you have:

  app-misc/g15daemon
  dev-libs/libg15

Great, thanks!

Best, Sebastian
[gentoo-dev] [rfc] Which ebuild category should these ebuilds go into?
Hello!

Anthoine and I are working on some new ebuilds related to a 3D mouse at the moment. For two of these I wonder what package category makes a good fit. While I would rather not take up your time with such a simple thing, I would also like to avoid moving things around later. I have inspected the related metadata.xml files already. Which categories do you advise?

  spacenavd - driver daemon (with optional X support)
    -- sys-apps/spacenavd ?
    -- app-misc/spacenavd ?
    -- .. ?

  libspnav - library for accessing the before-mentioned daemon
    -- dev-libs/libspnav ?
    -- media-libs/libspnav ?
    -- sys-libs/libspnav ?
    -- .. ?

  spnavcfg - X11/GTK GUI tool for configuration
    -- x11-misc/spnavcfg seems right

Thanks in advance!

Best, Sebastian
[gentoo-dev] New maintainer needed: net-misc/aria2
Hello!

While someone else is the official maintainer of net-misc/aria2, I have done the last 5 version bumps or so on net-misc/aria2. I have gotten a little behind with it lately: 1.12.1 is the latest in tree, upstream has 1.13.0, 1.14.0 and the very fresh 1.14.1. One reason for that is that I don't use aria2 myself that much, if at all. Another is that the next version bump needs more care than just copy-and-commit: some dependencies have changed.

I would like to pass package net-misc/aria2 on to one of you. With the help of the proxy maintainer team this could even be someone who is not yet a Gentoo developer. Besides the test suite, nothing needs patching in the latest ebuild of 1.12.1. There are three open bugs [1].

If you want to take over, please go ahead. Maybe leave a short reply in this thread. Thanks!

Best, Sebastian

[1];list_id=712171

Original Message
Subject: aria2 maintenance
Date: Sat, 31 Dec 2011 19:00:56 +0100
From: Sebastian Pipping sp...@gentoo.org
To: ..@gentoo.org

Hello ..,

it looks like you don't really have time or interest to keep aria2 up to speed. More or less the same on my end. Would you mind if I publicly ask for a new maintainer for aria2? If I do not hear from you within a week, I take no answer as a yes.

Best, Sebastian
[gentoo-dev] unison needs some love
Hello!

Version in Gentoo: 2.32.52
Version upstream: 2.40.63

The bug is old enough to justify a takeover to me, provided you act with reasonable care.

Sebastian
Re: [gentoo-dev] unison needs some love
On 08/03/2011 07:37 PM, Alexis Ballier wrote: I'm more or less alone in the ml herd (maintainer) and I don't use unison :( While you mention the herd: how come this is herd=ml? Best, Sebastian
Re: [gentoo-dev] Last Rites: sys-fs/evms
On 07/03/2011 11:34 AM, Markos Chandras wrote: Hi Sebastian, sys-fs/evms is now gone Thanks for the notification. I have updated genkernel 3.4.17 accordingly. Sebastian
[gentoo-dev] Re: [gentoo-commits] gentoo-x86 commit in net-misc/aria2: aria2-1.12.0.ebuild ChangeLog
On 07/01/2011 10:03 AM, Peter Volkov wrote:

  On Thu, 30/06/2011 at 19:27 +, Sebastian Pipping (sping) wrote:

    Log: net-misc/aria2: Bump to 1.12.0, looks trivial

    EAPI=2
    inherit bash-completion
    ...
    pkg_setup() {
        if use scripts && use !xmlrpc && use !metalink; then
            ewarn "Please also enable the 'xmlrpc' USE flag to actually use the additional scripts"
        fi
    }

  This really calls for REQUIRED_USE from EAPI=4.

  REQUIRED_USE="scripts? ( ^^ ( xmlrpc metalink ) )"

If we use EAPI 4 in that ebuild, we cannot make it stable anytime soon, correct?

Sebastian
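For reference, a hedged sketch of what the REQUIRED_USE route could look like in such an ebuild. The operator choice here is mine, not from the mail: `|| ( ... )` requires at least one of the flags, which mirrors the ewarn's condition, while Peter's `^^ ( ... )` would additionally forbid enabling both:

```
EAPI=4

# At least one of xmlrpc/metalink must be enabled when scripts is.
REQUIRED_USE="scripts? ( || ( xmlrpc metalink ) )"
```

Unlike the pkg_setup warning, a REQUIRED_USE violation makes the package manager refuse the invalid flag combination outright instead of merely printing a note.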
Re: [gentoo-dev] Re: [gentoo-commits] gentoo commit in xml/htdocs/proj/en/qa: index.xml
On 06/09/2011 03:37 PM, Rich Freeman wrote:

  do we need some kind of policy around membership on special project
  teams. QA and Devrel are the most obvious examples, Infra might be
  another.

In my eyes we do. Too much power to be unregulated. What does it take to get this rolling?

Sebastian
Re: [gentoo-dev] Reviving webapp-config
Questions:

- What does reviving mean in detail? A re-write? A somewhat compatible re-write? Getting back to maintaining the current code? Why did you choose how you did?
- Have you spoken to Andreas Nüsslein, who worked on a re-write in context of an earlier GSoC?

Best, Sebastian
Re: [gentoo-dev] Reviving webapp-config
On 06/10/2011 05:38 PM, Matthew Summers wrote: Why did you choose how you did? I do not understand this sentence, I intended to write as you did, sorry. If that's still bad English: I wanted to hear about your rationale, which you have explained by now. Thanks. [..] this tool has an important role in Gentoo and therefore needs to be revived. I wished people were thinking like that about genkernel :-) Best, Sebastian
Re: [gentoo-dev] Gentoo package statistics -- GSoC 2011
Re: [gentoo-dev] Last Rites: sys-fs/evms
On 06/03/2011 11:32 PM, Markos Chandras wrote:

  # Markos Chandras hwoar...@gentoo.org (3 Jun 2011)
  # Dead upstream, many open bugs, partially working with
  # kernel-2.6
  # Bugs #159741, #159838, #165120, #165200, #231459,
  # #273902, #278949, #305155, #330523, #369293
  # Masked for removal in 30 days
  # Alternative: sys-fs/lvm2
  sys-fs/evms

EVMS is a soft dependency of genkernel. If sys-fs/evms is removed, EVMS support will have to be removed from genkernel, too. If you go forward, please notify the genkernel team once EVMS has been removed so we can update genkernel accordingly. Thanks!

Best, Sebastian
[gentoo-dev] Test request: open-iscsi 2.0.872
Hello!

It would be great to have a few people test open-iscsi 2.0.872 before moving it from overlay betagarden to the main tree. To get it installed, please run:

  # layman -a betagarden
  # emerge -av =sys-block/open-iscsi-2.0.872

Important: Please include a description of what you did while testing in your feedback. At best, post your feedback as a reply to bug 340425:

Thanks in advance!

Best, Sebastian
[gentoo-dev] Re: Test request: open-iscsi 2.0.872
PS: I noticed the typo in gentoo-users@lists.g.o ^ and sent a new mail to gentoo-user@lists.g.o now. Sebastian
[gentoo-dev] Genkernel needs more hands
Hello,

Genkernel's situation (reduced to the three currently most active players) looks like this to me:

 - aidecoe
   - is focussed on the transition to Dracut and related things
   - is fixing bugs in present genkernel from time to time

 - xake
   - is fixing bugs in the current genkernel releases
   - likes his patches to be reviewed
   - cannot do releases as he has no developer account, yet

 - sping (me)
   - writes and applies patches from time to time (commitment varies, at a low right now)
   - has never used half of the many technologies involved himself (iSCSI, dmraid, netboot, ...)
   - is the bottleneck on some reviews and releases

There are various bugs around; some just need attention, some could use insider knowledge that we lack. Furthermore, the kernel configs shipped by Genkernel are mainly from a time before the three of us took over and need fixes, a concept and documentation, too. There is no test suite (virtual machines?) that I know of catching regressions (of which we had only a few, in that light).

Nevertheless, genkernel has fun aspects: it's in much better shape than 3.4.10.907 was, including documentation. It's a core Gentoo tool used by quite a few people, so the work you do on Genkernel matters.

With that in mind: Are you interested in joining Genkernel?

Thanks,

? If 2.26 still produced good results, 2.37a already does not. Bisecting involves fixing compilation for each version. I stopped getting 2.30 to compile because it seemed to take forever (longer than fixing 2.26, 2.37a and 2.40 together) and two people had their hands on a port of the logo to Blender 2.57 by then; one of them still has. It's too early to give details. What I can say is that personally I would want a very close match in case of a Blender-based replacement, closer than what I have seen so far. It still seems possible, though.

Best, Sebastian
Re: [gentoo-dev] Re: [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
On 05/01/2011 08:06 AM, Michał Górny wrote:

  Isn't it possible to create a better SVG then?

It may be. Of the three variants trying to match the Blender version that I have seen so far, none is a replacement of equal quality on the bling scale, to my impression. They feel like tradeoffs, not like the real thing. Maybe they try to come too close to the ray-traced rendering, but I'm not sure if I really want to propose a different direction either.

  I think such a variant would be much more portable and reproducible
  than blender files.

What I dislike about the idea of moving to a new logo is that we would give up part of our culture just because we were unable to move it from past to present to future. Imagine this dialog:

  A: Hey guys, I noticed you have a new logo?
  B: Yeah, blender rendering changed - so we dropped it.

I don't really want to be B in that dialog. I see the pragmatic aspect of moving to SVG, but it also has the taste of giving up to me. To overcome that taste, a very strong replacement would be needed.

If we replace the Blender g we may also need a substitute for the red-white Blender gentoo as seen at *docroot*/images/gentoo-new.gif if just for the sake of consistency.

I am wondering what effect the Blender nature of a logo has on the capability and will of people to create fan art based on it, compared to an SVG version. It seems like there is only a handful of 3D Gentoo wallpapers, but does that mean there would have been more with an SVG version instead? On what levels could SVG work as a catalyst?

If we ported the logo to Blender 2.57 now: what can we do to not be running after Blender rendering changes for all time, or to reduce their impact on us? Is this a natural cost or an evil one?

Just my 2 cents.

Best,

. Both of these seem to run fine with Python 2.4.6, which is still in Gentoo. Without good image diffs, I cannot tell for sure if the rendering has changed since Blender 2.04.

Best, Sebastian
[gentoo-dev] [rfc] Rendering the official Gentoo logo / Blender 2.04, Python 2.2
Hello!

Gentoo's official logo originates from a Blender file [1] created by Daniel Robbins over 8 years ago. He used Blender 2.04 and Python 1.6 at that time. When rendering that .blend file with Blender 2.49b (or a more recent version), Blender does not apply the reflection texture needed [2] to give the metal look that you know. I don't know why that is. All I know is that Blender does find the file: it's not about the location.

Trying Blender 2.04 binaries on a Windows VM, it turned out that Blender 2.04 is still able to render our logo as expected. In my eyes, rendering our logo should not depend on a proprietary operating system or binary blobs. The source tarball of Blender 2.04 is hard to find (if available at all); the available sources of 2.03 [7] are incomplete. Binaries of 2.04 [8] are 32bit only and crash on startup on my system.

The earliest source tarball after 2.04 that upstream offers for download [3] is Blender 2.26. That version does not compile with GCC 4.4 and turns out to be at home with Python 2.2. In hope that this version would be able to render our logo in the way that Blender 2.04 did, I tried fixing compilation against GCC 4.4.5. That worked [4]. The need for Python 2.2 became clear when Python 2.4, 2.5 and 2.6 all made it segfault in Python-related code instantly. Therefore I tried bringing our old Python 2.2-r7 ebuild to life. Smaller changes like -fPIC were needed, but it wasn't too hard. You can find the Python 2.2-r8 ebuild in the betagarden overlay [6].

In the end I could do sudo eselect python set python2.2 to compile and run Blender 2.26 and make it render g-metal.blend (after adjusting the path to the reflection texture) with metal look in a resolution of a few megapixel on transparent background. I have the impression that the rendering is the same as that of Blender 2.04.

However, this is not a good long-term solution. For instance, Portage doesn't operate under Python 2.2, so an ebuild for Blender 2.26 is a tricky thing to do nowadays.
Among the options I see are the following:

 A) Find out how to render g-metal.blend with recent Blender (2.57b at best) to give pixel-identical results to Blender 2.04. Needs an advanced Blender user, ideally.

 B) Port Blender 2.26 to a recent version of Python.

Are there any other options? What do you think?

I would also like to encourage you to reproduce the process I described to spot any problems I overlooked. Thanks for reading up to this point.

Best, Sebastian

[1] [2] [3] [4];a=summary [5] [6];a=commitdiff;h=a3712c45dee61717cbc09b39ff868af7a3ccaa89 [7] [8]
[gentoo-dev] Why a betagarden overlay
Hello! First: If betagarden were a normal overlay, I would not be writing about it here. If you're in a hurry just skip the introduction and jump down to section Betagarden overlay. Introduction The betagarden overlay has been around for a while. I always wanted to write about its purpose and invite you to collaboration but I haven't got to it before. I understand betagarden as a third place supplementing the Gentoo main tree (sometimes known as gentoo-x86 or portage) and the special overlay of Project Sunrise [1]. It fills a gap that these other two repositories leave open. Let's have a look: Gentoo Main tree - Post-publishing review - Territorial write access: Gentoo Developers (only) - Full write access: Gentoo QA maybe? - High quality standards sunrise overlay --- - Pre-publishing review - Reduced write access: Anyone passing a simple test [2] - Full write access: Project Sunrise developers (only) - High quality standards From these lists a few things can be observed: 1. Both trees require high quality from ebuilds. This includes - Full integration with Gentoo (menu entries, init scripts, etc.) - Cleaning the ebuild - Support for LDFLAGS - ... 2. Gentoo developers who are not fully committed to sunrise do not have full write access to it -- Wouldn't it be nice to have a place where polishing is optional (as long as the ebuilds are still safe) with more liberal write access? But there's another group of repositories that I would also like to have a look at: Gentoo developer overlays - When you go to you see them instantly - most Gentoo devs have one: dev/aballier.git Developer overlay Alexis Ballier dev/alexxy.gitDeveloper overlay Alexey Shvetsov dev/anarchy.git Developer overlay Jory Pratt dev/angelos.git Developer overlay Christoph Mende [..] 
Many of these overlays currently combine two groups of ebuilds: - Stuff useful to themselves, only - Stuff useful to a wider audience (that they didn't feel like adding to the Gentoo main tree) With such a mix it often makes no sense for somebody else to keep that overlay installed over time. -- Wouldn't it be nice to have the stuff useful to others in a more central place (and reduce your developer to stuff that basically is only interesting to you)? Hollow and I (sping) have been trying to do that with our overlays: moving stuff useful to others over to betagarden, a shared overlay. Betagarden overlay == So now that I have shared my view on the Gentoo main tree, the sunrise overlay and developer overlays let me summarize how betagarden fits in: - Full write access to all Gentoo Developers That means more freedom than in the main tree or sunrise. - Reduced (but essential) quality standards (hence the beta in betagarden) - Keeping really useful stuff off the developer overlays How to join --- All devs have write access to betagarden already. 1. Clone git+ssh://g...@git.overlays.gentoo.org/proj/betagarden.git 2. Add yourself to the betagar...@gentoo.org alias: # ssh dev.gentoo.org # nano -w /var/mail/alias/misc/betagarden 3. Start adding (or moving over existing) ebuilds If you have trouble pushing commits please contact overl...@gentoo.org. In bugzilla, you can assign bugs to betagar...@gentoo.org by now. Expected criticism -- I expect some of you to be worried: does that mean people stop adding quality ebuilds to the Gentoo main tree and move on to betagarden? No. If an ebuild is really important it belongs into the main tree. In that case someone will take the time to ensure high quality standards and move it from betagarden to the main tree. I hope some of you do see something good in this project. Thanks for your interest, Sebastian [1] [2]
Re: [gentoo-dev] News item: Dropping Java support on ia64 (retry)
The sentence If there is no interest, the removal of Java support well be done during the second half of March 2011. seems to have some bugs. I suppose well be done was meant to be will be done? ^ But maybe the removal [..] will be done could use re-writing, too. How about this: If there is no interest, Java support will be removed from IA64 during the second half of March 2011. Best, Sebastian
[gentoo-dev] Downgrading glibc?
Hello! In relation to bug 354395 [1] I would like to downgrade my glibc back to 2.12.2. Portage doesn't allow me to do that: * Sanity check to keep you from breaking your system: * Downgrading glibc is not supported and a sure way to destruction * ERROR: sys-libs/glibc-2.12.2 failed (setup phase): * aborting to save your system Can anyone guide me or point me to a guide how to savely do that manually? Thanks, Sebastian [1]
Re: [gentoo-dev] Downgrading glibc?
A little update from my side: I was abe to downgrade glibc to 2.12.2 and my sound problem [1] is now gone again! If it's not glibc itself, it's one of the packages re-installed after (again, see [1] for the list). If anyone considers masking glibc 2.13 for now: please take my vote. Best, Sebastian [1]
Re: [gentoo-dev] Downgrading glibc?
On 02/11/11 13:26, Paweł Hajdan, Jr. wrote: Just curious, what downgrade method did you use? Just untaring an older glibc package? This is what I did: 0) Log out of X, log in to root console 1) Collect packages emerged after previous update to glibc from files in PORT_LOGDIR (using simple shell scripting) 2) Emerge glibc 2.12.2 3) Re-emerge packages from (1) 4) Reboot WARNING: It may not work as well on your system. Best, Sebastian
Re: [gentoo-dev] Re: Downgrading glibc?
On 02/11/2011 01:27 PM, Diego Elio Pettenò wrote: It should have been masked _beforehand_, masking it now is going to cause more trouble. Portage will propose a downgrade of glibc on emerge-update-world, okay. How bad would that be? Does it cause any other trouble? Remember: unless you're able to rebuild everything that was built afterwards without _using_ it, your system is going to be totally broken. Sure it sucks, haven't I said that enough times, regarding pushing stuff that's going to break other stuff straight to ~arch? In your eyes, is there anything we can do to improve the current situation? Best, Sebastian
Re: [gentoo-dev] Unused eclasses] Unused eclasses
-base:inherit eutils php-pear-manylibs-r1 kolab/dev-php/horde-framework-kolab/.svn/text-base/horde-framework-kolab-3.2_rc3-r20080528.ebuild.svn-base:inherit eutils php-pear-manylibs-r1 kolab/dev-php/horde-framework-kolab/horde-framework-kolab-3.2_rc3-r20080529.ebuild:inherit eutils php-pear-manylibs-r1 kolab/dev-php/horde-framework-kolab/horde-framework-kolab-3.2_rc1.ebuild:inherit eutils php-pear-manylibs-r1 kolab/dev-php/Horde_iCalendar/Horde_iCalendar-0.1.0.ebuild:inherit php-pear-r1 eutils kolab/dev-php/Horde_iCalendar/.svn/text-base/Horde_iCalendar-0.1.0.ebuild.svn-base:inherit php-pear-r1 eutils kolab/dev-php/Horde_Serialize/Horde_Serialize-0.0.2.ebuild:inherit php-pear-r1 eutils kolab/dev-php/Horde_Serialize/.svn/text-base/Horde_Serialize-0.0.2.ebuild.svn-base:inherit php-pear-r1 eutils kolab/dev-php/Horde_DataTree/Horde_DataTree-0.0.3.ebuild:inherit php-pear-r1 eutils kolab/dev-php/Horde_DataTree/.svn/text-base/Horde_DataTree-0.0.3.ebuild.svn-base:inherit php-pear-r1 eutils laurentb/dev-php5/phpunit/phpunit-3.5.10.ebuild:inherit php-pear-lib-r1 lordvan/dev-php/PEAR-XML_Feed_Parser/PEAR-XML_Feed_Parser-1.0.3.ebuild:inherit php-pear-r1 ohnobinki/dev-php/PEAR-Services_Yadis/PEAR-Services_Yadis-0.2.3.ebuild:inherit php-pear-r1 ohnobinki/dev-php/PEAR-Services_Facebook/PEAR-Services_Facebook-0.2.8.ebuild:inherit php-pear-r1 php/dev-php/PEAR-PHP_CodeSniffer/PEAR-PHP_CodeSniffer-1.3.0_rc1.ebuild:inherit php-pear-r1 php/dev-php/PEAR-Net_DNS/PEAR-Net_DNS-1.0.1.ebuild:inherit php-pear-r1 depend.php php/dev-php/PEAR-I18Nv2/PEAR-I18Nv2-0.11.4.ebuild:inherit php-pear-r1 php/dev-php/PEAR-Net_Sieve/PEAR-Net_Sieve-1.2.1.ebuild:inherit php-pear-r1 php/dev-php/PEAR-XML_Util/PEAR-XML_Util-1.1.4.ebuild:inherit php-pear-r1 depend.php php-4/dev-php4/phpunit/phpunit-1.3.2.ebuild:inherit php-pear-lib-r1 php-4/dev-php4/phpunit/.svn/text-base/phpunit-1.3.2.ebuild.svn-base:inherit php-pear-lib-r1 webapps-experimental/dev-php/PEAR-Net_GeoIP/PEAR-Net_GeoIP-1.0.0_rc1.ebuild:inherit 
php-pear-r1 depend.php webapps-experimental/dev-php/PEAR-Net_GeoIP/.svn/text-base/PEAR-Net_GeoIP-1.0.0_rc1.ebuild.svn-base:inherit php-pear-r1 depend.php webapps-experimental/dev-php/PEAR-Structures_Graph/.svn/text-base/PEAR-Structures_Graph-1.0.2.ebuild.svn-base:inherit php-pear-r1 webapps-experimental/dev-php/PEAR-Structures_Graph/PEAR-Structures_Graph-1.0.2.ebuild:inherit php-pear-r1 zugaina/dev-php/PEAR-Net_Sieve/PEAR-Net_Sieve-1.2.1.ebuild:inherit php-pear-r1 php5_2-sapi.eclass : dev-zero/dev-lang/php/php-5.2.12.ebuild:inherit versionator php5_2-sapi apache-module poppler.eclass : devnull/dev-libs/poppler-glib/poppler-glib-0.10.7.ebuild:inherit autotools poppler flag-o-matic kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.1.ebuild:inherit qt3 poppler kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.0.ebuild:inherit qt3 poppler kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.12.3.ebuild:inherit qt3 poppler kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.10.7.ebuild:inherit qt3 poppler kde-sunset/dev-libs/poppler-qt3/poppler-qt3-0.10.6.ebuild:inherit qt3 poppler tla.eclass : sunrise/dev-util/tla-tools/tla-tools-20060509.ebuild:inherit tla sunrise/dev-util/tla-tools/.svn/text-base/tla-tools-20060509.ebuild.svn-base:inherit tla Ycarus Le 04/02/2011 15:03, Sebastian Pipping a écrit :] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/22/11 13:32, Theo Chatzimichos wrote: Well, the distinction for unofficial/official overlays happen mostly in layman -L, I don't think users pay attention to our git repo list. Furthermore, I got at least three requests from developers to move their repo from user/ to dev/ (same problem when devs retired). This distinction doesn't make any sense. Three request over what time? Compared to a screen height of user repos created, maybe that's not much. Sebastian
Re: [gentoo-dev] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/22/11 09:55, Robin H. Johnson wrote: - On one hand, I would like user repositories to have a separate namespace, so that other users realize a given repo is NOT from a developer. Seconding that. - On the other side, what do we do when a user with a repo becomes a developer (and when they retire?) To avoid a move, you'd have to give away distinction. To be able to do path-based distinction, you have to move on status updates. It seems that you cannot have both at the same time. Sebastian
Re: [gentoo-dev] Upcoming changes to hosting of Git repos on git.gentoo.org (NOT overlays.git.gentoo.org)
On 01/21/11 23:15, Robin H. Johnson wrote: On Fri, Jan 21, 2011 at 03:47:03PM -0600, Donnie Berkholz wrote: Sweet, we actually got an invitation to bikeshed! Here's my contributions: gentoo-tree.git gentoo-portage-tree.git portage-tree.git (the name 'portage' derives from bsd ports, so it makes sense to keep that connection to make it recognizable to that audience) Please note that I said _location_. I'm not so happy about putting them in in the toplevel namespace. I see. If the long-term goal is too have multiple packages trees, than maybe tree/main.git or tree/core.git would make sense and go well with proj/, as that is not plural either: no projs/, no trees/. It could make tree/core.git tree/science.git tree/games.git tree/... some day. You need to provide TWO names: 1. The current tree that we will start with. 2. The read-only graftable tree with full history (going back to the start of Gentoo commits). Any of these suffixes for the other one would work for me: * past * before * old * history historical is fine, just a bit long, maybe without need to. As much as I like the original Portage tree, I do agree it's lead to confusing of the source code of the package manager vs. the ebuild tree. Great to hear that you share this worry. Best, Sebastian
[gentoo-dev] genkernel 3.4.11.1 released
Hello! This release fixes two bugs both affecting 3.4.11 (not earlier releases). Bugs fixed == 351906 Move application of kernel config after make mrproper as that deletes .config (whereas make clean does not) 351909 busybox 1.18.1: Return of mdstart as an applet (regression) Special thanks go to Xake. Thanks for your interest. Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:08, Jeroen Roovers wrote: On Thu, 20 Jan 2011 16:00:06 +0100 Sebastian Pipping sp...@gentoo.org wrote: This release fixes two bugs both affecting 3.4.11 (not earlier releases). I'm a Gentoo developer. I've never used genkernel for private purposes. So I don't see why you would send this to gentoo-dev@ and gentoo-dev-announce@. I don't think a bit of extra visibility can hurt with this. Still, I may take it off the list if another Gentoo developer seconds that request. Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:38, Fabian Groffen wrote: Like Jeroen, I don't think new package releases should be announced on these developer-related lists. It's not about the package, it's about the release itself. I don't send mails on package bumps I do. Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:45, Rich Freeman wrote: On Thu, Jan 20, 2011 at 3:38 PM, Fabian Groffen grob...@gentoo.org wrote: Like Jeroen, I don't think new package releases should be announced on these developer-related lists. Tend to agree, at least in general. If a genkernel upgrade impacted multiple teams/etc, such as requiring changes to install media, or the handbooks, etc, then I'd consider it completely fair game for the lists. Likewise if some big change that will really impact the distro is being considered I'd consider that fair game as well. Fair point. I'll keep posting to Planet Gentoo. How about gentoo-dev-announce? That said, there are some nice genkernel changes being made and I for one appreciate them (even though I don't yet run it - the mdadm inclusion will probably push me over the edge)! If you get the chance please try genkernel-9 (five nines) exposing the experimental branch. That may save both of us the trouble to fix things post release. Best, Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 21:55, Fabian Groffen wrote: Unless if you are on some git repo, we have commit mails which can serve this purpose very well. I am on a git repo, and a commit list serves a different purpose: low level tracking of changes. Sebastian
Re: [gentoo-dev] genkernel 3.4.11.1 released
On 01/20/11 22:06, Jeroen Roovers wrote: Version bumps have no place on the dev-announce list /unless/ they impact developers' work directly. Fine. (and not because you want to celebrate the glory of another version release). I'm not sure if I'm just interpreting things, but I wish you would speak to me in a way, where I would not have to wonder on each mail, if you're just trying to piss me off. Thanks. Sebastian
Re: [gentoo-dev] Tomoyo tools need attention
Bumped to 2.3.0-p20100820. Sebastian
[gentoo-dev] genkernel 3.4.11 released
Hello! I have just released genkernel 3.4.11 to the testing tree. From a high level point of view this release brings: - Slightly faster startup - Updated versions of busybox, LVM, e2fsprogs/blkid - A few new features, e.g. GnuPG support - A bunch of bug fixes (see below) Below you can find details on the changes since 3.4.10.908. Besides the people contributing bug reports special thanks go to: - Amadeusz Zolnowski (LVM update) - Christian Giessner (UUID crypt_root) - dacook (GnuPG 1.x support) - Denis Kaganovich (Busybox patch porting) - devsk (Multi-device patch) - Fabio Erculiani (Slowusb fixes) - Kai Dietrich (Symlink analysis) - Kolbjorn Barmen (Arithmetic fix) Please open bugs for any issues you run into. New features 217959 Add GnuPG 1.x support 315467 Add support for UUID to crypt_root 303529 Add minimal btrfs support 267383 Add virtio support by updating LVM 244651 Run make firmware_install if CONFIG_FIRMWARE_IN_KERNEL != y Component updates = 291822 Update e2fsprogs/blkid to 1.41.14 331971 Update busybox to 1.18.1 255196 Update LVM to 2.02.74 Bug fixes = 351047 Do not sleep after vgscan 271528 Handle missing kernel .config better 323317 Improve slowusb handling 246370 Check return codes of cpio 307855 Create /bin/vg* symlinks when called as /linuxrc, too 303531 Pick first device when several devices are matching real_root 347213 Fix warning cannot remove `/var/cache/genkernel/src' 326593 Allow configuring the list of busybox applets 339789 Fix arithmetic bug in defaults/initrd.scripts Thanks for your interest. Sebastian | https://www.mail-archive.com/search?l=gentoo-dev%40lists.gentoo.org&q=from:%22Sebastian+Pipping%22&o=newest&f=1 | CC-MAIN-2021-31 | refinedweb | 11,990 | 58.69 |
The project contains a few features that, in general, make the life of Episerver editors easier. The vision is to make it possible to edit and publish blocks directly on the page without the need to switch context. The page stays selected at all times, and all actions around local blocks are performed inline.
The list of current features is:

- Publish page with local blocks
- Inline block editing
- Block status indicators
- Inline publish
- Content draft view

All features work together, but you can decide which ones are enabled by configuring enabled features (see the configuration section below).
This is an extra command available in the global menu. It automatically checks the "For this page" folder of the current page and lists all draft versions of blocks that could be published with the page.
After running the command, a dialog box with a list of all draft versions of local blocks is displayed. The editor can decide which blocks will be published using check boxes next to the local block name.
The command will publish the page and all selected blocks.
Using this feature, the editor does not have to manually click through all local blocks just to check whether all of them have already been published.
This feature allows editors to edit block content directly on the page. There is a new "Inline block edit" command added to the content area item menu.
The command opens a dialog box with an editable block form. The editor can edit blocks the same way as when switching to blocks editing context.
As you can see, the layout is a bit different from the Forms view. Tabs are replaced with sections, which makes more sense for blocks, since they usually have only a few properties.
The changes can also be published directly from the dialog box.
Additionally, the command is available from the assets pane.
In many scenarios, blocks are not using the Name and Categories properties during rendering. This is the reason why we introduced the InlineBlockEditSettings configuration attribute. You can apply it to your block content type and hide those properties. Additionally, you can also use this attribute to hide specific groups to make the editing form cleaner.
The attribute contains three properties: ShowNameProperty, ShowCategoryProperty, and HiddenGroups.
For example, the only property that is editable for the Editorial block type in the Alloy templates is "Main body". There is no need to display other built-in properties or group properties into sections:
Another example is the Teaser block which has just a few simple properties:
To turn on the Name property:
[SiteContentType(GUID = "67F617A4-2175-4360-975E-75EDF2B924A7", GroupName = SystemTabNames.Content)] [SiteImageUrl] [InlineBlockEditSettings(ShowNameProperty = true)] public class EditorialBlock : SiteBlockData { [Display(GroupName = SystemTabNames.Content)] [CultureSpecific] public virtual XhtmlString MainBody { get; set; } }
And below is an example of how to display the Name and Categories properties and the Settings group.
[SiteContentType(GUID = "9E7F6DF5-A963-40C4-8683-211C4FA48AE1")] [SiteImageUrl] [InlineBlockEditSettings(ShowNameProperty = true, ShowCategoryProperty = true, HiddenGroups = "")] public class AdvancedBlock : SiteBlockData { [Display(Order = 1, GroupName = SystemTabNames.Content)] public virtual string Text1 { get; set; } [Display(Order = 2, GroupName = SystemTabNames.Content)] public virtual string Text2 { get; set; } [Display(Order = 1, GroupName = Global.GroupNames.Products)] public virtual string Text3 { get; set; } [Display(Order = 2, GroupName = Global.GroupNames.Products)] public virtual string Text4 { get; set; } }
Another enhancement is a way to get a few more details about particular content area items. Each content area item will display status icons similar to those in the page tree. You will now see if a block is a draft or if a language branch is missing.
Additionally, to help distinguish local blocks from shared blocks, there is a new "Local block" icon.
Thanks to those flags, the editor can easily see if the page is ready to be published or not.
This feature is just a convenient way to publish content area items directly from the list, without the need to switch context.
There is a new command available in the content area menu.
And also from the assets pane.
This feature allows editors to preview draft versions of content area blocks.
There is now a new button in the top bar.
By default in edit view, the editor sees the published block versions when a page is rendered.
The editor can use the new "Content Draft View" button to get an overview of how the page will look after all blocks have been published.
To turn off one or more features, use the BlockEnhancementsOptions options class and then, for example, in an initialization module, set false on each feature that should not be available. All features are enabled by default.
[InitializableModule]
public class CustomBlockEnhancementsModule : IInitializableHttpModule
{
    public void Initialize(InitializationEngine context)
    {
        var options = ServiceLocator.Current.GetInstance<BlockEnhancementsOptions>();
        options.InlineEditing = false;
        options.PublishWithLocalContentItems = true;
        options.ContentDraftView = true;
        options.InlinePublish = false;
        options.StatusIndicator = false;
    }

    public void Uninitialize(InitializationEngine context) { }

    public void InitializeHttpEvents(HttpApplication application) { }
}
This documentation can also be found on our GitHub page.
The NuGet is available on the EPiServer NuGet feed at:
Episerver Labs projects are meant to provide the developer community with pre-release features with the purpose of showcasing ongoing work and getting feedback in early stages of development.
You should be aware that Labs are trials and not supported Episerver releases. While we hope you use them, there are a few things you should expect:
The software is provided "As is" without warranty of any kind or promise of support. In no event shall Episerver be liable for any claim, damages or liability in relation to the software. By using this software, you are agreeing to our developer program terms.
Hello! In this article we will learn how to implement and use enums when programming in Java. We will do a practical analysis of the situations where enums make the life of a Java developer easy. The only prerequisite is an understanding of object-oriented programming. That said, let's fasten our seatbelts and zoom into Java enums. By the end, I hope we will all feel more comfortable with enums.
Enumerations
Enumerations in Java are available since JDK 1.5. Most other programming languages, like C and C++, also provide support for enumerations. In C++, enumerations are user-defined data types with a specified set of values. A variable of such a type can only be assigned a value from the set of values defined in the enum. This way we can make sure our variable accepts only valid values. For example, let's write a small program to show how an enum in C++ works.
#include <iostream> using namespace std; enum board{White, Black, Sheet} displayBoard; int main() { displayBoard = Black; cout<<"Black = "<< displayBoard; return 0; }
As we can see, in the code snippet above we created an enum: board. The variable displayBoard is now of type board. Enums fit the scenario where we can guarantee that a field's value will be one of a predefined set of constants. Here we have defined 3 such constants {White, Black, Sheet} to represent the different board materials. As a result, the only values that can be assigned to displayBoard are White, Black, or Sheet. Try initializing an enum variable with a different value and you will get a compilation error. The above program prints: Black = 1, the ordinal value of the constant assigned to displayBoard.
What's an ordinal value? It specifies the position of a constant in the enum declaration, much like indexes in arrays. In enums, the ordinal value starts from 0. Thus, the ordinal values for the constants are White=0, Black=1, and Sheet=2. This is a very basic example of how enums work in C++; they can be used in switch statements as well. How? That we will see when we look at enums in Java.
Java Enums
In its simplest form, an enumeration is a list of named constants defining a new data type and its legal values. In Java, enumerations define a class type. This greatly expands the capability of Java enums. In Java, an enum can have constructor(s), methods, and instance variables like a class. Enums are created using the keyword enum. All enumerations automatically contain two predefined methods: values() and valueOf(). Their signatures are:
public static enum-type[] values()
public static enum-type valueOf(String str)
The values() method returns an array that contains a list of the enumeration constants. We can iterate through these constants to find their ordinal values or their names. The valueOf() method, on the other hand, returns the enumeration constant whose name matches the passed string. Let's quickly go through a small example to show the usage of the ordinal(), values(), and valueOf() methods.
In the above code snippet we have created an enum BoardMaterial, which contains the materials from which a board could be created. You can have a WhiteBoard, a BlackBoard, or a SheetBoard. Let's quickly understand what's happening in this code block.
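The embedded gist did not survive on this page, so below is a hedged reconstruction of what the described program likely looked like. The enum and constant names come from the text; everything else is an assumption, and the specific line numbers cited in the workflow below refer to the original gist, not this sketch.

```java
// Sketch reconstructing the missing snippet: an enum of board materials
// plus a main() that exercises ordinal(), values(), and valueOf().
enum BoardMaterial {
    WhiteBoard, BlackBoard, SheetBoard
}

public class BoardMaterialDemo {
    public static void main(String[] args) {
        // Assign an enum constant and print it; printing shows the constant's name.
        BoardMaterial material = BoardMaterial.WhiteBoard;
        System.out.println("Selected material: " + material);

        // values() returns all constants; ordinal() gives each one's position.
        for (BoardMaterial m : BoardMaterial.values()) {
            System.out.println(m + " has ordinal " + m.ordinal());
        }

        // valueOf() maps a string to a constant, or throws
        // IllegalArgumentException when no constant matches.
        try {
            BoardMaterial found = BoardMaterial.valueOf("SheetBoard");
            System.out.println("Found constant: " + found);
            BoardMaterial.valueOf("GlassBoard"); // no such constant
        } catch (IllegalArgumentException e) {
            System.out.println("No constant named GlassBoard");
        }
    }
}
```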
Workflow
In lines 9-10, we created a variable of type BoardMaterial and assigned an enum constant to it. Thereafter, we print it out. When we print an enum variable, the name of the constant is printed.
In lines 14-17, we invoked the values() method on the BoardMaterial enum and iterated over its constants via a foreach loop. We then printed the ordinal (position) value of each constant along with the constant's name.
From lines 21-28, we show the usage of the valueOf() method. Pay attention to the implementation here: we wrapped the invocation in a try/catch block. This is because, if the string value passed to valueOf() doesn't match any constant in the enum, it throws an IllegalArgumentException. When we run the program we see the following output on our screen:
It is important to note here that every constant defined in an enum is a public static final member of that enum. Now, let's see some of the advantages of using enums in this way.
Advantages with Java enums
- Enumeration types can only be assigned an enumeration constant in that enum type. This solves the problem of misleading/incorrect assignments to an enum. Here it would throw a compiler error, and such a mistake can be avoided at compile time itself.
- Enumerations in Java can be compared using the == operator. The comparison is safe because enum constants are static and final by default.
- We can use enum parameter in a switch case. This makes constants driven computations better and faster.
- Java enums can have constructor(s), instance variables and methods in them. They can even implement interfaces.
- Enum constructors are always private.
Comparing enums
Java enums can be compared using the == operator, the compareTo() method, or the equals() method. Let's see what comparing enums looks like. See the gist below:
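The comparison gist itself is missing from this page; the sketch below is a stand-in illustrating the same points. The enum types Season and Direction are invented for illustration — the article's actual types are unknown.

```java
// Sketch of a comparison example: two unrelated enum types with
// overlapping ordinal values, compared via ==, compareTo(), and equals().
enum Season { SPRING, SUMMER, AUTUMN, WINTER }
enum Direction { NORTH, SOUTH, EAST, WEST }

public class EnumComparisonDemo {
    public static void main(String[] args) {
        Season a = Season.SPRING;
        Season b = Season.WINTER;

        // == compares identity; safe because each constant is a singleton.
        System.out.println("a == Season.SPRING : " + (a == Season.SPRING));

        // compareTo() compares ordinal values of the same enum type.
        System.out.println("a.compareTo(b) : " + a.compareTo(b)); // negative
        System.out.println("b.compareTo(a) : " + b.compareTo(a)); // positive
        System.out.println("a.compareTo(a) : " + a.compareTo(a)); // zero

        // equals() accepts any Object, so a cross-enum comparison compiles
        // but is always false -- even though both constants have ordinal 0.
        System.out.println("SPRING.equals(NORTH) : "
                + Season.SPRING.equals(Direction.NORTH));
    }
}
```

Note that `a.compareTo(Direction.NORTH)` would not even compile, which is the compiler error mentioned below.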
I believe the above code is self-explanatory. I would, though, like to mention a few important points here.
- When we try to compare two enum variables from different enumerations using == or the compareTo() method, it results in a compiler error. It is, however, possible to have equals() compare enums of 2 different types, which will always return false.
- The equals() method results in false if we try to compare instances of two different enumerations, even ones that happen to have the same ordinal value. Thus, having the same ordinal value does not mean that two enum constants are equal.
- When we use compareTo() to compare 2 enums of the same type, the result could be:
- 0 (Zero): meaning that the ordinal value of the invoking object is equal to that of the compared object.
- Negative integer: meaning that the ordinal value of the invoking object is less than the ordinal value of the compared object.
- Positive integer: meaning that the ordinal value of the invoking object is greater than the ordinal value of the compared object.
Here's the output of the above code snippet. I hope by now we have gathered a good understanding of Java enums. Now, let's look at some of the restrictions on Java enums.
I hope by now we have gathered some good understanding of Java enums. Now, let’s look at some of the restrictions with Java enums.
Restrictions with Java Enums
- Though Java enums define class types, we cannot instantiate them using the new keyword. We will see how the instantiation process works for enums in a while.
- An Enumeration cannot inherit another class.
- An enum cannot act as a superclass to others. Thus, we can say that enums can’t be extended.
There is, however, an exception to points #2 and #3 above. All enums, by default, inherit the class java.lang.Enum. The methods ordinal(), compareTo(), and equals() are defined in the class Enum.
Instantiating enums
Now that we have established a good understanding of enums, let's see how they are instantiated. We know that enums can have constructor(s), so let's examine that.
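The Rose gist is also missing from this page. Based on the description in the code analysis that follows (five color constants, two instance variables, one constructor that prints on instantiation, and a switch in main), here is a hedged reconstruction; the field names and the specific colors are assumptions.

```java
// Sketch of the missing Rose gist: each constant supplies a default
// value for the boolean field, and the private constructor logs
// every instantiation using name().
enum Rose {
    WHITE(true), RED(true), PINK(false), YELLOW(false), ORANGE(false);

    private final boolean fragrant;   // default value supplied per constant
    private final String colorName;   // second instance variable

    // Enum constructors are implicitly private; this runs once per constant.
    Rose(boolean fragrant) {
        this.fragrant = fragrant;
        this.colorName = name().toLowerCase();
        System.out.println("Instance created for: " + name());
    }

    boolean isFragrant() { return fragrant; }
    String colorName() { return colorName; }
}

public class RoseDemo {
    public static void main(String[] args) {
        // Touching one constant initializes the whole enum class,
        // so all five constructor messages are printed here.
        Rose myRose = Rose.WHITE;

        for (Rose r : Rose.values()) {
            System.out.println(r.colorName() + " fragrant? " + r.isFragrant());
        }

        // Case labels use unqualified constant names: the switch
        // expression already fixes the enum type.
        switch (myRose) {
            case WHITE:
                System.out.println("Picked a white rose");
                break;
            default:
                System.out.println("Picked some other rose");
        }
    }
}
```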
From the above code snippet, it is clear that we have created an enum Rose with the colors in which roses are available. Now, let us start the code analysis.
Code analysis
In the Rose enum, we have created a list of constants along with 2 instance variables, 1 constructor, and a method. Every enum constant has a name, which we can get from the enumInstance.name() method. Using name(), we added a print statement in the constructor, which is printed whenever an enum instance is created. Inside the main method we created an instance of the Rose enum. We also saw how the default values for the boolean field are provided by the enum constants. Next, we traversed through the constant values in the main method. Finally, we showcase the usage of enums in a switch statement. That is pretty much what we did in the above code; now let's see the output of the above program.
We see that the first 5 statements in the output say that instances of the constants were created. But if you remember, we created only 1 instance, and that was:
Rose myRose = Rose.WHITE;
Then why would there be so many statements saying an instance was created? The answer is quite simple. We know that each enumeration constant is an object of its enumeration type. Thus, a constructor defined inside an enum is called for each of the constants defined in the enum. So, even though we only assigned Rose.WHITE in the above statement, the constructor was invoked for every enum constant. It is also important to understand that every constant maintains its own copy of the instance variables defined in an enum.
Now coming to the switch statement. We passed the myRose instance in the switch expression, and the control reached the statement case WHITE:. We do not have to explicitly write case Rose.WHITE. This is where Java's type inference comes into the picture.
Business use for Java enums
I hope the above example explains how Java enums work much like classes. As a result, we can now move on to a more concrete scenario where we should use enums. Consider the business use case where you write your own exception classes to wrap known exceptions from the code (microservices here). Have you ever considered assigning error codes to your exception classes? Why? While debugging, error codes can be used to fetch the relevant error messages quickly. As a matter of fact, when building large applications there is a huge amount of error log data generated. Using error codes can help you classify this large data set into unique chunks. You can then focus on logs containing a particular error code rather than the whole pile of logs.
Let’s see how we can implement error codes in our application using Java enums.
I have created a small Spring Boot application with just a controller and a few other necessary classes in the exceptions package. The intent here is to show how to create error codes using enums and then use them in our defined exceptions.
Here’s my controller, very basic one nothing special in it. I am just returning an error from the endpoint.
@RestController
@RequestMapping("/exception")
public class MyController {

  @GetMapping("/code")
  public Mono getErrorCodeWithException() {
    return Mono.fromCallable(() -> {
      throw new MyException();
    });
  }
}
And, here’s the MyException class that we throw from our controller.
@Getter
public class MyException extends RuntimeException {

  private static final String INTERNAL_MESSAGE =
      "Generated my exception with error code = ";

  public final String errorCode = ErrorCode.MY_EXCEPTION.getError();

  public MyException() {
    super(INTERNAL_MESSAGE);
  }
}
Pay special attention to the field errorCode: note how it is initialized from the enum. Now let me show you the enum that was created to store the standardized error codes.
@Getter
public enum ErrorCode {

  MY_EXCEPTION("01", MyException.class);

  private final String error;
  private final Class<? extends Throwable> tClass;

  ErrorCode(String code, Class<? extends Throwable> tClass) {
    this.error = "error-code::" + code;
    this.tClass = tClass;
  }

  @Override
  public String toString() {
    return error;
  }
}
For generating the proper error response in Spring, we need one more class annotated with @RestControllerAdvice. In our case it is ExceptionHandlerAdvice.java, which takes care of rendering the proper error response on our screen whenever an exception occurs. Here's the class.
@RestControllerAdvice
public class ExceptionHandlerAdvice {

  @ExceptionHandler(value = {MyException.class})
  public ResponseEntity<String> handleMyException(MyException myException) {
    return ResponseEntity.status(HttpStatus.EXPECTATION_FAILED)
        .body(myException.getMessage() + myException.getErrorCode());
  }
}
Output
We can hit the endpoint once our service has started running on port 9000. Since the handler concatenates the exception message and the error code, the response body reads something like "Generated my exception with error code = error-code::01".
Well, that is all we implemented in the service for generating exceptions with error codes. Obviously, this is not the best we could have done; it is just a starting point, and we can refine the response further as per our needs and application standards. The complete code is present in the GitHub repo. Please feel free to clone it and work on the application in parallel as you read the post.
With that, I would conclude this post. Stay tuned for upcoming posts on annotations in Java and records in Java, which came as a preview feature in JDK 14.
Conclusion
I hope this blog helps clear all your doubts regarding Java enums. It shows how Java enums differ from those in other object-oriented languages like C++, and it highlights the areas where enums should be used when programming in Java. I hope you find the content useful and easy to understand. If you have any doubts or questions, please add them in the comments section below. Please like this post if you are now comfortable with Java enums and understood the concepts I have shared above.
References
- Java: The Complete Reference (book).
- Oracle docs.
I'm working on Python 3.5.1 and I want to be able to tell if a function has returned a coroutine object, but I cannot find where the coroutine type is defined. Instead, as of late I've been using the snippet below, which gets the type by instantiating a coroutine from a function.
async def _f():
pass
COROUTINE_TYPE = type(_f())
Probably the best way to access the coroutine type is through the
types module:
import types

types.CoroutineType  # here it is
That's not actually where the coroutine type is defined -
types.py does pretty much the same thing you're doing to get at it - but it's the standard Python-level way to access the type.
If you want to see the actual definition of the type, that's in Include/genobject.h and Objects/genobject.c. Look for the parts that say PyCoroWhatever or coro_whatever.
Python on CUED's central system
In the Linux system's "Applications" menu, in the Programming submenu there are options to run a python3 environment (with Spyder, etc) and an option to start a terminal window with paths set up ready to use python 3.
Typing
/usr/local/apps/anaconda3/bin/python in a terminal window on a CUED Linux machine will run the Python interpreter that Mich term first years use in the cloud. There are many other versions of Python installed, so beware. Typing
python3 will give you a similar version, but the choice of packages is different.
You can install private versions of packages. First you need to create a folder for them. These instructions will assume that you have a folder called
mypython in your home folder. Let's suppose that you want to install a python3 package called
mysqlparse
- From a terminal window run
/usr/local/apps/anaconda3/bin/pip3 install --install-option="--prefix=$HOME/mypython" mysqlparse
- Type
export PYTHONPATH=${PYTHONPATH}:${HOME}/mypython/lib/python3.4/site-packages
(the "3.4" may be different on your system) so that python will be able to find your packages. You'll need to do this at the start of each session (though you can make it happen automatically when you login).
- If from the same terminal window you now run
/usr/local/apps/anaconda3/bin/python
you'll find that
import mysqlparse
works. | http://www-h.eng.cam.ac.uk/help/languages/python/pythonatcued.html | CC-MAIN-2017-39 | refinedweb | 236 | 62.78 |
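Step 2 above notes that you can make the PYTHONPATH export happen automatically at login. One way to do that (assuming your login shell reads ~/.profile, and using the same path as above) is to append the export line once:

```shell
line='export PYTHONPATH=${PYTHONPATH}:${HOME}/mypython/lib/python3.4/site-packages'
# Append to ~/.profile only if it is not already there.
grep -qxF "$line" ~/.profile 2>/dev/null || echo "$line" >> ~/.profile
```

Remember to adjust the python3.4 component to match the version reported on your system.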
1. Tests should be independent and repeatable. It's a pain to debug a test that succeeds or fails as a result of other tests. Google C++ Testing Framework isolates the tests by running each of them on a different object. When a test fails, Google C++ Testing Framework allows you to run it in isolation for quick debugging.
2. Tests should be well organized and reflect the structure of the tested code. Google C++ Testing Framework groups related tests into test cases that can share data and subroutines. This common pattern is easy to recognize and makes tests easy to maintain. Such consistency is especially helpful when people switch projects and start to work on a new code base.
3. Tests should be portable and reusable. The open-source community has a lot of code that is platform-neutral; its tests should also be platform-neutral. Google C++ Testing Framework works on different OSes, with different compilers (gcc, MSVC, and others), with or without exceptions, so Google C++ Testing Framework tests can easily work with a variety of configurations. (Note that the current release only contains build scripts for Linux - we are actively working on scripts for other platforms.)
4. When tests fail, they should provide as much information about the problem as possible. Google C++ Testing Framework doesn't stop at the first test failure. Instead, it only stops the current test and continues with the next. You can also set up tests that report non-fatal failures after which the current test continues. Thus, you can detect and fix multiple bugs in a single run-edit-compile cycle.
5. The testing framework should liberate test writers from housekeeping chores and let them focus on the test content. Google C++ Testing Framework automatically keeps track of all tests defined, and doesn't require the user to enumerate them in order to run them.
6. Tests should be fast. With Google C++ Testing Framework, you can reuse shared resources across tests and pay for the set-up/tear-down only once, without making tests depend on each other.
Since Google C++ Testing Framework is based on the popular xUnit architecture, you'll feel right at home if you've used JUnit or PyUnit before. If not, it will take you about 10 minutes to learn the basics and get started. So let's go!
Note: We sometimes refer to Google C++ Testing Framework informally as Google Test.
Setting up a New Test Project¶

(deprecated) and
CMakeLists.txt for CMake (recommended).
Basic Concepts¶
When using Google Test, you start by writing assertions, which are statements that check whether a condition is true. An assertion's result can be success, nonfatal failure, or fatal failure. If a fatal failure occurs, it aborts the current function; otherwise the program continues normally.
Tests use assertions to verify the tested code's behavior. If a test crashes or has a failed assertion, then it fails; otherwise it succeeds.
Assertions¶
Google Test assertions are macros that resemble function calls. You test a class or function by making assertions about its behavior. When an assertion fails, Google Test prints the assertion's source file and line number location, along with a failure message. You may also supply a custom failure message which will be appended to Google Test's message.
The assertions come in pairs that test the same thing but have different
effects on the current function.
ASSERT_* versions generate fatal failures
when they fail, and abort the current function.
EXPECT_* versions generate
nonfatal failures, which don't abort the current function. Usually
EXPECT_*
are preferred, as they allow more than one failures to be reported in a test.
However, you should use
ASSERT_* if it doesn't make sense to continue when
the assertion in question fails.
Since a failed
ASSERT_* returns from the current function immediately,
possibly skipping clean-up code that comes after it, it may cause a space leak.
Depending on the nature of the leak, it may or may not be worth fixing - so
keep this in mind if you get a heap checker error in addition to assertion
errors.
To provide a custom failure message, simply stream it into the macro using the
<< operator, or a sequence of such operators. An example:
ASSERT_EQ(x.size(), y.size()) << "Vectors x and y are of unequal length"; for (int i = 0; i < x.size(); ++i) { EXPECT_EQ(x[i], y[i]) << "Vectors x and y differ at index " << i; }
Anything that can be streamed to an
ostream can be streamed to an assertion
macro--in particular, C strings and
string objects. If a wide string
(
wchar_t*,
TCHAR* in
UNICODE mode on Windows, or
std::wstring) is
streamed to an assertion, it will be translated to UTF-8 when printed.
Basic Assertions¶

These assertions do basic true/false condition testing: ASSERT_TRUE(condition) / EXPECT_TRUE(condition) verify that condition is true, while ASSERT_FALSE(condition) / EXPECT_FALSE(condition) verify that it is false. Remember that on failure ASSERT_* yields a fatal failure and returns from the current function, while EXPECT_* yields a nonfatal failure, allowing the function to continue.

Availability: Linux, Windows, Mac.

Binary Comparison¶

This section describes assertions that compare two values: ASSERT_EQ / EXPECT_EQ (==), ASSERT_NE / EXPECT_NE (!=), ASSERT_LT / EXPECT_LT (<), ASSERT_LE / EXPECT_LE (<=), ASSERT_GT / EXPECT_GT (>), and ASSERT_GE / EXPECT_GE (>=).
Value arguments must be comparable by the assertion's comparison
operator or you'll get a compiler error. We used to require the
arguments to support the
<< operator for streaming to an
ostream,
but it's no longer necessary since v1.6.0 (if
<< is supported, it
will be called to print the arguments when the assertion fails;
otherwise Google Test will attempt to print them in the best way it
can. For more details and how to customize the printing of the
arguments, see this Google Mock recipe.)
Arguments are always evaluated exactly once. Therefore, it's OK for the arguments to have side effects. However, as with any ordinary C/C++ function, the arguments' evaluation order is undefined (i.e. the compiler is free to choose any order) and your code should not depend on any particular argument evaluation order.
ASSERT_EQ() does pointer equality on pointers. If used on two C strings, it
tests if they are in the same memory location, not if they have the same value.
Therefore, if you want to compare C strings (e.g.
const char*) by value, use
ASSERT_STREQ() , which will be described later on. In particular, to assert
that a C string is
NULL, use
ASSERT_STREQ(NULL, c_string) . However, to
compare two
string objects, you should use
ASSERT_EQ.
Macros in this section work with both narrow and wide string objects (
string
and
wstring).
Availability: Linux, Windows, Mac.
String Comparison¶

The assertions in this group compare two C strings: ASSERT_STREQ / EXPECT_STREQ (equal content), ASSERT_STRNE / EXPECT_STRNE (different content), and the case-insensitive variants ASSERT_STRCASEEQ / EXPECT_STRCASEEQ and ASSERT_STRCASENE / EXPECT_STRCASENE. To compare two string objects, use ASSERT_EQ instead. For more string assertions (e.g. substring, prefix, suffix, and regular-expression matching), see the Advanced Google Test Guide.
Simple Tests¶

To create a test, use the TEST() macro to define and name a test function. It is an ordinary C++ function that doesn't return a value; in its body, use the various Google Test assertions to check values. The test's result is determined by the assertions; if any assertion in the test fails (either fatally or non-fatally), or if the test crashes, the entire test fails. Otherwise, it succeeds.
TEST(test_case_name, test_name) { ... test body ... }
TEST() arguments go from general to specific. The first argument is the
name of the test case, and the second argument is the test's name within the
test case. Both names must be valid C++ identifiers, and they should not contain underscore (
_).
Test Fixtures: Using the Same Data Configuration for Multiple Tests¶

If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. To create a fixture, just:

1. Derive a class from ::testing::Test.
2. Start its body with protected:, as we'll want to access fixture members from sub-classes.
3. Inside the class, declare any objects you plan to use.
4. If necessary, write a default constructor or SetUp() function to prepare the objects for each test.
5. If necessary, write a destructor or TearDown() function to release any resources you allocated in SetUp().
6. If needed, define subroutines for your tests to share.
When using a fixture, use
TEST_F() instead of
TEST() as it allows you to
access objects and subroutines in the test fixture:
TEST_F(test_case_name, test_name) { ... test body ... }
Like
TEST(), the first argument is the test case name, but for
TEST_F()
this must be the name of the test fixture class. You've probably guessed:
_F
is for fixture.
Unfortunately, the C++ macro system does not allow us to create a single macro that can handle both types of tests. Using the wrong macro causes a compiler error.
Also, you must first define a test fixture class before using it in a
TEST_F(), or you'll get the compiler error "
virtual outside class
declaration".
For each test defined with
TEST_F(), Google Test will:
1. Create a fresh test fixture at runtime
2. Immediately initialize it via SetUp()
3. Run the test
4. Clean up by calling TearDown()
5. Delete the test fixture.

Note that different tests in the same test case have different test fixture objects, and Google Test always deletes a test fixture before it creates the next one. Google Test does not reuse the same test fixture for multiple tests. Any changes one test makes to the fixture do not affect other tests.
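The five steps above are easy to model in plain C++. The sketch below is only an illustration of the contract (a fresh fixture per test); it is not Google Test's actual implementation:

```cpp
// Minimal stand-in for ::testing::Test's SetUp/TearDown hooks.
struct FixtureModel {
  virtual void SetUp() {}
  virtual void TearDown() {}
  virtual ~FixtureModel() {}
};

// What TEST_F conceptually does for every single test:
template <typename Fixture, typename Body>
void run_one_test(Body body) {
  Fixture* f = new Fixture;  // 1. fresh fixture for this test
  f->SetUp();                // 2. initialize it
  body(*f);                  // 3. run the test body
  f->TearDown();             // 4. clean up
  delete f;                  // 5. delete the fixture
}

// A tiny fixture used to show that tests do not share state.
struct CountingFixture : FixtureModel {
  int value = 0;
  void SetUp() override { value = 42; }
};
```

Running two test bodies through run_one_test shows that a change made by the first body is never visible to the second, because each body receives a brand-new fixture.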
As an example, let's write tests for a FIFO queue class named
Queue, which
has the following interface:
template <typename E>  // E is the element type.
class Queue {
 public:
  Queue();
  void Enqueue(const E& element);
  E* Dequeue();  // Returns NULL if the queue is empty.
  size_t size() const;
  ...
};
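The primer uses this interface without ever showing an implementation. For readers who want to compile the examples, here is a minimal sketch that satisfies it; the singly-linked-list storage and the ownership convention (Dequeue returns a heap-allocated copy the caller must delete) are my assumptions, not part of the primer:

```cpp
#include <cstddef>

template <typename E>
class Queue {
 public:
  Queue() : head_(nullptr), tail_(nullptr), size_(0) {}

  ~Queue() {
    while (E* e = Dequeue()) delete e;
  }

  void Enqueue(const E& element) {
    Node* n = new Node{element, nullptr};
    if (tail_) tail_->next = n; else head_ = n;
    tail_ = n;
    ++size_;
  }

  // Returns nullptr if the queue is empty; the caller owns the result.
  E* Dequeue() {
    if (!head_) return nullptr;
    Node* n = head_;
    head_ = n->next;
    if (!head_) tail_ = nullptr;
    E* element = new E(n->element);  // hand back a copy
    delete n;
    --size_;
    return element;
  }

  size_t size() const { return size_; }

 private:
  struct Node {
    E element;
    Node* next;
  };
  Node* head_;
  Node* tail_;
  size_t size_;
};
```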
First, define a fixture class. By convention, you should give it the name
FooTest where
Foo is the class being tested.
class QueueTest : public ::testing::Test {
 protected:
  virtual void SetUp() {
    q1_.Enqueue(1);
    q2_.Enqueue(2);
    q2_.Enqueue(3);
  }

  // virtual void TearDown() {}

  Queue<int> q0_;
  Queue<int> q1_;
  Queue<int> q2_;
};
In this case,
TearDown() is not needed since we don't have to clean up after
each test, other than what's already done by the destructor.
Now we'll write tests using
TEST_F() and this fixture.
TEST_F(QueueTest, IsEmptyInitially) {
  EXPECT_EQ(0, q0_.size());
}

TEST_F(QueueTest, DequeueWorks) {
  int* n = q0_.Dequeue();
  EXPECT_EQ(NULL, n);

  n = q1_.Dequeue();
  ASSERT_TRUE(n != NULL);
  EXPECT_EQ(1, *n);
  EXPECT_EQ(0, q1_.size());
  delete n;

  n = q2_.Dequeue();
  ASSERT_TRUE(n != NULL);
  EXPECT_EQ(2, *n);
  EXPECT_EQ(1, q2_.size());
  delete n;
}
The above uses both
ASSERT_* and
EXPECT_* assertions. The rule of thumb is
to use
EXPECT_* when you want the test to continue to reveal more errors
after the assertion failure, and use
ASSERT_* when continuing after failure
doesn't make sense. For example, the second assertion in the
Dequeue test is
ASSERT_TRUE(n != NULL), as we need to dereference the pointer
n later,
which would lead to a segfault when
n is
NULL.
When these tests run, the following happens:

1. Google Test constructs a QueueTest object (let's call it t1).
2. t1.SetUp() initializes t1.
3. The first test (IsEmptyInitially) runs on t1.
4. t1.TearDown() cleans up after the test finishes.
5. t1 is destructed.
6. The above steps are repeated on another QueueTest object, this time running the DequeueWorks test.
Invoking the Tests¶

TEST() and
TEST_F() implicitly register their tests with Google Test. So, unlike with many other C++ testing frameworks, you don't have to re-list all your defined tests in order to run them.
After defining your tests, you can run them with
RUN_ALL_TESTS() , which returns
0 if all the tests are successful, or
1 otherwise. Note that
RUN_ALL_TESTS() runs all tests in your link unit -- they can be from different test cases, or even different source files.
When invoked, the
RUN_ALL_TESTS() macro:. Repeats the above steps for the next test, until all tests have run. a compiler error. The rationale for this design is that the
automated testing service determines whether a test has passed based on its
exit code, not on its stdout/stderr output; thus your
main() function must
return the value of
RUN_ALL_TESTS().
Also, you should call
RUN_ALL_TESTS() only once. Calling it more than once
conflicts with some advanced Google Test features (e.g. thread-safe death
tests) and thus is not supported.
Availability: Linux, Windows, Mac.
Writing the main() Function¶
You can start from this boilerplate:
#include "this/package/foo.h"
#include "gtest/gtest.h"

namespace {

// The fixture for testing class Foo.
class FooTest : public ::testing::Test {
 protected:
  // You can remove any or all of the following functions if its body
  // is empty.

  FooTest() {
    // You can do set-up work for each test here.
  }

  // Objects declared here can be used by all tests in the test case for Foo.
};

// Tests that Foo does Xyz.
TEST_F(FooTest, DoesXyz) {
  // Exercises the Xyz feature of Foo.
}

}  // namespace

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
The
::testing::InitGoogleTest() function parses the command line for Google
Test flags, and removes all recognized flags. This allows the user to control a
test program's behavior via various flags, which we'll cover in AdvancedGuide.
You must call this function before calling
RUN_ALL_TESTS(), or the flags
won't be properly initialized.
On Windows,
InitGoogleTest() also works with wide strings, so it can be used
in programs compiled in
UNICODE mode as well.
But maybe you think that writing all those main() functions is too much work? We agree with you completely and that's why Google Test provides a basic implementation of main(). If it fits your needs, then just link your test with gtest_main library and you are good to go.
Important note for Visual C++ users¶

If you put your tests into a library and your main() function is in a different library or in your .exe file, those tests will not run. The reason is a bug in Visual C++. To work around it, declare a function in your test library:

__declspec(dllexport) int PullInMyLibrary() { return 0; }

If you put your tests in a static library (instead of a DLL), then __declspec(dllexport) is not required. Now, in your main program, write a code that invokes that function:

int PullInMyLibrary();
static int dummy = PullInMyLibrary();

This keeps your tests referenced, so they register themselves at start-up.
Where to Go from Here¶
Congratulations! You've learned the Google Test basics. You can start writing and running Google Test tests, read some samples, or continue with AdvancedGuide, which describes many more useful Google Test features.
Known Limitations¶
Google Test is designed to be thread-safe. The implementation is
thread-safe on systems where the
pthreads library is available. It
is currently unsafe to use Google Test assertions from two threads
concurrently on other systems (e.g. Windows). In most tests this is
not an issue as usually the assertions are done in the main thread. If
you want to help, you can volunteer to implement the necessary
synchronization primitives in
gtest-port.h for your platform. | https://microsoft.github.io/mu/dyn/mu_tiano_platforms/Common/MU_TIANO/CryptoPkg/Library/OpensslLib/openssl/boringssl/third_party/googletest/docs/V1_6_Primer/ | CC-MAIN-2022-27 | refinedweb | 2,097 | 65.01 |
This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
On Tue, Jun 22, 2010 at 18:12, Pedro Alves <pedro@codesourcery.com> wrote:
> Hi Hui,
>
> On Sunday 20 June 2010 08:28:40, Hui Zhu wrote:
>> On Fri, Jun 11, 2010 at 21:55, Pedro Alves <pedro@codesourcery.com> wrote:
>>> I'm feeling a bit dense, and I don't see what that is actually
>>> catching. If going backwards, the assertion always ends up
>>> evaluated as true, no matter if software single-steps are inserted
>>> or not, or whether `step' is set. Did you mean to assert
>>> that when going backwards, there shouldn't ever be software
>>> single-step breakpoints inserted?
>>>
>>> This patch is okay otherwise. Thanks.
>>
>> Thanks Pedro.
>> I was also confused by this issue. I thought it would never happen
>> too. But Ping said he got this issue, and I didn't have the RISC
>> board to test, so I gave up and left that part of the patch to him.
>>
>> So I think this patch is not urgent to check in until someone
>> posts a RISC prec support patch. At that time, I will make this issue
>> clear.
>
> I'd be fine with putting the patch in now, but without the change to
> that gdb_assert. It looked like a step in the right direction,
> and we can fix any remaining issues later.
>
> --
> Pedro Alves

Agreed. I delayed this patch some days because I wanted it checked in after 7.2. Now the following patch is checked in.

Thanks,
Hui

2010-07-19  Hui Zhu  <teawater@gmail.com>

	* breakpoint.c (single_step_breakpoints_inserted): New function.
	* breakpoint.h (single_step_breakpoints_inserted): Extern.
	* infrun.c (maybe_software_singlestep): Add check code.
	* record.c (record_resume): Add code for software single step.
	(record_wait): Ditto.
---
 breakpoint.c |   10 +++++++++
 breakpoint.h |    1 +
 infrun.c     |    3 +-
 record.c     |   64 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 72 insertions(+), 6 deletions(-)

--- a/breakpoint.c
+++ b/breakpoint.c
@@ -10468,6 +10468,16 @@ insert_single_step_breakpoint (struct gd
                           paddress (gdbarch, next_pc));
 }

+/* Check if the breakpoints used for software single stepping
+   were inserted or not.  */
+
+int
+single_step_breakpoints_inserted (void)
+{
+  return (single_step_breakpoints[0] != NULL
+          || single_step_breakpoints[1] != NULL);
+}
+
 /* Remove and delete any breakpoints used for software single step.  */

 void
--- a/breakpoint.h
+++ b/breakpoint.h
@@ -984,6 +984,7 @@ extern int remove_hw_watchpoints (void);
    twice before remove is called.  */
 extern void insert_single_step_breakpoint (struct gdbarch *,
                                            struct address_space *,
                                            CORE_ADDR);
+extern int single_step_breakpoints_inserted (void);
 extern void remove_single_step_breakpoints (void);

 /* Manage manual breakpoints, separate from the normal chain of
--- a/infrun.c
+++ b/infrun.c
@@ -1515,7 +1515,8 @@ maybe_software_singlestep (struct gdbarc
 {
   int hw_step = 1;

-  if (gdbarch_software_single_step_p (gdbarch)
+  if (execution_direction == EXEC_FORWARD
+      && gdbarch_software_single_step_p (gdbarch)
       && gdbarch_software_single_step (gdbarch, get_current_frame ()))
     {
       hw_step = 0;
--- a/record.c
+++ b/record.c
@@ -1011,9 +1011,43 @@ record_resume (struct target_ops *ops, p

   if (!RECORD_IS_REPLAY)
     {
+      struct gdbarch *gdbarch = target_thread_architecture (ptid);
+
       record_message (get_current_regcache (), signal);
-      record_beneath_to_resume (record_beneath_to_resume_ops, ptid, 1,
-                                signal);
+
+      if (!step)
+        {
+          /* This is not hard single step.  */
+          if (!gdbarch_software_single_step_p (gdbarch))
+            {
+              /* This is a normal continue.  */
+              step = 1;
+            }
+          else
+            {
+              /* This arch support soft sigle step.  */
+              if (single_step_breakpoints_inserted ())
+                {
+                  /* This is a soft single step.  */
+                  record_resume_step = 1;
+                }
+              else
+                {
+                  /* This is a continue.
+                     Try to insert a soft single step breakpoint.  */
+                  if (!gdbarch_software_single_step (gdbarch,
+                                                     get_current_frame ()))
+                    {
+                      /* This system don't want use soft single step.
+                         Use hard sigle step.  */
+                      step = 1;
+                    }
+                }
+            }
+        }
+
+      record_beneath_to_resume (record_beneath_to_resume_ops,
+                                ptid, step, signal);
     }
 }
@@ -1089,12 +1123,16 @@ record_wait (struct target_ops *ops,
       /* This is not a single step.  */
       ptid_t ret;
       CORE_ADDR tmp_pc;
+      struct gdbarch *gdbarch = target_thread_architecture (inferior_ptid);

       while (1)
         {
           ret = record_beneath_to_wait (record_beneath_to_wait_ops,
                                         ptid, status, options);

+          if (single_step_breakpoints_inserted ())
+            remove_single_step_breakpoints ();
+
           if (record_resume_step)
             return ret;
@@ -1134,8 +1172,12 @@ record_wait (struct target_ops *ops,
             }
           else
             {
-              /* This must be a single-step trap.  Record the
-                 insn and issue another step.  */
+              /* This is a single-step trap.  Record the
+                 insn and issue another step.
+                 FIXME: this part can be a random SIGTRAP too.
+                 But GDB cannot handle it.  */
+              int step = 1;
+
               if (!record_message_wrapper_safe (regcache,
                                                 TARGET_SIGNAL_0))
                 {
@@ -1144,8 +1186,20 @@
              ...
                   break;
                 }

+              if (gdbarch_software_single_step_p (gdbarch))
+                {
+                  /* Try to insert the software single step breakpoint.
+                     If insert success, set step to 0.  */
+                  set_executing (inferior_ptid, 0);
+                  reinit_frame_cache ();
+                  if (gdbarch_software_single_step (gdbarch,
+                                                    get_current_frame ()))
+                    step = 0;
+                  set_executing (inferior_ptid, 1);
+                }
+
               record_beneath_to_resume (record_beneath_to_resume_ops,
-                                        ptid, 1,
+                                        ptid, step,
                                         TARGET_SIGNAL_0);
               continue;
             }
Posted by: bmorrise on: February 24, 2009
You can find the code for this tutorial here: Classes_2.zip
One of the most important parts of Object Oriented Programming is that of creating classes. In the last tutorial on classes I went over how to create a simple class that represented a ball.
In this tutorial we will be talking about inheritance, using simple definitions for shapes.
Inheritance means that you create a base or abstract class that has characteristics that are common and then create child classes that have characteristics of the base class plus their own individual characteristics.
For example:
You could create an object called Car and give it the properties of having four wheels, an engine, a windshield and so on. Then you would create child classes that inherited from the Car class called Truck, Sedan, Compact and they would have their own properties.
For this code example I will use Shape as the base or abstract class and then extend it to make two child classes called Circle and Square. Here is the code:
This will be a file called Shape.as and would be in the same folder as the Circle.as and Square.as
package com.display
{
    public class Shape
    {
        public var color:String;

        public function Shape(color:String)
        {
            this.color = color;
        }
    }
}
Our shape class is pretty simple and only has one property: color. All of our shapes that we create will at least have a color. Let’s look at this line of code:
this.color = color;
this.color denotes that it's the variable associated with the class and not the one being passed into the function.
Now we’ll look at our two child classes:
Once again, this would be in a separate file called Circle.as. It’s important that you name your files the same as the class name or it won’t work.
package com.display
{
    public class Circle extends Shape
    {
        public var diameter:Number;

        public function Circle(color:String, diameter:Number):void
        {
            this.diameter = diameter;
            super(color);
        }
    }
}
This block is in a file called Square.as that is in the same folder as Shape.as and Circle.as
package com.display
{
    public class Square extends Shape
    {
        public var width:Number;
        public var height:Number;

        public function Square(color:String, width:Number, height:Number):void
        {
            this.width = width;
            this.height = height;
            super(color);
        }
    }
}
Each of these classes has its own properties and extends the Shape class. That means that although the Circle class has a diameter, it also has a color, because it's a child class of the Shape class.
Let’s look at the following line:
super(color);
Each of these classes has this line; it calls the constructor of the parent class, passing the color value into it.
That’s all for this tutorial. Please leave comments about questions you may have.
Posted by: bmorrise on: February 10, 2009
I’ve started a new website called flashminigames.net that will have a bunch of mini games that can be embedded on any website. I would be happy to share with anyone the source code for these games, just let me know which games you’d like to see. Currently the selection is pretty thin, but we’ll be adding more every week, so keep checking back.
Posted by: bmorrise on: February 7, 2009
Here is a very nice blog entry that I found for tracking your Flash with Google analytics:
One useful thing to do while programming is to have a timer that fires every so often to repeat a specific action, such as a picture fade. You create and use a timer like this:
//Imports the appropriate library files
import flash.utils.Timer;
import flash.events.TimerEvent;
//Create a timer that will fire every 1000 milliseconds or every second
var timer:Timer = new Timer(1000);
//Add a listener that gets called every second when the timer fires
timer.addEventListener(TimerEvent.TIMER, handleTimer);
//Start the timer
timer.start();
//The handler that runs every second
function handleTimer(event:TimerEvent):void
{
//Do whatever you want to do
trace("timer");
}
Posted by: bmorrise on: September 29, 2008
Just for fun on Saturday I built a 2D lego builder in Flash. It’s not polished and has many bugs, but it was fun to make. You can check it out here:. If you would like the source for it I would be glad to share. Just drop a comment and I’ll tell you the link.
Posted by: bmorrise on: September 25, 2008
In one week I will be launching a new tutorial site based on Flash and Flex development. I'm purchasing a domain and will host it on a server so that I can embed the movies directly onto the site. If anyone has any input on what they would like to see for the new site, it would be appreciated. I plan on having a whole new set of video tutorials and a large set of sample code.
Posted by: bmorrise on: June 3, 2008
ActionScript 3 is an object-oriented programming language. I'm going to explain what that means in a minute, but there is also a good definition at Wikipedia. So far there are two things that we've talked about that are used in objects: the first is variables and the second is functions.
Objects contain both of these things. To think of an object in programming I'm going to use a real-world example: a ball. Here is a list of the properties of a ball:
Color
Diameter
Weight
Texture
And here are some of the things that a ball might do:
Bounce
Pop
Inflate
So, this ball is our object. Now let’s convert this over to code:
//Ball.as
package
{
    public class Ball
    {
        public var color:String;
        public var diameter:Number;
        public var weight:Number;
        public var texture:String;

        public function Ball() {
        }

        public function bounce()
        {
            //Bounce code
        }

        public function pop()
        {
            //Pop code
        }

        public function inflate()
        {
            //Inflate code
        }
    }
}
Okay, so now that we have created our Ball class, what do we do with it? From our class comes an object, like so:
var myBall:Ball = new Ball();
myBall.color = "red";
myBall.diameter = 10;
myBall.inflate();
trace(myBall.color);
Those are some examples of creating a new ball, setting some of its properties, and calling some of its methods. A method is what we call a function that is part of an object.
Okay, let’s look at the code line by line:
First we have the package block, which tells us where the class file is located. The Ball.as file should be in the same directory, so we just use package. If we had created another directory called classes where the BallTutorial.fla file is located, then we would have used this:
package classes
You can even nest files further. We could have created the following directory structure:
\classes\other\something\Ball.as
In this case we would use:
package classes.other.something
The reason we do this is because we could have two classes with exactly the same name, but put them in different packages. Remember, the package has to reflect the file location or it won’t work.
Next we have the public class Ball block. This declares our class as being public. I'll explain what this means in Part 2. The class is called Ball. It's important to note that the file name needs to be the same as the class name, which is why we called this file Ball.as.
The next few lines are where we define the properties of the class, which are variables. They are defined in a similar fashion as we've defined variables before, but this time we use the public keyword.
Next we have the method definitions. The methods are just function declarations that we have done before, but we’ve also declared these as being public.
One thing to note is that we have a function called Ball. This is called the constructor. The constructor is a function that is automatically called when we instantiate the object. Instantiating the object is just the act of declaring a new Ball variable. You can see how this works by putting a trace statement in the constructor before you test the code. You’ll see that the trace statement is executed automatically without having to call the Ball function. The constructor is case sensitive, so make sure you get it right.
And that’s our class definition. As you can see in the second block we access the properties and methods with the dot operator: myBall.color or myBall.inflate() are examples.
In order to get to work you’ll want to create a new Flash file and call it BallTutorial.fla. Then you’ll want to paste the second block of code into the first action pane. Then create a new file called Ball.as and make sure you save it in the same directory as the BallTutorial.fla file. When that is done you should be able to run the BallTutorial.fla file and see the trace output.
The source code for this tutorial can be found here: BallTutorial.zip
Posted by: bmorrise on: June 2, 2008
Event listeners are an important part of Flash. What they allow you to do is to “listen” for an event that occurs on a specific object. For example, one event that often occurs in Flash is a button getting clicked. When a button is clicked the button object dispatches an event telling flash that is was clicked. Often times we want to respond to a click and function. Say we have a window that pops up and on the window there is a cancel button. We want the window to close when the cancel button is clicked. Here is the code:
cancelButton.addEventListener(MouseEvent.CLICK, handleClick);
function handleClick(event:MouseEvent):void
{
//Write the code here that closes the window
}
In this example I have added a button to the window and given it the instance name cancelButton. This allows me to listen to that button and respond to the events.
Posted by: bmorrise on: May 27, 2008
This week I am going to post some new tutorials. I’m going to create the follow-up post for the video player tutorial and a new drawing tutorial.
Recent Comments | http://flashdevelopment.wordpress.com/ | crawl-002 | refinedweb | 1,757 | 73.47 |
Hello all.
I'm trying to read a line of data in from a csv file and then assign the fields to it to an int w, string y, and double z **See code below**
However, I have two problems:
1. The first is in reading in the all contents of the second field and assigning it to a string y WITH white-space included. If I compile this code it will only output the first word in the second field even with the noskipws **line 36**
2. The second is reading in the complete number with both decimal places (only the 12 shows) and assigning it to a double z **lin 42**
For simplicity sake the contents of file "clients.txt" read are:
503,long meadow,12.29
But the output to screen is:
503 long 12
#include <cstdlib> #include <fstream> #include <ios> #include <iomanip> #include <iostream> #include <sstream> #include <string> #include <sstream> using namespace std; int main() { int w; string word; string y; double z; ifstream infile("clients.txt"); while (getline( infile, word )) { if (word.empty()) continue; istringstream ss( word ); { string val; getline( ss, val, ',' ); stringstream( val ) >> w; cout<<" "<<w<<" "; } { string val; getline(ss,val, ',' ); stringstream( val )>>noskipws>> y; cout<<y<<" "; } { string val; getline( ss, val, ',' ); stringstream( val ) >> z; cout<<z<<endl; } } system("PAUSE"); return 0; } | https://www.daniweb.com/programming/software-development/threads/133306/stingstream-and-assigning-fields-of-csv-files-to-variables | CC-MAIN-2017-17 | refinedweb | 218 | 71.07 |
Feature #12093
Updated by nobu (Nobuyoshi Nakada) about 4 years ago
- Project changed from CommonRuby to Ruby master
Updated by nobu (Nobuyoshi Nakada) about 4 years ago
Depending on the context, an identifier may be a local variable or a method call.
I think that
RubyVM::InstructionSequence#compile would need the binding, instead of
#eval.
Updated by shyouhei (Shyouhei Urabe) about 4 years ago
"ISeq#compile's need of binding" means a template engine cannot cache compiled ISeqs for later invocation, right? I doubt the benfit of compile's taking bindings.
Updated by nobu (Nobuyoshi Nakada) about 4 years ago
Do you mean same template with different contexts, a name is a variable one time, but a method call next time?
I doubt that it is a common use case.
Updated by nobu (Nobuyoshi Nakada) over 3 years ago
Updated by dalehamel (Dale Hamel) 8 months ago
Howdy,
Sorry to ping a 3 year old issue, i Just wanted to add my 2 cents here.
I came across this issue when googling for a way to evaluate an instruction sequence with a particular binding. I'm working on an experimental gem that would inject "breakpoints" in arbitrary lines in ruby methods, with the idea that eBPF / bpftrace can be used to read values from these overridden methods.
Right now i'm using a block to 'handle' the original source code in its original binding, but i have to use ruby's 'eval' method to do this.
I'd ideally like to precompile the original source code sequence, and evaluate this with the original binding.
Updated by dalehamel (Dale Hamel) 8 months ago
- File 0001-RubyVM-InstructionSequence-eval_with.patch 0001-RubyVM-InstructionSequence-eval_with.patch added
- File 0002-Update-iseq.eval-to-accept-optional-binding-FIXES-Bu.patch 0002-Update-iseq.eval-to-accept-optional-binding-FIXES-Bu.patch added
Here's the current draft of the patch set, which I intend to submit a github pull request for as well.
I've retained Nobu's patch, and built on it.
Updated by shevegen (Robert A. Heiler) 8 months ago
Nobu recently added it for the next developer meeting (in August; see) so stay tuned. :)
Updated by dalehamel (Dale Hamel) 8 months ago
Awesome I just saw that - thanks for the update!
The latest patch is now at and so that's where the review should go.
I'll stay-tuned and watch for updates from that meeting, thanks Robert!
Updated by ko1 (Koichi Sasada) 8 months ago
What the last line should output?
def a; :m_a end def b; :m_b end def bind a = :l_a b = :l_b binding end eval('p [a, b]', bind()) #=> [:l_a, :l_b] RubyVM::InstructionSequence.compile("p [a, b]").eval #=> [:m_a, :m_b] RubyVM::InstructionSequence.compile("p [a, b]").eval(bind()) #=> ???
I believe we shouldn't introduce
binding option to
ISeq#eval.
Updated by dalehamel (Dale Hamel) 8 months ago
Yes when I test out Koichi's sample, the iseq look like:
disasm: #<ISeq:<compiled>@<compiled>:1 (1,0)-(1,6)> (catch: FALSE) 0000 putself ( 1)[Li] 0001 opt_send_without_block <callinfo!mid:a, argc:0, FCALL|VCALL|ARGS_SIMPLE>, <callcache> 0004 putself 0005 opt_send_without_block <callinfo!mid:b, argc:0, FCALL|VCALL|ARGS_SIMPLE>, <callcache> 0008 newarray 2 0010 leave
So there is no way for the local variables from the binding to be evaluated, as the original instruction sequence expects a method call. I hadn't realized that when compiling the iseq string, methods calls are found in this way.
This indicates that yeah, Nobu's comment above appears correct, you must have the binding available when the iseq is compiled. It appears to do so implicitly based on the current binding.
It looks works with the struct example because the values for
a and
b have method calls that can receive and respond instead of these local variables, avoiding the problem. This seems inconsistent with
Kernel#eval and
binding#eval, which is counterproductive.
Updated by ko1 (Koichi Sasada) 7 months ago
- Status changed from Open to Rejected
Ok. I reject this ticket, and pls remake your proposal if you find a good way.
Updated by dalehamel (Dale Hamel) 7 months ago
Understood, I’ve closed the pull request.
Updated by nobu (Nobuyoshi Nakada) 7 months ago
Indeed
eval with an arbitrary
Binding doesn't make a sense.
How about
eval on a given object?
Currently, iseqs eval always on the top-level object without any argument, and I've needed code like:
RubyVM::InstructionSequence.compile("proc {...}").eval.call(obj).call(*args)
I think it should be simpler.
RubyVM::InstructionSequence.compile("proc {...}").bind(obj).call(*args) # or RubyVM::InstructionSequence.compile("proc {...}", receiver: obj).call(*args)
dalehamel (Dale Hamel), does this suffice your use case?
Updated by dalehamel (Dale Hamel) 7 months ago
does this suffice your use case?
Interesting, I'll need to investigate this - it certainly has potential.
My use case is for experimental tracing work, and I basically want to be
able to pre-compile original source with added instructions, and execute them
within the context they were originally intended to be executed within.
This is why I had a use for being able to execute within arbitrary bindings, but
if I can target right receiver / bind to the right object, this could work.
I will try a prototype be seeing which receiving I am presently binding to, and
look into modifying the patch to support the prototype you suggest to see if
it can fit my use case by passing this receiver rather than the binding.
Thank you for response and feedback Nobu.
Also available in: Atom PDF | https://bugs.ruby-lang.org/issues/12093 | CC-MAIN-2020-16 | refinedweb | 929 | 63.09 |
Start() vs Run()
There is a small detail that often goes by unnoticed (well it did for me!) that does make a huge difference when you want to do some multi-threading work.
As you know (if you don’t check this post), you can create a Runnable which contains a run() method in which, in turn, is where you put your work (a.k.a. code) that you want to be executed on a different thread.
Now you would pack this in a Thread object and which method you are going to call?
The misconception
Thread does contain both the run() and the start() methods. One can think that they are interchangeable since they are both on the Thread class.
That is where the problem lies! I have made this mistake when I was first learning multi-threading and I kept doing because no one pointed it out for me… until they did and I got embarrassed for not knowing this. 😛
Know your stuff
Unfortunately (or fortunately?) these functions are NOT the same. They have a very important difference that may make your project behave completely different of what you’ve expected:
run() : using run() will make the code that you’ve passed on the Runnable to run on the same thread as the one you’ve used to make the method call.
start() : using start() will indeed make the code contained inside the Runnable to be executed on a different thread.
This is of course a huge different since your program would be actually running sequentially on the run() case!!!
Example
Here is a small code so you can test it out for yourself the truth of what I stated above 😉
public class ThreadingTest { public static void main(String[] args){ Runnable r1 = new MyRunnable(); new Thread(r1).start(); // new Thread(r1).run(); try { Thread.sleep(20); } catch (InterruptedException e) { e.printStackTrace(); } System.out.println("*****************Finished!**********************"); } private static class MyRunnable implements Runnable{ @Override public void run() { for (int i = 0; i < 1000; i++) { System.out.println(String.format("%03d - My runnable runs a riot!", i)); } } } }
The code is pretty simple: it creates a Runnable which contains inside its run() method, nothing more than a loop printing a sentence to the console.
This Runnable will get wrapped around a thread, with which we call either run or start. The main thread will proceed to wait just 20 milliseconds and will print a stary “Finished”.
Try running this code a couple of times with the start() method call and a couple times with the run() method call. You will see that EVERY TIME you use the run method, the loop will print its 1000 times and AFTER that the Finished will appear. Now on the other hand if you use start, the finished message will be printed somewhere around the loop, showing in this case that the codes are being executed in parallel.
Conclusion
The design of the Thread class allow you to choose at runtime what you’d like to do, and that can be positively leverage. It can also lead to confusion so that why “Knowledge is power” | http://fdiez.org/run-vs-start/ | CC-MAIN-2019-09 | refinedweb | 519 | 78.99 |
New Features in EJB 3.1 - Part 2.
This Article Covers
EJB specification article of this series, I covered two of the earliest discussed features – optional interfaces for Session beans and Singleton beans. I also provided an overview of the rest of the features being discussed..
Thank You!
In the first article, I urged you to provide feedback directly to the JCP at jsr-318-comments@jcp.org as well as CCing me at rrahman@tripodtech.net. Before going farther, I would like to thank everyone who took the time to send in comments! I hope you will continue to send in your thoughts as I write more articles in this series. I am also very grateful for all of the encouraging comments on the series itself.
It's Time for Timer Service Features
Scheduling is an important part of many applications for tasks such as report generation, database maintenance, generating OLAP summaries or data synchronization. If you have used the EJB Timer Service in its current form, you know that it is useful, but pretty limited. The biggest limitations of the current EJB Timer Service are that it is not all that flexible and scheduled jobs can only be created programmatically, not declaratively. Some of these weaknesses were outlined by Richard Monson-Haefel in the EJB 2.x time-frame. This TheServerSide article outlines Richard's views:.
Let's take a super-quick look at the Timer Service as supported in EJB 3.0. Here is an example from EJB 3 in Action:
@Stateless public class PlaceBidBean implements PlaceBid { @Resource TimerService timerService; public void addBid(Bid bid) { ... Code to add the bid goes here... timerService.createTimer(15*60*1000, 15*60*1000, bid); } @Timeout public void monitorBid(Timer timer) { Bid bid = (Bid) timer.getInfo(); ... Code to monitor the bid goes here... } }
The Stateless Session bean above creates a timer that is triggered every fifteen minutes, starting with a fifteen minute delay when a bid is created in the addBid method. Every time the trigger fires, the monitorBid method annotated with the @Timeout annotation is invoked to see if the bidder was outbid.
While this functionality is fine for what PlaceBidBean does, imagine a slightly different scenario–a beginning-of-the-month newsletter mailing for all ActionBazaar customers. Implementing this in terms of millisecond intervals through the current programmatic TimerService interface would be a hazard at best. You'll also have to write some pretty awkward code so that the timer is created when the application starts up. There are several existing mechanisms in place today to achieve this kind of flexible declarative schedules in Java EE. You can use a popular Open Source scheduler like Quartz, you can use a commercial tool like Flux or you can use scheduling services specific to your application server such as the ones available for WebLogic or Sybase EAServer. The problem with these solutions is that they tend to be pretty cumbersome if all you really need is a declarative equivalent of UNIX cron in Java EE. All these solutions are also vendor-specific. Enter the Timer Service enhancements in EJB 3.1.
The most important one in this set of enhancements is the ability to declaratively create cron-like schedules to trigger EJB methods (there are more advanced features; feel free to check them out when the spec draft comes out). For example, all you would have to do is annotate an EJB method with the @Schedule annotation to implement the beginning-of-the-month ActionBazaar newsletter like so:
@Stateless public class NewsLetterGeneratorBean implements NewsLetterGenerator { @Schedule(second="0", minute="0", hour="0", dayOfMonth="1", month="*", year="*") public void generateMonthlyNewsLetter() { ... Code to generate the monthly news letter goes here... } }
The following table describes the attributes of the @Schedule annotation as well as default values:
Note any of the attributes support the cron-style "*" wildcard to represent all values, a comma separated list (such as "Jan, Feb, Mar" for the month attribute) or a dash-separated range (such as "Mon-Fri" for the day of week attribute). Should the expression syntax support the "/" operator as well? How about supporting fully expanded abbreviations such as "January" instead of "Jan"? Also, should a compact, full cron-expression format be supported as well? Our little example could be expressed as:
@Schedule(expression="0 0 0 1 * * *")
Some folks argue that this "pure cron style expression" is way too cryptic, while others point out that a lot of developers are so used to it that it should be supported in EJB. New methods were added to the TimerService interface to support the programmatic version of cron-like scheduling. The programmatic version supports defining the activation and deactivation dates for a given schedule. For example, our newsletter could become active at a predetermined time in the future instead of being active as soon as the timer is created. Should similar support be added to the @Schedule annotation? How about supporting defining a finite number of occurrences a cron-based trigger will fire? Can you think of any other features?
Stripped Down EJB Packaging
Making XML deployment descriptors optional in EJB 3.0 has significantly simplified packaging and deployment of Java EE applications. However, Java EE packaging is still clearly oriented towards strict modularization. Namely, you must create separate jar files for web and EJB modules. In a typical Java EE deployment scenario, an EAR file will contain a war archive and a separate EJB jar. Figure 1 depicts the current Java EE packaging scheme. Roughly, the idea is that the EJB jar represents "modularized" business services that are consumed by the "client" web module. While modularization is very important, the problem is that it is overkill for simple web applications where business services are unlikely to be shared across clients in multiple other Java EE modules.
Figure 1: Current Java EE packaging.
Simplified EJB packaging for web applications is aimed at addressing this issue. In the new scheme, there is no need to create a separate EJB jar module. Rather, EJBs (especially in the form of annotated POJOs) can be directly dropped into the WEB-INF/classes directory and deployed as part of the WAR. In a similar vein, the ejb-jar.xml deployment descriptor, if you happen to be using one, can be placed into the WEB-INF directory along with the web.xml file. It may also be possible to place an EJB jar into the WEB-INF/lib directory (do you think this is important?). The new packaging scheme is depicted in Figure 2.
Figure 2: Simplified EJB packaging for web applications.
For me, a very interesting implication of this is that the simplified packaging scheme makes EJBs much more agnostic of the rigidly defined structure of Java EE EAR files. There is another really nice side-effect for those of us that still live in the land of occasional XML configuration and JNDI look-ups instead of 100% annotations and DI. All EJB references, resource references or environment entries defined anywhere in the WAR can now be shared. This is because the entire WAR file has only one local component environment (bound to the JNDI java:comp/env namespace). Let's say you define a data source reference in the web.xml file like so:
<resource-ref> <description>My data source</description> <res-ref-name>jdbc/mydb</res-ref-name> <res-type>javax.sql.DataSource</res-type> <res-auth>Container</res-auth> </resource-ref>
You can now do a lookup like the following not only in web container components like Servlets but also inside EJBs packaged inside the WAR:
// Looking up my data source. DataSource ds = (DataSource) envCtx.lookup("java:comp/env/jdbc/mydb");
What are your thoughts on this? There is one more reason I really like the simplified packaging enhancement—it goes hand-in-hand with EJB Lite. EJB Lite is a very minimal sub-set of EJB features designed for use in stripped-down applications. I'll talk more about EJB Lite in a later article in the series. One interesting possibility is that many vendors will likely start implementing EJB Lite on top of Servlet containers like Tomcat or Jetty, with EJBs directly deployable to WAR files, completely by-passing Java EE EARs. I see JBoss AS Express, GlassFish Express or Tomcat+OpenEJB as possibilities that are difficult to ignore, especially given Java EE 6 Profiles. What do you think of these possibilities?
More to Come
Believe it or not, the features discussed in the first and second parts of this series are still just the tip of the iceberg. Here are some of the more interesting ones I'll cover in this series:
- EJB support in minimal containers in the form of EJB Lite. This would be similar to what is already available in the Open Source world in the form of Embedded JBoss, OpenEJB and EasyBeans plugging in on top of Tomcat.
- Support for asynchronous Session Bean invocation.
- Support for stateful web services via Stateful Session Bean web service endpoints.
- The standardization of JNDI mapping instead of keeping it for vendors to decide is being preliminarily discussed.
Something else I am intending to discuss in the series is using EJB through Web Beans. As you might already know, Web Beans is a very powerful integration framework that makes it possible to use some very interesting DI features with EJB, an area Java EE has been criticized by a number of folks. The Web Beans specification is being led by Gavin King and incorporates ideas from JBoss Seam and Google Guice ("Crazy" Bob Lee of Guice fame is working on WebBeans too). Until then, wish the expert group luck and keep the feedback rolling!
References
- JSR 318: Enterprise JavaBeans 3.1,.
- JSR 299: Web Beans,.
As per Linux conventions, either 0 or 7 can be used to represent Sunday. | http://www.theserverside.com/news/1363655/New-Features-in-EJB-31-Part-2 | CC-MAIN-2016-40 | refinedweb | 1,639 | 53.71 |
“The needs of digital content design, not to mention physics and economics, are coming into conflict with current OS architectures. A new definition, the Media OS, can unlock the door to more powerful media-based personal systems, and extract more performance from the systems we are using today.”
This is the summary of a very interesting technical white paper, The Media OS, which points out a harsh reality of today's computer systems: Even though the hardware necessary to create a blazingly fast media workstation is available now (and at affordable prices), legacy operating systems won't let go of the leash. Unshackling the hardware is exactly what we have been trying to address in designing the BeOS to smoothly handle multiple heavy-duty tasks at the same time.
Be's commitment to the media world has been greatly strengthened in the last few months, thanks to the work of our new media team. On the graphic server side, adding support for 15 and 16 bits per pixel color depth was the first step toward integrating the new media framework. The creation of BDirectWindow, the topic of today's lecture, is a second step.
Let's imagine you have a fast and cheap piece of video acquisition hardware—say, the Bt848 based card ($90.00)—and a fast, moderately-priced Matrox AGP graphics accelerator ($200-250). You want the AGP to display a video stream sent in realtime by the Bt848. How are you going to do that? On a legacy OS, the standard answer is to DMA every frame to an off-screen buffer in main memory and then shuffle each buffer to the screen using the graphics system. In no time you've earned yourself a hefty overhead and some synchronization problems.
So the geek in the corner raises his hand and asks: "Why not DMA directly into the frame buffer?"
Good question.
It's possible—but you'll have to switch to exclusive full screen mode. No more windows, and, in many cases, no more graphic system support (who needs that ;-). What you gain in improved bandwidth, you lose in extensibility, scaleability, and general usefulness.
Then our friend speaks again: "So why not access the frame buffer directly, but stay synchronized with the windowing system at the same time?"
And the rest of the class laughs.
Back in the real world, there are a few architectures that attempt to juggle direct buffer access with a general windowing system, but they're usually limited to "temporary exclusive access." This means you can lock the entire screen, query the state of a window, do whatever direct access you need, and then unlock the screen.
Unfortunately, these implementations perform poorly, they're very heavy to use, and they don't respect any reasonable scheduling expectations (it's impossible to avoid dropping frames if you do anything else at the same time). Clearly not what you would expect from a multimedia workstation.
Now it's time to look at BDirectWindow. Simply put, BDirectWindow gives you exactly what we've been describing -- except it should really work. BDirectWindow is derived from BWindow and, unlike its bastard cousin BWindowScreen, it quacks just like a BWindow.
Every function implemented by BWindow is supported by BDirectWindow; even the constructors use the same parameters. BDirectWindow is so similar to BWindow that you can replace all your BWindow objects with BDirectWindows, and your app would look and perform exactly as it did before.

In addition to supporting the BWindow API, BDirectWindow defines five new functions, the most interesting of which is the hook function DirectConnected():
    virtual void DirectConnected(direct_buffer_info *info);
DirectConnected() communicates directly with the window manager in the app server. It gives you a full description of the region of the graphics frame buffer that you're allowed to access directly (i.e. the visible part of the content area of your window) and is called whenever the state of the region changes, such as when your window is obscured by some other window. Your job is to do whatever you need to do and then get out -- the window manager is waiting for you!

(Typically, "what you need to do" means changing the graphic context that's used by the thread that you created to control the animation or streaming—or whatever you're doing.)

The argument to DirectConnected() is a pointer to a direct_buffer_info struct:
    typedef struct {
        direct_buffer_state  buffer_state;
        direct_driver_state  driver_state;
        void                *bits;
        void                *pci_bits;
        int32                bytes_per_row;
        uint32               bits_per_pixel;
        color_space          pixel_format;
        ...
        uint32               clip_list_count;
        clipping_rect        window_bounds;
        clipping_rect        clip_bounds;
        clipping_rect        clip_list[1];
    } direct_buffer_info;
buffer_state, when masked with B_DIRECT_MODE_MASK, tells you the state of frame buffer access:
    switch (info->buffer_state & B_DIRECT_MODE_MASK) {
        case B_DIRECT_START:
            /* Access is initiated. */
            ...
        case B_DIRECT_STOP:
            /* Access has ended. */
            ...
        case B_DIRECT_MODIFY:
            /* The buffer has changed. */
            ...
    }
You always get a B_DIRECT_START when the window is opened, followed by any number of B_DIRECT_MODIFY states, and then a B_DIRECT_STOP when the window is dragged or closed, or if the screen resolution changes. (A "window dragged" or "resolution changed" stop command would be followed immediately by another start.)
Other buffer_state masks tell you if the visible portion of the buffer has changed (B_CLIPPING_MODIFIED), if the window has been resized (B_BUFFER_RESIZED) or has moved (B_BUFFER_MOVED), or if the frame buffer has been reset (B_BUFFER_RESET).
driver_state indicates the state of the graphics card. B_MODE_CHANGED means that the resolution/depth of the screen changed since the last time DirectConnected() was called. B_DRIVER_CHANGED means that your window was moved onto another monitor.
bits is a pointer to the frame buffer data (in your app's memory space). pci_bits is the address of the frame buffer in the PCI memory space. You need this if you're doing DMA.
bytes_per_row, bits_per_pixel, and pixel_format describe the size and format of the frame buffer pixel data.
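Together, these three fields are all you need to locate any pixel: the address of pixel (x, y) is the buffer base, plus y rows of bytes_per_row bytes, plus x pixels' worth of bytes. A small, self-contained sketch of that arithmetic (it assumes a whole number of bytes per pixel, which holds for the common 8-, 16-, and 32-bit modes; the function name is ours, not part of the Be API):

```cpp
#include <cstdint>

// Compute the address of pixel (x, y) in a frame buffer. Assumes
// bits_per_pixel is a multiple of 8; packed sub-byte formats would
// need different math.
uint8_t *pixel_address(void *bits, int32_t bytes_per_row,
                       uint32_t bits_per_pixel, int32_t x, int32_t y)
{
    uint32_t bytes_per_pixel = bits_per_pixel / 8;
    return static_cast<uint8_t *>(bits)
           + y * bytes_per_row
           + x * bytes_per_pixel;
}
```

Note that bytes_per_row is not necessarily width * bytes_per_pixel; cards often pad rows, which is exactly why the field exists.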
window_bounds and clip_bounds are bounds rectangles that enclose the entire window and the visible portion of the window. clip_list_count and clip_list are the number of rectangles that make up the visible portions of the window, and the rectangles themselves.
RESPECTING THE CLIPPING AREA IS ABSOLUTELY MANDATORY. It won't be enforced for you by the system (it can't, as you get direct access to the frame buffer). This means your clipping code has to be as good as the system clipping code. This is probably the biggest drawback of BDirectWindow.
The direct_buffer_info structure itself is obsolete as soon as DirectConnected() exits, but the information it contains remains valid as defined by the protocol.
This brief excerpt shows a simplified BDirectWindow constructor, destructor, and DirectConnected() implementation. The object is designed to use DMA:
    MyDirectWindow::MyDirectWindow(...)
        : BDirectWindow(...)
    {
        open_dma_control();
        connected = false;
        connection_disabled = false;
        my_locker = new BLocker();
    }

    MyDirectWindow::~MyDirectWindow()
    {
        connection_disabled = true;
        Hide();
        Sync();
        delete my_locker;
        close_dma_control();
    }

    void MyDirectWindow::DirectConnected(direct_buffer_info *info)
    {
        if (!connected && connection_disabled)
            return;
        my_locker->Lock();
        switch (info->buffer_state & B_DIRECT_MODE_MASK) {
            case B_DIRECT_START:
                connected = true;
                init_direct_screen_access_context();
                start_dma();
                break;
            case B_DIRECT_STOP:
                stop_dma();
                connected = false;
                break;
            case B_DIRECT_MODIFY:
                update_direct_screen_access_context();
                modify_dma();
                break;
        }
        my_locker->Unlock();
    }
Note the Hide() and Sync() calls in the destructor. This sequence, and the "connection" flags that predicate the DirectConnected() call, ensure that the BDirectWindow will stop direct access after the destruction has started.
Many more examples using BDirectWindow should be available in the following weeks.
BDirectWindow is a dual object, with two completely independent parts (a regular BWindow and a direct screen access context). The parts should intermingle as little as possible. This split is necessary for two reasons:
Reason #1: The direct window context lives in the present; BWindow lags behind. The graphics state information you get from DirectConnected() is guaranteed to be valid (within the limits defined by the protocol) because the function is synchronized with the app server. BWindow, on the other hand, is detached from the server by a mostly asynchronous protocol.
Reason #2: The two parts use entirely different protocols for communicating with the app server. If you mix the protocols—in other words, if you make normal BWindow calls from within DirectConnected()—you'll deadlock. If that happens, the app_server will look for any teams that aren't responding to DirectConnected() calls within a reasonable amount of time (a couple seconds), and kill them. (It's not pretty, but at least you won't bring the whole graphics system down.)
Unfortunately, you have to use BWindow (or BView) calls to get event messages, so you can't shut out the BWindow world altogether. Just be extremely careful to never use a "normal" BWindow call in a portion of code that can block DirectConnected(). Note that it's possible for the Interface Kit and the Game Kit contexts to share the content area of a window, but, again, you have to be very careful. We'll post some sample code to the web site to demonstrate how to do it.
BDirectWindow is certainly a very powerful API, but at the same time it's a tricky one. When you use the direct frame buffer access capability of BDirectWindow, you assume responsibility for two non-trivial operations. You have to...

Perform all drawing operations yourself (no drawing functions are available to help you), and...

Respect the clipping region.
The first task usually means writing more drawing code (including handling different frame buffer formats, which means recognizing different pixel depths and endiannesses). The second task just makes the first one much more complex, since you need to do both drawing and clipping in one pass. This compounded complexity should weed out "casual" development. (In the future, we'll work on a low-level API that combines software and the hardware's accelerated functions to help you perform these tasks, but for this release you're on your own.)
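The core of the clipping task is intersecting whatever you want to draw against each rectangle in the clip list and touching only the pixels inside the intersections. A self-contained sketch for an 8-bit-per-pixel buffer (the types and names are illustrative; real code would also switch on pixel_format and handle the other depths):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct clipping_rect {                 // stand-in for the Be struct
    int32_t left, top, right, bottom;  // edges are inclusive
};

// Fill `rect` with `value`, but only where it overlaps the visible
// clip rectangles. `bits` is the frame buffer base; 8 bpp assumed.
void clipped_fill(uint8_t *bits, int32_t bytes_per_row,
                  const std::vector<clipping_rect> &clip_list,
                  clipping_rect rect, uint8_t value)
{
    for (const clipping_rect &c : clip_list) {
        // Intersect the requested rect with this clip rect.
        int32_t l = std::max(rect.left,   c.left);
        int32_t t = std::max(rect.top,    c.top);
        int32_t r = std::min(rect.right,  c.right);
        int32_t b = std::min(rect.bottom, c.bottom);
        if (l > r || t > b)
            continue;                  // no overlap with this rect
        for (int32_t y = t; y <= b; y++) {
            uint8_t *row = bits + y * bytes_per_row;
            for (int32_t x = l; x <= r; x++)
                row[x] = value;
        }
    }
}
```

Doing the clip test per rectangle like this, rather than per pixel, is also what keeps the one-pass draw-and-clip requirement from killing performance.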
Furthermore,
BDirectWindows can be difficult to debug since a deadlock or
crash in the
DirectConnected() function will force the app_server to kill
your team...which doesn't leave you much to debug. But, by design, only a
small portion of code should be executed in
DirectConnected()—it
should only change the drawing context, it shouldn't do the drawing
itself—so deadlocks and crashes should be rare.
We recommend
BDirectWindows for use in these four scenarios (only):
BDirectWindow is ideal if you're DMAing a stream of graphic frames
(like video) directly from another PCI card. It will be very fast,
will use barely any CPU cycles, will use your PCI bus more efficiently
and will get you a much smoother streaming.
BDirectWindow is also the way to go if you want to smoothly animate
a small number of pixels inside a big area of the screen.
If you need to guarantee that your animation (in general) is as
smooth as possible,
BDirectWindow can give you *some* advantage. A
thread that's performing
BDirectWindow-based animation is limited only
by scheduling issues and the frame buffer's bus bandwidth limitations,
whereas threads going through the app server are limited by the
client/server protocol and the server synchronization mechanism.
Nevertheless, in common cases this difference isn't perceptible, so
only people who really know what they're doing (and what they want)
should try to use BDirectWindow in this case (crazy geeks for example
:-).
The last scenario is a more specific application: If you're creating an engine that processes an input stream to generate a big graphic output stream (typically video), then you can benefit by sending the output directly to the frame buffer instead of through an off-screen buffer. The idea is that you'll avoid a useless pass through the main memory system, and, in some cases, you'll also reduce the bandwidth going through your L2 cache since you're reducing the exchange between your L2 and your main memory system. This sort of bandwidth reduction can significantly improve the overall performance of the system. Also, since you have to do both processing and clipping at the same time, this is a very nice way to implement lazy processing.
In all other cases, we strongly recommend that you either...
Use the standard
BView drawing function to draw directly on screen,
or...
Draw into a (main memory) off-screen buffer (i.e. a
BBitmap) first,
and then blit the bitmap through
BView's
DrawBitmap() function.
An aside: One thing we *strongly* discourage is the reimplementation of
DrawBitmap(). You may think you've found some great trick that makes
DrawBitmap() faster, or that extends its features, but by cutting
yourself loose from Be's drawing mechanism you lose any future
improvements that we incorporate into the system (due to graphic driver
architecture changes, for example).
In conclusion, we would like you to imagine what will happen when you use
BDirectWindow in your applications. You will see ordinary apps that are
streaming live video, doing smooth pixel animation, doing real time lazy
video buffering, and certainly a lot of other cool effects (I'm scared
just thinking what our geeks are going to do with this :-). You will move
direct windows, resize them, superimpose them, switch resolutions on the
fly. For example, we tried 3 video windows streaming 30 fps, two 320x240,
one 640x480, 16 or 32 bpp, or 8 software animations in 400x300.
And the great thing is that it all looks like a "normal" graphics system;
The user won't notice that
DirectConnected() functions are being sent in
parallel, or that multiple threads are generating new DMA commands as the
window sizes change, or that interrupts are reprogramming DMA controllers
on the fly for every change...
And that's when innovation is at its best, when it gives us more of what we want, but with a transparency that "hides" the machinery.
This week brings a bevy of topics from the BeDC and BeDevTalk.
There was quite a furor on BeDevTalk over this topic following my last Newsletter article previewing Release 3. Many of these concerns and questions reappeared during various sessions at last week's BeDC. I figured I'd take the opportunity to clarify our position regarding importing and exporting of symbols, and to go into detail about the issue overall.
First off, we are specifically discussing the exporting of symbols from libraries (including add-ons), and the importing of those symbols into applications. There are three different methods that work for both PPC and x86, and each has its strengths and weaknesses. Before detailing these, however, we need to review some general issues that affect all of them.
The main caveat is that on x86 whether a symbol is exported or imported is determined at compile time, not link time. Furthermore, the very first time a symbol is found determines whether it is imported or exported. This is very important. If you want to import or export a symbol it's important to specify it at the very beginning of your code. Any other import or export specification is ignored.
Another caveat is that the import and export of symbols is compiler- and linker-specific. A method that is supported in one compiler is not guaranteed to be supported by another. For now, while only the Metrowerks compiler and linker are available for the platform, this is not an issue. But in the future, as more compiler vendors choose to support the BeOS, this will become significant. Before questions about additional compiler support start rolling in, we are actively pursuing the matter, but Metrowerks will be our provider for the foreseeable future.
Importing symbols is of primary importance when linking against a library. When you are loading an add-on you are explicitly finding the symbols yourself, so they only need to be exported. Even when you are dealing with a library, importing symbols from that library isn't mandatory. On PPC, there is no need to declare any global symbols as importable (although there is no harm either).
On x86, however, there is an advantage to declaring symbols importable: speed. With the symbol declared importable, you need only a single instruction to find the associated code. If the symbol is not declared importable, two instructions are required. This slows down your code, perhaps imperceptibly, but slows it nonetheless. We recommend explicitly importing symbols, to gain the speed benefit on x86.
Now, on to the various import and export methods.
The first and least compatible method for importing and exporting symbols is through the use of export files. The theory behind this method is to first compile the library and explicitly export all global symbols to a text file. Then this text file is edited and used to define exactly what symbols are imported or exported.
While this method works for both platforms, the format of the export
files differs across platforms. On PPC an .exp file is used, and on x86
either the CMD or
.def file format is used. This requires a different
file for each platform, and these files require updating whenever new
symbols are needed for export. This could lead to a lot of management
issues.
The traditional PPC method for exporting symbols is to wrap a group of
symbols in
#pragma export on and
#pragma export reset. These symbols are
then exported. This is what you are probably familiar with in your
current code. It works perfectly well with the current compiler (much
better than the export files, as the declarations are contained in your
current header or code files) and is much easier to keep updated.
There will be some amount of shuffling to make sure that the export or
import declarations are wrapped around the first declaration of the
symbols, but this system works well on both platforms. The shuffling
needed could either be a good deal of work, or not, depending on the
state of your current code. However, this method is unlikely to be
supported by different compilers for the BeOS, as
#pragmas are by
definition very compiler specific.
As you may have heard by now, this is our recommended method for both
platforms. While it's not very pleasing aesthetically,
__declspec() does
have redeeming qualities. Foremost among them is that it is "the way," as
defined by a large, unnamed software company with close to 95% of the OS
market. Also, compiler vendors who cater to that market (and who are the
most likely candidates for new tools for the BeOS) already implement that
method. To put it plainly, most if not all development tools we are
likely to see ported to the BeOS will support this method.
Regardless of whether you choose to use
#pragma export or
__declspec(),
we recommend that you implement a forward declaration header file. This
file declares every global symbol to import or export from your library
or application. The file can then be included at the top of every file to
make sure that the first declaration of a symbol properly specifies its
import or export status. The declaration header file can easily handle
both import and export of the symbols.
Since you'd need to write this file from scratch we recommend that you
just use the
__declspec() format, to ease the move to other compilers in
the future. The real advantage of a forward declaration file is that it
is a single file that works on both platforms and explicitly declares
what will be exported, and it does not require any code changes when
something needs to be exported. A simple change to the header file and a
recompile does the trick.
As requested on BeDevTalk, here is an example application and library that shows an example of using forward declaration files to control export and import of symbols. It uses the multi-platform makefile I mentioned in my last Newsletter article, updated for PPC for Release 3. The makefile can be downloaded separately from the second link:
Import Export sample:
Release 3 PPC and x86 makefile:
Another topic discussed at the BeDC was cross-compiling. The new Metrowerks tools that will soon be available include PPC compilers and linkers that produce x86 executables and x86 compilers and linkers that produce PPC executables. To use these tools you need to copy over the appropriate libraries from either x86 or PPC to link against.
At this point, we do not feel confident recommending the cross-platform tools for a simple reason: testing. We haven't done enough testing at Be to ensure that these tools work as they are supposed to, although Metrowerks has done extensive tests. The main reason we are discouraging their use, however, is that we feel developers need to adequately test their own applications—on both platforms. If developers are going to have both platforms available for testing, they might as well compile on the appropriate platform and be done with it.
What we suggest for people who do not have access to both platforms -- especially our new x86 developers who might have difficulty obtaining compatible PPC hardware—is that developers team up to make sure that software is thoroughly tested before being released to the public. We cannot state this strongly enough. There are enough byte order issues and other problems to make testing both binaries exceedingly important—as important as ensuring that both flavors of the BeOS have all of the same applications available.
My final topic concerns a failure on our part when producing the Release 3 CDs that will be available to you soon. In the documentation for Moving from PR2 to Release 3, we discuss some rather limited resource conversion tools rescvt_PPC and rescvt_intel. These tools extract the resource fork from a PPC executable, perform some byte-swapping on its contents, and convert the data to the x86 resource format. Full docs on their use are in the above file. The problem is that the tools themselves can't be found in /boot/beos/bin as noted. We forgot them.
So here they are:
Some quick reminders about these tools are in order. They are extremely
limited. They will convert and swap Application Info data,
BMessages,
Icons, mime-types, and not much else. Any developer-defined resources
will not be swapped or exported, and user-defined data in
BMessages may
be corrupted by this process. If you only use resources for basic
application information, these tools should work fine. If you use them
for anything else, you are much better off creating the tools from
scratch for x86.
We are working on a cross-platform resource format, and hope to have it available by R4, but it is possible it might slip beyond that time.
We had a grand time last week at the BeDC, our developers' conference in Santa Clara. In many respects, it was a coming out party or, if you prefer, our first baby steps in the Intel world—well-received ones, it seems. And today, I want to pay homage to someone who played a key role in the development of our company and our product: Erich Ringewald.
Around this time last year, as Intel engineers moved into a cubicle in our office and started working with us on the port, and as we were making fund raising plans, Erich let me know he didn't see himself forever in his combined CTO and VP of Engineering jobs. We started looking at what this meant for him and for us, whether, for instance, he would some day dedicate himself full-time to the CTO job.
A year later, the BeOS runs on Intel, we've raised about $26 million (we got more investors in the last few days preceding yesterday night's "last call"), and Erich has decided to get a taste of the consulting world. I've always valued Erich's unique combination of technical and business insights and I wasn't surprised when he told us he'd immediately found two valuable assignments as a consultant.
I first met Erich at Apple. His office wasn't far from mine on the 3rd floor of the De Anza III building in Cupertino, where he worked on MultiFinder with Phil Goldman, now at WebTV. He later moved to Apple's European R&D group in Paris. That's where I met him during Christmas 1990, shortly after starting Be. He'd heard I was "doing something" with Steve Sakoman, he was interested, he joined us, moved back to the US, became the head of our software group, and built a small group of very capable programmers who, in turn, built a small and very capable OS.
Erich sometimes jokingly referred to himself as a "cheap-seeking" missile. I, personally, and the company, more generally, owe a great deal to his penchant for parsimony, whether in furniture or in system architecture. After years of working for a very rich employer, I needed this kind of detox, and the company would have died from "rich" architectural decisions. Erich fit very much the player-coach model we wanted for executives in our company. He wasn't afraid to crawl under desks to wire the company, observing that CTO really meant Chief Telephony Officer.
In February 1994, AT&T told us they weren't going forward with their Hobbit microprocessor development. This meant Steve Sakoman's labor of love, a two-CPU, three-DSP dream media and communications machine was gone. Imagine you're the publisher and your author just lost his manuscript. Lightning struck and the hard disk and the back-up are both gone. What do you do? You tell the author this is a great opportunity: the great novel is really in him, not on the disk, and this is his chance to fix the problem that was bugging him with the main character, and to rework the ending of chapter two. But it's highly unlikely the author will have the psychic energy to restart from a blank screen. Steve didn't believe he could build a new machine from a new processor again and left Be, for a while, to go work at Zenith Data Systems and Silicon Graphics. He rejoined us in 1996.
When Steve left, Erich took the reins. We hired Joe Palmer, who built the BeBox on the basis of a design started by Glenn Adler, now at Phillips in Holland. In the Fall of 1995, we made our debut at Agenda. Dave Marquardt, who was to become our lead investor, was in the room when the BeBox demonstrated by Steve Horowitz got a standing ovation. As a result, we finally got support from premier Silicon Valley venture firms. The rest is pretty much in the public record.
As you can see, when I wrote earlier Erich played "a key role in the development of our company and our product," I wasn't embellishing the facts. For this, and for many other more personal aspects of our relationship, including his encyclopedic interests and nanosecond wit, I am in Erich's debt, as is our entire company. We all wish him the best in the next phase of a fulfilling professional life and hope he'll see fit to give us the benefit of his insights from time to time.
And let's also wish success to our VP of Engineering, Steve Sakoman, who helped cement our collaboration with Intel and who will lead us into honoring the opportunity before us.:.
Fred Fish would like Be to come up with a better development tool solution. He would like to see (at least) the following improvements:
Fully documented file formats for objects, archives, executables, and debugging formats.
Fully documented C++ runtime requirements and data layout (exception support, vtables, etc.).
System include files that are non-vendor specific and are owned by Be, so that Be has absolute control over making changes when necessary.
Mr. Fish goes on to nominate GNU as a reasonable solution.
Eric Berdahl thinks Mr. Fish is not out of water, but...,
“These suggestions, while not out of line with Be's interests, do not directly contribute to their business... Although it's far from optimal for us developers, multiple executable formats and tool sets don't prevent us from developing BeOS applications—it just makes it a little bit harder.”
And from Jon Watte:
“There is no such [binary/archive] format for which tools are readily available, else it would probably have been done a long time ago... The biggest problem with switching formats is not the switching itself, it's finding compilers, linkers and debuggers that work with the new format on BeOS.”
There was some debate over the merits of the GNU formats and tools: Is GNU/GCC/GDB a reasonable candidate for a universal solution.
Hamish Alan Carr turned the topic on its head: Rather than a single universal solution, why not go for a universal accommodation:
“...the ideal solution would be one in which the primary tool gave equal support to the Unix way, the Mac way, and, yes, the Win95 way... I think that what this means in practice is: Ability to run *multiple* executable formats - let the market decide which is the best.”
The thread then turned to technical issues: How similar to COFF is PE? Is PEF the same as ELF? What changes would be needed at the kernel level to support new (or multiple) formats?
The Release Notes for Release 3 mentioned resource conversion tools called "rescvt_intel" and "rescvt_PPC". They don't seem to be on the CD. What happened?
THE BE LINE: The resource conversion tools accidentally fell off the CD. You should be able to get them from the web site; see Stephen Beaulieu's article in this Newsletter for details.
More mouse event discussion, but first...
A CORRECTION: When last we met, the summary irresponsibly agreed with a
statement that a
B_MOUSE_MOVED event is sent only once for every four
pixels of movement. There *is* a (private) heuristic that throws out
"excessive" mouse messages, but it *isn't* based on a hard pixel count.
The truth is this: You are guaranteed to get at least one mouse moved event for every "burst" of mouse movement, regardless of how far the mouse travelled; furthermore, you're guaranteed to get an event message for the mouse's "at rest" location after a burst.
If the mouse moves slowly enough, you'll get a message for every pixel the mouse touches. We apologize for the confusion.
Other questions:
Are mouse coordinates in sub-pixel precision? (No)
Is
MouseExited() necessary? Some other view will receive a
corresponding
MouseEntered() (in which the cursor can be set), so it
isn't clear what
MouseExited() is supposed to do. It was argued that
this would mean that EVERY view would have to implement
MouseEntered()
to set the cursor.
Do non-active windows need to track the mouse? They do if they want to special case the cursor for drag-and-drop.
In the meantime, Yukio Hirose proposed that the OS be able to handle multiple mice. Christian Bauer found a bug in this notion...
“...having two mouse pointers would also mean having two active windows and unless you also have two keyboards, you have a problem deciding which window receives keyboard events.”
...but then solved the bug with hardware:
“On the other hand, if you _have_ two keyboards, two people could work on one computer at the same time without the need for additional terminals and that's something I really wish to have sometimes. If BeOS had support for multiple monitors, this could even be a replacement for the lack of a networked GUI for some applications.”
Dave Haynie noticed that this sort of set up sounds a lot like a feature of the Amiga OS:
“There was a central input.device task that managed a variety of general purpose I/O. In came events, like moving a mouse, pen on a tablet, etc., maybe both at the same time. Out came 'cooked' formal event objects, which could describe any kind of I/O event. All you would need is a facility to manage a couple of this kind of event stream, and of course an easy program option to select which event stream or stream the program listens to.” | https://www.haiku-os.org/legacy-docs/benewsletter/Issue3-12.html | CC-MAIN-2019-51 | refinedweb | 5,326 | 60.24 |
Good Robot has been on my mind lately. I keep coming back to the fact that I put a few months into it and didn’t finish it. But then I remember that on top of gameplay concerns I’ve also got a bunch of annoying technology problems to worry about and I lose my enthusiasm.
The central part of the technology problems comes from [my usage of] GLSL – the OpenGL shader language. The rest of my code – talking to the filesystem, input devices, AI, and gameplay – is basically solid, but the shaders are a mess. On one machine robots would strobe, randomly rendering or not from one frame to the next. Another tester reported that walls didn’t render. Another had the robots and walls render fine, but powerups were invisible. Another person had strange slowdowns that should not have been happening on their given hardware.
The center of the problem is that I’m not very knowledgeable regarding GLSL. I’ve only gotten around to messing with it in the last couple of years, and I’ve only learned enough to accomplish the few simple things I need to do. This leaves a great big blind spot in my knowledge where problems can hide. It creates situations where I can do something the wrong way and have it work on my machine, and malfunction a dozen different ways elsewhere.
But to correct this problem I have to deal with another one: The GLSL resources are an absolute mess. The language has undergone sweeping changes at least twice since its inception. Programs that were once valid are now wrong, which means nearly all of the basic “hello world” tutorials are now broken to the point where they won’t even compile. The docs are filled with side-notes about what’s “new” in a version people stopped using five years ago. The simple stuff is out of date, and the advanced stuff is written with the assumption that you already know what you’re doing and you just need a few pointers on how things have changed. It’s amazing how many pages of documentation fail to explain how things actually work, but instead explain how things differ from how they used to work. It’s like giving someone directions based on scenery that no longer exists. “Drive until you get to the place that used to be a drugstore, turn left onto old main street, and turn left when you see the empty lot where the recycling center used to be.” You’ll probably be able to find the place eventually, but those layers of ambiguity and uncertainty are a major hindrance.
Further exacerbating the problem is that the updated and proper way of doing things is a lot more complex. The entire OpenGL matrix stack has been deprecated. In plain language, this means that the built-in systems for managing tables of numbers and performing the spatial transformations needed to take 3D scenery and draw it on a 2D screen are no longer supposed to be used. You’re supposed to write all of those systems yourself. Now, there are a lot of good reasons for that, but it’s also a massive undertaking when you’re just trying to get a handle on the basics. I’m sure there are open-source tools to do the job, but integrating those is a pain and it adds huge levels of complexity to what should otherwise be a simple and straightforward learning exercise. (I already have my own matrix code, although it’s not nearly as well-developed as it could be.) When you get a blank screen you have to wonder: “Am I still not doing this shader properly, or am I mis-applying this new matrix system?” You can fall back to using the old matrix stack, but only if you find the directive to do so and only if you can intuit how it works. (It’s not well documented and I have yet to see it appear in example code.)
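To make the scale of that undertaking concrete, here is roughly what "write all of those systems yourself" means at the most basic level: you carry your own 4x4 matrix type, build your own transforms, and hand the result to the shader as a uniform. This is a minimal sketch in C++ (the struct and function names here are invented for illustration, not taken from any real engine or from Good Robot):

```cpp
#include <array>

// A minimal stand-in for the deprecated OpenGL matrix stack: a column-major
// 4x4 matrix with just enough operations to compose a model-view transform.
struct Mat4 {
    std::array<float, 16> m{}; // column-major storage, like classic OpenGL

    static Mat4 Identity() {
        Mat4 r;
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
        return r;
    }

    // Rough equivalent of glTranslatef: a matrix that moves points by (x,y,z).
    static Mat4 Translate(float x, float y, float z) {
        Mat4 r = Identity();
        r.m[12] = x; r.m[13] = y; r.m[14] = z;
        return r;
    }

    // Standard matrix multiply; composing transforms means multiplying them.
    Mat4 operator*(const Mat4& b) const {
        Mat4 r;
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row) {
                float sum = 0.0f;
                for (int k = 0; k < 4; ++k)
                    sum += m[k * 4 + row] * b.m[col * 4 + k];
                r.m[col * 4 + row] = sum;
            }
        return r;
    }
};

// Transform a point (x, y, z, w) by the matrix. This is exactly the work the
// vertex shader does with the matrix you upload as a uniform.
void TransformPoint(const Mat4& mat, float v[4]) {
    float out[4];
    for (int row = 0; row < 4; ++row)
        out[row] = mat.m[0 * 4 + row] * v[0] + mat.m[1 * 4 + row] * v[1] +
                   mat.m[2 * 4 + row] * v[2] + mat.m[3 * 4 + row] * v[3];
    for (int i = 0; i < 4; ++i) v[i] = out[i];
}
```

In the old fixed-function world, glTranslatef and friends did this bookkeeping for you behind the scenes; in modern GL you compose matrices like this on the CPU and upload the result with something like glUniformMatrix4fv.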
If you post some short shader code and ask for help you’re likely to get immediate dismissal: “The old matrix stack is deprecated! Don’t use it!” Which is like asking someone why your internal combustion engine doesn’t work and having them give you a hard time because it doesn’t have a gearbox, a catalytic converter, or a starter motor.
As if this wasn’t enough of an adventure, the OpenGL pages are somewhat capricious. In the process of working on this project the official docs would vanish for hours or days. You can download the docs as a PDF, but PDFs generally suck and Google can’t help you find what you’re looking for if they’re on your hard drive. And of course snarky RTFM forum posts all become that much less helpful as fallback documentation, since now they’re just insults with dead links.
Basically: Learning GLSL is a tough hill to climb, it keeps getting steeper, and the topography keeps changing without anyone updating the maps. The people who reached the top years ago don’t really have a concept of how convoluted the road has gotten and so they often give unhelpful advice.
But maybe I’ll have an easier time getting back to Good Robot if I force myself to do something sophisticated with shaders. Right now in Good Robot I’m just using them for simple stuff: I’m drawing basic squares and using the GPU (the graphics card, basically) to rotate them, scale them, and adjust the texture mapping so they draw from the proper section of the sprite sheet. That’s silly easy. My hope is that by doing a bunch of stuff that’s far more complex, I’ll actually get some depth to my GLSL knowledge and be able to spot the mistakes I’m making in Good Robot.
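As an illustration of how "silly easy" that sprite-sheet work is, the texture-mapping adjustment boils down to a scale and offset on the UV coordinates: pick a cell in the grid and map the quad's corners to that cell's rectangle. A sketch in C++ (the grid layout and names are assumptions made for the example, not Good Robot's actual data):

```cpp
// Hypothetical sprite-sheet lookup: given a sheet laid out as a uniform grid
// of equal-size cells, compute the UV rectangle for one cell. This is the
// math that makes a generic textured quad sample the right sprite.
struct UVRect {
    float u0, v0; // top-left corner in texture space (0..1)
    float u1, v1; // bottom-right corner
};

UVRect SpriteCell(int index, int columns, int rows) {
    int col = index % columns;          // walk cells left to right...
    int row = index / columns;          // ...then top to bottom
    float cell_w = 1.0f / columns;      // each cell's size in texture space
    float cell_h = 1.0f / rows;
    UVRect r;
    r.u0 = col * cell_w;
    r.v0 = row * cell_h;
    r.u1 = r.u0 + cell_w;
    r.v1 = r.v0 + cell_h;
    return r;
}
```

Whether this runs on the CPU while building vertex data or in the vertex shader as a uniform scale-and-offset is a design choice; the arithmetic is the same either way.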
Goals:
- Aside from the shader work, I want to be really comfortable with what I’m doing. Which means doing something I’ve done before. Which means heightmap-based terrain. That’s easy, it’s familiar, and it looks interesting. (Enough.)
- I want to do something that lets me dump a bunch of difficult processing onto the graphics card. Since we’re working with terrain, I figure erosion is a good choice. I’ll make some real-time erosion code and have it run on the GPU.
- I want to explore how difficult it is to support Oculus Rift levels of performance. Most games run at 30fps. If the scene gets crowded or if the player stumbles into some scripted event, it’s generally acceptable (or tolerated) if the framerate dips down to 20 or so for a few seconds. But if you’re programming for VR, you need incredible performance. You need 60 frames per second. Worse, you need to render everything twice – once for each eye. If that’s not hard enough, it’s extremely uncomfortable to miss frames, so you can’t ever dip below 60fps unless you want to risk making people sick. So now the program has to render twice as much, twice as fast, with no margin for error.
I can’t afford (or justify) getting a Rift dev kit right now, but it should be simple to set up a test to see if my program can keep up with the Rift performance demands.
- I’m going to stop being so stubborn and just use the same dang coordinate system everyone else uses. More on this in another post.
So those are our goals. I’m calling this “Frontier Rebooted” just because I’m re-hashing a lot of the ideas from Project Frontier, although the code is going to be all new. We’re going to have hills and water and grass and trees like before, but they’ll all be made using new techniques with shaders.
Step one: Grab the latest version of my various 3D tools (which come from the Good Robot codebase) and drop them into a new project. I set up an empty scene with a one-meter panel drawn at the origin. Just to make sure I’ve got the axes all pointing the right way, I build a literal axis arrow at the world origin. Red is X, green is Y, blue is Z.
Was that worth reading 1,500 words? I hope it was worth reading 1,500 words to see that. Next time we’ll actually accomplish something.
94 thoughts on “Frontier Rebooted Part 1: Back to School”
I’ve been going through this same process recently, with having to learn OpenGL in general on top of it (And learning OpenGL is even worse than GLSL- it has all the same problems of outdated stuff on the web, plus every tutorial uses some third party library to do some of the work, making starting from the very basics really hard).
From the sounds of it, you’re moving from using the fixed-pipeline style of rendering to the modern shader-based one. It definitely does put a lot more work on the programmer, in exchange for a ton more flexibility.
I too have been learning OpenGL and GLSL recently and given how easy it is to encounter old material on the internet I did run into a lot of the old fixed function pipeline stuff. This might depend on if you knew it before hand but I found it more confusing than doing it the more modern way. Doing it all yourself might not be easier, but I found it simpler.
I’m currently in a Computer Graphics class. My teacher basically limited us to using OpenGL 2.0 (a very outdated version) for this very reason. Going much above that would simply take too much time to learn how to use properly.
I gave up making 3D games entirely for these reasons. Trying to figure this stuff out made me feel like an idiot; I discounted myself as stupid and went back to not making anything with code.
At least I know I’m not alone, now.
“I’m not smart enough” or “I’m stupid” is a bad reason to ever give up on anything. Everything is difficult, until it is easy.
“I could spend my time more effectively doing something else” is a good one.
“I could spend my time more effectively doing something else” can be a very low bar when the task at hand is sufficiently frustrating.
For example, if your honest expectation is that pressing further on graphics programming is going to be a painful slog that never results in you actually building anything that works, you might decide that it’s more rewarding (even in the ‘long term’) to spend that time tickling your pleasure centre by ingesting some nice empty calories.
True, but I leave that particular calculation up to the individual to determine.
At least it’s less discouraging than “I’m not smart enough.” I taught a middle-school dropout with poor math skills how to convert between binary, decimal, and hexadecimal in 2 hours.
And your point is?
I have had some wonderful times ingesting empty calories. Mmmm, chocolate.
Maybe so, but there are few things more discouraging than being unable to prove to yourself that you’re capable of doing some mental task. I now know that I’m not the problem; the problem is that the information base is held together with toothpicks and gum (not duct tape, that would actually have some substance).
But yes, eventually I felt like I was wasting my time because I have artistic obligations, and there are easier ways to make games, so long as those ways aren’t intended to result in a game with 3D graphics, and also not involving using OpenGL to make a 2D game.
I’m not gonna sit here and argue that it’s impossible. That’d be blind to the fact that there are those who do accomplish things with these systems. But the barrier for entry is obscene, it seems. It might be easier to learn how to play dwarf fortress over the phone.
I can’t help but wonder, from a business standpoint, how many work hours are lost just trying to get the misshapen puzzle pieces to fit together.
Well, if your inclinations lean more towards artistic talent than staring at endless lines of text, that’s still useful. Engineers tend to make terrible artists.
From what Shamus has said, I’d have to agree that the barriers to entry are rather obscene. It may provide more performance at the end of the day, but at what cost? The result is a system that you can only really learn if you’re a maniac who isn’t willing to ever admit defeat, which is just another reason why so many programmers are crazy.
That’s not efficient, that’s just elitist.
If you still want to learn the programming side, I would encourage you not to give up. For anything. If you think it’s just not worth the effort… well, nothing to be ashamed of. Emphasize the things you’re good at. If you can’t think of anything you’re good at, find something you want to be good at and practice it until you are.
It’s actually entirely worth it once you learn how. And it doesn’t take *that* long in the grand scheme of things- I learned it to a functional level in maybe 2-3 weeks.
It’s just incredibly frustrating because so often you find yourself in a place where making *any* progress is painfully hard because you’re trying to cobble together some sort of guide out of dozens of different sources on the internet. You can spend hours feeling like you’ve accomplished absolutely nothing at all, which makes it really hard to soldier on and continue.
Once you get it working, though, there’s no comparison to fixed-pipeline stuff. It’s not just better performance-wise; the fixed pipeline was done away with because it was too limiting. Shaders give you way more options.
Just because I’m an artist doesn’t mean I’m not also good at logical deduction. I programmed a working JPS algorithm, at least. :P
And I’m an Engineer who also has a lot of artistic talent.
There’s always 2D games. With technology these days and the massive amounts of memory, I have always wanted to create a huge 2D world, something procedurally generated to explore, perhaps using isometric graphics.
Diablo 2 was a great example of 2D graphics in a large environment. If you look closely, you will see the game doesn’t have any hills at all, it’s all flat terrain, yet it is so nicely done, you never really notice.
There’s plenty to do out there. Personally, I think 3D is overdone sometimes, or maybe it’s just the same old games, oh look, another WW2 shooter… I haven’t seen that for a while! ;)
Also, with all the handheld devices around these days, new life has been breathed into 2D games, which has been quite refreshing to see.
With that said, I do like Frontier and am happy to see something on here again as I love the idea of this, and exploring it. I would like to see some ruins randomly placed in it, maybe some old roads etc… something to explore. I love exploring, not really much of a killer. ;)
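A procedurally generated 2D world like the one described above can start from something as small as a hash-based value noise function: heights come straight from coordinates, so nothing has to be stored. This is just a sketch (the mixing constants are arbitrary, not from any particular paper):

```cpp
#include <cstdint>
#include <cmath>
#include <cassert>

// Deterministic integer hash -> [0,1). Constants are arbitrary mixing
// primes chosen for illustration.
static float hash01(int x, int y) {
    uint32_t h = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (h ^ (h >> 16)) / 4294967296.0f;
}

// Bilinearly interpolated value noise: smooth-ish terrain heights
// computed on demand, so the world never has to be stored.
float heightAt(float x, float y) {
    int ix = (int)std::floor(x), iy = (int)std::floor(y);
    float fx = x - ix, fy = y - iy;
    float a = hash01(ix, iy),     b = hash01(ix + 1, iy);
    float c = hash01(ix, iy + 1), d = hash01(ix + 1, iy + 1);
    float top = a + (b - a) * fx;
    float bot = c + (d - c) * fx;
    return top + (bot - top) * fy;   // in [0,1)
}
```

Layer a few of these at different frequencies and you get the classic fractal terrain look; placing ruins or roads is then just another deterministic function of the coordinates.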
Ugh, yes, I have the very same problem in one of my FLOSS projects, xoreos (a reimplementation of the BioWare 3D games; think ScummVM for Neverwinter Nights and later).
In short, my OpenGL knowledge is stuck somewhere in OpenGL 1.2, and that too only from old tutorials. The code I wrote works…somewhat. It’s slow, missing a lot of features, and my 3D engine code in general is a mess. I tried learning the recent OpenGL API, only to be bogged down in conflicting information. What I have gleaned, and some infodumps by nice individuals with knowledge, just taught me how much I don’t know. I basically gave up on trying to learn all that.
I have tried cramming the Ogre3D engine into it, with middling success. Again, it kinda works, but I have to work around several Ogre3D peculiarities, and I lack the knowledge to judge whether I made the right choices. In fact, I know I didn’t, because a screen full of text drops the FPS into single digits.
Right now, I’m kinda hoping I’ll find a person willing to work with me on xoreos, a person with OpenGL knowledge and some time to spare. Hope dies last, and all that.
EDIT: Okay, two links is already enough to trigger the spam moderation queue? :)
My approach has been to create a Renderer class that contains all of the actual OpenGl calls. It helps keep the messy OpenGL stuff in one place, while everything else in the engine can have the interfaces I want them to have. Every renderable object is basically just sent to the Renderer in an array each frame.
That’s what I did, too. Built a draw manager that was able to organize everything to suit all the features I wanted, then every frame, I send it any updated objects, and say “Update, draw.”
It’s a much more rewarding system when you get a handle on it – allowing you to do fancier things like deferred shading to allow practically unlimited lights to render accurately.
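For anyone wondering why deferred shading scales to so many lights: you render the scene's surface attributes into a buffer once, then shading cost depends only on pixels and lights, not scene complexity. Here's a CPU-side cartoon of the idea (real code keeps the G-buffer in textures and runs the loop in a fragment shader; all names are invented):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Toy G-buffer texel: what the geometry pass would have written out.
struct Vec3 { float x, y, z; };
struct GBufferTexel { Vec3 position; Vec3 normal; Vec3 albedo; };
struct PointLight { Vec3 position; float intensity; };

// Simple attenuated diffuse term for one light.
static float diffuse(const GBufferTexel& g, const PointLight& l) {
    Vec3 d{ l.position.x - g.position.x,
            l.position.y - g.position.y,
            l.position.z - g.position.z };
    float len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (len == 0.0f) return l.intensity;
    float ndotl = (g.normal.x*d.x + g.normal.y*d.y + g.normal.z*d.z) / len;
    return ndotl > 0.0f ? ndotl * l.intensity / (len * len) : 0.0f;
}

// Light pass: geometry was already "rendered" into texels, so another
// light is just one more term in the sum, regardless of scene size.
float shadeTexel(const GBufferTexel& g, const std::vector<PointLight>& lights) {
    float sum = 0.0f;
    for (const auto& l : lights) sum += diffuse(g, l);
    return sum;
}
```

The forward-rendering alternative re-shades every triangle for every light, which is why classic engines capped you at a handful of lights per object.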
I followed this tutorial a few months ago and it really helped to get to know everything and seems to be up to date.
I hadn’t seen that one before, it seems reasonable – but skip the first three tutorials!
They’re fixed-function pipeline, and won’t actually work on a lot of hardware as the fixed-function is only guaranteed to exist in OpenGL 2.x and older contexts.
– They made me think the whole thing was deprecated, but it quickly jumps up to OpenGL 3.3.
It is also very important to specify the exact OpenGL context you want – the various underlying toolkits offer ways to specify this, and you really must do as otherwise you get whatever’s default.
(And most tutorials don’t say how. Which sucks.)
– Minor hint, nVidia may be slightly slower in pure Core Profile, so develop in Core, release in Compatibility.
Yes, the OpenGL tutorials are awful, and most of the tutorials that pop up at the top of Google are deprecated!
The best tutorial site I’ve found so far is this one:
My recommendation is to target pure OpenGL 3.3 Core Profile.
If you stick to the core profile and don’t use any extensions or anything marked as deprecated, it should run correctly on all hardware and reasonable drivers from the last few years.
Extensions are where the trouble starts, and mixing shaders with fixed-function (eg OpenGL 1.1) is asking for trouble. (OSX is particularly hairy, and has broken almost every time Apple released an update.)
You’ll need OpenGL 3.2 or newer if you want to do geometry shaders, which are a wonderful way to do sprites and procedural geometry, as you can offload almost everything to the GPU.
I’ve had a lot of fun with these!
I agree with all of the above, apart from the fact that I really should learn geometry shaders but am too thick to do so.
It’s fairly easy to get your hands on a set of matrix classes these days – I stuck the one I use together from bits of online tutorials and my own diseased imaginings, but I’m pretty sure one was included with, for example, the OpenGL Superbible.
Shamus, maybe you’d get more useful advice on GLSL if you posted about your particular problems here; I have personally found it easier to deal with GLSL’s quirks because it does at least do vector and matrix operations natively very much in the way that C doesn’t. However it’s often not clear from your column exactly which operations you’re having trouble with, and a basic lighting model isn’t more than a few lines in GLSL so between your commenters it should be pretty easy to get you moving in the right direction!
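On the matrix classes mentioned above: once the fixed-function matrix stack is gone you end up writing (or borrowing) something like this. A bare-bones sketch, column-major the way OpenGL expects, with only the pieces needed to demo it (a real library such as GLM has far more):

```cpp
#include <array>
#include <cassert>

// Minimal column-major 4x4 matrix: element (row, col) lives at m[col*4 + row].
struct Mat4 {
    std::array<float, 16> m{};

    static Mat4 identity() {
        Mat4 r;
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
        return r;
    }
    static Mat4 translate(float x, float y, float z) {
        Mat4 r = identity();
        r.m[12] = x; r.m[13] = y; r.m[14] = z;  // fourth column
        return r;
    }
    Mat4 operator*(const Mat4& b) const {
        Mat4 r;
        for (int c = 0; c < 4; ++c)
            for (int row = 0; row < 4; ++row) {
                float s = 0.0f;
                for (int k = 0; k < 4; ++k)
                    s += m[k * 4 + row] * b.m[c * 4 + k];
                r.m[c * 4 + row] = s;
            }
        return r;
    }
};
```

From there you upload the result with glUniformMatrix4fv and multiply by it in the vertex shader, which is all the old glTranslatef/glPushMatrix machinery ever did behind the scenes.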
I don’t know anything about GLSL, OpenGL or shaders at all, and I have no intention of using them.
But it was absolutely worth reading 1500 words, because it’s still an interesting topic. Thanks.
I always love reading your programming posts, Shamus. Now, I ain’t a programmer, nor do I play one on TV, but it is still a lot of fun to be a spectator in watching you build something.
Now, what is meant by ‘deprecated’ in this context? I can guess a lot by context, but I’d like to know more precisely.
It means old and shouldn’t be used, but hasn’t been removed as not to break old code.
OK, that’s basically what I thought.
Still, that feels like handing a new mechanic at a shop a tool box and pointing to some of the tools and saying ‘Don’t use those tools, we only use those when fixing older cars.’
I think a better analogy is if you’re building a car, and you have a choice between part A and part B. While you can use part A, it’s no longer manufactured, so you won’t be able to replace it if it breaks.
It’s still not a perfect analogy, but it’s closer.
I would flip the analogy around – a deprecated car part is not used in new vehicles, but is still made for spares in older ones.
To modify your anology, I think it’s more like saying “here’s your tools, these ones aren’t that useful any more but we keep them around because they’re required to work on older machines.”
It wouldn’t surprise me if it CAN be like that at some mechanic shops, though I can’t say. I know almost as much about cars as I do about programming.
Me too! – everything I know about motoring I learnt from reading car analogies on Twenty Sided… XD
Anyway, this post talks a little bit more (if only a little bit!) about deprecation:
Or maybe it’s like how thousands of businesses have computers still running xp, 98 or hell, even MS-DOS just in case they need to use a piece of ancient hardware or software that refuses to talk to anything newer?
When a function is deprecated, that basically is warning you that it will eventually be removed entirely and no longer be available. It can also mean something better exists to do the same task. They leave the function in to give you a chance to learn the new way to do things while still having the old one available.
You can basically count on a deprecated function eventually no longer being available. It gives you some time to adjust anyhow.
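To make the grace period concrete, this is what deprecation looks like at the language level in C++ (the attribute is standard since C++14; the function names here are invented):

```cpp
// The language-level version of the same policy: old_area still
// compiles and runs (the compiler emits a warning, not an error),
// so existing code keeps working while new code is nudged toward
// the replacement.
[[deprecated("use area(w, h) instead")]]
inline int old_area(int side) { return side * side; }

inline int area(int w, int h) { return w * h; }
```

OpenGL's deprecation model works the same way at the API level: deprecated entry points still function in a compatibility context, but a core profile context refuses them outright.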
Small typo in the paragraph before “Step one”: “We’re going to have jills and water and grass and trees like before…”
As far as getting help, have you tried StackExchange? I don’t know if they have a community for OpenGL/GLSL, or how good it is if it exists, but it’s been a great resource for any math or programming topic I’ve needed help on.
The GameDev portion of Stack Exchange might be a good place to start. Especially since they’re less strict about the “good” questions rule than StackOverflow is. Like, you’ll get more questions which are less focused, but you’ll also get answers which are genuinely useful for people trying to figure stuff out. SO often has questions locked as “not useful”, or “too broad” or whatever, but which are exactly the same thing I was about to ask… ^^;
I had the same thought. Any time I run into a poorly documented language, feature set, API, or whatever, StackExchange/StackOverflow is where I look first. (Well, usually I google it and look for the stack exchange links, since google does a better job finding stuff)
This is almost always the most effective way forward, since the site is populated by people who are both
a) very knowledgeable in whatever niche you might need,
b) motivated to provide helpful advice, because helpful advice is what gets modded up and gives you “points”
Here’s a link to the top voted questions tagged GLSL in stackoverflow, one could probably learn a ton by just browsing through these questions and becoming familiar with the answers:
I read all 1500 words with a tinge of guilt, and I don’t think I ever apologised for breaking everything, so let me officially apologise for breaking everything.
And if it turns out the problem is because I’m using three-year-old drivers, you have permission to slap me.
(Seriously, though, even though I haven’t done programming since leaving school, I still love these posts of yours. A lot of your complaints about documentation apply to the support/server side of things too.)
Documentation sucks for pretty much anything more complicated than a toaster. Software, hardware, textbooks…everything. :S
Relevant:
Making tools that accomplish a task in an obvious way is difficult. Documenting the non-obvious ways in which tasks can be accomplished with poorly designed tools is equally difficult.
It really comes down to how much effort you’re willing to invest up-front, and how much you’re willing to offload onto the user-base. If the tools are only going to be used by a few specialists, it might make sense to document them poorly and simply pass the knowledge along as needed, or require that the experts derive usage from first principles. Ideally, of course, you’d have elegant tools AND thorough documentation… but that gets really expensive. When you’re paying “free on the internet” it’s hard to take complaints about quality of both tools and documentation very seriously.
The paid tools I’ve used have had documentation that was barely better than the free stuff on the internet. (Many free projects are actually better.) So, where’s all that money going? XD
I totally agree that purchased goods carry an expectation of comprehensibility. That so many free tools are better than professional ones is a reversal for sure (and a happy one at that). I just think it’s a bit unfair to hold free tools to the same standards that we expect from ones for which the developers are being compensated.
My one complaint with the article: I wanted more words! :O I have really really missed your programming posts. So glad they are coming back!
Seconding! In a way, programming stories are like the ancient Heinlein mode of science fiction adventure: weird and unexpected problems, overcome by daring heroes with knowledge and ingenuity. :)
For a few years I’d been trying to learn OpenGL without success. What finally made modern OpenGL and GLSL actually click for me last year was restricting myself to OpenGL ES (for WebGL). It allowed me to focus on a much, much smaller API, a simpler shader language, and very little historical cruft. I could do searches for “OpenGL ES” specifically and get better results. I also found the WebGL specification to be a good, quick reference: except for GLSL, everything I needed was on that one page. There are also a couple of good OpenGL ES books out there.
I got an error here: “This page has moved”
So
is not valid…
should be used instead.
And do note that the url is not the same as
Which has a higher revision but is actually a older standard (how the hell did they mess up that numbering this way?).
The truly great thing about standards is how badly documented they are.
I linked “/registry/webgl/specs/latest/” because I assumed that would always redirect to the latest version. Somehow I’m not surprised to see Khronos screw this up.
They’ve been a tad messy over the years yeah. The redirect works but threw an error/message here.
A better url would be
and then just click the “WebGL current draft specification” link.
The extension registry is also linked from this page.
Yes, this; so very much this. :-)
Focusing on webgl, which outright prevents the use of any of the older deprecated stuff because it’s built on ES, and also outright prevents the use of any extension unless you explicitly ask for it, turned out to be amazingly helpful when trying to port the original Frontier to a browser. (Not that I’ve gotten very far. But trees render, at least, and it’s doing order-independent transparency! Which I think I’ve talked about before, at length.)
Turns out the OpenGL thick client setup allows you to call any particular extension function that the GL library includes, whether you negotiate it or not, as long as you have a function prototype. (In one sense it doesn’t have a choice, since wglGetProcAddress / glxGetProcAddress have to work the same as the native GetProcAddress / dlsym functions, which look at the DLL’s / shared-object-file’s public symbol table. Doing the symbol lookup at program-link time instead of runtime has to work, but it means you can accidentally mess up extension requirements or whatever.) Since webgl doesn’t let you use a symbol until you’ve asked for the extension that contains it — the extensions are each registered in their own namespace so your code can’t just refer to them — it can’t run into the same issues.
Of course, after saying all that, it *does* mean moving from C++ to javascript, and from thick client code to a browser. So that might be another whole learning curve for you (==Shamus), which luckily I didn’t have to climb since I did it a few years before.
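The extension-negotiation rule described above can be sketched in a few lines. This is a toy stand-in, not a real GL loader (every name here is invented): the point is that the symbol table knows every function, but nothing is handed out until the owning extension has been requested, which is exactly what keeps WebGL code honest.

```cpp
#include <string>
#include <unordered_map>
#include <set>
#include <cassert>

using ProcAddr = void (*)();

// Toy loader: the "driver" exports everything, but getProcAddress only
// hands a symbol out once the extension that owns it was negotiated.
class ToyContext {
    std::unordered_map<std::string, std::string> symbolToExt_ = {
        {"glFrobnicateEXT", "EXT_frobnicate"},  // invented example symbol
    };
    std::set<std::string> enabled_;
public:
    bool enableExtension(const std::string& name) {
        // A real context would check the driver's support list here.
        enabled_.insert(name);
        return true;
    }
    ProcAddr getProcAddress(const std::string& sym) {
        auto it = symbolToExt_.find(sym);
        if (it == symbolToExt_.end()) return nullptr;     // unknown symbol
        if (!enabled_.count(it->second)) return nullptr;  // never asked for it
        return +[]{};                                     // stand-in function
    }
};
```

Under desktop OpenGL the equivalent lookup (wglGetProcAddress and friends) will happily return a pointer for any symbol the driver exports, negotiated or not, which is how the accidental-extension-dependency bugs sneak in.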
I wonder if it’s possible to do OpenGL ES in a windows program. Hmm. SDL might make it hard though…
GLFW3 seems to be able to open OpenGL ES contexts (I don’t use ES myself, but their “New features” page claim you can do it). For me, moving from SDL to GLFW3 was quite easy. Unfortunately, that means that I’ll need to look elsewhere for playing sounds.
I would be interested in reading about your order-independent transparency. Where have you written about it?
It is – that’s what ANGLE does under Windows
Qt5.x currently has two available builds on Windows desktop (as well as the compiler) – ANGLE (default) and OpenGL.
In the ‘ANGLE’ build, you have the OpenGL ES API, and ANGLE translates this into DirectX.
While this probably comes at a performance cost, I haven’t tried it so couldn’t say whether it’s significant.
In the OpenGL build, you’ve got normal desktop OpenGL.
I’ve only been using the OpenGL builds so far, with OpenGL 3.3 or 4.x depending on the project.
This is very reassuring to hear as I have been wanting to use 3D graphics in Javascript programs and have in the past tried and failed to understand OpenGL. It is good to know that WebGL simplifies things somewhat, thanks for the link to the specification.
Shamus, among all the things I come to Twenty Sided for these programming articles are some of my favourite. More words spent on the intricacies of coding please!
Shamus can you update your Projects page to include the Good Robot stuff (and other stuff not on there)?
I think they’re all in ‘Programming’ rather than ‘Projects.’
That’s what I meant, and no they’re not.
You don’t see them via the link below – starting five or six posts down? (Perhaps after a page refresh?)
(Please do not share that link with any clone troopers you may know.)
I think the above might refer to the page reached by clicking the big button that says “Programming” in the right column.
Reckon you might be right, there! Never even noticed that; d’oh.
Yes. That however (as Packbat notes) is not reachable via the Programming button.
I have no idea where you found your link, it’s not anywhere I can find, but thanks for it. It’ll make going back and reading those articles far easier than trying to go back blog post by blog post.
I fiddled with the sidebar some weeks ago and managed to remove the category selector. It’s been on my list of stuff to fix for a while now.
Cool. I await its return!
There are publishers out there that appear to specialize in cheap books that turn opaque documentation into actual how-to books. With the predictable reaction by some that: “LOL, this is just selling free docs to the clueless”, but, you know, whatever.
Sure enough, I can find at least one such book on GLSL from such a publisher.
Is it good? Accurate? UP to date? Couldn’t possibly tell you, but even if this specific one is not what you need, there must be other titles out there, and a few bucks may save you hours of headaches.
Boy do you hit the nail on the head about the OpenGL docs. I originally learnt OpenGL through uni when using 2.1, but for my projects now I like to use OpenGL 3.3. At first it was an absolute pain getting info on 3.3, which had completely overhauled all the function calls and of course the matrix stack. But finding a tutorial for 3.3 was impossible, all I kept getting was 2.1 or even immediate mode tutorials….
I eventually stumbled upon which is a superb list of tutorials that go through how to code with 3.3 and has examples (Just skip the first three, they basically just explain how it “used to be done”).
On 3.3:
I love the new matrix stack, it vastly improves how you can set up your rendering pipeline. For my project I needed lighting calculations done per vertex, rather than per pixel; with 3.3 it’s really easy to do my lighting calculations in the vertex shader before the perspective matrix is applied.
It is also much cleaner both to send variables to the shaders and to pass variables between shaders, and simply by looking at the names, you can tell which variable is an input and which is an output for the shader.
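Here's roughly what that per-vertex-lighting setup looks like, with the shader sources embedded as C++ raw strings the way a loader would hold them. The uniform and attribute names are invented for the example; the thing to notice is the vertex shader's `out` matching the fragment shader's `in` by name, since the GL linker rejects a mismatch:

```cpp
#include <string>
#include <cassert>

// Minimal GLSL 3.30 core pair: lighting computed per vertex, before
// the projection matrix is applied. All identifier names are made up.
const char* kVertexSrc = R"(#version 330 core
in vec3 position;
in vec3 normal;
uniform mat4 modelView;
uniform mat4 projection;
uniform vec3 lightDir;      // normalized, in view space
out vec3 vColor;            // per-vertex lit color
void main() {
    vec3 n = normalize(mat3(modelView) * normal);
    float diff = max(dot(n, -lightDir), 0.0);
    vColor = vec3(diff);
    gl_Position = projection * modelView * vec4(position, 1.0);
})";

const char* kFragmentSrc = R"(#version 330 core
in vec3 vColor;             // must match the vertex shader's out
out vec4 fragColor;
void main() { fragColor = vec4(vColor, 1.0); })";

// Cheap host-side sanity check: catch an interface-name typo before
// ever handing the sources to the driver.
bool interfaceMatches() {
    return std::string(kVertexSrc).find("out vec3 vColor") != std::string::npos
        && std::string(kFragmentSrc).find("in vec3 vColor") != std::string::npos;
}
```

A mismatched in/out name is one of the classic silent GLSL failure modes, so even a crude string check like this saves debugging time.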
I am so glad I decided to learn SFML instead of putzing around with all that OpenGL code. I actually got somewhere quickly without too much trouble. It’s a solid library.
I would have loved to use SFML, but it does (did?) not seem to be able to open sRGB contexts. How do you do your sRGB encoding? By hand, or does SFML do it now?
The words were absolutely worth reading; I love your programming posts as I get huge motivation hurdles of my own and your blog lets me experience the programming vicariously.
… I should really dive back into that last project of mine …
I always hate the ‘snark’ as you call it when you want to learn new things like these. A sad fact is that it is both boring and hard to write good documentation that can be read by someone who actually needs it, so it hardly ever gets done. This problem appears to be worse in open source projects, I assume because of the boring part.
My favourite example is all the Microsoft documentation on for instance C# (but it is universal for everything they do). The official pages can only be read if you have a degree in that language it seems and I’ve never actually met anyone that can use them effectively. It is like they go out of their way to make it look more complex than it is. Even as a reference in case you just want to remember how it works it doesn’t seem to work. Thousands (millions?) of lines of documentation that are useless to most people.
I understand it can get frustrating to see the same question over and over, but if people keep asking it over and over it is likely something is wrong with the documentation and not those people…
Oh so agree with this. I work in document solutions and my new job has had me shift from a purpose built WYSIWYG program with GUI to dabbling in C# instead. There are loads of web based resources for the beginner but the Microsoft library is an unwelcoming maze of self referencing jargon. I generally end up on StackOverflow sifting through the snark trying to find the one tongue-in-cheek ‘gag’ response which actually answers my ridiculously simple noob question.
I’ve found MS’s docs to be pretty good, actually.
They’re references, not tutorials. You aren’t supposed to be going to them to learn the language itself, they’re for learning the libraries.
I know this might be even less helpful than RTFM posts, but have you considered using Cg instead? I know nobody really uses it, unless they’re programming for PS3, but it can output both DirectX and OpenGL shader programs, and the syntax is (obviously) quite similar to C and other derivatives.
So here’s what I did. Do some of the basic tutorials like NeHe at to understand all the concepts. Then throw away all that sample code and start again.
Download the OpenGL 3.3 language and shader reference manuals:
Now either use SDL to get everything initialized, or do it yourself (painful.)
As you write your new code, use the reference manuals to find the features you want, then Google for uses of those features, checking to make sure the uses are version 3.3. StackOverflow has lots of good stuff.
Because you are starting with the reference manuals, you won’t be using the wrong functions. But do make *very* sure when you Google that the examples are the right version.
You can switch to earlier versions (2.1 is close to WebGL) or later versions (4.0), and the principle is the same. Anchor yourself with reference manuals and then Google for working code.
OpenGL is a funny thing to write in. My experience a few semesters ago, writing a flow visualization, was that you REALLY need to test on multiple platforms. Some platforms (Windows/NVIDIA) are more lenient than others. I had what I thought was a working program because it ran on my primary development machine (my MacBook in Windows). Then I tried to test it in Linux and Mac on the same machine and got garbage out, ditto for my much more powerful AMD-based desktop.
I had to strip the GL-based stuff down to 1.2 level stuff to get it to work everywhere before the deadline. I skimmed the comments and bookmarked one of the tutorial sites listed — hopefully that will be helpful. I look forward to this series, no matter what happens.
Yay, new programming project! I’m always so interested by these. Every pedantic little detail is fascinating to me, don’t worry.
Also, I really enjoyed Frontier the first time around, so Frontier Rebooted should be interesting. Are you still planning to just leave it as a proof of concept sort of thing for the terrain, or are you going to do more on the “how to make this an actual game” side of things?
A game all about erosion would be kind of weird, but at least it hasn’t been done to death.
You have probably already come across this one, but I thought I’d still drop this one here:
It’s an OpenGL 4.0 tutorial series, covering some of the basics up to simple lighting. I have no clue about how good it is, but his(her?) DirectX tutorials are pretty good.
How ubiquitous are OpenGL 4 machines these days? I know that my five-years old laptop stops at 3.3. I’ll get a new laptop soon, but I am wondering if I should continue with 3.3 or if pretty much everybody can run 4.0 by now.
Whenever you have this kind of question, about the install base of different things, I strongly recommend you check Steam’s hardware survey. Here:
Annoyingly, they don’t say anything about OpenGL, but you can deduce that from the graphics card’s statistics.
After a bit of research, it would seem that OpenGL 3.2 is about the sweet spot for modern OpenGL and cross platform compatibility (has Apple updated their drivers yet?)
Thank you for the tip, this is very useful. Is there a way to convert a DirectX version into an OpenGL version? Like “DirectX 9 shader model 2” into “OpenGL 2.1”? I understand that I am violently mixing software and hardware here, but there may be such a correspondence table somewhere.
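There's no exact table, since the two APIs never lined up feature-for-feature, but a rough from-memory correspondence is often quoted along these lines. Treat the pairings below as a starting point for reading hardware surveys, not as gospel:

```cpp
#include <map>
#include <string>
#include <cassert>

// Approximate pairings between Direct3D generations and the OpenGL
// versions with comparable hardware features. These are rough rules
// of thumb, not an official mapping.
const std::map<std::string, std::string> kDxToGl = {
    {"DirectX 9 (SM2/SM3)", "OpenGL 2.x"},
    {"DirectX 10",          "OpenGL 3.2/3.3"},
    {"DirectX 11",          "OpenGL 4.x"},
};
```

So a card listed as DirectX 10 class in the Steam survey can generally be expected to handle an OpenGL 3.3 core context, which is why 3.3 keeps coming up in this thread as the practical baseline.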
only 1500 words? was hoping for so many more :)
Absolutely adore your programming posts and I am glad you are back at it :)
I find it a little odd when I see people mention they are having trouble with certain things about this or that GL stuff, because I never had much trouble getting into it. Then I remember my programming course is focused on how to make games and get you a job doing it, so graphical programming is rather important in my education. But it’s not something easy to locate online, nor is it something other fields seem to care about (also I tend to assume everyone knows more than I do about everything).
If you have any maths troubles I’d recommend looking into OpenGL Mathematics as I’ve found it solid to use, much better than my own maths library.
So maybe when Shamus is finished doing this he can write a book about how to do this stuff which is comprehensible and useful, and make $$$. I mean, the market is doubtless not that large, but to say the least it does not seem to be saturated.
Possible odd marketing trick: His loyal followers (us) could go around to all those places Google searches take you to, where people are asking questions about this stuff, and recommend Shamus’ excellent book. Then everyone looking for answers and going to look in those places will see that their prayers can be answered for a couple of bucks and a quick download from Amazon.
An idea for the title of the book!
The F***ing Manual
by Shamus Young
+1
That would be hilarious.
Oh, yes!
+1
I’m really pleased to see that you’re getting back to this Shamus. I’m planning to get into the GLSL myself when I get some free time again over the summer, but I know how tricky this stuff is having tinkered with it on mobile phones and Linux..!
My advice, if you’re serious about learning OpenGL and GLSL, invest in the OpenGL SuperBible (get the version for whichever OpenGL version you mean to target). It doesn’t assume you know anything, and walks you through the basics without referencing the old ways.
It also has a ton of example code, which you can also download for free if you want just examples of the shaders. You can get those here:
It goes from the most basic (drawing a single triangle) to some pretty advanced stuff, so even just reading the code should be helpful.
The Superbible seems to rely a lot on a library written by the authors themselves. I guess that’s fine(ish) when you program in C++, but I don’t (I code in go these days). If their sb6 library abstracts too much, then the book won’t be that useful to me. Indeed, I don’t want to learn THEIR OpenGL, I want to learn the ‘real’ one. This is why I have not purchased it yet. Do you think that their sb6 library is light enough for people like me?
I used SB5, since that’s the one I bought a while ago. That said, they simply used the library to handle things after explaining why they did it that way, or to hide things until they can get around to explaining it.
Since you have the source to the library anyway, you can dive in and see how things work in as much detail as you want. You can even modify the library and see how it works.
For example, even though the examples in SB5 used GLUT, I never bothered with it and instead I used their library with SFML to handle the windowing. SB6 seems to have done away with GLUT and provides a framework for the application better suited to their purpose.
Take the simplest example I was able to find in the SB6 source bundle: singletri. If what you want is to learn GLSL, none of that is abstracted by the library. The shader code is right there for you to see (in this case, a vertex shader that creates a triangle and a fragment shader that colors it). For someone like Shamus, that says he’d like to see examples of modern shader code, these are perfect.
So, to answer your actual question: Do I think the library abstracts too much? The library abstracts all the boilerplate code to create a window and an OpenGL context, with an object oriented approach. That’s all the “not OpenGL” stuff you don’t care about.
At least for the 5th edition (I can’t speak about the 6th edition), the book explained how OpenGL is modeled, and why you need to take each step. I’ve found it very helpful to understand OpenGL (though I’m not an expert by any stretch, and I haven’t used anything more complex than textured triangles in my own work).
Thank you for taking the time to clarify this. I am relieved :). It will be nice to have an `exhaustive’ and consistent documentation other than the Reference Pages for once.
It was well worth reading the 1500 words. It’s not the end result I enjoy, it’s the journey getting there. Your thought processes and sense of humor I personally enjoy more than the end product usually.
Ooooh, how exciting! I loved reading the Project Frontier entries, they made me want to learn how to do programming. Don’t have time for that unfortunately, so I’ll just have to live vicariously through these posts.
Ok, this is OT and may even considered to be a troll — but it is not intended as such.
In general, I probably wouldn’t play stuff (on a PC) that works at 30 FPS. I can easily see the difference between 30 and 60 FPS and 30 just isn’t smooth at all. 20 is borderline unplayable, imo.
In other words — why do you say that most games run at 30 FPS (on a PC)? I can’t truly remember the last game I’ve played that was intended to run at this FPS — possibly Atlantica Online — and it was annoying as hell.
I strongly believe that ‘the world’ has moved away from ’30 FPS is enough’ long ago. 60 is the new 30 :)
Sadly, the opposite seems to be the case in some places (read: Not PC). Time was, you could reasonably expect consoles to attain 60 (or 50) consistently, but I’ve read that these days, consoles are completely failing to keep up with new graphics technology, and many games choose to sacrifice 60fps in favour of “more shineys” at 30. And of course, if you complain about this, you get called a Nazi. (I’m not even kidding)
The reason that console players at least have the option now is because the complaining about it was so numerous and so vocal.So while there were many idiots that called people nazi for it,they> | https://www.shamusyoung.com/twentysidedtale/?p=22825 | CC-MAIN-2020-10 | refinedweb | 8,185 | 70.33 |
Command-line interface¶
This page documents the details of Fabric’s command-line interface,
fab.
Options & arguments¶
Note
By default,
fab honors all of the same CLI options as Invoke’s
‘inv’ program; only additions and overrides are listed here!
For example, Fabric implements
--prompt-for-passphrase and
--prompt-for-login-password because they are SSH specific, but
it inherits a related option – –prompt-for-sudo-password – from Invoke, which handles sudo autoresponse
concerns.
-H
,
--hosts
¶
Takes a comma-separated string listing hostnames against which tasks should be executed, in serial. See Runtime specification of host lists.
-i
,
--identity
¶
Overrides the
key_filenamevalue in the
connect_kwargsconfig setting (which is read by
Connection, and eventually makes its way into Paramiko; see the docstring for
Connectionfor details.)
Typically this can be thought of as identical to
ssh -i <path>, i.e. supplying a specific, runtime private key file. Like
ssh -i, it builds an iterable of strings and may be given multiple times.
Default:
[].
--prompt-for-login-password
¶
Causes Fabric to prompt ‘up front’ for a value to store as the
connect_kwargs.passwordconfig setting (used by Paramiko when authenticating via passwords and, in some versions, also used for key passphrases.) Useful if you do not want to configure such values in on-disk conf files or via shell environment variables.
--prompt-for-passphrase
¶
Causes Fabric to prompt ‘up front’ for a value to store as the
connect_kwargs.passphraseconfig setting (used by Paramiko to decrypt private key files.) Useful if you do not want to configure such values in on-disk conf files or via shell environment variables.
-S
,
--ssh-config
¶
Takes a path to load as a runtime SSH config file. See Loading and using ssh_config files.
Seeking & loading tasks¶
fab follows all the same rules as Invoke’s collection loading, with the sole exception that the default collection
name sought is
fabfile instead of
tasks. Thus, whenever Invoke’s
documentation mentions
tasks or
tasks.py, Fabric substitutes
fabfile /
fabfile.py.
For example, if your current working directory is
/home/myuser/projects/mywebapp, running
fab --list will cause Fabric to
look for
/home/myuser/projects/mywebapp/fabfile.py (or
/home/myuser/projects/mywebapp/fabfile/__init__.py - Python’s import system
treats both the same). If it’s not found there,
/home/myuser/projects/fabfile.py is sought next; and so forth.
Runtime specification of host lists¶
While advanced use cases may need to take matters into their own hands, you can
go reasonably far with the core
--hosts flag, which specifies one or
more hosts the given task(s) should execute against.
By default, execution is a serial process: for each task on the command line,
run it once for each host given to
--hosts. Imagine tasks that simply
print
Running <task name> on <host>!:
$ fab --hosts host1,host2,host3 taskA taskB Running taskA on host1! Running taskA on host2! Running taskA on host3! Running taskB on host1! Running taskB on host2! Running taskB on host3!
Note
When
--hosts is not given,
fab behaves similarly to Invoke’s
command-line interface, generating regular instances of
Context instead of
Connections.
Executing arbitrary/ad-hoc commands¶
fab leverages a lesser-known command line convention and may be called in
the following manner:
$ fab [options] -- [shell command]
where everything after the
-- is turned into a temporary
Connection.run
call, and is not parsed for
fab options. If you’ve specified a host list
via an earlier task or the core CLI flags, this usage will act like a one-line
anonymous task.
For example, let’s say you wanted kernel info for a bunch of systems:
$ fab -H host1,host2,host3 -- uname -a
Such a command is equivalent to the following Fabric library code:
from fabric import Group Group('host1', 'host2', 'host3').run("uname -a")
Most of the time you will want to just write out the task in your fabfile (anything you use once, you’re likely to use again) but this feature provides a handy, fast way to dash off an SSH-borne command while leveraging predefined connection settings. | https://docs.fabfile.org/en/2.5/cli.html | CC-MAIN-2020-24 | refinedweb | 675 | 56.05 |
.1: Streams
About This Page
Questions Answered: How do I write a program that reads in an unspecified amount of input or repeats some other operation an initially unknown number of times? How can I operate on a bunch of data one element at a time without first storing all of it in memory?
Topics: Finite and infinite streams. By-name parameters.
What Will I Do? Read and program.
Rough Estimate of Workload: A couple of hours? This is a fairly long chapter, but much of it is optional reading.
Points Available: A190. (There isn’t all that much to do, but the points value is high because the assignments are intended as near-mandatory.)
Related Projects: Sentiments (new), HigherOrder.
Introduction
As a warm-up for the main subject of this chapter, streams, let’s consider another topic.
Suppose we intend to create a function
fiveTimes. This function should return a five-element
vector where each element is the value of a given
Int expression:
fiveTimes(100)res0: Vector[Int] = Vector(100, 100, 100, 100, 100) fiveTimes(1 + 1)res1: Vector[Int] = Vector(2, 2, 2, 2, 2)
We really want five separate evaluations of the same given expression. If different evaluations yield different results, that should be reflected in the output:
import scala.util.Randomimport scala.util.Random fiveTimes(Random.nextInt(100))res2: Vector[Int] = Vector(43, 55, 21, 46, 87) fiveTimes(Random.nextInt(100))res3: Vector[Int] = Vector(33, 65, 62, 31, 73)
Passing Unevaluated Parameters
Here’s a version of
fiveTimes that works. It’s nearly identical to our earlier
attempt:
def fiveTimes(numberExpression: =>Int) = Vector.tabulate(5)( anyIndex => numberExpression )
As we just re-established, ordinary parameters — also known as by-value parameters — are evaluated before the function call begins. What gets passed in is a value. In contrast, a by-name parameter is not evaluated before the function call begins: what gets passed in is an unevaluated expression. That parameter expression will be evaluated only when the function body uses it (if in fact it does use it). That is, the parameter is evaluated while the function is already running.
In case a function uses a by-name parameter multiple times, the parameter gets evaluated multiple times:
def fiveTimes(numberExpression: =>Int) = Vector.tabulate(5)( anyIndex => numberExpression )
tabulate forms a five-element vector, placing an Int at each of the indices from 0 to 4. Each element is obtained by evaluating numberExpression. If numberExpression is, say, Random.nextInt(100), five distinct random numbers will be placed in the vector.
You can think of by-name parameters as a sort of parameterless function that is passed to another function as a parameter.
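To see the difference in action, here is a small sketch of our own (the names byValue, byName, and noisy are not from the course material) that counts how many times a parameter expression gets evaluated:

```scala
var evaluations = 0

// A side effect lets us observe each evaluation of the expression.
def noisy() = { evaluations += 1; 21 }

def byValue(n: Int) = n + n    // n is evaluated once, before the call begins
def byName(n: => Int) = n + n  // n is evaluated at each use inside the body

val sum1 = byValue(noisy())  // noisy() runs once here
val sum2 = byName(noisy())   // noisy() runs twice here, once per use of n
```

Both sums equal 42, but the by-value call evaluates noisy() once while the by-name call evaluates it twice, for a total of three evaluations.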
Here’s an illustrated example of a by-name parameter:
In O1, you aren’t required to define methods that take by-name parameters. However, you
will need to use such methods. There’s nothing much to it, and indeed you have already
done so, since
getOrElse on
Options takes a by-name parameter.
getOrElse has been
defined to either a) return the contents of the
Option wrapper and leave the parameter
untouched, or b) evaluate the parameter expression and return the result if there’s nothing
in the
Option. This is why the method works as discussed above.
Another familiar example
In Chapter 4.4 you saw that the subexpression to the right of the logical
operators
&& and
|| is evaluated only in case the subexpression on the
left isn’t enough to decide the value of the logical expression. You also
know that each of those operators is actually a method on
Boolean objects
(Chapter 4.5); the “subexpression to the right” is actually a by-name
parameter of that method.
More jargon
If a parameter expression is evaluated at least once, the parameter is
termed strict (tiukka). The parameter of
fiveTimes is strict; so
are all by-value parameters. If there is a possibility that a parameter
does not get evaluated at all, it’s termed non-strict (väljä);
getOrElse’s parameter is one example.
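You can verify getOrElse’s non-strictness yourself. In this little sketch (all names apart from getOrElse are our own), the fallback expression is evaluated only when the Option is empty:

```scala
var fallbackComputed = false

// A side effect reveals whether the parameter expression was evaluated.
def fallback() = { fallbackComputed = true; 0 }

val full: Option[Int] = Some(10)
val fromFull = full.getOrElse(fallback())    // 10; fallback() never runs

val empty: Option[Int] = None
val fromEmpty = empty.getOrElse(fallback())  // 0; only now does fallback() run
```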
Now to the main subject of this chapter.
Challenge: An Unknown Number of Repetitions
So far in O1, we have repeated commands by gathering all the necessary data in a collection,
then working through that collection one element at a time. The number of repetitions has
equalled the size of the collection, which has been known to our programs by the time we
start traversing the collection. For example, we have often collected data in a vector,
which is wholly stored in the computer’s memory; we have then used a
for loop or a
higher-order method to do something with some or all of the vector’s elements.
In contrast, consider these common scenarios:
- We intend to read inputs from the user for the program to process until the user indicates they wish to stop. The program doesn’t know in advance how many inputs the user will enter before stopping; the user may not know, either.
- We intend to process data from a source (a file, a network, a physical sensor governed by the program, etc.), processing a single element at a time. We don’t know the number of elements before starting, and there could be so many of them that we don’t know whether it’s possible to store all of them at once in the computer’s memory.
- We intend to compute increasingly precise approximations of a desired quantity by repeatedly applying a particular computation; Newton’s method for finding roots is an example. We need an iterative process that applies the computation to the previous result until we reach sufficient precision. We don’t know in advance how many iterations it will take before that happens.
More generally still: we wish to repeat an operation an initially unknown number of times. In principle, there is no limit to the number of repetitions; we might have a source that continuously generates more data, for instance.
A more concrete example
Let’s write a toy program that reports the lengths of four strings. There is nothing new here yet.
object LengthReport extends App { def report(input: String) = "The input is " + input.length + " characters long." def lines = Vector("a line of text", "another", "not written by the user", "but hardcoded into the program") lines.map(report).foreach(println) }
Now, let’s undertake to edit that program so that it works like this, in the text console:
Enter some text: hello
The input is 5 characters long.
Enter some text: hello again
The input is 11 characters long.
Enter some text: third input
The input is 11 characters long.
Enter some text: fourth
The input is 6 characters long.
Enter some text: stop already
The input is 12 characters long.
Enter some text: stop
The input is 4 characters long.
Enter some text: please
The program terminates once the user enters "please". Before that, the user may enter any number of inputs, large or small.
How can we specify when the input-reading and report-printing should stop when we don’t know the number of inputs?
One way to do that is to use something called a stream. Before we apply streams to that problem, however, let’s start from some basics.
A Stream of Elements
A stream (virta) is a collection of elements. The word is meant to evoke the notion that a flowing stream “brings” elements for processing.
You can create a stream by listing its elements, just like you can create other collections:
val streamOfData = Stream(10.2, 32.1, 3.14159)streamOfData: Stream[Double] = Stream(10.2, ?)
toString on Streams, which is here used by the REPL, doesn’t list the entire contents of the stream. This is a symptom of how streams work; we’ll say more about that in a bit.
Another approach is to use
toStream, which creates a stream from an existing collection
(like
toVector,
toBuffer, etc.; Chapter 4.1):
val vectorOfWords = Vector("first", "second", "third", "fourth")vectorOfWords: Vector[String] = Vector(first, second, third, fourth) val streamOfWords = vectorOfWords.toStreamstreamOfWords: Stream[String] = Stream(first, ?)
Streams have the collection methods you know. As an example, the following command skips past a couple of the first elements in the stream, then picks out the first of the remaining ones:
streamOfWords.drop(2).headres9: String = third
A loop works, too:
for (word <- streamOfWords) { println("data from the stream: " + word) }data from the stream: first data from the stream: second data from the stream: third data from the stream: fourth
As do higher-order methods:
streamOfWords.filter( _.length > 5 ).map( _ + "!" ).foreach(println)second! fourth!
Nothing groundbreaking there. How are streams different?
Infinite Streams
One special thing about streams is that their size doesn’t need to be finite. You can define an endless stream of values. (Cf. infinite sequences in math.)
The factory method
continually creates an infinite stream:
val myStream = Stream.continually("SPAM")myStream: Stream[String] = Stream(SPAM, ?)
We have just created a collection that has the string
"SPAM" as each of its elements
and that contains an infinite number of these identical elements. Let’s
take the first
five and print them out:
myStream.take(5).foreach(println)SPAM SPAM SPAM SPAM SPAM
take returns a finite stream of the given length. That stream contains some of the elements from the infinite stream.
foreach doesn’t have an infinite amount of work to do, since we apply it to only the five-element stream returned by take.
Consider another example of an infinite stream. (It features the
++ operator from Chapter 4.1,
which combines two collections.)
val words = Vector("first", "second", "third", "fourth")words: Vector[String] = Vector(first, second, third, fourth) def wordStream = words.toStream ++ Stream.continually("OUT OF WORDS")wordStream: Stream[String] wordStream.take(7).foreach(println)first second third fourth OUT OF WORDS OUT OF WORDS OUT OF WORDS
Note the toStream here: without it, our code wouldn’t work. Why not? What would happen? You can try it in the REPL, but if you have any unsaved work, save it first.
In the examples above, the infinite part of each stream was made up of identical elements. In the example below, that isn’t the case. Let’s form a stream of pseudorandom numbers.
val randomStream = Stream.continually( Random.nextInt(10) )randomStream: Stream[Int] = Stream(8, ?)
Now let’s take a few numbers:
randomStream.take(5).foreach(println)8 9 5 6 8
And here we generate random numbers between 0 and 99 until we happen to hit one that exceeds 90:
Stream.continually( Random.nextInt(100) ).takeWhile( _ <= 90 ).mkString(",")res10: String = 31,84,16,45,72,81,41,36,87,19,79,62,13,60,47,45,66,58,85,15,8,9,7,30,68,41,48,80,21,78,72,27 Stream.continually( Random.nextInt(100) ).takeWhile( _ <= 90 ).mkString(",")res11: String = 0,65,83,38,75,33,11,18,75,51,3
takeWhile returns a stream that terminates just before the first element that fails to match the given criterion. Even though our original stream is infinite, the resulting stream is not. However, like continually and take, takeWhile does not yet generate all the numbers; it merely returns a stream that is capable of generating them until a terminating element is reached.
mkString produces a string that describes all the elements in the stream. Had we invoked it on an infinite stream, we would crash the program by filling the available memory. On a finite stream, however, this works. In this example, the streams’ contents are random and we get streams of different lengths when we run the command multiple times.
How Streams Work
Obviously, a computer cannot store an infinite number of distinct elements, which is where
the key feature of streams comes in: the whole stream isn’t constructed as soon as you create
a stream object. Instead, the stream generates new elements only as needed. For instance, in
order to run
mkString above, it was necessary to generate some random numbers by repeatedly
evaluating
Random.nextInt(100). Those parts of the stream were actually generated in memory,
but only when
mkString needed them and only to the extent necessary.
Earlier in this chapter, we discussed by-name parameters, which are evaluated by the called
function as needed. It is precisely such a by-name parameter that we passed to
continually;
the method forms a collection that keeps evaluating the parameter whenever we access the
stream for a new element.
This principle of generating elements only when needed will be crucial as we use streams for handling input, next.
Back to Our Input-Processing Program
Earlier, we came up with this program that reports the lengths of four specific strings.
object LengthReport extends App { def report(input: String) = "The input is " + input.length + " characters long." def lines = Vector("a line of text", "another", "not written by the user", "but hardcoded into the program") lines.map(report).foreach(println) }
What we wanted instead is a program that reports on the lengths of an arbitrary number
of inputs and terminates with
"please". With streams, this is easy to accomplish. Here’s
an almost working implementation:
object SayPlease extends App { def report(input: String) = "The input is " + input.length + " characters long." def inputs = Stream.continually( readLine("Enter some text: ") ) inputs.map(report).foreach(println) }
This command doesn’t yet read any input; it defines a stream that evaluates readLine whenever we need a new element.
map generates a “stream of reports”, each of whose elements is formed (as needed) by calling readLine and applying report to the resulting string. This command also doesn’t prompt the user for the inputs, nor does it call report on them. It simply prepares the stream to do so if and when we later access the elements.
We use foreach to print out the elements of the report stream. In order to print an element, it’s necessary to determine what that element is by prompting the user for input and applying report. In practice, what we get is a program that repeatedly receives keyboard input and reports its length.
That pretty much works. But we didn’t attend to the magic word yet: the above program just keeps prompting for new inputs till the cows come home. This one doesn’t:
object SayPlease extends App { def report(input: String) = "The input is " + input.length + " characters long." def inputs = Stream.continually( readLine("Enter some text: ") ) inputs.takeWhile( _ != "please" ).map(report).foreach(println) }
takeWhile stems the stream at "please". No more elements (user inputs) will be generated once that element is reached. (Cf. the earlier example of takeWhile on a random stream.)
Assignment: Sentiments about Movies
Let’s create a program that reads in the user’s short textual comments about movies. The program then tries to figure out whether each of those user inputs is positive or negative in tone. That estimate will be based on thousands of earlier movie reviews that have been written by real people, manually marked as positive or negative by a human, and made accessible to the program.
The more difficult and laborsome parts of this program have already been written. You’ll only need to put the user interface in order.
Task description
Your task is to fetch Sentiments and complete its user interface in
MovieSentimentApp.scala.
The UI should work in the text console as shown below.
Please comment on a movie or hit Enter to quit: This is a masterpiece, a truly fantastic work of cinema art.
I think this sentiment is positive. (Average word sentiment: 0.36.)
Please comment on a movie or hit Enter to quit: I hated it.
I think this sentiment is negative. (Average word sentiment: -0.41.)
Please comment on a movie or hit Enter to quit: The plot had holes in it.
I think this sentiment is negative. (Average word sentiment: -0.28.)
Please comment on a movie or hit Enter to quit: Adam Sandler
I think this sentiment is negative. (Average word sentiment: -0.80.)
Please comment on a movie or hit Enter to quit: It was great.
I think this sentiment is positive. (Average word sentiment: 0.04.)
Please comment on a movie or hit Enter to quit: It wasn't great.
I think this sentiment is negative. (Average word sentiment: -0.16.)
Please comment on a movie or hit Enter to quit: It was "great".
I think this sentiment is positive. (Average word sentiment: 0.04.)
Please comment on a movie or hit Enter to quit:
Bye.
The training data tells our program which words occur in positive reviews and which
words occur in negative ones. You can find that data in
sample_reviews_from_rotten_tomatoes.txt
within the Sentiments project. There’s no need for you to touch that data, but do take
a look; it will help you understand how the program works.
The analyzer’s programmatic implementation is in
SentimentAnalyzer.scala. Go ahead
and browse the Scaladocs and the code if you want (but it's not required).
Implement the user interface in
MovieSentimentApp.scala in package
o1.sentiment.ui.
The parts of the puzzle are there already, but you’ll need to use a stream and its methods
to piece them together.
Instructions and hints
- The UI should work much like our previous example program.
- In fact, there is rather little that you need to do. You can solve the entire assignment with a couple of lines of code, or even just one.
Submission form
A+ presents the exercise submission form here.
Assignment: Reading in Measurements
This assignment provides a bit more practice on reading in a stream of keyboard input.
Task description
In Chapter 6.3, you wrote
averageRainfall. That function computes the average of a
vector of rainfall measurements (
Ints), stopping at 999999, which marks the end of
the first data series. Any negative numbers in the input vector are ignored.
Now implement the same functionality as an interactive program. Place the program in
project HigherOrder, in
RainfallApp.scala
The program should work precisely as illustrated in the examples below.
Enter rainfall (or 999999 to stop): 10
Enter rainfall (or 999999 to stop): 5
Enter rainfall (or 999999 to stop): 100
Enter rainfall (or 999999 to stop): 10
Enter rainfall (or 999999 to stop): 5
Enter rainfall (or 999999 to stop): -100
Enter rainfall (or 999999 to stop): -100
Enter rainfall (or 999999 to stop): 110
Enter rainfall (or 999999 to stop): 999999
The average is 40.
Enter rainfall (or 999999 to stop): 999999
No valid data. Cannot compute average.
Enter rainfall (or 999999 to stop): -123
Enter rainfall (or 999999 to stop): -1
Enter rainfall (or 999999 to stop): 999999
No valid data. Cannot compute average.
Instructions and hints
- In this assignment, you do not have to trouble yourself with what happens if the user enters invalid values such as the empty string "", "llama", or "3.141". You can assume that the entered text can be successfully interpreted as an integer by the toInt method (Chapter 4.5).
- You could solve the assignment by reading the inputs with a stream, copying them into a vector (
toVector), and then computing the average as you did in the earlier rainfall assignment. On the other hand, you don’t need to use a vector at all, since a stream also has the necessary methods.
- For instance, it’s fine to call size and sum on a stream, as long as it’s finite.
- If you call those methods on a stream, use a val that refers to the stream.
- Make sure to spell the program’s output exactly right so that the automatic grader will give you the points you deserve.
Submission form
A+ presents the exercise submission form here.
Streams of Numbers, Conveniently
Suppose we want a collection of all the positive integers: 1, 2, 3, and so on without end. A vector (or a buffer or any other strictly evaluated collection) won’t do, because all of a vector’s elements are stored in memory and an infinite vector would need an infinite amount of storage.
But a stream will do:
def positiveNumbers = Stream.from(1)positiveNumbers: Stream[Int] positiveNumbers.take(3).foreach(println)1 2 3
This uses from, a helpful method that creates a stream of increasing numbers. We pass in the first number.
Here we increment by ten at each element:
def evenTens = Stream.from(0, 10)evenTens: Stream[Int] evenTens.take(3).foreach(println)0 10 20
And here we increment by -1, which gives us a stream of decreasing numbers:
def negativeNumbers = Stream.from(-1, -1)negativeNumbers: Stream[Int] negativeNumbers.take(3).foreach(println)-1 -2 -3
Of course,
from also works in combination with other methods:
val firstBigSquare = Stream.from(0).map( n => n * n ).dropWhile( _ <= 1234567 ).headfirstBigSquare: Int = 1236544
Optional Reading: Different Kinds of Streams
A stream that iterates
iterate is another method worth mentioning. It creates a stream that
generates each element by re-applying a function to the previous element:
def alternating = Stream.iterate(1)( x => -2 * x )alternating: Stream[Int] alternating.take(4).foreach(println)1 -2 4 -8 def babble = Stream.iterate("")( "blah" + _ )babble: Stream[String] babble.take(4).foreach(println) blah blahblah blahblahblah
The example below uses
iterate to generate a stream that implements an
algorithm that estimates the square root of a given number without resorting
to the
sqrt library function. (The program applies Newton’s method
for approximating square roots of positive numbers.)
def squareRoot(n: Double) = { def isTooFar(approx: Double) = (approx * approx - n).abs > 0.0001 def nextApprox(prev: Double) = (prev + n / prev) / 2 def streamOfApproximations = Stream.iterate(1.0)(nextApprox) streamOfApproximations.dropWhile(isTooFar).head }squareRoot: (n: Double)Double squareRoot(9)res12: Double = 3.000000001396984 squareRoot(654321)res13: Double = 808.9011064400888
iterate applies nextApprox repeatedly: each element of the stream is computed from the previous one, yielding ever closer approximations.
A recursively defined stream
Methods such as
continually,
from, and
iterate work well for many needs.
But what if we didn’t have these factory methods? Or what if you want to define
a stream that doesn’t match what those methods do?
You can use recursion to tailor exactly the sort of stream that you need.
Below is a very simple example of a recursive (i.e., self-referential)
definition of a stream. This definition produces a stream of positive integers
(just like
Stream.from(1)).
def positiveNumbers(first: Int): Stream[Int] = first #:: positiveNumbers(first + 1)positiveNumbers: (first: Int)Stream[Int] positiveNumbers(1).take(3).foreach(println)1 2 3
#:: combines a single value and a stream, yielding another stream. The value to the left of the operator becomes the first element; it’s followed by the elements of the stream on the right-hand side.
In the Scala API, methods such as
from and
continually have recursive
implementations. The same basic principle works for defining any kind of stream;
experiment!
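As another sketch in the same spirit (the names fibFrom and fibonacci are our own), here is a recursively defined stream of Fibonacci numbers; each element after the first two is the sum of the two before it:

```scala
def fibFrom(a: BigInt, b: BigInt): Stream[BigInt] =
  a #:: fibFrom(b, a + b)  // the tail is defined in terms of the stream itself

val fibonacci = fibFrom(0, 1)
fibonacci.take(8).toVector  // Vector(0, 1, 1, 2, 3, 5, 8, 13)
```

Only the elements we actually take are ever computed, even though the definition describes an infinite stream.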
Recursion is a versatile and powerful technique; streams are just one of its countless applications. Chapter 11.2 will say more.
A code-reading challenge
Here’s a more elaborate example of streams. Can you figure out what this function does and how it does it?
The names of the function and its components are vague but not deliberately misleading.
This is a math-y sort of program.
def mystery(limit: Int): Vector[Int] = { import scala.math.sqrt val odd = Stream.from(3, 2) def candidates = odd.takeWhile( _ <= limit ) def initialVals = odd.takeWhile( _ <= sqrt(limit).toInt ) def multiples(n: Int) = Stream.from(n * n, 2 * n).takeWhile( _ <= limit ) def rejected = initialVals.flatMap(multiples) val result = candidates.diff(rejected) result.toVector }
Optional Reading: Properties of Streams
What is the difference between these two programs?
val result = "llama".length println(result) println(result)
def result = "llama".length println(result) println(result)
The first program computes the length of a string, stores the result (in a
val), and
prints it out twice. The second program recomputes the length of the string upon request
(using the
def), which happens twice in the example. The first program uses an additional
memory slot to avoid repeating work.
What does that have to do with streams?
Streams are non-strict and lazy
We’ve already established that all the elements of a stream aren’t necessarily generated at all; in other words, a stream is a non-strict collection. We also noted that a stream element is generated only when that element is actually needed for some purpose.
Let’s return to this example:
val randomStream = Stream.continually( Random.nextInt(100) )randomStream: Stream[Int] = Stream(12, ?) randomStream.take(5).mkString(",")res14: String = 12,62,28,14,31
The stream evaluates Random.nextInt(100) only when or if a new element is needed. In this example, that happens five times, since mkString forces the stream to access five of its elements. In order to do that, the stream needs to evaluate the randomizing expression repeatedly.
Careful now: an element is generated into the stream whenever a new element is needed. What if you reuse a previously generated element?
randomStream.take(5).mkString(",")res15: String = 12,62,28,14,31 randomStream.take(5).mkString(",")res16: String = 12,62,28,14,31 randomStream.take(10).mkString(",")res17: String = 12,62,28,14,31,27,79,18,78,43
In programmer-speak, a stream is not merely non-strict but lazy (laiska): it not only doesn’t generate elements unless it needs them but also stores the elements to save itself the future trouble of having to generate them again.
Laziness is even more conspicuous in the next example.
Let’s start by creating a function that generates new elements and notifies us about it:
var counter = 0counter: Int = 0 def generateElement() = { val newElem = counter println("I guess I have to bother to generate a new element: " + newElem) counter += 1 newElem }generateElement: ()Int
Now we form a stream that uses the above function to generate its elements:
val streamOfInts = Stream.continually( generateElement() )I guess I have to bother to generate a new element: 0 streamOfInts: Stream[Int] = Stream(0, ?)
The stream generated its first element right away, as you can see in the toString output. (N.B. In future versions of the Scala API, this behavior will not occur; a stream will not generate even the first element like this.)
Examining the first five elements necessitates additional
generateElement calls:
streamOfInts.take(5).mkString(",")I guess I have to bother to generate a new element: 1 I guess I have to bother to generate a new element: 2 I guess I have to bother to generate a new element: 3 I guess I have to bother to generate a new element: 4 res18: String = 0,1,2,3,4
Re-examining them does not, and the stream doesn’t do anything it doesn’t need to:
streamOfInts.take(5).mkString(",")res19: String = 0,1,2,3,4
Examining further elements makes the stream work:
streamOfInts.take(10).mkString(",")I guess I have to bother to generate a new element: 5 I guess I have to bother to generate a new element: 6 I guess I have to bother to generate a new element: 7 I guess I have to bother to generate a new element: 8 I guess I have to bother to generate a new element: 9 res20: String = 0,1,2,3,4,5,6,7,8,9
val,
def, streams, and memory usage
As long as you have a reference to a stream stored somewhere, you don’t lose the stream
to the garbage collector. This was illustrated in the examples just
above, where we had variables (
randomStream,
streamOfInts) that referred to our
streams. What if we hadn’t had them?
def randomStream = Stream.continually( Random.nextInt(100) )randomStream: Stream[Int] randomStream.take(5).mkString(",")res21: String = 42,72,7,86,30 randomStream.take(5).mkString(",")res22: String = 2,2,64,87,54
randomStreamcalls a parameterless function. Whenever we evaluate this expression, we get an entirely new stream.
Now that we didn’t store a reference to either of the random streams in a variable, the stream objects won’t stay in memory. The garbage collector promptly releases the memory for other use.
Let’s compare two more code fragments. Here’s the first one, in which a
val refers
to a stream:
val longStreamOfInputs = Stream.continually( readSomeDataFromSomewhere() ) longStreamOfInputs.map(doStuffWithDataElement).foreach(println) longStreamOfInputs.map(doSomethingElseWithElement).foreach(println)
longStreamOfInputsstores a reference to all the data.
Here’s the same code but with a
def instead of a
val:
def longStreamOfInputs = Stream.continually( readSomeDataFromSomewhere() ) longStreamOfInputs.map(doStuffWithDataElement).foreach(println) longStreamOfInputs.map(doSomethingElseWithElement).foreach(println)
This program doesn’t store the stream and its contents anywhere. In fact, even while the computer processes the stream one element at a time, each previously processed element becomes fair game for the garbage collector, which operates in the background. Especially if the stream is long, it can happen that the first elements have been processed and removed from memory before the last elements have even been generated.
Whether we store the stream in a variable can thus make a difference. Circumstances dictate whether it’s a good idea to do so:
- If the stream is short, if there is plenty of memory available, and especially if the goal is to traverse the same elements multiple times, then it’s a good idea to use a variable.
- If the elements are traversed only once and each one can be discarded as soon as it’s dealt with (as in our
SentimentAnalyzer, for instance), then
defis the better choice. This approach also has the benefit that all the elements don’t have to fit in memory at once.
When operating on small amounts of data, this difference between the alternatives is often negligible.
The internal implementation of streams is based on linking each element to the next element in the stream. This linked structure makes it possible to discard previously processed parts of the stream from memory, as described above; on the other hand, it also means that the only efficient way to process a stream is to advance linearly from the head rather than picking out elements at arbitrary indices (which is fine on buffers and vectors, for example). You’ll learn more about linked structures and efficiency in various follow-on courses, including Programming 2.
Summary of Key Points
- A stream is a collection whose elements aren’t formed as soon as the collection is created. They are generated and stored as needed.
- A stream is intended for traversing the elements in linear order.
- You can process a stream’s elements one at a time without storing them all simultaneously in the computer’s memory or even knowing how many elements there are or will be.
- Unlike the previously familiar collections, a stream can be either finite or infinite. You can use methods such as
takeand
takeWhileto select a finite part of an infinite stream.
- There are many ways to form a stream. For instance, you can repeat an operation that creates elements (
continually), produce elements in numerical order (
from), or repeatedly apply an operation to the previous element (
iterate).
- So-called by-name parameters are received by the function as unevaluated expressions. This contrasts with ordinary (by-value) parameters, which receive the previously computed values of expressions.
- The receiving method may evaluate a by-name parameter once, more than once, or not at all, as appropriate.
- These parameters are highly useful for some purposes, such as forming streams.
- Links to the glossary: stream; by-name parameter; strict, non-strict, and lazy evaluation. rainfall assignment is a version of a classic programming problem by Elliot Soloway.
The sentiment-analyzer assignment is an adaptation of a programming assignment by Eric D. Manley and Timothy M. Urness, which is in turn based on a programming contest on the web site Kaggle. | https://plus.cs.aalto.fi/o1/2018/w07/ch01/ | CC-MAIN-2020-24 | refinedweb | 5,211 | 57.37 |
Hello, Could you tell me what is the proper way of registering ModelAdmin classes for extensions. Should I create extra AdminSite instance for this purpose or use django.contrib.admin.site? I thought that just registering ModelAdmin classes in myext.admin module would be OK but it is not picked up by Django. I have added explicit import of admin module in urls:
Advertising
from django.conf.urls.defaults import include, patterns, url from myext.admin import admin urlpatterns = patterns('myext.views', url(r'^$', 'dashboard'), (r'^db/', include(admin.site.urls)), ) and although it is working, there's always a list of all applications in both myext/db and admin/db but in admin/db the myext application urls are not valid. This doesn't look to good... Could you suggest a right way of doing this? Any help would be greatly appreciated. Thanks, Bartek -- Want to help the Review Board project? Donate today at Happy user? Let us know at -~----------~----~----~----~------~----~------~--~--- To unsubscribe from this group, send email to reviewboard+unsubscr...@googlegroups.com For more options, visit this group at | https://www.mail-archive.com/reviewboard@googlegroups.com/msg08297.html | CC-MAIN-2017-30 | refinedweb | 180 | 61.53 |
0
Hi Everyone !
I am newbie in C# and i have problem when start new project.
My project include : One mainform and two child form ( Assetform and Loginform) and in Asset form i have a button call " Remove". here is some describe of how my program work :
- At the beginning of program, the form Asset start with the main form and the button " Remove " is disabled
- When i login, with form login => the button " Remove" is Enabled
and my problem is, i can't control the button in form1.
I have read some thread in our communicity and found somethings exciting. i tried it and gave this code for my problem :
in frmAsset form
Public Button Removebutton() { get{ return (btnRemove);} }
in Login Form :
private void Loginbtn_Click(object sender, EventArgs e) { if (tlogin.Text == "Administrator" && tpass.Text == "fch.123") { frmAsset Form2 = new frmAsset(); frmAsset.RemoveButton.Enabled=true; MessageBox.Show("Welcome", "Note", MessageBoxButtons.OK, MessageBoxIcon.Information); Close(); } else { MessageBox.Show("Wrong password and username", "Error", MessageBoxButtons.OK, MessageBoxIcon.Error); } }
But with this code , i still can't solve my problem. is there anyone has experience about this? Can you help me ? | https://www.daniweb.com/programming/software-development/threads/442550/how-to-control-button-with-login-form | CC-MAIN-2016-40 | refinedweb | 189 | 58.99 |
syncok, wcursyncup, wsyncdown, wsyncup - synchronise a window with its parents or children
#include <curses.h> int syncok(WINDOW *win, bool bf); void wcursyncup(WINDOW *win); void wsyncdown(WINDOW *win); void wsyncup(WINDOW *win);
The syncok() function determines whether all ancestors of the specified window are implicitly touched whenever there is a change in the window. If bf is TRUE, such implicit touching occurs. If bf is FALSE, such implicit touching does not occur. The initial state is FALSE.
The wcursyncup() function updates the current cursor position of the ancestors of win to reflect the current cursor position of win.
The wsyncdown() function touches win if any ancestor window has been touched.
The wsyncup() function unconditionally touches all ancestors of win.
Upon successful completion, syncok() returns OK. Otherwise, it returns ERR.
The other functions do not return a value.
No errors are defined.
Applications seldom call wsyncdown() because it is called by all refresh operations.
doupdate(), is_linetouched(), <curses.h>. | http://pubs.opengroup.org/onlinepubs/007908799/xcurses/wcursyncup.html | crawl-003 | refinedweb | 158 | 51.95 |
Optimising Firedrake Performance¶
“Premature optimisation is the root of all evil”
—Donald Knuth
Performance of a Firedrake script is rarely optimal from the outset. Choice of solver options, discretisation and variational form all have an impact on the amount of time your script takes to run. More general programming considerations such as not repeating unnecessary work inside of a loop can also be signficant.
It is always a bad idea to attempt to optimise your code without a solid understanding of where the bottlenecks are, else you could spend vast amounts of developer time resulting in little to no improvement in performance. The best strategy for performance optimisation should therefore always be to start at the highest level possible with an overview of the entire problem before drilling down into specific hotspots. To get this high level understanding of your script we strongly recommend that you first profile your script using a flame graph (see below).
Automatic flame graph generation with PETSc¶
Flame graphs are a very useful entry point when trying to optimise your application since they make hotspots easy to find. PETSc can generate a flame graph input file using its logging infrastructure that Firedrake has extended by annotating many of its own functions with PETSc events. This allows users to easily generate informative flame graphs giving a lot of insight into the internals of Firedrake and PETSc.
As an example, here is a flame graph showing the performance of the scalar wave equation with higher-order mass lumping demo. It is interactive and you can zoom in on functions by clicking.
One can immediately see that the dominant hotspots for this code are
assembly and writing to output so any optimisation effort should be
spent in those. Some time is also spent in
firedrake.__init__ but
this corresponds to the amount of time spent importing Firedrake and
would be amortized for longer-running problems.
Flame graphs can also be generated for codes run in parallel with the reported times in the graph given by the maximum value across all ranks.
Generating the flame graph¶
To generate a flame graph from your Firedrake script you need to:
Run your code with the extra flag
-log_view :foo.txt:ascii_flamegraph. For example:
$ python myscript.py -log_view :foo.txt:ascii_flamegraph
This will run your program as usual but output an additional file called
foo.txtcontaining the profiling information.
Visualise the results. This can be done in one of two ways:
Generate an SVG file using the
flamegraph.plscript from this repository with the command:
$ ./flamegraph.pl foo.txt > foo.svg
You can then view
foo.svgin your browser.
Upload the file to speedscope and view it there.
Adding your own events¶
It is very easy to add your own events to the flame graph and there are a few different ways of doing it. The simplest methods are:
With a context manager:
from firedrake.petsc import PETSc with PETSc.Log.Event("foo"): do_something_expensive()
With a decorator:
from firedrake.petsc import PETSc @PETSc.Log.EventDecorator("foo") def do_something_expensive(): ...
If no arguments are passed to
PETSc.Log.EventDecoratorthen the event name will be the same as the function.
Caveats¶
The
flamegraph.plscript assumes by default that the values in the stack traces are sample counts. This means that if you hover over functions in the SVG it will report the count in terms of ‘samples’ rather than the correct unit of microseconds. A simple fix to this is to include the command line option
--countname uswhen you generate the SVG. For example:
$ ./flamegraph.pl --countname us foo.txt > foo.svg
If you use PETSc stages in your code these will be ignored in the flame graph.
If you call
PETSc.Log.begin()as part of your script/package then profiling will not work as expected. This is because this function starts PETSc’s default (flat) logging while we need to use nested logging instead.
This issue can be avoided with the simple guard:
from firedrake.petsc import OptionsManager # If the -log_view flag is passed you don't need to call # PETSc.Log.begin because it is done automatically. if "log_view" not in OptionsManager.commandline_options: PETSc.Log.begin()
Common performance issues¶
Calling
solve repeatedly¶
When solving PDEs, Firedrake uses a PETSc
SNES (nonlinear solver)
under the hood. Every time the user calls
solve()
a new
SNES is created and used to solve the problem. This is a
convenient shorthand for scripts that only need to solve a problem
once, but it is fairly expensive to set up a new
SNES and so
repeated calls to
solve() will introduce
some overhead.
To get around this problem, users should instead instantiate
a variational problem (e.g.
NonlinearVariationalProblem)
and solver (e.g.
NonlinearVariationalSolver) outside of
the loop body. An example showing how this is done can be found
in this demo.
Other useful tools¶
Here we present a handful of performance analysis tools that users may find useful to run with their codes.
py-spy¶
py-spy is a great sampling profiler that outputs directly to SVG flame graphs. It allows users to see the entire stack trace of the program rather than just the annotated PETSc events and unlike most Python profilers it can also profile native code.
A flame graph for your Firedrake script can be generated from py-spy with:
$ py-spy record -o foo.svg --native -- python myscript.py
Beyond the inherent uncertainty that comes from using a sampling profiler, one substantial limitation of py-spy is that it does not work when run in parallel.
pyinstrument¶
pyinstrument is a great sample-based profiling tool that you can use to easily identify hotspots in your code. To use the profiler simply run:
$ pyinstrument myscript.py
This will print out a timed callstack to the terminal. To instead
generate an interactive graphic you can view in your browser pass
the
-r html flag.
Unfortunately, pyinstrument cannot profile native code. This means that information about the code’s execution inside of PETSc is largely lost.
memory_profiler¶
memory_profiler is a useful tool that you can use to monitor the memory usage of your script. After installing it you can simply run:
$ mprof run python myscript.py $ mprof plot
The former command will run your script and generate a file containing the profiling information. The latter then displays a plot of the memory usage against execution time for the whole script.
memory_profiler also works in parallel. You can pass either of the
--include-children or
--multiprocess flags to
mprof
depending on whether or not you want to accumulate the memory usage
across ranks or plot them separately. For example:
$ mprof run --include-children mpiexec -n 4 python myscript.py
Score-P¶
Score-P is a tool aimed at HPC users. We found it to provide some useful insight into MPI considerations such as load balancing and communication overhead.
To use it with Firedrake, users will also need to install Score-P’s Python bindings. | https://www.firedrakeproject.org/optimising.html | CC-MAIN-2022-40 | refinedweb | 1,157 | 65.32 |
Various minor documentation changes to match the latest gforth.ds
\ EXTEND.FS CORE-EXT Word not fully tested! 12may93jaw \ Copyright (C) 1995. \ May be cross-compiled decimal \ .( 12may93jaw : .( ( : ' drop alias d>s ( d -- n ) \ double d_to_s : m*/ ( d1 n2 u3 -- dqout ) \ double m-star-slash >r s>d >r abs -rot s>d r> xor r> swap >r >r dabs rot tuck um* 2swap um* swap >r 0 d+ r> -rot r@ um/mod -rot r> um/mod nip swap r> IF dnegate") "lit ; : CLiteral postpone (c") here over char+ allot place align ; immediate restrict : compile, ; immediate \ CONVERT 17may93jaw : convert ( ud1 c-addr1 -- ud2 c-addr2 ) \ core-ext \G OBSOLESCENT: superseded by @code{>number}. char+ true >number drop ; \ ERASE 17may93jaw : erase ( addr len -- ) \ core-ext over 2r@ swap -text 0= if 2swap 2drop 2r> 2drop true exit endif 1 /string repeat 2drop 2r> 2drop false ; \ SOURCE-ID SAVE-INPUT RESTORE-INPUT 11jun93jaw : else linestart ! blk ! sourceline# r@ <> blk @ 0= and loadfile @ 0= and if drop rdrop true EXIT then then r> loadline ! >in ! false ; \ ! ; | https://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/extend.fs?sortby=log;f=h;only_with_tag=MAIN;rev=1.36 | CC-MAIN-2021-21 | refinedweb | 174 | 57.5 |
I am coding a program to transmit public key from a client to server.
// SERVER
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import...
I am coding a program to transmit public key from a client to server.
// SERVER
import java.security.Key;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import...
Ya. i did it. again i am getting the same error .....:confused:
Actually i am not aware whether the related dll files are in the right path. Can u help me to place them at the right path?
Mr.Json, compiled my project using the following command, where "d:\java packages\hyperic-sigar-1.6.3\hyperic-sigar-1.6.3\sigar-bin\lib\sigar.jar" is the path of my sigar.jar.
There is no error....
I tried SIGAR to find the CPU info. I used the following code
package org.hyperic.sigar.cmd;
import org.hyperic.sigar.CpuPerc;
import org.hyperic.sigar.Sigar;
import...
Pal, follow my steps:
1. Copy ur program into the "bin" folder. For eg. copy ur program into "c:\jdk1.6\bin". U can compile ur program from ur current working directory itself. But it is little...
to compile this program
to run this program
I compiled ur program and found the following errors dude:
To fix this bug, do the following:
For 1st error:
public void House(double assvalue,...
To get value from user, use the following code
actually the "Double.parseDouble()" is used to convert input String type into Double. BUt i dont know how for this syntax works. If it does not...
ok thank u for all ur help
I am using JDK1.6. i need to create executable jar file. Do i need any IDE or its possible with jdk itself? If jdk can do it, how can i create executable jar file of my program.
this code will help u:
In the System.out.print('' "+" "+a[j]); line add spaces according to ur needs.
Note: Pass ur inputs during run time itself.
That is
U need to give space between...
Mr.Json, thank u for ur help. yet i have some doubts.Is "List<File> listFiles" in 1st line the class name or simply a method ? If it is a method, is it just enough to create an object and pass...
actually i have to create a list of all currently running processes. if a new process is started, it must be automatically paused and wait for user authentication. if user clicks OK, it should run,...
How to search all the local disks and list all the files present in the disk? how to detect the removable disk, so that i can search removable disks also. i will be helpfull if i can get a code...
Is it possible to control processes running ina system? i know its is difficult or not at all possible to control OS's processes. Atleast is there any way to control processes like vlc.exe,...
actually i am doing a project to attach hardware IDs with email. so that we can easily track the email. it will have uses in some other fields also.
i will be helpful if i have code snippet.coz i am new to java. thank u in advance.
sorry pal, as i am new to this forum, i am not able to mark it "answered". i am trying......
how to find the hardware ID like, Harddisk serial number, processor serial number, motherboard serial number, etc. in java?
its working pal....!!!! thanks a lot.
ok. let me try ur trick. thank you.
import javax.mail.*;
import javax.mail.internet.*;
import java.util.*;
import java.io.*;
class mailx
{
public static void main(String args[])throws Exception
{
Properties myprop=new... | http://www.javaprogrammingforums.com/search.php?s=f5c85cf983715e2ae34b46e7edc4410b&searchid=1929999 | CC-MAIN-2015-48 | refinedweb | 625 | 70.9 |
On Feb 28, 2012, at 11:29 AM, Arvind Prabhakar wrote:
> On Tue, Feb 28, 2012 at 11:19 AM, Alan D. Cabrera <list@toolazydogs.com> wrote:
>>
>> On Feb 28, 2012, at 10:53 AM, Arvind Prabhakar wrote:
>>
>>> On Tue, Feb 28, 2012 at 10:39 AM, Alan D. Cabrera <list@toolazydogs.com>
wrote:
>>>>
>>>> On Feb 28, 2012, at 10:13 AM, Patrick Hunt.
>>>>
>>>>
>>>> public class MySQLManager
>>>> extends org.apache.sqoop.manager.MySQLManager {
>>>>
>>>> public MySQLManager(final SqoopOptions opts) {
>>>> super(opts);
>>>> }
>>>>
>>>> }
>>>>
>>>> If all the code is like this it is absolutely ridiculous to have this at Apache and not Cloudera.
>>>
>>> Please see [1] for details on why the code is like this. The short
>>> summary is that binary compatibility requires us to respect all
>>> extension points within the code.
>>>
>>> [1]
>>
>> IIUC, this document merely outlines how the move should be performed. This has been
>> completed and what's left are bindings for those who wish to use the old bindings from the
>> old project. There's no technical reason why those bindings for the old project must be housed
>> here at Apache.
>
> You are right that it outlines how the move should be performed. Along
> with that it also describes the motivation and specific technical
> requirements to be fulfilled. Following are the relevant quotes from
> the document:
>
> [From Overview - Motivation]
> Considering that there is a lot of third party code that is developed
> on top of/to work with Sqoop, this migration is particularly risky for
> backward compatibility and thus requires careful handling. This
> document outlines the steps that seem reasonable for such migration.
>
> [From Migration-General Approach - Technical Requirement]
>.
I think that it's good to have binary compatibility with Cloudera's old bindings. I still
don't see why it's a requirement for Apache to house code whose sole use is to provide backward
compatible bindings for Cloudera's old bindings.
Regards,
Alan | http://mail-archives.apache.org/mod_mbox/incubator-general/201202.mbox/%3CD37A625B-C7FF-410D-AE7F-0D2758F721B1@toolazydogs.com%3E | CC-MAIN-2014-10 | refinedweb | 315 | 62.58 |
This article has been reproduced from the Costa Digital blog.
nearForm has been working with Costa Cruises on strengthening its digital presence and infrastructure.
This post provides an insight into how nearForm consultants plan and execute technical solutions.
Combining Riot.js, Node.js, Browserify, Pure.css and ES6 to rapidly prototype a performant conference voting application
Business demands change and fluctuate constantly. There are occasions that require a quick turnaround with a hard deadline.
This was one of those occasions.
The hard deadline in this case was an internal conference, where our goal was to engender excitement for technology among stakeholders.
We decided to build a real-time application that took input from personal devices (mobile, tablet, laptop) and displayed results on the presenter’s screen.
We called our concept ‘Atmos’, the real-time conference voting application.
We had one developer (myself) and two and a half days to have a working proof of concept, followed by the weekend and Monday for tidy-up, polishing, cross-browser testing and performance testing. All told, from the back end to the front end, Atmos was completed in around five days.
Live demo
A running demo of Atmos can be found at.
Try using the demo in a browser and on a tablet or mobile device at the same time.
Viewing the source
Atmos can be found on GitHub here. The code is intended both as a reference for this article and as a potential starting point for anyone wanting to build a scalable (both in project scope and deployment) real-time application using some of the latest tools and techniques.
Setting up
To run Atmos locally, you’ll need Node.js 0.12+ or io.js 2+.
Clone from the v1 branch:
```sh
$ git clone --branch v1
$ cd atmos && npm run setup
$ npm run start:dev # start dev and real-time servers
$ npm run stop:dev  # power down servers
```
There’s a large amount of development dependencies. Setup takes approximately three to five minutes to complete.
Several of the
npm run scripts will likely fail on Windows; however, examining
package.json contents should reveal how to get going on a Windows machine.
Considerations
Interestingly, the primary constraints for our scenario match those of many other projects, though for different reasons and at different scales.
Time
As mentioned, we had five days and there was no room for deadline extension.
Network
There’s no such thing as a reliable network. In particular, conference WiFi is typically poor at handling high load. What’s more, esoteric firewall setups or proxy setups at the venue could have caused issues.
Robustness
This was a live demo to demonstrate the virtues of technology to non-developers. Noticeable failure could significantly hinder our message.
Process
With little time for bike-shedding, the top considerations had to influence our tools, priorities and workflow.
Device support
Given the time constraints, we opted to support modern browsers and modern devices only. All conference participants had received iPhones and iPads; however, there was still an inclination towards BlackBerry devices.
As a trade-off, we supported touch screen BlackBerries, but did not implement support for key-only BlackBerries (adjusting the user interface alone would demand a significant time investment that we could not afford).
Nor did we pay much attention to IE. Even IE11 can be a time hog when it comes to compatibility and ~99% of our audience would be on (non-Windows) mobile devices anyway.
Progressive enhancement for SEO and accessibility was not followed on this project. However, our design and tool choices have made it easy to retrofit progressive enhancement with server-side rendering.
EcmaScript 6
There’s a direct correlation between code size and product quality. Defect density can be measured in bugs per 1000 lines of code and averages around .59 bugs per 1000 lines for open source projects or .72 bugs per 1000 lines on proprietary code bases. Either way, there will always be a ratio of bugs to code size. The more boilerplate we can shed, the better.
EcmaScript 6 (a.k.a EcmaScript 2015) was finalized in June 2015. Parts of it have already been implemented in modern browsers and in Node.js. However, for cross-browser consistency and to run Node without additional flags, we transpiled the ES6 code into ES5 code as part of the build process (see the ‘Build process’ section for details).
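The wiring for that can be as small as one npm script. This is a sketch of the typical Browserify-plus-Babel setup of the era, not the project's actual configuration; the entry and output paths are placeholders:

```json
{
  "scripts": {
    "build": "browserify -t babelify src/index.js -o public/bundle.js"
  }
}
```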
There was only a subset of ES6 features we wanted to use for this project to help keep the code clean and readable.
We stuck with micro-syntax extensions such as the following:
- lambdas (arrow functions)
- destructuring
- default parameters
- enhanced object literals
- rest operator
- spread operator
- const
These little pieces of sugar helped keep code clean, light and descriptive.
In particular, we used destructuring to emulate configuration-based enums, which were then used to establish lightweight multiplexing (more on this later).
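A sketch of that pattern follows; the message names are illustrative, not taken from the Atmos source. A plain config object stands in for an enum, destructuring yields bare constant names, and one switch multiplexes the message types arriving over a single socket.

```javascript
// Illustrative only: a plain object emulates an enum, and
// destructuring gives us bare names to match against.
const messageTypes = { VOTE: 0, TALLY: 1, RESET: 2 }
const { VOTE, TALLY, RESET } = messageTypes

// Multiplex: every socket message is a [type, payload] pair,
// routed to the matching handler by its enum value.
function route ([type, payload], handlers) {
  switch (type) {
    case VOTE:  return handlers.vote(payload)
    case TALLY: return handlers.tally(payload)
    case RESET: return handlers.reset()
  }
}
```

Because client and server share one language, the same config object can be required by both ends, keeping the wire protocol in sync.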
Since
const tokens are transpiled to
var keywords, usage of
const was more of a mental assist. It prevented the (generally) bad habit of reassignment and made us think about what exactly constitutes an actual variable reference. Whilst there wouldn’t be a direct performance benefit from using
const in a transpilation context, we’re still facilitating the JavaScript engine by using variables that won’t be reassigned. In other words, the interpreter has to jump through hoops when a reference keeps changing. Employing an enforceable practice of non-reassignment should make runtime optimizations more likely.
Another gem is the lambdas. The removal of noise around a function enhances readability. However, there are a couple of caveats.
First, there’s a potential debugging issue (a similar problem to using anonymous functions). The code base was small enough in our case to let that go on this occasion. Second, the lexical treatment of
this differs from standard functions.
The context (represented by
this) in an arrow function takes on the context of the surrounding closure. If the surrounding closure isn’t called with
new, or given a context via
call,
apply or
bind then
this in a lambda function defaults to either the global object or
undefined when in strict mode. All of that is fine; what may be unexpected, though, is that the lexical lambda context rule supersedes the context-binding methods (e.g.
call,
bind,
apply).
```js
function f(fn) { fn.call({some: 'instance'}) }

(function () {
  'use strict';
  f(function () { console.log('ƒ this: ', this) })
  f(() => console.log('λ this: ', this))
}())

// logs:
// ƒ this: Object {some: "instance"}
// λ this: undefined
```
It’s important to know this difference. Some libraries do set callback function context.
For instance, the through2 module allows us to call
this.push inside the user supplied callback. If the supplied callback is an arrow function, calling
this.push will fail (or worse, if there’s a global push method). Instead of the object which
through2 attempted to supply as the context for the callback function, the
this keyword will refer to the global object or
undefined (depending on the mode). In such cases we either have to supply a normal function, or pass values via the second parameter of the
cb argument (we’ll talk more about
through2 later).
Adopting ES6 syntax resulted in less code being written than using ES5, without obfuscating the intent of the code (in some cases, quite the opposite).
For this project, we steered clear of macro-syntax extensions such as classes, modules and generators.
For one thing, whilst learning the syntax of these language additions is straightforward, understanding the implications of their usage is a longer journey than we had time for.
Further, there’s the issue of code bloat during transpilation of macro-syntax extensions, plus runtime inefficiency (generators being the prime candidate for slow execution when transpiled).
Finally, it was important to keep the project simple. Classes aren’t the right paradigm for linear/cyclical data flow management (actually… they aren’t the right paradigm for a prototypal language but that’s another story!). Iterators (counterpart to generators) encourage a procedural approach (which is somewhat backwards). Also, the ES6 module system isn’t a good fit for current tooling. In my opinion, CommonJS modules are cleaner.
We also used the following ES6 language additions:
The
Object.assign and
Array.from methods simply afforded a nice way to do mixins and convert array like objects to arrays-proper (no more
Array.prototype.slice.call(ArrayLikeThing), hooray!).
The
Set constructor returns a unique list object. By pushing unique ids (determined browserside) onto a set, we could keep a constant running total of voters, which allowed us to calculate aggregated percentages.
And one EcmaScript 7 method: Object.observe.
Object.observe was fundamental to model management.
Using
Object.observe meant that we could store data in a plain JavaScript object and react to changes in that object. This drove the data flow: when a vote for an item came into the server, the relevant corresponding object was modified. When the object was changed, the change was both broadcast to all open sockets and persisted to disk.
Back-end platform
We considered building a peer-to-peer real-time application using WebRTC, but it wasn’t practical.
For one thing, iOS Safari does not support WebRTC. Even if it did, we would need to research and define an adequate peer-to-peer network architecture for sharing data among the 300 devices. Whilst this would be an interesting diversion, there wasn’t time to spare.
On top of that, WebRTC peer connections aren’t serverless. We would still have needed a switchboard (for making connections) and relay servers (for bypassing firewall restrictions).
We settled instead on creating a mediator server that would count incoming votes and broadcast totals. We used Node.js for this task.
Node is excellent at high-concurrency real-time connections; its capacity far exceeded our needs.
Also the language of Node is JavaScript.
Working in multiple languages isn’t just about switching syntax – it’s about approach. The flow of logic is different. Writing everything in one language sped up full-stack prototyping because it eliminated the need to context switch between languages. It also made sharing code and configuration between environments trivial.
We built the back end against Node 0.12 and io.js 2.3. This allowed us to compare reliability and speed of platforms.
Object.observe is natively implemented in Node 0.12 and io.js 2.3 which means our server won’t run on Node 0.10 (polyfilling
Object.observe is expensive, so that’s not an option; this is also why it wasn’t used in the browser).
Node’s ecosystem was also leveraged for the build process. I’ll talk more about this as we go.
Choosing a front-end framework
To speed up development time, we wanted some form of view layer that provides data-binding capabilities.
We also wanted to be able to isolate features into components in order to reduce time-wasting bugs that result from global collisions and general state confusion.
Angular is the predominant framework in use at Costa Digital. Whilst componentization is a little muddied, Angular is nevertheless an excellent framework with a strong ecosystem.
However, for this project we chose Riot.js. The driving factor in this decision was file size.
The less data we have to send across the wire, the faster the app will load and establish a real-time connection. Ideally, using a framework should result in less code than writing an equivalent implementation without a framework.
When minified, Angular is 145.5kb, whereas Riot.js is 11 times smaller at 12.75kb.
Other alternatives were also deemed too large: Ember clocks in at a whopping 493kb, almost half a megabyte before we write a single line of application code!
Whilst Ember is between 126kb and 155kb gzipped, mobile browsers (Safari in particular) have low cache sizes.
Ember will still decompress in the browser to take up half a megabyte prior to any initialization, taking up a significant portion of the cache (increasing the likelihood of a reload after switching tabs).
In fairness, Ember isn’t primarily a view layer like React and Riot, it’s an entire MVC suite. But then so is Angular and it’s a third of the size of Ember.
React is 121.7kb and that’s before you include a flux implementation.
Another option was to write Atmos using future standards with the Web Components polyfill (which is the basis for Polymer). The promise of this approach is that over time we’ll be able to shed pieces of the (currently 117kb) polyfill as browser support grows. However, Web Components haven’t been implemented as fast as expected by browser vendors, and anyway we had five days, not five years.
Riot.js
We built our real-time application in Riot.js.
Riot feels like Angular: templates are essentially HTML with a DSL overlay. It’s also inspired by React’s virtual DOM, where changes are measured and executed by diffing an efficient DOM representation.
The API surface of Riot is a small yet powerful set of primitives, which makes for a short and shallow learning curve. Perfect for our time-limited needs.
Unlike React where HTML can be written inline alongside JavaScript code (the JSX format), Riot’s paradigm leads us to declare component specific JavaScript code inline, among the HTML.
For instance, here’s how a React component might be written:
var Hello = React.createClass({
  change: function (e) {
    this.setState({msg: e.target.value})
  },
  getInitialState: function () {
    return {msg: this.props.msg}
  },
  render: function () {
    return (<div>
      <div>Hello {this.state.msg}</div>
      <input onBlur={this.change}/>
    </div>)
  }
});
React.render(<Hello msg="World" />, document.body)
We can view the results here.
Here’s the equivalent in Riot.js:
<hello msg=World></hello>

<script type="riot/tag">
  <hello>
    <div> Hello {msg} </div>
    <input onchange={change} />

    this.msg = opts.msg;
    this.change = function (e) {
      this.msg = e.target.value
    }
  </hello>
</script>
The
script element with type
riot/tag creates a Riot context inside a standard HTML page. We don’t use inline
riot/tag
script elements in the Atmos code base. Instead, we compile tag files separately (which also eliminates the need to load the Riot compiler in the client).
To inject a Riot tag into the DOM we use
riot.mount:
riot.mount('hello')
In some ways, this looks like the return of the 90s… but there is a vital difference.
The lack of componentization and handler scoping were the primary driving forces behind the failure of attribute-based event handlers in early web applications. Not least because the early web was document-centric, rather than the application delivery platform it is today.
The event handler attributes in a Riot component can only reference methods that exist in their scope (which is determined by the base element, e.g. the element which gets mounted,
<hello> in the example). Vanilla HTML handler attributes can only reference methods on the global scope – which we know is a recipe for disaster.
Application structure
The Riot.js philosophy is one of “tools not policy”, which means we had to define our own structural approach for our application. To establish clean code boundaries, we wanted a modular structure. Writing small single purpose modules helps to avoid human error.
Client-side modularity
We used Browserify for modules in the browser. Browserify enables us to write CommonJS modules for our front-end code. CommonJS is the module system implemented in Node. Use
require to load a module, use
module.exports to export a module.
For example, Atmos has a front-end module (located in app/logic/uid.js), which enables us to consistently identify a device’s browser between page refreshes or the browser being closed and reopened.
//app/logic/uid.js
const uid = () => Math.random().toString(35).substr(2, 7)
module.exports = () => (localStorage.uid = localStorage.uid || uid())
The
sync.js module app/logic/sync.js (which provides real-time communication) uses the
uid module by requiring it (also converting it into an array of integers representing byte values, in preparation for sending binary data across the wire):
const uid = require('./uid')().split('').map(c => c.charCodeAt())
For demonstration purposes, let’s see how Browserify processes a
require statement.
In the
atmos/app folder we can run the following:
sudo npm install -g browserify
browserify <(echo "require('"$PWD"/logic/uid')")
Standardizing a paradigm across environments by using the same module system for server and client implementations yields similar cognitive benefits to writing the entire stack in the same language.
View components
Browserify can be augmented with transforms. Riotify is a Browserify transform that allows us to
require a Riot view (a
.tag file).
This allows us to create view-packages, where a view is a folder that contains
package.json,
view.tag and
view.js files (and optionally a
style.tag file).
In Atmos, the
tabs view is a tiny component that outputs links based on the configuration of a menu array.
app/views/tabs/package.json
{ "main": "view.tag" }
The
package.json file has one purpose: to define the entry-point for the
tabs folder as the
view.tag file instead of the default
index.js as per Node’s module loading algorithm. This allows us to require the
tabs folder (instead of
tabs/view.tag). Requiring a view folder helps to enforce the idea that the view is an independent module.
<tabs>
  <div class="pure-menu pure-menu-horizontal">
    <ul class="pure-menu-list">
      <li class="pure-menu-item" each={item, i in menu}>
        <a href="{item.href}" class="pure-menu-link">{item.name}</a>
      </li>
    </ul>
  </div>
  <script>
    require('./view')(this)
  </script>
</tabs>
The
view.tag file employs the
each attribute (part of Riot’s markup-level DSL), to loop through objects in
menu, referencing each object as
item. Then we output the
item.name, linked to the corresponding
item.href for each item.
At the bottom we
require the
view.js file (
.js is implied when omitted).
It’s important to understand that the
tag file actually represents a sort of component object. We’re just building that object using HTML syntax.
The root tag (
<tabs> in this case) is a declaration of a component. When we pass
this to the function returned by
require('./view') we are giving the
view.js module’s exported function the component instance. Another way to think of it is: we’re giving
view.js the component’s scope.
const menu = require('@atmos/config/menu.json')
module.exports = (scope) => scope.menu = menu
The
view.js file is the component controller (or perhaps it’s a ViewController…). When we attach the
menu array to the
scope object (e.g. the
this object from the
view.tag file) we make it available to the component.
Finally, our application’s entry point can load the tab view and mount it.
const riot = require('riot')
/* ... snip ... */
require('./views/tabs')
/* ... snip ... */
riot.mount('*')
/* ... snip ... */
Passing the asterisk to
riot.mount essentially tells
riot to mount all required tags.
Scoped styles
Modularizing CSS seems to be the final frontier of front-end development. Due to the global nature of CSS selectors, it’s all too easy for web application styles to become entangled and confusing. Disciplines such as OOCSS and SMACSS have arisen to tackle this problem.
But when it comes to protecting the sanity of a code base, tools are better than convention.
Riot.js supports scoped style tags. For instance:
<my-tag>
  <p> Awesome </p>
  <style scoped>
    p {font-size: 40em}
    :scope {display:block; outline: 1px solid red}
  </style>
</my-tag>
This won’t style all
p tags at size 40em, only
p tags inside
my-tag. Also the special pseudo-selector
:scope applies to the
my-tag tag.
Scoped styles were proposed as a native specification, but sadly may never be implemented across all browsers. Fortunately, Riot.js does support the syntax.
Style modules
It’s possible to compose a tag from several sources by re-declaring the tag and compiling each declaration separately. Browserify in conjunction with Riotify automatically compiles the tags via the
require statement.
This means we can decouple style from structure whilst also isolating its domain of influence to a particular view.
Let’s take a look at the
excitement-in view (this is the view that uses emoticons for user input):
app/views/excitement-in/view.tag
<excitement-in>
  <p class=question>How excited are you?</p>
  <face onclick={ fastcheck.bind(null, 'excited') }>
    <input type=radio name=excitement id=r-excited onclick={ excited }>
    <label for="r-excited" class="pure-radio"><img src="assets/excited.{ext}"></label>
  </face>
  <face onclick={ fastcheck.bind(null, 'neutral') }>
    <input type=radio name=excitement id=r-neutral onclick={ neutral }>
    <label for="r-neutral" class="pure-radio"><img src="assets/neutral.{ext}"></label>
  </face>
  <face onclick={ fastcheck.bind(null, 'bored') }>
    <input type=radio name=excitement id=r-bored onclick={ bored }>
    <label for="r-bored" class="pure-radio"><img src="assets/bored.{ext}"></label>
  </face>
  <script>
    require('./view')(this)
    require('./style.tag')
  </script>
</excitement-in>
The views
style.tag is required in the
view.tag.
app/views/excitement-in/style.tag
<excitement-in>
  <style scoped>
    face {display:block;margin-top:1em;margin-bottom:1em;text-align:center;}
    label {opacity:0.5;width:9em;}
    label img {width:9em;}
    input[type=radio] {display:none;}
    input[type=radio]:checked + label {opacity:1;}
    .question {
      margin: 0;
      margin-top: 0.7em;
      margin-bottom: 0.1em;
    }
  </style>
</excitement-in>
In the
style.tag file, the base element (
<excitement-in>) is declared again and the view component’s styles are placed inside a scoped style element.
There’s a little more boilerplate than in a standard CSS file. The advantage of having the base tag in the styles file is that it reinforces the specific relationship between the styles and the view.
The styles for each component are pulled into the JavaScript bundle during the build process. Consequently there is a single HTTP connection for all JavaScript and component styles.
Scoped package names
Let’s take a look at the
package.json file in the
config folder:
{
  "name": "@atmos/config",
  "version": "1.0.0"
}
The
name is using a fairly new npm feature: scoped package names.
Using scoped names prevents us from accidental public publishing, whilst leaving the door open for private publishing.
If we don’t have a paid npm account called ‘atmos’ and we accidentally run
npm publish, it will fail.
If we have an unpaid account called ‘atmos’ it will still fail unless we run
npm publish --access public, which is much less likely to happen by accident.
The
app,
config,
inliner and
srv packages all have names scoped to
@atmos.
Using scopes also makes it easy for us to self-host modules on our own repository.
Let’s take a look at
.npmrc
@atmos:registry = "http://localhost:4873"
An
.npmrc file alters npm settings for that folder only. In this case we associated the @atmos namespace with
localhost port
4873. So if we tried to
npm publish (with or without the
--access flag) npm won’t publish to the public npm repository, but instead will attempt to publish to
localhost:4873.
We can run a local repository with the excellent sinopia module (which defaults to running on localhost:4873).
Whilst Sinopia was set up and left for future use (see the
scripts.repo field in app/package.json), we ended up using
npm link because it eliminates the need to reinstall updated packages. Additionally, only two of the packages (inliner and config) were sub-dependencies of
app and/or
srv so it didn’t seem worth it.
Shared configuration
Dependency resolution in Browserify and Node is generally equivalent. We can also require package-modules as opposed to just referencing files by path.
The
npm link command creates a symbolic link to a package. If we
sudo npm link in a folder containing a
package.json file, the module will be linked from the global npm installs directory (type
npm get prefix to see where that’s located on your system). We can then link to the global link by running
npm link <package name> in a folder which requires the linked package as a sub-dependency.
With
npm link we can share our configuration with both the frontend and backend code:
$ sudo npm link config
$ pushd app
$ npm link @atmos/config
$ popd
$ pushd srv
$ npm link @atmos/config
The
npm link command removes the need to reinstall every time we change configuration settings.
This was the process we used during most of the development. However, for convenience, the linking process has been automated. Simply execute
npm run setup in the
atmos directory.
In the config folder we have four files:
- package.json
- .npmrc
- menu.json
- chans.json
We’ve examined
package.json and
.npmrc already. Let’s take a look at
menu.json:
[
  {"href": "#mood", "name": "Mood"},
  {"href": "#results", "name": "Results"}
]
This is only used on the front end; we’ve seen it already in the
app/views/tabs/view.js file.
The final (and most interesting) file is
chans.json:
{
  "excitement": {
    "EXCITED": 0,
    "NEUTRAL": 1,
    "BORED": 2
  },
  "pace": {
    "FAST": 3,
    "PERFECT": 4,
    "SLOW": 5
  },
  "topic": {
    "TOPIC_A": 6,
    "TOPIC_B": 7,
    "TOPIC_C": 8,
    "TOPIC_D": 9,
    "TOPIC_E": 10
  }
}
The
chans.json file is used in both the client and server. It provides a shared contract allowing us to segregate data sent across the wire into channels. We use it to multiplex real-time streams.
Real-time connections
Transport
We decided to use WebSockets, which are supported in all current major desktop browsers, iOS Safari, all modern Android WebKit browsers and even BlackBerry 10 OS.
Employing WebSockets kept the JavaScript payload small. For instance, the engine.io library (which provides transport progressive enhancement) is an additional 54kb when Browserified and minified.
We also chose to build our own thin abstraction around the transport on the server side (app/logic/sync.js). Again, this avoided the extra weight that socket.io (91kb) or websocket-stream (200kb) would add on the client side. Our small abstraction isolates transport communication logic, making it easy to dynamically switch out the transport in the future (in order to provide support for old browsers, implement a new paradigm like peer-to-peer, or interact with a third-party data service like Firebase).
We did use websocket-stream on the server side so we could easily attach our data pipeline to each connection.
Streams
For any server task that involves shuffling data around, Node streams are almost always the right way to go. They’re essentially an implementation of asynchronous functional programming, where the immutable objects are binary chunks of a data set (or actual objects in the case of object streams). They’ve been called “arrays in time”, and that’s a great way to think about them.
With streams we can process data in a memory-controlled way. In this particular project, that’s of no major benefit because we’re only taking in eight bytes per vote, and sending out floating point numbers to every connection when a percentage changes. Pipeline capacity is not a problem in our case.
The main benefit of streams in this project is the ability to architect data-flow as a pipeline.
Let’s take a look at the srv/server.es file. On line 9 we call the
transport function and pass it a callback. The
transport function can be found in srv/lib/transport.js. All it does is accept an incoming WebSocket and wrap it in a stream.
In the callback we use that stream:
transport(stream => {
  // register incoming votes
  stream.pipe(sink())
  // send votes out to all clients
  broadcast(stream)
})
Incoming data flow is extremely simple. We pipe incoming data to a
sink stream. The
sink function can be found in srv/lib/conduit.js, and it looks like this:
const sink = () => through((msg, _, cb) => {
  msg = Array.from(msg)
  const stat = msg.pop() //grab the channel
  const uid = msg.map(c => String.fromCharCode(c)).join('')
  const area = areaOf(stat)
  registerVoter(uid, stat, area)
  Object.keys(stats[area])
    .forEach(n => {
      n = +n
      if (isNaN(n)) return
      //undefined instead of false, so that
      //properties are stripped when stringified
      //(deleting is bad for perf)
      stats[area][n][uid] = (n === stat) || undefined
    })
  cb()
})
The
through function is imported from the through2 module. It provides a minimal way to create a stream. This stream processes each incoming message from the client, registers new voters and records or updates their votes.
Using streams allows us to describe a bird’s-eye view (the pipeline), which can be zoomed into at each processing point (the stream implementation).
Channels
HTTP connections are expensive. They take time and resources to establish.
WebSockets are effectively bidirectional, long-lived HTTP connections (once established, more like TCP connections).
WebSocket connections are resource intensive. They constantly use a device’s antenna, requiring power, CPU and memory. This affects mobile devices in particular.
We wanted a way to segregate and identify incoming and outgoing data without using multiple transports. This is called multiplexing, where multiple signals can be sent through one transport.
Let’s take a look at the
transport call at line 9 of server.es again:
transport(stream => {
  // register incoming votes
  stream.pipe(sink())
  // send votes out to all clients
  broadcast(stream)
})
We’ve already discussed incoming data. Let’s see how we send data out by taking a look at the
broadcast function on line 17 of server.es:
function broadcast (stream) {
  stream.setMaxListeners(12)
  // declarative ftw.
  source(EXCITED).pipe(channel(EXCITED)).pipe(stream)
  source(NEUTRAL).pipe(channel(NEUTRAL)).pipe(stream)
  source(BORED).pipe(channel(BORED)).pipe(stream)
  source(FAST).pipe(channel(FAST)).pipe(stream)
  source(PERFECT).pipe(channel(PERFECT)).pipe(stream)
  source(SLOW).pipe(channel(SLOW)).pipe(stream)
  source(TOPIC_A).pipe(channel(TOPIC_A)).pipe(stream)
  source(TOPIC_B).pipe(channel(TOPIC_B)).pipe(stream)
  source(TOPIC_C).pipe(channel(TOPIC_C)).pipe(stream)
  source(TOPIC_D).pipe(channel(TOPIC_D)).pipe(stream)
  source(TOPIC_E).pipe(channel(TOPIC_E)).pipe(stream)
}
The astute among you may note that this could have been written in about three lines of code. This code is repetitive on purpose.
Whilst it’s true that in many cases “Don’t repeat yourself” is an axiom worth observing, there are times when a declarative approach has more value. In this case, we’re describing data flow at the top level, so we want to be explicit.
Our channels are represented by constants that refer to an integer. These constants are set in config/chans.json and are shared between the server and the client. In srv/lib/enums.js we load the
chans.json file and flatten out the object structure, leaving us with a shallow object containing the channel names and numbers.
enums.js processes
chans.json into an object that looks like this:
{
  EXCITED: 0,
  NEUTRAL: 1,
  BORED: 2,
  FAST: 3,
  PERFECT: 4,
  SLOW: 5,
  TOPIC_A: 6,
  TOPIC_B: 7,
  TOPIC_C: 8,
  TOPIC_D: 9,
  TOPIC_E: 10
}
At the top of
server.es we load these as constants:
const {
  EXCITED, NEUTRAL, BORED,
  FAST, PERFECT, SLOW,
  TOPIC_A, TOPIC_B, TOPIC_C, TOPIC_D, TOPIC_E
} = require('./lib/enums')
This is where EcmaScript 6 destructuring really shines. It doesn’t matter what order we specify for the constants, as long as they match the properties of the object. This means as long as we keep the names the same in
chans.json we can change the number of each channel and add new channels without disrupting anything.
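That order-independence can be demonstrated in a couple of lines (a generic sketch; the channel numbers mirror chans.json):

```javascript
// Destructuring picks properties by name, not by position, so
// the order of the names on the left-hand side is irrelevant.
const enums = { EXCITED: 0, NEUTRAL: 1, BORED: 2 }

// Deliberately listed in a different order:
const { BORED, EXCITED, NEUTRAL } = enums

console.log(EXCITED, NEUTRAL, BORED) // → 0 1 2
```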
Streams are built on EventEmitters, which have a default soft limit of 10 listeners. Nothing breaks if this limit is exceeded; however, a warning of a potential memory leak is displayed. We happen to be creating eleven pipelines and attaching them all to the same stream. This leads to an
end event listener getting added to the
stream object eleven times. Since we know it’s not a memory leak, we call
stream.setMaxListeners and bump the limit from the default of 10 up to 12 to avoid outputting the warning.
If we wanted to add hundreds of channels, we could pass an object as the second argument to each of the
.pipe(stream) calls. The object would contain an
end property with a value of
false:
source(TOPIC_A).pipe(channel(TOPIC_A)).pipe(stream, {end: false})
This would stop the listener from being added. If necessary, we could then add a single listener for clean up. However, since we’re only exceeding by one we simply bumped the maximum listeners setting.
Let’s take a look at the
channel stream, at line 31 of srv/lib/conduit.js.
const channel = chan => through((data, enc, cb) => {
  cb(null, Buffer.concat([Buffer([chan]), data]))
})
Each time a chunk passes through the stream, we prefix the channel number to it. This gives us a maximum of 256 channels. If we wanted more than that we would consider using the varint module which can create and recognize variable byte-length integers in a chunk of binary data. We only needed 12 channels, so we stuck with a single byte slot to hold the channel number.
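A rough sketch of both directions of that single-byte framing (our own demo, not the project’s exact code; it uses the non-deprecated Buffer.from API):

```javascript
// Encode: prefix a single channel byte (0–255) to the payload,
// mirroring Buffer.concat([Buffer([chan]), data]) above.
const frame = (chan, data) =>
  Buffer.concat([Buffer.from([chan]), Buffer.from(data)])

// Decode: the first byte is the channel, the rest is the payload.
const parse = (buf) => ({
  chan: buf[0],
  data: buf.slice(1).toString()
})

const wire = frame(7, '0.25') // e.g. channel TOPIC_B carrying "0.25"
console.log(parse(wire)) // → { chan: 7, data: '0.25' }
```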
Notice how we use
cb instead of
this.push to pass data down-stream. As discussed in the EcmaScript 6 section, this is because we’re using a lambda function as the callback so in the above case
this would refer to
undefined instead of our stream instance.
Finally we’ll take a look at the
source stream on line 4 of srv/lib/conduit.js.
const source = stat => {
  var init
  var stream = through()
  const area = areaOf(stat)
  const voters = stats[area].voters
  const subject = stats[area][stat]

  if (!init) {
    init = true
    stream.push(percentages[stat] + '')
  }

  Object.observe(subject, () => {
    const votes = Object.keys(subject)
      .map(uid => subject[uid])
      .filter(Boolean).length

    percentages[stat] = (votes / voters.size || 0)
    stream.push(percentages[stat] + '')
  })

  return stream
}
Here we refer to the channel as the
stat. For our purposes, these terms are interchangeable depending on context. In srv/lib/data.js we take advantage of the ES6 computed properties to set up clean and clear models.
For example, here’s the
stats object:
const stats = fs.existsSync(at('stats'))
  ? Object.seal(require(at('stats')))
  : Object.seal({
      excitement: {
        voters: new Set(),
        [EXCITED]: hash(),
        [NEUTRAL]: hash(),
        [BORED]: hash()
      },
      pace: {
        voters: new Set(),
        [FAST]: hash(),
        [PERFECT]: hash(),
        [SLOW]: hash()
      },
      topic: {
        voters: new Set(),
        [TOPIC_A]: hash(),
        [TOPIC_B]: hash(),
        [TOPIC_C]: hash(),
        [TOPIC_D]: hash(),
        [TOPIC_E]: hash()
      }
    })
Again, we’re being purposefully declarative (and therefore somewhat repetitive).
Whenever we set up a
source stream in
server.es we begin to observe the object that exists at the property corresponding to the channel number in the
stats object (the
subject).
Any time the
subject changes, we recalculate the vote percentages for that particular subject area. Then we push the new percentage along the stream (where the channel number is added before it is sent out across the WebSocket transport).
We also use EcmaScript 6 destructuring to manage channels on the browser side.
For instance, in app/views/topic-out/view.js:
const sync = require('../../logic/sync')
const chans = require('@atmos/config/chans.json')
const {TOPIC_A, TOPIC_B, TOPIC_C, TOPIC_D, TOPIC_E} = chans.topic

const map = (name) => (n) => {
  return {
    [name]: parseInt(n * 100, 10) + '%',
    ['_' + name]: n
  }
}

module.exports = (scope) => {
  sync(TOPIC_A, scope, map('topicA'))
  sync(TOPIC_B, scope, map('topicB'))
  sync(TOPIC_C, scope, map('topicC'))
  sync(TOPIC_D, scope, map('topicD'))
  sync(TOPIC_E, scope, map('topicE'))
}
We’re only interested in the topic channels. Each of these channel numbers is passed to the
sync function, which listens for data on the transport whose prefix corresponds to a specified channel number. It then pops the channel number off the chunk, converts the byte array into a floating point number, runs it through the supplied
map function and mixes the resulting object into the
scope, calling
scope.update to ensure the UI reflects the updated object.
See app/logic/sync.js for implementation details.
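An illustrative browser-side decoder (our own sketch, not the actual sync.js) makes the steps concrete: take the channel byte from the front of an incoming byte array, matching the server’s prefixing, then turn the remaining ASCII bytes back into a floating point number.

```javascript
// Decode an incoming byte array into { chan, value }.
const decode = (bytes) => {
  const arr = Array.from(bytes)
  const chan = arr.shift() // the channel number prefix
  const text = arr.map(c => String.fromCharCode(c)).join('')
  return { chan, value: parseFloat(text) }
}

// Channel 4 (PERFECT) carrying the percentage 0.5 as "0.5":
const message = [4, 48, 46, 53] // 48/46/53 are '0', '.', '5'
console.log(decode(message)) // → { chan: 4, value: 0.5 }
```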
Channels are used in the same way when sending data to the server, for instance app/views/pace-in/view.js:
const sync = require('../../logic/sync')
const chans = require('@atmos/config/chans.json')
const {FAST, PERFECT, SLOW} = chans.pace

module.exports = (scope) => {
  scope.fast = () => sync.vote(FAST)
  scope.perfect = () => sync.vote(PERFECT)
  scope.slow = () => sync.vote(SLOW)
}
Each of the channels is passed to
sync.vote, which adds the channel number to the outgoing byte-array (the outgoing byte-array in this case is the 7 byte
uid we created for the device).
We don’t use streams on the client side. Once the core
stream module is required, it adds
100kb to the payload when Browserified. We could have used the very lightweight implementation of streams called
pull-stream by Dominic Tarr. However, for this project, simple callbacks on the browser side were sufficient.
UI
As with every other part of the project, we wanted to create the UI quickly and with minimal resource impact.
Pure.css
For styling the application, we used Pure.css, mostly for its responsive grids.
This was our first time using Pure.css, but we found it was easy to get moving quickly and it made responsive prototyping effortless.
Whilst Pure.css already has a small footprint, we used an optimization process to extract only the styles we needed (see ‘Preprocessing’ below).
Visual scaling
The application needed to work on a wide range of screen sizes, from small mobile screens up to 1080p resolution on a large projector screen. All visuals were created for limitless, pixelation-free scaling without sending large images to the client. This meant all graphics had to be created with HTML and CSS or with SVG. The smiley faces are SVG images, with small PNG fallback images on BlackBerry.
We used
em units for all measurements (instead of pixels or percentages). This meant we could scale all elements by changing the base font size. However, with time running out, we simply used browser zoom at the venue to get the right size for the projector screen, whereas we used responsive grids to reflow the layout on smaller devices.
Preprocessing
All of our code needed to be processed prior to deployment, both on the server side and client side. On the server, we needed ES6 to ES5 transpilation and linting. On the client, we needed Browserification, Riotification (if those are words), ES6 transpilation, CSS, JavaScript and HTML minification.
npm: the task runner
There are a few strong task runners with great ecosystems out there.
Well-known task runners include Grunt, Gulp and Broccoli.
Nevertheless, if we’re not dealing with a massive application with complex build process requirements, we prefer to use
package.json
scripts.
The
scripts field in
package.json allows us to define shell tasks that run in a context-specific environment, in that the path of these shell tasks includes the
node_modules/.bin folder. This allows us to drop the relative path when referencing executable dependencies.
The shell is extremely powerful. It has a streaming interface: we use the pipe (
|) to connect outputs. We can also use
&& to create task chains,
& to run tasks in parallel and
|| for fallback tasks.
To execute a task, we use
npm run. For instance, the
dev task in app/package.json
scripts object looks like this:
"dev": "light-server -s . -w 'views/**, logic/**, main.js, index.dev.html' -c 'npm run build:app'"
This starts a server on localhost:4000 and watches files, rebuilding when they change.
To run this we execute:
npm run dev
EcmaScript 6
To transpile our ES6 code for the client side, we included the following in app/package.json
scripts field:
"build:app": "browserify -t babelify -t riotify ./main.js -o build/app.js",
Browserify and Riotify have already been covered.
Babelify is another Browserify transform that uses the babel library to convert our ES6 code into ES5 code.
On the server,
babel itself is listed as a dependency.
In srv/index.js we do the following:
require('babel/register')
require('./server.es')
Requiring
babel/register alters the requiring process itself, so any modules required after that will be transpiled (if necessary). In effect, we transpile on initialization.
Standard
During rapid development, code discipline is not a primary focus. Ultimately, though, we want neat, readable code to come back to.
Standard is a type of linter that enforces a non-configurable code style. The idea behind this is philosophical, the premise being “let’s stop bike-shedding and just go with something”. This seemed to have cohesion with project priorities, so we used it to determine code discipline for this project.
Standard has a
--format mode which rewrites code according to the rules of Standard. With this, we could partially automate (it’s not perfect) the tidy up process, thus saving time for more thought-intensive tasks.
Standard uses eslint as the parser. We’re able to change the parser to babel-eslint to apply standard linting and formatting to EcmaScript 6 code by installing babel-eslint as a dependency and adding a
standard.parser property set to
babel-eslint in the
package.json files.
For instance in the srv/package.json file we have:
...
"standard": {
  "parser": "babel-eslint"
},
"scripts": {
  "lint": "standard"
},
"dependencies": {
  "babel": "^5.6.14",
  "babel-eslint": "^3.1.20",
...
The notable thing about Standard is that it restricts semi-colon usage to the rare edge cases. This is why there are no semi-colons in the code examples.
It’s difficult to talk about semi-colons without bike-shedding, so we won’t.
If Standard offends sensibilities there’s also Semistandard (…of course there is).
Uncss, Inliner and HTML Minify
We didn’t use a CSS preprocessor like Sass, LESS, or Stylus. The benefits of scoped styles combined with Pure.css were sufficient for our purposes.
We did use uncss, an awesome utility that loads the page in a headless browser and cross-references the stylesheets against selector matches in the DOM. Then it outputs the net CSS.
Let’s take a look at the
build:compress task in app/package.json
scripts field.
"build:compress": "uncss | cleancss --s0 --skip-import --skip-aggressive-merging | ./node_modules/.bin/@atmos/inliner index.dev.html | html-minifier --collapse-whitespace --remove-attribute-quotes --collapse-boolean-attributes > index.html",
For this to work, we have to also be running the
dev task so we have a server on
localhost:4000.
Notice how we load the index.dev.html page (rather than the index.html page).
Each of the executables in this task's pipeline is a project dependency.
Once we have the CSS subset, we pass it through the cleancss utility cutting further bytes.
Then, we pipe it through
@atmos/inliner, which was written for the project.
Unfortunately,
npm currently has a bug with scoped package executables. The relative path has to be specified, which is why we couldn’t simply write
inliner or
@atmos/inliner.
The
inliner takes an HTML file, and parses it using JSDOM, removing all
link tags (it leaves inlined
style tags alone). Then it creates a new
style tag and writes the CSS that is piped to the process (our minified CSS subset). Finally the
inliner outputs an HTML file when done.
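As a rough sketch of that transformation (using plain string replacement rather than JSDOM, so this only illustrates the idea and is not the actual @atmos/inliner code):

```javascript
// Simplified inliner: strip <link rel="stylesheet"> tags from the
// markup and inject the piped-in CSS as a <style> tag in <head>.
// (The real tool parses the document with JSDOM; a regex is enough
// to illustrate the transformation.)
function inline (html, css) {
  const withoutLinks = html.replace(/<link\b[^>]*rel=["']stylesheet["'][^>]*>/gi, '')
  return withoutLinks.replace('</head>', '<style>' + css + '</style></head>')
}

const page = '<html><head><link rel="stylesheet" href="app.css"></head><body></body></html>'
console.log(inline(page, 'body{margin:0}'))
// <html><head><style>body{margin:0}</style></head><body></body></html>
```

The output document no longer needs a separate stylesheet request, which is the point of the next section.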
On both mobile networks (which participants ended up using due to slow WiFi), and strained WiFi networks the major issue is not broadband speed, but connection latency.
In other words, the making of the connection is the bottleneck. This is why watching video over 3G isn’t always terrible, but it generally takes longer for the video to start playing than on a typical functioning WiFi connection.
The
link tag blocks page rendering until it has downloaded, which means that in non-optimized form, rendering is reliant on three (probably high-latency) HTTP connections.
By inlining the CSS we reduce render blocking connections down to zero, avoiding potential sluggish page loading.
Each view's CSS is actually compiled by Riot into JavaScript. The script tag that loads the application's JavaScript is placed at the bottom of the page. This setup allows styles for page structure to load close to instantly even on a slow connection, while component styles load alongside component functionality.
The only other HTTP request on page load is the font import. Again, we place this at the bottom of the HTML structure to avoid render blocking.
Finally, we pass it through
html-minifier to squeeze out all the slack we can.
Task composition, flag delegation and Minifyify
Let’s take a look at the
build:dist and
build tasks in
package.json
script field:
"build:dist": "npm run build:assets; npm run build:app -- -d -p [minifyify --map app.js.map --output build/app.js.map]",
"build": "npm run build:compress && npm run build:app && npm run build:dist",
Because
npm run is just another shell command, we can execute other tasks by their
script alias. Now we can compose tasks from other tasks. This is what our
build task does.
We can also pass flags by proxy to the executables that are called within another task. In the build:dist task we use a double dash (--) to instruct the task runner that the following flags apply to the last executable in the build:app task (which is the Browserify executable).
We specify the
-d flag which tells Browserify to retain data for creating sourcemaps, then we add the
-p flag to load the Minifyify plugin (Minifyify is a Browserify plugin, not a Browserify transform).
Long story short, by the end of the build process we have minified JavaScript (with a sourcemap).
Reliability and recovery
There were some significant unknowns. We didn’t know whether a bug in our code or in a sub-dependency might be triggered by interactions between ~300 active WebSocket connections and we didn’t have time to stress test. Even if we had time, there’s no guarantee that we would perfectly emulate a live environment.
So if the server crashed, we needed to fail gracefully (or rather, fail in a way that nobody notices).
Persistence
If the server crashed, we needed to retain state which could be reloaded on server restart. Various database options were considered, but this meant another process to deploy and monitor. LevelDB was a prime candidate because it runs in-process (via the leveldown/levelup modules). However, since our deployment environment (Digital Ocean) ran on solid-state disks, we decided to keep it simple and persist directly to disk. This was one of the last things we added. With time running out, choosing straightforward file system manipulation meant avoiding learning another API.
Reconnection
If the server crashed, the clients would lose their connection. The clients needed to be able to reconnect when the server came back online, without the user noticing.
In addition, the ability to reconnect would smooth over any short-term intermittent connectivity issues the mobile or WiFi networks at the venue might have.
Thanks to closure scope and asynchronous mutual recursion, we were able to implement a reconnection strategy in
sync.js quickly and with a small amount of code.
On line 5 of app/logic/sync.js we create our WebSocket connection:
var ws = wsab('ws://' + location.hostname + ':4001')
wsab is a small function near the bottom of
sync.js. It simply creates a binary WebSocket that uses ArrayBuffers instead of the default Blobs.
This is one of the few places where we use
var to declare a reference. The
ws token is a variable because if the client should disconnect for any reason we point
ws to a new
WebSocket instance holding the new (hopefully live) connection.
Lines 22-36 of app/logic/sync.js contain most of the rest of the magic:
const recon = (attempt = 0) => {
  const factor = (attempt + 1) / 3
  const t = ~~(Math.random() * (2e3 * factor)) + 1e3 * factor
  setTimeout(() => {
    ws = wsab('ws://' + location.hostname + ':4001')
    ws.addEventListener('error', () => recon(attempt + 1))
    ws.addEventListener('open', attach)
  }, t)
}

const attach = () => {
  ws.addEventListener('close', () => recon())
  ws.addEventListener('message', e => update(e.data))
  reg = true
}
Whilst the WebSocket connection is created as soon as
app/logic/sync.js is required, the
attach function is invoked the first time the module's exported function is called.
The
attach function has two roles. It routes incoming messages to the
update function (which parses incoming messages, then populates and updates the relevant component's scope accordingly). It also attaches a
close listener to the WebSocket. This is where the
recon function comes in.
The recon function repeatedly attempts to establish a new connection to the server. There is no limit to the number of attempts; however, each attempt waits longer than the last.
Whilst the server could probably handle 300 (exactly) simultaneous connection requests, time for proving this assertion was lacking. So we introduced pseudo-randomness to the exponential backoff strategy to prevent such a scenario. Without the variability in time till reconnect, all clients would try to connect simultaneously.
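The timing logic from recon can be isolated to see the bounds it produces: the wait grows linearly with the attempt number, with a random spread of up to twice the base, so clients reconnect at staggered times. A sketch:

```javascript
// Jittered backoff, mirroring recon()'s timing: the base delay grows
// with the attempt number, and a random spread of up to twice the
// base keeps ~300 clients from reconnecting in the same instant.
function backoffDelay (attempt) {
  const factor = (attempt + 1) / 3
  const jitter = Math.floor(Math.random() * (2e3 * factor))
  return jitter + 1e3 * factor
}

for (let attempt = 0; attempt < 4; attempt++) {
  console.log('attempt ' + attempt + ': wait ~' + Math.round(backoffDelay(attempt)) + ' ms')
}
```

For attempt n the delay falls between 1000 * (n + 1) / 3 and 3000 * (n + 1) / 3 milliseconds, i.e. roughly one to three seconds on the first attempt.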
Time allowing, we could have made a completely seamless offline-first experience by recording current selections in
localStorage and sending the selections back to the server upon reconnection.
Supervisor
Finally, we used the supervisor utility to secure minimal downtime in the event of a server crash.
npm install -g supervisor@0.6
We had to use the 0.6.x version as the 0.7.x line seems to break (on Ubuntu) when used with
nohup (at least on Digital Ocean this appears to be the case).
Supervisor watches a process and restarts it if the process dies.
This was the command we ran from the
atmos directory to start our server:
nohup supervisor srv &
Behaviour consistency
Several libraries are required near the top of app/main.js (our front end's entry point) to ensure cross-platform consistency.
Lines 5-11 of app/main.js:
// polyfills/behaviour consistency
require('core-js/fn/set');
require('core-js/fn/array/from');
require('core-js/fn/object/assign')
require('fastclick')(document.body)
require('./logic/support').blackberry()
The core-js module is divided up by feature. So we get to load only what we need.
The fastclick module removes the 300ms delay before a touch is registered on mobile devices. Without this, mobile interaction seems lethargic.
Finally, our purpose-written app/logic/support.js library is used to customize the display by adding a blackberry class to the html element if the device is a BlackBerry. The support library is used elsewhere to detect SVG support and load PNG faces instead of SVG faces (again, primarily for BlackBerry).
Deployment
We kept deployment very simple. We used an Ubuntu Digital Ocean instance with
node and
git installed on it. We pulled in changes onto the server with git, and ran the server with
nohup. The
nohup (“no hangup”) command allows us to start a process over SSH and terminate the client session without killing the process.
Due to its high performance and aggressive caching policy, we used nginx to serve static files, simply creating symlinks to the local atmos git repository from the nginx serving folder.
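In sketch form, that nginx setup might look like the following. The paths, port and cache lifetime here are assumptions for illustration, not the project's actual configuration:

```nginx
# Hypothetical sketch of the static-file setup described above.
server {
    listen 80;
    root /var/www/atmos;              # symlink into the git checkout

    location / {
        expires 1h;                   # lean on nginx's caching headers
        try_files $uri $uri/ =404;
    }
}
```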
Testing
Unfortunately, like most time-constrained projects, we didn’t set up any automated tests.
TDD is awesome when there’s time and forethought; however we were prototyping and exploring possibilities as we went.
Moving forward, the testing strategy will mostly be at the component level.
We could also do with a stress-testing suite to see how much activity the server can take before it comes under strain.
Future
We’d like to break up Atmos more, decouple the view components and make them interchangeable. We’d like to make it very easy to create custom components so that Atmos can be repurposed, yet utilize the real-time infrastructure. We’ll also look into an easy zero-config deployment strategy (possibly with docker containers).
Offline vote recording as discussed in the Reconnection section would also be a nice feature.
We could look into using nginx to round-robin multiple WebSocket servers as well as serving static files. This would further protect availability in the event of a server crash: disconnected clients would quickly reconnect to an alternative WebSocket server while the crashed server restarts. At that point we would either switch to a database to manage persistence across processes (LevelDB looks like a good choice) or implement a lateral transport layer (e.g. TCP or maybe a message bus) that achieves eventual consistency across the services (maybe we'd use SenecaJS to abstract away the details).
We should probably also fix the layout issue in Internet Explorer.
Conclusion
Being informed about current progressions in the ecosystem allowed us to make decisions that increased productivity whilst avoiding time sinkholes.
Before commencing a project, it’s worth weighing up the priorities and choosing an approach that meets the criteria. For us, it was mainly about payload size because we wanted the real-time application to feel instant, despite poor network conditions.
Cross-platform functionality was less important because we had a specific target audience. Therefore, we sacrificed universal compatibility in favour of small file size. For instance, we used plain WebSockets instead of engine.io or socket.io because old browsers simply weren’t important.
We hadn’t heard of Riot.js before starting this project, but it was a breeze to work with. We think the reasons for this include Riot’s small API ideology, single-minded purpose and concepts that parallel pre-existing paradigms without wrapping abstractions in esoteric language (transclusion, anyone?).
In summary, small is beautiful, research time is never lost time, tailor tools to priorities and always be open to different approaches.
Stay curious and see you next time!
Want to work for nearForm? We’re hiring.
- NAME
- SYNOPSIS
- ABOUT
- Security warning
- FUNCTIONS
- API METHODS
- OPERATOR
- THE DPATH LANGUAGE
- Iterator style
- INTERNAL METHODS
- AUTHOR
- CONTRIBUTIONS
- SEE ALSO
- AUTHOR
NAME
Data::DPath - DPath is not XPath!
SYNOPSIS
use Data::DPath 'dpath';

my $data = {
    AAA => { BBB => { CCC => [ qw/ XXX YYY ZZZ / ] },
             RRR => { CCC => [ qw/ RR1 RR2 RR3 / ] },
             DDD => { EEE => [ qw/ uuu vvv www / ] },
           },
};

# Perl 5.8 style
@resultlist = dpath('/AAA/*/CCC')->match($data);
# ( ['XXX', 'YYY', 'ZZZ'], [ 'RR1', 'RR2', 'RR3' ] )

# Perl 5.10 style using overloaded smartmatch operator
$resultlist = $data ~~ dpath '/AAA/*/CCC';
# [ ['XXX', 'YYY', 'ZZZ'], [ 'RR1', 'RR2', 'RR3' ] ]
Note that the
match() function returns an array but the overloaded
~~ operator returns an array reference (that's a limitation of overloading).
Various other example paths from t/data_dpath.t (not necessarily fitting the data structure above):
$data ~~ dpath '/AAA/*/CCC'
$data ~~ dpath '/AAA/BBB/CCC/../..'          # parents (..)
$data ~~ dpath '//AAA'                       # anywhere (//)
$data ~~ dpath '//AAA/*'                     # anywhere + anystep
$data ~~ dpath '//AAA/*[size == 3]'          # filter by arrays/hash size
$data ~~ dpath '//AAA/*[size != 3]'          # filter by arrays/hash size
$data ~~ dpath '/"EE/E"/CCC'                 # quote strange keys
$data ~~ dpath '/AAA/BBB/CCC/*[1]'           # filter by array index
$data ~~ dpath '/AAA/BBB/CCC/*[ idx == 1 ]'  # same, filter by array index
$data ~~ dpath '//AAA/BBB/*[key eq "CCC"]'   # filter by exact keys
$data ~~ dpath '//AAA/*[ key =~ /CC/ ]'      # filter by regex matching keys
$data ~~ dpath '//CCC/*[ value eq "RR2" ]'   # filter by values of hashes
See full details in
t/data_dpath.t.
You can get references into the
$data data structure by using
dpathr:
$data ~~ dpathr '//AAA/BBB/*' # etc.
You can request iterators to do incremental searches using
dpathi:
my $benchmarks_iter = dpathi($data)->isearch("//Benchmark");
while ($benchmarks_iter->isnt_exhausted) {
    my $benchmark = $benchmarks_iter->value;
    my $ancestors_iter = $benchmark->isearch("/::ancestor");
    while ($ancestors_iter->isnt_exhausted) {
        my $ancestor = $ancestors_iter->value;
        print Dumper( $ancestor->deref );
    }
}
This finds all elements anywhere behind a key "Benchmark" and for each one found print all its ancestors, respectively. See also chapter Iterator style.
ABOUT
With this module you can address points in a datastructure by describing a "path" to it using hash keys, array indexes or some wildcard-like steps. It is inspired by XPath but differs from it.
Why not XPath?
XPath is for XML. DPath is for data structures, with a stronger Perl focus.
Although XML documents are data structures, they are special.
Elements in XML always have an order which is in contrast to hash keys in Perl.
XML element names on the same level can be repeated; not so hash keys.
XML element names are more limited than arbitrary strange hash keys.
XML elements can have attributes and those can be addressed by XPath; Perl data structures do not need this. On the other side, data structures in Perl can contain blessed elements, DPath can address this.
XML has namespaces, data structures have not.
Arrays starting with index 1 as in XPath would be confusing to read for data structures.
DPath allows filter expressions that are in fact just Perl expressions, not a sub-language of their own as in XPath.
Comparison with Data::Path
There is a similar approach on CPAN, Data::Path. Here is a comparison matrix between Data::Path and Data::DPath.
(Warning: alpha grade comparison ahead, not yet fully verified, only evaluated by reading the source. Speed comparison not really benchmarked.)
Criteria                          Data::Path       Data::DPath
----------------------------------------------------------------------
real XPath syntax                 no               no
allow strange, non-xml but        YES, although    YES
  perl-like hash keys             limited
allows special chars of own       no               YES, you can
  path syntax in hash keys                         quote everything
  ("/[]|*.")
call subs in data structure,      YES              no
  like /method()
callbacks on not found keys       YES              no
element "//" for "ANYWHERE"       no               YES
  (//foo/bar)
element "." for "NOSTEP" or       no               YES
  "actual position"
  (/.[filter expr])
element ".." for "PARENT"         no               YES
  (//foo/..)
element "::ancestor" for          no               YES
  "ANCESTOR" (//foo/::ancestor)
element "::ancestor-or-self"      no               YES
element "*" for "ANYSTEP" or      no               YES
  "all subelements" (/foo/*)
array access like /foo[4]         YES, although    YES, including
                                  limited          negative indexes
                                                   and whitespace
                                                   awareness
complex filter expressions,       no               YES, full Perl
  like /foo[size == 3] or                          expressions plus
  /.[isa("Foo::Bar")]                              sugar functions
works with blessed subelements    YES              YES
arrays start with index 0         YES              YES
  (in contrast to 1 as in XPath)
array semantics is a bit          /foo[2]          /foo/*[2]
  different
handling of not matching paths    croak            RETURN EMPTY, can
                                                   be overwritten as
                                                   callback
usage sugar                       none             overloaded '~~'
                                                   operator
Speed                             FAST             quite fast; slower
                                  (raw Perl)       on fuzzy paths,
                                                   e.g. with many
                                                   "//" in them
Perl Versions                     5.6+             5.8+
Install chance                    100%             90%
  (.cpantesters.org)
Summary
Generally Data::Path is for simpler use cases but does not suffer from surrounding meta problems: it has no dependencies, is fast and works on practically every Perl version.
Whereas Data::DPath provides more XPath-alike features, but isn't quite as fast and has more dependencies.
Security warning
Watch out! This module
evals parts of provided dpaths (in particular: the filter expressions). Don't use it if you don't trust your paths.
Since v0.41 the filter expressions are secured using Safe.pm to only allow basic Perl core ops. This provides more safety but is also significantly slower. To unrestrict this to pre-v0.41 raw
eval behaviour you can set
$Data::DPath::USE_SAFE to False:
local $Data::DPath::USE_SAFE; # dpath '//CCC//*[ unsecure_perl_expression ]'
Read Safe.pm to understand how secure this is.
FUNCTIONS
dpath( $path_str )
Meant as the front end function for everyday use of Data::DPath. It takes a path string and returns a
Data::DPath::Path object on which the match method can be called with data structures and the operator
~~ is overloaded.
The function is prototyped to take exactly one argument so that you can omit the parens in many cases.
See SYNOPSIS.
dpathr( $path_str )
Same as
dpath but toggles that results are references to the matched points in the data structure.
dpathi( $data )
This is a different, iterator style, approach.
You provide the data structure on which to work and get back a current context containing the root element (as if you had searched for the path
/), and now you can do incremental searches using
isearch.
See chapter Iterator style below for details.
API METHODS
match( $data, $path )
Returns an array of all values in
$data that match the
$path.
OPERATOR
~~
Does a
match of a dpath against a data structure.
Due to the matching nature of DPath the operator
~~ should make your code more readable.
THE DPATH LANGUAGE
Synopsis
/AAA/BBB/CCC
/AAA/*/CCC
//CCC/*
//CCC/*[2]
//CCC/*[size == 3]
//CCC/*[size != 3]
/"EE/E"/CCC
/AAA/BBB/CCC/*[1]
/AAA/BBB/CCC/*[ idx == 1 ]
//AAA/BBB/*[key eq "CCC"]
//AAA/*[ key =~ /CC/ ]
//CCC/*[value eq "RR2"]
//.[ size == 4 ]
/.[ isa("Funky::Stuff") ]/.[ size == 5 ]/.[ reftype eq "ARRAY" ]
Modeled on XPath
The basic idea is that of XPath: define a way through a datastructure and allow some funky, fuzzy ways to describe it. The syntax looks roughly like XPath, but in fact the two have little more in common.
Some wording
I call the whole path a, well, path.
It consists of single (path) steps that are divided by the path separator
/.
Each step can have a filter appended in brackets
[] that narrows down the matching set of results.
Additional functions provided inside the filters are called, well, filter functions.
Each step has a set of points relative to the set of points before this step, all starting at the root of the data structure.
Special elements
//
Anchors to any hash or array inside the data structure below the currently found points (or the root).
Typically used at the start of a path to anchor the path anywhere instead of only the root node:
//FOO/BAR
but can also happen inside paths to skip middle parts:
/AAA/BBB//FARAWAY
This allows any way between BBB and FARAWAY.
*
Matches one step of any value relative to the current points (or the root). This step might be any hash key or all values of an array in the step before.
..
Matches the parent element relative to the current points.
::ancestor
Matches all ancestors (parent, grandparent, etc.) of the current node.
::ancestor-or-self
Matches all ancestors (parent, grandparent, etc.) of the current node and the current node itself.
.
A "no step". This keeps passively at the current points, but allows incrementally attaching filters to points or to otherwise hard to reach steps, like the top root element
/. So you can do:
/.[ FILTER ]
or chain filters:
/AAA/BBB/.[ filter1 ]/.[ filter2 ]/.[ filter3 ]
This way you do not need to stuff many filters together into one huge killer expression and can more easily maintain them.
See Filters for more details on filters.
If you need those special elements to be treated not as special elements but as plain key names, just quote them:
/"*"/
/"*"[ filter ]/
/"::ancestor"/
/".."/
/".."[ filter ]/
/"."/
/"."[ filter ]/
/"//"/
/"//"[ filter ]/
Difference between
/step[filter] vs.
/step/.[filter] vs.
/step/*[filter]
The filter applies to the matched points of the step to which it is applied, therefore
/part[filter] is the normal form, but see below how this affects array access.
The "no step" "/." stays on the current step, therefore
/part/.[filter] should be the same as
/part[filter].
Lastly,
/part/*[filter] means: take all the sub elements ("*") below "step" and apply the filter to those. The most common use is to take "all" elements of an array and choose one element via index:
/step/*[4]/. This takes the fifth element of the array inside "step". This is explained in even more depth in the next section.
Difference between
/affe[2] vs.
/affe/*[2]
Read carefully. This is different from what you probably expect when you know XPath.
In XPath "/affe[2]" would address an item of all elements named "affe" on this step. This is because in XPath elements with the same name can be repeated, like this:
<coolanimals> <affe>Pavian</affe> <affe>Gorilla</affe> <affe>Schimpanse</affe> </coolanimals>
and "//affe[2]" would get "Schimpanse" (we ignore the fact that in XPath array indexes start with 1, not 0 as in DPath, so we would actually get "Gorilla"; anyway, both are funky fellows).
So what does "/affe[2]" return in DPath? Nothing! It makes no sense, because "affe" is interpreted as a hash key and hash keys can not repeat in Perl data structures.
So what you often want in DPath is to look at the elements below "affe" and takes the third of them, e.g. in such a structure:
{ affe => [ 'Pavian', 'Gorilla', 'Schimpanse' ] }
the path "/affe/*[2]" would return "Schimpanse".
Filters
Filters are conditions in brackets. They apply to all elements that are directly found by the path part to which the filter is appended.
Internally the filter condition is part of a
grep construct (exception: single integers, they choose array elements). See below.
Examples:
/FOO/*[2]/
A single integer as filter means choosing an element from an array. So the * finds all subelements that follow the current step FOO, and the [2] reduces them to only the third element (index starts at 0).
/FOO/*[ idx == 2 ]/
The * is a step that matches all elements after FOO, but with the filter only those elements with index 2 are chosen. This is actually the same as just /FOO/*[2].
/FOO/*[key eq "CCC"]
In all elements after FOO it matches only those elements whose key is "CCC".
/FOO/*[key =~ /CCC/ ]
In all elements after step FOO it matches only those elements whose key matches the regex /CCC/. It is actually just Perl code inside the filter, which works in a grep{}-like context.
//FOO/*[value eq "RR2"]
Find elements below FOO that have the value RR2.
Combine this with the parent step ..:
//FOO/*[value eq "RR2"]/..
Find such an element below FOO where an element with value RR2 is contained.
//FOO[size >= 3]
Find FOO elements that are arrays or hashes of size 3 or bigger.
Filter functions
The filter condition is internally part of a
grep over the current subset of values. So you can write any condition like in a grep and also use the variable
$_.
Additional filter functions are available that are usually written to use $_ by default. See Data::DPath::Filters for complete list of available filter functions.
Here are some of them:
- idx
Returns the current index inside array elements.
Please note that the current matching elements might not be in a defined order if resulting from anything else than arrays.
- size
Returns the size of the current element. If it is an arrayref it returns number of elements, if it's a hashref it returns number of keys, if it's a scalar it returns 1, everything else returns -1.
- key
Returns the key of the current element if it is a hashref. Else it returns undef.
- value
Returns the value of the current element. If it is a hashref, return the value. If a scalar, return the scalar. Else return undef.
Special characters
There are 4 special characters: the slash
/, paired brackets
[], the double-quote
" and the backslash
\. They are needed and explained in a logical order.
Path parts are divided by the slash
/.
A path part can be extended by a filter with appending an expression in brackets
[].
To contain slashes in hash keys, they can be surrounded by double quotes
".
To contain double-quotes in hash keys they can be escaped with backslash
\.
Backslashes in path parts don't need to be escaped, except before escaped quotes (but see below on Backslash handling).
Filters of parts are already sufficiently divided by the brackets
[]. There is no need to handle special characters in them, not even double-quotes. The filter expression just needs to be balanced on the brackets.
So this is the order how to create paths:
1. backslash double-quotes that are part of the key
2. put double-quotes around the resulting key
3. append the filter expression after the key
4. separate several path parts with slashes
Backslash handling
If you know backslash in Perl strings, skip this paragraph, it should be the same.
It is somewhat difficult to create a backslash directly before a quoted double-quote.
Inside the DPath language the typical backslash rules apply that you already know from Perl single-quoted strings. The challenge is to specify such strings inside Perl programs where another layer of this backslashing applies.
Without quotes it's all easy. Both a single backslash
\ and a double backslash
\\ get evaluated to a single backslash
\.
Extreme edge case by example: To specify a plain hash key like this:
"EE\E5\"
where the quotes are part of the key, you need to escape the quotes and the backslash:
\"EE\E5\\\"
Now put quotes around that to use it as DPath hash key:
"\"EE\E5\\\""
and if you specify this in a Perl program you need to additionally escape the backslashes (i.e., double their count):
"\"EE\E5\\\\\\""
As you can see, strangely, this backslash escaping is only needed on backslashes that are not standing alone. The first backslash before the first escaped double-quote is ok to be a single backslash.
All strange, isn't it? At least it's (hopefully) consistent with something you know (Perl, Shell, etc.).
Iterator style
The iterator style approach is an alternative to the already described get-all-results-at-once approach. With it you iterate over the results one by one and can even run relative sub-searches on each. The iterators use the Iterator API.
Please note, that the iterators do not save memory, they are just holding the context to go step-by-step and to start subsequent searches. Each iterator needs to evaluate its whole result set first. So in fact with nested iterators your memory might even go up.
Basic usage by example
Initialize a DPath iterator on a data structure using:
my $root = dpathi($data);
Create a new iterator context, with the path relative to current root context:
my $affe_iter = $root->isearch("//anywhere/affe");
Iterate over affe results:
while ($affe_iter->isnt_exhausted) {
    my $affe_point = $affe_iter->value;  # next "affe" point
    my $affe       = $affe_point->deref; # the actual "affe"
}
Nested iterators example
This example is taken from the Benchmark::Perl::Formance suite, where the several plugins are allowed to provide their results anywhere at any level down in the result hash.
When the results are printed we look for all keys
Benchmark and regenerate the path to each so we can name it accordingly, e.g.,
plugin.name.subname.
For this we need an iterator to get the single
Benchmark points one by one and evaluate the corresponding ancestors to fetch their hash keys. Here is the code:
my $benchmarks_iter = dpathi($results)->isearch("//Benchmark");
while ($benchmarks_iter->isnt_exhausted) {
    my $benchmark = $benchmarks_iter->value;
    my $ancestors_iter = $benchmark->isearch("/::ancestor");
    while ($ancestors_iter->isnt_exhausted) {
        my $ancestor = $ancestors_iter->value;
        print Dumper( $ancestor->deref );           #(1)
        print $ancestor->first_point->{attrs}{key}; #(2)
    }
}
Note that we have two iterators, the first one (
$benchmarks_iter) over the actual benchmark results and the second one (
$ancestors_iter) over the ancestors relative to one benchmark.
In line #(1) you can see that once you have the searched point, here the ancestors, you get the actual data using
$iterator->value->deref.
The line #(2) is utilizing the internal data structure to find out about the actual hash key under which the point is located. (There is also an official API to that:
$ancestor->first_point->attrs->key, but there it's neccessary to check for undefined values before calling the methods attrs and key, so I went the easy way).
INTERNAL METHODS
To make pod coverage happy.
build_dpath
Prepares internal attributes for dpath.
build_dpathr
Prepares internal attributes for dpathr.
build_dpathi
Prepares internal attributes for dpathi.
AUTHOR
Steffen Schwigon,
<schwigon at cpan.org>
CONTRIBUTIONS
Florian Ragwitz (cleaner exports, $_ scoping, general perl consultant)
SEE ALSO
There are other modules on CPAN which are related to finding elements in data structures.
- Data::Path
- XML::XPathEngine
- Tree::XPathEngine
- Class::XPath
- Hash::Path
AUTHOR
Steffen Schwigon <ss5@renormalist.net>
This software is copyright (c) 2014 by Steffen Schwigon.
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | https://metacpan.org/pod/release/SCHWIGON/Data-DPath-0.50/lib/Data/DPath.pm | CC-MAIN-2015-14 | refinedweb | 3,013 | 63.9 |
Houdy.
As told in previous posts, I have read Learn You a Haskell for a great good. The book content, in addition to be humorous and filled with the author drawn-by-hand pictures, is worth while reading for people - who like me - embraced functional programming a few months ago.
The reading took some time, and in some sense started changing my way of coding. Do not misunderstand me. As a Java developer by day I have no other choice than coding in a Object Oriented approach - as far as Java permits me - and do not specifically reject this approach. I simply found some answers to technical questions and more personal way of looking at code.
Embracing a new paradigm leverages your ability to constantly make your approach of coding evolves. It is a matter of personal opinion, but I want my own code to evolve, always. Learn you a Haskell was a first shot, and at this very moment Programming Haskell and the Craft of functional programming lays beside me on the desktop while my pdf ordered copy of Real World Haskell patiently wait on my SSD USB key.
For those who know me, I love both Clojure and Scala, and this is not just an excuse to buy fancy geeky t shirts, I mean it . Practicing so long Haskell, while not doing Scala has been a burden. At some point I had to come back to Scala doing a simple exercise of my own in order to restart coding in the language. I needed a small kata, nothing specific in particular in order to get in Scala again, even a single class. Unfortunately, without current position nor pet project to tease my mind, the ideas were not easy to find. I found amusing considering the last chapter of Learn you a Haskell and started coding a very small zipper.
Today post is a just a very small rambling on coming back to Scala and trying to feel the code differently.
In 1997 Gerard Huet from INRIA (even me sometimes have french references), published an interesting article on the Zipper, as a simple approach to tree navigation and modification. Instead of considering a tree as a global context, one can find suitable to focus on a node to work on, to navigate from, or to change a local information to, like some data bound to the node. And this is it .
A zipper is mainly a magnifying glass focusing a specific branch of a piece of tree bound to a node. In order for us to move locally up, right, left from this branch after manipulating it, we just need to remember the little world around us and the path taken to get to this position (frequently the path issued from the root)..
As far as trees are concerned , knowing the world around our branch means knowing the owning node and the other tree branches.
Everything starts with a tree definition.
Everything starts with a tree definition.
After 3 months of Haskell, I could no see a tree definition as class template defining object grapes but as a union of types (You must trust me on this assertion :)). Stephan Boyer post on this point of view is quite explicit. A tree is either an empty leaf or a node. In haskell it is defined as :
data Tree a = Empty | Node a (Tree a) (Tree a) deriving (Show)
which becomes in Scala
package com.promindis.data sealed trait Tree[+A] object Empty extends Tree[Nothing] {override def toString = "Empty"} final case class Node[+A](value: A, left: Tree[A], right: Tree[A]) extends Tree[A]
The scala definition concedes two lines to the Haskell one (no big deal frankly, a a certain point, the line battle does not mean anything, generally battles around languages do not mean a lot).
Why making things complex? A tree definition can be hosted by a simple trait. The parameterized A type define the type of the data bound a tree node. A Tree instance can be Empty or a Node instance.
The Tree value constructors are defined to be co-variant. As so, a node binding a String value is compatible with a node binding a AnyRef value. A leaf by definition must be compatible with anything. In Scala all types have a common lower bound type impersonated by the Nothing type. Then the Empty object extending Tree[Nothing] will be compatible with Tree instances holding data of any type. Considering the uniqueness of a leaf, I found more natural making an object of it.
I call the little world around a tree node a Spot (and not a breadcrumb like Miran Lipovaca, I leave to him that cute definition :)). Spot definitions gathered in lists will make a complete path, noticing for each step whether we came from the right or the left. Therefore our type definition becomes:
package com.promindis.data sealed trait Spot[+A] final case class LeftSpot[+A](value: A, right: Tree[A]) extends Spot[A] final case class RightSpot[+A](value: A, left: Tree[A]) extends Spot[A]
Spot type is defined to be co-variant versus its parameterized type, the meaning attached to this co-variance reflecting the one attached to the Tree type co-variance. So far so good (I told you there would be little code).
At that point we are ready to play with zippers. What is a zipper ? A friendly (immutable of course) object encompassing the current working Tree node, and a path we are coming from, so:
At that point we are ready to play with zippers. What is a zipper ? A friendly (immutable of course) object encompassing the current working Tree node, and a path we are coming from, so:
final case class Zipper[+A](tree: Tree[A], spots: List[Spot[A]])
As explained before, the list of spots makes a path.
By default the tree and spots fields are final (val in scala), so immutable. Good.
Focusing, on the up, right, left and update functions in our tree, we can simply create a new immutable Zipper as a result of invoking each operation. Or can we ? What would happen if by any mean we were rambling too deep in our tree structure ? No doubt that something bad would occur as no one can go left or right from a leaf for example.
This is a potential error case.
Not watching our steps and going too far should lead us nowhere, but we should not suffer from nasty side effect like null pointers or invalid method calls. A nice Scala solution does exist, that would confine our Zipper in a "cotton" context, protecting us from side effects .
We can take back our Zipper whenever we want from the context, but the "cotton" context will not harm us. The Option type is our solution. Our Zipper is simply optional. It can look like Some(Zipper(tree, list)) when existing or None which represent no Zipper. As so, the left() method definition becomes:
def left(): Option[Zipper[A]]
The right and up methods will return too Option instances.
Frankly, at that point, we are nearly done. Of course I am going to use a test driven approach. But wait a minute. In order to ramble through my tree, check my values , will I have to extract my zipper at each step?
That can become particularly cumbersome ! Scala provides us with the same tools as with Map or List. One can extract the content of an Option using the tools used for collection iteration, the <- sugar form. So, starting from the following structure
val leftChild = Node(2,Node(3, Empty, Empty),Node(4, Empty, Empty)) val rightChild = Node(5, Node(6, Empty, Empty), Empty) val updatedChild = Node(7, Node(6, Empty, Empty), Empty) val tree = Node(1, leftChild, rightChild)
we can test the zipper expression after going left and up this way :
def e2() = {for ( left <- Zipper(tree, List()).left(); up <- left.up() ) yield up}.should(beSome(Zipper(tree, List())))
Applying <- to Zipper(tree, List()).left(), binds the extracted value to the local left variable. We then reuse it on the next line in order to go up. The result is again bound into a protecting Option context. At this point Specs2 provides you with the suitable DSL to extract, then challenge the value.
You may have guessed it, the Option type is a Monad, offering you a context to embed your values and manipulate them protecting your code from some side effect. Quoting Miran Lipovaca, the list comprehension for and its <- form provide us with an easy to read syntax gluing together monadic values in sequence. In our case the for comprehension is very similar to the do notation in Haskell.
Of course Monads are not just programmatic applicable design patterns and everyone can learn from category theory their more profound meaning, but you can view them as design patterns very suitable in solving programming problems. The Option monad offers you a possible failure context wrapping around optional values, while a List Monad for example can be assumed as embedding non deterministic values.
At some point one have to start working with this notions in order to progress and I think possible to start from the application level before leveraging to the more general notions in the category theory domain. The application level offers the advantage of the every day practice.
So, now before I forget, the testing code I wrote:
package com.promindis.data import org.specs2.Specification final class ZipperSpecification extends Specification{def is = "Zipper Specification" ^ p^ "Left method in Zipper should" ^ "go left from current scope" !e1^ "return to root applying up to go Left" !e2^ p^ "Right method in Zipper should" ^ "go right from root" !e3^ "return to root applying up to go right" !e4^ p^ "Navigation should" ^ "prevent me from going too far" !e5^ p^ "Update should" ^ "allow me to update right node" !e6 val leftChild = Node(2,Node(3, Empty, Empty),Node(4, Empty, Empty)) val rightChild = Node(5, Node(6, Empty, Empty), Empty) val updatedChild = Node(7, Node(6, Empty, Empty), Empty) val tree = Node(1, leftChild, rightChild) def e1() = Zipper(tree, List()).left() .should(beSome(Zipper(leftChild, List(LeftSpot(1, rightChild))))) def e2() = {for ( left <- Zipper(tree, List()).left(); up <- left.up() ) yield up}.should(beSome(Zipper(tree, List()))) def e3() = Zipper(tree, List()).right() .should(beSome(Zipper(rightChild, List(RightSpot(1, leftChild))))) def e4() = {for ( right <- Zipper(tree, List()).right(); up <- right.up() ) yield up}.should(beSome(Zipper(tree, List()))) def e5 = { for ( right <- Zipper(tree, List()).right(); deeperRight <- right.right(); tooFar <- deeperRight.right() ) yield tooFar}.should(beNone) def e6 = {for( right <- Zipper(tree, List()).right() ) yield right.updated(_ => 7)} .should(beSome(Zipper(updatedChild, List(RightSpot(1, leftChild))))) }
leading to the implementation:
package com.promindis.data final case class Zipper[+A](tree: Tree[A], spots: List[Spot[A]]) { def updated[B >: A](f: A => B): Zipper[B] = { tree match { case Node(value, left, right) => Zipper(Node(f(value), left, right), spots) case Empty => this } } def left(): Option[Zipper[A]] = { tree match { case Node(value, left, right) => Some(Zipper[A](left, LeftSpot(value, right)::spots)) case Empty => None } } def right(): Option[Zipper[A]] = { tree match { case Node(value, left, right) => Some(Zipper[A](right, RightSpot(value, left)::spots)) case Empty => None } } def up(): Option[Zipper[A]] = { spots match { case LeftSpot(value, right)::xs => Some(Zipper(Node(value, tree, right), xs)) case RightSpot(value, left)::xs => Some(Zipper(Node(value, left, tree), xs)) case Nil => None } } }
Progressively coming back to Scala, more determined than ever you walked by my side creating a small Zipper finally making use of the Option Monad. Not bad for a Christmas day. Merry Christmas to all. Be seeing you !!! :) | http://patterngazer.blogspot.com/2011/12/where-my-scala-makes-my-zipper-optional.html | CC-MAIN-2019-09 | refinedweb | 1,957 | 63.49 |
Publishing Working Group Telco — Minutes
Date: 2017-08-14
See also the Agenda and the IRC Log
Attendees
Present: Leonard Rosenthol, Ivan Herman, Tzviya Siegman, George Kerscher, Ben Dugas, Laurent Le Meur, Baldur Bjarnason, Avneesh Singh, Jun Gamou, Benjamin Young, Bill Kasdorf, Bill McCoy, Rachel Comerford, Deborah Kaplan, Romain Deltour, Evan Yamanishi, Yuri Khramov, Mateus Teixeira, Marisa DeMeglio, Katie Haritos-Shea, Ric Wright, Garth Conboy, Ben Schroeter, Reinaldo Ferraz, Peter Krautzberger, Chris Maden, Hadrien Gardeur, Tim Cole, Brady Duga, David Stroup
Regrets: Vladimir Levantovsky, Matt Garrish, Luc Audrain, Charles LaPierre
Guests:
Chair: Tzviya Siegman
Scribe(s): Leonard Rosenthol, Dave Cramer
Content:
Tzviya Siegman: getting ready to start…
… most likely a packed meeting
Tzviya Siegman:
Tzviya Siegman: first order of business - minutes…
… minutes approved!
Resolution #1: Last week’s meeting minutes approved
Tzviya Siegman: introductions? Who’s new?
Garth Conboy: <lurking from a plane, you all work hard! :-)>
BenShroeter: a11y manager at Pearson
Leonard Rosenthol: (hard to hear - very echoy - other Ben, who hasn’t identified himself)
Bill Kasdorf: Ben Dugas from Kobo
Leonard Rosenthol: what’s his handle?
Katie Haritos-Shea: been with WCAG since 2001. excited with DPUB work. will be involved as can
… large involvement in a11y
Leonard Rosenthol: and Katie too please, Rachel
George Kerscher: welcome Katie
1. Manifest synthesis, scope, outstanding issues
Tzviya Siegman: what is a manifest? let’s talk some more about this…
Tzviya Siegman:
Tzviya Siegman: there is a link about the abstract vs. concrete issues
… first we decide what we need (but not how it looks)
… and then we can figure out what it looks like and how they works
… but we started on MUST and SHOULD (and then to fallbacks, explicit, et.)
Romain Deltour: asking, are these concepts (abstract/concrete) that will appear in the document or just for helping us figure it out?
Tzviya Siegman: could go either way…but we do need to understand is needed
… the what then the how
Ivan Herman: I find it helpful to have that info in there in the doc, as it will hopefully also have info about fallbacks and such that would be relevant in the doc
… the concepts (but maybe not terms) are helpful
… it will also help people who are new to the grup (and the area)
… but maybe we will remove it in the future, but is good for now
… but my original reason for being on the queue was…
… to explain what we tried to do with the document and trying to come to consensus
… and where we knew there were issue (like secondary resources) made those should’s
… also listed outstanding issues so we have them tracked
… for FPWD, this is a good thing that people se where we are
… and with respec, it could easily go into the main doc
Bill Kasdorf: default reading order in abstract as a must. Consensus?
… I am not longer convinced it should be a must, but won’t argue
… item #7, its a must
George Kerscher: default reading order is an interesting creature, as some documents don’t necessary have one
… but there always needs to be a well defined entry point
… so let’s embrace the concept that documents are designed to be consumed differently
Bill Kasdorf: let me clarify, things like newspapers, magazines, etc. don’t need a default reading order
… but George’s suggestion is also good too, about navigating to other primary resources
… identification + navigation
Deborah Kaplan: there needs to be an order for the reader - where there is an algorithm
… that a tool can use to “scrape the text” and be sure that every page is hit
Chris Maden: Traversal requires a set, not a list.
Deborah Kaplan: is there an order where every resource can be reached?
Benjamin Young: recently worked on a magazine and there were huge discussions about the “reading order”, there was an order/sequence for the user
Rachel Comerford: +1 bigbluehat
Leonard Rosenthol: we’re mixing two concepts: identifying the default order
… that’s what
… ‘s suggested but not the only one
… and there’s what Deborah brought up, which is navigation or traversal
… those are different concepts
Leonard Rosenthol: let’s not complicate default reading order (of which there can be many others) with traversal
Bill McCoy: I wanted to say: IMO we should avoid being so general as to make every web page (even every static web page) to automatically be a “publication”… this is not the Web Platform WG
Bill Kasdorf: also agrees with @bigbluehat
Benjamin Young: +1 to the importance of “default” in “default reading order”
Bill McCoy: thus I agree with dkaplan3 that a order of the content is one distinguishing characteristic of a publication/document vs. arbitrary web page / web site
Ivan Herman: an abstract manifest should have an identifier - we missed that :(.
Ivan Herman: do we want to try to cross all the ‘t’s today and move this into the doc?
Katie Haritos-Shea: default is just one, not the only. and let’s not mix up navigation with default reading order
Avneesh Singh: i like this document, esp. the open issues
… going through the comments, from a11y, navigation is a stronger requirement than default reading order
… so, we may have a statement that default reading order can be optional if proper navigation is provided for all primary resources.
Tzviya Siegman: perhaps we need a best practices for some of these things?
Deborah Kaplan: +1
Tzviya Siegman: building on @ivan, how do we feel about using this as a starting point?
Tzviya Siegman: +1
Bill Kasdorf: +1 to FPWD
Ivan Herman: +1
Jun Gamou: +1
Benjamin Young: +1
Bill McCoy: +1
Leonard Rosenthol: +1
Tim Cole: +1
Katie Haritos-Shea: +1
Laurent Le Meur: +1
Rachel Comerford: +1
George Kerscher: +1
Ben Schroeter: +1
Ben Dugas: +1
Evan Yamanishi: +1
Peter Krautzberger: +1
Romain Deltour: 0
Ric Wright: +1
Baldur Bjarnason: 0
Avneesh Singh: +1
Tzviya Siegman: any comments on the zeros?
Dave Cramer: just to clarify on FPWD vs. Editors draft? we’re not close to FPWD
Ivan Herman: FPWD by end of year and the ED leads to that
Dave Cramer: that’s fine just worried about us ending up in corner
Ivan Herman: everything will evolve, and also beyond the FPWD
… as long as things are spread all over the place it’s hard for the editor. Moving stuff into the main doc should help things.
Romain Deltour: my 0 is really about the abstract manifest and information set and I would like to see that in the FPWD
… otherwise things are confusing from the draft and the terms may make it into normative text (and then are harder to get out)
Tzviya Siegman: so only speak on concrete?
Romain Deltour: yes
Ivan Herman: what I would propose, related to manifest, is to see how abstract moves to concrete
… big elephant is serialization and embedded
… I will write up something about defining the concrete and we can work through that
… but in the meantime, let’s leave it as is
… undecided but have a bias
Resolution #2: the document can be used to go into the Editor’s draft, modulo minor editorial changes as discussed at the meeting
Tzviya Siegman: let’s not use the term abstract, it’s confusing
… and that means lots of work for Matt
… so let’s start on some concrete items
… URLs, IRIs, etc., a big topic. Do you want to pick it up now?
2. URL, URI, IRI
Tim Cole: I submitted a pull request about this
Ivan Herman: so maybe we should hold off on this?
… trying to find the issue #
Tzviya Siegman:
Ivan Herman: most of us would agree that for spec purity we should use IRI
… but in practice the web dev community no longer use the term IRI and instead everything is a URL
… so if we use IRI, those people won;t know what we are talking about (and how it relates to the web, which doesn’t use that term)
Tim Cole:
Ivan Herman: if you look at HTML 5.x spec, there is a strange approach to URL with a reference and a note about using the term URL (but it’s not necessary what is written in the RFCs)
… and while this is completely crazy, it’s what we’re stuck with
Romain Deltour: the newer URL ref, which points to the whatwg:
Romain Deltour:
Katie Haritos-Shea: +1 to URL per HTML 5
Ivan Herman: so we should reference URL in HTML 5.x and just follow their lead (even if we don’t agree with them)
Laurent Le Meur: +1 to Ivan
Garth Conboy: +1 (URL)
Tim Cole: I am sympathetic to that, but I have concerns
… syntactically (URI and IRI) they are defined by the same spec
… but they aren’t the same in term of the way things are used (links vs. namespaces, for example)
… a big problem in the library community, for example
… perhaps we might lose some traction with groups that do care about the distinction (in favor of the folks who dont)
… for example, the identifier discussion plays right into this in many ways
… maybe start with the distinction and then drop later. (easier than other way)
Ivan Herman: where are the places where the differentiation matters?
… identifiers is very important!
… locators is also a place (since we mean IRI but folks look for URLs)
… so when we talk about some of these, we may indeed want to use the IRI term. but exception and not rule
… but URL is the norm
Katie Haritos-Shea: +1
Baldur Bjarnason: +1
Romain Deltour: +1
Tim Cole: +1
Benjamin Young: +1
Garth Conboy: Still +1
Resolution #3: Use URL-s and use IRI/URI when it becomes strictly important
Leonard Rosenthol: +1 let’s use URL everywhere except where we explicitly need IRI
Bill Kasdorf: +1
Tim Cole: one last bit on this rathole
… having the concept of identifier in addition to address is useful
Ivan Herman: Extra Pull Request:
Tim Cole: I did a pull request that address many of these issues
… but left open others such as #27 (canonical identifier)
… and use both IRI and URL, clear to differentiate them.
Tzviya Siegman: Tim’s PR
Tim Cole: so maybe we can raise issues between now and then on the PR, but lets accept it
Tzviya Siegman: email vote
Laurent Le Meur: reading over the pull request, there is a canonical identifier which is persistent and immutable
… if a WP is moved to another server, then the URLs will all change. is this the same pub or not?
Ivan Herman: very existential issue that we can spend a year on
Chris Maden: This is exactly the point for distinguishing between identifiers and locators…
Ivan Herman: depending on the needs of the publisher and publication
Bill Kasdorf: +1 to cmaden2
Ivan Herman: but we shouldn’t say anything about this specifically in our docs
Laurent Le Meur: does the fact that the identifier is in the manifest impact this, since the manifest must change if the identifier changes?
Tim Cole: by having this, it leaves open the option of how to handle moving between servers
… and you don’t want to lose the ability to identify it
… of course, that’s up to the publisher as they may wish to lose the identify. so you need both
Laurent Le Meur: so then there is still a requirement that we change the locator w/o changing the identifier? and so this has technical implications
Baldur Bjarnason: lots of prior art in the web community around this - same problem that feeds (Atom, RSS, etc.) all had to deal with
Laurent Le Meur: perhaps this is better called a persistent identifier
Ivan Herman: the term is defined by saying it is persistent in the doc
Tzviya Siegman: next steps
Ivan Herman: need a resolution to close issue 27.
… then hand over PR to editor?
Resolution #4: Close issue 27, hand over the PR to the editor
Ivan Herman: everyone OK with that?
Laurent Le Meur: thanks @bigbluehat I like the PR, yes
Leonard Rosenthol: it would get more eyes in the main doc - so let’s accept it and review there
Katie Haritos-Shea: I have a question: must’s and should’s, for example the natural language. Why a should?
Avneesh Singh: Title is still under discussion.
Leonard Rosenthol: it’s already in the HTML docs, so why force duplication in the manifest?
Ivan Herman: that’s why it should, not a must - because it’s very important but not required there in the manifest
Deborah Kaplan: there is not universal agreement on this however
… and books that have no words, there are interesting exceptions
… so let’s just call it a should for now and move on
… but many people still argue on WCAG concerns
Ivan Herman: if someone could put in an explicit issue about this (should vs. must)
Leonard Rosenthol: @ryladog will do
Avneesh Singh: title is still under heavy discussion in github. Language hasn’t had as much (yet). The language is also important for discovery, the language was included in EPUB metadata. And we still need to discuss how to deal with metadata and if it should be inside manifest or outside. So, many things are not yet settled.
Tzviya Siegman: thanks everyone who is working on the document - please keep them coming
… if anyone new wants to help, let us know
… and if you want to learn about the tools (respec, github), we can help
… and don’t forget about TPAC!
Tzviya Siegman:
Tzviya Siegman: be srue to book your hotel
3. Resolutions
- Resolution #1: Last week’s meeting minutes approved
- Resolution #2: the document can be used to go into the Editor’s draft, modulo minor editorial changes as discussed at the meeting
- Resolution #3: Use URL-s and use IRI/URI when it becomes strictly important
- Resolution #4: Close issue 27, hand over the PR to the editor | https://www.w3.org/publishing/groups/publ-wg/Meetings/Minutes/2017/2017-08-14-minutes | CC-MAIN-2019-39 | refinedweb | 2,333 | 51.35 |
Customizing Image Startup Programs
In this chapter...
- Introduction
- Anatomy of a startup program
- Structure of the system page
- Callout information
- The startup library
- Writing your own kernel callout
- PPC chips support
Introduction
The first program in a bootable Neutrino image is a startup program whose purpose is to:
- Initialize the hardware.
- Initialize the system page.
- Initialize callouts.
- Load and transfer control to the next program in the image.
You can customize Neutrino for different embedded-system hardware by changing the startup program.
Initialize hardware
You do basic hardware initialization at this time. The amount of initialization done here will depend on what was done in the IPL loader.
Note that you don't need to initialize standard peripheral hardware such as an IDE interface or the baud rate of serial ports. This will be done by the drivers that manage this hardware when they're started.
Initialize system page
Information about the system is collected and placed in an in-memory data structure called the system page. This includes information such as the processor type, bus type, and the location and size of available system RAM.
The kernel as well as applications can access this information as a read-only data structure. The hardware/system-specific code to interrogate the system for this information is confined to the startup program. This code doesn't occupy any system RAM after it has run.
Initialize callouts
Another key function of the startup code is that the system page callouts are bound in. These callouts are used by the kernel to perform various hardware- and system-specific functions that must be specified by the systems integrator.
Anatomy of a startup program
Each release of Neutrino ships with a growing number of startup programs for many boards. To find out what boards we currently support, please refer to the following sources:
- the boards directory under bsp_working_dir/src/hardware/startup
- QNX docs (BSP docs as well as startup-* entries in the Utilities Reference)
- the Community area of our website,
Each startup program is provided as a ready-to-execute binary. Full source and a Makefile are also available so you can customize and remake each one. The files are kept in this directory structure as illustrated:
[Figure: Startup directory structure]
Generally speaking, the following directory structure applies in the startup source for the startup-boardname module:
bsp_working_dir/src/hardware/startup/boards/boardname
Structure of a startup program
Each startup program consists of a main() with the following structure (in pseudo code):
Global variables

main()
{
    Call add_callout_array()
    Argument parsing (Call handle_common_option())
    Call init_raminfo()
    Remove RAM used by modules in the image
    if (virtual)
        Call init_mmu() to initialize the MMU
    Call init_intrinfo()
    Call init_qtime()
    Call init_cacheattr()
    Call init_cpuinfo()
    Set hardware machine name
    Call init_system_private()
    Call print_syspage() to print debugging output
}
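As a concrete illustration of this flow, here's a compilable sketch in which the startup-library calls are replaced by stubs that record their names, so the boot sequence can be traced end to end. The real functions live in the BSP's startup library and take board-specific arguments; argument parsing and the machine-name step are elided here.

```c
#include <string.h>

/* Stubs standing in for the real startup-library calls; each one just
 * records its name so the call ordering can be observed. */
static char trace[128];
static void step(const char *name) { strcat(trace, name); strcat(trace, ";"); }

static void add_callout_array(void)   { step("callouts"); }
static void init_raminfo(void)        { step("raminfo"); }
static void init_mmu(void)            { step("mmu"); }
static void init_intrinfo(void)       { step("intrinfo"); }
static void init_qtime(void)          { step("qtime"); }
static void init_cacheattr(void)      { step("cacheattr"); }
static void init_cpuinfo(void)        { step("cpuinfo"); }
static void init_system_private(void) { step("sysprivate"); }
static void print_syspage(void)       { step("print"); }

/* Same ordering as the pseudo-code above; init_mmu() runs only for a
 * virtually mapped system. */
static const char *startup_sequence(int virtual_mode)
{
    trace[0] = '\0';
    add_callout_array();
    init_raminfo();
    if (virtual_mode)
        init_mmu();
    init_intrinfo();
    init_qtime();
    init_cacheattr();
    init_cpuinfo();
    init_system_private();
    print_syspage();
    return trace;
}
```

A board-specific startup typically differs only in which of these steps need custom code; the overall ordering stays the same.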
Creating a new startup program
To create a new startup program, you should make a new directory under bsp_working_dir/src/hardware/startup/boards and copy the files from one of the existing startup program directories. For example, to create something close to the Intel PXA250TMDP board, called my_new_board, you would:
- cd bsp_working_dir/src/hardware/startup/boards
- mkdir my_new_board
- cp -r pxa250tmdp/* my_new_board
- cd my_new_board
- make clean
For descriptions of all the startup functions, see “The startup library” section in this chapter.
Structure of the system page
As mentioned earlier (see the section “Initialize system page”), one of the main jobs of the startup program is to initialize the system page.
The system page structure struct syspage_entry is defined in the include file <sys/syspage.h>. The structure contains a number of constants, references to other structures, and a union shared between the various processor platforms supported by Neutrino.
It's important to realize that there are two ways of accessing the data within the system page, depending on whether you're adding data to the system page at startup time or reading data from the system page later (as would be done by an application program running after the system has been booted). Regardless of which access method you use, the fields are the same.
Here's the system page structure definition, taken from <sys/syspage.h>:
/*
 * contains at least the following:
 */
struct syspage_entry {
    uint16_t            size;
    uint16_t            total_size;
    uint16_t            type;
    uint16_t            num_cpu;
    syspage_entry_info  system_private;
    syspage_entry_info  asinfo;
    syspage_entry_info  hwinfo;
    syspage_entry_info  cpuinfo;
    syspage_entry_info  cacheattr;
    syspage_entry_info  qtime;
    syspage_entry_info  callout;
    syspage_entry_info  callin;
    syspage_entry_info  typed_strings;
    syspage_entry_info  strings;
    syspage_entry_info  intrinfo;
    syspage_entry_info  smp;
    syspage_entry_info  pminfo;
    union {
        struct x86_syspage_entry   x86;
        struct ppc_syspage_entry   ppc;
        struct mips_syspage_entry  mips;
        struct arm_syspage_entry   arm;
        struct sh_syspage_entry    sh;
    } un;
};
Note that some of the fields presented here may be initialized by the code provided in the startup library, while some may need to be initialized by code provided by you. The amount of initialization required really depends on the amount of customization that you need to perform.
Let's look at the various fields.
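Each syspage_entry_info member refers to a substructure stored elsewhere within the system page. The sketch below resolves such a reference the way a reader of the system page would; the offset/size layout and the names used here are simplified stand-ins for the real definitions in <sys/syspage.h>, so treat them as assumptions of the sketch.

```c
#include <stdint.h>

/* Illustrative stand-in for the syspage_entry_info type: each section
 * reference is an offset/size pair relative to the start of the
 * system page (this layout is an assumption for the sketch). */
typedef struct {
    uint16_t entry_off;   /* byte offset of the section from the syspage base */
    uint16_t entry_size;  /* total size of the section in bytes */
} syspage_entry_info;

/* Resolve a section reference: the section lives at base plus offset. */
static void *section_ptr(void *syspage_base, const syspage_entry_info *info)
{
    return (char *)syspage_base + info->entry_off;
}
```

This offset-based layout is what lets the same read-only page be shared between the kernel and applications without embedded pointers.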
size
The size of the system page entry. This member is set automatically by the library.
total_size
The size of the system page entry plus the referenced substructures; effectively the size of the entire system-page database. This member is set automatically by the library and adjusted later (grown) as required by other library calls.
type
This is used to indicate the CPU family for determining which union member in the un element to use. Can be one of:
SYSPAGE_ARM, SYSPAGE_MIPS, SYSPAGE_PPC, SYSPAGE_SH4, or SYSPAGE_X86.
The library sets this member automatically.
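A reader uses type to pick the right member of the un union. The following sketch shows that dispatch; the enum values here are purely illustrative and are not the real SYSPAGE_* constants from <sys/syspage.h>.

```c
#include <stddef.h>

/* Illustrative values only; the real constants come from <sys/syspage.h>. */
enum syspage_type {
    SYSPAGE_X86, SYSPAGE_PPC, SYSPAGE_MIPS, SYSPAGE_ARM, SYSPAGE_SH4
};

/* Map the type field to the name of the union member a reader should use. */
static const char *cpu_family(enum syspage_type type)
{
    switch (type) {
    case SYSPAGE_X86:  return "x86";
    case SYSPAGE_PPC:  return "ppc";
    case SYSPAGE_MIPS: return "mips";
    case SYSPAGE_ARM:  return "arm";
    case SYSPAGE_SH4:  return "sh";
    default:           return NULL;   /* unknown CPU family */
    }
}
```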
num_cpu
The num_cpu member indicates the number of CPUs present on the given system. This member is initialized to the default value 1 in the library and adjusted by the library call init_smp() if additional processors are detected.
system_private
The system_private area contains information that the operating system needs to know when it boots. This is filled in by the startup library's init_system_private() function.
asinfo
The asinfo section consists of an array of the following structure. Each entry describes the attributes of one section of address space on the machine.
struct asinfo_entry {
    uint64_t    start;
    uint64_t    end;
    uint16_t    owner;
    uint16_t    name;
    uint16_t    attr;
    uint16_t    priority;
    int         (*alloc_checker)(struct syspage_entry *__sp,
                                 uint64_t *__base,
                                 uint64_t *__len,
                                 size_t __size,
                                 size_t __align);
    uint32_t    spare;
};
The attr field
The attr field can have the following bits:
- #define AS_ATTR_READABLE 0x0001
- Address range is readable.
- #define AS_ATTR_WRITABLE 0x0002
- Address range is writable.
- #define AS_ATTR_CACHABLE 0x0004
- Address range can be cached (this bit should be off if you're using device memory).
- #define AS_ATTR_KIDS 0x0010
- Indicates that there are other entries that use this one as their owner. Note that the library turns on this bit automatically; you shouldn't specify it when creating the section.
- #define AS_ATTR_CONTINUED 0x0020
- Indicates that there are multiple entries being used to describe one “logical” address range. This bit will be on in all but the last one. Note that the library turns on this bit and uses it internally; you shouldn't specify it when creating the section.
Address space trees
The asinfo section contains trees describing address spaces (where RAM, ROM, flash, etc. are located).
The general hierarchy for address spaces is:
/memory/memclass/....
Or:
/io/memclass/....
Or:
/memory/io/memclass/....
The memory or io indicates whether this is describing something in the memory or I/O address space (the third form is used on a machine without separate in/out instructions and where everything is memory-mapped).
The memclass is something like: ram, rom, flash, etc. Below that would be further classifications, allowing the process manager to provide typed memory support.
hwinfo
The hwinfo area contains information about the hardware platform (type of bus, devices, IRQs, etc). This is filled in by the startup library's init_hwinfo() function.
This is one of the more elaborate sections of the Neutrino system page. The hwinfo section doesn't consist of a single structure or an array of the same type. Instead, it consists of a sequence of symbolically “tagged” structures that as a whole describe the hardware installed on the board. The following types and constants are all defined in the <hw/sysinfo.h> file.
Each structure (or tag) in the section starts the same way:
struct hwi_prefix {
    uint16_t    size;
    uint16_t    name;
};
The size field gives the size, in 4-byte quantities, of the structure (including the hwi_prefix).
The name field is an offset into the strings section of the system page, giving a zero-terminated string name for the structure. It might seem wasteful to use an ASCII string rather than an enumerated type to identify the structure, but it actually isn't. The system page is typically allocated in 4 KB granularity, so the extra storage required by the strings doesn't cost anything. On the upside, people can add new structures to the section without requiring QNX Software Systems to act as a central repository for handing out enumerated type values. When processing the section, code should ignore any tag that it doesn't recognize (using the size field to skip over it).
Items
Each item structure starts with the following:

struct hwi_item {
    struct hwi_prefix   prefix;
    uint16_t            itemsize;
    uint16_t            itemname;
    uint16_t            owner;
    uint16_t            kids;
};

The itemsize field gives the size, in 4-byte quantities, of the item and all the tags that belong to it (it's adjusted as tags are added). The itemname field is an offset into the strings section giving the name of the item. The owner field is the offset, from the start of the hwinfo section, of the item that owns this one (HWI_NULL_OFF if it's a top-level item), and kids counts the items owned by this one. The owner fields thus link the items into the device trees described below, with an item's full name given by the path from the root, e.g. “/hw/pci/serial/8250.”
Device trees
The hwinfo section contains trees describing the various hardware devices on the board.
The general hierarchy for devices is:
/hw/bus/devclass/device
where:
- hw
- the root of the hardware tree.
- bus
- the bus the hardware is on (pci, eisa, etc.).
- devclass
- the general class of the device (serial, rtc, etc.).
- device
- the actual chip implementing the device (8250, mc146818, etc.).
Building the section
Two basic calls in the startup library are used to add things to the hwinfo section:
- hwi_alloc_tag()
- hwi_alloc_item()
void *hwi_alloc_tag(const char *name, unsigned size, unsigned align);
This call allocates a tag of size size with the tag name of name. If the structure contains any 64-bit integer fields within it, the align field should be set to 8; otherwise, it should be 4. The function returns a pointer to memory that can be filled in as appropriate. Note that the hwi_prefix fields are automatically filled in by the hwi_alloc_tag() function.
void *hwi_alloc_item(const char *name, unsigned size, unsigned align, const char *itemname, unsigned owner);
This call allocates an item structure. The first three parameters are the same as in the hwi_alloc_tag() function.
The itemname and owner parameters are used to set the itemname and owner fields of the hwi_item structure. All hwi_alloc_tag() calls done after a hwi_alloc_item() call are assumed to belong to that item and the itemsize field is adjusted appropriately.
Here are the general steps for building an item:
- Call hwi_alloc_item() to build a top-level item (one with the owner field to be HWI_NULL_OFF).
- Add whatever other tag structures you want in the item.
- Use hwi_alloc_item() to start a new item. This item could be either another top-level one or a child of the first.
Note that you can build the items in any order you wish, provided that the parent is built before the child.
When building a child item, suppose you've remembered its owner in a variable or you know only its item name. In order to find out the correct value of the owner parameter, you can use the following function (which is defined in the C library, since it's useful for people processing the section):
unsigned hwi_find_item(unsigned start, ...);
The start parameter indicates where to start the search for the given item. For an initial call, it should be set to HWI_NULL_OFF. If the item found isn't the one wanted, then the return value from the first call can be used as the start parameter of a second call, and the search continues from where it left off. The item being searched for is identified by a sequence of char * item names, terminated by a NULL; the names are given in order from the top of the ownership hierarchy down to the item itself. For example, the following call finds the first occurrence of an item called “foobar”:

hwi = hwi_find_item(HWI_NULL_OFF, "foobar", NULL);
The following call finds the first occurrence of an item called “foobar” that's owned by “sam”:
item_off = hwi_find_item(HWI_NULL_OFF, "sam", "foobar", NULL);
If the requested item can't be found, HWI_NULL_OFF is returned.
Other functions
The following functions are in the C library for use in processing the hwinfo section:
- unsigned hwi_tag2off(void *);
- Given a pointer to the start of a tag, return the offset, in bytes, from the beginning of the start of the hwinfo section.
- void *hwi_off2tag(unsigned);
- Given an offset, in bytes, from the start of the hwinfo section, return a pointer to the start of the tag.
- unsigned hwi_find_tag(unsigned start, int curr_item, const char *tagname);
- Find the tag named tagname. The start parameter works the same as the one in hwi_find_item(). If curr_item is nonzero, the search stops at the end of the current item (whatever item the start parameter points into). If curr_item is zero, the search continues until the end of the section. If the tag isn't found, HWI_NULL_OFF is returned.
Defaults
Before main() is invoked in the startup program, the library adds some initial entries to serve as a basis for later items.
HWI_TAG_INFO() is a macro defined in the <startup.h> header and expands out to the three name, size, align parameters for hwi_alloc_tag() and hwi_alloc_item() based on some clever macro names.
void hwi_default() {
    hwi_tag     *tag;
    unsigned    loc;

    hwi_alloc_item(HWI_TAG_INFO(group), HWI_ITEM_ROOT_AS, HWI_NULL_OFF);
    tag = hwi_alloc_item(HWI_TAG_INFO(group), HWI_ITEM_ROOT_HW, HWI_NULL_OFF);
    hwi_alloc_item(HWI_TAG_INFO(bus), HWI_ITEM_BUS_UNKNOWN, hwi_tag2off(tag));

    loc = hwi_find_item(HWI_NULL_OFF, HWI_ITEM_ROOT_AS, NULL);
    tag = hwi_alloc_item(HWI_TAG_INFO(addrspace), HWI_ITEM_AS_MEMORY, loc);
    tag->addrspace.base = 0;
    tag->addrspace.len = (uint64_t)1 << 32;

#ifndef __X86__
    loc = hwi_tag2off(tag);
#endif

    tag = hwi_alloc_item(HWI_TAG_INFO(addrspace), HWI_ITEM_AS_IO, loc);
    tag->addrspace.base = 0;
#ifdef __X86__
    tag->addrspace.len = (uint64_t)1 << 16;
#else
    tag->addrspace.len = (uint64_t)1 << 32;
#endif
}
Predefined items and tags
These are the items defined in the hw/sysinfo.h file. Note that you're free to create additional items — these are just what we needed for our own purposes. You'll notice that all things are defined as HWI_TAG_NAME_*, HWI_TAG_ALIGN_*, and struct hwi_*. The names are chosen that way so that the HWI_TAG_INFO() macro in startup works properly.
Group item
#define.
Bus item
#define HWI_TAG_NAME_bus    "Bus"
#define HWI_TAG_ALIGN_bus   (sizeof(uint32))

struct hwi_bus {
    struct hwi_item     item;
};
The Bus item tells the system about a hardware bus. Item names can be (but are not limited to):
#define HWI_ITEM_BUS_PCI        "pci"
#define HWI_ITEM_BUS_ISA        "isa"
#define HWI_ITEM_BUS_EISA       "eisa"
#define HWI_ITEM_BUS_MCA        "mca"
#define HWI_ITEM_BUS_PCMCIA     "pcmcia"
#define HWI_ITEM_BUS_UNKNOWN    "unknown"
Device item
#define.
location tag
#define.
irq tag
#define HWI_TAG_NAME_irq    "irq"
#define HWI_TAG_ALIGN_irq   (sizeof(uint32))

struct hwi_irq {
    struct hwi_prefix   prefix;
    uint32_t            vector;
};
Note that this is a simple tag, not an item. The vector field gives the logical interrupt vector number of the device.
diskgeometry tag
#define.
pad tag
#define HWI_TAG_NAME_pad    "pad"
#define HWI_TAG_ALIGN_pad   (sizeof(uint32))

struct hwi_pad {
    struct hwi_prefix   prefix;
};
Note that this is a simple tag, not an item. This tag is used when padding must be inserted to meet the alignment constraints for the subsequent tag.
cpuinfo
The cpuinfo area contains information about each CPU chip in the system, such as the CPU type, speed, capabilities, performance, and cache sizes. There are as many elements in the cpuinfo structure as the num_cpu member indicates (e.g. on a dual-processor system, there will be two cpuinfo entries).
This table is filled automatically by the library function init_cpuinfo().
The flags member contains a bitmapped indication of the capabilities of the CPU chip. Note that the prefix for the manifest constant indicates which CPU family it applies to (e.g. PPC_ indicates this constant is for use by the PowerPC family of processors). In the case of no prefix, it indicates that it's generic to any CPU.
Here are the constants and their defined meanings:
syspage_entry cacheattr
The cacheattr area describes the caches in the system. Each entry gives the characteristics of one cache (flags, line_size, num_lines), a control callout, and a next field that chains entries together so a hierarchy of caches can be described; the parameters of the add_cache() function, below, map to these fields. For example:

[Figure: Two-processor system with separate L1 instruction and data caches.]
syspage_entry qtime
The qtime area contains information about the timebase present on the system, as well as other time-related information. The library routine init_qtime() fills these data structures.
The parameters timer_rate and timer_scale relate to the external counter chip's input frequency, in Hz, as follows: the period of the counter, in seconds, is

    timer_rate × 10^timer_scale
Yes, this does imply that timer_scale is a negative number. The goal when expressing the relationship is to make timer_rate as large as possible in order to maximize the number of significant digits available during calculations.
For example, on an x86 PC with standard hardware, the values would be 838095345UL for the timer_rate and -15 for the timer_scale. This indicates that the timer value is specified in femtoseconds (the -15 means “ten to the negative fifteen”); the actual value is 838,095,345 femtoseconds (approximately 838 nanoseconds).
If you need to change the number of nsecs that the OS adds to the time when a tick fires, you can manually adjust the nsec_inc value in SYSPAGE_ENTRY (qtime).
The idea is to adjust for differences between the clock interval and the real expired time. The closer they become the less need there is for ClockAdjust() calls.
What you'll need to do is find out the physical address of the syspage. If it's already in nsec_inc you won't need to modify startup. If not, modify startup to put it there. Then use the mmap_device_memory() function to make the physical address of the syspage writable. That is, get the offset to the read-only page, and map a new block of memory to the address.
You could give ClockAdjust() a value of 0 for the number of ticks, to indicate that you want to make this adjustment “permanent”. If you don't want to do that, you can give the ClockAdjust() function the maximum possible value for tick_count.
When you modify nsec_inc, you're overriding the value that ClockPeriod() calculated. The timer_rate and timer_scale fields describe the input frequency to the clock hardware. The code uses these fields and the requested tick rate to calculate the number of input-frequency clocks to count before generating an interrupt. The number of clocks counted, combined with timer_rate and timer_scale, provides the nsec_inc value. For example:

    timer_load = requested_ticksize / (timer_rate × 10^timer_scale)
    nsec_inc   = timer_load × (timer_rate × 10^timer_scale)
The nsec_inc value is used to adjust the time of day when the clock interrupt goes off.
The changed value in ClockPeriod() is used to determine the new ticksize.
callout
The callout area contains the callout routines (described in “Callout information,” below) that let the kernel work with board-specific hardware; different boards (MIPS and PowerPC eval boards, for example) require different routines.
callin
For internal use.
typed_strings
The typed_strings area consists of several entries, each of which is a number and a string. The number is 4 bytes and the string is NULL-terminated as per C. The number in the entry corresponds to a particular constant from the system include file <confname.h> (see the C function confname() for more information).
Generally, you wouldn't access this member yourself; the various init_*() library functions put things into the typed strings literal pool themselves. But if you need to add something, you can use the function call add_typed_string() from the library.
strings
This member is a literal pool used for nontyped strings. Users of these strings would typically specify an index into strings (for example, cpuinfo's name member).
Generally, you wouldn't access this member yourself; the various init_*() library functions put things into the literal pool themselves. But if you need to add something, you can use the function call add_string() from the library.
intrinfo
The intrinfo area is used to store information about the interrupt system. It also contains the callouts used to manipulate the interrupt controller hardware.
On a multicore system, each interrupt is directed to one (and only one) CPU, although it doesn't matter which. How this happens is under the control of the programmable interrupt controller chip(s) on the board, which you program when you initialize the PICs in startup.
The intrinfo area is automatically filled in by the library routine init_intrinfo().
If you need to override some of the defaults provided by init_intrinfo(), or if the function isn't appropriate for your custom environment, you can call add_interrupt_array() directly with a table of the following format:
The cpu_intr_base member
The interpretation of the cpu_intr_base member varies with the processor:
The flags member
The flags member takes two sets of flags. The first set deals with the characteristics of the interrupts:
- INTR_FLAG_NMI
- Indicates that this is a NonMaskable Interrupt (NMI). An NMI is an interrupt that can't be disabled by clearing the CPU's interrupt enable flag, unlike most normal interrupts. NonMaskable interrupts are typically used to signal events that require immediate action, such as a parity error, a hardware failure, or imminent loss of power. The address of the NMI handler is stored in the BIOS's interrupt vector table at position 02H; for this reason, an NMI is often referred to as INT 02H.
- INTR_FLAG_CASCADE_IMPLICIT_EOI
- Indicates that an EOI to the primary interrupt controller is not required when handling a cascaded interrupt (e.g. it's done automatically). Only used if this entry describes a cascaded controller.
- INTR_FLAG_CPU_FAULT
- Indicates that one or more of the vectors described by this entry is not connected to a hardware interrupt source, but rather is generated as a result of a CPU fault (e.g. bus fault, parity error). Note that we strongly discourage designing your hardware this way. The implication is that a check for an exception needs to be inserted into the generated code stream; after the interrupt has been identified, an EOI needs to be sent to the controller. The EOI code burst has the additional responsibility of detecting what address caused the fault, retrieving the fault type, and then passing the fault on. The primary disadvantage of this approach is the extra code inserted into the code path.
- PPC_INTR_FLAG_400ALT
- Similar to INTR_FLAG_NMI, this indicates to the code generator that a different kernel entry sequence is required. This is because the PPC400 series doesn't have an NMI, but rather has a critical interrupt that can be masked. This interrupt shows up differently from a “regular” external interrupt, so this flag indicates this fact to the kernel.
- PPC_INTR_FLAG_CI
- Same as PPC_INTR_FLAG_400ALT, where CI refers to critical interrupt.
- PPC_INTR_FLAG_SHORTVEC
- Indicates that the exception table doesn't have the normal 256 bytes of memory space between this vector and the next.
The second set of flags deals with code generation:
- INTR_GENFLAG_LOAD_SYSPAGE
- Before the interrupt identification or EOI code sequence is generated, a piece of code needs to be inserted to fetch the system page pointer into a register so that it's usable within the identification code sequence.
- INTR_GENFLAG_LOAD_INTRINFO
- Same as INTR_GENFLAG_LOAD_SYSPAGE, except that it loads a pointer to this structure.
- INTR_GENFLAG_LOAD_INTRMASK
- Used only by EOI routines for hardware that doesn't automatically mask at the chip level. When the EOI routine is about to reenable interrupts, it should reenable only those interrupts that are actually enabled at the user level (e.g. managed by the functions InterruptMask() and InterruptUnmask()). When this flag is set, the existing interrupt mask is stored in a register for access by the EOI routine. A zero in the register indicates that the interrupt should be unmasked; a nonzero indicates it should remain masked.
- INTR_GENFLAG_NOGLITCH
- Used by the interrupt ID code to cause a check to be made to see if the interrupt was due to a glitch or to a different controller. If this flag is set, the check is omitted — you're indicating that there's no reason (other than the fact that the hardware actually did generate an interrupt) to be in the interrupt service routine. If this flag is not set, the check is made to verify that the suspected hardware really is the source of the interrupt.
- INTR_GENFLAG_LOAD_CPUNUM
- Same as INTR_GENFLAG_LOAD_SYSPAGE, except that it loads a pointer to the number of the CPU this structure uses.
- INTR_GENFLAG_ID_LOOP
- Some interrupt controllers have read-and-clear registers indicating the active interrupts. That is, the first read returns a bitset with the pending interrupts, and then immediately zeroes the register. Since the interrupt ID callout can return only one interrupt number at a time, we might fail to process all the interrupts if there's more than one bit on in the status register. When this flag is set, a storage location is allocated for the callout, and the generated code loops, returning one pending interrupt per invocation:
- If the storage is nonzero, the callout uses it to identify another interrupt to process, knocks that bit down, writes the new value back into the storage location and returns the identified interrupt number.
- If the storage location is zero, the callout reads the hardware status register (clearing it) and identifies the interrupt number from it. It then knocks that bit off, writes the value to the storage location, and then returns the appropriate interrupt number.
- If both the storage and hardware register are zero, the routine returns -1 to indicate no interrupt is present as per usual.
config return values
The config callout may return zero or more of the following flags:
- INTR_CONFIG_FLAG_PREATTACH
- Normally, an interrupt is masked off until a routine attaches to it via InterruptAttach() or InterruptAttachEvent(). If CPU fault indications are routed through to a hardware interrupt (not recommended!), the interrupt would, by default, be disabled. Setting this flag causes a “dummy” connection to be made to this source, causing this level to become unmasked.
- INTR_CONFIG_FLAG_DISALLOWED
- Prevents user code from attaching to this interrupt level. Generally used with INTR_CONFIG_FLAG_PREATTACH, but could be used to prevent user code from attaching to any interrupt in general.
- INTR_CONFIG_FLAG_IPI
- Identifies the vector that's used as the target of an inter-processor interrupt in an SMP system.
syspage_entry union un
The un union is where processor-specific system page information is kept. The purpose of the union is to serve as a demultiplexing point for the various CPU families. It is demultiplexed based on the value of the type member of the system page structure.
un.x86
This structure contains the x86-specific information. On a standard PC-compatible platform, the library routines (described later) fill these fields:
- smpinfo
- Contains info on how to manipulate the SMP control hardware; filled in by the library call init_smp().
- gdt
- Contains the Global Descriptor Table (GDT); filled in by the library.
- idt
- Contains the Interrupt Descriptor Table (IDT); filled in by the library.
- pgdir
- Contains pointers to the Page Directory Table(s); filled in by the library.
- real_addr
- The virtual address corresponding to the physical address range 0 through 0xFFFFF inclusive (the bottom 1 megabyte).
un.x86.smpinfo (deprecated)
The members of this field are filled automatically by the function init_smp() within the startup library.
un.ppc (deprecated)
This structure contains the PowerPC-specific information. On a supported evaluation platform, the library routines (described later) fill these fields. On customized hardware, you'll have to supply the information.
- smpinfo
- Contains info on how to manipulate the SMP control hardware; filled in by the library call init_smp().
- kerinfo
- Kernel information, filled by the library.
- exceptptr
- Points at system exception table, filled by the library.
un.ppc.kerinfo
Contains information relevant to the kernel:
- pretend_cpu
- Allows us to specify an override for the CPU ID register so that the kernel can pretend it is a “known” CPU type. This is done because the kernel “knows” only about certain types of PPC CPUs; different variants require specialized support. When a new variant is manufactured, the kernel will not recognize it. By stuffing the pretend_cpu field with a CPU ID from a known CPU, the kernel will pretend that it's running on the known variant.
- init_msr
- Template of what bits to have on in the MSR when creating a thread. Since the MSR changes among the variants in the PPC family, this allows you to specify some additional bits that the kernel doesn't necessarily know about.
- ppc_family
- Indicates what family the PPC CPU belongs to.
- asid_bits
- Identifies what address space bits are active.
- callout_ts_clear
- Lets callouts know whether to turn off data translation to get at their hardware.
un.mips
This structure contains the MIPS-specific information:
- shadow_imask
- A shadow copy of the interrupt mask bits for the builtin MIPS interrupt controller.
un.arm
This structure contains the ARM-specific information:
- L1_vaddr
- Virtual address of the MMU level 1 page table used to map the kernel.
- L1_paddr
- Physical address of the MMU level 1 page table used to map the kernel.
- startup_base
- Virtual address of a 1-1 virtual-physical mapping used to map the startup code that enables the MMU. This virtual mapping is removed when the kernel is initialized.
- startup_size
- Size of the mapping used for startup_base.
- cpu
- Structure containing ARM core-specific operations and data. Currently this contains the following:
- page_flush
- A routine used to implement CPU-specific cache/TLB flushing when the memory manager unmaps or changes the access protections to a virtual memory mapping for a page. This routine is called for each page in a range being modified by the virtual memory manager.
- page_flush_deferred
- A routine used to perform any operations that can be deferred when page_flush is called. For example on the SA-1110 processor, an Icache flush is deferred until all pages being operated on have been modified.
un.sh
This structure contains the Hitachi SH-specific information:
- exceptptr
- Points at system exception table, filled by the library.
smp
The smp area is CPU-independent; it holds the information used to start up the secondary processors in an SMP system.
pminfo
The pminfo area is a communication area between the power manager and startup/power callout.
The pminfo area contains elements that are customizable in the power manager structure and are power-manager-dependent.
Callout information
All the callout routines share a set of similar characteristics:
- coded in assembler
- position-independent
- no static read/write storage.
Debug interface
The debug interface consists of the following callouts:
- display_char()
- poll_key()
- break_detect().
These three callouts are used by the kernel when it wishes to interact with a serial port, console, or other device (e.g. when it needs to print out some internal debugging information or when there's a fault). Only display_char() is required; the others are optional.
Clock/timer interface
Here are the clock and timer interface callouts:
- timer_load()
- timer_reload()
- timer_value()

The timer_load() callout loads a divisor value into the timer chip to arm the next clock interrupt. The timer_reload() callout is invoked when the timer interrupt occurs; it's responsible for such things as:
- Reloading the divisor value (because some timer hardware doesn't have an automatic reload on the timer chip — this type of hardware should be avoided if possible).
- Telling the kernel whether the timer chip caused the interrupt or not (e.g. if you had multiple interrupt sources tied to the same line used by the timer — not the ideal hardware design, but…).
The timer_value() callout is used to return the value of the timer chip's internal count as a delta from the last interrupt. This is used on processors that don't have a high-precision counter built into the CPU (e.g. 80386, 80486).
Interrupt controller interface
Here are the callouts for the interrupt controller interface:
- mask()
- unmask()
- config()
In addition, two “code stubs” are provided:
- id
- eoi
The mask() and unmask() perform masking and unmasking of a particular interrupt vector.
The config() callout is used to ascertain the configuration of an interrupt level.
For more information about these callouts, refer to the intrinfo structure in the system page above.
Cache controller interface
Depending on the processor (like the MIPS and PowerPC), the cache controllers may need to be told to invalidate portions of the cache when certain functions are performed in the kernel.
The callout for cache control is control(). This callout gets passed:
- a set of flags (defining the operation to perform)
- the address (either in virtual or physical mode, depending on flags in the cacheattr array in the system page)
- the number of cache lines to affect
System reset callout
The miscellaneous callout, reboot(), gets called whenever the kernel needs to reboot the machine.
The reboot() callout is responsible for resetting the system. This callout lets developers customize the events that occur when proc needs to reboot — such as turning off a watchdog, banging the right registers etc. without customizing proc each time.
A system shutdown calls sysmgr_reboot(), which will eventually trigger the reboot() callout.
Power management callout
The power() callout gets called whenever power management needs to be activated.
The startup library
The following are the available library functions (in alphabetical order):
add_cache()
add_callout()
add_callout_array()
add_interrupt()
add_interrupt_array()
add_ram()
add_string()
add_typed_string()
alloc_qtime()
alloc_ram()
as_add()
as_add_containing()
as_default()
as_find()
as_find_containing()
as_info2off()
as_off2info()
as_set_checker()
as_set_priority()
avoid_ram()
calc_time_t()
calloc_ram()
callout_io_map_indirect()
callout_memory_map_indirect()
callout_register_data()
chip_access()
chip_done()
chip_read8()
chip_read16()
chip_read32()
chip_write8()
chip_write16()
chip_write32()
copy_memory()
del_typed_string()
falcon_init_l2_cache()
falcon_init_raminfo()
falcon_system_clock()
find_startup_info()
find_typed_string()
handle_common_option()
hwi_add_device()
hwi_add_inputclk()
hwi_add_irq()
hwi_add_location()
hwi_add_nicaddr()
hwi_add_rtc()
hwi_alloc_item()
hwi_alloc_tag()
hwi_find_as()
hwi_find_item()
hwi_find_tag()
hwi_off2tag()
hwi_tag2off()
init_asinfo()
init_cacheattr()
init_cpuinfo()
init_hwinfo()
init_intrinfo()
init_mmu()
init_pminfo()
init_qtime()
init_qtime_sa1100()
init_raminfo()
init_smp()
init_syspage_memory() (deprecated)
init_system_private()
jtag_reserve_memory()
kprintf()
mips41xx_set_clock_freqs()
openbios_init_raminfo()
pcnet_reset()
ppc400_pit_init_qtime()
ppc405_set_clock_freqs()
ppc600_set_clock_freqs()
ppc700_init_l2_cache()
ppc800_pit_init_qtime()
ppc800_set_clock_freqs()
ppc_dec_init_qtime()
print_syspage()
rtc_time()
startup_io_map()
startup_io_unmap()
startup_memory_map()
startup_memory_unmap()
tulip_reset()
uncompress()
x86_cpuid_string()
x86_cputype()
x86_enable_a20()
x86_fputype()
x86_init_pcbios()
x86_pcbios_shadow_rom()
x86_scanmem()
add_cache()
int add_cache(int next, unsigned flags, unsigned line_size, unsigned num_lines, const struct callout_rtn *rtn);
Add an entry to the cacheattr section of the system page structure. Parameters map one-to-one with the structure's fields. The return value is the array index number of the added entry. Note that if there's already an entry that matches the one you're trying to add, that entry's index is returned — nothing new is added to the section.
add_callout()
void add_callout(unsigned offset, const struct callout_rtn *callout);
Add a callout to the callout_info section of the system page. The offset parameter holds the offset from the start of the section (as returned by the offsetof() macro) that the new routine's address should be placed in.
add_callout_array()
void add_callout_array (const struct callout_slot *slots, unsigned size)
Add the callout array specified by slots (for size bytes) into the callout array in the system page.
add_interrupt()
struct intrinfo_entry *add_interrupt(const struct startup_intrinfo *startup_intr);
Add a new entry to the intrinfo section. Returns a pointer to the newly added entry.
add_interrupt_array()
void add_interrupt_array (const struct startup_intrinfo *intrs, unsigned size)
Add the interrupt array callouts specified by intrs (for size bytes) into the interrupt callout array in the system page.
add_ram()
void add_ram(paddr_t start, paddr_t size);
Tell the system that there's RAM available starting at physical address start for size bytes.
add_string()
unsigned add_string (const char *name)
Add the string specified by name into the string literal pool in the system page and return the index.
add_typed_string()
unsigned add_typed_string (int type_index, const char *name)
Add the typed string specified by name (of type type_index) into the typed string literal pool in the system page and return the index.
alloc_qtime()
struct qtime_entry *alloc_qtime(void);
Allocate space in the system page for the qtime section and fill in the epoch, boot_time, and nsec_tod_adjust fields. Returns a pointer to the newly allocated structure so that user code can fill in the other fields.
alloc_ram()
paddr_t alloc_ram (paddr_t addr, paddr_t size, paddr_t align)
Allocate memory from the free memory pool initialized by the call to init_raminfo(). The RAM is not cleared.
as_add()
unsigned as_add(paddr_t start, paddr_t end, unsigned attr, const char *name, unsigned owner);

Add an entry to the asinfo section of the system page. The start, end, attr, and name parameters correspond to the fields of the asinfo entry; owner is the offset of the entry's owner (AS_NULL_OFF for a top-level entry). Returns the offset, from the start of the asinfo section, of the new entry.
as_add_containing()
unsigned as_add_containing(paddr_t start, paddr_t end, unsigned attr, const char *name, const char *container);

Same as as_add(), except that instead of giving an explicit owner, every existing entry named container that covers any portion of the given start/end range becomes an owner, with a new entry created under each. Returns the offset of the first entry added.
as_default()
unsigned as_default(void);
Add the default memory and io entries to the asinfo section of the system page.
as_find()
unsigned as_find(unsigned start, ...);
The start parameter indicates where to start the search for the given item. For an initial call, it should be set to AS_NULL_OFF. If the item found isn't the one wanted, the return value from the first call can be passed as the start parameter of the next call, and the search continues from where it left off. The item being searched for is identified by a sequence of char * name parameters, terminated by a NULL. For example, the following finds the first entry named “memory”:

as = as_find(AS_NULL_OFF, "memory", NULL);

If the requested item can't be found, AS_NULL_OFF is returned.
as_find_containing()
unsigned as_find_containing(unsigned off, paddr_t start, paddr_t end, const char *container);
Find an asinfo entry with the name pointed to by container that at least partially covers the range given by start and end. Follows the same rules as as_find() to know where the search starts. Returns the offset of the matching entry or AS_NULL_OFF if none is found. (The as_add_containing() function uses this to find what the owner fields should be for the entries it's adding.)
as_info2off()
unsigned as_info2off(const struct asinfo_entry *);
Given a pointer to an asinfo entry, return the offset from the start of the section.
as_off2info()
struct asinfo_entry *as_off2info(unsigned offset);
Given an offset from the start of the asinfo section, return a pointer to the entry.
as_set_checker()
void.
as_set_priority()
void as_set_priority(unsigned as_off, unsigned priority);
Set the priority field of the indicated.
avoid_ram()
void.
calc_time_t()
unsigned long calc_time_t(const struct tm *tm);
Given a struct tm (with values appropriate for the UTC timezone), calculate the value to be placed in the boot_time field of the qtime section.
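The value calc_time_t() computes is ordinary POSIX seconds-since-the-epoch (no leap seconds). As an illustration of the arithmetic involved, not the library's actual source, here's a self-contained sketch:

```c
/* A self-contained sketch of the arithmetic behind calc_time_t():
 * POSIX seconds since 1970-01-01 00:00:00 UTC, ignoring leap seconds.
 * (Illustrative only -- not the startup library's actual source.)
 * mon is 0-11 and mday is 1-31, as in struct tm. */
static int is_leap(int y)
{
    return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
}

unsigned long utc_seconds(int year, int mon, int mday,
                          int hour, int min, int sec)
{
    static const int mdays[12] = { 31,28,31,30,31,30,31,31,30,31,30,31 };
    unsigned long days = 0;
    int i;

    for (i = 1970; i < year; i++)
        days += is_leap(i) ? 366 : 365;
    for (i = 0; i < mon; i++)
        days += mdays[i] + (i == 1 && is_leap(year));
    days += mday - 1;
    return ((days * 24 + hour) * 60 + min) * 60 + sec;
}
```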
calloc_ram()
paddr32_t calloc_ram (size_t size, unsigned align)
Allocate memory from the free memory pool initialized by the call to init_raminfo(). The RAM is cleared.
callout_io_map_indirect()
callout_memory_map_indirect()
void *callout_memory_map_indirect(unsigned size, paddr_t phys, unsigned prot_flags);
Same as mmap_device_memory() in the C library — provide access to a memory-mapped device. The returned value is for use in kernel callouts (i.e. mappings that live beyond the end of the startup program and are maintained by the OS while it's running).
callout_register_data()
void callout_register_data( void *rp, void *data );
This function lets you associate a pointer to arbitrary data with a callout. This data pointer is passed to the patcher routine (see “Patching the callout code,” below).
The rp argument is a pointer to the location where the callout address is stored in the system page you're building. For example, say foo points to a system page section that you're working on, and in the section there's a field bar that points to a callout. You'd call callout_register_data(&foo->bar, &some_interesting_data_for_patcher); later, when the patcher routine for that callout is invoked, &some_interesting_data_for_patcher is passed to it.
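A sketch of that pattern follows. The section pointer dbg, its display_char field, and the uart_info structure are all hypothetical names used only for illustration:

```c
/* Data the patcher routine will need when it fixes up the callout
 * (hypothetical structure and values). */
static struct {
    paddr_t  base;      /* physical base of the UART */
    unsigned shift;     /* register spacing */
} uart_info = { 0x3f8, 0 };

/* dbg points at the system-page section being built; dbg->display_char
 * holds the callout address.  Register uart_info against it so the
 * patcher routine receives &uart_info in its data parameter. */
callout_register_data(&dbg->display_char, &uart_info);
```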
chip_access()
void.
chip_done()
void chip_done(void);
Terminate access to the hardware chip specified by chip_access().
chip_read8()
unsigned chip_read8(unsigned off);
Read one byte from the device specified by chip_access(). The off parameter is first scaled by the reg_shift value specified in chip_access() before being used.
chip_read16()
unsigned chip_read16(unsigned off);
Same as chip_read8(), but for 16 bits.
chip_read32()
unsigned chip_read32(unsigned off);
Same as chip_read16(), but for 32 bits.
chip_write8()
void chip_write8(unsigned off, unsigned val);
Write one byte from the device specified by chip_access(). The off parameter is first scaled by the reg_shift value specified in chip_access() before being used.
chip_write16()
void chip_write16(unsigned off, unsigned val);
Same as chip_write8(), but for 16 bits.
chip_write32()
void chip_write32(unsigned off, unsigned val);
Same as chip_write16(), but for 32 bits.
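Taken together, the chip_*() routines let startup code poke at a device without caring whether it's I/O- or memory-mapped. A sketch that polls an 8250-style UART; note that the chip_access() argument order shown is an assumption, since its prototype is elided above:

```c
/* Gain access to the UART; argument order (base, reg_shift, mem_mapped,
 * size) is an assumption. */
chip_access(0x3f8, 0, 0, 8);

/* Wait until the transmit holding register is empty (LSR bit 5),
 * then send one byte.  With reg_shift 0, offset 5 is the LSR. */
while ((chip_read8(5) & 0x20) == 0)
    ;
chip_write8(0, 'A');

chip_done();        /* terminate access to the device */
```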
copy_memory()
void copy_memory (paddr_t dst, paddr_t src, paddr_t len)
Copy len bytes of memory from physical memory at src to dst.
del_typed_string()
int del_typed_string(int type_index);
Find the string in the typed_strings section of the system page indicated by the type type_index and remove it. Returns the offset where the removed string was, or -1 if no such string was present.
falcon_init_l2_cache()
void falcon_init_l2_cache(paddr_t base);
Enable the L2 cache on a board with a Falcon system controller chip. The base physical address of the Falcon controller registers is given by base.
falcon_init_raminfo()
void falcon_init_raminfo(paddr_t falcon_base);
On a system with the Falcon system controller chip located at falcon_base, determine how much/where RAM is installed and call add_ram() with the appropriate parameters.
falcon_system_clock()
unsigned falcon_system_clock(paddr_t falcon_base);
On a system with a Falcon chipset located at physical address falcon_base, return the speed of the main clock input to the CPU (in Hertz). This can then be used in turn to set the cpu_freq, timer_freq, and cycles_freq variables.
find_startup_info()
const.
find_typed_string()
int find_typed_string(int type_index);
Return the offset from the beginning of the type_strings section of the string with the type_index type. Return -1 if no such string is present.
handle_common_option()
void handle_common_option (int opt)
Take the option identified by opt (a single ASCII character) and process it. This function assumes that the global variable optarg points to the argument string for the option.
Valid values for opt and their actions are:
- A
- Reboot switch. If set, an OS crash will cause the system to reboot. If not set, an OS crash will cause the system to hang.
- D
- Output channel specification (e.g. kprintf(), stdout, etc.).
- f
- Specify CPU frequencies:
- cpu_freq — the frequency at which the CPU runs. Also sets the speed field in the cpuinfo section of the system page.
- cycles_freq — the frequency at which the value returned by ClockCycles() increments. Also sets the cycles_per_sec field in the qtime section of the system page.
- timer_freq — the frequency at which the timer chip input runs. Also sets the timer_rate and timer_scale values of the qtime section of the system page.
- K
- kdebug remote debug protocol channel.
- M
- Placeholder for processing additional memory blocks. The parsing of additional memory blocks is deferred until init_system_private().
- N
- Add the hostname specified to the typed name string space under the identifier _CS_HOSTNAME.
- R
- Used for reserving memory at the bottom of the address space.
- r
- Used for reserving memory at any address space you specify.
- S
- Placeholder for processing debug code's -S option.
- P
- Specify maximum number of CPUs in an SMP system.
- j
- Add Jtag-related options. Reserves four bytes of memory at the specified location and copies the physical address of the system page to this location so the hardware debugger can retrieve it.
- v
- Increment the verbosity global flag, debug_flag.
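handle_common_option() is normally called from the board-specific main() as the default case of the option-parsing loop. A sketch of that loop; the COMMON_OPTIONS_STRING macro name and the board-specific 'x' option are illustrative assumptions:

```c
int opt;

/* Handle board-specific options first; anything else falls through
 * to the library's common-option handler. */
while ((opt = getopt(argc, argv, COMMON_OPTIONS_STRING "x:")) != -1) {
    switch (opt) {
    case 'x':
        /* board-specific option; optarg points at its argument */
        break;
    default:
        handle_common_option(opt);
        break;
    }
}
```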
hwi_add_device()
void hwi_add_device(const char *bus, const char *class, const char *name, unsigned pnp);
Add an hwi_device item to the hwinfo section. The bus and class parameters are used to locate where in the device tree the new device is placed.
hwi_add_inputclk()
void hwi_add_inputclk(unsigned clk, unsigned div);
Add an hwi_inputclk tag to the hw item currently being constructed.
hwi_add_irq()
void hwi_add_irq(unsigned vector);
Add an irq tag structure to the hwinfo section. The logical vector number for the interrupt will be set to vector.
hwi_add_location()
void hwi_add_location(paddr_t base, paddr_t len, unsigned reg_shift, unsigned addr_space);
Add a location tag structure to the hwinfo section. The fields of the structure will be set to the given parameters.
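The hwi_add_*() routines are typically called as a group to describe one device: the device item first, then its tags. A sketch describing an 8250-style UART; the bus/class names and numeric values are illustrative, not taken from a real board:

```c
/* Describe an 8250 UART in the hwinfo section (illustrative values). */
hwi_add_device("unknown", "serial", "8250", 0);
hwi_add_location(0x3f8, 8, 0, hwi_find_as(0x3f8, 0)); /* I/O space, 8 ports */
hwi_add_irq(4);                                       /* logical vector 4 */
hwi_add_inputclk(1843200, 16);                        /* 1.8432 MHz, /16 */
```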
hwi_add_nicaddr()
void hwi_add_nicaddr(const uint8 *addr, unsigned len);
Add an hwi_nicaddr tag to the hw item currently being constructed.
hwi_add_rtc()
void.
hwi_alloc_item()
void *hwi_alloc_item(const char *tagname, unsigned size, unsigned align, const char *itemname, unsigned owner);
Add an item structure to the hwinfo section.
hwi_alloc_tag()
void *hwi_alloc_tag(const char *tagname, unsigned size, unsigned align);
Add a tag structure to the hwinfo section.
hwi_find_as()
unsigned hwi_find_as(paddr_t base, int mmap);
Given a physical address of base and mmap (indicating 1 for memory-mapped and 0 for I/O-space-mapped), return the offset from the start of the asinfo section indicating the appropriate addrspace field value for an hwi_location tag.
hwi_find_item()
unsigned hwi_find_item(unsigned start, ...);
Search for a given item in the hwinfo section of the system page. If start is HWI_NULL_OFF, the search begins at the start of the hwinfo section. If not, it starts from the item after the offset of the one passed in (this allows people to find multiple tags of the same type; it works just like the find_startup_info() function). The var args portion is a list of character pointers, giving item names; the list is terminated with a NULL. The order of the item names gives ownership information. For example:
item = hwi_find_item(HWI_NULL_OFF, "foobar", NULL);
searches for an item name called “foobar.” The following:
item = hwi_find_item(HWI_NULL_OFF, "mumblyshwartz", "foobar", NULL);
also searches for “foobar,” but this time it has to be owned by an item called “mumblyshwartz.”
If the item can't be found, HWI_NULL_OFF is returned; otherwise, the byte offset within the hwinfo section is returned.
hwi_find_tag()
unsigned.
hwi_off2tag()
void *hwi_off2tag(unsigned off);
Given a byte offset from the start of the hwinfo section, return a pointer to the hwinfo tag structure.
hwi_tag2off()
unsigned hwi_tag2off(void *tag);
Given a pointer to the start of a hwinfo tag instruction, convert it to a byte offset from the start of the hwinfo system page section.
init_asinfo()
void init_asinfo(unsigned mem);
Initialize the asinfo section of the system page. The mem parameter is the offset of the memory entry in the section and can be used as the owner parameter value for as_add()s that are adding memory.
init_cacheattr()
void init_cacheattr (void)
Initialize the cacheattr member. For all platforms, this is a do-nothing stub.
init_cpuinfo()
void init_cpuinfo (void)
Initialize the members of the cpuinfo structure with information about the installed CPU(s) and related capabilities. Most systems will be able to use this function directly from the library.
init_hwinfo()
void init_hwinfo (void)
Initialize the appropriate variant of the hwinfo structure in the system page.
init_intrinfo()
void init_intrinfo (void)
Initialize the intrinfo structure.
- x86
- You would need to change this only if your hardware doesn't have the standard PC-compatible dual 8259 configuration.
- MIPS
- The default library version sets up the internal MIPS interrupt controller.
- PowerPC
- No default version exists; you must supply one.
- ARM
- No default version exists; you must supply one.
- SH
- The default library version sets up the SH-4 on-chip peripheral interrupt. You need to provide the external interrupt code.
If you're providing your own function, make sure it initializes:
- the interrupt controller hardware as appropriate (e.g. on the x86 it should program the two 8259 interrupt controllers)
- the intrinfo structure with the details of the interrupt controller hardware.
This initialization of the structure is done via a call to the function add_interrupt_array().
init_mmu()
void init_mmu (void)
Sets up the processor for virtual addressing mode by setting up page-mapping hardware and enabling the pager.
On the x86 family, it sets up the page tables as well as special mappings to “known” physical address ranges (e.g. sets up a virtual address for the physical address ranges 0 through 0xFFFFF inclusive).
On MIPS and SH, this function is currently a stub. On the PowerPC family, it may be a stub: the 400 and 800 series versions are stubs, while the 600 series and BookE versions are not.
On the ARM family, this function simply sets up the page tables.
init_pminfo()
*init_pminfo (unsigned managed_size)
Initialize the pminfo section of the system page and set the number of elements in the managed storage array.
init_qtime()
void init_qtime (void)
Initialize the qtime structure in the system page. Most systems will be able to use this function directly from the library.
This function doesn't exist for ARM. Specific functions exist for ARM processors with on-chip timers; currently, this includes only init_qtime_sa1100().
init_qtime_sa1100()
void init_qtime_sa1100 (void)
Initialize the qtime structure and kernel callouts in the system page to use the on-chip timer for the SA1100 and SA1110 processors.
init_raminfo()
void init_raminfo (void)
Determine the location and size of available system RAM and initialize the asinfo structure in the system page.
If you know the exact amount and location of RAM in your system, you can replace this library function with one that simply hard-codes the values via one or more add_ram() calls.
- x86
- If the RAM configuration is known (e.g. set by the IPL code, or set by a GNU multiboot loader), then the library version of init_raminfo() calls the library routine find_startup_info() to fetch the information from a known location in memory. If the RAM configuration isn't known, a RAM scan (via x86_scanmem()) is performed, looking for valid memory between locations 0 and 0xFFFFFF, inclusive. (Note that the VGA aperture that usually starts at location 0xB0000 is specifically ignored.)
- MIPS
PowerPC
ARM
SH
- There's no library default. You must supply your own init_raminfo() function.
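As noted above, if the amount and location of RAM are hard-wired for your board, you can replace the library's init_raminfo() with a trivial one. A minimal sketch, assuming a board that always has 64 MB of RAM at physical address 0 (add_ram()'s exact parameter list isn't shown in this chapter; a start-address/size pair is assumed):

```c
/* Board-specific replacement for init_raminfo(): no probing needed,
 * since this board always has 64 MB at physical address 0 (assumed). */
void init_raminfo(void)
{
    add_ram(0x00000000, 64 * 1024 * 1024);
}
```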
init_smp()
void init_smp (void)
Initialize the SMP functionality of the system, assuming the hardware (e.g. x86, PPC, MIPS) supports SMP.
init_syspage_memory() (deprecated)
void.
init_system_private()
void init_system_private (void)
Find all the boot images that need to be started and fill a structure with that information; parse any -M options used to specify memory regions that should be added; tell Neutrino where the image filesystem is located; and finally allocate room for the actual storage of the system page. On all platforms, this shouldn't require modification.
jtag_reserve_memory()
void jtag_reserve_memory (unsigned long resmem_addr, unsigned long resmem_size, uint8_t resmem_flag)
Reserve a user-specified block of memory at the location specified in resmem_addr. If the resmem_flag is set to 0, clear the memory.
kprintf()
void kprintf (const char *fmt, ... )
Display output using the put_char() function you provide. It supports a very limited set of printf() style formats.
mips41xx_set_clock_freqs()
void mips41xx_set_clock_freqs(unsigned sysclk);
On a MIPS R41xx series chip, set the cpu_freq, timer_freq, and cycles_freq variables appropriately, given a system clock input frequency of sysclk.
openbios_init_raminfo()
void openbios_init_raminfo(void);
On a system that contains an OpenBIOS ROM monitor, add the system RAM information.
pcnet_reset()
void pcnet_reset(paddr_t base, int mmap);
Ensure that a PCnet-style Ethernet controller chip at the given physical address (either I/O or memory-mapped, as specified by mmap) is reset, so that no stray DMA activity interferes with system startup.
ppc400_pit_init_qtime()
void ppc400_pit_init_qtime(void);
On a PPC 400 series chip, initialize the qtime section and timer kernel callouts of the system page to use the on-board Programmable Interval Timer.
ppc405_set_clock_freqs()
void ppc405_set_clock_freqs (unsigned sys_clk, unsigned timer_clk);
Initialize the timer_freq and cycles_freq variables based on the given timer_clk. The cpu_freq variable is initialized by multiplying the given system clock (sys_clk); the multiplication factor is found using the CPC0_PSR DCR.
ppc600_set_clock_freqs()
void ppc600_set_clock_freqs(unsigned sysclk);
On a PPC 600 series chip, set the cpu_freq, timer_freq, and cycles_freq variables appropriately, given a system clock input frequency of sysclk.
ppc700_init_l2_cache()
void ppc700_init_l2_cache(unsigned flags);
On a PPC 700 series system, initialize the L2 cache. The flags indicate which bits in the L2 configuration register are set. In particular, they decide the L2 size, clock speed, and so on. For details, see the Motorola PPC 700 series user's documentation for the particular hardware you're using.
For example, on a Sandpoint board, flags might be:
PPC700_SPR_L2CR_1M | PPC700_SPR_L2CR_CLK2 | PPC700_SPR_L2CR_OH05
This would set the following for L2CR:
- 1 MB L2 cache
- clock speed of half of the core speed
- “output-hold” value of 0.5 nsec.
ppc800_pit_init_qtime()
void ppc800_pit_init_qtime(void);
On a PPC 800 series chip, initialize the qtime section and timer kernel callouts of the system page to use the on-board Programmable Interval Timer.
ppc800_set_clock_freqs()
void ppc800_set_clock_freqs(unsigned extclk_freq, unsigned extal_freq, int is_extclk);
On a PPC 800 series chip, set the cpu_freq, timer_freq, and cycles_freq variables appropriately, given input frequencies of extclk_freq at the EXTCLK pin and extal_freq at the XTAL/EXTAL pins.
If is_extclk is nonzero, then extclk_freq is used as the main timing reference (MODCLK1 signal is one at reset). If zero, extal_freq is used as the main timing reference (MODCLK1 signal is zero at reset).
Note that the setting of the frequency variables assumes that the ppc800_pit_init_qtime() routine is being used. If some other initialization of the qtime section and timer callouts takes place, the values in the frequency variables may have to be modified.
ppc_dec_init_qtime()
void ppc_dec_init_qtime(void);
On a PPC, initialize the qtime section and timer kernel callouts of the system page to use the decrementer register.
print_syspage()
void print_syspage (void)
Print the contents of all the structures in the system page. The global variable debug_level is used to determine what gets printed. The debug_level must be at least 2 to print anything; a debug_level of 3 will print the information within the individual substructures.
Note that you can set the debug level at the command line by specifying multiple -v options to the startup program.
You can also use the startup program's -S command-line option to select which entries are printed from the system page: -Sname selects name to be printed, whereas -S~name disables name from being printed. The name can be selected from the following list:
rtc_time()
unsigned long rtc_time (void)
This is a user-replaceable function responsible for returning the number of seconds since January 1 1970 00:00:00 GMT.
- x86
- This function defaults to calling rtc_time_mc146818(), which knows how to get the time from an IBM-PC standard clock chip.
- MIPS
PowerPC
ARM
- The default library version simply returns zero.
- SH
- The default function calls rtc_time_sh4(), which knows how to get the time from the SH-4 on-chip rtc.
Currently, these are the chip-specific versions:
- rtc_time_ds1386()
- Dallas Semiconductor DS-1386 compatible
- rtc_time_m48t5x()
- SGS-Thomson M48T59 RTC/NVRAM chip
- rtc_time_mc146818()
- Motorola 146818 compatible
- rtc_time_rtc72423()
- FOX RTC-72423 compatible
- rtc_time_rtc8xx()
- PPC 800 onboard RTC hardware.
startup_io_map()
uint).
startup_io_unmap()
void startup_io_unmap(uintptr_t port);
Same as unmap_device_io() in the C library — remove access to an I/O port on the x86 (on other systems, unmap_device_io() is the same as startup_memory_unmap()) at the given port location.
startup_memory_map()
void *startup_memory_map(unsigned size, paddr_t phys, unsigned prot_flags);
Same as mmap_device_memory() in the C library — provide access to a memory-mapped device. The returned value is for use while the startup program is running (as opposed to callout_memory_map(), which is for use after startup is completed).
startup_memory_unmap()
void startup_memory_unmap(void *vaddr);
Same as unmap_device_memory() in the C library — remove access to a memory-mapped device at the given location.
tulip_reset()
void.
uncompress()
int.
x86_cpuid_string()
int. 386).
x86_cputype()
unsigned x86_cputype (void)
An x86 platform-only function that determines the type of CPU and returns the number (e.g. 386).
x86_enable_a20()
int x86_enable_a20 (unsigned long cpu, int only_keyboard).
x86_fputype()
unsigned x86_fputype (void)
An x86-only function that returns the FPU type number (e.g. 387).
x86_init_pcbios()
void x86_init_pcbios(void);
Perform initialization unique to an IBM PC BIOS system.
x86_pcbios_shadow_rom()
int x86_pcbios_shadow_rom(paddr_t rom, size_t size);
Given the physical address of a ROM BIOS extension, this function makes a copy of the ROM in a RAM location and sets the x86 page tables in the _syspage_ptr->un.x86.real_addr range to refer to the RAM copy rather than the ROM version. When something runs in V86 mode, it'll use the RAM locations when accessing the memory.
The amount of ROM shadowed is the maximum of the size parameter and the size indicated by the third byte of the BIOS extension.
The function returns:
- 0
- if there's no ROM BIOS extension signature at the address given
- 1
- if the system is started in physical mode, so there's no MMU to redirect references to the RAM copy
- 2
- if everything works.
x86_scanmem()
unsigned x86_scanmem (paddr_t beg, paddr_t end)
An x86-only function that scans memory between beg and end looking for RAM, and returns the total amount of RAM found. It scans memory performing a R/W test of 3 values at the start of each 4 KB page. Each page is marked with a unique value. It then rescans the memory looking for contiguous areas of memory and adds them to the asinfo entry in the system page.
A special check is made for a block of memory between addresses 0xB0000 and 0xBFFFF, inclusive. If memory is found there, the block is skipped (since it's probably the dual-ported memory of a VGA card).
The call x86_scanmem (0, 0xFFFFFF) would locate all memory in the first 16 megabytes (except VGA memory). You may make multiple calls to x86_scanmem() on different areas of memory in order to step over known areas that are dual-ported with hardware.
Writing your own kernel callout
When the kernel invokes one of your callouts:
- the MMU is enabled (the callout would have to disable it if necessary)
- you are running on the kernel stack
- you are executing code copied into the system page, so no functions in the startup program are available.
Find out who's gone before
The best way to learn how to write a callout is to look at the ones already in the startup library. They're grouped into the following categories:
- cache
- cache control routines
- debug
- kernel debug input and output routines
- interrupt
- interrupt handling routines
- timer
- timer chip routines
- reboot
- rebooting the system.
Why are they in assembly language?
Since callout code is copied into the system page, it must be completely position-independent, with no relocations; that's far easier to guarantee in assembly language than in compiled C.
Starting off
Find.
“Patching” the callout code
You may need to write a callout that deals with a device that may appear in different locations on different boards. You can do this by “patching” the callout code as it is copied to its final position. The third parameter of the CALLOUT_START macro is either a zero or the address of a patcher() routine. This routine has the following prototype:
void patcher(paddr_t paddr, paddr_t vaddr, unsigned rtn_offset, unsigned rw_offset, void *data, struct callout_rtn *src );
This routine is invoked immediately after the callout has been copied to its final resting place. The parameters are as follows:
- paddr
- Physical address of the start of the system page.
- vaddr
- Virtual address of the system page that allows read/write access (usable only by the kernel).
- rtn_offset
- Offset from the beginning of the system page to the start of the callout's code.
- rw_offset
- See the section on “Getting some R/W storage” below.
- data
- A pointer to arbitrary data registered by callout_register_data() (see above).
- src
- A pointer to the callout_rtn structure that's being copied into place.
Here's an example of a patcher routine for an x86 processor:
patch_debug_8250:
    movl    0x4(%esp),%eax          // get paddr of routine
    addl    0xc(%esp),%eax          // ...
    movl    0x14(%esp),%edx         // get base info
    movl    DDI_BASE(%edx),%ecx     // patch code with real serial port
    movl    %ecx,0x1(%eax)
    movl    DDI_SHIFT(%edx),%ecx    // patch code with register shift
    movl    $REG_LS,%edx
    shll    %cl,%edx
    movl    %edx,0x6(%eax)
    ret

CALLOUT_START(display_char_8250, 0, patch_debug_8250)
    movl    $0x12345678,%edx        // get serial port base (patched)
    movl    $0x12345678,%ecx        // get serial port shift (patched)
    ....
CALLOUT_END(display_char_8250)
After the display_char_8250() routine has been copied, the patch_debug_8250() routine is invoked, where it modifies the constants in the first two instructions to the appropriate I/O port location and register spacing for the particular board. The patcher routines don't have to be written in assembler, but they typically are to keep them in the same source file as the code they're patching. By arranging the first instructions in a group of related callouts all the same (e.g. debug_char_*(), poll_key_*(), break_detect_*()), the same patcher routine can be used for all of them.
Getting some R/W storage
Your callouts may need to have access to some static read/write storage. Normally this wouldn't be possible because of the position-independent requirements of a callout. But you can do it by using the patcher routines and the second parameter to CALLOUT_START. The second parameter to CALLOUT_START is the address of a four-byte variable that contains the amount of read/write storage the callout needs. For example:
rw_interrupt:
    .long   4

patch_interrupt:
    add     a1,a1,a2
    j       ra
    sh      a3,0+LOW16(a1)

/*
 * Mask the specified interrupt
 */
CALLOUT_START(interrupt_mask_mips, rw_interrupt, patch_interrupt)
    /*
     * Input Parameters :
     *  a0 - syspage_ptr
     *  a1 - Interrupt Number
     * Returns:
     *  v0 - error status
     */

    /*
     * Mark the interrupt disabled
     */
    la      t3,0x1234(a0)       # get enabled levels addr (patched)
    li      t1, MIPS_SREG_IMASK0
    ....
CALLOUT_END(interrupt_mask_mips)
The rw_interrupt address as the second parameter tells the startup library that the routine needs four bytes of read/write storage (since the location contains the value 4). The startup library allocates space at the end of the system page and passes the offset to it as the rw_offset parameter of the patcher routine. The patcher routine then modifies the initial instruction of the callout to the appropriate offset. While the callout is executing, the t3 register will contain a pointer to the read/write storage. The question you're undoubtedly asking at this point is: Why is the CALLOUT_START parameter the address of a location containing the amount of storage? Why not just pass the amount of storage directly?
That's a fair question. It's all part of a clever plan. A group of related callouts may want to have access to shared storage so that they can pass information among themselves. The library passes the same rw_offset value to the patcher routine for all routines that share the same address as the second parameter to CALLOUT_START. In other words:
CALLOUT_START(interrupt_mask_mips, rw_interrupt, patch_interrupt)
    ....
CALLOUT_END(interrupt_mask_mips)

CALLOUT_START(interrupt_unmask_mips, rw_interrupt, patch_interrupt)
    ....
CALLOUT_END(interrupt_unmask_mips)

CALLOUT_START(interrupt_eoi_mips, rw_interrupt, patch_interrupt)
    ....
CALLOUT_END(interrupt_eoi_mips)

CALLOUT_START(interrupt_id_mips, rw_interrupt, patch_interrupt)
    ....
CALLOUT_END(interrupt_id_mips)
will all get the same rw_offset parameter value passed to patch_interrupt() and thus will share the same read/write storage.
The exception that proves the rule
To clean up a final point, the interrupt_id() and interrupt_eoi() routines aren't called as normal routines. Instead, for performance reasons, the kernel intermixes these routines directly with kernel code — the normal function-calling conventions aren't followed. The callout_interrupt_*.s files in the startup library will have a description of what registers are used to pass values into and out of these callouts for your particular CPU. Note also that you can't return from the middle of the routine as you normally would. Instead, you're required to “fall off the end” of the code.
PPC chips support
The PPC startup library has been modified in order to:
- minimize the number of locations that check the PVR SPR.
- minimize duplication of code.
- make it easier to leave out unneeded chip-dependent code.
- make it easier to add support for new CPUs.
- remove the notion of a PVR split into “family” and “member” fields.
- automatically take care of as much CPU-dependent code as possible in the library.
The new routines and data variables all begin with ppcv_ for PPC variant, and are separated out into one function or data variable per source file. This separation allows maximum code reuse and minimum code duplication.
There are two new data structures:
- ppcv_chip
- ppcv_config
The first is:
struct ppcv_chip {
    unsigned short  chip;
    uint8_t         paddr_bits;
    uint8_t         cache_lsize;
    unsigned short  icache_lines;
    unsigned short  dcache_lines;
    unsigned        cpu_flags;
    unsigned        pretend_cpu;
    const char      *name;
    void            (*setup)(void);
};
Every supported CPU has a statically initialized variable of this type (in its own source file, e.g. <ppcv_chip_603e7.c>).
If the chip field matches the upper 16 bits of the PVR register, this ppcv_chip structure is selected and the ppcv global variable in the library is pointed at it. Only the upper 16 bits are checked, so you can use constants like PPC_750 defined in <ppc/cpu.h> when initializing the field.
The paddr_bits field is the number of physical address lines on the chip, usually 32.
The cache_lsize field is the number of bits in a cache line size of the chip, usually 5, but sometimes 4.
The icache_lines and dcache_lines are the number of lines in the instruction and data cache, respectively.
The cpu_flags field holds the PPC_CPU_* flag constants from <ppc/syspage.h> that are appropriate for this CPU. Note that the older startups sometimes left out flags like PPC_CPU_HW_HT and depended on the kernel to check the PVR and turn them on if appropriate. This is no longer the case. The kernel will continue to turn on those bits if it detects an old style startup, but will NOT with a new style one.
The pretend_cpu field goes into the ppc_kerinfo_entry.pretend_cpu field of the system page and as before, it's used to tell the kernel that even though you don't know the PVR, you can act like it's the pretend one.
The name field is the string name of the CPU that gets put in the cpuinfo section.
The setup function is called when a particular ppcv_chip structure has been selected by the library as the one to use. It continues the library customization process by filling the second new structure.
The second data structure is:
struct ppcv_config {
    unsigned    family;
    void        (*cpuconfig1)(int cpu);
    void        (*cpuconfig2)(int cpu);
    void        (*cpuinfo)(struct cpuinfo_entry *cpu);
    void        (*qtime)(void);
    void        *(*map)(unsigned size, paddr_t phys, unsigned prot_flags);
    void        (*unmap)(void *);
    int         (*mmu_info)(enum mmu_info info, unsigned tlb);
    //NYI: tlb_read/write
};
There's a single variable defined of this type in the library, called ppcv_config. The setup function identified by the selected ppcv_chip is responsible for filling in the fields with the appropriate routines for the chip. The variable is statically initialized with a set of do-nothing routines, so if a particular chip doesn't need something done in one spot (typically the cpuconfig[1/2] routines), the setup routine doesn't have to fill anything in.
The general design rule is that these routines should perform whatever chip-specific actions they can, as long as those actions aren't also board-specific. For example, the old startup main() functions would sometimes turn off data translation, since some IPLs turned it on. With the new startups this is handled automatically by the library. On the other hand, both the old and new startups call ppc700_init_l2_cache() manually in main(), since the exact bits to put in the L2CR register are board-specific. The routines in the libraries should be modified to work with the IPL and initialize the CPU properly, rather than modifying the board-specific code to hack around it (e.g. the aforementioned disabling of data translation).
The setup routine might also initialize a couple of other freestanding variables that other support routines use to avoid them having to check the PVR value again (e.g. see the ppc600_set_clock_freqs() and ppcv_setup_7450() functions for an example).
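A setup routine might look like the following sketch for a hypothetical 700-series board. Only ppcv_cpuconfig2_600/ppcv_cpuconfig2_700 are mentioned elsewhere in this chapter; the other routine names and the PPC_FAMILY_600 constant are assumptions following the same naming pattern:

```c
/* Hypothetical setup routine: point ppcv_config at the chip-appropriate
 * support routines.  Fields left alone keep their do-nothing defaults. */
static void ppcv_setup_xyz(void)
{
    ppcv_config.family     = PPC_FAMILY_600;        /* assumed constant */
    ppcv_config.cpuconfig2 = ppcv_cpuconfig2_700;   /* mentioned above */
    ppcv_config.cpuinfo    = ppcv_cpuinfo_700;      /* assumed name */
    ppcv_config.qtime      = ppc_dec_init_qtime;    /* decrementer qtime */
}
```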
The new startup (and kernel, when used with a new startup) no longer depends on the PVR to identify the chip family. Instead the “family” field is filled in with a PPC_FAMILY_* value from <ppc/syspage.h>. This is transferred to the ppc_kerinfo_entry.family field on the system page, which the kernel uses to verify that the right version of procnto is being used.
If the kernel sees a value of PPC_FAMILY_UNKNOWN (zero) in the system page, it assumes that an old style startup is being used and will attempt to determine the family (and cpuinfo->flags) fields on its own. DO NOT USE that feature with new startups.
Fill in the ppcv_config.family and ppcv_chip.cpu_flags fields properly. The cpuconfig1 routine is used to configure the CPU for use in startup, and is called early, before main(). For example, it makes sure that instruction and data translation is turned off, the exception table is pointed at low memory, etc. It's called once for every CPU in an SMP system, with the cpu parm indicating the CPU number being initialized.
The cpuconfig2 routine is called just before startup transfers control to the first bootstrap executable in the image file system. It configures the CPU for running in the bootstrap environment, e.g. turning on CPU-specific features such as HID0 and HID1 bits. Again it's called once per CPU in an SMP system with the cpu parm indicating which one.
The cpuinfo routine is called by init_one_cpuinfo() to fill in the cpuinfo_entry structure for each CPU. The qtime routine is called by init_qtime() to set up the qtime syspage section.
The map and unmap routines used to create/delete memory mappings for startup and callout use, are called by:
- startup_map_io
- startup_map_memory
- startup_unmap_io
- startup_unmap_memory
- callout_map_io
- callout_map_memory
There's one more data variable to mention. This is ppcv_list, which is a statically initialized array of pointers to ppcv_chip structures. The default version of the variable in the library has a list of all the ppcv_chip variables defined by the library so, by default, the library is capable of handling any type of PPC chip.
By defining a ppcv_list variable in the board-specific directory and adding only the ppcv_chip_* variable(s) that can be used with that board, all the chip-specific code for the processors that can't possibly be there will be left out.
For example, the new shasta-ssc startup with the default ppcv_list is about 1 KB bigger than the old version. By restricting the ppcv_list to only ppcv_chip_750, the new startup drops to 1 KB smaller than the original.
Adding a new CPU to the startup library
For a CPU called xyz, create a <ppcv_chip_xyz.c> and in it put an appropriately initialized struct ppcv_chip ppcv_chip_xyz variable. Add the ppcv_chip_xyz variable to the default ppcv_list (in <ppcv_list.c>).
If you were able to use an already existing ppcv_setup_*() function for the ppcv_chip_xyz initialization, you're done. Otherwise, create a <ppcv_setup_xyz.c> file with the properly coded ppcv_setup_xyz() function in it (don't forget to add the prototype to <cpu_startup.h>).
If you were able to use already existing ppcv_* routines in the ppcv_setup_xyz() function, you're done. Otherwise, create the routines in the appropriate <ppcv_*_xyz.c> files (don't forget to add the prototype(s) to <cpu_startup.h>). When possible, code the routines in an object-oriented manner, calling already existing routines to fill more generic information, e.g. ppcv_cpuconfig2_700() uses ppcv_cpuconfig2_600() to do most of the work and then it just fills in the 700 series-specific info.
With the new design, the following routines are now deprecated (and they spit out a message to that effect if you call them):
- ppc600_init_features(), ppc600_init_caches(), ppc600_flush_caches()
- Handled automatically by the library now.
- ppc7450_init_l2_cache()
- Use ppc700_init_l2_cache() instead. | http://www.qnx.com/developers/docs/6.4.1/neutrino/building/startup.html | crawl-003 | refinedweb | 11,297 | 53.1 |
136
Joined
Last visited
Community Reputation163 Neutral
About chosenkill6
- RankMember
Personal Information
- LocationCanada
- Programs running in the system tray need to use Win32 or a library like Qt. Programs that don't require any visuals serve many purposes. If you're on a Windows operating system, do Ctrl+Shift+Esc and switch over to the 'Processes' and 'Services' tab. How many of those programs do you see actually visible on-screen? Many programs you can run from the command line, and some are very powerful. Microsoft, Linux, and Mac all have many programs built in that don't have any interface. ImageMagick is a famous one (downloadable) that lets you manipulate image files in-bulk. Many webservers have it pre-installed, but I use it on rare occasions on my Windows machine. Here's another example of one: Open a command prompt (Start -> Run -> cmd.exe), and type in 'ping' to have your computer send a network packet to Google's webservers, and measure the amount of time it takes to go there and back. 'ping' is a program built into almost every operating system. Non-visual tools like these often do one thing, and do it well, and people chain the output and input of multiple "command-line" programs to run complex tasks on bulk files. Some can be dangerous to use, though, if you accidentally ask them to delete files you didn't want deleted - I almost did that the other day. Game-wise, these kinds of programs don't serve much purpose... but game servers don't have visual interfaces, and so don't need graphics APIs (They all use other kinds of APIs, though - like the built-in networking APIs most machines have, or file-access APIs, or etc...). Oh i understand now, I thought you meant they could be made with the standard C++ libraries which is what confused me haha. I figured I'd need some sort of API for it anyway. Thank you very much for the helpful information, I know what I need to do now :D
- The advantages of third party ones is that the code works on Windows, Mac, and Linux, and sometimes other platforms like Android, iPhone (with additional work required), instead of just working on Windows machines. Oftentimes, they're better designed than Win32 also (IMO), because Win32 has to remain backwards compatible with almost two decades of features that are deprecated, and also has to work across many different languages and so can't take advantage of certain language features. Other Microsoft APIs like DirectX do alot better, because each DirectX version has the freedom redesign itself from scratch. The more modern Microsoft APIs I don't have experience with, mostly because they either use different languages (like C#) or they require you to use Microsoft's tools, or they require non-standard dependencies on Microsoft Runtimes (like .NET). Maybe someone with experience with them can comment. I'm a huge Microsoft fan (love Microsoft Excel and Win7), but I don't like working with the Microsoft libraries I've interacted with so far - though I admittedly haven't tried too many. But really, your API depends on your goals. If you're wanting to make games, consider SDL 2.0 or SFML 2.0 for 2D hardware-accelerated graphics, and input and sound. If you want to make cross-platform desktop applications, I'd re-recommend Qt. If you're wanting to make services that run without anything visible onscreen, then pure C++ would work without a graphics API. C++ is a very huge language that takes several years to master. By all means continue to explore it! But when it comes to C++, it's often better to underestimate than to overestimate your capabilities. Or maybe I'm just misunderestimating your capabilities for you. The book I mentioned won't teach you all of C++ (no single book can), only C++'s standard library. You'll probably want to bookmark cplusplus.com, cppreference.com, and the notable C++ FAQ as go-to sources for information. Thanks for the detailed explanation! 
I think I will look into Qt and maybe some other 3rd party API for now because I do plan on releasing it for other platforms as well and the backwards compatibility seems like an unnecessary disadvantage I can overcome by simply using a more modern API. I've started reading on cplusplus.com already so for now I think I'll just get myself up to speed on C++11 and then start my application development. I'm curious about the part where you mentioned that programs that run in the background don't need a graphics API. What would I need for such a program? Just a simple program running in the system tray.
- That book looks exactly like something I am looking for, I will look into it for sure! Would it be better to use the Microsoft wrapper API? What is that called and how does it differ from third-party ones such as QT? I would prefer to stick with the official one by Microsoft unless there are specific advantages to using a third-party API. I made 2 games (Pong and Pacman) in C only but I learned C++ shortly after and started remaking my Pong using C++ and OOP, didn't finish the game but I'd say I was fairly sufficient in C++. I would like to continue with C++ but I was unsure which would be more relevant to Windows programming.
chosenkill6 posted a topic in For BeginnersI've been away from C/C++ programming for quite some time now, been doing a lot of work in Java (mostly making games) but I have made games in C in the past using SDL so it's not like I am learning a new language but I feel like if I just dove right back into it I would struggle. What would be the best way to get a quick refresher in C/C++ because I would like to start getting into some Windows API programming now. I looked up some C/C++ tutorials but they all start with very basics such as what a variable is etc. I would just like to get familiar with the syntax and language specifics quickly. What would be the best way to get started developing as fast as possible? Thanks!
chosenkill6 posted a topic in Your AnnouncementsVery original name.. I know.... I have just published my very first android game and very first published game as well! It isn't much but I am proud of it. You can download it here. Any feedback would be appreciated. Thanks!
chosenkill6 posted a topic in General and Gameplay ProgrammingIs it possible to create a java popup without a windows bar without the close, minimise buttons etc? I would like to create a java app that runs in the background but when certain conditions are met it will create a popup but I don't want a window but instead a small popup I can design. Any classes that Java provides to get me started would really help. Thanks
- Thanks for the java collections link! It worked :D Thanks for the help :D
- The array of students is created elsewhere and each student contains firstname, lastname, grade, age, email. I would like to list by first name but then have the appropriate lastname, grade etc with that first name. So if i have student[0] and it contains 'b' for every field and i have student[1] and it contains 'a' for every field i would like to have output: a a a a a b b b b b
- That is exactly what I am doing. I have an array of 20 students. Each read method takes a student object as a parameter and reads the appropriate information from a file for that student. How would I implement such a custom comparator?
chosenkill6 posted a topic in For Beginners
chosenkill6 posted a topic in Math and Physics?
- The analogy you presented, the child can have more children, kind of confuses me. If I am using recursion, I am making another call to the method I am in, so does that not mean that I am simply going back and forth between 2 children? The recursive function operates on each node, which calls the recursive function on all it's children, i.e: if we void Func(Node N){ for(int i=0;i<N.Children.Count;i++) Func(N.Children[i]); return; } which allows you to operate on an indeterminable number of children. Made a small change for doing unnecessary work. That code has a side effect. You're returning a void method? What is the point of that. What you should do is say that once it returns successfully, you break out of the loop: if (coin[i] != toonie) { coin[i] = 0 + (int) (Math.random() * ((6 - 0) + 1)); checkCoins(coin, turns); break; } Once it returns successfully, you break out and avoid a side-effect of the method. (Yes, I know it's an unimportant side effect that doesn't actually affect anything, however consistency !) (I just want to point out that there are some non-elegant pieces of your code, such as using an int assigned to a random number to toonie. You could create a class with an state called isToonie class Coin { bool isToonie = false; } and then have your code make more sense (instead of: if (coin[i] != toonie), we get: if (coins[index].isToonie))). Cheers ! Thanks for the pointers! Fixed my code, works like a charm
- package mainPackage; public class RecursionCoin { final static int toonie = 6; static int[] coin = {0,0,0,0}; static int turns = 0; public static void main(String[] args) { for(int i = 0; i < 4; i++){ coin[i] = 0 + (int)(Math.random() * ((6 - 0) + 1)); } checkCoins(coin, turns); System.out.println(turns); } private static void checkCoins(int[] coin, int turns2) { turns = turns2 + 1; for(int i = 0; i < 4; i++){ if(coin[i] != toonie){ coin[i] = 0 + (int)(Math.random() * ((6 - 0) + 1)); checkCoins(coin, turns); } } } } I am going to try a few more problems on my own now just to make sure I fully understand this. Can someone just skim through my code, see if this makes sense? Thanks!
- Okay, so we could use some recursion: (Code is in JavaScript because JavaScript is basically universally easy to read for people who have a background in almost any standard programming language. That doesn't make it good, though ). function findSmallestNumber(currentNumber, currentFunctionCall, amountOfFactors) { if(currentFunctionCall > amountOfFactors) { return currentNumber; } else if (currentNumber % currentFunctionCall === 0) { ++currentFunctionCall; return findSmallestNumber(currentNumber, currentFunctionCall, amountOfFactors); } else if (currentNumber % currentFunctionCall !== 0) { ++currentNumber; currentFunctionCall = 1; return findSmallestNumber(currentNumber, currentFunctionCall, amountOfFactors); } } var amountOfFactors; do { amountOfFactors = prompt("How many factors are there (0 - ?)"); } while (amountOfFactors.isNaN); console.log(findSmallestNumber(1, 1, amountOfFactors)); (What's awesome about learning recursion is that this makes sense )! So, I tested this with 1-10 and it worked. However, try it with 1-20 (This link will bring you to a page with my code. Click the "run" button to see it in action!) You should get the error "Too much recursion". The compiler can't handle 25,000,000 function calls (I solved the code euler problem, obviously, so yeah, there's that many). So, we have to translate it into loops: function findSmallestNumber(amountOfFactors) { var currentNumber = 1; var currentFactor = 1; while(currentFactor <= amountOfFactors) { if(currentNumber % currentFactor === 0) { ++currentFactor; continue; } else if(currentNumber % currentFactor !== 0) { ++currentNumber; currentFactor = 1; continue; } } return currentNumber; } var amountOfFactors = 0; do { amountOfFactors = prompt("How many factors (1 - ?)", ""); } while (amountOfFactors.isNaN); console.log(findSmallestNumber(amountOfFactors)); This is the same function, using loops. We see that it works the same, however it will work for an (almost) infinite amount of number. 
(One more thing, don't try the script using a while() loop with the value "20". It will cause your browser to freeze because Javascript really isn't meant for this Because it's an interpreted language and it's not actually that useful ). Cheers ! Ah that makes sense! I think that is where I was struggling, failing to understand when to use which. Thanks for clearing that up :D
- That, in my opinion is a really bad problem to introduce recursion to. also, if you use Juliean's solution(which is probably the solution i'd come up with), their's the tiniest of tiny chances that you could overflow the stack because you never actually get the required number of toonies, the odds are astronomically small, but that solution does open up such possibility's. Your instructor should have presented tree based searching as an introduction to recursion, as this is where recursion is not only the best option, but is pratically the only option. for example, say you had this tree structure: (N = Node): N / | \ N N N / \ / | \ / \ N N N N N N N | / \ N N N | N each node has x number of children, and each child can also have any number of children, how would you reliable iterate over all the nodes? That is where recursion shines, however the problem you have can, and imo is safer/faster to solve using loops rather than recursion. edit: actually, as it is now, Juliean's solution will cause a crash since you can never pick a toonie(change the value of toonie to 6) The analogy you presented, the child can have more children, kind of confuses me. If I am using recursion, I am making another call to the method I am in, so does that not mean that I am simply going back and forth between 2 children? | https://www.gamedev.net/profile/187657-nitishk/?tab=reputation&app_tab=forums&type=received | CC-MAIN-2017-30 | refinedweb | 2,289 | 61.56 |
On Thu, Sep 22, 2016 at 9:22 AM, Stephen Finucane stephenfinucane@hotmail.com wrote:
Hey,
I recently published a flake8 plugin on GitHub:
In my experience, finding flake8 plugins is easier said than done and the best are found on the 'PyCQA' org on GitHub/GitLab. Would this project be something that the organization would be interested in moving under their umbrella? I'm thinking of making use of it in some OpenStack projects, but who knows...
Hi Stephen!
So, we generally try to take on projects that are already widely used and benefit from the a larger group's attention. If you'd really like the project to live in the PyCQA, we can accomodate that. The rest of us are already overloaded with commitments so we'd need a commitment from you that you are going to maintain this. We can create a team for you and make you team maintainer so you can add more people to the team as more contributors come along.
If your project is more geared towards OpenStack, you might instead want to include it in the OpenStack project namespace as a non-Big-Tent project. Hacking lives there and OpenStack is happy having it there. (Hint: I also work on hacking, although not as much as I've wanted to as of late.) They also hate "third-party" dependencies and will probably be happier (if it's geared towards them) if you would maintain it there.
Also, it looks as if you didn't closely follow the Flake8 documentation around building a Flake8 extension (...) which points out that you should add the "Framework :: Flake8" classifier for extensions to make it easier to find things. I haven't had time to update the discoverable Flake8 projects to use this universally but number of the more popular ones are using it.
Cheers, Ian | https://mail.python.org/archives/list/code-quality@python.org/message/26FZAYKWQ3KS5W7DW56IEPEB7K4CDR6Y/ | CC-MAIN-2022-05 | refinedweb | 310 | 68.4 |
Creating a new tool for a task is often straightforward—all the more so for a programming task, where programmers have an unparalleled ability to construct their own tools.
Since programmers often build task-specific tools, one way to make them more productive is to give them better tool-making tools. When tools take the form of program generators, this idea leads to libraries for creating languages that are directly extensible. Programmers may even be encouraged to think about a problem in terms of a language that would better support the task. This approach is sometimes called language-oriented programming.3
Racket is both a programming language and a framework for building programming languages. A Racket program can contain definitions that extend the syntax of the language for use later in the same program, and language extensions can be packaged as modules for use in multiple programs. Racket supports a smooth path from relatively simple language extensions to completely new languages, since a programming tool, like any other piece of software, is likely to start simple and grow as demands on the language increase.
As an example task, consider the implementation of a text-adventure game (also known as interactive fiction), where a player types commands to move around in a virtual world and interact with objects.
To make the game interesting, a programmer must populate the virtual world with places and things that have rich behavior. Most any programming language could implement this virtual world, but choosing the right language construct (that is, the right tool) to represent each game element is a crucial step in the development process.
The right constructs allow commands, places, and things to be created easily—avoiding error-prone boilerplate code to set up the world's state and connections—while also allowing a programming language's full power to implement behaviors.
In a general-purpose programming language, no built-in language construct is likely to be a perfect fit. For example, places and things could be objects, while commands could be implemented as methods. The game's players, however, don't call methods but instead type commands that have to be parsed and dynamically mapped to responses for places and things. Similarly, saving and loading a game requires inspecting and restoring the state of places and things, which is partly a matter of object serialization but also of setting variables to unmarshalled values (or else using an indirection through a dictionary for each reference from one object to another).
Some programming languages include constructs—such as overloading or laziness—that a clever programmer can exploit to encode a domain-specific language. The design of Racket addresses the problem more directly; it gives programmers tools to explicitly extend the programming language with new syntax. Some tasks require only a small extension to the core language, while others benefit from the creation of an entirely new language. Racket supports both ends of the spectrum, and it does so in a way that allows a smooth progression from one end to the other. As a programmer's needs or ambitions grow for a particular task, the programmer can take advantage of ever more of Racket's unified framework for language extension and construction.
The text-adventure example presented here illustrates the progression from a simple embedding in Racket to a separate domain-specific language (including IDE support for syntax coloring), explaining relevant Racket details along the way; no prior knowledge of Racket is necessary. Readers who prefer a more complete introduction to the language should consult The Racket Guide.1
The example is a "toy" in multiple senses of the world, but it is also a scale model of industry practice. Most every video-game developer uses a custom language, including the Racket-based language that is used to implement content for the Uncharted video-game series.2 Evidently, when billions of entertainment dollars are on the line, the choice of programming language matters—even to the point of creating new, special-purpose languages.
The World in Plain Racket
Our text adventure game contains a fixed set of places, such as a meadow, house, or desert, and a fixed set of things, such as a door, key, or flower. The player navigates the world and interacts with things using commands that are parsed as either one or two words: a single verb (that is, an intransitive verb, since it does not have a target object) such as help or look; or a verb followed by the name of a thing (that is, a transitive verb followed by a noun) such as open door or get key. Navigation words such as north or in are treated as verbs. A user can save the game using the save and load verbs, which work everywhere and prompt the user for a file name.
To implement a text-adventure game in Racket, you would start by declaring structure types for each of the three game elements shown in Figure 1.
Racket is a dialect of Lisp and a descendant of Scheme, so its syntax uses parentheses and a liberal grammar of identifiers (for example, transitive? is an identifier). A semicolon introduces a newline-terminated comment. Square brackets are interchangeable with parentheses but are used by convention in certain contexts, such as grouping a field name with modifiers. The #:mutable modifier declares a field as mutable, since fields are immutable by default.
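Rendered as code, the three structure declarations might look like the following sketch; the field names are assumptions based on the surrounding discussion:

```racket
;; A verb has a list of symbol aliases, a description string,
;; and a flag saying whether it takes an object.
(struct verb (aliases desc transitive?))

;; A thing has a name, mutable state, and an association list
;; mapping verbs to response functions.
(struct thing (name [state #:mutable] actions))

;; A place has a description, a mutable list of things present,
;; and an association list mapping verbs to response functions.
(struct place (desc [things #:mutable] actions))
```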
The first struct form in the code binds verb to a function taking one argument for each field and creating a verb instance. For example, you can define a south verb with alias s as:
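A sketch of that definition, assuming the verb structure's field order described above:

```racket
(define south
  (verb (list 'south 's)  ; aliases, as symbols
        "go south"        ; description shown to the player
        #f))              ; #f: the verb is intransitive
```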
Lisp and Racket programs tend to use strings for the text that is to be shown to an end user—for example, verb descriptions such as "go south". A symbol, written with a leading single quote (for example, 'south), is more typically used for an internal name, such as a verb alias.
Given the definition of south and a thing, flower, you could define a meadow place where the south verb moves the player to a desert place, as illustrated in Figure 2.
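A sketch of such a place definition, assuming the place fields described above and that flower and desert are defined elsewhere; the description string is invented for illustration:

```racket
(define meadow
  (place
   "You're standing in a meadow. There is a house to the north."
   (list flower)                        ; things present here
   (list
    (cons south (lambda () desert)))))  ; south moves the player to desert
```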
The list function creates a list, while cons pairs two values. The cons function usually pairs an element with a list to form a new list, but here cons is used to pair a verb with a function that implements the verb's response. The lambda form creates an anonymous function, which in this case expects zero arguments.
When a verb's response function produces a place, such as desert in the example, the game execution engine will move the player to the returned place. The game engine's support for saving and loading game state, meanwhile, requires a mapping between places and their names. (Places can be implemented as objects that can be serialized, but restoring a game requires both deserialization and updating Racket-level variables such as meadow.) The record-element! function implements mappings between names and places, as shown in Figure 3.
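One minimal way to implement record-element! is a pair of hash tables, so that names map to elements and back for saving and loading; the actual engine's representation may differ:

```racket
(define name->element (make-hasheq))
(define element->name (make-hasheq))

;; Register a place or thing under its source name, so that saving
;; and loading can refer to elements by name.
(define (record-element! name val)
  (hash-set! name->element name val)
  (hash-set! element->name val name))
```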
Things must be defined and registered in much the same way as places. Verbs must be collected into a list to be used by the game's command parser. Finally, the parsing and execution engine needs a set of verbs that work everywhere, each with its response function. All of those pieces form the interesting part of the game implementation, while the parsing and execution engine is a few dozen lines of static infrastructure. See the accompanying sidebar for links to the complete game implementation. The code needed to construct the virtual world is particularly verbose.
Syntactic Abstraction
Although the data-representation choices discussed previously are typical for a Racket program, a Racket programmer is unlikely to write the repetitive code that directly defines and registers places, since it includes so many boilerplate lists, conses, and lambdas. Instead, a Racket programmer would write
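The intended usage might look like this; the exact bracketing is an assumption, but it matches the pattern described next:

```racket
(define-place meadow
  "You're standing in a meadow. There is a house to the north."
  [flower]           ; the things in this place
  ([south desert]))  ; a verb followed by its response expression
```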
and would add a define-place form to Racket using a pattern-based macro. The simplest form of such a macro uses define-syntax-rule, as depicted in Figure 4.
The form immediately after define-syntax-rule is a pattern, and the form after the pattern is a template. A use of a macro that matches its pattern is replaced by the macro's template, modulo substitutions of pattern variables for their matches. The id, desc, thng, vrb, and expr identifiers in this pattern are pattern variables.
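Putting the pattern and template together, a sketch of the macro—assuming the place constructor and the record-element! registration function described earlier:

```racket
(define-syntax-rule
  (define-place id desc [thng] ([vrb expr]))
  (begin
    (define id
      (place desc
             (list thng)
             (list (cons vrb (lambda () expr)))))
    (record-element! 'id id)))  ; 'id becomes the place's source name
```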
Note that the define-place form cannot be a function. The desert expression after south is, in general, an expression whose evaluation must be delayed until the south command is entered. More significantly, the form should bind the variable meadow so that Racket expressions for commands can refer to the place directly. In addition, the variable's source name (as opposed to its value) is used to register the place in the table of elements.
The define-place macro so far matches exactly one thing in a place and exactly one verb and response expression. To generalize to any number of things, verbs, and expressions, you add ellipses to the pattern, as shown in Figure 5.
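A sketch of the generalized macro; each ellipsis allows zero or more repetitions of the subpattern before it:

```racket
(define-syntax-rule
  (define-place id desc [thng ...] ([vrb expr ...] ...))
  (begin
    (define id
      (place desc
             (list thng ...)
             (list (cons vrb (lambda () expr ...)) ...)))
    (record-element! 'id id)))
```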
Verbs are slightly trickier, because you want to make simple verbs especially compact to specify, and you need one kind of pattern for intransitive verbs and another for transitive verbs. The following example illustrates the target syntax:
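The example is reconstructed here from the description that follows; the exact surface syntax of the original figure is an assumption, with = introducing a preferred description and _ marking a transitive verb:

```racket
(define-verbs all-verbs
  [quit]                         ; intransitive, no aliases
  [north (n) = "go north"]       ; alias n, description "go north"
  [knock _]                      ; transitive, no aliases
  [get _ (grab take) = "take"])  ; transitive, two aliases, description
```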
This example defines four verbs: quit as an intransitive verb with no aliases; north as an intransitive verb with alias n and a preferred description "go north"; knock as a transitive verb (as indicated by the underscore) with no aliases; and get as a transitive verb with aliases grab and take and preferred description "take". Finally, all of these verbs are collected into a list that is bound to all-verbs for use by the game's command parser.
Implementing the
define-verbs form requires a more general kind of pattern matching to support different shapes of verb specifications and to match = and _ as literals. An implementation of
define-verbs can defer the work of handling an individual verb to a
define-one-verb macro, which uses
define-syntax and
syntax-rules as shown in Figure 6.
The
define-place, define-thing, and
define-verb macros are examples of syntactic abstraction. They abstract over repeated patterns of syntax, so that a programmer can avoid boilerplate code and concentrate on the creation of interesting verbs, places, and things.
The revised game implementation, which has a compact and readable implementation of the virtual world, is available online (see the accompanying sidebar for links).
Syntactic Extension
A Racket programmer who is interested in writing a single text-adventure game would likely stop extending the language at this point. If the text-adventure engine should be reusable for multiple worlds, however, a Racket programmer is likely to take a step beyond syntactic abstraction to syntactic extension.
The difference between abstraction and extension is partly in the eye of the beholder, but extension suggests that functions such as place and record-element! can be kept private, while define-place is exported for use in the world-defining module with implementation-independent semantics. In the world-defining module, macros such as define-place have the same status as built-in forms such as define and lambda.
To make this shift, you can put the definitions written with define-verbs, define-place, define-thing, and define-everywhere in their own module, called world.rkt.
This module imports txtadv.rkt, which exports define-verbs as well as functions used in verb responses, such as save-game and load-game. Meanwhile, txtadv.rkt keeps private the structures and other functions that implement the world data types.
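The resulting shape of world.rkt might be as follows; the clause syntax inside define-verbs is an assumption:

```racket
#lang racket
(require "txtadv.rkt")  ; imports define-verbs, define-place, save-game, ...

(define-verbs all-verbs
  [quit])

;; define-everywhere, define-thing, and define-place forms
;; describing the rest of the world go here.
```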
The #lang racket line that starts each module indicates that the module is implemented in the racket language. In world.rkt, require additionally imports both the syntactic extensions and functions that are exported by the txtadv.rkt module.
Since macro binding is part of the Racket language, as opposed to being implemented as a separate preprocessor, macro bindings can work with module imports and exports the same as variable bindings. In particular, the definition of the define-verbs macro can see the verb constructor function because of the rules of lexical scope, while code in the world.rkt module cannot access verb directly because of the same scoping rules. Since a use of define-verbs in world.rkt expands to a use of verb, considerable language machinery is required for Racket to maintain lexical scope in the presence of macro expansion, but the result is that syntactic extension is easy for programmers.
The modular game implementation is available online (see the accompanying sidebar for links).
Module Languages
Although the world.rkt module cannot directly access constructor functions such as verb, the module still has access to all of the Racket language and, via require, any other module's exports. More constraints on world.rkt may be appropriate to ensure that the assumptions of txtadv.rkt are satisfied.
To exert further control, you can convert txtadv.rkt from a module that exports a language extension to one that exports a language. Then, instead of starting with #lang racket, world.rkt starts with
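Given the explanation that follows, that first line is presumably:

```racket
#lang s-exp "txtadv.rkt"
```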
For now, s-exp indicates that the language of world.rkt uses S-expression notation (that is, parentheses), while txtadv.rkt defines syntactic forms. Later, the S-expression and syntactic-form specifications are combined into a single name, analogous to #lang racket.
Along with changing world.rkt, you can change txtadv.rkt to export everything from racket:
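A sketch of the corresponding provide form; the list of the module's own exports is abbreviated:

```racket
(provide (all-from-out racket)
         define-verbs define-everywhere
         define-thing define-place
         save-game load-game)
```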
Instead of (all-from-out racket), you could use (except-out (all-from-out racket) require) to withhold the require form from world.rkt. Alternatively, instead of using all-from-out and then naming bindings to withhold, you could explicitly export only certain pieces from racket.
The exports of txtadv.rkt completely determine the bindings that are available in world.rkt—not only the functions, but also syntactic forms such as require or lambda. For example, txtadv.rkt could supply a lambda binding to world.rkt that implements a different kind of function than the usual lambda, such as functions with lazy evaluation.
More commonly, a module language can replace the
#%module-begin form that implicitly wraps the body of a module. Specifically, txtadv. rkt can provide an alternate
#%module-body that forces world.rkt to have a single
define-verbs form, a single
define-everywhere form, a sequence of
define-thing declarations, and a sequence of
define-place declarations; if world.rkt has any other form, it can be rejected as a syntax error. Such constraints can enforce restrictions to limit the power of the txtadv.rkt language, but they can also be used to provide domain-specific checking and error messages.
The game implemented with a txtadv.rkt language is available online (see the accompanying sidebar for url information). The
#%module-begin replacement in the implementation requires
define-verbs followed by
define-everywhere, then allows any number of other declarations. The module must end with a place expression, which is used as the starting location for the game.
Static Checks
The
define-verb, define-place, and
define-thing forms bind names in the same way as any other Racket definition, and each reference to a verb, place, or thing is a Racket-level reference to the defined name. This approach makes it easy for verb-response expressions, which are implemented in Racket, to refer to other things and places in the virtual world. It also means, however, that misusing a reference as a thing can lead to a run-time error. For example, the incorrect reference to desert as a thing in
triggers a failure only when the player enters room, and the game engine fails when trying to print the things within the place.
Many languages provide type checking or other static types to ensure the absence of certain runtime errors. Racket macros can implement languages with static checks, and macros can even implement language extensions that perform static checks within a base language that defers similar checks to runtime. Specifically, you can adjust
define-verb, define-place, and
define-thing to check certain references, such as requiring that the list of initial things in a place contain only names that are defined as things. Similarly, names used as verbs with responses can be checked to ensure they are declared as verbs, suitably transitive or intransitive.
Implementing static checks typically requires macros that are more expressive than pattern-matching macros. In Racket, arbitrary compile-time code can perform the role of expander for a syntactic form, because the most general form of a macro definition is
where transformer-expr is a compile-time expression that produces a function. The function must accept one argument, which is a representation of a use of the
id syntactic form, and the function must produce a representation of the use's expansion. In the same way that
define-syntax-rule is shorthand for
define-syntax plus
syntax-rules and a single pattern,
syntax-rules is shorthand for a function of one argument that pulls apart expressions of a certain shape (matching a pattern) and constructs a new expression for the result (based on a template).
The compile-time language that is used for transformer-expr can be different from the surrounding runtime language, but
#lang racket seeds the language of compile-time expressions with essentially the same language as for runtime expressions. New bindings can be introduced to the compile-time phase with
(require (for-syntax ....)) instead of just
require, and local bindings can be added to the compile-time phase through definitions wrapped with
begin-for-syntax.
For example, to check for verbs, things, and places statically,
begin-for-syntax can define a new
typed structure as illustrated in Figure 7 to associate the binding
gen-desert to the type
"place". The
#:property prop:procedure clause in the declaration of
typed makes a typed instance act as a function (for reasons explained later). The function takes one argument in addition to the implicit
self argument, but it ignores the argument and returns the
typed instance's
id.
You can use
typed by changing the
define-place form to bind a place name
id to a compile-time
typed record. At the same time,
define-place binds a generated name
gen-id to the runtime place record shown in Figure 8.
Since a typed record acts as a function, a use of
id expands to
gen-id, so
id still can be used as a direct reference to the place. At the same time, other macros can look at the
id binding and determine that its expansion will have the type
"place".
Other macros inspect types by using a
check-type macro. The implementation of
check-type is in the complete code online, but its essential feature is that it uses a compile-time function
syntax-local-value to obtain the compile-time value of an identifier; the check-type macro then uses
typed? to check whether the com-pile-time value is a type declaration, in which case it uses
typed-type to check whether the declared type is the expected one. As long as the type check passes,
check-type expands to its first argument.
The
define-place macro uses
check-typed to check whether the list of things at the place contains only names that are defined as things. The
define-place macro also uses
check-typed to check whether verbs that have responses in the
place are defined as intransitive verbs (see Figure 9).
The
define-one-verb macro must change to similarly declare each verb as either type
"transitive verb" or
"intransitive verb". The
define-thing macro changes to declare its binding as a
"thing", and it checks that each handled verb is defined as a
"transitive verb".
Racket macros can implement languages with static checks, and macros can even implement language extensions that perform static checks within a base language that defers similar checks to runtime.
See the sidebar for the online availability of the code for the game with static checks.
The implementation of
check-form uses
syntax-case, which provides the pattern-matching functionality of
syntax-rules, but pairs each pattern with an expression rather than a fixed template.
New Syntax
A Racket programmer who defines a custom text-adventure language for other Racket programmers is especially likely to stop at this point. If the text-adventure language is to be used by others who are less familiar with Racket, however, a different notation may be appropriate. For example, others may prefer a notation such as the following from world.rkt:
In this notation, instead of forms such as
define-verbs and
define-everywhere, sections of the program are introduced by tags such as
===VERBS=== and
===EVERY-WHERE===. Names in the
===VERBS=== section implicitly define verbs, listing aliases afterward through a comma-separated sequence followed by an optional description of the verb. Similarly, each name in the
===EVERYWHERE=== section implicitly defines the response to a verb; the responses are still written as Racket expressions, but they could be in any alternate notation, if desired. Each thing and place is defined by its own subsection, such as
—cactus—, with per-object verb responses in the same way as in
===EVERYWHERE===.
Non-S-expression syntax is enabled in world.rkt by starting with
#lang reader "txtadv-reader.rkt" instead of
#lang s-exp "txtadv.rkt". The
reader language constructor, unlike the
s-exp language constructor, defers parsing of the program's text to an arbitrary parsing function that is exported by the named module, which in this case is txtadv-reader.rkt. The parser from txtadv-reader.rkt is responsible for processing the rest of the text and converting it into S-expression notation, including the introduction of txtadv.rkt as the module language for the parsed world.rkt module.
More precisely, a reader function parses input into a syntax object, which is like an S-expression that is enriched with lexical-context and source-location information. It also acts as the representation of code for macro-transformer arguments and results. The syntax-object abstraction provides a clean separation of character-level parsing and tree-structured macro transformations. The source-location part of a syntax object automatically connects the result of macro expansion back to the original source; if a runtime error occurs in the code generated from world.rkt, then the error can point back to the relevant source.
The game code with nonparentheses syntax is available online (see the sidebar for urls).
The parser in txtadv-reader.rkt is implemented in an especially primitive way with regular expressions. The Racket distribution includes better parsing tools such as Lex- and Yacc-style parser generators.
IDE Support
One of the benefts of S-expression notation is that a programming environment's functionality adapts easily to syntactic extension, since syntax coloring and parentheses matching can be independent of macro expansion. Some of those benefts are intact with the new syntax for describing a world, since the parser keeps source locations with identifiers and since the code ultimately expands to Racket-level binding forms. For example, the Check Syntax button in DrRacket can automatically draw arrows from the binding instance of cactus to each bound use of cactus.
DrRacket needs more help from the language implementer for IDE features, such as syntax coloring, that depend on the character-level syntax of the language. Filling in this piece of the sample text-adventure language takes two steps:
- Install the language's reader as a
txtadvlibrary collection instead of relying on a relative path such as txtadv-reader.rkt. Moving to the namespace of library collections allows DrRacket and the program to agree on which language is being used (without requiring project-style configuration of the IDE).
- Add a function to the
txtadvreader module that identifies additional support for the language, such as a module that implements on-the-fly syntax coloring. Again, since DrRacket and the module use the same specification of the module's language, the syntax color can be precisely tailored to the module's language and content.
The code for the game with a DrRacket plug-in for syntax coloring is available online. You will find links in the accompanying sidebar.
This plug-in colors the program according to the game language's syntax instead of Racket's default rules, highlighting lexical syntax errors in red.
More Languages
The source code of the Racket distribution includes dozens of unique
#lang lines. The most common is
#lang racket/base, a stripped-down variant of
#lang racket. Other common lines include
#lang scribble/manual for documentation sources,
#lang racket/unit for externally linkable components,
#lang scheme for legacy modules, and
#lang setup/infotab for library metadata. Most Racket languages use S-expression notation, but
scribble/manual is a notable exception; even parentheses-loving Racketeers concede that an S-expression is a poor notation for documentation prose.
Different languages in the Racket distribution exist for different reasons, and they use Racket's language-creation facilities to different degrees. Racket developers do not create new languages lightly, but the benefits of a new language sometimes outweigh the cost of learning a language variant. These benefits are as readily available to Racket users as to the core Racket developers.
Racket's support for S-expression languages and language extensions is particularly rich, and the examples in this article only scratch the surface of that toolbo x. Racket's toolbox for non-S-expression syntax is still evolving, especially with respect to composable parsers and language-triggered IDE plug-ins. Fortunately, Racket's
#lang protocol moves most of the remaining work out of the core system and into libraries. This means that Racket users are as empowered as core Racket developers to develop improved syntax tools.
Related articles
on queue.acm.org
DSL for the Uninitiated
Debasish Ghosh
The World According to LINQ
Erik Meijer
OCaml for the Masses
Yaron Minsky
1. Flatt, M., Findler, R.B. PLT. 2011. The Racket Guide;.
2. Liebgold, D. Functional mzScheme DSLs in game development. Presented at Commercial Users of Functional Programming (2011)
3. Ward, M. Language-oriented programming. Software—Concepts and Tools 15, 4 (1994), 147–161.
Figure 1. Distribution of value for the iPhone based on Ken Kraemer's research.
Figure 2. Example place definition.
Figure 3. Registering game element definitions.
Figure 4. The
define-place macro.
Figure 5. The generalized
define-place and
define-thing macros.
Figure 6. The
define-one-verb and
define-everywhere macros.
Figure 7. The
typed compile-time structure.
Figure 8. The revised
define-place macro with type declaration.
Figure 9. The revised
define-place macro with type checking. | https://www.tefter.io/bookmarks/68965/readable | CC-MAIN-2019-47 | refinedweb | 4,454 | 51.78 |
Using the Image Class #
The most important class in the Python Imaging Library is the Image class, defined in the module with the same name. You can create instances of this class in several ways; either by loading images from files, processing other images, or creating images from scratch.
To load an image from a file, use the open function in the Image module.
>>> import Image >>> im = Image.open("lena.ppm")
If successful, this function returns an Image object. You can now use instance attributes to examine the file contents.
>>> print im.format, im.size, im.mode PPM (512, 512) RGB
The format attribute identifies the source of an image. If the image was not read from a file, it is set to None. The size attribute is a 2-tuple containing width and height (in pixels). The mode attribute defines the number and names of the bands in the image, and also the pixel type and depth. Common modes are “L” (luminance) for greyscale images, “RGB” for true colour images, and “CMYK” for pre-press images.
If the file cannot be opened, an IOError exception is raised.
Once you have an instance of the Image class, you can use the methods defined by this class to process and manipulate the image. For example, let’s display the image we just loaded:
>>> im.show()
(The standard version of show is not very efficient, since it saves the image to a temporary file and calls the xv utility to display the image. If you don’t have xv installed, it won’t even work. When it does work though, it is very handy for debugging and tests.)
The following sections provide an overview of the different functions provided in this library.
Reading and Writing Images #
The Python Imaging Library supports a wide variety of image file formats. To read files from disk, use the open function in the Image module. You don’t have to know the file format to open a file. The library automatically determines the format based on the contents of the file.
To save a file, use the save method of the Image class. When saving files, the name becomes important. Unless you specify the format, the library uses the filename extension to discover which file storage format to use.
import os, sys import Image for infile in sys.argv[1:]: f, e = os.path.splitext(infile) outfile = f + ".jpg" if infile != outfile: try: Image.open(infile).save(outfile) except IOError: print "cannot convert", infile
A second argument can be supplied to the save method which explicitly specifies a file format. If you use a non-standard extension, you must always specify the format this way:
import os, sys import Image size = 128, 128 for infile in sys.argv[1:]: outfile = os.path.splitext(infile)[0] + ".thumbnail" if infile != outfile: try: im = Image.open(infile) im.thumbnail(size) im.save(outfile, "JPEG") except IOError: print "cannot create thumbnail for", infile
It is important to note that the library doesn’t decode or load the raster data unless it really has to. When you open a file, the file header is read to determine the file format and extract things like mode, size, and other properties required to decode the file, but the rest of the file is not processed until later.
This means that opening an image file is a fast operation, which is independent of the file size and compression type. Here’s a simple script to quickly identify a set of image files:
import sys import Image for infile in sys.argv[1:]: try: im = Image.open(infile) print infile, im.format, "%dx%d" % im.size, im.mode except IOError: pass
Cutting, Pasting and Merging Images #))
Note that for a single-band image, split returns the image itself. To work with individual colour bands, you may want to convert the image to “RGB” first.
Geometrical Transforms #.
Colour Transforms #).
Image Enhancement #
The Python Imaging Library provides a number of methods and modules that can be used to enhance images.
Filters #
The ImageFilter module contains a number of pre-defined enhancement filters that can be used with the filter method.
import ImageFilter out = im.filter(ImageFilter.DETAIL)
Point Operations # = Image.merge(im.mode, source).
Enhancement #
For more advanced image enhancement, you can")
Image Sequences #...
Postscript Printing #()
More on Reading Images #)
Controlling the Decoder #. | http://effbot.org/imagingbook/introduction.htm | crawl-001 | refinedweb | 726 | 66.23 |
Keystroke Handling
Keystroke handling is generic. Any component inherited from XulElement can handle the key event in the same way.
ENTER and ESC
To handle ENTER key pressing, you can to listen to the event:
- onOK (notice O and K are both in upper case).
To handle ESC key pressing, you can to listen to the event:
- onCancel
For example:
<grid id="form" apply="org.zkoss.reference.developer.uipattern.KeystrokeComposer"> <rows> <row>Username: <textbox id="username"/> </row> <row>Password: <textbox id="password" type="password"/> </row> <row> <button label="Login" forward="form.onOK"/> <button label="Reset" forward="form.onCancel"/> </row> </rows> </grid>
Then, you could implement a composer as follows.
package org.zkoss.reference.developer.uipattern; import org.zkoss.zk.ui.Component; import org.zkoss.zk.ui.select.SelectorComposer; import org.zkoss.zk.ui.select.annotation.*; import org.zkoss.zul.Textbox; public class KeystrokeComposer extends SelectorComposer<Component> { @Wire private Textbox username; @Wire private Textbox password; @Listen("onOK = #form") public void onOK() { //handle login System.out.println("ok"); } @Listen("onCancel = #form") public void onCancel() { username.setValue(""); password.setValue(""); } }
Notice that the
onOK and
onCancel events are sent to the nearest ancestor of the component that has the focus. In other words, if you press ENTER in a textbox, then ZK will look up the textbox, its parent, its parent's parent and so on to see if any of them has been registered a listener for
onOK. If found, the event is sent to it. If not found, nothing will happen.
Also notice that, if a button gains the focus, ENTER will be intercepted by the browser and interpreted as pressed. For example, if you move the focus to the Reset button and press ENTER, you will receive
onCancel rather than
onOK (since
onClick will be fired and it is converted to
onCancel because of the forward attribute specified).
Control Keys
To handle the control keys, you have to specify the keystrokes you want to handle with XulElement.setCtrlKeys(String). Then, if any child component gains the focus and the user presses a keystroke matches the combination, the
onCtrlKey will be sent to the component with an instance of KeyEvent.
Like ENTER and ESC, you could specify the listener and the
ctrlKeys property in one of the ancestors. ZK will search the component having the focus, its parent, its parent's parent and so on to find if any of them specifies the
ctrlKeys property that matches the keystroke.
For example,
<vbox ctrlKeys="@c^a#f10^#f3" onCtrlKey="doSomething(event.getKeyCode())"> <textbox/> <datebox/> </vbox>
As shown, you could use KeyEvent.getKeyCode() to know which key was pressed.
Allowed Control Keys
Document-level Keystrokes
Since 5.0.6
When you set the library property org.zkoss.zk.ui.invokeFirstRootForAfterKeyDown.enabled to true. If there is no widget gaining a focus when an end-user presses a keystroke, ZK can forward a key event to the first root component. For example, when visiting the following page, the div component will receive the onOK event.
<div onOK="doSomething(event)" ctrlKeys="^K" onCtrlKey="doSomething(event)" > press enter key or ctrl+k. <zscript><![CDATA[ public void doSomething(KeyEvent e){ Clients.showNotification(e.getKeyCode()+""); } ]]></zscript> </div>
In other words,
doSomething() will be called if the user presses ENTER, even though no widget ever gains the focus.
Nested Components
Keystrokes are propagated up from the widget gaining the focus to the first ancestor widget that handles the keystroke. For example,
<div onOK="doFirst()"> <textbox id="t1"/> <div onOK="doSecond()"> <textbox id="t2"/> </div> </div>
Then,
doSecond() is called if
t2 is the current focus, and
doFirst() is called if
t1 has the focus.
Key handling and onChange event
When a onChange listener alone is registered on a component, onChange will be triggered by blur events exclusively.
However, some key events will cause a check for change value and will fire a change event if necessary.
These key events are: onOK onCancel onCtrlkeys. If a listener for any of these events is registered and triggers, an onChange event calculation will be triggered, and an onChange event will be fired if the value of the control have changed.
Version History | https://www.zkoss.org/wiki/ZK_Developer's_Reference/UI_Patterns/Keystroke_Handling | CC-MAIN-2022-33 | refinedweb | 689 | 55.74 |
Related
Conceptual article
Understanding How To Render Arrays in React
Introduction
This article will teach you how to render an array in React and the best practices to use when rendering different elements within components.
One of the advantages of using a modern web language like JavaScript is that you can quickly automate the generation of HTML chunks.
Using something like a loop against an array or an object means you only have to write the HTML per item one time. Better yet, any future edits only have to be applied once.
Rendering Multiple Elements
To render multiple JSX elements in React, you can loop through an array with the
.map() method and return a single element.
Below, you loop through the
reptiles array and return a
li element for each item in the array. You can use this method when you want to display a single element for each item in the array:
function ReptileListItems() { const reptiles = ["alligator", "snake", "lizard"]; return reptiles.map((reptile) => <li>{reptile}</li>); }
The output will look like this:
Output- alligator - snake - lizard
In the next example, you will examine why you would want to add a unique
key to a list of elements rendered by an array.
Rendering a Collection of Elements Inside a Component
In this example, you loop through an array and create a series of list item components like the previous example.
To start, update the code to use the
<ol> component to hold the
<li> items. The
<ol> component will create an ordered list of the items:
function ReptileList() { const reptiles = ["alligator", "snake", "lizard"]; return ( <ol> {reptiles.map((reptile) => ( <li>{reptile}</li> ))} </ol> ); }
However, if you look at the console, you will see a warning that each child in an array or iterator should have a unique key.
The warning appears because when you attempt to render a collection inside a component, you must add a
key.
In React, a unique
key is used to determine which of the components in a collection needs to be re-rendered. Adding a unique
key prevents React from having to re-render the entire component each time there is an update.
In this step, you will render multiple elements in a component and add a unique
key. Update the code to include a
key on the list items to resolve the warning:
function ReptileList() { const reptiles = ['alligator', 'snake', 'lizard']; return ( <ol> {reptiles.map(reptile => ( <li key={reptile}>{reptile}</li> ))} </ol> ); }
Now that you’ve added in a
key, the warning will no longer be in the console.
In the next example, you will see how to render adjacent elements without encountering a common syntax error.
Rendering Adjacent Elements
In JSX, to render more than one element in a component you must add a wrapper around them.
In this example, you will first return a list of items without looping through an array:
function ReptileListItems() { return ( <li>alligator</li> <li>snake</li> <li>lizard</li> ); }
This will give you a hard error in the console:
To fix this error you need to wrap the block of
li elements in a wrapper. For a list you can wrap them in an
ol or
ul element:
function ReptileListItems() { return ( <ol> <li>alligator</li> <li>snake</li> <li>lizard</li> </ol> ); }
The adjacent
<li> elements are now wrapped in an enclosing tag,
<ol>, and you will no longer see an error.
In the next section, you will render a list in a wrapper using a
fragment component.
Rendering Adjacent Elements with
React.fragment
Prior to React v16.2, you could wrap a block of components in a
<div> element. This would lead to an application full of
divs, often referred to as “div soup”.
To fix this issue, React released a new component known as the
fragment component:
When you need to render a list inside an enclosing tag but want to avoid having to use a
div, you can use
React.Fragment instead:
function ReptileListItems() { return ( <React.Fragment> <li>alligator</li> <li>snake</li> <li>lizard</li> </React.Fragment> ); }
The rendered code will only include the
li elements and the
React.Fragment component will not appear in the code.
Also, note with
React.fragment there is no need to add a key.
You may notice writing
React.fragment is more tedious than adding a
<div>. Fortunately, the React team developed a shorter syntax to represent this component. You can use
<> </> in place of
<React.Fragment></React.Fragment>:
function ReptileListItems() { return ( <> <li>alligator</li> <li>snake</li> <li>lizard</li> </> ); }
Conclusion
In this article, you explored various examples of how to render arrays in a React application.
When rendering an element inside another component, you should use a unique
key and wrap your elements inside a wrapper element.
Depending on your use case, you can create simple lists wrapped in a
fragment component that does not need a key.
To learn more about best practices in React, follow the full How To Code in React.js series on DigitalOcean. | https://www.digitalocean.com/community/conceptual_articles/understanding-how-to-render-arrays-in-react | CC-MAIN-2020-34 | refinedweb | 832 | 53.51 |
Time difference between expected time and given time
Given the initial clock time h1:m1 and the present clock time h2:m2, denoting hour and minutes in 24-hours clock format. The present clock time h2:m2 may or may not be correct. Also given a variable K which denotes the number of hours passed. The task is to calculate the delay in seconds i.e. time difference between expected time and given time.
Examples :
Input: h1 = 10, m1 = 12, h2 = 10, m2 = 17, k = 2
Output: 115 minutes
The clock initially displays 10:12. After 2 hours it must show 12:12. But at this point, the clock displays 10:17. Hence, the clock must be lagging by 115 minutes. so the answer is 115.
Input: h1 = 12, m1 = 00, h2 = 12, m2 = 58, k = 1
Output: 2 minutes
The clock initially displays 12:00. After 1 hour it must show 13:00. But at this point, the clock displays 12:58. Hence, the clock must be lagging by 2 minutes. so the answer is 2.
Approach:
- Convert given time in h:m format to number of minutes. It is simply 60*h+m.
- Calculate both the computed time(adding K hours to the initial time).
- Find the difference in minutes which will be the answer.
Below is the implementation of the above approach.
C++
Java
Python3
# Python3 program to calculate clock delay
# Function definition with logic
def lagDuration(h1, m1, h2, m2, k):
lag, t1, t2 = 0, 0, 0
# Conversion to minutes
t1 = (h1 + k) * 60 + m1
# Conversion to minutes
t2 = h2 * 60 + m2
# Calculating difference
lag = t1 – t2
return lag
# Driver Code
h1, m1 = 12, 0
h2, m2 = 12, 58
k = 1
lag = lagDuration(h1, m1, h2, m2, k)
print(“Lag =”, lag, “minutes”)
# This code has been contributed
# by 29AjayKumar
C#
PHP
Lag = 2 minutes
Time Complexity: O(1)
Recommended Posts:
- Changing One Clock Time to Other Time in Minimum Number of Operations
- Program to find the time after K minutes from given time
- Add given n time durations
- Convert given time into words
- Print system time in C++ (3 different ways)
- Calculate speed, distance and time
- Java | Current date and time
- Convert timestamp to readable date/time in PHP
- Time Functions in Python | Set-2 (Date Manipulations)
- C++ Program to print current Day, Date and Time
- C program to print digital clock with current time
- Minimum time required to complete a work by N persons together
- Minimum time required to fill a cistern using N pipes
- Find time when hour and minute hands superimpose
- Find a time for which angle between hour and minute hands is given the, Akanksha_Rai, 29AjayKumar | https://www.geeksforgeeks.org/time-difference-between-expected-time-and-given-time/ | CC-MAIN-2019-22 | refinedweb | 444 | 67.99 |
Related content is a feature introduced in PeopleTools release 8.52, and PeopleSoft has enhanced it considerably in release 8.53.
Related content is a powerful feature introduced by PeopleSoft. With related content, you can bring up data from another component while working within a component, and the data from the second component will be related to the first, based on the values in the first component. For example, if you are in the voucher component and entering a vendor, the related content service will bring up the details of that vendor in a second frame on the same page. The second frame can appear on the right side of the main component or at the bottom, based on your configuration. Having said that, I will explain how to create related content for your own application. Believe me; it is much simpler than you anticipate.
Related content need not be a PeopleSoft component. You can also bring up external URLs, iScripts, PS Queries, pagelets, etc. For those you need to create a related content service, which I will explain in the next post.
Let’s see how you can establish a related content service that opens an existing content reference as a related page in a new frame.
Navigate to: People Tools > Portal > Related Content Service > Manage Related Content Service
Now click on the Assign Related Content to Application Page link at the bottom of the page.
It will open up all your menu navigation in a tree structure. From that tree, expand the required folders to reach the content reference or component to which your new related content should be applied.
Now on the landing page you have two grids, for component-level and page-level related content services. If you add anything at page level, the related content will be visible only on that page. On the other hand, if you choose to add at component level, the related content defined will be visible across all the pages of the component.
Now, to add another registered content reference as related content to the selected component, select the Service Type as Content Reference. Choose the Service type instead when you plan to use a query, iScript or external URL as related content; for that you need to define a service first, which will be covered in the next post.
Now click on the prompt button. It will again bring up the menu in a tree structure. From that tree you have to select the component or content reference that will be added as related content to the base component. You can then give a meaningful label to the related content you added and click on Configure.
This is the page where you define the relation between the base component and related component. This page will bring up all the search keys of the related component. You have options Required Flag to, Refresh Service on change and Is Value required. The refresh service on change property if checked will refresh the related component if you change the value in the base component. This is helpful where the keys of the related component is present on the base component as an editable field.
Now to map the fields of both component, you have Mapping type options. There are different mapping types. If you select Key Value as an option then in the prompt you will get the key fields of the base component. You select the corresponding field. If mapping type is selected as Fixed Value, then you will get an edit box to enter the value. If it is System Variable, then in the prompt you will get a list of system variables like %Date, %UserId and all.
On the bottom of the page you have another important parameter called service filter. This is used when you want to hide or unhide the related content from the base component based on the values\conditions on the base component. For this you need to create an application class.
The class should extent a base class and the linkVisible method should return Boolean. Based on the return value, the related content is made visible\hidden. I am pasting a sample piece of code.
import PT_RCF:ServiceFilter;
class RCFilter implements PT_RCF:ServiceFilter;
method linkVisible() Returns boolean;
end-class;
method linkVisible
/+ Returns Boolean +/
/+ Extends/implements PT_RCF:ServiceFilter.linkVisible +/
/* Replace it with your own logic or conditions. */
If %Mode = "A" Then
Return False;
Else
Return True;
End-If;
end-method;
In the Service Filter parameters of the configuration page, give the name of the application class you have created. If you want the related content to be displayed always then you can ignore this.
Once you have done this, click on ok and save the component. Believe me you are done with setting up related content service for your base component. If you want to add more components to your base component, just click on the + button on the grid and follow the same steps. You have two more pages on this configuration Configure Related Actions (which I will cover later) and Configure layouts. When you have more than one related content tagged to a page, then Configure Layouts page will come as handy. There you can define the order and structure in the content should be displayed on the base component which is pretty much simpler and you could figure it out yourself.
Hope it give you a better understanding on configuring related contents for a component. | http://www.peoplesoftjournal.com/2013/05/related-contents-in-peoplesoft.html | CC-MAIN-2017-39 | refinedweb | 935 | 62.27 |
I am challenging myself to write a Palindrome tester using only SL algorithms, iterators etc. I also want to program to work with raw strings. Below, I use the raw pointer
pal
copy_if
begin(pal)
end(pal + size)
#include <algorithm>
#include <iterator>
#include <cctype>
using namespace std;
bool isPalindrome(const char* pal) {
if (!pal) { return(false); }
int size = strlen(pal);
string pal_raw;
pal_raw.reserve(size);
// Copy alphabetical chars only (no spaces, punctuations etc.) into pal_raw
copy_if(pal, pal+size, back_inserter(pal_raw),
[](char item) {return isalpha(item); }
);
// Test if palindromic, ignoring capitalisation
bool same = equal(begin(pal_raw), end(pal_raw), rbegin(pal_raw), rend(pal_raw),
[](char item1, char item2) {return tolower(item1) == tolower(item2); }
);
return same;
}
int main(){
char pal[] = "Straw? No, too stupid a fad. I put soot on warts.";
bool same = isPalindrome(pal);
return 0;
}
copy_if()
equal()
!isalpha(item)
Iterators implement the concept of pointers, when it comes to C++ library algorithms. And, as you've discovered, C++ library algorithms that take iterators are perfectly happy to also take pointers. It's the same concept.
And when you already have pointers to begin with there is no iterator, of some kind, that the pointers can be converted to.
It is true that
std::begin(arr)
and
std::end(arr)
are defined on flat arrays. But, guess what: they return a pointer to the beginning and the end of the array, and not an iterator class of some kind.
However, you cannot use
std::begin(), and
std::end() because by the time you need to use it, inside your function, the array was already decayed to a
char *.
std::begin() and
std::end() works on real arrays, and not decayed pointers.
If you insist on using iterators, you should pass a
std::string to your palindrome function, instead of a
char *.
std::string implements a
begin() and an
end() method that return a
std::string::iterator, that you can use. | https://codedump.io/share/RIuFiSCbS5h3/1/get-an-iterator-from-a-char-pointer-c | CC-MAIN-2018-05 | refinedweb | 318 | 56.35 |
Why try functional programming, specifically with FC++?
Some of the advantages that functional programming has over other programming paradigms, such as OOP, are:
- Conciseness of code
- Programming that's free of side effects (no global/static variables manipulated by endless set/get routines)
- Fast prototyping
- FC++ provides a wealth of syntax and library functions that help make the transition smooth for Haskell programmers.
You get around the fact that C++ does not have any functional programming constructs by using libraries. FC++ is the best available open source implementation of a C++ based functional programming library that you can plug in with legacy C++ code. FC++ has been used in projects such as BSFC++, which is a library for functional bulk synchronous parallel programming in C++.
Download and installation
FC++ is available for download from SourceForge (see Resources). Unpacking the installed compressed (.zip) file reveals a collection of header files. Including the
prelude.h header in the user application sources is all you need to do to get started. Listing 1 shows you how to compile sources that use FC++ code. Note that this is a header-dependent installation, and no other libraries are involved.
Listing 1. Compiling sources that use FC++ code
g++ user_source1.cpp I<path to FC++ installation>
Note: All code in this article was tested using FC++ 1.5 with g++ 3.4.4.
Understanding CFunType
The functional programming paradigm lets functions accept other functions as
arguments. Clearly, the base versions of C/C++ don't allow for such syntax. To
circumvent this problem, FC++ functions are expressed as instances of classes that
follow certain coding conventions, and this is where
CFunType comes into play. C++ function objects are characterized by
the presence of the
operator ( ) in the class definition.
Listing 2 below is an example.
Listing 2. Typical use of C++ function objects
struct square { int operator( ) (int x) { return x * x; } }; square sqr1; int result = sqr1(5);
The problem with the implementation in Listing 2 is that, in mathematical terms, the function type for
sqr1 is
int —> int, but the C++ type for
sqr1 is
struct square. FC++ introduces the template
CFunType, which is used for encoding the type signature information. The last argument in
CFunType is the return type of the function, and the rest are input type information in the same order they appear in the function prototype. Listing 3 shows how square looks using
CFunType.
Listing 3. Using
CFunType to encode function signature for square operation
#include prelude.h struct square : public CFunType<int, int> { int operator( ) (int x) { return x * x; } }; square sqr1; int result = sqr1(5);
Listing 4 is another example that inserts an integer into a list and returns the updated list.
Listing 4. Using
CFunType to encode function signature for list manipulation
#include prelude.h struct Insert : public CFunType<int,List<int>,List<int> > { List<int> operator()( int x, const List<int>& l ) const { // code for determining where to insert the data goes here }
Note: The
List data type in Listing 4 is a predefined FC++ type described later in this article.
Transforming functions into objects
For functions to accept functions as input arguments, the functions must be somehow transformed into objects. FC++ defines the
FunN category of classes that are built on top of
CFunType and the
ptr_to_fun routine, which actually carries out the transformation. Take a look at Listing 5.
Listing 5. Using
ptr_to_fun to convert function to FC++ function object
int multiply(int m, int n) { return m*n; } Fun2<int, int, int> mult1 = ptr_to_fun (&multiply); int result = mult1(8, 9); // result equals 72
As in
CFunType, signature for
Fun2 implies that this object represents a function that accepts two integer inputs and returns an integer. Likewise, you may have
Fun3<int, double, double, string>, which represents a function that accepts one integer, two doubles and returns a string.
List and laziness basics
List manipulation is at the heart of functional programming. FC++ defines its own list data type, which is different from Standard Template Library (STL) list. FC++ lists are lazy. You can create lists with infinite elements in FC++, but they are only evaluated on a need basis. Listing 6 demonstrates what this means.
Listing 6. Defining and using lazy lists
List<int> numbers = enumFrom (33); List<int> even_and_greater_than_33 = filter (even, numbers); assert (take(4, even_and_greater_than_33)) = list_with (34, 36, 38, 40);
The
enumFrom,
filter,
even,
take, and
list_with elements are part of predefined functionality in FC++. In
Listing 6 above,
enumFrom returns an infinite list of numbers starting from 33. The
filter routine returns another infinite list with numbers that are even and greater than 33. Finally, the
take routine actually extracts the first four elements from this list. Clearly, none of the lists store an infinite list of numbers—the evaluation is strictly on a need basis.
Table 1 describes some of the typical functions used with lists in FC++.
Table 1. Functions that are used in conjunction with the FC++
Listing 7 is another example that shows how to create and display the contents of a list.
Listing 7. Creating a list, checking its contents, and displaying data
#include <iostream> #include prelude.h int main( ) { int x=1, y=2, z=3; List<int> li = cons(x,cons(y,cons(z,NIL))); // head also removes the 1st element from the list assert( head(li) == 1 ); // tail returns whatever is left of in the list, and list_with is // used to define small sized list assert( tail(li) == list_with(2,3) ); while( li ) { std::cout << li.head() << " "; li = li.tail(); } return 0; }
Note: In the creation of the
li list, the
cons routine adds elements to the front of a list; z, y and x are added in that order to create the final list.
Faster list implementation
FC++ 1.5 provides an additional variant of the
List data
structure called
OddList, which is defined in list.h.
OddLists have exactly the same interface as
Lists, but they are faster. All FC++ routines that operate on
List operate on
OddList, too. The efficiency in
OddList is gained by caching the next node in the list. Listing 8 sums up some of the subtler aspects of using
OddList.
Listing 8. Subtler aspects of using
OddList
OddList<int> odd1 = enumFrom (1); List<int> list1 = odd1.tail ( ); // always returns List<int>!! OddList<int> odd2 = enumFrom (1); List<int> list2 = odd2.delay ( ); // create a List<int> with same data as odd2 List<int> list3 = enumFrom (1); OddList<int> odd3 = list3.force ( ); // creates an OddList<int> with same data as list3
OddLists don't have support for STL style iterators that exist for
Lists. See Resources for further detail on
OddList implementation.
Creating your own filters
If you want to create your own filter in Listing 6 (for instance
all numbers that are divisible by 100 and greater than 33), all you need to do is define your own filter function and then call
ptr_to_fun to convert it into a function object. Listing 9 shows you how.
Listing 9. Using
CFunType to encode function signature for list manipulation
bool div_by_100 (int n) { return n % 100 ? false : true; } List<int> num = enumFrom(34); List<int> my_nums = filter( ptr_to_fun(&div_by_100), num);
Note that FC++
Lists and
filters are completely generic in nature and can accommodate any data type.
Next, look into two fundamental functional techniques: currying and composition.
Currying
Currying is a functional programming technique that binds a subset of some function's arguments to fixed values, thus creating new functions. Listing 10 is an example that curries the
f function.
Listing 10. Using currying to create new functions
int multiply(int m, int n) { return m * n; } Fun2<int, int, int> f2 = ptr_to_fun (&multiply); Fun1<int, int> f1 = curry2 (f2, 9); std::cout << f1(4) << std::endl; // equivalent to multiply(9, 4) Fun1<int, int> f1_implicit = f2(9); std::cout << f1_implicit(4) << std::endl; // same as f1(4)
The predefined
curry2 routine binds the first argument of
f2 to
9. FC++ 1.5 provides
curry1,
curry2, and
curry3 operators that fix the first
N arguments to specific values. Additionally, FC++ also defines the bind routines to create new functions that prefix values to specific arguments of existing functions. For example,
bind2and3of3 (f, 8, 9) is equivalent to
f(x, 8, 9) where f(x, y, z) is a 3-input function. Yet another interesting way of specializing arguments is to use an underscore (
_). For instance,
greater (_, 10) is the same as f(x) = (x > 10). Note that greater is predefined in FC++. Listing 11 provides some more examples of currying.
Listing 11. More currying examples
List<int> integers = enumFrom (1); List<int> int_gt_100 = filter(greater(_, 100), integers); // This list will add 3 to all elements of integers. List<int> plus_3 = map (plus(3), integers);
Listing 12 shows a code snippet that displays all the factors of a number, including the number itself.
Listing 12. Displaying all the factors of a number
#include "prelude.h" using namespace fcpp; #include <iostream> using namespace std; bool divisible( int x, int y ) { return x%y==0; } struct Factors : public CFunType<int,OddList<int> > { OddList<int> operator()( int x ) const { return filter( curry2(ptr_to_fun(&divisible),x), enumFromTo(1,x) ); } } factors; int main() { OddList<int> odd = factors(20); while (odd) { cout << head(odd) << endl; odd = tail(odd); } return 0; }
The key to understanding Listing 12 lies in this snippet:
return filter( curry2(divisible,x), enumFromTo(1,x) );. You are creating a filter for the list returned by
enumFrom(1, 20) such that all numbers that perfectly divide 20 form a part of the final list. The
curry2 routine binds 20 to the first argument of the
divisible function. Note that
ptr_to_fun makes
divisible a function object that can be passed as an argument to
curry2.
Composition
Functional programming produces new functionality by combining existing code. The
compose ( ) operator composes two unary functions,
f(x) and
g(x), to yield a new function,
h(x), such that h(x) = f(g(x)). For example,
compose (head, tail) on a list returns the second element in the list. This is functional coding in its proper sense;
g(x) serves as the argument to
f(x). Listing 13, obtained from "Functional Programming with the FC++ Library" (see Resources), is an example that uses composition.
Listing 13. Using
compose and
tail to obtain the second element of a list
std::string s=foo, t=bar, u=qux; List<std::string> ls = cons(s, cons(t, cons(u, NIL))); ls = compose(tail, tail) (ls); // tail(tail(ls)); assert (head(ls) == qux); // s, t are removed
Listing 14 is another example that increments all elements of a list by two.
Listing 14. Using
compose to increment list elements
List<int> integers = enumFrom (1); map (compose(inc, inc), integers); // this modifies integers to an infinite list [3, 4, 5 ...]
Lambda functions
Any discussion about functional programming is incomplete without a mention of lambda functions. Lambda abstraction is used for defining anonymous functions. This is useful when you don't want to define separate functions for small pieces of code. To use lambda functionality in code, you need to define the
FCPP_ENABLE_LAMBDA macro. Listing 15 succinctly defines new mathematical and logical functions from existing code. Notice how
factorial is defined.
Listing 15. Defining lambda functions
// a new function where f(x) = 3*x+1 lambda(X)[ plus[multiplies[3,X],1] ] // a new function where f(x) = x! (factorial x) lambda(X)[ l_if[equal[X,0],1,multiplies[X,SELF[minus[X,1]]]] ]
The code in Listing 15 is self-explanatory. Routines
plus,
multiplies, and so on are defined as part of the FC++ library, and you use the
lambda operator to create new functionality from existing code.
Conclusion
FC++ provides:
- Objects of type
CFunType, which you can easily extend to serve functional programming needs
- Implementation of lazy lists that can potentially hold infinite sequences
- Several functional programming operators such as
head,
tail,
map,
filter,
ptr_to_fun, and so on
- The ability to create new functions from existing functions using currying operators,
lambda, or
compose
Probably the singular drawback of FC++ is the lack of standardized documentation that describes the functions defined in its headers. This article introduced the most useful ones:
compose,
curry,
bind,
take,
map,
ptr_to_fun, and
filter.
Resources
Learn
- Wikipedia provides an interesting introduction for beginners to functional programming.
- Getting started with FC++ provides informal documentation for readers who are familiar with C++ but new to functional programming.
- For a more complete overview, read "Functional Programming with the FC++ Library" by Brian McNamara and Yannis Smaragdakis.
- For more detail on
OddListimplementation, see FC++ lazy list implementation on the Georgia College of Tech Computing website.
- See Currying in FC++ for more information about currying.
- To learn more about lambda functionality, see FC++ lambda
- XL C/C++ for AIX
- Download FC++.
- Download the FC++ client for further insight into its workings.
-. | http://www.ibm.com/developerworks/aix/library/au-learningfc/index.html | CC-MAIN-2016-36 | refinedweb | 2,151 | 52.8 |
Table of contents
- 1 General questions
- 1.1 Is MonoRail stable? Why it's not 1.0?
- 1.2 Is there any public site using MonoRail?
- 1.3 Where to ask for help?
- 2 Installation
- 2.1 Cassini refuses to start
- 3 Ajax Support
- 3.1 How do you provide Ajax support?
- 3.2 What other javascript libraries come with MonoRail?
- 3.3 I'm trying to pass some parameters to my action but it's not working. What's wrong?
- 4 Security
- 4.1 Is there anything I should be concerned about related to security when using MonoRail?
- 5 Authentication/Authorization
- 5.1 How to handle authentication / authorization?
- 6 NVelocity View Engine
- 6.1 Is there a way to render a content of another view within a view?
- 6.2 Is there any way to generate arrays with NVelocity?
- 6.3 I'm trying to access an indexer on a NVelocity template with no success
- 6.4 Can I use WebForm Controls on NVelocity template?
This page has a list of frequently asked questions.
General questions
Is MonoRail stable? Why it's not 1.0?
Yes, very stable, albeit there's always room for improvements. Check our issue tracker.
We are not 1.0 because there is an important feature not implemented yet: Caching support.
Is there any public site using MonoRail?
See this forum section
Where to ask for help?
The best place for ask for help - and to check if your question hasn't been asked before - is our forum.
Installation
Cassini refuses to start
Make sure you have registered Cassini.dll in the GAC (global assembly cache).
> gacutil /i Cassini.dll
Ajax Support
How do you provide Ajax support?
We did not reinvent the wheel. We use the awesome prototype js library
What other javascript libraries come with MonoRail?
I'm trying to pass some parameters to my action but it's not working. What's wrong?
You need to use the with parameter used by the AjaxUpdater/AjaxRequest. For example:
$AjaxHelper.LinkToRemote("Some action", "ProcessItem.rails", "%{update='resultdiv', with='productid=10'}")
Controller's code:
public void ProcessItem(int productid) { ... }
Security
Is there anything I should be concerned about related to security when using MonoRail?
Yes, a few things.
First if your view directory is on the web folder then clients can potentially see the source code of the views, which is not good. To prevent this, associate the view extension with a IHttpHandler that comes with ASP.Net.
For nvelocity view engine:
<system.web> <httpHandlers> <add verb="*" path="*.vm" type="System.Web.HttpForbiddenHandler"/> ...
For brail
<system.web> <httpHandlers> <add verb="*" path="*.boo" type="System.Web.HttpForbiddenHandler"/> ...
And for the StringTemplate view engine
<system.web> <httpHandlers> <add verb="*" path="*.st" type="System.Web.HttpForbiddenHandler"/> <add verb="*" path="*.sti" type="System.Web.HttpForbiddenHandler"/> <add verb="*" path="*.stg" type="System.Web.HttpForbiddenHandler"/> ...
Second, if you use the DataBinder to populate classes, you might want to inform a Exclude or Allow list to prevent people from populating properties that are not on the form. Check the DataBind documentation for more information.
Authentication/Authorization
How to handle authentication / authorization?
There is not only one way of handling this. You can rely on the standard support offered by ASP.Net framework by setting up the an authentication strategy (using FormsAuthentication or PassportAuthentication or WindowsAuthentication) and use the OnAuthenticate event to supply an implementation of IPrincipal. Then control the authorization using the
authorization node.
Or you can create your own by using a MonoRail filters. A filter can check if the user is authenticated by looking for a cookie or an entry in the session, for example. If not authenticated, it can redirect the client to a login action and return false to stop further processing for the request.
Another option is to mix both methods. Providing an implementation of
IPrincipal and setting it on the request activity allows you to enfore some authorization using the PrincipalPermission. For example
public class AdminController : AbstractSecureController { [PrincipalPermission(SecurityAction.Demand, Role="IsAdmin")] public void Index() { } ... [PrincipalPermission(SecurityAction.Demand, Role="CanChangePasswords")] public void ChangeUserPassword(...) { ... } }
The good thing is that the PrincipalPermission is part of the .Net security infrastructure, so it's up to it to enforce the rules. You can also associate a rescue with the SecurityException so you can present a nice error message for the user.
For more information read the Authentication/Authorization document on the User's guide.
NVelocity View Engine
Is there a way to render a content of another view within a view?
Sometimes this is asked in another form: "What's the equivalent of render partial in NVelocity?"
Use the #parse directive. Suppose you have the following directory structure
WebFolder Views Shared
And on the ''shared'' folder there's a 'header.vm' file. You can then, from any view, invoke it
#parse("Shared/header.vm")
Is there any way to generate arrays with NVelocity?
Yes. See the following examples:
#foreach($day in [1..30]) $day #end #set($months = ['Jan', 'Feb'])
I'm trying to access an indexer on a NVelocity template with no success
You can use get_Item or a different format when the key is fixed:
$dict.get_Item("key")
$dict.key
Both code snippets are equivalent. But the former allows the key to come from a variable:
#set($var = "key") $dict.get_Item($var)
Can I use WebForm Controls on NVelocity template?
No. There are hacks, though. But the best way to handle these situations is to use the Composite View Engine and have NVelocity and WebForm (aspx) views on the same project. | http://www.castleproject.org/monorail/faq.html | crawl-001 | refinedweb | 922 | 52.97 |
REST: How to pass query parameters
Hello.
I'm having difficulties trying to figure (if possible) how to create an URL that also matches query parameters.
I tried:
<Route Url="/:namespace/test(\?id\=):id" Method="GET" Call="Test"/>
<Route Url="/:namespace/test?(id)=:id" Method="GET" Call="Test"/>
<Route Url="/:namespace/test?id=:id" Method="GET" Call="Test"/>
But none of these worked.
Is it possible when using %CSP.REST or am I restricted to using route parameters?
Thank you.
EDIT:
Forget about the "id" parameter name. It's a bad sample: normally id are by definition, able to be hierarchically included in the URI.
Think on id as being something that's not hierarchical, like the sample below:
/users/1?fields=list,of,fields,i,want,to,show
This would force the URI to be designed in a way that can't be expressed by route paramers, since "fields" doesn't belong to users, as it's really abstract.
So let's consider the URLMap as:
<Route Url="/users/:id(\?fields\=):fields" Method="GET" Call="Test"/>
<Route Url="/users/:id?(fields)=:fields" Method="GET" Call="Test"/>
<Route Url="/users/:id?fields=:fields" Method="GET" Call="Test"/> | https://community.intersystems.com/post/rest-how-pass-query-parameters | CC-MAIN-2022-21 | refinedweb | 197 | 53.88 |
I want to create a live template for test files that have name like "SomeComponent.spec.jsx", where the component being tested is automatically imported, e.g.
import './$FILENAME$.jsx';
The filenameWithoutExtension function doesn't work here as it only strips the last extension, leaving me with "SomeComponent.spec". I tried a few combinations of other functions but couldn't find anything that worked. The groovy one looked promising until I ran into the same problem as this:-
Is there a way of getting the filename without any extension in this case with the existing (working!) webstorm live template functions?
Hi there,
Have you thought about calling that "filenameWithoutExtension function" again? Something like this (or whatever the right syntax is):
Ooo, nice idea, hadn't thought of that! Sadly it doesn't seem to work - I still get "SomeComponent.spec" as the value - I suspect that fileNameWithoutExtension ignores any arguments I give it and just returns the value based off the current filename, so nesting it has no effect
I did this using regularExpression:
I had the same problem. I decided to use just a simple variable and use the autocomplete function to suggest the filename for me. | https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000787270-LiveTemplate-file-name-without-extension?page=1#community_comment_4797492356498 | CC-MAIN-2022-33 | refinedweb | 198 | 54.12 |
Microcontroller Programming » Where do 'PORTS' come from?
I've been going through the ATMEL datasheet for the ATMega168 and searching online, but I haven't found an explanation of where the 'PORT' names come from.
eg. When changing the DDR Register, where does one get DDRC from?
I'm not quite sure I fully understand your question, but if you are asking where the terms come from, they are both named registers on the chip. The PORT registers (there are 3 on the ATMEGA168: PORTB, PORTC, and PORTD) control the state of the pins of the micro-controller. The DDR registers (again, there are 3: DDRB, DDRC, and DDRD) are the Data Direction Registers. They control whether a given pin on a port is input or output.
Here is a quote from the datasheet:
Each port pin consists of three register bits: DDxn, PORTxn, and PINxn. As shown in "Register Description" on page 92, the DDxn bits are accessed at the DDRx I/O address, the PORTxn bits at the PORTx I/O address, and the PINxn bits at the PINx I/O address.
Hope that helps clarify it a bit,
Rick
Rick,
Thanks. I've read that paragraph a number of times.
What I'm trying to understand is what the 'B', 'C' and 'D' in the PORT registers refer to. e.g. Are they referring to the C in PC3 (PIN26) or the B in PB6 (PIN9)?
Hi TuffLux, yes exactly!!
Pin 14, 15, 16, 17, 18, 19, 9, and 10 make the 8 pins (bits) of PORTB.
The same for PORTC and PORTD.
Ralph
Ralph,
Thanks. That's what I assumed, but I couldn't find any documentation referencing it.
TuffLux -
PORTD, PORTC and PORTB are memory addresses inside the MCU. You have to really do some digging inside each of the "include" header files to find the actual address but it is there. So if you write:
PORTD = 1;
You are actually telling the MCU to put 1 in address register 0x0B inside the ATmega168p. The MCU then uses that info to apply 5v to any of it's 8 pins depending on the value stored at memory address 0x0B. The symbols PD1, PD2 etc are defined as values 1, 2, etc and are just used to keep things easier to read. Same goes for PC1, PC7, PB3, PB5...the last number in the statement is the value it represents. So...
PORTD |= (1<<PD3);
is exactly the same as:
PORTD |= (1<<PB3);
Except it won't make as much sense when you look at the code a few months later. Each statement tells the MCU to read the address 0x0B, bitwise OR that reading with 00001000 (1<<3) or (decimal 8) and store the results back into 0x0B. Then internally it will apply 5v to any pin that has a corresponding 1 in the value stored in 0x0B (PORTD).
How can:
PORTD |= (1<<PD3);
equal
PORTD |= (1<<PB3);
When PD3 is referencing PIN5 and PB3 is referencing 17?
PD3 and PB3 are just constants defined in an included file. Both are equal to the number 3 so when you write either one, it is equivalent to the compiler because it substitutes 3 for them like this:
PORTD |= (1<<3);
The constants are just there to make it more understandable from a programming point of view and are defined as the bit position in the register for a given pin on the micro.
If that is the case, how does the microcontroller know which pin you're trying to change?
Ultimately you're saying that:
(1<<PB3) and (1<<PD3) boil down to (1<<3)
PORTD |= (1<<PD3)
is a kind of short hand way of writing
PORTD = PORTD | (1<<PD3)
Since PD3 is defined as 3 in a define file that is included.
(1<<3) is a one Byte (binary 00000001) shifted left 3 times to give (binary 00001000)
The compiler is instructed to
take the byte that is in the PORTD register, OR it with (1<<3), binary 00001000. Then take the result of that OR operation and overwrite whatever is in PORTD with that new value. The result of this operation will be to set (to high) the PD3 bit in the PORTD register while maintaining all the other bits in that register as they were.

The PORTD on the left of the = sign tells the compiler what byte to work with, the 1<<PD3 tells the compiler what bit to set in that byte.
Darryl
Where is "PD3 defined as 3"??
PD3 is the fourth bit of PORTD (PD0, PD1, PD2, PD3, PD4, PD5, PD6, PD7).
And why would it be 3 and not 4?
“Where is "PD3 defined as 3"??”
I do not know exactly where it is defined; it will be in one of these included files:
#include <stdio.h>
#include <avr/io.h>
#include <avr/pgmspace.h>
#include <inttypes.h>
It is defined as 3 because this number is used as a bit shift value, not the actual bit location or bit name reference. As you have pointed out, the bit name references also start at zero, not at one.
For example, if we shift left once from the LSB location PD0 (00000001), (1<<1), we have set the bit at PD1 (00000010). Shift once to set PD1, shift twice to set PD2, and so on.
“And why would it be 3 and not 4?”
Because if we shift four times left, (1<<4), from PD0 we would end up setting the PD4 bit, not the PD3 bit. Using (1<<PD3) to set the PD4 bit would really be confusing.
Thanks Darryl, I forgot the references. I'll look at them again, it has been a while.
I believe "shift left once from the LSB location PD0 00000001" is the key to understanding this.
When doing bit shifting it always starts at bit 1 (LSB).
There is an assumption (which is correct) that one is starting out from a bit 1 location (00000001 LSB), not from a empty PORT 00000000 and shifting into the PORT which almost makes sense.
Yes
The thing is you have to have something to shift. There is not really any meaning to shifting an "empty” port as you call it. The shift left and shift right operations just moves each bit of a byte left or right a number of times. If there are no bits set there is nothing to move. (1<<3) moves the bit values of the number 1 to the left 3 times. I believe that any other byte can be shifted in the same way. (2<<3) or (6<<3) are just as valid an operation.
(9<<3) expressed in binary would be 00001001 becomes 01001000
We are using the value of 1 as a starting point because with one only the LSB is set. So we are just shifting that one set bit to set a new bit location where we need it.
Ralph -
Good question. If you look in:
#include <avr/io.h>
You will find a whole bunch of "ifdef" statements pertaining to the model number of the chip you are using. One of them "iom168p.h" (I think) is included here. I believe you will find the PD3 defines in there. It is not obvious because the defines are kind of nested inside that file in an unusual way.
Ralph said:
"
I believe "shift left once from the LSB location PD0 00000001" is the key to understanding this.
When doing bit shifting it always starts at bit 1 (LSB).
"
(1<<PB3) = (1<<3) = (00000001 << 00000011) = 00001000 = 4
(2<<PB3) = (2<<3) = (00000010 << 00000011) = 00010000 = 16
The shift starts at whatever number you tell it to. 1<<3 = 8 where 2<<3 = 16
Rick and I are saying the exact same thing. bit shift left moves each bit left and fills the new bit values on the right with 0's any bits that are moved out of the byte on the left are lost.
(29<<5) that is 00011101 shifted left 5 times will be 10100000
Yep Darryl,
We are saying the same thing...
Oh, and I need to correct a typo in my post above...
Where I had:
(1<<PB3) = (1<<3) = (00000001 << 00000011) = 00001000 = 4
It should read:
(1<<PB3) = (1<<3) = (00000001 << 00000011) = 00001000 = 8
The reason I was trying to help illustrate this was for Ralph where he said:
" -- There is an assumption (which is correct) that one is starting out from a bit 1 location (00000001 LSB), not from a empty PORT 00000000 and shifting into the PORT which almost makes sense. -- "
I was pointing out that it starts at location 1 because that is the number we tell it to shift. If we told it to shift 2<<3, the bit being shifted would start at location 00000010.
Yeah, thanks Darryl and Rick.
I was pointing out that it starts at location 1 because that is the number we tell it to shift.
I was pointing out that it starts at location 1 because that is the number we tell it to shift.
Of course.
It's funny I have been using the (1<<PB3) syntax for a couple of years now and just accepted that it worked. I never really thought about why it was working, I knew what it was supposed to do but had never really thought about it until now.
So thanks once again Darryl and Rick.
Hey TuffLux, have you got a handle on where the PORT comes from now any other questions?
Your welcome Ralhp
It seams we may be a little off topic with some of this, or maybe not. I hope we have at least managed to answer some of the original questions.
Man! my keyboarding is bad I meant Ralph, sorry.
Wow. I'm surprised there was so many responses to this question. It's interesting how a discussion on PORTS has become more of a discussion on bitshift.
I understand where and what the PORTS are. My concern was that a lot of the documentation references PORTA, B, C, D and I couldn't find PORTA in any of the ATMega168 datasheet.
That's what I was eplaining in my first post. There are other Atmel micro's that do have a PORTA. For instance an ATMEGA128 has PORTA, PORTB, PORTC, PORTD, PORTE, PORTF, and PORTG.
Theres something interesting to dig into with the original question: the PORT, PIN, DDR etc actually just reference memory locations via pointers. So setting DDRC bit 5 to one boils down to this ((volatile uint8_t)0x27 |= 1<<4 which says that an 8 bit wide variable located at address 27 needs to have its 5th bit set.
The compiler converts these low addresses into indirect memory addressing via x, y or z registers and would have maybe 5 instructions. 2 to load the address into a 16 bit register, 1 to load the current value of the register, 1 to do the logical or and 1 to store the result back.
Please log in to post a reply. | http://www.nerdkits.com/forum/thread/2385/ | CC-MAIN-2019-04 | refinedweb | 1,835 | 79.9 |
Safely set bits in a variable (QNX Neutrino)
#include <atomic.h> void atomic_set( volatile unsigned * loc, unsigned bits );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The atomic_set() function is a thread-safe way of doing an (*loc) |= bits operation.
When modifying a variable shared between a thread and an interrupt handler, you must either disable interrupts or use atomic operations.
The atomic_*() functions are also useful for modifying variables that are referenced by more than one thread (that aren't necessarily in the same process) without having to use a mutex.
To safely set the 1 bit in a flag:
#include <atomic.h> … volatile unsigned flags; … atomic_set( &flags, 0x01 ); | http://www.qnx.com/developers/docs/7.0.0/com.qnx.doc.neutrino.lib_ref/topic/a/atomic_set.html | CC-MAIN-2018-43 | refinedweb | 121 | 56.66 |
From: Johan Råde (rade_at_[hidden])
Date: 2006-08-18 12:01:25
Richard Hadsell wrote:
> Paul A Bristow wrote:
>
>> But a NaN is a NaN, whatever the sign, only the exponent field determines
>> NaN-ness AFAIK.
>>
>> Apart from sign there are also lots of significand bits whose meaning is
>> officially undefined.
>>
>>
> The exponent field is all 1's, just like infinity, but the mantissa
> field has at least one 1 bit. All 0's in the mantissa would be infinity.
For more information about negative nan, and related beasts, read the
the section on the IEEE 754 standard in documentation of my library. If
you still want to know more, follow the links listed there.
The library is in the vault: serialization/non_finite_num_facets.zip
Hubert Holin wrote:
> *Negative* Not-A-Number? What is such a beast, and what should
it be used for? Next, I vote for an Octonionic Not-A-Number!
I have good news for you Hubert ;-)
There already are octonionic not-a-number's.
The following program gives the output "false" when I run it on VC++
7.1. (On some compilers you may have to turn off the optimizations.)
#include <boost/math/octonion.hpp>
int main()
{
boost::math::octonion<double>
oct(std::numeric_limits<double>::quiet_NaN());
std::cout << std::boolalpha << (oct == oct) << std::endl;
}
--Johan Råde
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2006/08/109318.php | CC-MAIN-2021-43 | refinedweb | 241 | 59.3 |
Write a C++ program to determine a student's grade. It reads three test scores (between 0 and 100) and calculates the
average score and converts it to a letter grade.
Grade Conversion Rules:
rule a. If the average score is 90 or more the grade is 'A'.
rule b. If the average score is 70 or more and less than 90 then check the third test score. If the third score is 90 or more
the grade is 'A' otherwise the grade is 'B'.
rule c. If the average score is 50 or more and less than 70 then check the average of the second and third scores. If the
average of the last two is 70 or more, the grade is 'C' otherwise it is a 'D'
rule d. If the average score is less than 50 then the grade is 'F'.
Rounding Rule: Midpoint Rounding
Calculate the grade average as a double. Round up to the next int if the fractional part is .5 or greater, otherwise
truncate the fraction by casting to an int.
The algorithm is: Add .5 to the average and cast the result to an int.
Example: average = (int)(average+0.5);
#include <iostream> #include <iomanip> using namespace std; int main() { //variable declarations double t1, t2, t3, avg; //Input cout << "Enter in three test grades that the sudent received:\n"; cin >> t1, t2, t3; //Process avg = (t1 + t2 + t3) / 3; //Output if (avg >= 0) { if (avg >=90) { cout << "The student's average is an A:\n" << avg << endl; } else if (70 <= avg >= 89) { cout << "The student's average is a B:\n" << avg << endl; } else if (50 <= avg >= 69) { cout << "The student's average is a C:\n" << avg << endl; } else (avg <= 49) { cout << "The student's average is a F:\n" << avg << endl; } } else cout << "The test scores cannot be less than 0. Please enter 0 or more." << endl; system("pause"); return 0; }
I need help. At this point, I do not know what I am doing. I can guess that my if statements are screwed, but I don't know how to fix them.
Please help.
Thank you. | https://www.daniweb.com/programming/software-development/threads/500144/c-no-specific-error-given | CC-MAIN-2018-13 | refinedweb | 357 | 79.5 |
What it is
It is a mod/port that adds Xperia apps to your devices. Apps taken from many internet sources (see below)Disclaimer
This mod is worked to be a stable as possible. There are no changes into apps. I remind you that this mod i do for my devices and I will try to improve this for other devices and custom roms.
UPDATED ON 08/10/2017: I am back with some new little updates!
Code:
#include <std_disclaimer.h> /* * * I am not responsible for bricked devices, dead SD cards, * thermonuclear war, or you getting fired because something app failed. Please * do some research if you have any concerns about features included in this mod * before flashing it! YOU are choosing to make these modifications, and if * you point the finger at me for messing up your device, I will laugh at you. * */
It is only for AOSP 7.x.x - based ROM and Lineage OS 14.1 - based ROMPrerequisites:
- It's highly recommended to use this mod on AOSP-based ROM or Lineage OS 14.1-based ROM. MIUI or other custom ROM support not guaranteed.
- TWRP installed in the FOTAKernel/recovery partition
Downloads:
Downloads are available on androidfilehost.com - >>>CLICK<<<. If u want support from PlayStore then install this thing!
Sources:
How to install:
- Shutdown the device
- Boot into recovery (TWRP recommended)
- Wipe cache and do system and data backup
- Install package zip
- Done! Wipe cache
- Reboot
Known bugs:
- Album didn't work due Sony DRM(((
What's new on 2017.10.08 ::
Code:
- Removed some apps, also added Music Player with Sony Sound Enhancement Features, updated some apps, updated updater-script.
Sony Xperia Apps for Nougat Devices, App for all devices (see above for details)
Contributors
jimmy123322
Version Information
Status: Beta
Current Stable Version: None
Current Beta Version: 2017-10-08
Beta Release Date: 2017-10-08
Created 2017-06-07
Last Updated 2017-10-08 | https://forum.xda-developers.com/android/software/7-0-sony-xperia-apps-nougat-devices-t3618532 | CC-MAIN-2018-09 | refinedweb | 320 | 64.3 |
StealJS 2.0 is out and available on npm! 🎆 Check out the migration guide to help you upgrade.
This release includes:
- Tree-shaking
- Native promises by default
- Support for .mjs modules
- Simplified demo pages
- Removal of development code in many popular libraries
StealJS' mission is to make it cheap and easy to do the right thing. Doing the right thing, when building for the web, includes things such as writing tests and breaking your applications into smaller mini-applications (modlets) that can be composed together.
Steal 2.0 expands on this mission while minimizing the number of changes you need to make to your app. Even for big apps the upgrade can be done in an afternoon.
Like other DoneJS projects, we added these features based on our community survey results.
Tree Shaking
This has been the top requested feature from the community survey for quite a while, and something we get asked about in Gitter, at meetups, and anywhere else we are discussing DoneJS.
Tree Shaking is a bundling optimization, a form of dead code removal, that examines a dependency graph based on the use of exports. When it encounters an unused export (one that is not used by any parent modules) it can remove that code. The follow example has code that can be removed:
math.js
export function add(a, b) { return a + b; }; export function subtract(a, b) { return b - a; };
main.js
import { add } from './math'; add(2 ,3);
In the above example, StealJS will perform the following steps:
- Examine math.js and see that it exports
addand
subtract.
- Walk up the parents of math.js, in this case only main.js, and see which of those functions are used.
- Since
subtractis not used, its code, and any code it depends on that is not used elsewhere, can be removed.
The final bundled output will be something like:
define("math", [], function(exports, module, require){ exports.add = function(a, b) { return a + b; }; }); define("main", ["./math"], function(exports, module, require){ var _add = require("./math").add; _add(2, 3); });
StealJS does tree shaking both in the client (in steal), and during the build (with steal-tools). We do tree shaking in the client to avoid loading entire modules, sometimes entire packages, that are not used by an application.
This is how StealJS is able to tree-shake CanJS. The
can package contains a module that re-exports from a bunch of sub-packages. It looks a little like:
can.js
export { default as Component } from "can-component"; export { default as DefineMap } from "can-define/map/map"; export { default as stache } from "can-stache"; export { default as fixture } from "can-fixture";
Our app then uses it:
main.js
import { Component } from "can"; Component.extend({ tag: 'my-app', view: `Hello, this is an app`, ViewModel: {} });
Here we can see that only
Component is used, which means only the can-component package is used.
Steal is able to see this and recompile can.js to be:
export { default as Component } from "can-component";
This is a big win, saving us from having to fetch the package.json, the main, and likely many other modules from each of those unused packages.
Later, if another parent of can is detected, steal will re-perform the same operation and, if needed, recompile and re-execute the can.js module.
Without tree-shaking the above example would result in an optimized build output of 134kb. With tree-shaking it comes to 60.9kb; that’s less than half the size!
Native Promises
More and more teams have dropped support for IE and only support browsers supporting native Promises. Since the Promise polyfill included in steal.js in 1.x was quite large, we added the steal-sans-promises.js script in steal 1.5.
In 2.0 we thought it would be a good time to flip this; now steal.js does not contain the Promise polyfill and we’ve created steal-with-promises.js which does. All of our documentation and examples use steal.js since we assume most people getting started are using modern browsers for development.
If you want to support IE11, just change your script tag to use the new promises-included script:
<script src="./node_modules/steal/steal-with-promises.js" main="~/app"> </script>
Likewise, when you build out your project with steal-tools it will no longer include the version of steal that contains the Promise polyfill, so if you need that you can add this flag to your build options:
const stealTools = require("steal-tools"); stealTools.build({}, { bundlePromisePolyfill: true });
Support for .mjs extension
Now that native modules have landed in browsers, we're starting to see some libraries ship native module compatible builds with the .mjs extension. This article explains the reasoning behind the new extension in detail. Google's Chrome team also recommends using this extension on the web to differentiate module from non-module scripts.
We are planning on making StealJS work directly with native modules in the future, but in the meantime steal 2.0 can now import modules with the .mjs extension:
import * as math from "./math.mjs"; math.add(2, 3);
Simplified demo pages
Steal has always automatically loaded the main module when it boots up. This makes getting started super simple: just add a script tag pointing to steal.js. However once applications grow and you add more and more pages, most pages are not utilizing the app’s main,. To prevent loading the main module, ou would need to do weird things like:
<script src="node_modules/steal/steal.js" main="@empty"></script>
Here
@empty is a special module defined in steal; it's essentially a noop. Once you understand that it makes sense but is a bit difficult to explain to new users.
With that being the case Steal 2.0 no longer automatically loads the main module. We feel that sacrificing a tiny bit of DX in getting started is worth it to make things easier once your app grows. And this makes things a bit more consistent; steal only loads the config by default now. You have to tell it what you want to load. You can do that by:
Providing a main
Explicitly specifying a module to load:
<script src="node_modules/steal/steal.js" main="~/app"></script>
Or using the new main boolean attribute to load the package.json main:
<script src="node_modules/steal/steal.js" main></script>
Using a steal-module
<script src="node_modules/steal/steal.js"></script> <script type="steal-module"> import { Component } from "framework"; // ... </script>
Use the dynamic import API
<script src="node_modules/steal/steal.js"></script> <script> steal.import("~/app").then(function() { // ... }); </script>
Removal of development code
steal-tools will already remove development code that uses steal-remove-start/end comments like so:
//!steal-remove-start console.warn("Don't do that."); //!steal-remove-end
However this only works in steal. Many frameworks such as React use a different approach. They check the
process.env.NODE_ENV global like so:
if(process.env.NODE_ENV !== "production") { console.warn("Don't do that."); }
This is supported in steal-tools 1.x but you need to pass the
--envify flag in order to enable it. Because this is so widely used we thought it would be a good idea to enable it by default in steal-tools 2.0, so we did!
What’s Next?
This is an important release of StealJS by making defaults out of some of the recent features we have recently completed. The next version of steal and steal-tools will likely be a much bigger change, but we’re still thinking about the direction it should go.
In the meantime with StealJS 2 and CanJS 5 out, we need a new release of DoneJS supporting all of these. Look for DoneJS 3 in the near future, to include:
- CanJS 5 with tree-shakable modules.
- StealJS 2
- Improved, and now default, incremental rendering.
- What the community votes for on the surveys! | https://www.bitovi.com/blog/steal-2.0 | CC-MAIN-2019-35 | refinedweb | 1,327 | 66.84 |
Adding a prefix to an EditText
Adding an always visible prefix to an EditText is surprisingly hard! Users can delete your prefix, type or paste text in the middle of it. Handling this in a TextWatcher is a lot of code. However, there is an easier way!
Using a TextWatcher to add an always visible prefix is fairly complicated, if someone clicks in the middle of your prefix, you have to intercept the text and move it to the end or more the cursor to the end immediately. All of this can result in a LOT of code. You need to disable copy/paste, you need to handle onClick, you need to handle text change etc.
An other approach may be to put a TextView in the background and add padding to the EditText, however, it’s tricky to align the background text and the EditText. The reason for that is that padding is in DIPs and text sizes are in SP. If you do get things to align perfectly, fragmentation of the EditText view, accessibility changes and different devices densities may still cause the text to show up unaligned. Look at compatibility issues here for more on EditText fragmentation.
The best way I can think of to address this issue is to create a custom EditText which literally uses the same paint object to draw the prefix text and and add padding according to the prefix text size. Below is some quick and dirty code to do the same. The below example prefixed the EditText with a country code for input of phone numbers (for example).
Notes:
- It doesn't support prefix text changing at the moment, but I think looking at the code, it’s pretty obvious how you can support it.
- I also use tag to pass in the prefix. This could be a bad idea, so you may be better off adding a custom attribute.
- Also, I haven’t bothered to check to right to left displays but that should be simple enough.
- Finally, TalkBack won’t be able to read the prefix.
The result
The code
public class PrefixEditText extends AppCompatEditText {
float mOriginalLeftPadding = -1;
public PrefixEditText(Context context) {
super(context);
}
public PrefixEditText(Context context, AttributeSet attrs) {
super(context, attrs);
}
public PrefixEditText(Context context, AttributeSet attrs,
int defStyleAttr) {
super(context, attrs, defStyleAttr);
}
@Override
protected void onMeasure(int widthMeasureSpec,
int heightMeasureSpec) {
super.onMeasure(widthMeasureSpec, heightMeasureSpec);
calculatePrefix();
}
private void calculatePrefix() {
if (mOriginalLeftPadding == -1) {
String prefix = (String) getTag();
float[] widths = new float[prefix.length()];
getPaint().getTextWidths(prefix, widths);
float textWidth = 0;
for (float w : widths) {
textWidth += w;
}
mOriginalLeftPadding = getCompoundPaddingLeft();
setPadding((int) (textWidth + mOriginalLeftPadding),
getPaddingRight(), getPaddingTop(),
getPaddingBottom());
}
}
@Override
protected void onDraw(Canvas canvas) {
super.onDraw(canvas);
String prefix = (String) getTag();
canvas.drawText(prefix, mOriginalLeftPadding,
getLineBounds(0, null), getPaint());
}
}
Usage:
<com.alimuzaffar.customwidgets.PrefixEditText
fontPath="fonts/Lato-Light.ttf"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:gravity="bottom"
android:textSize="24sp"
android:tag="+61 "
android:
Another approach
Another approach and in some respects a better approach may be to create a custom drawable and add it as a drawable right or drawable left. The custom drawable will simply draw the text. The reason I didn’t choose this approach is:
- I would have to pass in the paint object or build one from the theme, why bother when EditText already contains one.
- Aligning the text drawn by my drawable with the EditText is not going to be easy and prone to getting misaligned.
- To fix both the issues above, the best approach would be to pass a reference to the EditText to our Drawable, and this code is just becoming messy at that point IMHO.
UPDATE 7th March 2016: I created a TextDrawable that you can use to add a prefix to an EditText or use emojis, Unicode characters or text as a Drawable.
Finally
In order to build great Android apps, read more of my articles. | https://medium.com/@ali.muzaffar/adding-a-prefix-to-an-edittext-2a17a62c77e1 | CC-MAIN-2018-05 | refinedweb | 646 | 51.78 |
Quiz
The Windows Forms classes are located in the ___________________ namespace.
By setting the _______________ property in a control, I can make sure that my code is not changed if someone inherits my form.
The ______________ property returns an array of MdiChildren in an MDI application.
What are the properties in the Properties window that enable you to specify the location, docking, and resizing capabilities of a control?
True or False: I can tell many controls to perform the same method by setting the SameMethod property in the Properties window.
True or False: The Windows Forms Designer Generated code should not be changed.
The _____________ argument and the _____________ argument are always passed as parameters to events for a control.
Quiz Answers
System.Windows.Forms.
You need to change the Modifiers property to Private.
The MdiChildren property will allow you to loop through the collection of MdiChild forms in your application. You can use the MdiChildren.Length property to determine the number of open MdiChildren in you application.
The Dock, Alignment, and Location properties allow you to control how and where controls are positioned on your forms.
False. There is no SameMethod property. In order to have your controls perform the same function, use the Handles keyword in VB.NET or simply create a new delegate in C#.
True. You can change the Windows Forms Designer Generated code, but you risk the Forms designer changing it once you switch back to Design view.
System.Object and System.EventArgs. | https://www.informit.com/articles/article.aspx?p=31560&seqNum=12 | CC-MAIN-2021-21 | refinedweb | 248 | 66.33 |
Run linters against staged git files and don't let 💩 slip into your code base!.
If you've written one, please submit a PR with the link to it!
npm install --save-dev lint-staged husky
.eslintrc,
.stylelintrc, etc.
package.jsonlike this:
Now change a few files,
git add some of them to your commit and try to
git commit them.
See examples and configuration below.
I recommend using husky to manage git hooks but you can use any other tool.
NOTE:
If you're using commitizen and having following npm-script
{ commit: git-cz },
precommithook will run twice before commitizen cli and after the commit. This buggy behaviour is introduced by husky.
To mitigate this rename your
commitnpm script to something non git hook namespace like, for example
{ cz: git-cz }
Starting with v3.1 you can now use different ways of configuring it:
lint-stagedobject in your
package.json
.lintstagedrcfile in JSON or YML format
lint-staged.config.jsfile in JS format
See cosmiconfig for more details on what formats are supported.
Lint-staged supports simple and advanced config formats.
Should be an object where each value is a command to run and its key is a glob pattern to use for this command. This package uses minimatch for glob patterns.
package.jsonexample:
.lintstagedrcexample
This config will execute
npm run my-task with the list of currently staged files passed as arguments.
So, considering you did
git add file1.ext file2.ext, lint-staged will run the following command:
npm run my-task -- file1.ext file2.ext
To set options and keep lint-staged extensible, advanced format can be used. This should hold linters object in
linters property.
linters—
Object— keys (
String) are glob patterns, values (
Array<String> | String) are commands to execute.
gitDir— Sets the relative path to the
.gitroot. Useful when your
package.jsonis located in a subdirectory. See working from a subdirectory
concurrent— true — runs linters for each glob pattern simultaneously. If you don’t want this, you can set
concurrent: false
chunkSize— Max allowed chunk size based on number of files for glob pattern. This is important on windows based systems to avoid command length limitations. See #147
subTaskConcurrency—
2— Controls concurrency for processing chunks generated for each linter.
verbose— false — runs lint-staged in verbose mode. When
trueit will use.
globOptions—
{ matchBase: true, dot: true }— minimatch options to customize how glob patterns match files.
It is possible to run linters for certain paths only by using minimatch patterns. The paths used for filtering via minimatch are relative to the directory that contains the
.git directory. The paths passed to the linters are absolute to avoid confusion in case they're executed with a different working directory, as would be the case when using the
gitDir option.
// .js files anywhere in the project"*.js": "eslint"// .js files anywhere in the project"**/*.js": "eslint"// .js file in the src directory"src/*.js": "eslint"// .js file anywhere within and below the src directory"src/**/*.js": "eslint"
Supported are both local npm scripts (
npm run-script), or any executables installed locally or globally via
npm as well as any executable from your $PATH.
Using globally installed scripts is discouraged, since lint-staged may not work for someone who doesn’t have it installed.
lint-staged is using npm-which to locate locally installed scripts, so you don't need to add
{ "eslint": "eslint" } to the
scripts section of your
package.json. So in your
.lintstagedrc you can write:.
Tools like ESLint/TSLint or stylefmt can reformat your code according to an appropriate config by running
eslint --fix/
tslint --fix. After the code is reformatted, we want it to be added to the same commit. This can be done using following config:
Starting from v3.1, lint-staged will stash you remaining changes (not added to the index) and restore them from stash afterwards. This allows you to create partial commits with hunks using This is still not resolved
git add --patch.
If your
package.json is located in a subdirectory of the git root directory, you can use
gitDir relative path to point there in order to make lint-staged work.
All examples assuming you’ve already set up lint-staged and husky in the
package.json.
Note we don’t pass a path as an argument for the runners. This is important since lint-staged will do this for you. Please don’t reuse your tasks with paths from package.json.
*.jsand
*.jsxrunning as a pre-commit hook
--fixand add to commit
This will run
eslint --fix and automatically add changes to the commit. Please note, that it doesn’t work well with committing hunks (
git add -p).
prettierfor javascript + flow or typescript
stylefmtand add to commit | https://www.npmjs.com/package/lint-staged | CC-MAIN-2017-34 | refinedweb | 791 | 66.84 |
Introducing Krikos - A Python ML Framework for Learning and ExperimentationJuly 23, 2017
I am pleased to announce that I have published my first Python library: Krikos!
If you have been reading my previous posts, you have followed along with me as we developed a neural network micro-framework, which I have been using to augment my tutorial series. I realized that this was a super cool project which I wanted to make accessible to anyone by open-sourcing it and making it available on PyPI. Now, anyone can contribute to Krikos, and anyone can use it to start learning about and experimenting with neural networks.
As a slimmed down neural network framework, it is perfect for the learning AI researcher: it is simple enough to pick up quickly, but barebones enough that you are heavily involved in the development of your NN. Krikos is easy to use and experiment with, but demands the programmer’s effort and involvement in development.
Installing Krikos
Setting Krikos up for use in your project is as easy as:
pip install krikos
That’s it!
Using Krikos to Learn about ML
Krikos currently has two primary packages:
nn and
data.
nn Package
There are currently four classes in
nn:
Layer,
Loss,
Regularization, and
Network. The
Layer class defines various network layers; the
Loss class defines losses which can be used as your network’s objective; the
Regularization class defines regularization that your network can employ; and the
Network class defines an actual network architecture.
The superclasses can be inherited from to create your own layers, losses, and network architectures. For example, if you’d like to define a layer, simply do:
from krikos.nn.layer import Layer class CustomLayer(Layer): ...
The convention is to have a dictionary of parameters and gradients. Use parameters from the dictionary in the forward pass and compute gradients and save it to the dictionary in the backward pass. Also, a cache is used to save necessary values computed in the forward pass for the backward pass computation. The Loss requires only a forward and backward pass, as per convention. Following these conventions will allow your Layer to work with the Sequential network.
The Network superclass is perhaps the most salient class to inherit from. It has a train and eval function, and it instantiates the list of classes with different forward pass depending on train-time or test-time. To create your custom architecture, you can must call both forward and backward on all of its layers, and you must aggregate the gradients on your own. In future releases, gradients may be aggregated automatically, and the programmer need only reset the gradients to zero. Any changes made to the scheme of gradient computation will be documented on this blog. More detailed documentation is coming very soon.
data Package
The data class has the Loader superclass, which is used to create the CIFARLoader class. You can inherit from this class to create your own data loaders. Because the function to get a batch of data is already written, you must only take care of loading and preprocessing the data as you’d like.
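An illustrative sketch of that split (names assumed, not the real krikos classes): batching lives in the base class, so a subclass only loads and preprocesses its data.

```python
class Loader:
    """Base loader: get_batch is already written once, here."""
    def __init__(self, data, labels, batch_size):
        self.data, self.labels = data, labels
        self.batch_size = batch_size
        self.cursor = 0

    def get_batch(self):
        i, b = self.cursor, self.batch_size
        self.cursor = (i + b) % len(self.data)  # wrap around the dataset
        return self.data[i:i + b], self.labels[i:i + b]


class ToyLoader(Loader):
    """A subclass only takes care of loading/preprocessing."""
    def __init__(self, batch_size=2):
        data = [[0.0], [1.0], [2.0], [3.0]]  # stand-in for CIFAR images
        labels = [0, 1, 0, 1]
        super().__init__(data, labels, batch_size)
```

A real CIFARLoader would replace the toy lists with actual file loading and normalization, but would inherit get_batch unchanged.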
There are also some useful functions in utils.py in the data package.
Examples
You can find examples of framework usage here. There are currently examples of fully-connected and convolutional networks.
Contributing to Krikos
The source repository for Krikos can be found here. Feel free to add layers, network architectures, etc. and submit pull requests. I am excited to see how this project is received and built upon! | https://shubhangdesai.github.io/blog/Krikos | CC-MAIN-2018-13 | refinedweb | 590 | 54.73 |
GHC's primitive types and operations are described in detailed online documentation. The primops make extensive use of unboxed types and unboxed tuples, which we briefly summarise here. Names ending in # require the -XMagicHash extension (The magic hash); note that -XMagicHash is a purely lexical extension and does not bring anything into scope. For example, to bring Int# into scope you must import GHC.Prim. For some primitive types we have special syntax for literals, also described in the same section.
Allow use of view pattern syntax.
View patterns are enabled by the flag -XViewPatterns. More information and examples of view patterns can be found on the GHC Trac Wiki page.

The flag -XRecursiveDo enables the recursive do-notation.

The flag -XApplicativeDo enables an alternative translation of do-notation that uses Applicative operations where possible. Note: for a do block to be translated using Applicative alone, the final statement must match one of these patterns exactly:

return E
return $ E
pure E
pure $ E

Your code should just work as before when -XApplicativeDo is not enabled.

The flag -XTransformListComp enables generalised (SQL-like) list comprehensions, e.g. [ x | x <- xs, then sortWith by x ].
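A minimal illustration of a view pattern (an illustrative sketch in the manual's spirit, not an example from it):

```haskell
{-# LANGUAGE ViewPatterns #-}

-- A view pattern applies a function to the scrutinee and matches on the
-- result: `length -> n` runs `length` on the argument and binds n.
describe :: [a] -> String
describe (length -> 0) = "empty"
describe (length -> n) = "has " ++ show n ++ " elements"

main :: IO ()
main = putStrLn (describe [1, 2, 3 :: Int])
```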
GHC normally imports Prelude.hi files for you. If you'd rather it didn't, then give it a -XNoImplicitPrelude option. The idea is that you can then import a Prelude of your own. (But don't call it Prelude; the Haskell module namespace is flat, and you must not conflict with any Prelude module.) The -XPostfixOperators extension does not extend to the left-hand side of function definitions; you must define such a function in prefix form.
Allow the use of tuple section syntax
The -XTupleSections flag enables partially applied tuple constructors (tuple sections), for example (, True). With liberalised type synonyms, after expanding synonyms, foo has the legal (in GHC) type:
foo :: forall x. x -> [x]
GHC currently does kind checking before expanding synonyms (though even that could be changed).
After expanding type synonyms, GHC does validity checking on types, looking for malformed types. A data constructor may quantify a type variable a that does not appear in the result type of the constructor. Although it is universally quantified in the type of the constructor, such a type variable is often called "existential". Indeed, the above declaration declares precisely the same type as the corresponding existentially quantified data Foo declaration. The result type of each constructor must be the same (modulo alpha conversion). Such generalised declarations are enabled by -XGADTs; the -XGADTs flag also sets -XGADTSyntax and -XMonoLocalBinds. In the Term data type above, the type of each constructor must end with Term ty, but the ty need not be a type variable (e.g. the Lit constructor).
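The Term data type referenced above was lost in extraction; the following sketch is consistent with the surrounding description (every constructor's type ends with Term ty, and Lit's index Int is not a type variable):

```haskell
{-# LANGUAGE GADTs #-}

-- Each constructor ends in `Term ty`; the index need not be a variable.
data Term a where
  Lit    :: Int -> Term Int
  IsZero :: Term Int -> Term Bool
  If     :: Term Bool -> Term a -> Term a -> Term a

-- Pattern matching refines the type index, so eval is well-typed.
eval :: Term a -> a
eval (Lit i)    = i
eval (IsZero t) = eval t == 0
eval (If c t e) = if eval c then eval t else eval e
```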
Note that:
Record punning can also be used in an expression, writing, for example,
let a = 1 in C {a}
instead of
let a = 1 in C {a = a}
The expansion is purely syntactic, so the expanded right-hand side expression refers to the nearest enclosing variable that is spelled the same as the field name.
Allow the use of stand-alone deriving declarations.
GHC allows stand-alone deriving declarations, enabled by -XStandaloneDeriving. Unlike a deriving clause attached to a data declaration, a stand-alone declaration gives the instance context explicitly.
GHC now permits such instances to be derived instead, using the flag -XGeneralizedNewtypeDeriving, so one can write
newtype Dollars = Dollars Int deriving (Eq,Show,Num)
and the implementation uses the same Num dictionary for Dollars as for Int. Notionally, the compiler derives an instance declaration of the form
instance Num Int => Num Dollars
which just adds or removes the newtype constructor according to the type. built-in derivation applies (section 4.3.3. of the Haskell Report). (For the standard classes Eq, Ord, Ix, and Bounded it is immaterial whether the standard method is used or the one described here.) patterns synoyms [Jones2000].Mark Jones in=N...
It is perfectly fine to declare new instances of IsList, so that list notation becomes useful for completely new data types. Here are several example instances: essentially provide type-indexed data types and named functions on types, which are useful for generic programming and highly parameterised library interfaces as well as interfaces with enhanced static information, much like dependent types. They might also be regarded as an alternative to functional dependencies, but provide a more functional style of type-level programming than the relational style of functional dependencies.
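One such instance, for a hypothetical Stack wrapper (illustrative, not taken from the manual):

```haskell
{-# LANGUAGE TypeFamilies #-}
import GHC.Exts (IsList(..))

newtype Stack a = Stack [a] deriving Show

-- With this instance, -XOverloadedLists lets list notation like
-- [1, 2, 3] :: Stack Int build a Stack directly.
instance IsList (Stack a) where
  type Item (Stack a) = a
  fromList = Stack
  toList (Stack xs) = xs
```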
Indexed type families, or type families for short, are type constructors that represent sets of types. Set members are denoted by supplying the type family constructor with type parameters, which are called type indices. The difference between vanilla parametrised type constructors and family constructors is much like between parametrically polymorphic functions and (ad-hoc polymorphic) methods of type classes. Parametric polymorphic functions behave the same at all type instances, whereas class methods can change their behaviour in dependence on the class type parameters. Similarly, vanilla type constructors imply the same data representation for all type instances, but family constructors can have varying representation types for varying type indices..
Data families appear in two flavours: (1) they can be defined on the toplevel or (2) they can appear inside type classes (in which case they are known as associated types). The former is the more general variant, as it lacks the requirement for the type-indexes to coincide with the class parameters.
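A sketch of the associated flavour, in the spirit of the manual's GMap example (simplified, so treat the details as assumptions):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- An associated data family inside a class: each key type picks
-- its own map representation.
class GMapKey k where
  data GMap k :: * -> *
  empty   :: GMap k v
  insert  :: k -> v -> GMap k v -> GMap k v
  lookupG :: k -> GMap k v -> Maybe v

-- Bool keys need only two slots.
instance GMapKey Bool where
  data GMap Bool v = BoolMap (Maybe v) (Maybe v)
  empty = BoolMap Nothing Nothing
  insert False v (BoolMap _ t) = BoolMap (Just v) t
  insert True  v (BoolMap f _) = BoolMap f (Just v)
  lookupG False (BoolMap f _) = f
  lookupG True  (BoolMap _ t) = t
```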
Here are some examples of admissible and illegal type instances:
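The examples themselves were lost in extraction; they look approximately like this (a reconstruction, so treat the details as a sketch):

```haskell
type family F a :: *

type instance F [Int]  = Int    -- OK
type instance F String = Char   -- OK

-- Illegal: a type parameter mentions a type family
-- type instance F (F a) = a

-- Illegal: the right-hand side must not be a forall type
-- type instance F Float = forall a. a
```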
In order to guarantee that type inference in the presence of type families remains decidable, the form of type instance declarations is restricted. WARNING: this facility may be withdrawn in the future.

With implicit parameters, we can define a least function in terms of an implicitly parameterised sort function (whose comparison is the implicit parameter ?cmp):

least xs = head (sort xs)
Without lifting a finger, the ?cmp parameter is propagated to become a parameter of least as well. With explicit parameters, the default is that parameters must always be explicit. least's call site is quite unambiguous, and fixes the type a.
An implicit parameter is bound using the standard let or where binding forms. For example, we define the min function by binding cmp.
min :: Ord a => [a] -> a min = let ?cmp = (<=) in least
A group of implicit-parameter bindings may occur anywhere a normal group of Haskell bindings can occur, except at top level. That is, they can occur in a let (including in a list comprehension, or do-notation, or pattern guards) or in a where clause. The bindings are not nested, and may be re-ordered without changing the meaning of the program.

The flag -XRankNTypes (which implies -XExplicitForAll) enables higher-rank types. That is, you can nest foralls arbitrarily deep in function arrows; for example, a function argument whose type is a forall-type (also called a "type scheme"), including a type-class context, is legal.

GHC has the same behaviour for "Variable out of scope" errors as for type errors: it terminates compilation by default. You can defer such errors by using the -fdefer-out-of-scope-variables flag. This flag defers errors produced by out of scope variables until runtime, and converts them into compile-time warnings. These warnings can in turn be suppressed entirely by -fno-warn-deferred-out-of-scope-variables.
The result is that a hole or a variable will behave like undefined, but with the added benefits that it shows a warning at compile time, and will show the same message if it gets evaluated at runtime. This behaviour follows that of the -fdefer-type-errors option, which implies -fdefer-typed-holes and -fdefer-out-of-scope-variables.

error:
    • Found type wildcard ‘_’ standing for ‘Bool’
      To use the inferred type, enable PartialTypeSignatures
    • In the type signature: not' :: Bool -> _
    • Relevant bindings include
        not' :: Bool -> Bool (bound at Test.hs:5:1)
When a wildcard is not instantiated to a monotype, it will be generalised over, i.e. replaced by a fresh type variable, e.g.
foo :: _ -> _
foo x = x
-- Inferred: forall t. t -> t

Without -XNamedWildCards, a named wildcard such as _a is treated as an ordinary type variable, so this program is rejected:

error:
    • Couldn't match expected type ‘_a’ with actual type ‘Bool’
      ‘_a’ is a rigid type variable bound by
        the type signature for: foo :: forall _a. _a -> _a
        at Test.hs:4:8
    • In the expression: False
      In an equation for ‘foo’: foo _ = False
    • Relevant bindings include foo :: _a -> _a (bound at Test.hs:5:1)
Compiling this program with -XNamedWildCards (as well as -XPartialTypeSignatures) enabled produces the following error message reporting the inferred type of the named wildcard _a.
Test.hs:4:8: warning: [-Wpartial-type-signatures]
    • Found type wildcard ‘_a’ standing for ‘Bool’
    • In the type signature: foo :: _a -> _a
    • Relevant bindings include
        foo :: Bool -> Bool (bound at Test.hs:5:1)
The third kind of wildcard is the extra-constraints wildcard, e.g.

arbitCs :: _ => a -> String
-- Inferred: (Enum a, Eq a, Show a) => a -> String
-- Error:
error:
    Found constraint wildcard ‘_’ standing for ‘(Show a, Eq a, Enum a)’
    To use the inferred type, enable PartialTypeSignatures
    In the type signature: arbitCs :: _ => a -> String

An extra-constraints wildcard can also be combined with annotated constraints:

-- Error:
error:
    Found constraint wildcard ‘_’ standing for ‘()’
    To use the inferred type, enable PartialTypeSignatures
    In the type signature: arbitCs' :: (Enum a, _) => a -> String
An extra-constraints wildcard can also lead to zero extra constraints to be inferred, e.g.
noCs :: _ => String
noCs = "noCs"
-- Inferred: String
-- Error:
Test.hs:13:9: error:
    Found constraint wildcard ‘_’ standing for ‘()’
    To use the inferred type, enable PartialTypeSignatures
    In the type signature: noCs :: _ => String

Partial type signatures can also be used in pattern and expression signatures, except that extra-constraints wildcards are not supported in pattern or expression signatures. In the following example a wildcard is used in each of the three possible contexts.
{-# LANGUAGE ScopedTypeVariables #-}
foo :: _
foo (x :: _) = (x :: _)
-- Inferred: forall w_. w_ -> w_
Anonymous and named wildcards can occur on the left hand side of a type or data instance declaration; see Wildcards on the LHS of data and type family instances.
Anonymous wildcards are also allowed in visible type applications (Visible type application). If you want to specify only the second type argument to wurble, then you can say wurble @_ @Int where the first argument is a wildcard.
In all other contexts, type wildcards are disallowed, and a named wildcard is treated as an ordinary type variable. For example:
class C _ where ...          -- Illegal
instance Eq (T _)            -- Illegal (currently; would actually make sense)
instance Eq _a => Eq (T _a)  -- Perfectly fine, same as Eq a => Eq (T a)

The -fdefer-typed-holes and -fdefer-out-of-scope-variables flags enable this behaviour for typed holes and variables. Should you so wish, it is possible to enable -fdefer-type-errors without enabling -fdefer-typed-holes or -fdefer-out-of-scope-variables, by explicitly specifying -fno-defer-typed-holes or -fno-defer-out-of-scope-variables.
A name whose second character is a single quote (sadly) cannot be quoted in this way, because it will be parsed instead as a quoted character. For example, if the function is called f'7 (which is a legal Haskell identifier), an attempt to quote it as 'f'7 would be parsed as the character literal 'f' followed by the numeric literal 7. There is no current escape mechanism in this (unusual) situation.
''T has type Name, and names the type constructor T. That is, ''thing interprets thing in a type context, whereas 'thing interprets it in an expression context.
"Linguistic geometry methods for autonomous, mobile robot control"
Material Information
Title:
"Linguistic geometry methods for autonomous, mobile robot control"
Creator:
Fletcher, Christopher Martin
Place of Publication:
Denver, CO
Publisher:
University of Colorado Denver
Publication Date:
1996
Language:
Physical Description:
131 leaves : illustrations ; 29 cm
Subjects
Subjects / Keywords:
Artificial intelligence ( lcsh )
Linguistic geometry ( lcsh )
Mobile robots ( lcsh )
Robotics ( lcsh )
Genre:
bibliography ( marcgt )
theses ( marcgt )
non-fiction ( marcgt )
Notes
Bibliography:
Includes bibliographical references (leaf 131).
Thesis:
Submitted in partial fulfillment of the requirements for the degree, Master of Science, computer science
General Note:
Department of Computer Science and Engineering
Statement of Responsibility:
by Christopher Martin Fletcher.
Record Information
Source Institution:
University of Colorado Denver
Holding Location:
Auraria Library
Rights Management:
All applicable rights reserved by the source institution and holding location.
Resource Identifier:
37311907 ( OCLC )
ocm37311907
Classification:
LD1190.E52 1996m .F54 ( lcc )
"LINGUISTIC GEOMETRY METHODS FOR AUTONOMOUS, MOBILE ROBOT CONTROL" by Christopher Martin Fletcher A thesis submitted to the University of Colorado at Denver in partial fulfillment of the requirements for the degree of Master of Science Computer Science 1996
This Thesis for the Master of Science degree by Christopher Martin Fletcher has been approved by

Boris Stilman

Tom Altman
Fletcher, Christopher Martin (M.S., Computer Science)
Linguistic Geometry Methods for Autonomous, Mobile Robot Control
Thesis directed by Professor Boris Stilman

ABSTRACT

Autonomous robots have been a practical goal of artificial intelligence research since the beginning of the field. Dexterous, decision-making automatons will permit reconnaissance of hazardous environments and enhance the exploration of space. While robots have made inroads into the factory to execute repetitive tasks, widespread use of mobile intelligent robots has not been realized. This has been due chiefly to the inability of a robot to successfully interact in a dynamic environment. A key factor is that the software has not been adept at reacting to changes in the domain. Moreover, there is a lack of formal methods to represent knowledge and systematic changes in this class of problems. A linguistic approach is proposed in this thesis as the basis for robot control in complex environments. Linguistic geometry provides a formal mechanism for representing knowledge and reasoning in the general class of problems of controlling movement in a complex system. Practical applications include: robotics, scheduling, control systems, military gaming, etc. The approach is rooted in the theory of formal languages as well as the theories of problem solving and planning. This thesis presents a linguistic approach for geometric reasoning and applies the technique to a simulated mobile, autonomous robot operating in a dynamic environment. The software application will demonstrate the ability of this approach to successfully generate solutions to complex scenarios encountered by a mobile robot. This thesis will demonstrate how these geometric reasoning methods can be successfully applied to a realistic, intelligent robotic system.

This abstract accurately represents the content of the candidate's thesis. I recommend its publication.

Boris Stilman
CONTENTS

1. Introduction ... 1
2. Linguistic Geometry Methods ... 5
2.1 Knowledge Representation ... 5
2.2 State Transition in the System ... 7
3. Mobile Robot Planning and Motion Generation ... 12
3.1 Knowledge Representation ... 12
3.2 Robot Path Planning in a Simple Environment ... 16
3.2.1 Software Design ... 16
3.2.1.1 Objects ... 16
3.2.1.2 Methods ... 20
3.2.1.3 Classes ... 21
3.2.2 Test Results ... 24
3.3 Robot Path Planning in an Environment with Obstacles ... 28
3.3.1 A Language of Admissible Trajectories ... 29
3.3.2 Design Augmentation ... 32
3.3.3 Objects ... 32
3.3.3.1 Methods ... 33
3.3.3.2 Classes ... 35
3.3.4 Test Results ... 38
3.3.4.1 "Walls" Scenario ... 39
3.3.4.2 "Rooms" Scenario ... 47
3.3.4.3 Invisible Obstacles Scenario ... 54
3.4 Robot Path Planning with Static and Dynamic Obstacles ... 57
3.4.1 Trajectory Networks ... 59
3.4.2 Design Augmentation ... 61
3.4.2.1 Objects ... 61
3.4.2.2 Methods ... 63
3.4.2.3 Classes ... 65
3.4.3 Test Results ... 67
3.4.3.1 Single Dynamic Obstacle Scenario ... 68
3.4.3.2 Multiple Dynamic Obstacle Scenario ... 73
3.4.3.3 Static and Dynamic Obstacle Scenario ... 78
4. Summary and Conclusion ... 87
Appendix A Mobile Robot Simulation Software ... 91
Appendix B Source Code ... 96
Bibliography ... 131
1. INTRODUCTION Research into autonomous, mobile robots has flourished recently due to the tremendous reduction in the size and cost of components especially sufficiently powerful computers. However, the problems encountered by mobile robotic systems remain considerable. A mobile robot operates in the physical world. This world can be uncompromisingly dynamic and unpredictable. Consequently, an intelligent agent must continually monitor a situation and adapt its current and planned activity to a changing environment. In addition to the volatility of the environment, the area of operation may be too complex to fully represent. Thus, an agent may be capable of comprehending only a portion of the domain. Timeliness is another constraint facing autonomous robots. Mobile robots must perform in real-time. Obviously, the criticality is paramount in certain applications, e.g. an autonomously operating airplane versus a mail delivery robotic system, but some measure of a real-time capability is required in all such systems. Why choose to implement totally autonomous systems? There are applications where manual (remote) or semi-autonomous control of a vehicle operation is sufficient. Many applications, however, inherently prevent human interaction. Today, mobile robotic systems are being designed to work in conditions extremely detrimental to humans and where external communications are hampered by the environment. One such system detailed in [MMAG, 1991] is an inspection robot that operates in a hazardous waste facility, evaluating storage container integrity. Additionally, there are scenarios where the situation changes rapidly and at great distances from any possible human intervention. Applications common to this domain. are exploration and reconnaissance. The exploits of the robot Dante exploring an active volcano recently generated front page news. 
Finally, there is a class of mostly military systems where communications may be impossible due to potential interference or detection by an opposing force. Applications in this domain include covert surveillance, and search and rescue operations. A primary theme shared in all such systems is the removal of direct human
involvement due to the danger and remoteness involved. Furthermore, they require an intelligent, decision-making capability independent of human intervention. The application investigated in this thesis is one of a simulated autonomous robotic vehicle operating in a complex environment. This mobile robot is assigned a task to complete to reach a pre-determined destination or point of interest in the most optimal manner possible. The vehicle operates under some real-world constraints. It may only travel at a limited velocity on a fmite 2-D plane. The robot possesses only limited knowledge about the operational area. This restriction is manifested as a limited field of view. A robot may be re-tasked to a new destination based on changing priorities. A robot must be flexible allowing for new tasking at any point along its travels. Finally, there are obstacles in the area that inhibit free movement. The obstacles may possess any shape and size. They may also move, disappear and reappear in new locations at will. Under these constraints the mobile robot plans paths to a point of interest and executes movements along selected paths until the task is complete. There are several considerations for an intelligent agent operating in these surroundings. One factor is the representation of knowledge. Information concerning the area of operation, the obstacles, the robot itself, and the decision making process must be represented in a manner that streamlines path generation and execution. Functioning in a dynamic environment presents difficulties in knowing how the knowledge base is affected by change. That is, reflecting what has and what has not changed without reconsidering every piece of knowledge. Elements of change in the scenario include: obstacle location and size, destination, and robot location. Finally, there is the search problem. From any location in the area many paths can be considered. Which path takes it closer to the destination? Which is the best path? 
A comprehensive approach, considering all possible paths, often fails real-time criteria. A strategy is needed to quickly generate only the most promising paths. The approach must consider the short term goal of optimal movement within the field of view as well as the long term goal of premium movement towards the destination. One of the basic ideas in finding solutions to a system is to break it into smaller sub-problems to be solved and then combine the solutions to the smaller problems together to resolve the whole system. This approach suffers generally due to the complexity of real-world
systems. The subsystems are seldom independent and the solutions to these subsystems are, therefore, dependent on the solutions to other subsystems. This research presents an approach for formulating paths and executing movement on the paths that permit the vehicle to reach its destination in an optimal manner. At the core of this proposal is Linguistic Geometry, a concept for reasoning in this class of problems. Linguistic geometry formalizes a mathematical model for the representation of general heuristic knowledge and provides the search infrastructure for deriving an optimal solution. The theory traces its roots back to the early 1960s, with the development of a syntactic approach to natural language. The development of formal grammars by Chomsky (1963) led to application of this research in other new areas. In particular, grammars were utilized for pattern recognition by Fu (1982), Narisimhan (1966), Pavlidis (1977) and picture descriptions languages by Shaw (1969), Feder (1971), and Rosenfeld (1979). Stilman applied similar techniques to hierarchical complex systems evolving into linguistic geometry. The PIONEER project provided the early framework for linguistic geometry. PIONEER is a system that investigated applying sophisticated human heuristics to computer-based chess. This research resulted in an implementation that produced highly selective searches. This framework was also successfully adapted to a power control and maintenance planning project [Stilman, 1992]. Very recent applications include: a real-time fire vehicle routing application currently being developed into a commercial project under the auspices of the Lockheed Martin corporation, and a high integrity software engineering application to provide the computer-assisted generation of mathematical proofs for software programs. This research is being conducted at the Sandia National Labs. The remainder of this thesis explores linguistic geometry as applied to the mobile robot scenario. 
Section 2, Linguistic Geometry Methods, presents the theory of linguistic geometry. This includes description of the system variables and a description of movement and state transition. A derivation of a grammar of shortest trajectories is also presented. Section 3, Mobile Robot Planning and Motion Generation, introduces the mobile robot application. It is here that the linguistic technique is applied to the robotic system. The application is examined in this discussion from the representation
of knowledge to the software design and implementation. The shortest trajectory algorithm is augmented through the introduction of the grammar of admissible trajectories. This extension generates optimal (shortest) trajectories and sub-optimal paths. Sub-optimal paths may be required in the presence of obstacles blocking the optimal routes. Various test cases are illustrated and the results are weighed. Appendix A follows the conclusion of the thesis describing the robot simulator software used for robot software testing and visualization.
2. LINGUISTIC GEOMETRY METHODS

This chapter introduces the reader to a linguistically based, mathematical tool for heuristic knowledge representation and search generation in a complex system. A complex system is a class of problems that can be represented as a set of elements and positions where elements transition from one state to another. Dividing the problem into a hierarchy of dynamic subsystems replaces a static system with a single goal. The goals of each of the subsystems are independent, but coordinated to the system goal. Linguistic geometry represents the hierarchy of dynamic subsystems with a hierarchy of formal languages. Each sentence, a group of words or symbols of a lower level language, corresponds to a word of the next higher level language. The first level grammar, the language of trajectories, yields a set of symbols and parameters as illustrated below.

a(x1)a(x2)a(x3) ... a(xn)

The variables x1 through xn are the domain specific knowledge of the system. For example, in the mobile robotic system control application, the variables might represent discrete map locations on the planned path of the robot. Second and third level languages build on the strings produced by the language of trajectories to produce higher level decision strings applicable to the environment. Initially, we concentrate on the language of trajectories as a method to create paths.

2.1 KNOWLEDGE REPRESENTATION

To begin, there must exist some techniques for formally representing knowledge.

Definition of a Complex System

A definition of a complex system [Stilman, 1993] is described by the following 8-tuple:

(X, P, Rp, ON, V, Si, St, TRANSITION)

where:

X = {xi} is a finite set of points that define locations in an area.
P = {pi} is a finite set of elements that define the dynamic objects of the model. P is the union of two non-intersecting subsets: P1 and P2.

Rp(x,y) is a set of binary functions of reachability in X, where x, y are cells from X and p is a member of P. Rp(x,y) is true if element p located at cell x can reach cell y; otherwise, it is false.

ON(p) = x, where ON is a partial function of placement from P onto X.

V is an evaluation or cost function for the set P depicting a value associated with each member.

Si is the description of the set of initial states of the system by a certain collection of Well Formed Formulas of the first order predicate calculus: {ON(pi) = xi}

St is the description of the set of target states of the system.

TRANSITION(p,x,y) is the description of the operators for transition of the system from one state to another. If an element p wants to transition from its current location x to a new location y, it is described by the following states:

precondition: ON(p) = x & Rp(x,y)
delete: ON(p) = x
add: ON(p) = y
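The TRANSITION operator above can be sketched in executable form (Python used as pseudocode; the 8-neighbour reachability function is an illustrative assumption, not part of the formal definition):

```python
def transition(on, reachable, p, x, y):
    """Apply TRANSITION(p, x, y) to the placement function ON."""
    # precondition: ON(p) = x and Rp(x, y)
    if on.get(p) != x or not reachable(p, x, y):
        return False
    # delete ON(p) = x, add ON(p) = y
    on[p] = y
    return True


def reachable(p, x, y):
    """Toy Rp: one step to any of the 8 neighbouring cells."""
    (x1, y1), (x2, y2) = x, y
    return max(abs(x1 - x2), abs(y1 - y2)) == 1
```

A move that violates the precondition leaves the placement unchanged, mirroring the delete/add semantics above.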
Representation of Distance

Geometric properties of the system are a key representation concept in linguistic geometry. In linguistic geometry, distance, measured as the minimum amount of time required to reach a given location, is represented in a mapping (MAP) function. This function uses the notion of Rp(x,y), the reachability of locations in the domain. Critical to MAP is the concept that a set of locations is reachable in a certain number of steps and is not reachable in any less. Figure 2.1 illustrates this concept. The set Mkx,p is a finite subset of points from the set X specific for element p and for a given location x. Its membership is made up of cells that are reachable in k steps from x and are not reachable in k-1 or fewer steps. Stated formally, MAPx,p is the set of all Mkx,p (k = 1, ..., n) where Rp(x,y) is true and n is the number of steps from x to y. We apply this function in the next section to help in constructing a grammar of shortest trajectories.

Figure 2.1 Reachability for a MAP Function

2.2 STATE TRANSITION IN THE SYSTEM

A Language of Shortest Trajectories

Assume a robot must generate optimal trajectories from a start location x0 to a destination location y0. The robot possesses a MAP of distances between locations in a fixed domain. We wish to generate strings of locations that describe the optimal path(s) a robot may travel to reach a pre-determined destination. Table 2.1 is a presentation of the controlled Grammar Gt(1) of Shortest Trajectories that is capable of generating the strings [Stilman, 1993].
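The MAP function can be computed by breadth-first search, so that each Mk is exactly the set of cells first reached at step k (a sketch; the 6x6 area and 8-neighbour reachability are assumptions matching the worked example of this chapter):

```python
from collections import deque

def map_distances(x, moves, in_area):
    """MAP for one element: cell -> minimum number of steps from x."""
    dist = {x: 0}
    frontier = deque([x])
    while frontier:
        cur = frontier.popleft()
        for nxt in moves(cur):
            if in_area(nxt) and nxt not in dist:
                dist[nxt] = dist[cur] + 1   # first reached at step k
                frontier.append(nxt)
    return dist

def king_moves(cell):
    """Rp as one step to any of the 8 neighbours."""
    cx, cy = cell
    return [(cx + dx, cy + dy)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)]

def in_6x6(cell):
    return 1 <= cell[0] <= 6 and 1 <= cell[1] <= 6
```

For example, map_distances((1, 1), king_moves, in_6x6) reproduces the grid of Figure 2.2, with distance 4 at cell (5, 4).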
Table 2.1 Grammar of Shortest Trajectories

L     Q      Kernel
1     Q1     S(x,y,l) -> A(x,y,l)
2i    Q2     A(x,y,l) -> a(x) A(next_i(x,l), y, f(l))
3     Q3     A(x,y,l) -> a(y)

V_T = {a}, V_N = {S, A}, Pred = {Q1, Q2, Q3}

Q1(x,y,l) = (MAP_{x,p}(y) = l) AND (l > 0)
Q2(x,y,l) = (l >= 1)
Q3(x,y,l) = (l = 0)
MOVE_l(x) = SUM ∩ ST_1(x) ∩ ST_{l0-l+1}(x0)

if MOVE_l(x) = {m1, m2, ..., mr} != empty then     the MOVE set at length l is not empty
    next_i(x, l) = m_i for i <= r                  next returns each member of the set per reference
else
    next_i(x, l) = x                               if MOVE is empty, the robot has no next move
end if

In order to facilitate understanding of G_t(1), an example is presented below.
Figure 2.2 6x6 Example Domain

At this stage, we encounter two functions. The function f is trivial, simply subtracting 1 from the current length of the trajectory. The next function produces a member of the set of next possible locations in the trajectory. This is an iterating function that returns a new value with each application, until there are no more. Iteration is indicated by the i subscript.

Function next is the result of intersecting three sets. The first, SUM, contains cells that are on-the-way to the destination. At least one trajectory will pass through each of these SUM cells. A cell is on-the-way when the MAP'ed distance from the start to the cell, summed with the distance from the same cell to the destination, is exactly the total distance from the start cell to the destination cell. In our example, this set is:

{(1,1); (2,1); (2,2); (3,2); (3,3); (4,3); (4,4); (5,4)}

The second set, ST_k, contains those cells that are reachable from the starting location in exactly l0-l+1 steps. In the example, k = 1. This set is:

{(2,2); (2,1); (1,2)}

The final set, ST_1, contains cells that are reachable from the current cell in exactly one step. Of course, at step = 1 this is the same set as ST_k. So, the intersection of the sets is:

{(2,1); (2,2)}

The application of production 2, combined with the next evaluation, produces:

A((1,1),(5,4),4) -> a((1,1)) A((2,1),(5,4),3)
A((1,1),(5,4),4) -> a((1,1)) A((2,2),(5,4),3)

Predicate Q2 is a check for l >= 1. With l = 3, this evaluates to true and we jump to branch two. Production 2, expressed in this manner, is meant to indicate new
productions should be applied in parallel to all non-terminals generated from previous applications. The grammar continues in this fashion until the value of length decrements to 0. At this stage, repeated applications of production 2 have resulted in the following strings:

-> a(1,1) a(2,1) a(3,2) a(4,3) A((5,4),(5,4),0)
-> a(1,1) a(2,2) a(3,3) a(4,4) A((5,4),(5,4),0)
-> a(1,1) a(2,2) a(3,2) a(4,3) A((5,4),(5,4),0)
-> a(1,1) a(2,2) a(3,3) a(4,3) A((5,4),(5,4),0)

Each derivation now fails the predicate check Q2 (l >= 1 is false for l = 0) and moves to production 3. This production adds the destination a(5,4) to the string. The result of executing this grammar is that the robot has planned all optimal paths from the initial location (1,1) to the destination location (5,4). For this scenario, each path is as optimal as the next. So, to reach the destination all that remains is for the robot to select a path upon which to move and to generate movement on the trajectory.
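The derivation above can be reproduced mechanically. The following Python sketch is an illustration, not the thesis implementation: it assumes the king-move MAP distance used in the example, computes the SUM and ST sets, and enumerates every shortest trajectory for the 6x6 domain.

```python
# Illustrative enumeration of the G_t(1) derivations for the 6x6 example.
# next_i(x, l) is realized as SUM ∩ ST_1(x) ∩ ST_{l0-l+1}(x0).
from itertools import product

def dist(a, b):
    """MAP distance under king moves (Chebyshev metric) -- an assumption."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def trajectories(x0, y0, cells):
    l0 = dist(x0, y0)
    # SUM: cells whose distance from start plus distance to goal is exactly l0
    sum_set = {c for c in cells if dist(x0, c) + dist(c, y0) == l0}
    paths = [[x0]]
    for k in range(1, l0 + 1):            # k = l0 - l + 1 as l counts down
        st_k = {c for c in cells if dist(x0, c) == k}
        # extend each partial path by every cell in SUM ∩ ST_k ∩ ST_1(last)
        paths = [p + [c] for p in paths
                 for c in sum_set & st_k if dist(p[-1], c) == 1]
    return paths

cells = set(product(range(6), range(6)))
paths = trajectories((1, 1), (5, 4), cells)
print(len(paths))                          # 4 shortest trajectories
```

Running the sketch reproduces exactly the four strings derived above, confirming that the grammar generates all shortest trajectories and no others.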
3. MOBILE ROBOT PLANNING AND MOTION GENERATION

This section introduces a robot planning and motion generation application utilizing a linguistic geometry control model. The application employs a robot that must plan and execute a path from its current location to a destination location. The robot is staged in increasingly complex environments --from no obstacles, to static obstacles, to dynamic obstacles. Each enhancement to the model is measured with regards to the impact on the design and implementation, as well as the performance of the design.

We measure performance in several ways. First, the robot must be able to execute its tasks in the most optimal manner supported by the environment. A key feature of this requirement is the ability of the software to plan and execute a task without search inefficiencies or backtracking. Second, the robot shall be capable of responding to changes in the domain. A key feature of this requirement is the effectiveness of the software in recognizing systematic changes and reacting. The first requirement still applies in these situations, so the software must implement changes in the most optimal manner available. Third, the robot shall execute tasks in a timely fashion.

Success at each stage will be demonstrated via computer simulation. Robot features (field of view, speed) and work area characteristics (size, obstacle location) are parameters of this simulation. Appendix A details the simulation software.

Section 3.1 is a review of the key concepts for representing knowledge in the linguistic geometry model. The concepts will be explored with regards to the requirements of a robot control application. Sections 3.2 through 3.5 are the design & implementation details of the robot control application.

3.1 KNOWLEDGE REPRESENTATION

Representation of a complex system

Earlier, a linguistic geometry model was defined for a generic complex system. This concept is now focused on the specifics of the proposed mobile robot control paradigm.
A complex system is defined as the following 8-tuple: (X, P, Rp, ON, V, Si, St, TRANSITION) where:

X = {x_i} is a finite set of points that define the arena of operation. X represents the work area where the robots operate; a 2-dimensional space divided into equally sized, atomic cells. The work area dimensions for this application will be a parameter of the simulation. Although the work area is somewhat benign in definition, calculating the distance between discrete cells, determining reachability, and representing obstacles are key design issues that are tightly coupled to the design of the work area.

P = {p_i} is a finite set of elements that define the dynamic objects of the model. P is the union of two non-intersecting subsets: P1 and P2. P represents the robot in the application. In the general model presented earlier, P was the sum of two sets --each representing an opposing side in a gaming or interception scenario. Initially, P will consist of the single robot under test. Dynamic obstacles, essentially functioning as an opposing element to the robot, will be introduced in the final manifestation of the application. The robot is the source of intelligent activity in the model. It operates on the work area using methods that measure distance, determine reachability, derive trajectories, etc.

R_p(x,y) is a set of binary functions of reachability in X, where x, y are cells from X and p is a member of P. R_p(x,y) is true if element p located at cell x can reach cell y. Otherwise, it is false. This definition directly applies to the robot control paradigm. In the simplest stage of the model, with no obstacles, the reachability relation is true for all cells that the robot can reach given the robot's speed, etc. When obstacles are introduced in later stages of the design, the reachability function must classify those cells occupied by obstacles, but still within the robot's ability, as unreachable. Cells that are
entirely shut off from access by obstacles, as illustrated in the example below, are also classified as unreachable.

Figure 3.1 Unreachable cell not containing an obstacle

The reachability function cannot be as rigidly defined in the presence of dynamic (moving) obstacles. A cell may be unreachable in one time interval only to be re-evaluated as reachable in a subsequent time interval as an obstacle or robot moves. In this environment, the robot control implementation must provide a reachability function that incorporates another variable: the particular state of a cell at the time the function is to be applied.

ON(p) = x, where ON is a partial function of placement from P onto X. This function is used to describe the cell, x, currently occupied by an element of P, i.e. a robot or a dynamic obstacle. A robot can occupy only one cell, while a dynamic obstacle may occupy one or more cells, depending upon its size. A cell is either occupied or is not occupied, i.e. there is no partial occlusion.

V is an evaluation function on the set P describing the value of each member. The evaluation function does not apply in this model.

S_i is the description of the set of initial states of the system by a certain collection of Well Formed Formulas of the first order predicate calculus: {ON(p_i) = x_i}. This set represents the initial locations in X of a robot and dynamic obstacles in the application. Both are provided their initial location as a parameter of the simulation.
S_t is the description of the set of target states of the system. This set represents the destination location in X of a robot in the application. A robot will be provided its destination location as a parameter of the simulation. Obstacles in this model do not possess target states.

TRANSITION(p,x,y) is the description of the operators for transition of the system from one state to another. TRANSITION characterizes change in the system. At each time interval, the change in the state of the work area is reflected in two lists. The remove list contains the current locations of the dynamic elements in the model while the add list contains the new locations, after a time step increment has occurred. Dynamic elements characterize change through their locations on the work area.

Measurement of Distance

Distance measurement is a simple, but key concept in geometric reasoning. Earlier, a map component was introduced to provide elements operating in an area with the capability of determining the distance from one location to another. This paradigm carries over to the implementation of such a system. In the robot control application, a function is provided to a robot to map the distance from one cell in the work area to another. As in the theoretical presentation, distance is defined as the smallest number of time intervals required to reach a given cell from a start cell.
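Distance in time intervals can be illustrated concretely. The sketch below is an assumption-laden illustration, not the thesis code: it presumes king-move travel at a fixed speed of cells per interval, and the helper name `time_intervals` is hypothetical. The parameter values match the two simulations presented later in Section 3.2.2.

```python
def time_intervals(start, finish, speed):
    """Smallest number of time intervals to reach `finish` from `start`.
    Assumes king moves: the larger coordinate difference dominates."""
    max_diff = max(abs(start[0] - finish[0]), abs(start[1] - finish[1]))
    return -(-max_diff // speed)        # ceiling division over whole intervals

print(time_intervals((14, 14), (0, 5), 2))   # 7 intervals at speed 2
print(time_intervals((1, 0), (29, 29), 3))   # 10 intervals at speed 3
```

Note that a partially used interval still costs a whole interval, hence the ceiling division: 29 cells at speed 3 costs 10 intervals, not 9.67.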
3.2 ROBOT PATH PLANNING IN A SIMPLE ENVIRONMENT

The initial robot control application introduced in this section contains a single robot operating in a 30x30 work area containing no obstacles. A robot is defined with two parameters. Velocity is measured in units of work area cells the robot may travel in a single time interval. There are no restrictions dictating direction of travel, i.e. the robot can change direction without a loss of velocity. Field of view, the second parameter, measures how far a robot may see. This parameter, also measured in cells, defines the visible horizon of the robot. This is a critical robot characteristic in that it defines a local arena in which the robot plans and moves. Information about cells outside the field of view is limited to simple distance. The field of view defines a square area around the current robot location, essentially simulating an omni-directional sensor capability. The robot's view stops at the edges of the work area, creating a more rectangular view in those instances.

Two robots with slightly different functional characteristics will be presented in separate test cases. The first robot travels two cells in any direction in one time interval. It has a field of view of six cells. The robot in the second example travels three cells in any direction in one time interval. It has a smaller field of view of only three cells.

3.2.1 Software Design

The concepts here represent the basis of the design that will be augmented in later sections as a more complex environment is introduced. An object oriented methodology characterizes elements of the design using the following steps: identify and classify objects & methods from the requirements; group objects & methods into classes; demonstrate class interaction.

3.2.1.1 Objects

Location

A Location object identifies a unique position in the work area. The two-dimensional presentation of the work area drives the x and y attributes of the object. Examples of different instantiations of robot Locations are: start, current, destination.
Cell

The cell object is a single, atomic component of the work area. Prior to path planning, a cell consists only of its location tag. When a path is planned through a cell, however, it acquires other attributes describing its position in the trajectory(s). This is detailed in the following discussions of trajectory and plan.

Work Area

The work area is a matrix of cells. The work area dimension is set by x and y size elements supplied as parameters to the simulation. The robot is strictly confined to this arena. At the start of a job, the work area cells are independent elements representing only locations. As a trajectory is built up defining the path plan, the cells are bound to form a network of locations over which the robot may travel to reach the job destination.

Robot

A robot object generates all activity in the work area. It integrates and controls all of the previously presented objects, using those objects to plan and execute movement to a goal. A robot is characterized by its speed and field of view.

Map

The map object is a critical and powerful element of the design. It is the foundation of the robot control implementation. Map represents distance from a location in the work area to another location in the work area. There are two basic approaches to designing the map.

A static map plots all of the distances from a location to any other location in a pre-determined data structure. This design incorporates a relative distance map that places an arbitrary location at the center of the data structure. Other elements of the data structure represent delta x's and delta y's from the center of the structure (delta x = 0, delta y = 0). Each of the elements of the structure contains a distance relative to the center location. Distance is determined from operational robot parameters such as speed and direction. When a distance calculation is needed, the starting location is mapped to the center element of the data structure.
All locations adjacent to the starting location are mapped to those cells adjacent to the center (delta x=1, delta y=0; delta x=0, delta y=1; ...) and so on, until all of the cells are assigned relative locations in the structure. For example, assume the cells adjacent to
the center location are labeled with distances as illustrated in Figure 3.2. An arbitrary location (x=10, y=20) is assigned to the center location. The distances to all locations from (10,20) are immediately known based on each location's delta x, delta y from (10,20). Keep in mind the assignment of (10,20) to the center location is variable. If the robot is in a new location, for example (21,12), and distances are required, (21,12) is simply assigned to the center location and all distances from (21,12) are known. The representation of the static map must be four times as large as the work area, as illustrated in the figure. This is so locations at the extremes of the work area can also be placed in the center location and still map the full extent of the work area. Also, work area boundaries must be considered when mapping relative locations to absolute locations.

Figure 3.2 Static Map Representation of distances from (10,20)

A dynamic map requires that only operational parameters, e.g. speed and direction coefficients of a robot, be stored. Distances are computed based on the difference between the x coordinates and the y coordinates of two locations, factoring in the speed of the robot. A simple algorithm for dynamic distance is presented below.

Max_Diff = MAX(ABS(Start_x - Finish_x), ABS(Start_y - Finish_y))
Distance = Max_Diff / Robot_Speed, plus one interval if Max_Diff % Robot_Speed is nonzero

There are advantages to both kinds of Map representation. The static map is particularly useful for finding all cells in a particular set. For example, to determine all cells that are a distance 5 from location (20,20) involves assigning (20,20) to the center location and searching the static structure to find all locations that are a distance of 5. With the dynamic map, a distance must be calculated from the start location to each
location in the work area (or within the robot field of view) to determine if that location is in the set. A static map also distinguishes the extent of blocking by an obstacle.

The dynamic map provides a greater advantage when we consider a more complex environment. Since the static map works off of relative locations and not absolute, it cannot consider obstacles in the environment. Obstacles also affect distances in the static map. More detail will be provided in later sections when obstacles are considered. So, for the simple environment presented here, we will use the static map.

Trajectory

The notion of a trajectory was introduced in previous sections. In the implementation, a trajectory is a path on the work area from a start location to a destination. It is formed by linking cells in a parent-child relationship. A parent is a predecessor cell from which a given cell can be reached in a trajectory. A child is a successor cell that a given cell can reach. The decision logic to link cells in a parent-child relationship is the result of the linguistic geometry path planning algorithm.

Figure 3.3 Cells Linked to Form a Trajectory

The relationship between cell, trajectory, and path plan objects is the central theme of the implementation.

(Path) Plan

A path plan object is a bundle of trajectories from a start location to a destination for a given job. This is the network that is formed by generating all of the trajectories that provide a shortest path. Trajectories may coincide with other trajectories, along the same path, at a particular cell. It is important to the design that these coinciding cells be the same in one trajectory as in another. That is, a cell must be instantiated once in a plan. All trajectories passing through that location must be passing through the same cell and not a copy. There are two reasons for this. First, there is a gain in efficiency in only expanding a cell's children once.
If another trajectory passes through the same
cell, then the children, and their children, are already in place. An additional benefit is gained when obstacles are introduced. If a cell is blocked from access, it must inform only one set of parents.

Time Interval

A time interval is perhaps more appropriately presented as part of the simulation. It is mentioned in this design section since it is the attribute that drives action in the application. Time is an artificial notion in the design that does not typify a temporal object as much as it represents a functional object. In this regard, time is closely related to the reachability function. A robot's speed is defined in terms of how many cells it can travel in a single time interval.

3.2.1.2 Methods

Generate Transition Locations

Since a robot may not have the sensor capability to completely plan to the destination location, there must exist a method to select intermediate locations along the way to the final destination. Through the Map object, the robot possesses the knowledge of relative distances between cells in the work area. This method must select optimal transitional cells that are within the robot field of view and bring the robot optimally closer to the destination. The most optimal transition locations are those that utilize the full extent of the field of view and that bring the robot nearest to the destination location.

Plan a Path

Planning a path is the method through which trajectories are generated from a start to the selected transition locations. In this implementation, this is accomplished through the language of shortest trajectories.
Execute a Path

Upon executing the method to plan a path, the robot has not yet moved. The Execute method moves the robot on a path to the destination location. At this point in the application, with no obstructions in the work area, any of the trajectories chosen suffices as a shortest path.

Calculate Distance

A method to calculate distance is required in many steps of planning a path. The characterization of the map object makes this method a table search.

3.2.1.3 Classes

The objects and methods of the robot control design are compiled into the classes illustrated below. The robot class drives the generation of and movement along a path in this scenario. Upon user selection, the simulator creates a Robot object defining as its parameters speed and field of view. The robot creates a Robot Map object for itself, providing work area size as a parameter. The Area Map constructor creates the static map, calculating relative distance from the center location. The absolute area, an array of cells, is created and initialized.

Two utility classes, implemented as templates, are utilized in the planning algorithm. The List class is a doubly linked list used as a container for Cells in the algorithm. A Set container inherits from List. The Set class characterizes methods for the intersection and union of cells.
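The table-search character of Calculate Distance can be sketched as follows. This is an illustrative Python sketch, assuming unit-speed king moves; the names `table` and `distance` are hypothetical, and re-centering the relative map is simply a matter of re-indexing.

```python
# Hypothetical sketch of the static relative map: a table of distances
# indexed by (delta x, delta y) offsets from a movable center, making
# Calculate Distance a table lookup. Unit-speed king moves are assumed.

N = 30                                   # work area dimension
SIZE = 2 * N - 1                         # offsets range over -(N-1)..+(N-1)

# pre-compute the relative table once: entry [dx][dy] holds the distance
# from the center cell to the cell at that offset
table = [[max(abs(dx - (N - 1)), abs(dy - (N - 1))) for dy in range(SIZE)]
         for dx in range(SIZE)]

def distance(center, target):
    """Look up the distance from `center` to `target` via relative offsets."""
    dx = target[0] - center[0] + (N - 1)
    dy = target[1] - center[1] + (N - 1)
    return table[dx][dy]

print(distance((10, 20), (10, 20)))      # 0: the center itself
print(distance((10, 20), (12, 23)))      # 3
print(distance((21, 12), (21, 13)))      # 1: re-centering is just re-indexing
```

Because the offset range must cover the work area extremes in both directions, the table is roughly four times the area of the work area, as the text notes.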
Figure 3.4 Class Diagram: Robot, Simulator, and Area Map classes (Execute, Get Path Info, Create, and Move operations)
algorithm when the robot reaches a field of view boundary. So, since the planning algorithm produces cells that are along a shortest trajectory and no others, the Execute Move method is guaranteed an optimal path to the destination location.
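The Generate Transition Locations idea described earlier can be sketched as a filter over the field-of-view frontier: keep exactly those frontier cells that lie on some shortest path to the destination. This is a hedged Python illustration; the function name and the obstacle-free king-move metric are assumptions.

```python
# Hypothetical sketch of Generate Transition Locations: frontier cells that
# use the full field of view and lie on a shortest path to the destination.

def chebyshev(a, b):
    """King-move distance -- an illustrative stand-in for the MAP object."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def transition_cells(robot, dest, view, size):
    total = chebyshev(robot, dest)
    cells = [(x, y) for x in range(size) for y in range(size)]
    # frontier: cells at the full extent of the field of view
    frontier = [c for c in cells if chebyshev(robot, c) == view]
    # keep frontier cells that are on-the-way: view + remaining = total
    return [c for c in frontier
            if chebyshev(robot, c) + chebyshev(c, dest) == total]

picks = transition_cells((14, 14), (0, 5), 6, 30)
print(all(chebyshev((14, 14), c) == 6 for c in picks))  # True: full view used
```

Every cell returned exhausts the field of view and loses no optimality, which is exactly the property the text demands of transition locations.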
3.2.2 Test Results

Implementation results are presented in two test cases. The first simulation initially places a single robot at grid location (14,14). The robot in this test is capable of moving two cells in a single time interval and has a horizon of six in all directions. The destination location is set to (0,5). So, at two cells per time interval, the robot should, optimally, attain this location in seven time steps. The test results illustrated below demonstrate the planning and movement done to attain the goal.

In the first frame, the robot has performed the initial planning required to move toward the destination location. Since the destination is not within the field of view, the robot can only travel to the frontier of the extent and then must replan. The dark, thin lines in the frame represent the planned trajectories. The lighter, thicker single line illustrates the selected path moved upon by the robot. In the second frame, the robot has reached the view frontier and has planned again. This time, the destination is within the robot view and all trajectories terminate at the destination. The third frame shows the final path of the robot from start to destination.

Figure 3.5 Test Case 1/Frame 1 Planning from initial location
Figure 3.6 Test Case 1/Frame 2 Planning from Field of View Boundary

Figure 3.7 Test Case 1/Frame 3 Final Path
In the second simulation a robot is placed at location (1,0). This robot can travel faster than the robot in the first simulation --three cell locations in a single time interval. The field of view for the robot is modified to only three omni-directional increments. With this combination of speed and field of view, the robot must replan at the conclusion of every move. The destination location chosen for this robot is the last cell of the work area, (29,29). The time required to travel a shortest trajectory in this scenario is 10 time intervals. In the figures below, the progression of planning trajectories and movement along those trajectories is illustrated.

Figure 3.8 Test Case 2/Frame 1 Planning from initial location
Figure 3.9 Test Case 2/Frame 2 Last Planning Stage from Field of View Boundary

Figure 3.10 Test Case 2/Frame 3 Final Path
3.3 ROBOT PATH PLANNING IN AN ENVIRONMENT WITH OBSTACLES

Almost every non-trivial application of this technology contains territories that a robot must be capable of avoiding. A mobile robot navigation scenario that omits the possibility of obstacles is unrealistic. Obstacles can be static, i.e. they are placed in the environment in given locations and stay in those locations for the duration. The robot can detect the presence of obstacles only within its sensor capabilities, defined by the field of view. A variation on this theme is obstacles that are invisible to the robot until it encounters them in the process of moving along a path. This situation may be due to faulty sensors on the robot that initially failed to detect the blockage. It could also be the result of one set of obstacles obscuring another, preventing sensor readings. Whatever the reason, a robot may encounter, and must plan for, undetected impediments within the field of view. This section applies static obstacles to the implementation.

Mobile obstacles are elements that are capable of movement. These can be other robots, vehicles, people, etc. that interact within the work area in a very dynamic fashion. This modification is applied in the next section.

We provide an additional capability to the robot to work within the static obstacle environment. If the robot is unable to "see" the extent of the obstruction within the assigned field of view, it is allowed to expand that view until a path is located. This is analogous to a robot possessing a variable sensor capability. Under most situations, a sensor that provides a limited view of the area is desirable. This capability uses fewer resources, provides for faster movement, etc. In certain situations, however, the robot must select a high performance sensor to examine a larger extent. While this may cost the robot in resources and time, it eliminates guess-work on the part of the robot as to the most optimal path.
A guess made by the robot could potentially be much more costly than utilizing the more resource-expensive sensor.

Key to the introduction of obstacles to this environment is the possibility that an optimal path is not attainable. The linguistic geometry model generates optimal and non-optimal paths with a new paradigm --the Language of Admissible Trajectories [Stilman, 1993].
3.3.1 A Language of Admissible Trajectories

Assume a robot has determined through the language of shortest trajectories that an optimal path to a destination is not possible due to obstacles. In these circumstances the robot must have the means to plan a path to the destination along a longer, less optimal trajectory. Table 3.1 is a presentation of the programmed Grammar G_t(2) of Shortest and Admissible Trajectories.

Table 3.1 Grammar of Shortest and Admissible Trajectories

L     Q      Kernel                                                          FT      FF
1     Q1     S(x,y,l) -> A(x,y,l)                                            two     0
2i    Q2     A(x,y,l) -> A(x, med_i(x,y,l), lmed_i(x,y,l))
                         A(med_i(x,y,l), y, l - lmed_i(x,y,l))               three   three
3i    Q3     A(x,y,l) -> a(x) A(next_i(x,l), y, f(l))                        three   4
4     Q4     A(x,y,l) -> a(y)                                                three   5
5     Q5     A(x,y,l) -> e                                                   three   0

V_T = {a}, V_N = {S, A}, Pred = {Q1, Q2, Q3, Q4, Q5}

Q1(x,y,l) = (MAP_{x,p}(y) <= l < 2 x MAP_{x,p}(y)) AND (l < 2n)
Q2(x,y,l) = (MAP_{x,p}(y) != l)
Q3(x,y,l) = (MAP_{x,p}(y) = l) AND (l >= 1)
Q4(y) = (y = y0)
Q5(y) = (y != y0)

Var = {x, y, l}
Con = {x0, y0, l0, p}

Func = Fcon U Fvar
Fcon = {f, next1, next2, ..., nextn, med1, med2, ..., medn, lmed1, lmed2, ..., lmedn} (n = |X|),
f(l) = l - 1, D(f) = Z+ \ {0}
Fvar = {x0, y0, l0, p}

E = Z+ U X U P is the subject domain.

Parm: At the beginning of the derivation: x = x0; y = y0; l = l0; x0, y0 from X; l0 from Z+; p from P.

For this language, two new functions were created (in addition to the next function, a carry-over from the language of shortest trajectories). They are defined as:

med_i (x, y, l)
Domain: X x X x Z+ x P
Define a set:
DOCK = {v | v from X, MAP_{x0,p}(v) + MAP_{y0,p}(v) = l}

if DOCK = {v1, v2, v3, ..., vm} != empty then     the DOCK set is not empty
    med_i (x, y, l) = v_i for 1 <= i <= m         med returns a unique DOCK point for each reference
else
    med_i (x, y, l) = x                           DOCK is empty; the robot stays at position x
end if
lmed_i (x, y, l)
Domain: X x X x Z+ x P
lmed_i (x, y, l) = MAP_{x,p}(med_i (x, y, l))     the MAP distance from x to the DOCK point

The Language of Shortest and Admissible Trajectories extends the Language of Shortest Trajectories into an algorithm that allows the consideration of less than optimal paths. The thrust of this new grammar is to identify cells in the work area that serve as intermediate points accessible on shortest paths from the start location and the destination location. These DOCK cells are not necessarily on a shortest path. That is, they are not elements of the SUM set. Graphically, this can be depicted as the combination of two shortest trajectories, as shown in the figure below.

Figure 3.11 Two Shortest Trajectories Combined to Form a Non-Optimal Trajectory

It is important to note that shortest trajectories are also promulgated from this language. Thus, if a shortest trajectory does exist, then it will be generated from this grammar. In this instance, the length is the shortest distance between locations, and the DOCK set and the SUM set are equivalent. If optimal trajectories are not spawned due to obstructions, the length between two locations is longer and less direct derivations are attempted.
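The DOCK set behind med_i and lmed_i can be illustrated with a short sketch: cells whose shortest distance from x0 plus shortest distance to y0 equals a prescribed, possibly non-minimal, length l. The obstacle-free king-move distance and the example coordinates are illustrative assumptions.

```python
# Illustrative DOCK-set computation: docking points for a trajectory of
# prescribed length l between x0 and y0. King-move distance is assumed.
from itertools import product

def dist(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def dock(x0, y0, l, cells):
    """Cells v with MAP_{x0}(v) + MAP_{y0}(v) = l."""
    return {v for v in cells if dist(x0, v) + dist(v, y0) == l}

cells = set(product(range(6), range(6)))
x0, y0 = (0, 0), (4, 0)
shortest = dock(x0, y0, dist(x0, y0), cells)   # l = 4: the SUM set itself
admissible = dock(x0, y0, 6, cells)            # l = 6: detour docking points
print((2, 3) in admissible, (2, 0) in admissible)   # True False
```

At l equal to the shortest distance, DOCK coincides with SUM, so the grammar still yields the shortest trajectories; at larger l, DOCK holds the detour points through which a pair of shortest sub-trajectories can be docked.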
3.3.2 Design Augmentation

The basic objects described in the simple model also exist in this design, although they must be modified to accommodate the new environment. A new object, obstacle, is introduced that describes the areas of the work area that a robot cannot occupy or travel through. Methods will undergo modification as the planning strategy now allows the possibility that a path is not achievable for a particular distance. In summary, the algorithm is modified to consider different levels of trajectories.

3.3.3 Objects

Map

The Map object represents distance from a location in the work area to any other location. In the previous manifestation of the design, we introduced the concepts of a static map and a dynamic map. With the introduction of obstacles, the distances represented by the static map are no longer accurate. Indeed, a given location on the map may not be reachable at all. The static map was a relative mapping independent of absolute coordinate assignment. Since the obstacles are absolute objects, the static map cannot incorporate this knowledge into the database. Early designs attempted to work around this restriction by incorporating the obstacle knowledge into the planning algorithm instead of the map. In experimental testing, these designs proved to be easily defeated by non-trivial obstacle patterns. An accurate local distance map proved to be a critical feature of a good design.

The dynamic map object calculates distances in real time based on the state of the work area within the robot field of view. A list is formulated in the following manner. Working in a radial fashion outward from the current robot location, a Map is formed by considering simply what is "adjacently reachable" from a given location. Adjacently reachable is: the set of locations exactly one cell away from the current location that do not contain obstacles.
These locations are tagged with the current distance and are added to the list to be considered in the next expansion. When the field of view is reached, the algorithm completes, leaving in its wake a calculated true distance for each cell.
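The radial expansion described above is essentially a breadth-first wavefront. A minimal sketch in Python follows; the 8-connected neighborhood, the `is_obstacle` predicate, and the omission of work-area bounds are assumptions of this illustration, not details taken from the thesis:

```python
from collections import deque

def dynamic_map(start, is_obstacle, field_of_view):
    """Wavefront distance map: true distances within the field of view.

    start         -- (x, y) current robot location
    is_obstacle   -- hypothetical predicate, cell -> bool
    field_of_view -- maximum distance to expand
    Returns (dist, parent): true distance per cell, plus the parent
    relationship used later for trajectory generation.
    """
    # 8-connected neighborhood: the "adjacently reachable" cells
    steps = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
             if (dx, dy) != (0, 0)]
    dist = {start: 0}
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        x, y = frontier.popleft()
        if dist[(x, y)] == field_of_view:
            continue          # expansion stops at the field-of-view boundary
        for dx, dy in steps:
            nxt = (x + dx, y + dy)
            if nxt not in dist and not is_obstacle(nxt):
                dist[nxt] = dist[(x, y)] + 1
                parent[nxt] = (x, y)
                frontier.append(nxt)
    return dist, parent
```

Because each cell is first reached on a shortest route, the recorded distance is the true travel time around any local blockage within sensor range.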
A form of a static Map object must also be retained to reflect distances of locations outside the field of view. The object is needed because the robot requires some knowledge of distances to locations outside the sensor range, such as final destination locations. This knowledge is incomplete, since the robot has no indication of obscura outside the field of view that would affect distance. It can be used, however, to rate the path potential of a cell relative to another cell within the field of view. In this manifestation of the design, the static Map will be similar to the dynamic distance calculation presented in the simple model. The algorithm will incorporate knowledge of the local distances into the calculation to obtain the most accurate total distance possible.

Obstacle
The obstacle object manifests itself in the design as state of a cell, a single atomic component of the work area. In the previous implementation, a cell contained a static structure for location as well as dynamic information describing any path information (descendants and ancestors) that passed through the cell. In the modified design, we add a dynamic feature that allows the cell to be flagged to contain an obstacle.

3.3.3.1 Methods
Calculate Distance
Using the radial technique described in the above Map object, this method calculates a true (dynamic) distance for all locations within the robot field of view. The algorithm considers the current obstacle configuration. As an artifact of this algorithm, a parent relationship is established between cells in the field of view. This is done to facilitate trajectory generation in the path planning phase. The method also retains the capability to calculate static distances to those locations outside the field of view. The static algorithm factors local obstacle interference into the computation; however, it can not take into consideration obstacle interaction external to the robot sensor range.
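For cells beyond the sensor range, any obstacle-free distance serves as the static estimate. Assuming the 8-connected movement used throughout this chapter (an assumption of this illustration; the thesis does not state the formula), the Chebyshev distance is a minimal sketch, and it is consistent with the obstacle-free figures quoted in the test cases below:

```python
def static_distance(a, b):
    """Obstacle-free distance estimate between two cells under
    8-connected movement: a diagonal step advances one cell in x
    and y at once, so the travel time is max(|dx|, |dy|)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))
```

For example, from start (0,16) to destination (26,10) this gives 26 cells, matching the unobstructed path length reported in the first "walls" scenario.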
Generate Transition Locations
This method produces intermediate locations that are within the current field of view for the robot and are also on the most optimal path to the final destination. The algorithm executed here is similar to the technique used to generate DOCK locations in the grammar of admissible trajectories. The method makes use of the true distances to the boundary locations to determine which of the cells have the greatest potential for expansion to the final destination. The algorithm adds the dynamic Map distances to the transitional locations and the static Map distances from the transitional locations to the final destination. The smallest of the sums point to the most promising transitional locations to expand. If all of the boundary locations have potential that is less than the current location, then the path planning algorithm will simply plan to all boundary locations. This describes an environment in which the obscura is extensive enough that the robot can not "see" around it with the assigned field of view. In such situations, the move method will invoke special processing to compensate.

Plan a Path
In this implementation, the plan is produced by formulating paths from the start location to the best transitional locations. Thus, the method uses the two algorithms described above to generate the dynamic Map and the optimal transition locations. Using the parent relationship established when the dynamic map was produced, this algorithm works backwards from the optimal transition locations to the parent cells. The parent cells are expanded to their ancestors, and so on, until the start location is reached. This forms the trajectories on which the robot will travel.

Execute a Path
In the absence of obstacles, this method simply moved the robot on a pre-planned trajectory until it reached the end of the field of view or reached the final destination. If the destination was not attained, the algorithm would kick off another planning session.
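Taken together, transition-location selection reduces to ranking boundary cells by "true distance travelled so far plus estimated distance to go", and path planning to walking the parent links back to the start. A hedged sketch, assuming a `dist` map and `parent` links as produced by the Calculate Distance method, and an `estimate` callable standing in for the static Map (all names here are illustrative):

```python
def transition_locations(boundary, dist, destination, estimate):
    """Rank field-of-view boundary cells by dynamic distance so far
    plus static estimate to the destination; return the best cells."""
    scores = {c: dist[c] + estimate(c, destination) for c in boundary}
    best = min(scores.values())
    return [c for c, s in scores.items() if s == best]

def plan_path(cell, parent):
    """Walk the parent links backwards from a transition cell to the
    start location, returning the trajectory in travel order."""
    path = []
    while cell is not None:
        path.append(cell)
        cell = parent[cell]
    return list(reversed(path))
```

In the full design, the robot builds such a trajectory to each of the best-scoring transition cells, not just one.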
With the advent of obstacles, in particular the so-called invisible obstacles, this method looks ahead at each cell along the trajectory to ascertain whether that cell contains a hidden obstacle. If all cells are blocked on the planned path, then the method must formulate new trajectories from the current location by calling the planning method.
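The look-ahead in the movement phase can be sketched as follows. The `sense_obstacle` callback standing in for the robot's sensor is hypothetical, and the sketch simplifies to a single trajectory where the thesis maintains several:

```python
def execute_path(path, speed, sense_obstacle):
    """Advance up to `speed` cells along a planned trajectory,
    stopping short if a hidden obstacle is sensed on the next cell.

    Returns (cells actually travelled, True if replanning is needed).
    """
    travelled = []
    for cell in path[1:speed + 1]:        # path[0] is the current location
        if sense_obstacle(cell):          # invisible obstacle discovered
            return travelled, True        # caller must replan from here
        travelled.append(cell)
    return travelled, False
```

The replan flag is what triggers the planning method again from the cell where movement stopped.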
The move algorithm must also invoke special processing in case the obstacle layout prevents the robot from determining the best course. The processing is energized by a message from the planning algorithm that all boundary location potentials are worse than the starting location. In this situation, the robot will expand the field of view from the nominal state provided at robot creation. After each expansion, the planning algorithm is invoked to determine the potential of the new boundary locations. Once the potential exceeds that of the start location, the robot begins movement on the planned trajectories. The field of view is reset to the nominal state. Finally, this method plans after every full robot movement. This is done to take advantage of the gain in the field of view attained whenever the robot moves. Experimentally, it was determined that this permitted the robot to see blind alleys earlier in the path than if the robot simply followed the older trajectories.

3.3.3.2 Classes
The attributes and methods of the augmented robot control design are compiled into the classes illustrated below. The robot simulator now updates obstacle locations with a set/reset obstacle location message to the Area Map. A new message from Robot to the Area Map allows the robot to inquire about obstacle presence at a given location. Existing methods and attributes were modified as described in the above discussion.
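The field-of-view special processing described above can be summarized as a loop that widens the sensor radius until some boundary cell beats the start location's potential. In this sketch, `plan` is a placeholder for the planning machinery and is assumed to return the best (smallest) boundary potential found at a given field of view; potentials are distance-like, so larger means worse:

```python
def widen_field_of_view(nominal_fov, start_potential, plan):
    """Expand the field of view one cell at a time until planning
    reports a boundary cell with better potential than the start.

    plan(fov) -- hypothetical callable returning the best boundary
                 potential found with that sensor radius.
    """
    fov = nominal_fov
    while plan(fov) >= start_potential:   # no improvement yet: look farther
        fov += 1
    return fov   # movement resumes; the caller resets the fov to nominal
```

This mirrors the behavior seen in test cases 6 and 8, where the robot steps the field of view from 6 to 7 (or 7 to 8) before committing to trajectories.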
(Diagram: Robot Simulator, Area Map, and Robot classes, with the messages and attributes described above.)

Figure 3.12 Class Diagram of Robot Software in Static Obstacle Environment

A state diagram presented below demonstrates the planning and movement state transitions performed within the robot class.
(State diagram: Initialization, then Path Planning (Generate Transition Locations, Generate Admissible Trajectories), then Path Execution (Move on Trajectory), looping back to Path Planning until done.)

Figure 3.13 Planning and Path Execution State Diagram

The start-up scenario is similar to what was presented earlier. On startup, a creation message is sent to Area Map providing the user-defined size of the work area. Using the Robot simulator, a user selects a starting location and a destination location for a robot. The user also supplies the functional capabilities of the robot: speed and field of view. The robot simulator sends a create message to a Robot providing an Area Map object as well as the speed and field of view. The user controls the movement of the robot from the simulator. Based on the field of view distance, the robot builds a network of trajectories from the start (current) cell to the selected transition cells. The network is fashioned by the planning algorithm. Motion is generated along one of the planned trajectories. The move process is repeated until the current location is the destination cell or the field of view limit is reached. The Execute Move method will automatically re-task the planning algorithm when the robot reaches a field of view boundary, or an invisible obstacle prevents further movement. The Execute Move method also invokes changes to the field of view as described in the above discussion.
3.3.4 Test Results
Experimental results are presented in three sets of test cases. The first simulation places the robot into an environment with numerous, overlapping walls. The second simulation requires that the robot plan to exit a room and gain entry to another room in order to reach the destination. The final test case introduces invisible obstacles. These are blockages that are not perceived by the robot until it is close to the obstacle.
3.3.4.1 "Walls" Scenario
The start and destination locations are located on opposite sides of the work area: location (0,16) and location (26,10) respectively. The robot is created with a velocity of 3 cells per time interval and a field of view of 6 cells in all directions. In the initial frame, figure 3.14, the planning algorithm identifies three optimal boundary locations: (4,22) at a distance of 6 from the start location, (6,11) at a distance of 8, and (6,10) also at a distance of 8. Note that the distance labels on the far side of the obstacle reflect actual travel time required to reach the cells and not a straight-line distance. Also note that the boundary locations directly in front of the robot are rejected by the planning algorithm due to local blockage. Thus, the robot seeks a route around the obstacle with the planned trajectories.

Figure 3.14 Test Case 4/Frame 1 Initial Planning and Motion
In the second frame, figure 3.15, the robot has completed its fourth time interval. At location (10,10), it has cleared the first two walls and successfully planned around a third. The move execution algorithm has selected a more satisfactory route based on better proximity to a (theoretical) straight-line path to the destination.

Figure 3.15 Test Case 4/Frame 2 Fourth Time Interval

The next frame, figure 3.16, illustrates the robot situation at time interval 6. The planning algorithm has selected three optimal boundary locations: (18,14), (18,13), and (17,6). Once again the robot selects the northern route, demonstrated in figure 3.17. At this stage, time = 10, the destination is within the field of view and thus is the only identified transition location. The final frame illustrates the as-executed path for this scenario. The robot reached the destination in 12 time intervals, traveling over 36 cells. Without the blockage, the robot would have required 9 time intervals, traveling over 26 cells.
Figure 3.16 Test Case 4/Frame 3 Sixth Time Interval

Figure 3.17 Test Case 4/Frame 4 Final Planning Stage
Figure 3.18 Test Case 4/Frame 5 As Executed Path Display
The next "walls" scenario demonstrates the effect of local blockage on the overall path chosen by the robot. In this experiment, the start location is moved up one cell to (0,17), while the destination remains at location (26,10). The obstacle locations are also unchanged. With the one-cell change to the start location, the planning algorithm initiates a completely different plan than in the previous scenario. A single transition location was determined to be optimal: (5,23). This boundary location is six cells from the start location and offers the greatest potential. The cells selected in the previous scenario, below the initial location, are either blocked (5,12) or proffer a worse potential (3,11) than (5,23).

Figure 3.19 Test Case 5/Frame 1 Initial Planning and Motion

In the second frame, the robot is well into the scenario. The robot has the destination in the field of view but can not reach it. The final frame illustrates the as-executed path for this scenario.
Figure 3.20 Test Case 5/Frame 2 Destination in Field of View and Not Reachable

Figure 3.21 Test Case 5/Frame 3 As Executed Path Display
The next "walls" scenario finds the robot trapped behind an obstacle that it is unable to plan around given a nominal, omni-directional field of view of 6 cells. The robot applies extra resources and expands the field of view to 7 cells to permit planning to a cell of greater potential than the initial location.
Figure 3.23 Test Case 6/Frame 2 Reestablish Original Field of View

Figure 3.24 Test Case 6/Frame 3 As Executed Path Display
3.3.4.2 "Rooms" Scenario
The next set of scenarios requires the robot to navigate out of a room with only one exit and proceed to a destination in another room with only one entrance. In the first experiment the robot is given a speed of 2 and a field of view of 5 cells in all directions. The robot is placed at an initial position of (13,4), while the selected destination is assigned to (8,26). Note from figure 3.25 that both the initial robot location and the destination are placed inside of confined areas that the robot must plan around in order to accomplish its goal. In the initial plan, illustrated in figure 3.25, the planning algorithm immediately plans a route out of the room. All paths are directed to the left of the initial location because the field of view restricts the robot's view of what is outside the far right wall of the room. The algorithm did not require an expansion of the field of view for its initial planning.

Figure 3.25 Test Case 7/Frame 1 Initial Planning and Motion

The robot's second advance, presented in the second frame (figure 3.26), shows the effect of re-planning each motion. The robot, now capable of seeing outside the right
wall of the room, now rejects the original transition locations and paths to the left of the room as being less satisfactory than those to the right. Basically, these new trajectories avoid traveling the length of the bottom wall of the room in order to clear the obstruction.

Figure 3.26 Test Case 7/Frame 2 Improve Path after First Motion

By the third frame, the robot has successfully navigated the obstacles to find the destination in the field of view. It can not, however, map a path to the destination due to obscura. In order to improve the path potential relative to the current location, the planning algorithm expands the field of view from 5 to 6 cells. Note that the entrance to the room containing the destination (4,28) is now within the field of view of the robot and a trajectory is generated to the destination. This is illustrated in the fourth frame, figure 3.28. The final frame, figure 3.29, presents the as-executed robot path for this experiment. The robot navigated through the obstacles, traveling over 47 cells in 24 time intervals.
Figure 3.27 Test Case 7/Frame 3 Approach Task Destination

Figure 3.28 Test Case 7/Frame 4 Plan to Entrance to Room
Figure 3.29 Test Case 7/Frame 5 As Executed Path Display

The second "room" scenario greatly increases the complexity of the domain. In this scenario, the robot must first exit a room and then navigate through a maze of rooms to finally attain the destination. The robot retains the characteristics of the previous experiment: a velocity of 2 cells per time interval and a field of view of 5 cells in all directions. The initial location of the robot is placed at (24,20) and the destination is selected at (2,29). In the first step, the robot must expand the field of view (to 7 cells) just to locate transition locations that get the robot into a greater potential than the initial location. This expansion is depicted in frame 1, figure 3.30. The second frame (figure 3.31) shows the robot heading for the "room maze" with a trajectory through the entrance.
Figure 3.30 Test Case 8/Frame 1 Initial Planning and Motion

Figure 3.31 Test Case 8/Frame 2 Find Entrance to Room
The third frame (figure 3.32) depicts an interesting robot plan. The planning algorithm formulated a trajectory to location (5,21) in an attempt to gain access to the destination from the south (0..4, 21). Frame four illustrates how, on the next plan, the algorithm rejects this solution and instead expands the field of view to 8. This permits access to the destination location, a path which the robot follows to complete its goal (figure 3.34). From location (6,20) the robot would have attained a more direct path to the destination through (6,21) than through (5,20). The attempt to gain access to the destination generated a one-cell perturbation towards the final destination. Of course, the perturbation confirmed that a more direct route was not possible.

Figure 3.32 Test Case 8/Frame 3 Direct Route is Blocked
Figure 3.33 Test Case 8/Frame 4 Expand Field of View to Destination

Figure 3.34 Test Case 8/Frame 5 As Executed Path Display
3.3.4.3 Invisible Obstacles Scenario
The final experiment for static obstacles leverages off of the previous scenario. In this version, however, we introduce the so-called "invisible" obstacles into the environment. Recall that these obstacles are not detected in the planning phase and can be encountered blocking planned trajectories. The movement generation algorithm must detect this unplanned obscura and, if necessary, replan around it. The robot path display simulates invisible obstacles by supporting the placement of obscura while the robot is in the process of planning and moving. The initial state of this scenario was precisely as it was in the previous experiment. In the first frame (figure 3.35), the robot is 7 steps into the plan when, from the robot simulator, new obstacles are defined in rows 19 and 21 as noted on the graph. The robot executes two more time steps of the current plan before realizing that the trajectories are now blocked. The second and third frames of the scenario depict the next two time intervals of the experiment and demonstrate the perturbations introduced by the new obstacles. Frame 2 shows the robot, with a nominal field of view, attempting to bypass the invisible obstacles to the left. Frame 3 shows that after movement along that trajectory, the robot can not improve its position any further and expands the field of view, allowing navigation through the invisible obstacles. The remainder of the scenario is similar to the previous presentation.
Figure 3.35 Test Case 9/Frame 1 Invisible Obstacles Placed in Area

Figure 3.36 Test Case 9/Frame 2 Plan Rejects Route to Destination
Figure 3.37 Test Case 9/Frame 3 Plan Around Invisible Obstacles
3.4 ROBOT PATH PLANNING WITH STATIC AND DYNAMIC OBSTACLES
Additional mobile elements are introduced to the environment in this section. In general, mobile elements can relate to our robot in several different ways. They may be of an adversarial nature, attacking the robot in an attempt to capture or destroy it. Mobile elements may also be cooperating systems working on the same or different tasks. The last category, dynamic obstacles, is the type of system we will introduce to our environment. Mobile elements of this type simulate several different real-life situations encountered by a task-oriented robot. In a factory-oriented application, obstacles represent mobile equipment, people, or other robotic systems. In a space-based application, mobile obstacles might represent satellites operating in the same area. These systems are not inherently opposed to the robot, i.e., they do not directly attack. They do, however, occupy space and move concurrently within the same area as the robot. Similar to the static obstacle scenarios, the robot must avoid the moving impediments in the process of completing a task as quickly as possible. The planning problem is made more difficult by the mobile nature of the obscura. Since the obstacles move concurrently with the robot, the planning algorithm must consider how each obstacle travels during a given time interval as well as the starting and destination cells. That is, the robot must account for the route taken by mobile obstacles. Before presenting the robot system response to this environment, it is important to define the behavior of the dynamic obstacles used in the model. The obstacles may be of any shape and size that can be practically contained in the work area. Similar to the robot, they are assigned a velocity describing how fast and far they may move in a single time interval. Mobile obstacles are assigned an "area of operation". Their movement is restricted to this area.
When any portion of a mobile obstacle attempts to move outside the designated area of operation, it will change direction and move back into the area. Mobile obstacles are allowed to occupy the same space as other mobile obstacles and static obstacles without affecting movement. The robot must also follow certain rules for interaction within the new environment. Similar to the static obstacle scenario, the robot can only consider dynamic obstacles that are inside the field of view. Moreover, if only a part of a dynamic obstacle is within
the field of view, the robot may only consider that portion of the obscura. Obviously, the robot can not coexist in the same location as a dynamic obstacle. Because of concurrent movement, the robot can not travel over the same route taken by a dynamic obstacle to reach its new location in the same time interval. The route is inclusive of the starting locations of the dynamic obstacle. Two examples shown below demonstrate legal and illegal movement on the part of the robot. In both examples the obstacle is moving two cells in the y direction, as indicated by the arrows emanating from the obstacles in the figure. This makes the cell directly under the current robot location inaccessible.

Figure 3.38 Examples of legal robot movement and illegal robot movement

To accommodate a dynamic environment, the planning algorithm must consider current and future interaction between trajectories and moving obstacles. The Linguistic Geometry model provides a set of tools for this paradigm.
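The concurrency rule can be checked cell by cell: a robot step is legal only if its target cell lies nowhere along the route, inclusive of the start cell, that an obstacle sweeps during the same time interval. A minimal sketch under those assumptions (the diagonal-then-straight sweep pattern is an illustrative simplification):

```python
def swept_cells(start, dx, dy):
    """Cells an obstacle occupies while moving (dx, dy) in one time
    interval, inclusive of its start and end locations."""
    x, y = start
    tx, ty = x + dx, y + dy
    cells = {start}
    while (x, y) != (tx, ty):
        if x != tx:
            x += 1 if tx > x else -1
        if y != ty:
            y += 1 if ty > y else -1
        cells.add((x, y))
    return cells

def legal_robot_move(target, obstacles):
    """obstacles: iterable of (start_cell, dx, dy) for this interval.
    The robot may not enter any cell on any obstacle's route."""
    return all(target not in swept_cells(s, dx, dy)
               for s, dx, dy in obstacles)
```

For the figure's example of an obstacle moving two cells in y, the cell directly under the robot is in the swept set and is therefore rejected, matching the illegal-movement case.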
3.4.1 Trajectory Networks
So far in the exploration of Linguistic Geometry we have derived tools for expressing the movement of a single element in a system. This is a lower-level operation that does not necessarily provide system-level solutions. Tools are required to express the interaction between multiple agents in the system. This is the basis for breaking down a system into smaller subsystems. The subsystems in this case are the inter-connected trajectory networks formed by the movement activity of elements in an area. The elements may be attempting to accomplish a goal, preventing an element from accomplishing a goal, or supporting an element. The general idea for network generation will be demonstrated with the scenario in figure 3.39.

Figure 3.39 Trajectory Network Example

Element Po is planning to move along trajectory (1,1), (2,2), (3,3), (4,4) to reach a goal destination of (5,5). Elements Qo and Q1 are opposed to this activity and can intercept the Po trajectory at (3,3) and (4,4) respectively. Elements P1 and P2 support the goal and can inhibit the interception of Po by controlling locations along the Qo and Q1 trajectories. We state that a trajectory connection relation, C(t1, t2), exists between two trajectories if the end link of t1 coincides with an intermediate link of t2. In the above example the Q1 and Po trajectories are connected at (4,4). The connectivity relation can be indirect also. The P1 and Po trajectories can be considered connected through the Q1 trajectory. This is considered a degree 2 connection since it is not a direct connection to the main trajectory, but it is part of the network that will determine
whether Po can complete its goal. The degree of the connection is determined by how far the trajectory is removed from the main trajectory. The Qo trajectory is attempting to control a location along the main trajectory; therefore, it is a degree 1 relation to the main trajectory. We formally describe a trajectory network, W, relative to a trajectory to as a finite set of trajectories to, t1, ..., tk from the language LtH(S) (Language of Trajectories) that have the following property. For every trajectory from W there is transitive closure to the main trajectory. That is, each trajectory from the network W is connected to to [Stilman, 1996]. A family of trajectory network languages Lc(S) in a state S of a complex system definition is the family of languages that produce strings of the form: t(to, param) t(t1, param) ... t(tm, param), where param is defined by the specific parameters of the language. The strings produced by a network language should look vaguely familiar, since they roughly resemble the strings produced by the Language of Trajectories. In the same way that trajectory languages describe one-dimensional objects in a system by forming a string of symbols based on a reachability relation, a network language describes higher-level objects using the trajectory connection relation [Stilman, 1996]. Different grammars can be generated from this family of languages that correspond to a particular solution. One grammar, Zones (Gz), is particularly useful in describing systems similar to our path planning problem. The detailed derivation of this grammar can be found in [Stilman, 1993]. It will serve here informally as the theoretical basis for a solution to a dynamic obstacle environment. A Language of Zones produces strings of the type: t(p0, t0, τ0) t(p1, t1, τ1) ... t(pk, tk, τk), where p represents the elements and t represents the trajectories of those elements.
τ represents the time allocated for motion along the trajectory to either intercept or support the main trajectory. If the length of the trajectory is greater than the amount of time available to affect the main trajectory, then that trajectory can be eliminated from consideration. In our system, the goal is roughly the same: to avoid intercepting elements (dynamic obstacles) in the process of accomplishing a task. There are not,
however, supporting elements to block the interceptors. In contrast to the more adversarial environments, dynamic obstacles will simply pass through a particular interception point on the main trajectory and will generally not wait to attack the robot. The concept of incorporating a time that intercepting trajectories may affect the main trajectory will play an important role in the software design. The implementation of this concept is detailed in the following section.

3.4.2 Design Augmentation
The basic objects described in earlier presentations also apply to this environment. Several objects and methods will undergo significant modification to accommodate defining and generating motion for dynamic obstacles. In general, the robot software must respond to a much more dynamic environment than in previous implementations. The planning algorithm will incorporate predictions of where dynamic obstacles will be in future time intervals. This data affects which transition locations and trajectories are admissible. The movement algorithm will also incorporate current dynamic obstacle activity to avoid collisions and institute new planning when required. The following sections detail additional objects and methods and changes to existing objects and methods.

3.4.2.1 Objects
Obstacle
This object, in the previous design a simple indicator describing the state of a cell, must now be expanded to engender movement. Obstacle now defines a group of obstacles that share common movement criteria. From time interval to time interval, the locations that the obstacles occupy change, thus changing the state of the cells at those locations. All state changes to cells are communicated through the Map object so there is one repository for work area cell status. Obstacle contains several attributes that describe the initial state of the group.
This includes: starting locations of the obstacles, the amount of movement allowed in the x and in the y direction, and the allowable area of movement (min/max x, min/max y). Additional attributes describe the dynamic state of group: current location of the group members, current direction in x 61
and y, and the set of locations that the obstacle traveled over in execution of the current time interval.
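To make the object concrete, the group state above, together with the boundary-reflecting two-stage movement behavior of the Execute Obstacle Movement and Complete Obstacle Movement methods described below, can be sketched in Python. This is an illustrative reconstruction, not the original implementation (which was C++ against the Map object); all names are hypothetical, and the map-update messages are omitted.

```python
from dataclasses import dataclass, field

@dataclass
class ObstacleGroup:
    cells: list          # current (x, y) locations of the group members
    dx: int              # cells moved per time interval in x
    dy: int              # cells moved per time interval in y
    x_range: tuple       # allowable (min_x, max_x) area of movement
    y_range: tuple       # allowable (min_y, max_y) area of movement
    dir_x: int = 1       # current direction of travel in x (1 or -1)
    dir_y: int = 1       # current direction of travel in y (1 or -1)
    traveled: list = field(default_factory=list)  # cells crossed this interval

    def execute_move(self):
        """Stage 1 (before robot planning): reverse direction at the area
        boundary, record intermediate cells, and move to destination cells."""
        # Reverse direction in a coordinate if any member would leave the area.
        if any(not (self.x_range[0] <= x + self.dir_x * self.dx <= self.x_range[1])
               for x, _ in self.cells):
            self.dir_x = -self.dir_x
        if any(not (self.y_range[0] <= y + self.dir_y * self.dy <= self.y_range[1])
               for _, y in self.cells):
            self.dir_y = -self.dir_y
        # Intermediate locations traveled over en route to the destination;
        # on the real map these cells receive a special "traveled over" state.
        steps = max(self.dx, self.dy)
        self.traveled = [
            (x + self.dir_x * min(s, self.dx), y + self.dir_y * min(s, self.dy))
            for x, y in self.cells for s in range(1, steps)
        ]
        # Final destination cells for this time interval.
        self.cells = [(x + self.dir_x * self.dx, y + self.dir_y * self.dy)
                      for x, y in self.cells]

    def complete_move(self):
        """Stage 2 (after robot planning and movement): clear the traveled-over
        markers, completing the facsimile of simultaneous movement."""
        self.traveled = []
```

Using the group from the single-obstacle scenario of section 3.4.3.1 (cells {(2,25) through (6,25), (5,26), (5,24)} with a y velocity of 2), one `execute_move` carries (2,25) through (2,26) to (2,27), and a subsequent move reverses direction when (5,28) would leave the work area.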
3.4.2.2. Methods

Execute Obstacle Movement

This is the first stage of a two-step algorithm to engender movement for a dynamic obstacle. Obstacle movement is staged to simulate simultaneous movement of obstacles and robot. This stage of the algorithm executes prior to robot planning and movement in a given time interval and performs three basic functions. First, it determines whether, with the next application of the movement criteria, any portion of the obstacle will fall outside the defined area of movement. If it detects such a condition, it reverses the direction of movement in whichever coordinate is affected (x or y). The second function calculates, for each member of the obstacle group, the intermediate locations over which the member will travel to reach its new destination. These locations are identified to the Map object and are set to a special cell state indicating that an obstacle has moved over this area in the current time interval. Finally, this method calculates the final destination for this time interval for each member of the obstacle group. These locations are also identified to the Map object to update the location of the obstacle on the map in the second stage of the dynamic obstacle movement algorithm.

Complete Obstacle Movement

In the second stage of obstacle movement the object completes the movement cycle. By this point the robot has completed its planning and movement for the given time interval. The obstacle is still on the map in its old location, with the cells identified as to its new locations and the route it traveled to reach them. This method removes the obstacle tag from the old cells and resets the indicators for the traveled-upon intermediate locations. Finally, the new cell locations are tagged as containing obstacles. These rather complex steps create a facsimile of simultaneous movement to the robot. The same obstacle potentially shows up in several locations, but with different state flags identifying old travel and new position. The robot planning algorithm uses this information to plan its path accordingly.

Calculate Distance
The basic thrust of this method stays the same in this implementation. That is, at each time interval it dynamically calculates distances from the start location to all locations within the field of view. The algorithm must now consider the movement of dynamic obstacles in the current time interval when determining distance. A particular unblocked cell, adjacent to the start location, may not be assigned a distance value of 1: if a dynamic obstacle is moving into that cell at the current time, the distance value may incorporate robot movement around the obstacle. Alternatively, a cell that is currently blocked may have a distance assigned if it will be reachable when the dynamic obstacle leaves the cell.

Predict Obstacle Location

This new method predicts where blockage will occur in future time intervals. This data is incorporated into the trajectory-forming phase of the planning algorithm. In general, a robot operating in this type of scenario must look at obstacle movement over time to ascertain the dynamic characteristics of the object. In this application, however, the robot is allowed to query any dynamic obstacle (through the Map object) in the field of view to get the velocity and the direction of obstacle movement. The robot does not know the area in which a given dynamic obstacle operates, nor does it know the complete geometry of the obstacle unless it is all contained in the field of view. With the knowledge it does possess, though, this algorithm computes a predicted location for each component of the obstacle group for robot distances as determined by the Calculate Distance method. This is accomplished using the following equations:

    Predicted(x) = current(x) + (time * direction(x) * velocity(x))
    Predicted(y) = current(y) + (time * direction(y) * velocity(y))

Where:

    current(n)   = n (x or y) coordinate of the current location
    direction(n) = direction of movement in n (1 = forward, -1 = back, 0 = none)
    velocity(n)  = speed of the obstacle in the n coordinate
    time         = delta time from current

If a cell is predicted to contain an obstacle, a special flag is set in association with the time that an obstacle is expected to be at that cell.
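The prediction equations reduce to a few lines of code. The sketch below uses hypothetical names and simply applies the formula per coordinate; like the robot itself, it deliberately knows nothing about the obstacle's area of operation, so it does not model boundary reversals.

```python
def predict_location(current, direction, velocity, time):
    """Predict one coordinate of an obstacle cell `time` intervals ahead.

    current   -- x or y coordinate of the current location
    direction -- direction of movement (1 = forward, -1 = back, 0 = none)
    velocity  -- speed of the obstacle in that coordinate (cells/interval)
    time      -- delta time from the current interval
    """
    return current + time * direction * velocity

def predict_cell(cell, dir_xy, vel_xy, time):
    """Predict both coordinates of an obstacle cell."""
    (x, y), (dx, dy), (vx, vy) = cell, dir_xy, vel_xy
    return (predict_location(x, dx, vx, time),
            predict_location(y, dy, vy, time))
```

For example, an obstacle cell at (5,26) moving forward in y at 2 cells per interval is predicted at (5,30) two intervals ahead; the planner would then flag that cell, tagged with the time of expected occupancy.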
Plan a Path

This method makes use of the distance information and the predicted obstacle location data to form trajectories that give the robot the best chance to optimize movement to the destination and to avoid colliding with obstacles. Recall from previous implementations that trajectories were formed by determining the best transition locations to the destination and then working back to the start location. The trajectory path links were established using the distance information for adjacent cells. This basic algorithm is retained in this implementation with added enhancements. The predicted obstacle data is used to determine if an obstacle trajectory will intercept a robot trajectory to a transition location. If this does occur, that path is eliminated from the set of possible moves. In certain circumstances, all paths to transition locations may be cut off. Here, there are still options. If available, the robot can construct a partial path on the way to a transition location and determine if a change in the field of view offers a better path. It may, if the robot is not in danger of having an obstacle collide with it, elect to stay at the current location until circumstances improve. We will discuss this method in some detail when we examine specific test cases.

3.4.2.3. Classes

The attributes and methods of the augmented robot control design are compiled into the classes illustrated in figure 3.40. A new class, Obstacle, is added to define a dynamic obstacle. It receives Create, Execute Move, and Complete Move messages from the robot simulator. It also registers itself with the Area Map class. When changes occur in obstacle location or direction, it sends Update messages to the Map. The Area Map class is modified to keep a list of dynamic obstacles in the current scenario. The Set/Reset Obstacle method is modified to reflect the many different states that can be assigned to a cell: static obstacle, current dynamic obstacle, on path of dynamic obstacle, new dynamic obstacle, and predicted obstacle. It is renamed to Set Cell State. The Report Obstacle Presence method is now called Report Cell State. The Robot class is modified to send a Dynamic Obstacle Query message to the Area Map class to obtain data for those dynamic obstacles in the field of view. Robot also adds the ability to set the state of a cell to a predicted obstacle.
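The enlarged set of cell states and the renamed accessors might be captured as in the following sketch. The enum and class names are hypothetical; they mirror the Set Cell State / Report Cell State methods described above, with the Map acting as the single repository for work-area cell status.

```python
from enum import Enum, auto

class CellState(Enum):
    FREE = auto()
    STATIC_OBSTACLE = auto()
    CURRENT_DYNAMIC_OBSTACLE = auto()  # obstacle occupies the cell now
    ON_DYNAMIC_OBSTACLE_PATH = auto()  # obstacle traveled over the cell this interval
    NEW_DYNAMIC_OBSTACLE = auto()      # destination cell of an obstacle this interval
    PREDICTED_OBSTACLE = auto()        # planner predicts blockage at a future time

class AreaMap:
    """Minimal Map sketch: one repository for work-area cell status."""

    def __init__(self, width, height):
        self.cells = {(x, y): CellState.FREE
                      for x in range(width) for y in range(height)}

    def set_cell_state(self, loc, state):
        self.cells[loc] = state

    def report_cell_state(self, loc):
        return self.cells[loc]
```

Keeping all of these states in one enumeration lets the planning algorithm distinguish "blocked now" from "blocked later" and from "recently vacated" with a single Report Cell State query.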
Figure 3.40 Class Diagram
3.4.3. Test Results

Experimental results are presented in three sets of test cases of increasing complexity. The first simulation shows robot interaction with a single dynamic obstacle. This test demonstrates the planning concepts discussed previously in an uncluttered environment. The second test case adds additional dynamic obstacles to the environment and examines robot response to more difficult situations. The final test case re-introduces static obstacles along with more dynamic obstacles to create a very complex environment. The robot simulator display was modified slightly for dynamic obstacles: these obstacles are cast in a dark gray with black borders to help the user distinguish between the static and mobile obscura.
3.4.3.1. Single Dynamic Obstacle Scenario

In this initial example, one dynamic obstacle group is introduced into the environment. It is formed at locations {(2,25) through (6,25), (5,26), (5,24)} as illustrated in figure 3.41. The obstacle group is assigned a velocity of 2 cells in the y direction and 0 cells in the x direction per time interval. The obstacle is allowed to range over the entire work area in the y coordinate. So, in the first time interval the dynamic obstacle will move from its current locations through {(2,26) through (6,26), (5,27), (5,25)} to a new set of locations: {(2,27) through (6,27), (5,28), (5,26)}. The robot starts this scenario at location (3,27) with a task destination at the bottom of the work area at (3,1). The robot field of view is 5 cells and its speed is 2. Notice that if the robot does not move in the first time interval, it will be struck by the dynamic obstacle moving into its current location. The robot also cannot move into the locations directly below (3,27), since they are in the travel path of the dynamic obstacle.

Figure 3.41 Test Case 10/Frame 1 Initial Assignment
In frame 2, figure 3.42, the robot has planned and executed its initial move. Carefully note how the robot planned its paths. Although the dynamic obstacle now obscures the initial location of the robot, the robot planned initial movement up from (3,27) to (2,28). This particular movement means the robot can avoid the initial dynamic obstacle movement, and it apparently clears the robot from further interference from the obstacle on the next time interval. From (2,28), the planning algorithm identifies a lateral movement to (1,27) or down and over to (2,27). It may seem odd to see a path planned through an obstacle; however, the dynamic nature of the obstacle means the planning algorithm can construct a path through an existing obstacle location with the understanding that the obstacle will not be in that location at a later time. So, with the assigned distance of 3 and a robot speed of 2, it is predicted that the obstacle will have vacated the (2,27) cell by the second time interval. The blank space in the field of view is the space just vacated by the obstacle. Recall that trajectory generation is not allowed in this area.

Figure 3.42 Test Case 10/Frame 2 Initial Movement for Obstacle and Robot
Frame 3 (figure 3.43) shows a dynamic obstacle changing direction. On this time interval a portion of the obstacle (5,28) would fall outside the work area if the delta y (2) were added. Therefore, the entire group changes direction and moves down the work area. The robot path planning reflects this new condition. Since the obstacle has changed direction and can match the speed of the robot, it will prevent the robot from planning a path inward on the work area, lining itself up with the destination.

Figure 3.43 Test Case 10/Frame 3 Obstacle Changes Direction

Finally, in time interval 11 the robot has the destination location in the field of view and has planned a direct path to that location. The obstacle, illustrated in figure 3.44, will overlay the destination in the next time interval, preventing the robot from reaching that location. In this situation, if the robot is not in danger of being struck by the obstacle, it can simply wait for the obstacle to clear the area and then resume its path to the destination. This is illustrated in figure 3.45. This particular behavior is special processing added to the movement generation method as a result of problems discovered in test. If a dynamic obstacle overlaid the destination just as the robot was
poised to complete the goal, erratic activity on the part of the robot was noted. In general, special checking was avoided so as not to dilute the basis of the algorithm.

Figure 3.44 Test Case 10/Frame 4 Obstacle Overlays the Destination
Figure 3.45 Test Case 10/Frame 5 Robot Waits for Obstacle to Clear Destination

Figure 3.46 Test Case 10 As-Executed Path
3.4.3.2. Multiple Dynamic Obstacle Scenario

In the second of our three scenarios the robot is given a tougher gauntlet to run. Additional and more complex dynamic obstacles are placed in the environment. Table 3.2 presents a definition of the mobile obstacles operating in the scenario. To facilitate identification of the mobile obstacles while the test is executing, a letter indicator (A through D) is assigned to each group. The robot is placed at an initial location of (23,4) and given a destination goal of (8,24). The purpose of this test is to see how the robot plans avoidance with multiple mobile obstacle groups in the field of view.

Table 3.2 Dynamic Obstacles Definition for Test Case 11

    Group   Initial Locations       Range of Movement   Velocity [dx,dy]
    B       (15,13) ... (16,18)     10 through 23
    C       (9,15) ... (12,20)      5 through 14
    D       (26,24) ... (26,27)     0 through 29        [0,3]
Figure 3.47 Test Case 11/Frame 1 Initial State

We pick up the robot situation seven time intervals into the scenario. At this stage (figure 3.48), the robot is moving away from the destination to avoid colliding with obstacle C (from table 3.2), which is moving towards the robot. Obstacle B is also in the field of view, moving up on a parallel course with the robot. In the next time interval (figure 3.49), the planning algorithm has encountered a situation where all of the boundary transition locations are either directly blocked or are predicted to be blocked at the point when the robot will reach those locations. In this situation, the planning algorithm processes children nodes of the original transition locations as the most optimal cells in which to move. This is illustrated in figure 3.49 by a set of trajectories planned only to a distance of 3.
Figure 3.48 Test Case 11/Frame 2 Two Dynamic Obstacles in Field of View
Figure 3.49 Test Case 11/Stage 3 Blocked Boundary Transition Locations

In the final two frames of this scenario we see the robot apparently on the brink of attaining the assigned goal location (8,24). Obstacle D, however, will overlay the destination on the next time interval. The planning algorithm projects a trajectory around the destination and waits for the obscura to clear the area before moving to the goal.
Figure 3.50 Test Case 11/Stage 4 Unable to Reach Destination Location

Figure 3.51 Test Case 11/Stage 5 Project Trajectory Around Blocked Destination
3.4.3.3. Static and Dynamic Obstacle Scenario

In the final test scenario the complexity of the environment is again increased. In addition to placing more dynamic obstacles in the area, static obstacles are once again introduced into the domain. Table 3.3 details the movement characteristics of the mobile groups.

Table 3.3 Dynamic Obstacles Definition for Test Cases 12 and 13

    Group   Initial Locations       X Range         Y Range         Velocity [dx,dy]
    A       (15,15) ... (16,17)     12 through 19   12 through 20   [1,1]
    B       (20,20)                 19 through 21                   [1,0]
    C       (21,21)                 19 through 21                   [1,0]
    D       (2,25) ... (5,26)                       0 through 29    [0,2]
    E       (2,0) ... (6,1)         0 through 12    0 through 15    [1,1]
    F       (15,8) ... (17,13)      5 through 20    5 through 20    [1,1]
    G       (27,0) ... (29,4)       0 through 29    0 through 29    [1,2]
    H       (28,26) ... (29,27)     0 through 29                    [3,0]

Our first test case in the new environment places the robot inside of a large room at (25,19). The goal location is placed inside a smaller room at (20,22). The smaller room is blocked by two one-cell dynamic obstacles that traverse the entrance to the room. This test demonstrates the planning algorithm's capability to integrate static and mobile obstacle avoidance. Figure 3.53 illustrates the activity that took place in the first time interval. Note that the robot expanded the field of view from 5 cells to 8 cells in order to locate an exit to the room. The planning algorithm invoked this option since it could not find a location that improved its position relative to the destination.
Figure 3.52 Test Case 12/Frame 1 Initial State

Figure 3.53 Test Case 12/Frame 2 Robot Expands Horizon
Several time intervals later (figure 3.54), the robot is still moving along the expanded path created in the first time interval. Recall from scenarios presented in previous sections that the execute movement method will not invoke new planning until the robot has moved to a location that is better than the current location relative to the destination. When this plan was generated, obstacle G was not in the field of view, while in the current time interval the motion of the obstacle has it blocking the path just outside the exit location of the room. Note that the robot could not run at full speed for this time interval: from (25,2) to (26,2). This frame also illustrates for the first time that obstacles can overlay each other. Recall from our initial discussion that dynamic and static obstacles can share the same space.

Figure 3.54 Test Case 12/Frame 3 Waiting for Obstacle G to Clear Exit

Frames 4 and 5 show the robot planning to enter the room containing the goal location. In both frames (figures 3.55, 3.56), the robot is blocked from entry by the two guarding obstacles. In frame 5, however, the planning algorithm should see a path around the obstacles through (19,20) and (19,21). In the final frame this path was indeed planned and executed by the robot.
Figure 3.55 Test Case 12/Frame 4 Blocked From Entry into Room
Figure 3.56 Test Case 12/Frame 5 Set-up Entry into Room

Figure 3.57 Test Case 12/Frame 6 Plot a Trajectory Past the "Sentries"
In a second test case utilizing this obstacle group setup, we place the robot and the goal destination at (25,27) and (0,12) respectively. At this particular starting point, the robot is in immediate jeopardy from obstacle group H, which moves at 3 cells per time interval. To make the initial movement more difficult, a row of obstacles blocking any escape to the south is put in place using the robot simulator's static obstacle capability. The purpose of this test is to examine the planning and movement executed by the robot over a long distance in a complex environment.

Figure 3.58 Test Case 13/Frame 1 Initial State

Frames 2 through 4 (figures 3.59, 3.60, 3.61) are the first 3 time intervals of the test. They demonstrate the robot's ability to avoid a faster-moving obstacle. In the initial time interval the planning method recognizes the futility of a downward route and plans paths up the work area, while obstacle H gains a cell on the robot in each time interval.
Figure 3.59 Test Case 13/Frame 2 Avoid Faster Element, Time = 1

Figure 3.60 Test Case 13/Frame 3 Avoid Faster Element, Time = 2
Figure 3.61 Test Case 13/Frame 4 Avoid Faster Element, Time = 3

In frame 5 (figure 3.62), we pick up the robot towards the conclusion of the test. The robot plans a path around obstacle E, which is overlaying the destination location. Two time intervals later (figure 3.63), the obstacle clears the destination and the robot replans a direct path to the destination.
Figure 3.62 Test Case 13/Frame 5 Blocked Destination

Figure 3.63 Test Case 13/Frame 6 Replan After Obstacle Clears Destination
4. SUMMARY AND CONCLUSION

This report makes contributions to methods for autonomous mobile robot control. Foremost is the development of geometric reasoning techniques for planning and moving in a complex and dynamic environment. The architecture has demonstrated:

- Efficient planning and motion for what the domain allows.
- Responsiveness and adaptability to changes in the environment.
- Timely completion of all planning activities.
- Efficiency in the application of resources.
- Scalability to larger, more complex problem domains.

Efficient Planning and Motion

Examination of test cases 1, 2, and 3 easily reveals optimized path planning for the simple environment. With the introduction of static obstacles (test cases 4 through 9) it is somewhat more difficult to demonstrate optimization. Essentially, the planning software optimizes within the field of view. That is, only local obscura can be considered when finding an optimal path. Global inefficiencies are mitigated by executing the planning algorithm for each motion. This gives the robot the ability to discover global problems more quickly and accommodate them into the plan. Only in test cases 7 and 8 do we see the robot actually backtracking along a path. In both of these cases, the algorithm was seeking a more direct path to the destination outside the field of view, akin to probing ahead for a better route. As soon as it was determined that a better path was not available, the algorithm replanned for the available route.

Dynamic obstacles add a new criterion to efficient planning: obstacle avoidance. This criterion is of equal weight with movement efficiency. Results from test cases 10 through 13 demonstrate obstacle avoidance used in conjunction with movement efficiency. In test case 10, the robot halts movement towards the goal to allow an obstacle to clear the destination location. In that particular situation, movement in any direction would offer less potential. In test case 11 the robot moves away from the destination in frame 3 to avoid an oncoming dynamic obstacle. The movement is minimal, and the robot recovers and moves back towards the task goal when the collision threat is clear. In all test cases the robot plans efficiently
for what the environment allows and executes the most effective movement towards the destination.

Responsiveness to Change

Adaptability to a fluid domain is demonstrated in the response to invisible obstacles and in dynamic obstacle interaction. Invisible obstacles are handled by the software in two ways. First, invoking the planning algorithm with each motion allows the robot to detect previously undetected obscura, even within the field of view. Second, the motion algorithm always verifies that a previously planned path is still available to it prior to executing motion. If necessary, the motion algorithm can send a "plan" message to invoke the planning algorithm if no paths remain available. Responsiveness to invisible obstacles was successfully demonstrated in test case 8 without loss of efficiency. Dynamic obstacles are more complex. For these situations, the planning software predicts the movement of the obstacles and incorporates the information into the plan. Here again the motion algorithm always verifies path availability prior to engendering movement along the trajectory. This is done in case the obstacle changes direction while a concurrent move occurs. New planning can be invoked at this time to consider the new information.

Timely Completion of Planning

Absolute time is, of course, a function of the hardware, CPU, compiler efficiency, etc. A more interesting comparison is the relative timeliness of the software in the different environments. Table 4.1 presents a comparison of the times, in milliseconds, required to execute the planning algorithm in a simple environment, an environment with static obstacles, and an environment with dynamic obstacles. Since the field of view can also affect the results, several variations are provided. The robot speed in all cases was set to two cells per time interval. The tests were conducted on an 80486 executing at 33 MHz. The compiler was Borland C++ version 3.1.
Table 4.1 Scenario Timing (milliseconds)

    Field of View                       3     5     7
    No Obstacles (Test Case 2)          48    51    82
    Static Obstacles (Test Case 4)      93    115   109
    Dynamic Obstacles (Test Case 11)    95    111   121

Efficient Application of Resources

This concept is demonstrated through the judicious use of the "enhanced sensor" capability to expand the robot field of view. As demonstrated in test cases 6 and 8, the robot only invokes the expanded field of view when it is required to obtain an unambiguous path. And, as also demonstrated in test cases 6 and 8, once the unambiguous path is obtained, the robot reverts back to the original field of view. In this application the only resource to which the usage concept applied is the field of view. This could easily be expanded within this architecture to incorporate speed, turning capabilities, etc.

Scalable Architecture

A scalable architecture allows for the easy expansion of the software design and implementation into more complex problem domains. In this case, the goal of a scalable architecture provided an impetus for the use of object-oriented design and programming methodologies. Object-oriented programming techniques such as inheritance, polymorphism, and function overloading were used in this design to facilitate reuse and expansion of the design. Design modifications could include: complex robot characteristics (directional field of view, variable speeds), multiple robots, a three-dimensional work area, etc. The scalable nature of the architecture was demonstrated by the way in which dynamic obstacles were added to the design.

In general, this research shows the applicability of linguistic geometry techniques to understanding the geometric properties and movement in dynamic hierarchical systems
such as mobile robot control. These techniques provide a formalized, domain-independent approach to such problems. Domain independence offers a rich set of applications in which to apply these techniques. The robot control architecture presented in this report is potentially a model for further applications in scheduling/planning, integrated circuit layout, and space vehicle navigation, as well as multiple robot (cooperative and opposing) control scenarios.
APPENDIX A
MOBILE ROBOT SIMULATION SOFTWARE

The software used to monitor the progress of the robot and control the environment was developed by the author of the thesis in parallel with the development of the application. The simulator is a Microsoft Windows 3.1 based program that graphically depicts the environment and the progress of the robot under test throughout an experiment.

The simulator essentially creates a factory-like environment for the robot. Static obstacles may represent equipment and work stations operating in the factory. The static obstacles can also be shaped into walls and rooms, simulating a partitioned factory floor with several different work areas to which the robot must navigate. Dynamic obstacles represent mobile elements operating in the factory. They may be other autonomous or remote-controlled robots, or some kind of mobile delivery system (e.g. fork lift, truck, etc.). The robot destination location represents tasking given to the robot. This task may be a job to accomplish, a product to examine, or a part to deliver.

The simulator provides three broad categories of scenario control. Environmental control defines the area in which the robot operates. This includes establishing obstacles and path display. Robot definition establishes a robot object in the work area and characterizes its capabilities. Robot control directs how the simulator will move the robot through the environment.

ENVIRONMENT CONTROL

Environmental control offers options to add obstacles in the work area and to specify display options.

Obstacle

Obstacles may be placed on the work area in two ways. A data file may be specified on simulator start-up that defines dynamic and static obstacles associated with the scenarios. An example data file is illustrated in figure A-1. Static obstacles may also be placed in the environment by selecting the Obstacle button and then pointing at the
desired location(s) on the work area, and selecting the left mouse button. An obstacle is registered with the Map object and the location is turned black for visual indication. An obstacle can be removed in a similar manner. Obstacles can not be placed on top of a current, start, or destination robot location. Static obstacles may be placed in the environment at any stage of an experiment.

    4                                      Number of dynamic obstacle cells
    (15, 15) (16, 15) (16, 16) (16, 17)    Initial location of dynamic obstacle cells
    1 1 12 19 12 20                        Delta X, Delta Y, Min/Max X, Min/Max Y

Figure A-1 Example of a Dynamic Obstacle File Record

Path

Grid

The user can control two specific display options through the simulator software. Path toggles the display of the trajectories planned by the robot software. These trajectories are portrayed as numbered locations connected with thin dark lines (see the display icons described below for an example of the icons described here). The values at the locations represent distances, calculated by the robot software, from the previous robot location. The extent of the numbering illustrates the horizon of the robot in the current time interval. A thicker green line shows the as-executed robot path. The location containing the green icon specifies the robot location in the current time interval. The as-executed path and current robot location are always displayed and are not controlled from the simulator software.
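The record layout of Figure A-1 can be read with a small parser. The sketch below assumes whitespace-separated fields with literal parentheses and commas around each coordinate pair; the struct and function names (ObstacleRecord, parseObstacleRecord) are invented for illustration and differ from the simulator's own ObsDef_t in Appendix B.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Hypothetical parsed form of one dynamic-obstacle file record.
struct ObstacleRecord {
    std::vector<std::pair<int, int>> cells;      // initial cell locations
    int deltaX = 0, deltaY = 0;                  // movement per time interval
    int minX = 0, maxX = 0, minY = 0, maxY = 0;  // bounds of movement
};

// Parse one record: a cell count, that many "(x, y)" pairs, then
// delta X, delta Y, and the min/max X and Y movement bounds.
ObstacleRecord parseObstacleRecord(std::istream& in) {
    ObstacleRecord r;
    int n = 0;
    in >> n;
    for (int i = 0; i < n; ++i) {
        char sep;   // consumes '(', ',' and ')' around each coordinate pair
        int x, y;
        in >> sep >> x >> sep >> y >> sep;
        r.cells.push_back({x, y});
    }
    in >> r.deltaX >> r.deltaY >> r.minX >> r.maxX >> r.minY >> r.maxY;
    return r;
}
```

Feeding the example record from Figure A-1 through this parser yields four initial cells and a (1, 1) per-interval step bounded by x in [12, 19] and y in [12, 20].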
ROBOT DEFINITION

A robot is fundamentally defined by its speed, horizon, and start and destination locations.

Speed

Field View

The Speed and Field View parameters characterize the capabilities of the robot in the current experiment. Speed indicates maximum velocity in units of cells per time interval. The robot is allowed total maneuverability without loss of velocity. That is, the robot may turn in any direction or even go backwards without losing any capability to move at full speed. Field View specifies the nominal horizon for the robot. This is represented in units of cells in all directions. The extent of the horizon is unaffected by obstacle interference. It is, however, limited by the edges of the work area, i.e., there is no "wrap around" on the work area. Once the robot has begun an experiment, these parameters can not be modified via the simulation software. The horizon may be temporarily expanded by the robot software under circumstances described in Sections 3.3 and 3.4.

Start

Destination

Start and Destination create the respective locations on the work area. Once the button is selected, the user simply points at the desired location in the work area and selects the left mouse button. These locations can be modified in this manner up until experiment execution begins. The destination goal can be modified while the robot is actively pursuing a goal. This simulates the robot receiving higher priority tasking while conducting a mission. The simulation software will not engender movement in the work area unless a start and a destination location are specified on the work area.
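Because the robot may turn in any direction without losing speed, the set of cells reachable in one time interval is a square centered on the robot, and the natural distance measure is the Chebyshev (chessboard) metric. This mirrors robotMap::MeasDist in Appendix B; the function names below are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>

// Chebyshev distance between two cells: the max of the x and y
// offsets.  With full maneuverability this equals the number of time
// intervals needed to cover the offset at speed 1.
int measDist(int fromX, int fromY, int toX, int toY) {
    return std::max(std::abs(fromX - toX), std::abs(fromY - toY));
}

// A cell is reachable within one time interval if its Chebyshev
// distance does not exceed the robot's speed (cells per interval).
bool reachableInOneInterval(int fromX, int fromY, int toX, int toY, int speed) {
    return measDist(fromX, fromY, toX, toY) <= speed;
}
```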
ROBOT CONTROL

The software offers two methods to control the movement of the robot: Execute and Step. An additional function, Reset, terminates the current experiment and allows for definition of a new scenario.

Execute causes the robot to plan and move to the conclusion of the task. Built into the simulation is a one-second delay between time intervals to allow the user the ability to view the progress of the experiment. Upon selection of this option, the simulator software verifies that a start and task destination are established. Once verified, the one-second timer is started. When the timer expires, a Move message is sent to the robot. The robot's Move method determines if and when new planning is required. The simulation software queries the robot object for trajectory and path information, as well as querying the Map object for obstacle movement. While in the execute mode, static obstacles and display features may be modified between time intervals using the environmental controls discussed earlier.

Step executes a single time interval in the simulator. At the conclusion of the time interval the simulator stops, waiting for more direction. This feature gives the user the capability of examining the selected path and planned trajectory data before moving on to the next time interval. Similar to Execute mode, robot trajectory, path, and obstacle movement data are collected and displayed after the step completes. The user can utilize the environmental controls to display path data, add static obstacles, etc. between steps. The robot task destination can also be changed between stepping actions.

Reset causes the current experiment to complete. If this action is taken while the robot is in transit to a task destination, it eliminates current path information and
immediately moves the robot start location to the task destination. If the robot has already reached the task destination, it performs a similar function, basically making the current location the new start location. It is important to note that this control is mandatory between experiments.

DISPLAY ICONS

This icon represents a dynamic obstacle group of four cells. These four cells move in a certain direction and at a certain velocity throughout an experiment.

This icon represents a static obstacle occupying three locations. Static obstacles do not move in an experiment.

This icon represents the robot start location. This appears only at the start of a task.

This icon represents the robot task destination. This is displayed for the duration of the task.

These icons represent the planned trajectories, as-executed robot path, and the current location of the robot.
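The Execute mode described above amounts to a simple loop: each time interval, send the robot a Move message, query for display data, and check for completion. The sketch below stubs out the robot and omits the one-second timer and display calls; only the control flow follows the text, and the status codes match those defined in Robot.h in Appendix B.

```cpp
#include <cassert>

// Status codes as defined in Robot.h (Appendix B).
const int STUCK = -1;
const int OK = 0;
const int DONE = 1;

// Stand-in for the Robot object: reports DONE after a fixed number of
// moves.  The real Robot::Move plans and advances along a trajectory.
struct StubRobot {
    int movesUntilDone;
    int Move() { return (--movesUntilDone <= 0) ? DONE : OK; }
};

// One pass of Execute mode: repeat time intervals until the robot
// reports DONE or STUCK, returning how many intervals elapsed.
int runExecuteLoop(StubRobot& robot) {
    int intervals = 0;
    for (;;) {
        // (real simulator: one-second timer expires here, then it
        // queries the robot for trajectory/path data and the Map
        // object for obstacle movement)
        int status = robot.Move();
        ++intervals;
        if (status == DONE || status == STUCK)
            return intervals;
    }
}
```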
APPENDIX B

SOURCE CODE

Project.h

#ifndef _PROJECTH_
#define _PROJECTH_
/**************************************************************************
**
** project.h
** define structures and macros used by all classes in the model.
**
**************************************************************************/
#include
#include
#include
#include
#include "Set.h"
#include "List.h"

// find the maximum or minimum of two values
#define MAX(a,b) ((a) < (b) ? (b) : (a))
#define MIN(a,b) ((a) < (b) ? (a) : (b))

// define a single byte
typedef unsigned char Byte_t;

// define a 2-D location
typedef struct {
    int x;
    int y;
} Location_t;

// define a cell structure in a graph
typedef struct {
    Location_t loc;
    Byte_t obstacle;      // flag indicates a static obstacle
    Byte_t dist;
    Byte_t blocked;       // flag indicates a dynamic obstacle
    Byte_t predBlock;     // flag indicates if robot predicts dynamic obs.
                          // shall be in this location
    Byte_t willBeBlocked; // flag indicates this cell will be blocked by
                          // completion of time increment
    Byte_t pathBlock;     // flag indicating this cell was used by an
                          // obstacle to get to a new location
    Byte_t numChild;
    Byte_t numParent;
    void *children;       // this is cast as graphList in operations
    void *parents;        // also cast as graphList in operations
    void *nxtOnPath;      // signifies next node on chosen robot path
} Graph_t;
// define a structure for distance display
typedef struct {
    int x;
    int y;
    int dist;
} Dist_t;

// define a structure for obstacle information
typedef struct {
    Byte_t id;
    Byte_t dynamic;
    int deltaX;
    int deltaY;
    int elemCnt;
    Location_t *loc;
} ObsDef_t;

const int PATH_CLEAR = 4;
const int PATH = 3;
const int NEW = 2;
const int SET = 1;
const int CLEAR = 0;

// create templates for Lists and Sets of graph cells
typedef Set<Graph_t> graphSet;
typedef List<Graph_t> graphList;

#endif
robotMap.h

#ifndef _ROBOTMAP
#define _ROBOTMAP

#include "project.h"
#include
#include

/***************************************************************************/
class robotMap
{
public:
    /* constructor - create the work area and assign distances */
    robotMap(int sizeX = 100, int sizeY = 100);

    /* destructor - delete allocated mapping structures */
    ~robotMap();

    // toggle the state of a location on the graph
    void obsState(Location_t loc);

    // explicitly set the state of a location on the graph
    void obsState(Location_t loc, int state);

    // check for obstacle in graph
    int obsCheck(Location_t thisLoc);

    // clear trajectory information from all cells
    void clearAbs(void);

    // clear path information from all cells
    void ClearPath(void);

    // measure the theoretical (unimpeded) distance between 2 points
    int MeasDist(Location_t from, Location_t to);

    // initialize the distance member of area structure
    void DistInit(void);

    // get all distances
    Dist_t* GetDist(int *cnt);

    // generate a list of cells adjacent to input location
    graphList* GenAdj(Location_t loc, int robSpeed);

    // convert a location to a cell on the graph
    Graph_t* compPtr(Location_t loc);
    // add a parent cell to list
    int AddParent(Graph_t*, Graph_t*);

    // add a child cell to list
    int AddChild(Graph_t*, Graph_t*);

    // get a parent list from a cell
    graphList* GetParent(Graph_t*);

    // return the size of the work area
    void GetWorkAreaSize(int *xSize, int *ySize);

    // clear the predicted obstacle flag for the range of locations
    void ClearPredObs(int xMin, int xMax, int yMin, int yMax);

    // set the predicted obstacle flag for the given location
    void SetPredObs(Location_t loc);

    // make a list of obstacle information available
    List<ObsDef_t>* GetObsList();

    // set up a list of obstacles
    int SetObsList(Byte_t dyn, int groupSz, int deltaX, int deltaY, Location_t *def);

    // update obstacle location for the specified id
    void UpdateObsList(Byte_t handle, Location_t *loc, int state, int upDx, int upDy);

private:
    Graph_t *absArea;         // work area
    List<ObsDef_t> *obsList;  // list of dynamic obstacles
    int obsCnt;               // count of registered dynamic obstacles
    int maxmX;                // maximum allowable size of X dimension
    int maxmY;                // maximum allowable size of Y dimension
};

#endif
robotMap.cpp #include "robotMap.h" #include
#include
#include
II Area map constructor robotMap::robotMap(int xSize, int ySize) { int x,y,minDist, maxDist,llX,llY,urX,urY,xff,i,j; char nxt0bs(80]; Graph_t *absPtr; Byte_t *obsFlag; II create the work area absArea =new Graph_t[xSize*ySize]; II create the dyanmic obstacle list obsList = new List
; obsCnt = 0; II set x.y maximums maxmX = xSize; maxmY = ySize; memset(absArea, 0, xSize*ySize*sizeof(Graph_t)); II initialize the work area absPtr = absArea; for(y=O; y < ySize; y++) for(x=O; x < xSize; x++) { absPtr->loc.x = x; absPtr->loc.y = y; absPtr->dist = 255; absPtr++; II area map destructor robotMap: :-robotMapO { delete absArea; delete obsList; II toggle the state of given location between obstacle/clear void robotMap::obsState(Location_t loc) { Graph_t *obsCell; obsCell = compPtr(loc); if(obsCell->obstacle = 1) 100
obsCell->obstacle = 0; else obsCell->obstacle = 1; II set the state of a location void robotMap::obsState(Location_t loc, int state) { Graph_t *obsCell; obsCell = compPtr(loc); if(state = NEW && !obsCell->obstacle) { obsCell->willBeBlocked = 1; } else if(obsCell->blocked >= 1 && state = CLEAR) { obsCell->blocked--; } else if(state = SET) { obsCell->willBeBlocked = 0; obsCell->blocked++; } else if(state =PATH) obsCell->pathBlock = 1; else if(state = PATH_CLEAR) obsCell->pathBlock = 0; II convert a locaiton into a pointer to the cell containing that location Graph_t* robotMap::compPtr(Location_t loc) { return absArea + ((loc.y maxmX) + loc.x); II resolve if the location blocked by an obstacle (static/dynamic) int robotMap: :obsCheck(Location_t thisLoc) { Graph_t *thisCell; thisCell = compPtr(thisLoc); if(thisCell->obstacle) return 1; else if (thisCell->blocked) return 2; else return 0; II clear the map of all trajectory information void 101
robotMap: :clearAbsO { Graph_t *nxtCell; int cnt = maxmX maxmY; while(cnt) { cnt--; nxtCell = absArea + cnt; if(nxtCell->children) delete nxtCell->children; if(nxtCell->parents) delete nxtCell->parents; nxtCell->children = 0; nxtCell->parents = 0; nxtCell->numChild = 0; nxtCell->numParent = 0; II clear the map of all as-executed path information void robotMap:: ClearPathO { Graph_t *nxtCell; int cnt = maxmX maxmY; while(cnt) { cnt--; nxtCell = absArea + cnt; nxtCell->nxtOnPath = NULL; II reset all location distances void robotMap:: Distlnit(void) { int maxm = maxmX maxmY, i; for(i=O; i < maxm; i++) (absArea+i)->dist = 255; II return a list of distances for display Dist_t* robotMap::GetDist(int *cnt) { Dist_t *dists =new Dist_t[900]; inti, x, y; *cnt = 0; for(y=O; y < maxmY; y++) { for(x=O; x < maxmX; x++) { if((absArea+{(y maxmX) + x))->dist < 255) { dists[*cnt].x = x; 102
} dists[*cnt].y = y; dists[*cnt].dist = (absArea+((y maxrnX) + x))->dist; (*cnt)++; return dists; II measure distances from the current location graphList* robotMap::GenAdj(Location_t cur, int robSpeed) { graphList *adjList =new graphList; Graph_t *newLoc, *curCell; inti, j, curDist; curCell = absArea + ((cur.y maxrnX) + cur.x); curDist = curCell->dist + 1; II do for all adjacent cells for (i=cur.y-1; i <= cur.y + 1; i++) { } II is location on the arena? if(i >= 0 && i < maxrn Y) { forG=cur.x-1; j <= cur.x + 1; j++) { II on the arena & don't reprocess current location again if(G >= 0 && j < maxmX) && G != cur.x I I i 1= cur.y)) { newLoc = absArea + ((i maxrnX) + j); II does location contain an obstacle? if(!newLoc->obstacle && !newLoc->blocked && (curDist > robSpeed II 1newLoc->pathBlock) && (curDist > robSpeed II !newLoc->willBeBlocked) ) if(new Loc->dist = 255) { } adj List-> Addltem (new Loc); AddParent(curCell, newLoc); II add a new parent link II to curCell else if(newLoc->dist = curDist) AddParent(curCell, newLoc); II add a new parent link II to curCell return adjList; 103
II add a parent to call int robotMap::AddParent (Graph_t *parCell, Graph_t *chldCell) { if(!chldCell->numParent) chldCell->parents = new graphList; if(((graphList *)chldCell->parents)->Addltem(parCell)) { } else chldCell->numParent++; return 1; return 0; II add a child to a parent cell int robotMap::AddChild (Graph_t *parCell, Graph_t *chldCell) { if(!parCe ll->n um Child) parCell->children = new graphList; if(((graphList *)parCell->children)->Addltem(chldCell)) { } else parCell->numChild++; return 1; return 0; II retrieve the parent list from a cell graphList* robotMap::GetParent (Graph_t *cell) { if(cell->numParent) return ((graphList *)cell->parents); else return (graphList *)NULL; II perform a simple column/row distance measurement int robotMap::MeasDist(Location_t from, Location_t to) { int absX, absY, max; II use the max of I xI I y I as distance absX = abs(from.x-to.x); absY = abs(from.y-to.y); max= MAX(absX, absY); return max; 104
II return the work area size void robotMap::GetWorkAreaSize (int *xSize, int *ySize) { *xSize = maxmX; *ySize = maxm Y; II clear the predicted obstacle flags for a range of locations void robotMap::ClearPredObs (int xMin, int xMax, int yMin, int yMax) { int x,y; Graph_t *cell; Location_t loc; II do for range of x,y locations for(y=O; y < maxmY; y++) { for(x=O; x < maxmX; x++) { II make up a location structure loc.x = x; loc.y = y; II get the cell data associated with this location cell= compPtr(loc); cell->predBlock = 0; II set the predicted obstacle flag for a given location void robotMap: :SetPredObs(Location_t loc) { Graph_t *cell; cell= compPtr(loc); cell->predBlock = 1; II register an obstacle int robotMap::SetObsList(Byte_t dyn, int groupSz, int deltaX, int deltaY, Location_t *deO { ObsDef_t *obsSt; Location_t *cLoc; inti; II extract the pertinent fields obsSt =new ObsDef_t; obsSt->id = ++obsCnt; obsSt->dynamic = dyn; 105
obsSt->elemCnt = groupSz; obsSt->deltaX = deltaX; obsSt->delta Y = delta Y; obsSt->loc =new Location_t[obsSt->elemCnt]; memcpy(obsSt-> loc, def,obsSt->elem Cnt sizeof(Location_t)); II define the initial obstacle location on the this map cLoc = obsSt->loc; for (i=O; i < groupSz; i++) { obsState(*cLoc, SET); cLoc++; obsList-> Addltem(obsSt); return obsSt->id; II update the parameters for a registered obstacle void robotMap::UpdateObsList(Byte_t handle, Location_t *loc, int state,int upDx, int upDy) { ObsDef_t *obs; inti; Location_t *cLoc; II find the obstacle matching this handle obs = obsList->GetFirstO; while(obs) { if(obs->id =handle) break; else obs = obsList->GetNextO; } if(obs) { II now set the new obstacles and update in the list cLoc = obs->loc; for(i=O; i < obs->elemCnt; i++) { } *cLoc = *(loc+i); obsState(*cLoc, state); cLoc++; obs->deltaX = upDx; obs->delta Y = upDy; II make up a list of obstacle information from registered obstacles List
* robotMap: :GetObsListO { return obsList; 106
Obstacle.h

#ifndef _OBSTACLE_
#define _OBSTACLE_

#include "Project.h"
#include "robotMap.h"
#include
#include

/***************************************************************************
 Define an Obstacle class
***************************************************************************/
class Obstacle
{
public:
    /* constructor for moving obstacle */
    Obstacle(robotMap *areaMap, int groupSz, Location_t *groupDef,
             int deltaX, int deltaY, int xMin, int xMax, int yMin, int yMax);

    /* destructor */
    ~Obstacle();

    /* movement method */
    int Move(void);

    void Reset(void);

    /* complete the move by removing the obstacle from the locations
       where it was at the beginning of the move cycle */
    void CompleteMove(void);

private:
    inline int GetSign(int val)
    {
        if (val > 0)
            return 1;
        else if (val < 0)
            return -1;
        else
            return 0;
    }

    Obstacle();               // No-argument constructor is not legal

    Byte_t myId;              // the handle for this obstacle
    int obGroupSz;            // size of obstacle group
    robotMap *obAreaMap;      // local copy of area map
    Location_t *startDef;     // starting locations for obstacle
    Location_t *curGroupDef;  // current obstacle locations
    Location_t *shGroupDef;   // shadow of obstacle after a move
    Location_t *intGroupDef;  // intermediate locations occupied by
                              // obstacle on way to new location
    int obDeltaX;             // movement in X direction
    int obDeltaY;             // movement in Y direction
    int startDeltaX;          // starting delta X
    int startDeltaY;          // starting delta Y
    int obxMin, obxMax, obyMin, obyMax;  // area of movement
    int areaX, areaY;         // local copy of arena definition
    int moveSize;             // number of intermediate locations in a move
    int intCnt;               // size of intermediate area
};

#endif
Obstacle.cpp /************************************************************ ** Define the obstacle class methods. ***********************************************************/ #include "Obstacle.h" II Dynamic Obstacle Obstacle::Obstacle(robotMap *areaMap,int groupSz, Location_t *groupDef, int deltaX, int deltaY, int xMin, int xMax, int yMin, int yMax) II set up obstacle defintion for this object obAreaMap = areaMap; obGroupSz = groupSz; obDeltaX = startDeltaX = deltaX; obDeltaY = startDeltaY = deltaY; moveSize = MAX(obDeltaX, obDeltaY); obxMin = x.Min; obxMax = xMax; obyMin = yMin; obyMax = yMax; II get and store the work area dimensions obAreaMap->GetWorkAreaSize(&areaX, &area Y); II record obstacle initial location, current, shadow struct for moving startDef =new Location_t[groupSz]; memcpy(startDef, groupDef, groupSz sizeof(Location_t)); curGroupDef =new Location_t[groupSz]; memcpy(curGroupDef, groupDef, groupSz sizeof(Location_t)); shGroupDef =new Location_t[groupSz); memcpy(shGroupDef, groupDef, groupSz sizeof(Location_t)); II create a structure to maintain the intermediate locations through II which an obstacle moves to get to its next location. if(moveSize > 1) { intGroupDef =new Location_t[moveSize groupSz]; II add obstacle to areaMap list my Id = obAreaMap->SetObsList(l ,groupSz,deltaX,delta Y ,curGroupDef); II destructor Obstacle: :-Obstacle 0 { if(curGroupDef) delete curGroupDef; if(startDef) delete startDef; if(shGroupDef) delete shGroupDef; if(moveSize > 1) 109
delete intGroupDef; void Obstacle:: ResetO { int inti; Location_t *cLoc; for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); obAreaMap->obsState(*cLoc, CLEAR); memcpy(curGroupDef, startDef, obGroupSz sizeof(Location_t)); II update obstacle location on the area map obAreaMap->UpdateObsList(myld, curGroupDef, SET, startDeltaX, startDeltaY); memcpy(shGroupDef, startDef, obGroupSz sizeof(Location_t)); obDeltaX = startDeltaX; obDeltaY = startDeltaY; 0 bstacle:: Move(void) { int dx, dy, i, xCnt, yCnt, xSign, ySign; Location_t *cLoc, *intLoc; int magDeltX = abs(obDeltaX); int magDeltY = abs(obDeltaY); dx = obDeltaX; dy = obDeltaY; II determine if any part of II obstacle leaves the assigned area of the work area itself II reverse the direction of the obstacle if so for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); if({(cLoc->x + dx) < obxMin) I I ((cLoc->x + dx) > obxMax) I I ((cLoc->x + dx) < 0) I I ((cLoc->x + dx) > areaX)) obDeltaX = -dx; if(((cLoc->y + dy) < obyMin) I I ((cLoc->y + dy) > obyMax) I I ((cLoc->y + dy) < 0) I I ((cLoc->y + dy) > area Y)) obDelta Y = -dy; II copy current definition of obstacle to shadow locations memcpy(shGroupDef, curGroupDef, obGroupSz sizeof(Location_t)); II if the obstacle speed is greater than 1 in x or y then II must calculate here the path the obtacle will travel to get to 110
II its new locations. xSign = GetSign(obDeltaX); xCnt = xSign; ySign = GetSign(obDeltaY); yCnt = ySign; intCnt = 0; while((abs(xCnt) < magDeltX) I I (abs(yCnt) < magDeltY)) { II compute intermediate locations and set as blocked for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); intLoc = (intGroupDef + intCnt); intLoc->x = cLoc->x + xCnt; intLoc->y = cLoc->y + yCnt; intCnt++; obAreaMap->obsState(*intLoc, PATH); II increment to next set of x.y locations xCnt += xSign; yCnt += ySign; II one more time through the loop, this time move the obstacle II in the direction and range set by obDeltaX and obDelta Y for(i=O; i < obGroupSz; i++) { cLoc = (curGroupDef+i); cLoc->x += obDeltaX; cLoc->y += obDelta Y; II update obstacle location on the area map obAreaMap->UpdateObsList(myld, curGroupDef, NEW, obDeltaX, obDeltaY); return 1; II remove where the obstacle was in the current move cycle void Obstacle:: CompleteMove(void) { inti; Location_t *cLoc; for(i=O; i < obGroupSz; i++) { cLoc = (shGroupDef+i); obAreaMap->obsState(*cLoc, CLEAR); for(i=O; i < obGroupSz; i++) { 111
cLoc = (curGroupDef+i); obAreaMap->obsState(*cLoc, SET); II clear off intermediate locations over which obstacle travelled for(i=O; i < intCnt; i++) { cLoc = (intGroupDef+i); obAreaMap->obsState(*cLoc, PATH_ CLEAR); 112
Robot.h

#ifndef _ROBOT_
#define _ROBOT_

#include "Project.h"
#include "robotMap.h"
#include
#include

/***************************************************************************/
const int STUCK = -1;
const int OK = 0;
const int DONE = 1;
const int areaSizeX = 30;
const int areaSizeY = 30;

typedef enum {
    transBlocked,
    transBack,      // blocked from max range of fov
    transFree,
    transDest       // transition cell is the destination
} mode_t;

class Robot
{
public:
    /* constructor */
    Robot(int robotSpeed, int argMaxExtent, robotMap *argMap);

    /* destructor */
    ~Robot(void);

    /* forecast mobile obstacle movement in FOV */
    void PrdctDynObs(int curDist, int fovMinX, int fovMaxX, int fovMinY, int fovMaxY);

    /* plan the path(s) that are admissible for this situation */
    void Plan(Location_t start, Location_t destin);

    /* generate robot movement along an optimal path */
    int Move(Location_t, Location_t);   // return reached/not reached destination, or stuck

    /* return the locations passed thru in the last move */
    Location_t* path2Loc(int *);        // return number of locations

    /* return the head of the graph for the robot path */
    inline Graph_t* getStartNet() { return graphHead; };

    // clear existing path data
    inline void clearPaths(void) { myMap->clearAbs(); };

    // return extents set up from last planning
    inline void GetPlanExt(int *minX, int *maxX, int *minY, int *maxY)
    {
        *minX = minExtX;
        *maxX = maxExtX;
        *minY = minExtY;
        *maxY = maxExtY;
    };

private:
    // determine linear distance between two points using coordinates
    inline int calcDist(Location_t from, Location_t to);

    // make the empty virtual Move call private
    int Move(void);

    robotMap *myMap;        // Local copy of area map
    int mySpeed;            // Velocity
    Location_t myDestin;    // Current destination
    Graph_t *graphHead;     // Start of path
    Graph_t *currGrph;      // Current cell on path
    Location_t *lastMove;   // List of last locations visited
    int moveCnt;            // number of locations visited
    int maxExtent;          // nominal field of view
    int useExtent;          // current field of view
    int minExtX, maxExtX, minExtY, maxExtY;  // min/max horizon
    mode_t planMode;        // planning mode
    Queue<Graph_t> curQue, ndLst;  // Planning queues
};

#endif
Robot.cpp #include "Robot.h" #include
#include
II constructor create the data structures for this robot Robot::Robot(int robotSpeed, int argMaxExtent, robotMap *argMap) { mySpeed = robotSpeed; II number of nodes reachable in one time step myMap = argMap; lastMove =new Location_t[robotSpeed]; moveCnt = 0; planMode = transFree; graphHead = NULL; currGrph = NULL; maxExtent = argMaxExtent; II maximum extent for path planning useExtent = maxExtent; II destructor delete this robot Robot::-RobotO { delete lastMove; II forecast mobile obstacle movement in the current field of view void Robot::PrdctDynObs(int curDist, int fovMinX, int fovMaxX, int fovMinY, int fovMaxY) { Location_t locPred; int i, timeCnt; ObsDef_t *obs; List
*obsList; Location_t *obsLoc; II get a list of all obstacle definitions. The nature of II this function (i.e. what is returned) must change if the robot has II to figure out the obstacle movement pattern. obsList = myMap->GetObsListO; II compute locations at previous time step since obstacle already moved timeCnt = curDist-1; obs = obsList->GetFirstO; II while mobile obstacle not procesed wh.ile(obs) { II if current obstacle is mobile if(obs->dynamic) { II compute location at time step-1 (time step -1 *delta (x andy) if(timeCnt >= 0) { obsLoc = obs->loc; 115
II for each element of mobile obstacle for(i=O; i < obs->elemCnt; i++) { II if current location is inside field of view then if((obsLoc+i)->x >= fovMinX && (obsLoc+i)->x <= fovMax.X && (obsLoc+i)->y >= fovMin Y && (obsLoc+i)->y <= fovMaxY) locPred.x = (obsLoc+i)->x + (timeCnt obs->deltaX); locPred.y = (obsLoc+i)->y + (timeCnt obs->deltaY); II if location is inside field of view then if(locPred.x >= 0 && locPred.x <= 29 && locPred.y >= 0 && locPred.y <= 29) II set predicted flag for cell at new location my Map->SetPredObs(locPred); II plan robot trajectory paths void Robot::Plan (Location_t start, Location_t destin) { int tDist, cnt, done=O, startDist, mapDist, sumDist, i, chkDone=O; int fovMinX, fovMaxX, fovMinY, fovMaxY; int compDist, maxDist=O, distDest, ddist, minDest=255; int lastDist, travelDist, adjDist = 0; Graph_t *grPtr, *adjPtr, *maxLoc, *transCell, *curCell, *parCell, *anothCell; Graph_t *startCell; graphList *adjList, *mapl..ist, *transl..ist, *parl..ist, *backList, *anothList; Location_t curLoc; II clear any existing path (link) information myMap->clearAbsQ; II initialize distance in work area my Map-> DistlnitO; graphHead = myMap->compPtr(start); II current node is header node of this plan currGrph = graphHead; II compute area boundary for field of view (xmin, xmax, ymin, ymax) curLoc = currGrph->loc; if((fovMinX = curLoc.x-use Extent) < 0) fovMinX = 0; if((fovMaxX = curLoc.x +use Extent)> 29) fovMaxX = 29; if((fovMin Y = curLoc.y-use Extent)< 0) 116
fovMinY = 0; if((fovMaxY = curLoc.y + use Extent) > 29) fovMaxY = 29; II initialize min/max x,y extents minExtX = 30; maxExtX = 0; minExtY = 30; maxExtY = 0; II calculte theoretical (unimpeded) distance from start to destination tDist = myMap->MeasDist(start, destin); II create a list to contain the boundary locations as transition locs mapList = new graphList; backList = new graphList; grPtr = myMap->compPtr(start); grPtr->dist = 0; II do for each entry in the list while(grPtr) { startDist = grPtr->dist + 1; II get start distance for this set if(grPtr->dist > maxDist) maxDist = grPtr->dist; II generate list cells that are adjacent to this location that are II not obstacles adjList = myMap->GenAdj(grPtr->loc, mySpeed); chkDone = 0; II flag to post a boundary condition only once II for each cell in adjacent list adjPtr = adjList->GetFirstO; while(adjPtr) { II if still in field of view mapDist = myMap->MeasDist(start, adjPtr->loc); if(mapDist <= useExtent) { adjPtr->dist = startDist; II place into proper node list for further expansion ndLst. put(adjPtr); II create an adjacent list until boundary location is found if(startDist > adjDist) { adjDist = startDist; II destroy the old list, create a new, add the item if(backList) delete backList; back.List = new graphList; backList-> Addlte m (adj Ptr); 117
} else { else if(startDist = adjDist) { backList->Addltem(adjPtr); II check for minimum/maximum x,y condition if(grPtr->loc.x < minExtX) minExtX = grPtr->loc.x; if(grPtr->loc.x > maxExtX) maxExtX = grPtr->loc.x; if(grPtr->loc.y < minExtY) minExtY = grPtr->loc.y; if(grPtr->loc.y > maxExtY) maxExtY = grPtr->loc.y; II put boundary location in transition list if(!chkDone) { chkDone = 1; mapList->Addltem(grPtr); if((distDest = myMap->MeasDist(grPtr->loc, destin)) < minDest) minDest = distDest; II get next cell in adjacent list adjPtr = adjList->GetNextO; delete adjList; grPtr = ndLst.getO; ndLst.clearO; II moving into the path generation phase. First determine the best tran/1 sition points. This is accomplished by finding locations that were II mapped in the above algorithm closest to the final destination. II Paths are generated from start location to these "best" transtion //locations. transCell = myMap->compPtr(destin); planMode = transFree; II if destination is not reachable in the current field of view if(transCell->dist = 255) { II if destination location is being blocked and distance to II destination is within speed of robot and robot will not get clobbered 118
if((transCell->blocked I I transCell->willBeBlocked I I transCell->pathBlock) && (tDist <= mySpeed) && !currGrph->willBeBlocked) transList = NULL; II is minimum dist to destination >=theoretical distance from start? else if(minDest >= tDist) { if(minDest < 255) { } II this means our best transition locations are all in worse II theoretical position than our start. Generate paths to these II locations anyway, since they are the best we could get. if(backList) delete backList; transList = mapList; planMode = transBlocked; II if no maplist was created then robot was blocked from the II maximum extent of travel. Use the backup list to get as far II as we can. else { planMode = transBack; transList = backList; II else find best locations as transition points else { if(backList) delete backList; distDest = 255; transList = (graphList *)NULL; transList =new graphList; II do until best distance is found maxLoc = mapList->GetFirstO; while(maxLoc) { II compute total distance from current point ddist = myMap->MeasDist(maxLoc->loc, destin); II this check replaces an algorithm that would determine the II true distance of a transition cell to the destination (at II least thru the fov). if(ddist < tDist) { compDist = maxLoc->dist + ddist; if(compDist < distDest) { II destroy the existing "best" list if( trans List) delete transList; 119
} else { } } transList = new graphList; transList->Addltem(maxLoc); distDest = compDist; else if (compDist = distDest) { II put new location on best list transList->Addltem(maxLoc); II next member maxLoc = mapList->GetNextQ; delete mapList; delete mapList; planMode = transDest; transList = new graphList; transList-> Add! tern (transCell); II for each transition cell, generate the parent-child relationship if(transList) { II clear all previous predicted obstacle flags in the field of view myMap->ClearPredObs(fovMinX, fovMaxX, fovMin Y, fovMaxY); lastDist = 0; startCell = myMap->compPtr(start); transCell = transList->GetFirstQ; w bile (transCell) { curCell = transCell; II do until start location is obtained w hile(curCell) { II compute the affect of dynamic obstacle interaction at travel time II into scenario travelDist = (curCell->dist/mySpeed) + (curCell->dist% mySpeed); if(travelDist != lastDist) { II clear all previous predicted obstacle flags in the field of view myMap->ClearPredObs(fovMinX, fovMaxX, fovMin Y, fovMaxY); if(travelDist > 0) II predict future locations only (already moved) PrdctDynObs(travelDist, fovMinX, fovMaxX, fovMinY, fovMaxY); lastDist = travelDist; 120
                }
                if(!curCell->predBlock)
                {
                    // for each parent of current cell, add a child
                    parList = myMap->GetParent(curCell);
                    if(parList)
                    {
                        parCell = parList->GetFirst();
                        while(parCell)
                        {
                            if(myMap->AddChild(parCell, curCell))
                                curQue.put(parCell);
                            parCell = parList->GetNext();
                        }
                    }
                }
                // add child cells to the transition list
                if(transCell->numParent)
                {
                    anothList = (graphList *)transCell->parents;
                    if(anothList)
                    {
                        anothCell = anothList->GetFirst();
                        while(anothCell)
                        {
                            if(!anothCell->willBeBlocked)
                                transList->AddItem(anothCell);
                            anothCell = anothList->GetNext();
                        }
                    }
                }
                curCell = curQue.get();
            }
            // get next transition cell
            transCell = transList->GetNext();
            curQue.clear();    // reinitialize the queue
        }
    }
    delete transList;
}

// make the robot move along a planned trajectory
int Robot::Move(Location_t start, Location_t destin)
{
    const int maxEval = -4096;
    const int childFactor = 0;
    const int willBeFactor = -100;
    const int blockedFactor = -200;
    const int onPathFactor = -200;
    graphList *currList;
    Graph_t *currNode, *bestNode, *startCell;
    int bestEval, linDist, n = 0, clrPath = 0, curEval, currDist, destDist;
    int pathDist, testDist;
    static int totMove = 0, firstTime = 0, chkDist;
    static Graph_t *destNode;

    moveCnt = 0;
    // if in transDest mode see if the destination is blocked
    if(planMode == transDest)
    {
        destNode = myMap->compPtr(destin);
        if(destNode->blocked || destNode->willBeBlocked)
            planMode = transFree;
    }
    // if this is the first move for the robot
    if((currGrph == NULL) || (destin.x != myDestin.x || destin.y != myDestin.y))
    {
        myDestin = destin;
        totMove = 0;
        destNode = myMap->compPtr(myDestin);
        myMap->ClearPath();
        // plan the initial path to the destination
        Plan(start, destin);
    }
    // verify that robot is not already at the destination
    else if((currGrph->loc.x == myDestin.x) && (currGrph->loc.y == myDestin.y))
    {
        *(lastMove + moveCnt) = currGrph->loc;
        moveCnt++;
        totMove++;
        return DONE;
    }
    // do I have a move from the current position or is it time to replan?
    else if(planMode != transDest || currGrph->numChild == 0)
    //if(currGrph->numChild == 0)
    {
        // add here to replan (Plan) from current location to destination
        totMove = 0;
        Plan(currGrph->loc, myDestin);
    }
    // if plan is blocked from destination
    if(planMode == transBlocked)
    {
        destDist = myMap->MeasDist(currGrph->loc, myDestin);
        if(destDist <= mySpeed)
        {
            destNode = myMap->compPtr(destin);
            if(destNode->blocked || destNode->willBeBlocked)
            {
                planMode = transFree;
                return OK;
            }
        }
        while(planMode == transBlocked)
        {
            // increment search extent and replan
            useExtent++;
            if(useExtent >= 29)
            {
                useExtent = maxExtent;
                return STUCK;    // fail this search
            }
            Plan(currGrph->loc, myDestin);
        }
        planMode = transDest;
        firstTime = 1;
        useExtent = maxExtent;    // reset to normal field of view
    }

    *(lastMove) = currGrph->loc;
    currDist = calcDist(currGrph->loc, myDestin);
    startCell = currGrph;
    n = 0;
    while(startCell && (n < mySpeed))
    {
        // examine current choices for movement
        currList = (graphList *)currGrph->children;
        if(!currList)
            break;
        currNode = currList->GetFirst();
        if(!currNode)
            break;
        bestNode = currGrph;    // best is current position
        bestEval = maxEval;
        while(currNode)
        {
            // eliminate this node from consideration if it contains an
            // obstacle, if my start node will be blocked and this node
            // is currently blocked, if this node will be blocked and this
            // is my final move for this plan.
            if(currNode->obstacle ||
               (startCell->willBeBlocked && currNode->blocked) ||
               (currNode->willBeBlocked && (n+1) == mySpeed))
            {
                // check to see if this is the destination
                if(currNode->loc.x == myDestin.x && currNode->loc.y == myDestin.y)
                    return STUCK;
            }
            // evaluate this node compared to others. Each location starts
            // with a measurement of how much closer it brings the robot to
            // the destination than the current location. The cell then gets
            // points for the number of children it has and loses points for
            // being warped in a certain way: willBeBlocked, blocked, previous
            // and previous path.
            else
            {
                // calculate how much closer this takes robot to destination
                linDist = calcDist(currNode->loc, myDestin);
                if(linDist == 0)    // currNode is the destination
                {
                    // found the destination, so pick this one and exit
                    currGrph = currNode;
                    *(lastMove + moveCnt) = currGrph->loc;
                    moveCnt++;
                    totMove++;
                    currGrph = NULL;
                    return DONE;
                }
                testDist = currDist - linDist;
                // add in the number of children on this node
                testDist += (currNode->numChild * childFactor);
                // take away for abnormalities
                if(currNode->willBeBlocked)
                    testDist += willBeFactor;
                if(currNode->blocked)
                    testDist += blockedFactor;
                if(currNode->nxtOnPath)
                    testDist += onPathFactor;
                if(testDist > bestEval)
                {
                    bestEval = testDist;
                    bestNode = currNode;
                }
            }
            currNode = currList->GetNext();
        }
        // did we find no acceptable nodes?
        if(bestEval == maxEval && startCell->willBeBlocked && moveCnt == 0)
        {
            // try replanning out of this one
            if(totMove > 0)
            {
                // clear paths may exist but are not planned, better replan
                currGrph->blocked = 0;
                totMove = 0;
                Plan(currGrph->loc, myDestin);
            }
            else    // I have no paths from the current location
            {
                moveCnt = 1;
                break;
            }
        }
        else if(bestEval == maxEval)    // try again next time
            break;
        else
        {
            // the path to follow is thru currNode
            currGrph->nxtOnPath = bestNode;
            if(bestNode->nxtOnPath)
                bestNode->nxtOnPath = NULL;
            currGrph = bestNode;
            *(lastMove + moveCnt) = currGrph->loc;
            moveCnt++;
            totMove++;
            n++;
        }
    }
    // if we are on a dedicated transition location path
    if(planMode == transDest)
    {
        if(firstTime)
        {
            firstTime = 0;
            chkDist = myMap->MeasDist(currGrph->loc, destin);
        }
        else
        {
            // calculate current distance to destination and compare to last
            // time through.
            // Reset flag if distance is now better
            pathDist = myMap->MeasDist(currGrph->loc, destin);
            if(pathDist >= chkDist)
                chkDist = pathDist;
            else
                planMode = transFree;
        }
    }
    return OK;
}

Location_t* Robot::path2Loc(int *numLocs)
{
    *numLocs = moveCnt;
    return lastMove;
}

inline int Robot::calcDist(Location_t from, Location_t to)
{
    return (((from.x - to.x) * (from.x - to.x)) + ((from.y - to.y) * (from.y - to.y)));
}
List.h

// List.h defines the template for a list class
#ifndef _LIST_
#define _LIST_

// A template class for doubly linked lists
template <class Tc>
class List
{
    // class Node;
public:
    List()    // Constructor (default) - no arguments required
        : ListHead(0), ListTail(0), Pointer(0)
    { };

    ~List()    // Destructor
    {
        Node *n1, *n2;
        n1 = ListHead;
        while (n1 != NULL)
        {
            n2 = n1->next;
            delete n1;
            n1 = n2;
        }
    };

    int AddItem(const Tc *t)
    // Add an element to the list only if it is
    // not already present
    {
        Node *n;
        n = ListHead;
        while (n)
        {
            if(n->item == t)
                return 0;
            // get next
            n = n->next;
        }
        n = new Node;
        n->item = (Tc *) t;
        n->next = 0;
        n->prev = ListTail;
        if (ListTail)
        {
            ListTail->next = n;
            ListTail = n;
        }
        else
            ListHead = ListTail = n;
        return 1;
    };
    void DeleteItem(Tc *t)    // Delete element from the list
    {
        Node *n;
        int cnt = 0;
        n = ListHead;
        while (n)
        {
            if(n->item == t)
            {
                if (n->prev)
                    n->prev->next = n->next;
                else
                    ListHead = n->next;
                if (n->next)
                    n->next->prev = n->prev;
                else
                    ListTail = n->prev;
                if (n == Pointer)
                    Pointer = n->next;
                delete n;
                break;
            }
            // get next
            n = n->next;
        }
    };

    Tc *GetFirst()    // Get the first item in the list (reset pointer)
    {
        Pointer = ListHead;
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };

    Tc *GetNext()    // Get the next item in the list
    {
        if (Pointer)
            Pointer = Pointer->next;
        else
            Pointer = ListHead;
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };

    Tc *GetItem() const    // Get the current item in the list
    {
        if (Pointer)
            return Pointer->item;
        else
            return 0;
    };
private:
    struct Node               // Nodes of the (doubly linked) list
    {
        Tc *item;             // Pointer to the user's data
        Node *next;           // Pointer to next item in the list
        Node *prev;           // Pointer to previous item in the list
    };

    Node *ListHead;           // The list of data elements
    Node *ListTail;           // Pointer to last element in the list
    Node *Pointer;            // Current position in the list
};

#endif
Queue.h

// Queue.h - defines the template for a Queue class
#ifndef _QUEUEH_
#define _QUEUEH_

const int MAX_QUEUE_SIZE = 900;

// A template class for queues. Use a simple pointer array as the underlying
// structure.
template <class Tc>
class Queue
{
public:
    Queue()    // Constructor (default) - no arguments required
        : qOut(0), qIns(0)
    { };

    ~Queue() {};    // Destructor

    // insert a new element into the queue
    int put(Tc *item)
    {
        if(qIns < MAX_QUEUE_SIZE)
        {
            ptrQueue[qIns] = item;
            qIns++;
            return 1;
        }
        else
            return 0;    // the queue is filled
    };

    // extract an element from the queue
    Tc* get(void)
    {
        if(qOut == qIns)
            return NULL;
        else
            return ptrQueue[qOut++];
    };

    int isEmpty()
    {
        if(qOut == qIns)
            return 1;
        else
            return 0;
    };

    void clear()
    {
        // reset pointers
        qIns = 0;
        qOut = 0;
    };

private:
    Tc *ptrQueue[MAX_QUEUE_SIZE];
    int qIns;
    int qOut;
};

#endif
BIBLIOGRAPHY

1. Craig, J. J. Introduction to Robotics: Mechanics and Control. 2nd Edition. New York: Addison-Wesley. 1989.

2. Ellis, M. A. and Stroustrup, B. The Annotated C++ Reference Manual. New York: Addison-Wesley. 1990.

3. Jones, J. L. and Flynn, A. M. Mobile Robots: Inspiration to Implementation. Wellesley, MA: A K Peters Ltd. 1993.

4. Martin Marietta Astronautics Group (MMAG). Technical Proposal: Intelligent Mobile Sensor System for Autonomous Monitoring and Inspection. Denver, CO. (1991): 11-1-11-25.

5. Parker, L. E. Heterogeneous Multi-Robot Cooperation. Doctoral Thesis, Massachusetts Institute of Technology. 1994.

6. Petzold, C. Programming Windows 3.1. 3rd Edition. Redmond, WA: Microsoft Press. 1992.

7. Rich, E. and Knight, K. Artificial Intelligence. 2nd Edition. New York: McGraw-Hill. 1991.

8. Stilman, B. "From Serial to Concurrent Motions in Multiagent Systems: A Linguistic Geometry Approach." Journal of Systems Engineering (To Appear 1996): 1-33.

9. Stilman, B. "Translations of Network Languages." An International Journal: Computers and Mathematics with Applications. Vol. 27, No. 2 (1994): 65-98.

10. Stilman, B. "A Syntactic Hierarchy for Robotic Systems." Integrated Computer Aided Engineering. Vol. No. 1 (1993): 57-82.

11. Stilman, B. "A Linguistic Approach to Geometric Reasoning." An International Journal: Computers and Mathematics with Applications. Vol. 26, No. 8 (1992): 29-58.
©Auraria Library
HI Jeremy I have an aspirin. I can help you with that. you need to create those files sw/airborne/arch/stm32/subsystems/imu/imu_aspirin_arch.c sw/airborne/arch/stm32/subsystems/imu/imu_aspirin_arch.h and then use spi and i2c to talk to the sensors you need to edit conf/autopilot/subsystems/shared/imu_aspirin.makefile add a ifeq ($(ARCH), lpc21) else ifeq ($(ARCH), stm32) endif to separate between architecture dependant sources and flags I2C part should be quite the same you can directly reuse the hardware independant I2C part which is in sw/airborne/subsystems/imu/imu_aspirin.[ch] you need to write a hardware dependant SPI driver for the accelerometer Don't hesitate to ask if you need more details Poine On Sat, Mar 19, 2011 at 1:03 AM, Jeremy Reinertsen <address@hidden> wrote: > Hello, > > Is anyone working on integrating Aspirin IMU or PPZUAV 9DOF IMU with LPC21 > based autopilots? > > Also, what is the EEPROM used for on the Aspirin IMU? > > I started looking through the code last night. Aspirin IMU is already done > for Lisa/L so it should just be a matter or reusing and reworking existing > examples. I think the first step is to drop a copy of > > imu_aspirin.makefile in to the right place but that's all I know so far. > > > > Cheers > > Jeremy > > _______________________________________________ > Paparazzi-devel mailing list > address@hidden > > > | https://lists.gnu.org/archive/html/paparazzi-devel/2011-03/msg00122.html | CC-MAIN-2022-27 | refinedweb | 226 | 63.49 |
Add, change or delete an environment variable
#include <stdlib.h> int putenv( char *env_name );
libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.
The putenv() function uses env_name, in the form name=value, to set the environment variable name to value. This function alters name if it exists, or creates a new environment variable.
In either case, env_name becomes part of the environment; subsequent modifications to the string pointed to by env_name affect the environment.
The space for environment names and their values is limited. Consequently, putenv() can fail when there's insufficient space remaining to store an additional value.
putenv( strdup( buffer ) );
The following gets the string currently assigned to INCLUDE and displays it, assigns a new value to it, gets and displays it, and then removes INCLUDE from the environment.
#include <stdio.h> #include <stdlib.h> int main( void ) { char *path; path = getenv( "INCLUDE" ); if( path != NULL ) { printf( "INCLUDE=%s\n", path ); } if( putenv( "INCLUDE=/src/include" ) != 0 ) { printf( "putenv() failed setting INCLUDE\n" ); return EXIT_FAILURE; } path = getenv( "INCLUDE" ); if( path != NULL ) { printf( "INCLUDE=%s\n", path ); } unsetenv( "INCLUDE" ); return EXIT_SUCCESS; }
This program produces the following output:
INCLUDE=/usr/nto/include INCLUDE=/src/include | http://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/putenv.html | CC-MAIN-2018-13 | refinedweb | 206 | 57.77 |
Difference between revisions of "Lab: SPARQL Programming"
Revision as of 10:02,
Redo all the SPARQL queries and updates from Lab 4, this time writing a Python program.
- SELECT all triples in your graph.
- SELECT all the interests of Cade.
- SELECT the city and country of where Emma lives.
- SELECT only people who are older than 26.
- SELECT Everyone who graduated with a Bachelor Degree.
- Use SPARQL Update's DELETE DATA to delete that fact that Cade is interested in Photography. Sergio.
- Write a SPARQL CONSTRUCT query that returns that: any city in an address is a cityOf the country of the same address.. Make sure that the URL you use with SPARQLWrapper has the same address and port as the one you get from running it. on your own computer yet, you can use the UiB blazegraph service: i2s.uib.no:8888/bigdata/#splash. Remember to create your own namespace like said above in the web-interface.
Alternatively, you can instead program SPARQL queries directly with RDFlib.
For help, look at the link below: | https://wiki.app.uib.no/info216/index.php?title=Lab:_SPARQL_Programming&diff=next&oldid=957 | CC-MAIN-2022-40 | refinedweb | 175 | 66.94 |
This is all i got...and i don't know what to do from here on. Can someone give me some hints on what to do next?
Replace this line with your code. def is_prime(x): if (x > 2): return False
This is all i got...and i don't know what to do from here on. Can someone give me some hints on what to do next?
Replace this line with your code. def is_prime(x): if (x > 2): return False
You need a for loop that checks to see if i is in the range made by the div and x. Inside that, you need to check if x modulo i is equal to 0, and return the appropriate response.
what is "div" if i do for i in range()?
div is the variable name I used to define the start of my range, which I decided should be the first prime number, 2. The full code is:
def is_prime(x): if (x > 1): div = 2 for i in range(div,x): if (x % i) == 0: return False else: return False return True
If you examine the code, you should be able to work out the logic
The range has nothing to do with prime numbers. We start with
2 so that all even values for
x are elliminated right off the top. No point continuing the loop if the number is even.
There is no real point in assigning
2 to a variable. It creates confusion for the reader.
for i in range(2, x):
tells the reader what is going on.
Parens are not required in either instance, and it is probably better to not use parens unless they are needed for grouping.
if x > 1: if x % i == 0:
Haha, im so confused now
Think about what a Prime number is...
2, all are odd numbers.
We are using a brute force method to test primeness which is commonly known as the divisibility approach. Any number that can be evenly divided by a number less than itself is not a prime.
The first step we take will be to elliminate any inputs less than 2.
if x < 2: return False # loop and divisibilty test goes here return True
def is_prime(x): if (x < 2): return False for i in range( 2, x): if x % i == 0: return True
error: Oops, try again. Your function fails on is_prime(2). It returns None when it should return True.
BTW the reason why i am having problem with this is because i truly still dont understand prime numbers.
This should be
return False since it is divisible. Leave the
return True line where I wrote it in the example above.
thanks for helping...
Would you mind explain me the range(2,x) and why do i have to add return True to the last line.
For example:
lets x be 4
for i in range( 2 , 4): <---------- what exactly is range(2,4)?
if x % i = 0 <---------- what is i?
You don't have to add it, just leave where it is.
if x < 2: return False # loop and divisibilty test goes here return True
Why? Because
2 does not enter the loop so will drop to this line. Otherwise, only the values for
x that survive the divisibility test will reach this line.
I know I'm dumb, but I still don't get it.
Here is the code:
def is_prime(x): if (x > 1): for i in range(2,x): if (x % i) == 0: return False else: return False return True
If we go step by step:
The function starts.
If x is greater than 1, so if it is 2 or 3 or 4 etc, the loop starts and checks for prime. But why isn't there an else condition for the if x % i == 0 statement?
Then, if x is smaller than 1, meaning 0, it returns False. That I understand.
And then the last line is not connected to any if statements, so shouldn't it always returns True when the functions runs?
I might be wrong but the way I see it:
The problem requires the range to be starting from 2 so if x < 2 it should return false.. the second statement refers to the range we're looking for and reads that if x within that range is divisible by n then we return False.. the final statement refers to all other cases not previously covered and those return the opposite (in this case True)..
You don't need the return False statement after else
So what your say is that in range(2,x) that python will drop any number that is even and if the number is the odd it will go through the divisible by test, correct?
Actually, all even numbers go through the test, but fail immediately. Odd numbers will take at least two iterations of the loop to fail. If the loop completes without failing, the number is prime so the last line returns True.
What makes it to fail? I am still not understanding what range(2,x) <--- why is there a 2 and what does it do with the function range()?
x = 9 for n in range(2, x): print n, # 2 3 4 5 6 7 8
If
x is even, it will return False when
n is
2. The input
9 will fail when
n is
3.
x = 19 for n in range(2, x): print n, # 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
When
x is
19 the loop will iterate the complete range without returning False, so the function will return True.
19 is prime.
Another,
x = 21 for n in range(2, x): print n, # 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
When
x is
21, it will fail when
n is
3. | https://discuss.codecademy.com/t/practice-make-perfect-is-prime-help/79622 | CC-MAIN-2018-39 | refinedweb | 997 | 88.77 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.